Thursday, December 5, 2013

December 2's Muddiest Point

In terms of the structure of web search engines, what are "crawlers"? Are they standalone programs or components within the search engine's software? How do they interact with and contribute to the search engine overall?

Week 14 Readings

Articles

Knorr, E., and G. Gruman. (n.d.). “What cloud computing really means.” InfoWorld. http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031.

The different types of computing possible under the cloud computing umbrella mentioned in the article are interesting. Supposedly SaaS (software as a service) offers a single application through the browser, requiring no upfront investment from customers and few costs for the provider to maintain (Knorr and Gruman, n.d., p. 1). However, why just one application? I can think of a number of reasons for this. Most probably, it is a matter of cost – the creator may have decided to offer one application outright rather than add more. There could also be a storage problem; the browser might not be able to handle more than one application at a time. Another type – web service providers – offers APIs that allow developers to exploit functionality over the Internet instead of offering "full-blown" applications. These encompass a wide range of services; Knorr and Gruman (n.d.) explicitly list "offering discrete business services" and credit card processing (p. 2). As such, they sound like services focused on the exchange of money or some sort of numerical computing. This would explain the "discrete" nature of a couple of the services; there is an aura of secretiveness about the services described, and the exchange of numbers and money usually involves private information, so users would seek web service providers that could ensure their privacy.
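
Knorr and Gruman stay at the conceptual level, so as a loose illustration of what "exploiting functionality over the Internet" through an API can look like, here is a minimal sketch. The endpoint, field names, and token are entirely hypothetical; a real provider documents its own URLs and authentication.

```python
import json
import urllib.request

# Hypothetical web-service endpoint; real providers publish their own
# URLs, authentication schemes, and field names.
URL = "https://api.example-payments.test/v1/charges"

payload = json.dumps(
    {"amount_cents": 1250, "currency": "USD", "card_token": "tok_demo"}
).encode()
request = urllib.request.Request(
    URL,
    data=payload,
    headers={"Content-Type": "application/json", "Authorization": "Bearer DEMO_KEY"},
    method="POST",
)

# The call fails against the made-up host above; it only illustrates the shape of
# "functionality over the Internet": one HTTP request instead of a full-blown application.
try:
    with urllib.request.urlopen(request, timeout=5) as response:
        print(json.load(response))
except OSError as error:
    print("request failed:", error)
```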

 
Barnatt, C. (2008, May 10). “Explaining Cloud Computing.” YouTube video, 00:05:52. Accessed December 5, 2013. http://www.youtube.com/watch?v=hplXnFUlPmg&NR=1.

The section on "Hardware as a Service" was enlightening. In particular, I thought it was interesting learning about the two main rivals selling/offering HaaS. C. Barnatt (2008) introduces them as Amazon EC2 and Google App Engine, providing screenshots of each – mainly listings ordered according to price and capabilities for Amazon and the overall layout of Google's offering (Barnatt, 2008, 00:01:43-00:03:03). Such an approach benefits viewers since it acts as a preview of the sites, giving viewers a way to recognize them in their later searches. I wonder, though, if there are other HaaS providers available besides Amazon and Google. The two have name recognition, so they would be considered the top players in providing such services. However, they can't be the only ones.
The benefits mentioned appear incomplete. Barnatt only lists the benefits of Amazon EC2, which include the products being elastic, flexible, inexpensive, and reliable (Barnatt, 2008, 00:01:43-00:03:03). Does Google App Engine offer the same benefits? Or any other attributes, such as speed or a time warranty? If they are the same, then how has Google been able to compete with Amazon without offering something new or different? And what about the disadvantages of HaaS, whether overall or in the products offered by Amazon and Google?

 
Frey, T. (2006, Nov. 2). “The Future of Libraries: Future scenario, historical perspective, technology trends.” FuturistSpeaker.com. http://www.futuristspeaker.com/2006/11/the-future-of-libraries/.

Some of the trends T. Frey (2006) asserts will influence future incarnations of the library sound a little ridiculous. For example, one suggested trend states that "Search Technology will become increasingly more complicated." Frey starts his description by assuming that search technology is currently relatively simple (Frey, 2006, under "Trend #4," para. 1). However, this is inaccurate; while it seems easy, since you just have to type in a search term, ease depends on what a user is searching for. A user will find an object more easily if it is popular and/or specific than if it is relatively unknown or shares a name with other objects.
            Next, Frey predicts that the next stage in search technology will include queries on “taste, smell, texture, reflectivity, opacity, mass, density, tone, speed, and volume” (Frey, 2006, under “Trend #4,” para. 2). At this point I am skeptical. Such searches seem unnecessary and extravagant, especially since some of the terms are subjective or sound impossible to measure correctly. He may be right and it will happen, but if it does, it will probably be a passing interest or only favored by those rich enough to afford such technology.
The last paragraph, though, sounds more defensible. Frey ends the section by stating that as search becomes more complicated and people have less time and fewer skills to keep up with the newest innovations, the librarian will increasingly step in as a guide to finding information (Frey, 2006, under "Trend #4," para. 3). I agree that search will potentially become more complicated, since developments in technology will lead to newer and greater uses of it to find more information. Librarians would present themselves as possible "experts" on the matter, since they have experience organizing, finding, and explaining information for patrons. They would have to act quickly, though, to establish this role with the populace; otherwise the public may give this role to other information professionals.

Thursday, November 28, 2013

November 25's Muddiest Point

In terms of networks – or social networks – I'm still a little unsure about the terminology. I understand that "edges" connect "nodes," but what exactly are "edges" and "nodes"? How do "edges" connect "nodes"? What does such linking represent?

Week 13 Readings

Articles

Hamer, J. (2008, Sept. 24). “My Turn: Protecting privacy rights in libraries.” Library News.
http://greatlibrarynews.blogspot.com/2008/09/myturn-protecting-privacy-rights-in.html.

Much of what J. Hamer (2008) covers involves crucial issues in the privacy and security of library records. I agree with Hamer's position on the matter overall. While clearing up and adding the librarian perspective on a Vermont law intended to protect the privacy of library records, Hamer states that the Vermont Library Association advocated for privacy protection for all patrons of all ages, concerned with how comfortable a child could be in researching sensitive issues (Hamer, 2008, para. 3). Such a concern is logical; a child needs guidance, but information should not be limited based on what a parent considers "suitable," since such a stance remains subjective. Additionally, in the case of the Brooke Bennett investigation and "impeding" the search by requiring a court order to examine information on the public computers (Hamer, 2008, para. 4), I agree that a court order should have been presented rather than have the police assume they can take whatever information they want. However, I wonder if there could be a substitute for the court order that could be used in an emergency – depending on the community, the case, and the court, there may be corruption or delays in obtaining a court order that should have been acquired right away. Otherwise, though, the policy is in place to protect individuals' safety and privacy. Most of the information acquired from the library would likely be check-out histories, addresses, and phone numbers – the first not security-related, the latter two possibly a cause for alarm if a person has not already made such information public – but overall such information should be protected, since a person's privacy is integral and a matter of trust. If a person cannot live their life without someone – whether from the government or not – acting like a "big brother" and investigating their records, no matter how useless the information is, then that person cannot live freely as an individual.

 
Shrivastava, M. (2013, July 8). "MIT's 'Immersion' Project Reveals The Importance of Metadata." Techchronic. http://techchronic.blogspot.in/2013/07/have-gmail-account-see-what-nsa-knows.html.

I don't know how melodramatic M. Shrivastava (2013) is being in his article. He states that some officials and ministers defended Internet surveillance by the NSA by claiming that "they are only collecting metadata related to your mails, messages and interactions from phone and internet companies" (Shrivastava, 2013, para. 1). It is alarming that they can access even that much. Looking at the image of a social network provided (Shrivastava, 2013), one can easily see a whole communication tree. Not only can an individual see the communication lines between one person and a few others, but also how they relate to a whole "community" of people, even ones who are connected to only one person in the main "tree." That is powerful; in the wrong hands, a person could easily be charged just for communicating with – or communicating with someone who interacted with – a suspect or a person under investigation. The NSA does not access the content itself (Shrivastava, 2013, para. 1), so that does make a difference. It is not as if they know exactly what you say, so a person has some protection. But even with that limitation, a lot could be presumed or conjectured about a person's interactions. Additionally, all of this depends on whether Shrivastava can be considered a reliable source – on the Internet, anyone can lie about their identity or where they got their information.
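
To see why "only metadata" is still revealing, here is a toy sketch (all names invented) that reconstructs a communication tree from nothing but sender/recipient pairs, roughly in the spirit of the Immersion visualization built from e-mail headers.

```python
from collections import defaultdict

# Toy metadata: (sender, recipient) pairs only -- no message content at all.
email_log = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
]

# Build an undirected contact graph from the metadata.
contacts = defaultdict(set)
for sender, recipient in email_log:
    contacts[sender].add(recipient)
    contacts[recipient].add(sender)

def reachable(person, graph):
    """Walk outward from one person to see how far the 'communication tree' reaches."""
    seen, frontier = {person}, [person]
    while frontier:
        current = frontier.pop()
        for neighbor in graph[current]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

print(sorted(contacts["alice"]))             # direct contacts
print(sorted(reachable("alice", contacts)))  # the whole connected community
```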

 
“The Privacy Show.” (n.d.). On The Media. http://www.onthemedia.org/story/258658-the-privacy-show/.

The variety of concerns expressed over online privacy during "The Privacy Show" (n.d.) is interesting – mainly that losing privacy is akin to wearing an ankle bracelet tracking a person's every "movement," that there has not been an increase in security alongside the decrease in privacy, and that bias towards a person's race/gender/sexuality/etc. – or, in the case of one man, concern that there may be homophobic engineers at Google who would discriminate against him – could affect an employee's willingness to help someone online (00:00-01:16). The range of reasons is enlightening; some people worry that they have become "prisoners" to the system, or that they are giving up their privacy without any benefit taking its place, or that even basic information about their identity could harm them. Overall, there is a sense that each user has lost control over their identity and abilities online, no longer "free" in the World Wide Web.
The three factors mentioned concerning government and company "prying" into personal information online are interesting as well. Primarily these factors are the fear of terrorism, profit motives, and users' desire for fun and convenience online ("Privacy Show," n.d., 01:17-01:55). So it sounds like the government and companies pry because they are reacting to – or want to take advantage of – the concerns of the populace. All of these factors derive from user wishes – whether for security against terrorism, especially in the wake of 9/11, or for better service.

Thursday, November 21, 2013

November 18's Muddiest Point


What is the DSpace Digital Library System? What makes it different from other digital libraries?

Week 12 Readings


Articles

Blossom, J. (2011). "What makes social media tick: seven secrets of social media." In Content Nation: Surviving and Thriving as Social Media Changes Our Work, Our Lives, and Our Future. Indianapolis: John Wiley & Sons.

Blossom specifically defines social media as "any highly scalable and accessible communications technology or technique that enables any individual to influence groups of individuals easily" (Blossom, 2011, p. 1). His logic sounds reasonable. He elucidates how, although scale and access differ between the technologies and audiences involved, there is a constant need for social media to be scalable and easy to access (Blossom, 2011, p. 2-3). He additionally explains that social media remains a peer-to-peer medium, users relating to each other as peers and basing their authority on whether they share information the others consider authoritative, and that its influence on the masses is unpredictable due to its scalability, with information meant for a limited audience sometimes reaching greater influence in the wider world (Blossom, 2011, p. 3). I agree with the first part of his definition; social media is social in that many users can use and access technologies that allow them to communicate with a wide group of people. However, I disagree with the rest of his definition. While a user may engage in social media to influence others, that is just one reason. Numerous other motives can drive a user to social media, such as seeking information – thus being influenced themselves – or simply communicating with others they cannot talk or write to normally (whether due to the busyness of their own lives or geographical distance). As for his elucidation of "peer-to-peer," I think that depends on the social media and individuals involved. Some hierarchy remains; mods and admins of discussion sites, for example, can delete posts made by regular users, undermining those users' wish to influence others.

 
Allan, C. (2007, April). “Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons.” C&RL News 68(4): 242-244.

As I was reading the article, I kept being distracted by the presentation of the content itself. While it was largely readable, the article being divided into clear categories, the scan kept dropping individual letters. For example, the sentence "The wiki, which is accessible by invited users at speci c URL, somewhat visually resembles a word processing program" (Allan, 2007, p. 242) has lost the letters "fi" in the word "specific," while the sentence "The   rst use of the wiki deals with sharing information and new experiences…." (Allan, 2007, p. 243) drops the first two letters of its second word, leaving "rst" – based on later paragraphs, it was probably meant to be "first." Such a problem does not lie with the writer, but with the digital format. The piece was published in an academic journal, so such glaring misspellings would have been fixed, and copying and pasting the sentences into a separate document retrieves the letters. This reminds me of our OCR task in the first assignment – not all of the information we were required to OCR had legible results – and my own troubles getting quotation marks and hyphens to upload correctly onto my website in assignment 5 (which, at this time, are still unsuccessful). Ultimately, this is a good lesson in itself when working with digital tools – become familiar with digitization and be prepared for some information not translating automatically into new formats.
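
For what it's worth, the missing "fi" is usually a typographic ligature, a single character such as U+FB01, that the PDF-to-text step mishandled. When the ligature survives as a character (rather than being dropped outright, as in the scan quoted above), a Unicode compatibility normalization recovers the plain letters; a small sketch:

```python
import unicodedata

# "ﬁ" below is the single Unicode ligature character U+FB01, the usual culprit
# when "specific" comes out of a PDF as "speci c" or "specic".
garbled = "The wiki, which is accessible by invited users at a speciﬁc URL"

# NFKC compatibility normalization expands ligatures back into plain letters.
repaired = unicodedata.normalize("NFKC", garbled)
print(repaired)          # ... at a specific URL
print("ﬁ" in repaired)   # False -- the ligature character is gone
```
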
The article itself was enlightening on the topic of wikis. Having heard about Wikipedia, I originally thought it was the only one of its kind – quite a naïve belief, considering that its format and type suggest development from a root model. So the idea that I could create my own wiki was interesting and could be beneficial for me in my future career. Whether I could fully use wikis as C. Allan (2007) suggests for libraries, such as managing public services information, collaborating on and monitoring reference questions, and evaluating databases (p. 242-243), or find my own uses, I don't know – I'll have to test them out myself to see. Curious about the "best known sites" Allan notes – seedwiki, pbwiki, jotspot, twiki.org, and phpwiki (Allan, 2007, p. 242) – I decided to look them up online to better understand how you would go about creating one. However, there have been many changes since 2007. twiki.org, pbwiki (relaunched as pbworks), jotspot (bought by Google and renamed Google Sites – http://www.crunchbase.com/company/jotspot), and phpwiki (changing its web address from phpwiki.org to http://phpwiki.sourceforge.net/) remain wiki providers. seedwiki has been turned off (see http://seedwiki.com/ for its creator's last words). Comparing the past to the present was interesting – most of them have persisted in providing wiki tools, but have developed as well, changing either their structure or their management.

 
Arch, X. (2007, February). “Creating the academic library folksonomy: Put social tagging to work at your institution.” C&RL News 68 (2): 80-81.

I found the risks that X. Arch (2007) notes in his article quite interesting. Specifically, Arch asserts that one issue with adopting social tagging is that spagging may occur, in which "users with bad intentions" tag irrelevant or inappropriate sites so as to make a profit or create chaos (Arch, 2007, p. 81). I agree that these are two main reasons for a user to tag unsuitable sites – it is part of human nature to seek ways to profit in any way, whether by acquiring monetary resources or by causing trouble for others. I think, however, that such a view assumes that everyone can rightly judge the nature of a site. Some may have tagged sites that they thought were helpful and relevant, or they may have misread the tags available. Thus, I would add that users with well-intentioned purposes but poor judgment could be a cause for concern as well. Another issue was that, without a proper standard taught or enforced, users would have little knowledge of how to create tags following a shared terminology, nor would they know what keywords to choose (Arch, 2007, p. 81). I think this is a real problem that should be solved. In such a case, a standard should be used. Although this may go against the idea of a folksonomy, such a taxonomy cannot be helpful if no one understands how to use it. It doesn't have to be something strict and official, just a loose basis that could serve as a model for future additions to the tags. In such a case, you could mix both standards and local jargon – use standards for ideas that have few words to represent them and to act as a template for how to form keywords, and follow users' own inflections for the types of words most common among the majority.
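
As a sketch of the "loose basis" I have in mind, here is a hypothetical tag-normalization routine that maps local jargon onto a small shared vocabulary while leaving unrecognized tags alone; the synonym list is invented for illustration, and a real institution would build it from its own users' habits.

```python
# Hypothetical mapping from local jargon to a shared tag vocabulary.
SYNONYMS = {
    "bio": "biology",
    "bio-sci": "biology",
    "maths": "mathematics",
    "govt docs": "government-documents",
}

def normalize_tag(raw_tag: str) -> str:
    """Lower-case, trim, and map known jargon onto the shared vocabulary."""
    tag = raw_tag.strip().lower()
    return SYNONYMS.get(tag, tag.replace(" ", "-"))

user_tags = ["Bio", "maths", "Govt Docs", "zine collection"]
print([normalize_tag(t) for t in user_tags])
# ['biology', 'mathematics', 'government-documents', 'zine-collection']
```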

 
Wales, J. (2006, August). “Jimmy Wales: The birth of Wikipedia.” TED Talks video, 20:05. Accessed November 21, 2013. http://www.ted.com/talks/jimmy_wales_on_the_birth_of_wikipedia.html.

J. Wales's (2006) stance on neutrality seems to conflict somewhat with his analysis of the structure of those who run/edit/manage Wikipedia itself. He asserts that Wikipedia follows a nonnegotiable neutrality policy, wherein users must merely report information and any controversies without bias toward any side (Wales, 2006, 0:07:45-0:09:02). I think this is a good standard to follow overall; Wikipedia is a global phenomenon, so there are more differences in personal opinions, political and economic beliefs, philosophies, religions, and societal factors to take into consideration than in one country alone. Such a stance not only allows users to work with each other without contention, as Wales promotes (Wales, 2006, 0:09:03-0:10:00), but it also means that information about individual points will not be limited to one point of view but will encompass more facets – not every detail, since neutrality does imply weighing the main upfront viewpoints involved, but at least a varied mix of points from every side rather than a single viewpoint.
When compared to the structure of Wikipedia's "staff," though, the neutrality policy appears a little off. Wales explains that the management of the site derives principles from a wide variety of political philosophies, including the democratic nuance of voting for the deletion or inclusion of pages, the aristocratic assumption that certain figures who have worked closely with the set-up and management of Wikipedia for a certain length of time exert greater influence on the framework, and monarchic principles – not "benevolent dictator," a label Wales refuses to accept – under which Wales, the founder, makes the final decisions over everything (Wales, 2006, 0:14:08-0:17:50). Such a mishmash of political standards does not imply an anti-political or merely flexible management standpoint, but a different form of management style. It is democracy tempered, a social construct that – while having relative equality amongst its management – still assumes clear leadership whose opinions do matter. If the "aristocratic" members and the "Queen" deemed it necessary, they could alter the neutrality principle itself, declaring an opinion to be "neutral" when it is not, while other members would have to yield to such a decision.

Thursday, November 14, 2013

November 11's Muddiest Point

I'm still a little confused about the relationships between the different markup languages available (SGML, XML, HTML, XHTML, etc.). How does XML simplify SGML? Does XML define XHTML the way SGML defines HTML, or does XML do something different entirely?

Week 11 Readings

Articles

Paepcke, A., H. Garcia-Molina, and R. Wesley. (2005, July/August). "Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative." D-Lib Magazine 11(7/8).

A. Paepcke and his co-authors (2005), unusually, view the Web very negatively in this article. In terms of the Digital Library Initiative (DLI), they portray the Web as the disrupter of the peace and alliance between computer scientists and librarians. It was the "somewhat undisciplined teenager," a new son/daughter in the sense that it ruined their plans for the initiative by providing alternate sources of information (Paepcke, Garcia-Molina, and Wesley, 2005, under "The Cuckoo's Egg Surprise," para. 2). It challenged their assumptions about forming digital libraries and about what counted as the primary source for finding and using materials. Yet Paepcke and his colleagues push their analogy toward an Oedipus/Elektra complex. The 'teenager' now has "sex appeal" for computer scientists; the Web offered a fertile area for machine learning, statistical, and experimental methods to become applicable to information search and organization, drawing in legions of researchers (Paepcke, Garcia-Molina, and Wesley, 2005, under "The Cuckoo's Egg Surprise," para. 8). So it seduced computer scientists to the other side, leaving librarians off-balance in the Initiative. As such, it became an adulterer, betraying the trust computer scientists and librarians had built. Such language seems overdramatic, though; the authors want to explain why digital libraries aren't succeeding as they expected, so they found a scapegoat in the Web and made it the source of all their troubles. I think a lot more factors are involved. Additionally, the Web is not the librarians' enemy; it may cause hardships for the Initiative overall, but it has increasingly become a tool for librarians to use in linking, organizing, and creating information.

 
Lynch, C. A. (2003, February). “Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age.” ARL no. 226: 1-7.

It is interesting that C.A. Lynch (2003) sets up the repository as a collaborative effort. He specifically states that a successful institutional repository represents a collaboration between librarians, information technologists, archives and records managers, faculty, and university administrators and policymakers (Lynch, 2003, p. 2). This is interesting in that he takes an overarching view of the repository; rather than focusing on one identity or on how one particular group of people creates or uses the institution, he suggests that it involves the work of many individuals. Thus the institutional repository does not appear two-dimensional, but more complex, requiring the actions of many people to work. This fits its purpose of disseminating digital materials to its institution and related members – particularly the intellectual works of faculty and students (Lynch, 2003, p. 2) – in a sense implying that collaboration supplies benefits to a wider sample of people. I wonder, though, if institutional repositories should only distribute the works of faculty and students. Such a narrowing of focus may help the repository concentrate on building up a collection of and access to scholarship, but it also limits what counts as legitimate sources or information.
Another interesting point is Lynch's concern over the use of repositories. The main troubles he foresees include the repository degenerating into a tool for institutional control over intellectual work (Lynch, 2003, p. 4-5), the addition of "distracting and irrelevant policy baggage" (Lynch, 2003, p. 5), and – with increasing demand for institutional repositories – repositories becoming hastily made, hollow services (Lynch, 2003, p. 6). Thus he is concerned about the quality of institutional repositories; he envisions a true repository as one little influenced by the politics of its institution [almost 'uncontaminated' or pure, having its own agenda rather than fulfilling the agendas of its institution], yet requiring the full support and resources of its institution to be well made and useful for its users. Can such conflicting requirements coexist? The repository Lynch imagines would need to be created by an institution that upholds values of open access to information and ideals of unrestrained/uncensored information. Reality, though, means that the repository in question has to submit to its institution to some degree if it is to receive funding or support for its own endeavors. As such, I don't know how realistic Lynch's concerns are or whether they can be resolved according to his own values.

 
Hawking, D. (2006, June). “Web Search Engines: Part 1.” Computer: 86-88.
AND
Hawking, D. (2006, August). “Web Search Engines: Part 2.” Computer: 88-90.

According to D. Hawking (2006), Web search engines require a lot of attention and work to operate. Physically, they can be sprawling. Each operates from numerous, geographically spread data centers, and within each center are a variable number of servers supporting services and specialized duties (Hawking, "Part 1," 2006, p. 86). Thus a search engine is not one entity but a composition of entities; it needs different parts to ensure that it functions as it should. Its operations imply complexity as well. For example, search engines employ inverted files to identify indexing terms. Inverted files are created in two phases – first, scanning the text of each input document; second, sorting the temporary files into term number order (Hawking, "Part 2," 2006, p. 88). This requires a lot of time and attention to create the files needed for the process. Taken altogether, this is a sobering thought; search engines have become such a common feature of the Web that to not see one would be a cause for outcry. They seem so easy to use – just enter a word or phrase, click, and you get your results – yet a lot of work goes into making sure they work.
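
A miniature version of the two-phase inverted-file build Hawking describes might look like the following sketch: phase one scans each document and emits temporary (term, document) postings, and phase two sorts them into term order and groups them. The documents are invented.

```python
from itertools import groupby

# Phase 1: scan each input document and emit temporary (term, doc_id) postings.
documents = {
    1: "web search engines use inverted files",
    2: "inverted files map terms to documents",
    3: "search engines crawl the web",
}
postings = [
    (term, doc_id)
    for doc_id, text in documents.items()
    for term in text.lower().split()
]

# Phase 2: sort the temporary postings into term order and group them,
# producing the inverted file: term -> sorted list of documents containing it.
postings.sort()
inverted_file = {
    term: sorted({doc_id for _, doc_id in group})
    for term, group in groupby(postings, key=lambda posting: posting[0])
}

print(inverted_file["inverted"])  # [1, 2]
print(inverted_file["web"])       # [1, 3]
```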

 
Shreeves, S.L., T.G. Habing, K. Hagedorn, and J.A. Young. (2005). “Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting.” Library Trends 53, no. 4: 576-589.

In this article, I found the development of OAI services quite interesting. Rather than remaining a tool only of the e-print archives community, others – libraries, museums, archives, etc. – started using it for their own services, creating user-group-specific service providers (Shreeves et al., 2005, p. 578). Thus others started seeing its usefulness – probably through observing how the e-print archives community used it and debated its pros and cons. Yet other communities didn't mimic the original users exactly. Rather, they not only used the protocol to help provide federated access to resources but also developed further standards, tools, and metadata schemas to contribute back to it (Shreeves et al., 2005, p. 578). In this way, OAI provides a good lesson in using technologies, especially those created for a specific purpose or group. If other purposes exist for a tool, then a user should test it out. Other people can use the same tool for different purposes. Additionally, what a technology is now does not mean it will maintain that structure in the future; users can add new standards or other accessories to better adapt it to their situation.
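
Concretely, harvesting over OAI-PMH is parameterized HTTP: a service provider sends a request such as verb=ListRecords&metadataPrefix=oai_dc to a repository's base URL and parses the XML response. A minimal sketch, with a hypothetical repository URL:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical base URL; any OAI-PMH-compliant repository publishes its own.
BASE_URL = "https://repository.example.edu/oai"

# A ListRecords request asking for simple Dublin Core metadata.
query = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
request_url = f"{BASE_URL}?{query}"

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

try:
    with urllib.request.urlopen(request_url, timeout=10) as response:
        tree = ET.parse(response)
    # Print the Dublin Core title of each harvested record.
    for record in tree.iter(f"{OAI}record"):
        title = record.find(f".//{DC}title")
        print(title.text if title is not None else "(no title)")
except OSError as error:
    print("harvest failed:", error)
```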

Friday, November 1, 2013

October 28's Muddiest Point

I'm still a little confused about the role of CSS Comments (slide 30). How do the Comments explain the code used? Why are they ignored by the browsers?

Week 10 Readings


Articles

Bryan, M. (1997). “An Introduction to the Extensible Markup Language (XML).” The SGML Centre. Retrieved from http://www.is-thought.co.uk/xmlintro.htm.

M. Bryan (1997) notes a very integral facet of the XML language – its scrupulousness. While he relates the multiplicity of XML languages available, as considered in the other readings for this week, Bryan also states that XML can transfer information about the component parts of documents to other computer systems and is malleable enough to describe any logical text structure – memos, letters, dictionaries, databases, and the like (Bryan, 1997, under "What is XML?," para. 6). At the same time, it identifies where a change of appearance happens, where a new element begins, and what boundaries exist for each part of a document (ibid., under "The components of XML," para. 2). Thus XML is thorough. It concentrates on the individual parts of information – giving equal attention to each cog and not skipping over key components – and covers a wide range of text structures representing a multitude of information types. Additionally, it is set up to mark everything – beginnings, endings, and limits, to name a few. Such meticulousness ensures that all information is described and transferred intact to other computers.
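
A small illustration of that thoroughness: every logical part of a memo gets an explicit start tag, end tag, and boundary, which any receiving program can walk. A sketch using Python's standard library, with an invented memo:

```python
import xml.etree.ElementTree as ET

# A memo marked up in XML: every component has an explicit start, end, and boundary.
memo = """
<memo>
  <to>Library Staff</to>
  <from>Systems Office</from>
  <subject>Server maintenance</subject>
  <body>The catalog will be offline Saturday morning.</body>
</memo>
"""

root = ET.fromstring(memo)
# The markup tells any receiving system exactly where each logical part begins and ends.
for element in root:
    print(f"<{element.tag}> ... </{element.tag}>  ->  {element.text.strip()}")
```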
 

Ogbuji, U. (2004, January 20). “A survey of XML Standards: Part 1, The core standards – a foundation for the wide world of XML.” IBM: developerWorks. Retrieved from http://www.ibm.com/developerworks/library/x-stand1/index.html

U. Ogbuji (2004) highlights an important factor in understanding not only his survey but also XML more broadly. Rather than assume that he and his readers will read his article in the same way, he defines what he means by "standards" in his introduction. Ogbuji maintains "that the word itself is a bit slippery," having multiple forms, but that he himself "follow[s] the practical approach of defining a standard as any specification that is significantly adopted by a diversity of vendors, or is recommended by a respectable, vendor-neutral organization" (Ogbuji, 2004, para. 2). According to Ogbuji, there exists no customary "language" for determining XML standards or their related premises – even the subject of standards remains vague if no one actively elaborates on the topic. Yet he assumes that he is taking the "practical approach" – the more logical, possibly superior method of interpretation – in defining the word. While attempting to make the concept clearer is inarguably beneficial in this context, ensuring that readers have a clearer idea of how to analyze his article, such a viewpoint remains one biased way of reading it. It is good that he provides a definition, but what he deems "practical" may not be so in the overarching framework of XML discussion.
Continuing along this concept of language (albeit within XML itself), the use of namespaces offers ways to manipulate vocabulary. Namespaces can assign a vocabulary marker to each XHTML element, allowing the user to differentiate those elements from host-vocabulary elements that use the same names (Ogbuji, 2004, under "XML Namespaces"). Such a method is fascinating in that it links standardized languages between levels. Namespaces acknowledge the issue that official vocabularies sometimes repeat themselves, confusing the contents of the document as a whole. Thus they provide markers that follow another standardization. Although complex in practice – or, as Ogbuji notes, a controversial feature that may cause more problems than it should (ibid.) – it does provide a framework for viewing how different forms of standard languages interact. Thus it may not always be convenient when a person needs to use it, but it is useful for theoretical analyses.
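
A sketch of the problem namespaces solve: two vocabularies both define a title element, and namespace prefixes keep them apart. The bibliographic namespace below is made up for illustration; the XHTML one is real.

```python
import xml.etree.ElementTree as ET

# Two vocabularies both use a <title> element: XHTML (a real namespace) and a
# made-up bibliographic vocabulary. Namespace prefixes keep them distinct.
document = """
<page xmlns:html="http://www.w3.org/1999/xhtml"
      xmlns:bib="http://example.org/bibliography">
  <html:title>Reading Notes</html:title>
  <bib:title>Dewey Meets Turing</bib:title>
</page>
"""

root = ET.fromstring(document)
namespaces = {
    "html": "http://www.w3.org/1999/xhtml",
    "bib": "http://example.org/bibliography",
}
print(root.find("html:title", namespaces).text)  # Reading Notes
print(root.find("bib:title", namespaces).text)   # Dewey Meets Turing
```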

 
Bergholz, A. (2000, July-August). "Extending Your Markup: An XML Tutorial." IEEE Internet Computing: 74-79. Retrieved from http://xml.coverpages.org/BergholzTutorial.pdf.

Out of all the articles required for this week, I believe A. Bergholz (2000) provides one of the clearest definitions of XML and what it does. Specifically, he asserts that XML is "a semantic language that lets you meaningfully annotate text," making it easier for users and computers to understand (Bergholz, 2000, p. 74). This is clearer than how the "XML Tutorial" (n.d.) of W3Schools describes XML. XML annotates – making comments, marking points with more in-depth descriptions that ensure smoother processing. This particular definition is also succinct, pinpointing the key characteristics of XML that a user would need to know to differentiate it from other concepts.
What was new for me was XSL. Bergholz (2000) introduces XSL – the Extensible Stylesheet Language – as a combination of two languages, XSL Transformations (XSLT) and the XSL formatting objects language (p. 77-78). As far as I know, I had never heard of XSL, so reading about it proved most informative. Specifically, users can utilize XSLT to transform XML into HTML and to reformat XML documents so that a variety of XML representations can be mapped onto each other (Bergholz, 2000, p. 78). XSLT, in this manner, is relatively powerful. Although it cannot change the basic nature of HTML or XML, XSLT can reshape a document's structure and appearance. I don't know whether Bergholz's claim that XSLT especially helps electronic commerce and electronic data interchange (ibid., p. 78) is true – I don't have the necessary background to judge – but the premise sounds plausible; if XSLT can reformat XML into different forms, then it can produce a wider range of documents that can be read more easily.
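
As a rough illustration of the XML-to-HTML transformation Bergholz describes, here is a sketch that assumes the third-party lxml library (which implements XSLT 1.0); the record and the stylesheet are invented.

```python
# Assumes: pip install lxml
from lxml import etree

xml_doc = etree.XML("<book><title>A Sample Record</title><year>2013</year></book>")

# An XSLT stylesheet that reformats the XML record as a small HTML page.
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/book">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <p>Published: <xsl:value-of select="year"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
html_result = transform(xml_doc)
print(str(html_result))  # the same data, reformatted as HTML
```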

 
“XML Tutorial.” (n.d.). w3schools.com. Retrieved from http://www.w3schools.com/xml/default.asp

Similar to Uche Ogbuji's article "A Survey of XML Standards: Part 1," the "XML Tutorial" (n.d.) focuses on the manipulation of language and how users can use it. XML, the tutorial explains, acts as a markup language, carrying data (not displaying it) and remaining self-descriptive ("XML Tutorial," n.d., under "Introduction to XML," "What is XML?"). Thus it has its own vocabulary, acting as a method of communication between user and computer. The Tutorial, however, argues that XML "does not do anything;" it can "structure, store, and transport information," but "it is just information wrapped in tags," needing additional software to send, receive, or display it (ibid., "XML Does Not DO Anything"). Compared to Bergholz's definition (Bergholz, 2000, p. 74), this one isn't as clear. I think I understand the basic meaning the Tutorial intends – that XML only marks up the structure and describes features, not actually commanding anything to be done – but I think stating that it "does not do anything" confuses more than it explains.
The language of XML itself seems to be its own creation. It is very fertile; since XML does not have any predefined tags, the user can determine her own tags and document structure ("XML Tutorial," n.d., under "Introduction to XML," "With XML You Invent Your Own Tags"), so the number of possible XML languages is limitless (ibid., under "How Can XML be Used?," "XML is Used to Create"). As such, XML is almost alive, allowing users to create multiple languages to serve different purposes. A few rules still apply. For example, XML tags are case-sensitive (ibid., under "XML Syntax Rules," "XML Tags are Case Sensitive") and all attribute values have to be quoted (ibid., "XML Attribute Values"). So some limits exist, restricting the number and type of possible languages. However, some restrictions are necessary so that XML language creation does not become too chaotic, following some basic pattern to work in practice and having an anchor in what does and doesn't work.
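
Those two syntax rules are easy to see by handing a parser deliberately ill-formed input; a short sketch with the standard library:

```python
import xml.etree.ElementTree as ET

samples = {
    "mismatched case": "<Note>remember the milk</note>",   # tags are case-sensitive
    "unquoted attribute": '<note id=42>remember</note>',    # attribute values must be quoted
    "well-formed": '<note id="42">remember</note>',
}

for label, text in samples.items():
    try:
        ET.fromstring(text)
        print(f"{label}: parsed fine")
    except ET.ParseError as error:
        print(f"{label}: rejected -> {error}")
```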

Thursday, October 24, 2013

October 21's Muddiest Point


In the HTML page, the elements – and, in conjunction, the tags and attributes – all produce "permanent" effects that can only be changed by the creator in the HTML page, e.g. the tags <b>…</b> bold text and <body bgcolor="green"> makes the background color perpetually green. Are these "static" elements the only kinds, though? Can there be elements coded to change on their own initiative or to allow users to make changes themselves, such as writing and submitting comments?

Week 9 Readings


Articles

CSS Tutorial. (n.d.). w3schools.com. Retrieved from http://www.w3schools.com/css/

The CSS Tutorial (n.d.) offers interesting details on CSS. One such point is the origin of CSS. According to the Tutorial, when developers added tags such as <font> and color attributes to the HTML 3.2 specification, the process of developing large web sites became longer and more expensive for web developers to complete. As such, the World Wide Web Consortium (W3C) created CSS so that, starting in HTML 4.0, a user can remove the formatting from HTML documents and store it in a separate CSS file (under Introduction, "Styles Solved a Big Problem"). In effect, then, CSS was developed as a solution to an earlier problem – revealing how software and digital technologies evolve through trial and error.
Another interesting detail was how a person can insert CSS into their work. The Tutorial (n.d.) lists three ways to do so: 1) an external style sheet, which changes the appearance of the whole Web site by changing one file (under CSS How To…, "External Style Sheet"); 2) an internal style sheet, used for a single document with a unique style (ibid., "Internal Style Sheet"); and 3) inline styles, which mix content with presentation (ibid., "Inline Styles"). Such categories imply organization in CSS's development as well as the different ways in which CSS affects visual elements. A user can apply CSS to more focused projects; no one has to stick to manipulating the presentation of a whole website when she or he really only needs to differentiate individual pages from each other, for example.
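
A minimal sketch of the three insertion methods, written out as files from Python so the result can be opened in a browser; the file names and style choices are arbitrary.

```python
from pathlib import Path

# 1) External style sheet: one file restyles every page that links to it.
Path("site.css").write_text("body { background-color: #f4f4f4; }\n")

# 2) Internal style sheet and 3) inline style, both inside the HTML document itself.
page = """<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="site.css">      <!-- external -->
  <style>h1 { color: green; }</style>          <!-- internal -->
</head>
<body>
  <h1>Week 9 Readings</h1>
  <p style="color: gray;">An inline style applies to this one element only.</p>
</body>
</html>
"""
Path("index.html").write_text(page)
print("wrote site.css and index.html")
```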
 

CSS Tutorial: Starting with HTML + CSS. (n.d.) W3C. Retrieved from http://www.w3.org/Style/Examples/011/firstcss

The "CSS Tutorial: Starting with HTML + CSS" (n.d.) covers some details that I'm unfamiliar with. For example, in the second "warning" for Step 1, it notes how the "ul" element represents a list with one hyperlink per item, serving as the "site navigation menu," while the "h1" and "p" elements "form the unique content of this page" (CSS Tutorial, n.d., under Step 1: The HTML). I found such "warnings" fascinating; they elaborate and build on the basic information of the Tutorial and introduce new ideas for using HTML. In the case of the example I gave, I am now curious about what "unique content" the "h1" and "p" elements produce and want to try that out.
I particularly liked the Tutorial's approach to colors. Step 2 covers the basics, teaching how to add color using the <style> element – specifically <style type="text/css"> – and elaborating on how to set the colors of the text and background of the body (CSS Tutorial, n.d., under Step 2: Adding Some Colors). I love adding variation to my work when I can, experimenting with colors, typefaces, and the like, so I will need to investigate this when working with HTML. Similarly, the analysis of link colors under Step 5 was most interesting. I am familiar with the convention of links to pages I haven't visited remaining blue while those I have clicked become purple (ibid., under Step 5: Styling Links). However, I am now curious why this is the standard. Having consistent color coordination is beneficial for the Web overall, causing less confusion for newcomers than varying the colors would. But why blue and purple? Why not some other colors, like green and red? Were the colors chosen randomly, was thought put into them, or do they correspond to a cultural norm? I really want to experiment with this and see if anyone would actually react if I changed the colors.
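
The blue/purple scheme is only a browser default that a style sheet can override; as a sketch of that experiment, the following writes a page whose links use the green/red scheme wondered about above (purely an illustration, not a recommendation).

```python
from pathlib import Path

# Override the browser's default link colors with a green/red scheme.
Path("links.css").write_text(
    "a:link    { color: green; }   /* pages not yet visited */\n"
    "a:visited { color: red; }     /* pages already clicked */\n"
)

Path("links.html").write_text(
    '<!DOCTYPE html>\n<html><head><link rel="stylesheet" href="links.css"></head>\n'
    '<body><p><a href="http://www.w3.org/Style/Examples/011/firstcss">CSS tutorial</a></p>'
    "</body></html>\n"
)
print("open links.html in a browser to see the swapped colors")
```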

 
Lie, H.W. and B. Bos. (1999). Chapter 2 in Cascading Style Sheets, Designing for the Web. Boston: Addison-Wesley.

After reading the chapter, I found that H.W. Lie and B. Bos (1999) provide interesting notes through organic metaphors – for example, their "anatomy" of rules and declarations. Each is made up of two parts: the rule consists of the selector [the link between HTML documents and the style] and the declaration [which determines the effect of the rule] (Lie and Bos, 1999, under "Rules and Style Sheets," "Anatomy of a rule"), and the declaration is made up of a property [the quality] and a value [which specifies the type of quality] (ibid., "Anatomy of a declaration"). Each part is interlinked, the declaration and all of its components contributing to the overall form of the rule. One part cannot exist without the other; otherwise the rule as a whole fails. The same is almost true of the human body; although the body can continue living without one kidney, for example, overall it needs most of its organs intact to function fully as a living being. Viewing the anatomy of the rule like this emphasizes both its limitations and its complexities in CSS.
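
As a toy illustration of that anatomy (a string-splitting sketch, not a real CSS parser), the following takes one invented rule apart into its selector, declarations, properties, and values.

```python
# Take apart one CSS rule into the "anatomy" described in the chapter:
# selector { property: value; property: value }
rule = "h1 { color: green; font-size: 12pt }"

selector, block = rule.split("{", 1)
declarations = block.rstrip("} ").split(";")

print("selector:", selector.strip())
for declaration in declarations:
    prop, value = declaration.split(":", 1)
    print(f"  declaration -> property: {prop.strip():10s} value: {value.strip()}")
```
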
Another organic-like feature involves formatting documents in CSS as tree structures. Lie and Bos suggest this course to emphasize the "inheritance" factor of the elements; "through inheritance, CSS property values set on one element will be transferred down the tree to its descendants" (Lie and Bos, 1999, under "Tree Structures and Inheritance"), but sometimes values set on the "children" override inherited ones (ibid., under "Overriding Inheritance") or cannot be inherited at all (ibid., under "Properties that don't inherit"). As such, the organic metaphor allows users to understand CSS better. Most American public high schools teach the basics of genes and genealogy in biology courses, so theoretically a good number of people understand the basic idea of the transference of genes, and if not, most people are familiar with how family trees work. Users would be familiar with the logistics; thus, applied to CSS, they can make the leap between metaphor and reality and understand how CSS works.
By emphasizing the organic metaphors, Lie and Bos imply that CSS documents are almost alive. They have "organs" which determine whether they live fully or not, or how well they accomplish basic functions. They have elements that can be passed on to others "genetically." This is an interesting method for them to use. They could be doing it accidentally, for poetic reasons, to make their concepts more relatable, or to make a point about CSS's complexity.

Thursday, October 17, 2013

October 15's Muddiest Point


Different groups of people manage the Internet, such as the Internet Society (ISOC) and the Regional Internet Registries. Who do they answer to? Who or what determines the standards and policies they go by in overseeing the Internet? Or are they their own organizations, creating their own guidelines?

Week 8 Readings


Articles

“HTML Tutorial.” (n.d.). W3Schools. Retrieved from http://www.w3schools.com/HTML/

Overall, the document "HTML Tutorial" (n.d.) provided straightforward information on using HTML. A lot of it was new and unfamiliar. For example, under the chapter "HTML Editors," the tutorial recommends different HTML editors for editing HTML – even going so far as to suggest that using a basic text editor would help new users learn HTML ("HTML Tutorial," n.d., under "HTML Editors"). I did not know that such editors existed or that anyone would need them. In retrospect, though, they would make writing HTML quicker and more efficient. I do wonder if their claim holds, however. It would depend on the user; some people learn more easily from firsthand experience while others might need a more human guide to learning HTML.
There were some parts, though, I want to know more details about. In the introduction, the document states that HTML tags and HTML elements usually describe the same things, "but strictly speaking, an HTML element is everything between the start tag and the end tag, including the tags" ("HTML Tutorial," n.d., under "Introduction"). Usually with the phrase "but strictly speaking," a person means a contrasting idea; in this case, it implies that although tags and elements are used in the same way, they aren't the same thing and don't act in the same way. What confuses me is what the difference is. The description above for HTML tags describes almost exactly the same thing (ibid.). So is there a difference? Should there be a difference? Additionally, I understand the basic idea of why the tutorial recommends using lower case attributes/attribute values (ibid., under "HTML Attributes"), but I feel like there is more to it than that. If I understood the logistics better, I would probably know why lower case is used instead of upper case – such knowledge would hopefully clear things up a little. But why one over the other? If both can theoretically work, then maybe including both could expand the list of attributes to encompass new kinds or help organize the ones in existence.
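
The tag/element distinction can be made visible with Python's standard HTML parser: the tags are the <p> and </p> markers the parser reports separately, while the element is the whole span from start tag through end tag, content included. A sketch:

```python
from html.parser import HTMLParser

# The parser reports tags and content separately; the *element* is the whole
# span from start tag to end tag, content included.
class ShowPieces(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag, attrs)

    def handle_data(self, data):
        if data.strip():
            print("content:  ", data.strip())

    def handle_endtag(self, tag):
        print("end tag:  ", tag)

ShowPieces().feed('<p class="note">This whole thing is one p element.</p>')
```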

 
“HTML Cheatsheet Guide.” (2008). Webmonkey.com. Retrieved from http://www.wired.com/images/multimedia/webmonkeycheatsheet_full.pdf

The "HTML Cheatsheet Guide" (2008) seems suitable for quick reference. It provides tags and their descriptions for a lot of the most basic HTML a person might have to write, such as creating an HTML document with <html></html> ("HTML Cheatsheet Guide," 2008, under "Basic Tags") or forming new paragraphs with <p></p> (ibid., under "Formatting"). I've never had to use HTML like this before, so I cannot say for sure whether the guide covers all of the basic tags or elements a user would need. From what I can interpret, though, it accomplishes its goals.
If I ever had to create an HTML document by hand, I would like to experiment with the tags offered. I would need to get the basics down first, but the later sections – "Forms," "Graphical Elements," and "Links" – would prove a good basic challenge. The most interesting to try would be the tags for creating Submit buttons ("HTML Cheatsheet Guide," 2008, under "Forms") and for adding images and their descriptions (ibid., under "Graphical Elements"). The former seems a little more complicated than the other more straightforward tags provided, and the latter would be fun to learn, especially since it would be helpful in the future to have the option of adding pictures to a document.
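
As a sketch of that experiment, the following writes a minimal page exercising the cheatsheet's form, Submit-button, and image tags; the action URL and image path are placeholders.

```python
from pathlib import Path

# A minimal page using the "Forms" and "Graphical Elements" tags.
page = """<!DOCTYPE html>
<html>
<body>
  <form action="/submit-page" method="post">
    <input type="text" name="comment">
    <input type="submit" value="Send">
  </form>
  <img src="picture.jpg" alt="A description of the picture">
</body>
</html>
"""
Path("cheatsheet_demo.html").write_text(page)
print("wrote cheatsheet_demo.html")
```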

 
Pratter, F. E. (2011). “Introduction to HTML,” Chapter 2 of Web Development with SAS by Example. Retrieved from http://books.google.com/books?id=GQxv8xaIPFYC&printsec=frontcover&dq=inauthor:%22Frederick+E.+Pratter%22&hl=en&sa=X&ei=Mr9eUtnXEdSp4APItICACQ&ved=0CDoQ6AEwAg#v=onepage&q&f=false

Reading this chapter helped answer some of the questions I posed about the "HTML Tutorial." For example, concerning lower case vs. upper case, F. E. Pratter (2011) notes that HTML 4.0 tags aren't case-sensitive but that the standard calls for lower case (20). While this does not answer why lower case is preferred, it does fill in the background for my questions. However, it disagrees somewhat with the other readings. Apparently professional Web developers prefer to write HTML from scratch using text editors such as Notepad or KEDIT (Pratter, 2011, 16) – which somewhat complicates the "HTML Tutorial's" recommendation that a basic editor would help beginners learn HTML ("HTML Tutorial," n.d., under "HTML Editors"). Based on the context, they might be both right and wrong; some text editors are basic enough for a novice to use and learn from, while more advanced versions exist for professionals. It seems a little strange that Pratter does not distinguish the types of editors available by expertise, or categorize them by some sort of evaluation. In this regard, the "HTML Tutorial" at least implies a difference even though it does not state it explicitly. This might have to do with the audiences they each target; both introduce users to HTML, but whereas the "HTML Tutorial" reads more like shorthand notes – noting differences and steps – the other focuses on explanations.

 
Goans, D., G. Leach, and T. M. Vogel. (2006). “Beyond HTML: Developing and re-imagining library web guides in a content management system.” Library Hi Tech 24(1): 29-53. DOI:10.1108/0737883061065209.

The article overall provides a good analysis of web guides. In particular, D. Goans and his fellow writers' (2006) look at what counts as content in a CMS was interesting. They note that "content" spans a broad spectrum of forms depending on the organization, usually including resource links, webpages, image files, PDFs, PowerPoint presentations, and Word documents (Goans, Leach, and Vogel, 2006, 31-2). As such, there is no single standard for what "content" is; it can include a wide variety of types of information as long as it is part of the CMS. Additionally, the content itself "is disconnected from the layout and design elements of the page" (ibid., 31). That makes sense; the content is the information itself and stays put, while the layout and design elements can change. However, this can't be entirely true. While the two can be separated, content and layout/design elements depend on each other to transfer knowledge. Layout and design also determine the appearance of the content, influencing how a user interprets the information, so the two are interconnected.
Reading the article also offers an inside look into the development and consideration of web guides, which was informative. For example, the explanation of how they decided on solutions to their problem – whether through commercial software such as Dreamweaver (Goans, Leach, and Vogel, 2006, 33), open source web site systems or "Frankensteining" products together (ibid., 34), or an in-house web development project (ibid., 34) – supplied not only information on what options are out there for institutions in similar situations, but also an idea of how library management works, evaluating and deciding among options as a group.

Thursday, October 10, 2013

Sorry about the multiple postings. My internet connection is a little off today, so when I was making my posts, I wasn't sure if my Muddiest Point came through. So my answer to the problem? Push update repeatedly, until I got the oh-so-bright idea to actually check my blog to see if anything came through. Did that - ergo the multiple postings. I tried to fix it up, but I wasn't able to erase any posts except their content. So ignore these extra posts, just focus on the Week 7 readings and the true Muddiest Point.

October 7's Muddiest Point

I still don't understand the difference between IPv4 and IPv6. Is it only the number of bits each address has, or something more? Why is IPv4 still more widely used than IPv6?

Week 7 Readings


Articles

Tyson, J. (n.d.). “How Internet Infrastructure Works.” Retrieved October 8, 2013, from http://computer.howstuffworks.com/internet/basics/internet-infrastructure.htm

Overall, the article provides a good recap of this past Monday's topics. It covers basic information such as how routers function in terms of devices and the Internet (Tyson, n.d., p. 4), the use of IP addresses (ibid., p. 6) and the Domain Name System (DNS) (ibid., p. 7), and the Uniform Resource Locator (URL) and its connection to the domain name (ibid., p. 8). J. Tyson (n.d.) offers other details as well, such as how the octets of an IP address not only separate the numbers but can be split into two categories – Net, containing the first octet, and Host, or Node, containing the last octet (ibid., p. 6). As such, the article was a good review, reteaching and expanding on this newly learned information.
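
As a sketch of the octet idea, the following splits an address into its four octets and into network and host portions with Python's standard library; note it uses a modern CIDR prefix (/24 is assumed here) rather than the article's fixed first-octet/last-octet rule, and the address itself is just an example.

```python
import ipaddress

address = "192.168.1.25"

# The four octets that make up the dotted-quad address.
print(address.split("."))  # ['192', '168', '1', '25']

# A modern network/host split uses a prefix length (CIDR); /24 is assumed.
interface = ipaddress.ip_interface(f"{address}/24")
print(interface.network)  # 192.168.1.0/24 -- the network ("Net") portion

host_number = int(interface.ip) - int(interface.network.network_address)
print(host_number)        # 25 -- the host portion within that network
```
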
            Tyson (n.d.) brings up a crucial thought near the beginning of the article when describing the Internet. Specifically, he notes that no one owns the Internet itself (Tyson, n.d., p. 1) – a state that I am personally happy does not exist, since it would limit the content of the Internet, give one person or group more global power than anyone should have, and/or require a fee to use. Tyson, however, adds that the Internet is being monitored and maintained; for example, the Internet Society “oversees the formation of the policies and protocols that define how [users] use and interact with the Internet” (ibid., p. 1). This raises some questions. Who authorizes such monitoring? Does the Internet Society abide by a neutral, unbiased stance in its duties?
  

Pace, A.K. (2004, February 1). “Dismantling Integrated Library Systems.” Library Journal, 129(2), 34-36. Retrieved from http://lj.libraryjournal.com/2004/02/ljarchives/dismantling-integrated-library-systems/

A. K. Pace (2004) firmly grounds the article in the current status of ILS systems. Based on his descriptions, the situation seems to be a seesaw of construction and destruction; Pace notes that even as librarians dismantle systems, they develop new modules (under "Librarians and their vendors," para. 2). However, since "the interoperability in library automation is more myth than reality" (ibid., para. 3), a balance cannot be attained at this time. The landscape is too chaotic for librarians to settle on any one system or module. Thus, in this regard, steadiness matters more than fueling new creations that will quickly become obsolete.
I do not entirely agree, though, with Innovative Interfaces' argument. Pace supports their idea that if the core functionality of the established ILS has not changed, then it is better to retain the business logic and workflow standards supporting it rather than replace them with new logic and workflows (Pace, 2004, under "More alike than not," para. 3). If such a model has been successful so far, then it would make sense not to change anything too much. Such a stance, however, implies a stubborn refusal to accept that the world and its technologies are constantly changing. Keeping the basic ideas is acceptable, but there should be some flexibility so that the functionality of the ILS adapts more easily to arising issues.


Brin, S., and L. Page. (n.d.). "Sergey Brin and Larry Page: Inside the Google Machine." TED Talks video, 20:36. Accessed October 8, 2013. http://www.tv.com/web/ted-talks/watch/sergey-brin-and-larry-page-inside-the-google-machine-1545457/

S. Brin (n.d.) introduces the talk with a look at how Google affects the world. His method of presenting it was impactful. Showing the world and the travel of queries in real time (Brin and Page, n.d., 0:40-3:58) makes their work tangible, something that can be seen rather than imagined. Thus there is almost an illusion that their job has a physical presence in the world and that they manipulate and produce physical things rather than digital ones. Additionally, the use of lights and colors to represent the flow of queries plays off of human psychology. In Western thought at least, light – particularly white – represents goodness, purity, and truth. When combined with images of parts of the world dark or empty of light, it reinforces the impression that Google provides information that acts as a beacon in a world dark with ignorance.
L. Page (n.d.) continues the talk by summarizing the small projects Google has invested in for developing web tools and how staff work within the company. In particular, one remark he makes caught my attention. He acknowledges that a person has to be smart in how they search via a search engine, and that the ideal search engine would have artificial intelligence (Brin and Page, n.d., 16:35-17:02). He doesn't, however, elaborate on what kind of "smart" is necessary. In fact, I think it would take more than intelligence to become successful at searching the web. Anyone who has never spent much time searching for anything would have trouble no matter how intelligent they are. Additionally, since everyone uses the Internet, there are different standards and methods of organizing information, and terminology varies with each field. Being smart helps in figuring out the patterns and routes to take, but other factors – experience and good judgment, for example – should also be taken into consideration.

Thursday, October 3, 2013

October 7's Muddiest Point


The Dublin Core has two types – Simple and “Qualified” (nowadays “Refinement”). Why was the “Qualified” type renamed “Refinement”? Does the “Qualified” type provide any more features besides extensibility?

Week 6 Readings


Articles

“Local area network.” (2013, September 30). Retrieved September 30, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Local_Area_Network

             I was able to understand the article “Local area network” (2013) overall; I was familiar with the concept before reading the Wikipedia entry, so the more in-depth inspection of the local area network (LAN) was interesting. The description of what makes up a LAN provided some remarkable details that I didn’t know about. For me, a LAN was just a network. Learning what actually goes into it – switches, firewalls, sensors, load balancers, and routers (“Local area network,” 2013, under “Technical Aspects,” para. 2) – shows how complex the operation really is.
            The driving force behind networking reveals how much digitization has meant to, and improved, everyday life. According to the article, the desire to share storage and printers – both of which were costlier back in the 1970s – initially encouraged people to network their computers (ibid., under “Standards evolution,” para. 1). On a basic level, this motivation offers an example of the human tendency to avoid unnecessary costs and to find ways to ameliorate conditions. That people turned to others exhibits the tendency to seek aid and to communicate with each other. This fits how computers and the Internet are used today, so I suppose the LAN is one of the “founding” technologies behind current attitudes toward technology.

 
"Computer network.” (2013, October 1). Retrieved October 1, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Computer_network

             The article “Computer network” (2013) was a good piece to read after the Wikipedia article “Local area network” (2013). It touched on some of the same ideas that “Local area network” did but went into further detail – for example, it elaborated more on the different types of wired technologies, such as twisted pair, coaxial cable, and optical fiber (“Computer network,” 2013, under “Wired technologies”), as well as the differences between LAN and other networks such as PAN or WAN based on their scopes, areas of influence, capabilities, and content (ibid., under “Geographic scale”). As such, the article provided good parallels to compare with the other Wikipedia article.
            One part – particularly the terminology – got me thinking. The article relates how “exotic technologies” exist, wherein people use unorthodox methods of sending information. The examples provided include IP over Avian Carriers (IPoAC), a joke proposal for sending IP traffic via homing pigeons, and spreading the Internet to interplanetary dimensions (ibid., under “Exotic technologies”). In this case, “exotic” – as compared to wired or wireless technologies – means unusual technologies, based more on jokes than on serious ideas. But this may change. Of course, the pigeon idea is ludicrous. If, in the future, though, more complex technologies are sent out into space or astronauts are able to travel farther, there will be a greater need to ensure that they can keep in contact with people on Earth or with others throughout space. Setting up Internet systems elsewhere may become necessary. Although this might be impossible, it is interesting to consider theoretically. Could the systems work on their own? Would they be able to work with other Internets and, if so, how? Would they be the same as or modeled on the Internet we have now, or would they work in different ways? How quickly could the technology share information between planets?

 
Coyle, K. (2005, September). “Management of RFID in Libraries.” The Journal of Academic Librarianship, 31(5): 486-9. Retrieved from http://www.sciencedirect.com/science/article/pii/S0099133305000856

          Costs overshadow the article as a whole. Although the article is aimed at librarians and other information science professionals, K. Coyle (2005) centers the discussion on the commercial aspects of using RFID in libraries – the varieties of RFID technologies based on such factors as the frequency of radio waves and “of course its cost” (Coyle, 2005, under introduction, para. 3); RFID’s developing payment systems and “micro-payment” plans (ibid., under “Should Libraries use RFID?,” para. 3); and its functions being better cost-suited to the “renting” model of libraries than to retail, which stacks up more costs by using RFID on items that come in and go out (ibid., under “RFID and Library Functions,” para. 1). Such a preoccupation is understandable; RFID is used mainly in the retail sector, so cost is a major factor in adopting it. It also touches on a major concern in libraries; the institutions have had funding problems for years, and knowing more about the products would help librarians find cost-effective solutions to current and new problems.
Some problems posed, though, seem more like common-sense issues than actual barriers. For example, Coyle notes that less sturdy items may not have enough space for the two-inch-square tag and may require a different checkout system altogether (ibid., under “Some Problems Remain,” para. 2), and that oddly shaped and metal-accessorized items produce similar problems (ibid., para. 3). I’m sure that if I had more knowledge of this issue, I would not be arguing what I’m about to suggest. Based on the available knowledge, however, I think the problems could be bypassed. If the issue is the structure of the items in question, why not change it? Maybe store the items in plastic slips or small “boxes” that provide space for the tags. While this might add more costs, it could prevent future problems once installed and be less expensive than maintaining two systems. Otherwise, this could just be a contemporary problem. The technology itself is advancing; in a few years, there could be smaller, lighter, more efficient tags that can be used on such items, or ways to combine different RFID tags so that they all operate on the same system. It all depends on whether libraries can wait for that to appear or whether the problem is immediate.

Thursday, September 26, 2013

September 23's Muddiest Point


Concerning the Entity-Relationship Model, are there any differences between the Chen, Crow’s Foot, and UML notations beyond their graphical features? Does it matter which one a person uses, or does each have specific uses?

Week 5 Readings


Articles

Gilliland, A. J. (2008). Setting the Stage. In Introduction to Metadata, Second Edition. Retrieved September 24, 2013, from http://www.getty.edu/research/publications/electronic_publications/intrometadata/setting.html
             A. J. Gilliland’s (2008) take on metadata is all-encompassing. Rather than give a fixed definition according to her profession, she chooses to acknowledge many different people’s impressions of “metadata” – from the metadata involved in libraries’ indexes, bibliographic records, and abstracts (Gilliland “Setting”) to the information encoded into HTML META tags encountered by the average Internet resource provider (Gilliland “Setting”). Such a method has its pros and cons. On the one hand, by including all of the interpretations, she provides a good basis for understanding the flexibility of metadata, its uses beyond one particular field, and what links and differentiates each profession’s metadata – an overview of the concept. On the other hand, she is providing a lot of information, maybe too much to adequately investigate particular aspects of metadata.
            One comment caught my attention, though; Gilliland notes that “it would seem to be a desirable goal” to join together various materials linked by provenance or subject but disseminated across museum, archive, and library repositories (Gilliland “Setting”). Would this be desirable? I’m not sure myself; if it could be done, it would make finding, organizing, and storing materials easier. It would, however, assume a “one size fits all” approach, disregarding the differences between the professions and the distinct interests each serves.
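            To ground Gilliland’s example of metadata from “the average Internet resource provider,” here is a minimal sketch of my own (not taken from her chapter) of what HTML META tags look like and how a program might read them back out. The page, tag names, and values are hypothetical; the script uses only Python’s standard library.

from html.parser import HTMLParser

# Hypothetical page: the kind of descriptive metadata a web publisher
# embeds in the document head via META tags.
SAMPLE_PAGE = """
<html>
  <head>
    <title>An Example Web Page</title>
    <meta name="description" content="An overview of metadata concepts">
    <meta name="keywords" content="metadata, cataloging, digital libraries">
    <meta name="author" content="Example Author">
  </head>
  <body>...</body>
</html>
"""

class MetaTagReader(HTMLParser):
    """Collects the name/content pairs found in <meta> tags."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.metadata[attrs["name"]] = attrs["content"]

reader = MetaTagReader()
reader.feed(SAMPLE_PAGE)
print(reader.metadata)
# {'description': 'An overview of metadata concepts',
#  'keywords': 'metadata, cataloging, digital libraries',
#  'author': 'Example Author'}

            Running it prints a small dictionary of name/content pairs – “data about data” expressed in a web publisher’s idiom rather than a cataloger’s.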

 
Miller, E. J. (1999, June 6). An Overview of the Dublin Core Data Model. Dublin Core Metadata Initiative. Retrieved from http://dublincore.org/1999/06/06-overview/
            While I was able to understand the article’s underlying purpose of examining the Dublin Core Data Model, I had a hard time focusing on the article as a whole. E. J. Miller (1999) got a little wordy at times. For example, in the first paragraph of the section “Semantic Refinement,” Miller states that the Dublin Core Metadata Initiative “additional recognized early on that various communities may choose to utilize richer semantic definitions” and that “a requirement evolved from this recognition” (Miller, 1999, Semantic Refinement, para. 1) – I think this could have been cut down. Doing so would have allowed him to be more direct in explaining the Initiative’s reactions to communities using semantic definitions outside the Dublin Core Element Set – maybe using the extra space to elaborate further. In addition, misspellings abound. To name a few: in the quote given above, he forgets to add “y” to “additional” to make “additionally;” under “DCMI Requirements,” he misspells “data model,” “representing,” and “required” in the same sentence – “…a formal datamodel able to support the requirements of the DCMI and a corresponding means of syntactic represetnating this information is requireed” (ibid., DCMI Requirements, para. 2); and under “Compound Values,” I believe he meant to use “whether” instead of “weather” in the last sentence [“These characteristics are independent of weather this person…”] (ibid., Compound Values, para. 1).
            The content is valuable, but such sloppy writing detracts from its value. I do wonder, though, what the state of writing will become with digitization. Will it become better, worse, or about the same? Is grammar linked to digitization? Will metadata have an effect on writing as well, adding new dimensions and structures to it?
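            To make Miller’s point about “richer semantic definitions” more concrete, here is a rough sketch of my own (not drawn from the article) contrasting a plain Dublin Core description with one that uses element refinements. The record itself is invented; the refinement pairings (“created” refining “date,” “isPartOf” refining “relation”) follow the Dublin Core vocabulary.

# A hypothetical resource described two ways: with broad Dublin Core
# elements, and with refinements that narrow each element's meaning.

simple_record = {
    "title": "An Example Report",
    "creator": "Example Author",
    "date": "1999",        # any date associated with the resource
    "relation": "An Example Report Series",
}

refined_record = {
    "title": "An Example Report",
    "creator": "Example Author",
    "created": "1999",                       # refines "date": specifically the creation date
    "isPartOf": "An Example Report Series",  # refines "relation": names the containing series
}

# An application that does not understand a refinement can fall back on the
# broader element it refines, so refined records remain usable by simpler tools.
FALLBACKS = {"created": "date", "isPartOf": "relation"}

for term, value in refined_record.items():
    broad = FALLBACKS.get(term, term)
    print(f"{broad}: {value}")

            As I understand it, this fallback behavior is the kind of requirement Miller describes: communities can adopt richer semantics without breaking interoperability with applications that only know the core element set.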

 
Meloni, J. (2010, July 19). Using Mendeley for Research Management. The Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/profhacker/using-mendeley-for-research-management/25627
            The article itself provides an interesting view of Mendeley. Being a user of Zotero (presumably a rival product), J. Meloni (2010) investigates the management tool by applying for an account herself and experiencing Mendeley first-hand (Meloni, 2010, para. 2). Thus – although biased – she provides commentary that only a user can, especially one who has tried other platforms.
            While Mendeley does seem like a viable management tool, I can see one major problem arising. When describing the key features of Mendeley, Meloni (2010) notes how a user can “view the most read authors, journals, and publications within [their] field or other fields” (ibid., Key Features, para. 1). While such discoveries ensure that a person can remain updated on the most popular readings and trends in a particular field, they also limit what one is likely to read. The “most read” reflects what other users tend to read the most; their interests would not necessarily match every person’s interests. In this model, a user may have a harder time finding documents or authors that are not popular with other users but still provide key information on a topic.
            Overall, though, I can see how Mendeley and Zotero could develop further. Meloni (2010) relates how she was able to use the “Import from Zotero” feature to seed her Mendeley account with her Zotero data, syncing the tools to improve her research capabilities (ibid., What About, para. 2). If a person can combine different modules, they could create a new form of digital tool – fluid features which can combine and work together, increasing efficiency and the user’s ability to acquire whatever they need.