Thursday, December 5, 2013
December 2's Muddiest Point
In terms of the structure of web search engines, what are “crawlers”? Are they programs or components within software? How do they interact with and contribute to the search engine overall?
Week 14 Readings
Articles
Barnatt, C. (2008, May 10). “Explaining Cloud Computing.” YouTube video, 00:05:52. Accessed
December 5, 2013. http://www.youtube.com/watch?v=hplXnFUlPmg&NR=1.
Knorr, E., and G. Gruman. (n.d.). “What cloud computing really
means.” InfoWorld. http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031.
The different types of computing grouped under cloud computing in the article are interesting. Supposedly SaaS (software as a service) delivers a single application through the browser, requiring no upfront investment from customers and few costs for the provider to maintain (Knorr and Gruman, n.d., p. 1). However, why just one application? I can think of a number of reasons. Most probably it comes down to costs – the provider may have decided to offer a single application rather than maintain several. There could also be a storage problem; the browser might not be able to handle more than one application at a time. Another type – web service providers – offers APIs that let developers exploit functionality over the Internet instead of offering “full-blown” applications. These encompass a wide range of services; Knorr and Gruman (n.d.) explicitly list “offering discrete business services” and credit card processing (p. 2). As such, the services sound focused on the exchange of money or some sort of numerical computing. This would explain the “discrete” nature of a couple of the services; there is an aura of secretiveness about the services described, and the exchange of numbers and money usually involves private information, so users would seek web service providers that could ensure their privacy.
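As a concrete (and purely hypothetical) illustration of that web-services model, here is a minimal Python sketch of what calling such a provider might look like; the endpoint, field names, and token are invented placeholders, not anything Knorr and Gruman specify.

```python
# A minimal sketch of consuming a web-service API in the style the
# article describes: discrete functionality (here, card processing)
# exposed over the Internet rather than as a full-blown application.
# The endpoint URL and field names are hypothetical placeholders.
import json
from urllib.request import Request, urlopen

def charge_card(amount_cents: int, card_token: str) -> dict:
    payload = json.dumps({"amount": amount_cents, "token": card_token})
    req = Request(
        "https://api.example-payments.com/v1/charges",  # placeholder
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )  # supplying data makes this an HTTP POST
    with urlopen(req) as resp:
        return json.load(resp)

# charge_card(1999, "tok_demo") would return the provider's JSON reply.
```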
The section on “Hardware as a Service” was enlightening. In particular, it was interesting learning about the two main rivals in selling/offering HaaS. C. Barnatt (2008) introduces them as Amazon EC2 and Google App Engine, providing screen shots of each – mainly Amazon’s listings ordered by price and capability, and Google’s overall formatting (Barnatt, 2008, 00:01:43-00:03:03). Such an approach benefits viewers since it acts as a preview of the sites, giving viewers a way to recognize the sites in their later searches. I wonder, though, if there are other HaaS providers available besides Amazon and Google. The two have name recognition, so they would be considered the top players in providing such services. However, they can’t be the only ones.

The benefits mentioned appear incomplete. Barnatt only lists the benefits of Amazon EC2, which include the products being elastic, flexible, inexpensive, and reliable (Barnatt, 2008, 00:01:43-00:03:03). Does Google App Engine offer the same benefits? Or other attributes, such as speed or an uptime guarantee? If they are the same, then how has Google been able to compete with Amazon without offering something new or different? What about the disadvantages of HaaS, whether overall or in the products offered by Amazon and Google?
Frey, T. (2006, Nov. 2). “The Future of Libraries: Future
scenario, historical perspective, technology trends.” FuturistSpeaker.com. http://www.futuristspeaker.com/2006/11/the-future-of-libraries/.
Some of the trends T. Frey (2006) asserts will influence future incarnations of the library sound a little ridiculous. For example, one suggested trend states that “Search Technology will become increasingly more complicated.” Frey starts his description by assuming that search technology is currently relatively simple (Frey, 2006, under “Trend #4,” para. 1). However, this is inaccurate; while searching seems easy, since you just have to type in a search term, ease depends on what a user is searching for. A user will find an object more easily if it is popular and/or specific than if it is relatively unknown or shares a name with other objects.
Frey then predicts that the next
stage in search technology will include queries on “taste, smell, texture,
reflectivity, opacity, mass, density, tone, speed, and volume” (Frey, 2006,
under “Trend #4,” para. 2). At this point I am skeptical. Such searches seem
unnecessary and extravagant, especially since some of the terms are subjective
or sound impossible to measure correctly. He may be right and it will happen, but if it does, it will probably be a passing interest or only favored by those rich enough to afford such technology.

The last paragraph, though, sounds more arguable. Frey ends the section by stating that as search becomes more complicated and people have less time and skill to keep updated on the newest innovations, the librarian will increasingly step in as a guide to finding information (Frey, 2006, under “Trend #4,” para. 3). I agree that search will potentially become more complicated, since developments in technology will lead to newer and greater ways of using it to find more information. Librarians would present themselves as possible “experts” on the matter, since they have experience organizing, finding, and explaining information for patrons. They would have to act quickly, though, to establish this role with the populace; otherwise they may lose it to other information professionals.
Thursday, November 28, 2013
November 25's Muddiest Point
In
terms of networks – or social networks – I’m still a little unsure about the
terminology. I understand that “edges” connect “nodes” but what exactly are “edges”
and “nodes”? How do “edges” connect “nodes”? What does such linking represent?
Week 13 Readings
Articles
Hamer, J. (2008, Sept. 24). “My Turn: Protecting privacy rights in libraries.” Library News.
http://greatlibrarynews.blogspot.com/2008/09/myturn-protecting-privacy-rights-in.html.
Much of what J. Hamer (2008) covers
involves crucial issues in privacy and security of library records. I agree
with Hamer’s position on the matter overall. While clearing up and adding the
librarian perspective on a Vermont law intended to protect the privacy of
library records, Hamer states that the Vermont Library Association personally
advocated for privacy protection for all patrons of all ages, concerned with
how comfortable a child should be if they wish to research sensitive issues
(Hamer, 2008, para. 3). Such a concern is logical; a child needs guidance, but
information should not be limited to them based on what a parent considers
“suitable” since such a stance remains subjective. Additionally, in the case of
the Brooke Bennett investigation and “impeding” the search by requiring a court
order to investigate information on the public computers (Hamer, 2008, para.
4), I agree that a court order should have been presented rather than have the
police assume they can take whatever information they want. However, I wonder
if there could be a substitute for the court order that could be used in an
emergency – depending on the community, the case, and the court, there may be
corruption or delays in gaining a court order that should have been acquired
right away. Otherwise, though, the policy is in place to protect individuals’
safety and privacy. The information acquired from the library would most likely be check-out histories, addresses, and phone numbers – the former not security-related, the latter two a possible cause for alarm if a person has not already made such information public – but overall such information should be protected, since a person’s privacy is integral and a matter of trust. If a person cannot live their life without someone – whether from the government or not – acting like a “big brother” and investigating their records, no matter how useless the information is, then they cannot live freely as an individual.
Shrivastava, M. (2013, July 8). “MIT’s ‘Immersion’ Project
Reveals The Importance of Metadata.” Techchronic. http://techchronic.blogspot.in/2013/07/have-gmail-account-see-what-nsa-knows.html.
I don’t know how melodramatic M.
Shrivastava (2013) is in his article. He states that some officials and
ministers defended the internet surveillance by NSA by claiming that “they are
only collecting metadata related to your mails, messages and interactions from
phone and internet companies” (Shrivastava, 2013, para. 1). It is alarming that
they can access such information. Looking at the image of a social network
provided (Shrivastava, 2013), one can easily see a whole communication tree.
Not only can an individual see the communication lines between one person and a
few others, but how it relates to a whole “community” of people, even ones who
are connected to one person in the main “tree.” That is powerful; in the wrong
hands, a person could easily be charged just for communicating with – or for communicating with someone who interacted with – a suspect or a person under investigation. The NSA does not access the content itself (Shrivastava, 2013,
para. 1), so that does make a difference. It is not like they know exactly what
you say, so a person has some protection. But even with the lack of
information, a lot could be presumed or conjectured about a person’s
interactions. Additionally, this is based on whether Shrivastava could be
considered a reliable source – on the internet, anyone can lie about their
identity or where they got their information.
“The Privacy Show.” (n.d.). On
The Media. http://www.onthemedia.org/story/258658-the-privacy-show/.
The variety of concerns expressed over privacy issues online during the “Privacy Show” (n.d.) is interesting – mainly that losing privacy was akin to wearing an ankle bracelet tracking a person’s every “movement,” that there has not been an increase in security alongside the decrease in privacy, and that bias towards a person’s race/gender/sexuality/etc. – or, in the case of one man, concern that there may be homophobic engineers at Google who would discriminate against him – could affect an employee’s willingness to help someone online (00:00-01:16). Their range of reasons is enlightening; some
people are worried that they have become “prisoners” of the system, or that they give up their privacy without any benefit taking its place, or that even basic information about their identity could harm them. Overall,
there is a sense that each user has lost control over their identity and
abilities online, no longer “free” in the World Wide Web.
The three factors mentioned concerning government and company “prying” into personal information online are interesting as well. Primarily these factors are the fear of terrorism, profit motives, and users’ desire for fun and convenience online (“Privacy Show,” n.d., 01:17-01:55). So it sounds like the government and companies pry because they are reacting to – or want to take advantage of – the concerns of the populace. All of these factors derive from user wishes – whether for security against terrorism, especially in the wake of 9/11, or for better service.

Thursday, November 21, 2013
November 18's Muddiest Point
What
is the DSpace Digital Library System? What makes it different from other DLs?
Week 12 Readings
Articles
Blossom, J. (2011). “What makes social media tick: seven secrets of social media.” In Content Nation: Surviving and Thriving as Social Media Changes Our Work, Our Lives, and Our Future. Indianapolis: John Wiley & Sons.
Allan, C. (2007, April). “Using a wiki to manage a library
instruction program: Sharing knowledge to better serve patrons.” C&RL News 68(4): 242-244.
Wales, J. (2006, August). “Jimmy Wales: The birth of Wikipedia.”
TED Talks video, 20:05. Accessed
November 21, 2013. http://www.ted.com/talks/jimmy_wales_on_the_birth_of_wikipedia.html.
J. Wales’s (2006) stances on
neutrality seem to conflict somewhat with his analysis of the structure of
those who run/edit/manage Wikipedia itself. He asserts that Wikipedia follows a
neutrality policy that is nonnegotiable, wherein users must merely report
information and any controversies without bias for any side (Wales, 2006, 0:07:45-0:09:02).
I think that this is a good standard to follow overall; Wikipedia is a global
phenomenon, so there are more differences in personal opinions, political and
economic beliefs, philosophies, religions, and societal factors to take into
consideration than in one country alone. Such a stance not only allows the
users to work with each other without contention, as Wales promotes (Wales, 2006, 0:09:03-0:10:00), but it also means that information about individual points will not be limited to one point-of-view but will encompass more facets – not every detail, since neutrality still centers on the main, openly stated viewpoints involved, but at least a varied mix of points from every side rather than a single viewpoint.
When compared to the
structure of Wikipedia’s “staff,” though, the neutrality policy appears a
little off. Wales explains that the management of the site derives principles
from a wide variety of political philosophies, including the democratic nuances
of voting for the erasure or inclusion of pages, the aristocratic assumption
that certain figures who have worked closely with the set-up and management of
Wikipedia for a certain length of time exert greater influence in the framework,
and monarchic principles – not “benevolent dictator,” a title Wales refuses to acknowledge – that Wales, the founder, makes the final decisions over
everything (Wales, 2006, 0:14:08-0:17:50). Such a mishmash of political
standards does not imply an anti-political or a flexible management standpoint
alone, but a different form of management style. It is democracy tempered, a
social construct that – while having relative equality amongst its management –
still assumes clear leadership whose opinions do matter. If the “aristocratic” members and the “Queen” deemed it necessary, they could alter the neutrality principle itself, declaring an opinion to be “neutral” when it is not, while other members would have to yield to such a decision.
Thursday, November 14, 2013
November 11's Muddiest Point
I'm still a little confused about the relationships between the different types of languages available (SGML, XML, HTML, XHTML, etc.). How does XML simplify SGML? Does XML define XHTML like SGML defines HTML, or does XML do something different entirely?
Week 11 Readings
Articles
Paepcke, A., H. Garcia-Molina, and R. Wesley. (2005, July/August). “Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative.” D-Lib Magazine 11(7/8).
A. Paepcke and his co-authors
(2005), unusually, view the Web very negatively in the article. In terms of the
Digital Library Initiative (DLI), they portray the Web as the disrupter of
peace and alliance between computer scientists and librarians. It was the
“somewhat undisciplined teenager,” a new son/daughter in the sense that it ruined their plans for the initiative by providing alternate sources of information (Paepcke, Garcia-Molina, and Wesley, 2005, under “The Cuckoo’s Egg Surprise,” para. 2). It challenged their assumptions about forming digital libraries and about what was considered the primary source for finding and using materials. Yet Paepcke and his colleagues direct their analogy into an
Oedipus/Elektra complex. The ‘teenager’ now has “sex appeal” for computer
scientists; the Web offered a fertile area for machine learning, statistical,
and experimental methods to become applicable to information search and
organization, drawing in legions of researchers to participate (Paepcke,
Garcia-Molina, and Wesley, 2005, under “The Cuckoo’s Egg Surprise,” para. 8).
So it seduced computer scientists to the other side, leaving librarians off-balance
in the Initiative. As such, it has become an adulterer, betraying the trust
computer scientists and librarians had. Such language seems over-dramatic,
though; the authors want to explain why digital libraries aren’t succeeding as
they expected, so they found a scapegoat in the Web and made it the source of
all of their troubles. I think a lot more factors are involved. Additionally,
the Web is not the librarians’ enemy; it may cause hardships for the Initiative
overall, but it has increasingly become a tool for librarians to use in
linking, organizing, and creating information.
Lynch, C. A. (2003, February). “Institutional Repositories:
Essential Infrastructure for Scholarship in the Digital Age.” ARL no. 226: 1-7.
It is interesting that C.A. Lynch (2003)
sets up the repository as a collaborative effort. He specifically states that a
successful institutional repository portrays a collaboration between
librarians, information technologists, archives and records managers, faculty,
and university administrators and policymakers (Lynch, 2003, p. 2). This is
interesting in that he takes an overarching viewpoint of the repository; rather
than focusing on one identity or how one particular group of people creates or
uses the institution, he suggests that it involves the work of many
individuals. Thus the institutional repository does not appear to be so
2-dimensional, but more complex, requiring the actions of many people to work.
This does fit its purpose to disseminate digital materials to its institution and
related members – particularly the intellectual works of faculty and students
(Lynch, 2003, p. 2) – in a sense implying that collaboration supplies benefits
to a wider sample of people. I wonder, though, if institutional repositories
should only distribute the works of faculty and students. Such a narrowing of
focus may help the repository focus its aim to build up a collection and access
to scholarship, but it also limits what counts as legitimate sources or
information.
Another interesting point is Lynch’s
concerns over the use of repositories. The main troubles he foresees include degenerating the repository into a tool for institutional control over intellectual work (Lynch, 2003, p. 4-5), adding “distracting and irrelevant policy baggage” to it (Lynch, 2003, p. 5), and – with increasing demand for institutional repositories – repositories becoming hastily-made, hollow services (Lynch, 2003, p. 6). Thus he is concerned over the quality of institutional repositories; he envisions a true repository as one little influenced by the
politics of its institution [being almost ‘uncontaminated’ or pure, having its
own agenda rather than fulfilling the agendas of its institution], yet
requiring the full support and resources of its institution to be well-made and
resourceful for its users. Can such conflicting worries coexist? Such a
repository imagined by Lynch would need to be created by an institution which
upholds values of open access to information and ideals on
unrestrained/uncensored information. Reality, though, means that the repository
in question has to submit to its institution to some degree if it is to receive
funding or support in its own endeavors. As such, I don’t know how realistic
Lynch’s concerns are or if they can be fixed according to his own values.
Hawking, D. (2006, June). “Web Search Engines: Part 1.” Computer: 86-88.
AND
Hawking, D. (2006, August). “Web Search Engines: Part 2.” Computer: 88-90.
According
to D. Hawking (2006), Web search engines require a lot of attention and work to
operate. Physically, they can be sprawling. Each operates from numerous,
geographically spread data centers and within each center are a variable number
of servers to support services and specialized duties (Hawking, “Part 1,” 2006,
p. 86). Thus a search engine is not one entity but a composition of entities; it needs different parts to ensure that it functions as it should. Its operations imply complexity as well. For example, search engines employ inverted files to identify indexing terms. Inverted files are created in two phases – first, scanning the text of each input document; second, sorting temporary files into term number order (Hawking, “Part 2,” 2006, p. 88). This requires investing a lot of time and attention to create the files needed for the process. Taken altogether, this is a sobering thought; search
engines have become such a common feature on the Web that to not see one would
be a cause for outcry. It seems so easy to use – just enter a word or phrase,
click, and you get your results – yet a lot of work goes into making sure it
works.
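As a rough illustration of the two-phase build Hawking describes, here is a minimal Python sketch; the toy documents and whitespace tokenization are my own simplifying assumptions, not Hawking’s implementation.

```python
# A minimal sketch of building an inverted file in the two phases
# Hawking describes: phase one scans each input document and emits
# (term, doc_id) postings; phase two sorts the postings into term
# order so that each term's posting list becomes contiguous.
from collections import defaultdict

def build_inverted_index(documents):
    postings = []
    # Phase 1: scan the text of each input document.
    for doc_id, text in documents.items():
        for term in text.lower().split():
            postings.append((term, doc_id))
    # Phase 2: sort the temporary postings into term order.
    postings.sort()
    index = defaultdict(list)
    for term, doc_id in postings:
        if not index[term] or index[term][-1] != doc_id:
            index[term].append(doc_id)  # skip duplicate postings
    return dict(index)

docs = {1: "web search engines", 2: "search engine internals"}
print(build_inverted_index(docs)["search"])  # -> [1, 2]
```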
Shreeves, S.L., T.G.
Habing, K. Hagedorn, and J.A. Young. (2005). “Current Developments and Future
Trends for the OAI Protocol for Metadata Harvesting.” Library Trends 53, no. 4: 576-589.
In
this article, I found the development of OAI services quite interesting. Rather
than remain a tool only in the e-print archives community, others – libraries,
museums, archives, etc. – started using it for their own services, creating
user group-specific service providers (Shreeves et al., 2005, p. 578). Thus
others started seeing its usefulness – probably through observing how the
e-print archives community used it and debated its pros/cons. Yet, other
communities didn’t mimic the original users explicitly. Rather, they not only
utilized the servers to help provide federated access to resources but also
developed further standards, tools, and metadata schemas to contribute to the
OAI protocol (Shreeves et al., 2005, p. 578). In this way, the OAI provides a good lesson in using technologies, especially those created for a specific purpose or group. If other purposes exist for a tool, then a user should test them out. Other people can use the same tool for different purposes.
Additionally, what it is now does not mean that it will maintain that structure
in the future; users can add new standards or other accessories to the
technology to better adapt it to their situation.
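To make the protocol less abstract, here is a minimal harvesting sketch in Python; the repository URL is a hypothetical placeholder, while the ListRecords verb and the oai_dc metadata prefix are standard parts of OAI-PMH.

```python
# A minimal sketch of an OAI-PMH harvest: the protocol rides on plain
# HTTP, so a ListRecords request with the oai_dc metadata prefix
# returns an XML document of Dublin Core records. The base URL below
# is a placeholder, not a real repository.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint

query = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE_URL}?{query}") as response:
    tree = ET.parse(response)

# Print each harvested record's Dublin Core title.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```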
Friday, November 1, 2013
October 28's Muddiest Point
I'm still a little confused about the role of CSS Comments (slide 30). How do the Comments explain the code used? Why are they ignored by the browsers?
Week 10 Readings
Articles
Bryan, M. (1997). “An Introduction to the Extensible Markup
Language (XML).” The SGML Centre.
Retrieved from http://www.is-thought.co.uk/xmlintro.htm.
Ogbuji, U. (2004, January 20). “A survey of XML Standards: Part
1, The core standards – a foundation for the wide world of XML.” IBM: developerWorks. Retrieved from http://www.ibm.com/developerworks/library/x-stand1/index.html
Similar to Uche Ogbuji’s article “A
Survey of XML Standards: Part 1,” the “XML Tutorial” (n.d.) focuses on the
manipulation of language and how users can use it. It specifically acts as a
markup language, carrying data (not displaying it) and remaining self-descriptive
(“XML Tutorial,” n.d., under “Introduction to XML,” “What is XML?”). Thus it
has its own vocabulary, acting as a method of communication between user and
computer. The Tutorial, however, argues that it “does not do anything;” it can
“structure, store, and transport information” but “it is just information
wrapped in tags,” needing additional software to either send, receive, or
display it (ibid., “XML Does Not DO Anything”). Compared to Bergholz’s
definition (Bergholz, 2000, p. 74), this definition isn’t as clear. I think I
understand the basic meaning the Tutorial purports – that XML only marks up the
structure and describes features, not actually commanding anything to be done –
but I think that stating that it “does not do anything” confuses more than
explains XML.
The language of XML itself seems to
be its own creation. It is very fertile; since the XML language does not have
any predefined tags, the user can determine her own tags and document structure
(“XML Tutorial,” n.d., under “Introduction to XML,” “With XML You Invent Your
Own Tags”), so the number of possible XML languages is limitless (ibid., under “How Can XML be Used?,” “XML is Used to Create”). As such, XML is almost alive,
allowing users to create multiple languages to attain different purposes. A few
rules still apply. For example, XML tags are case-sensitive (ibid., under “XML
Syntax Rules,” “XML Tags are Case Sensitive”) and all attribute values have to
be quoted (ibid., “XML Attribute Values”). So some limits exist, restricting
the number and type of possible languages available. However, some restrictions
are necessary so that XML language creation does not become too chaotic,
following some basic pattern to work in practice and having an anchor in what
does and doesn’t work.
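As a small illustration of these rules – with a <note> vocabulary invented here purely in the spirit of the tutorial’s “invent your own tags” – Python’s standard library can parse such a document:

```python
# A minimal sketch of the tutorial's rules: user-invented tags,
# case-sensitive tag names, and attribute values that must be quoted.
# The <note> vocabulary is made up for this example.
import xml.etree.ElementTree as ET

doc = """<note priority="high">
  <to>Tove</to>
  <from>Jani</from>
  <body>Remember the meeting</body>
</note>"""

root = ET.fromstring(doc)
print(root.tag, root.attrib)   # note {'priority': 'high'}
print(root.find("to").text)    # Tove
print(root.find("TO"))         # None: <to> and <TO> are different tags
# Removing the quotes around "high" would raise a ParseError, since
# XML attribute values must always be quoted.
```

Thursday, October 24, 2013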
October 21's Muddiest Point
In
the HTML page, the elements – and, in conjunction, the tags and attributes –
all produce “permanent” effects that can only be changed by the creator in the HTML page, e.g. the tags <b>…</b> bold text and <body bgcolor="green"> makes the background color perpetually green. Are these “static” elements the only
kinds, though? Can there be elements coded to change on their own initiative or
allow users to make changes themselves, such as writing and submitting
comments?
Week 9 Readings
Articles
CSS Tutorial. (n.d.). w3schools.com.
Retrieved from http://www.w3schools.com/css/
CSS Tutorial: Starting with HTML + CSS. (n.d.) W3C. Retrieved from http://www.w3.org/Style/Examples/011/firstcss
By emphasizing organic metaphors, Lie and Bos almost imply that CSS documents are alive. They have “organs” which determine whether they live fully or not, or how well they accomplish basic functions. They have elements that can be passed on to others “genetically.” This is an interesting method for them to use. They could be doing it accidentally, for poetic reasons, to make their concepts more relatable, or to make a point about CSS’s complexity.
Thursday, October 17, 2013
October 15's Muddiest Point
Different
groups of people manage the Internet, such as the Internet Society (ISOC) and
the Regional Internet Registries. Who do they answer to? Who or what determines
the standards and policies they go by in overseeing the Internet? Or are they
their own organizations, creating their own guidelines?
Week 8 Readings
Articles
“HTML Tutorial.” (n.d.). W3Schools.
Retrieved from http://www.w3schools.com/HTML/
Overall,
the document “HTML Tutorial” (n.d.) provided straightforward information on
using HTML. A lot of it was new and unfamiliar. For example, under the chapter
“HTML Editors,” the tutorial recommends different HTML editors for editing HTML
– even going so far as to suggest that utilizing a basic text editor would help
new users learn about HTML (“HTML Tutorial,” n.d., under “HTML Editors”). I did not know that such editors existed or that anyone would need them. In retrospect, though, they would make writing HTML quicker and more efficient. I do wonder if
their claim would work, however. It would depend on the user; some people can
learn more easily with firsthand experience while others might need a more
human guide in learning HTML.
There
were some parts, though, that I want more details about. In the
introduction, the document states that HTML tags and HTML elements usually
describe the same things, “but strictly speaking, an HTML element is everything
between the start tag and the end tag, including the tags” (“HTML Tutorial,”
n.d., under “Introduction”). The phrase “but strictly speaking” usually signals a distinction; in this case, it implies that although the terms are used interchangeably, tags and elements aren’t the same thing nor act in the same way. What confuses me is what the difference is – presumably that in <p>Hello</p>, the tags are <p> and </p> while the element is the whole construct, tags and content together. Yet the description given for HTML tags describes the exact same thing (ibid.). So is there a difference?
Should there be a difference? Additionally, I understand the basic idea for why
the tutorial recommends using lower case attributes/attribute values since they
are case-sensitive (ibid., under “HTML Attributes”) but I feel like there is
more to it than that. If I understood the logistics better, I probably would
know why lower case is used instead of upper case – such knowledge would
hopefully clear things up a little. But why one over the other? If both can
theoretically work, then maybe including both can expand the list of attributes
to encompass new kinds or maybe help organize the ones in existence.
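As a hedged illustration of the tag/element distinction puzzled over above, Python’s built-in html.parser reports start tags, content, and end tags as separate events, while the element is the whole construct:

```python
# A minimal sketch of the tag vs. element distinction: the parser
# fires separate events for the start tag, the content, and the end
# tag; the "element" is everything from start tag to end tag,
# including the tags themselves.
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print(f"start tag: <{tag}> attributes={attrs}")

    def handle_data(self, data):
        print(f"content:   {data!r}")

    def handle_endtag(self, tag):
        print(f"end tag:   </{tag}>")

# The whole <p>...</p> construct is one element built from two tags
# plus the content between them.
TagLogger().feed('<p class="intro">Hello</p>')
```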
“HTML Cheatsheet Guide.” (2008). Webmonkey.com. Retrieved from http://www.wired.com/images/multimedia/webmonkeycheatsheet_full.pdf
The guide “HTML Cheatsheet Guide”
(2008) seems like it would be suitable for a quick reference. It provides tags
and their descriptions for a lot of the most basic HTML a person might have to write, such as creating an HTML document with <html></html> (“HTML Cheatsheet Guide,” 2008, under “Basic Tags”) or forming new paragraphs with <p></p> (ibid., under “Formatting”). I’ve never had to use HTML like
this before, so I cannot say for sure if the guide covers all of the basic tags
or elements a user would necessarily need. From what I can interpret, though,
it accomplishes its goals.
If I ever had to create an HTML document by using HTML tags, I would like to experiment with the tags offered. I would need to get the basics down first, but the later sections – “Forms,” “Graphical Elements,” and “Links” – would prove a good basic challenge. The most interesting to try would be the tags for creating Submit buttons (“HTML Cheatsheet Guide,” 2008, under “Forms”) and for adding images and their descriptions (ibid., under “Graphical Elements”). The former seems a little more complicated than the other, more straightforward tags provided, and the latter would be fun to learn, especially since it would be helpful in the future to have an option for adding pictures to a document.
Pratter, F. E. (2011). “Introduction to HTML,” Chapter 2 of Web Development with SAS by Example. Retrieved
from http://books.google.com/books?id=GQxv8xaIPFYC&printsec=frontcover&dq=inauthor:%22Frederick+E.+Pratter%22&hl=en&sa=X&ei=Mr9eUtnXEdSp4APItICACQ&ved=0CDoQ6AEwAg#v=onepage&q&f=false
Reading this chapter helped answer some of the questions I posed about the “HTML Tutorial.” For example, concerning lower vs. upper case, F. E. Pratter (2011) notes that HTML 4.0 tags aren’t case-sensitive but the standard requires lower case (20). While this does
not answer why lower case is preferred, it does elaborate on the background for
my questions. However, it disagrees somewhat with the other readings. Apparently professional Web developers prefer to write HTML from scratch by employing text editors such as Notepad or KEDIT (Pratter, 2011, 16) – this somewhat questions the recommendation of “HTML Tutorial” that a basic editor would help beginners learn HTML (“HTML Tutorial,” n.d., under “HTML Editors”). Based on the context, both might be partly right; some text editors would be basic enough for a novice to use and learn from, while there are more advanced versions for the professionals. It seems kind of strange that Pratter
does not acknowledge the types of editors available based on expertise, or
categorize them by some sort of evaluation. In this regard, the “HTML Tutorial”
at least implies a difference even though it does not state it explicitly. This
might have to do with the type of audiences they each target; both introduce
users to HTML, but whereas the “HTML Tutorial” seems more like shorthand notes
– noting differences and steps – the other is focused on explanations.
The article overall provides a good
analysis of web guides. In particular, the look D. Goans and his co-authors (2006) take at what counts as “content” in a CMS was interesting. They note that “content” consists
of a broad spectrum of forms depending on the organization, usually including
resource links, webpages, image files, PDFs, PowerPoint presentations, and Word
documents (Goans, Leach, and Vogel, 2006, 31-2). As such, there is no standard
to what “content” is. It can include a wide variety of types of information as
long as it is part of the CMS. Additionally, the content itself “is
disconnected from the layout and design elements of the page” (ibid., 31). It
makes sense; the content is the information itself and permanent while the layout
and design elements can change. However, this can’t be entirely true. While the
two can be separate, the content and layout and design elements do depend on
each other to transfer knowledge. Layout and design elements also determine the
appearance of the content, influencing how a user interprets the information,
so the two are interconnected.
Reading the article also offers an inside look into the development and consideration of web guides, which was
informative. For example, the explanation of how they decided on the solutions
to their problem – whether through commercial software such as Dreamweaver (Goans,
Leach, and Vogel, 2006, 33), open source web site systems or “Frankensteining”
products together (ibid., 34), or introducing an in-house web development
project (ibid., 34) – not only supplied information on what options are out there for institutions in similar situations, but also gave an idea of how library management works, evaluating and determining options as a group.

Thursday, October 10, 2013
Sorry about the multiple postings. My internet connection is a little off today, so when I was making my posts, I wasn't sure if my Muddiest Point came through. So my answer to the problem? Push update repeatedly, until I got the oh-so-bright idea to actually check my blog to see if anything came through. Did that - ergo the multiple postings. I tried to fix it up, but I wasn't able to erase any posts except their content. So ignore these extra posts, just focus on the Week 7 readings and the true Muddiest Point.
October 7's Muddiest Point
I still don't understand the difference between IPv4 and IPv6. Is it only the number of bits each has, or something more? Why is IPv4 more popular than IPv6?
Week 7 Readings
Articles
Tyson,
J. (n.d.). “How Internet Infrastructure Works.” Retrieved October 8, 2013, from
http://computer.howstuffworks.com/internet/basics/internet-infrastructure.htm
Pace,
A.K. (2004, February 1). “Dismantling Integrated Library Systems.” Library Journal, 129(2), 34-36.
Retrieved from http://lj.libraryjournal.com/2004/02/ljarchives/dismantling-integrated-library-systems/
Brin, S., and L. Page. (n.d.). “Sergey Brin and Larry Page: Inside the Google
Machine.” TED Talks video, 20:36.
Accessed October 8, 2013. http://www.tv.com/web/ted-talks/watch/sergey-brin-and-larry-page-inside-the-google-machine-1545457/
S.
Brin (n.d.) introduces the episode with a look at how Google affects the world.
His method of presenting it was impactful. Showing the world and the travel of queries in real time (Brin and Page, n.d., 0:40-3:58) makes their job physical, something that can be seen rather than imagined. Thus there is almost an
illusion that their job has a physical presence in the world and that they
manipulate and produce physical things rather than digital. Additionally, the
use of lights and colors to represent the flow of queries plays off of human
psychology. In Western thought at least, light – particularly white –
represents goodness, purity, and truth. When combined with images of parts of
the world black or empty of light, it reinforces assumptions that Google is
providing information that act as beacons in a world dark with ignorance.
L.
Page (n.d.) continues the episode by summarizing the small projects Google has
invested in for developing web tools and how staff work within the company. In
particular, one remark he makes caught my attention. He acknowledges that a person
has to be smart in how they search via the search engine, and that the ideal
search engine would have artificial intelligence (Brin and Page, n.d.,
16:35-17:02). He, however, doesn’t elaborate on what kind of “smart” is
necessary. In fact, I think it would take more than intelligence to become
successful in searching the web. Anyone who has never spent much time searching
for anything would have trouble no matter how intelligent they are.
Additionally, since everyone uses the Internet, there are different standards and methods of organizing information, and terminology varies with each field. Being smart helps figure out the patterns and routes to take, but
other factors – experience and good judgment skills, for example – should also
be taken into consideration.
Thursday, October 3, 2013
September 30's Muddiest Point
The Dublin Core has two types – Simple and “Qualified” (nowadays “Refinement”). Why was “Qualified” Dublin Core changed to “Refinement”? Does the “Qualified” type provide any more features besides extensibility?
Week 6 Readings
Articles
“Local
area network.” (2013, September 30). Retrieved September 30, 2013, from
Wikipedia: http://en.wikipedia.org/wiki/Local_Area_Network
Some of the problems posed, though, seem more like common-sense issues than actual barriers. For example, Coyle notes that less sturdy items may not have enough space for
the two-inch square tag and may require a different checkout system altogether
(ibid., under “Some Problems Remain,” para. 2) and that oddly shaped and
metal-accessorized items produce similar problems (ibid., para. 3). I’m sure
that if I had more knowledge on this issue, I would not be arguing what I’m
about to suggest. However, based on the available knowledge, I think the
problems could be bypassed. If the issue is the structure of the items in
question, why not change it? Maybe store the items in plastic slips or small
“boxes” which provide space for the tags. While this might add more costs, it
could prevent future problems once installed and be less expensive than
maintaining two systems. Otherwise, this could just be a contemporary problem.
The technology itself is advancing; in a few years, there could be smaller, lighter,
more-efficient tags that can be used on the items or ways to combine different
RFIDs so that they all operate on the same system. It all depends on whether
libraries can wait for it to appear or if the problem is immediate.
Thursday, September 26, 2013
September 23's Muddiest Point
Concerning the Entity-Relationship Model, are there any differences between the Chen, Crow’s Feet, and UML notations beyond their graphical features? Does it matter which one a person uses, or do they each have specific uses?
Week 5 Readings
Articles
Gilliland, A. J. (2008). Setting the Stage. In Introduction to Metadata, Second Edition.
Retrieved September 24, 2013, from http://www.getty.edu/research/publications/electronic_publications/intrometadata/setting.html
One comment caught my attention, though; Gilliland notes that “it would seem to be a desirable goal” to join together various materials linked by provenance or subject but disseminated across museum, archive, and library repositories (Gilliland, 2008, under “Setting the Stage”). Would this be desirable? I’m not sure myself; if it could be done, it would make finding, organizing, and storing materials easier. It would, however, assume a “one size fits all” approach, disregarding the differences between the professions and the distinct interests each would bring.
While I was able to understand the article’s underlying purpose of examining the Dublin Core Data Model, I had a hard time focusing on the article as a whole. E. J. Miller (1999) got a little wordy at times. For example, in the first paragraph of the section “Semantic Refinement,” Miller states that the Dublin Core Metadata Initiative “additional recognized early on that various communities may choose to utilize richer semantic definitions” and that “a requirement evolved from this recognition” (Miller, 1999, Semantic Refinement, para. 1) – I think this could have been cut down more. Doing so would have allowed him to be more direct in explaining the Initiative’s reactions to users using semantic definitions outside the Dublin Core Element Set – maybe using the extra space to elaborate further. In addition, misspellings abound. To name a few, in the quote given above, he forgets to add “y” to “additional” to have “additionally;” under “DCMI Requirements,” he misspells “data model,” “representing,” and “required” in the same sentence – “…a formal datamodel able to support the requirements of the DCMI and a corresponding means of syntactic represetnating this information is requireed” (ibid., DCMI Requirements, para. 2); and under “Compound Values,” I believe he meant to use “whether” instead of “weather” in the last sentence [“These characteristics are independent of weather this person…”] (ibid., Compound Values, para. 1). The content is valuable, but such sloppy writing detracts from its value. I do wonder, though, what the state of writing will become with digitization. Will it become better or worse or about the same? Is grammar linked to digitization? Will metadata have an effect on writing as well, adding new dimensions and structures to it?
While Mendeley does seem like a viable management tool, I can see one major problem arising. When describing the key features of Mendeley, Meloni (2010) notes how a user can “view the most read authors, journals, and publications within [their] field or other fields” (ibid., Key Features, para. 1). While such discoveries ensure that a person can remain updated on the most popular readings and trends for a particular field, they also limit what one can read. The “most read” refers to what other users tend to read the most; their interests would not necessarily match every person’s interests. In this model, a user may have a harder time finding documents or authors who are not popular with the users but still provide key information on topics.
Overall, though, I can see how Mendeley and Zotero can develop further. Meloni (2010) relates how she was able to use the “Import from Zotero” feature to seed her Mendeley account with her Zotero data, syncing the tools to improve her research capabilities (ibid., What About, para. 2). If a person can combine different modules together, they could create a new form of digital tools – fluid features which can combine and work together, increasing efficiency and the ability of the user to acquire whatever they need.