Thursday, October 24, 2013

October 21's Muddiest Point


In an HTML page, the elements – and, in conjunction, the tags and attributes – all seem to produce "permanent" effects that only the page's creator can change, e.g. <b>…</b> bolds text and <body bgcolor="green"> makes the background color perpetually green. Are these "static" elements the only kind, though? Can there be elements coded to change on their own initiative, or to let users make changes themselves, such as writing and submitting comments?
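(To make the question concrete, here is a minimal sketch of the kind of user-interactive element I have in mind – a comment form. The action URL and field names are hypothetical placeholders, not from any reading.)

    <form action="/submit-comment" method="post">
      <!-- users write their comment here -->
      <textarea name="comment" rows="4" cols="40"></textarea>
      <!-- clicking this sends the comment off to be added to the page -->
      <input type="submit" value="Post Comment">
    </form>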

Week 9 Readings


Articles

CSS Tutorial. (n.d.). w3schools.com. Retrieved from http://www.w3schools.com/css/

The CSS Tutorial (n.d.) offers interesting details on CSS, one of which is its origins. According to the Tutorial, when developers added tags such as <font> and color attributes to the HTML 3.2 specification, developing large web sites became a longer and more expensive process. In response, the World Wide Web Consortium (W3C) created CSS so that, as of HTML 4.0, a user could remove the formatting from HTML documents and store it in a separate CSS file (under Introduction, "Styles Solved a Big Problem"). In effect, then, CSS was developed as a solution to an earlier problem – revealing how software and digital technologies evolve through trial and error.
Another interesting detail was how a person can insert CSS into their work. The Tutorial (n.d.) lists three ways to do so: 1) an external style sheet, which changes the appearance of an entire Web site by changing one file (under CSS How To…, "External Style Sheet"); 2) an internal style sheet, used for a single document with a unique style (ibid., "Internal Style Sheet"); and 3) inline styles, which mix content with presentation (ibid., "Inline Styles"). Such categories suggest an organization to CSS's development as well as the different scales at which CSS affects visual elements. A user can apply CSS to more focused projects; no one has to manipulate the presentation of a whole website when she or he really needs to differentiate individual pages from each other, for example. The sketch below illustrates all three methods.
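To see the three methods side by side, here is a minimal sketch (the file name mystyle.css is a placeholder of mine):

    <head>
      <!-- 1) external style sheet: editing one file restyles the whole site -->
      <link rel="stylesheet" type="text/css" href="mystyle.css">
      <!-- 2) internal style sheet: styles scoped to this single document -->
      <style type="text/css">
        body { background-color: green; }
      </style>
    </head>
    <body>
      <!-- 3) inline style: presentation mixed directly into the content -->
      <p style="color: red;">Only this paragraph is red.</p>
    </body>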
 

CSS Tutorial: Starting with HTML + CSS. (n.d.). W3C. Retrieved from http://www.w3.org/Style/Examples/011/firstcss

The "CSS Tutorial: Starting with HTML + CSS" (n.d.) covers some details I'm unfamiliar with. For example, the second "warning" in Step 1 notes how the "ul" element represents a list with one hyperlink per item, serving as the "site navigation menu," while the "h1" and "p" elements "form the unique content of this page" (CSS Tutorial, n.d., under Step 1: The HTML). I found such "warnings" fascinating; they elaborate on the basic information of the Tutorial and introduce new ideas for using HTML. In the case of this example, I am now curious what "unique content" the "h1" and "p" elements produce and want to try that out.
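For reference, a stripped-down sketch of the structure that step describes (the link targets and wording here are my own illustration):

    <body>
      <!-- site navigation menu: one hyperlink per list item -->
      <ul class="navbar">
        <li><a href="index.html">Home page</a></li>
        <li><a href="town.html">My home town</a></li>
      </ul>
      <!-- the unique content of this page -->
      <h1>My first styled page</h1>
      <p>Welcome to my styled page!</p>
    </body>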
I particularly liked the Tutorial's approach to colors. Step 2 covers the basics, teaching how to add color using the <style> element – specifically <style type="text/css"> – and elaborating on how to set the colors for the text and background of the body (CSS Tutorial, n.d., under Step 2: Adding Some Colors). I love adding variation to my work when I can, experimenting with colors, type fonts, and the like, so I will need to investigate this when working with HTML. Similarly, the discussion of link colors under Step 5 was most interesting. I am familiar with the convention that links to pages I haven't visited remain blue while those I have clicked become purple (ibid., under Step 5: Styling Links). However, I am now curious why this is the standard. Consistent color-coding benefits the Web overall, sparing newcomers confusion. But why blue and purple? Why not some other colors, like green and red? Were the colors chosen randomly, was thought put into them, or do they correspond to a cultural norm? I really want to experiment with this and see if anyone would actually react if I changed the colors.
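A small sketch of both steps, including that experiment of swapping the standard blue/purple link colors for green and red (the body colors follow the tutorial's example; the link colors are my own variation):

    <style type="text/css">
      body {
        color: purple;              /* text color */
        background-color: #d8da3d;  /* background color */
      }
      a:link    { color: green; }   /* unvisited links, instead of the usual blue */
      a:visited { color: red; }     /* visited links, instead of the usual purple */
    </style>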

 
Lie, H.W. and B. Bos. (1999). Chapter 2 in Cascading Style Sheets, Designing for the Web. Boston: Addison-Wesley.

After reading the chapter, I found that H. W. Lie and B. Bos (1999) provide interesting notes using organic metaphors – for example, their "anatomy" of rules and declarations. Each is made up of two parts: the rule consists of the selector [the link between the HTML document and the style] and the declaration [which determines the effect of the rule] (Lie and Bos, 1999, under "Rules and Style Sheets," "Anatomy of a rule"), and the declaration is made up of a property [the quality being set] and a value [what the quality is set to] (ibid., "Anatomy of a declaration"). Each part is interlinked, the declaration and all of its components contributing to the overall form of the rule. One part cannot exist without the other, or otherwise the rule as a whole fails. This is almost true for the human body; although the body can continue living with only one kidney, for example, overall it needs most of its organs intact to function fully as a living being. Viewing the anatomy of the rule like this emphasizes both its limitations and its complexities in CSS.
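Their "anatomy" can be labeled directly on a one-line rule:

    h1 { color: green; }
    /* selector:    h1            - links the rule to <h1> elements in the HTML
       declaration: color: green  - determines the effect of the rule
         property:  color         - the quality being set
         value:     green         - what the quality is set to */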
Another organic-like feature involves modeling CSS documents as tree structures. Lie and Bos suggest this course to emphasize the "inheritance" factor of the elements: "through inheritance, CSS property values set on one element will be transferred down the tree to its descendants" (Lie and Bos, 1999, under "Tree Structures and Inheritance"), though sometimes "children" override their parents (ibid., under "Overriding Inheritance") and some properties cannot be inherited at all (ibid., under "Properties that don't inherit"). As such, the organic metaphor helps users understand CSS better. Most American public high schools teach the basics of genes and heredity in biology courses, so theoretically a good number of people understand the basic idea of the transference of genes; if not, most people are familiar with how family trees work. Users would be familiar with the logic; thus, applied to CSS, they can make the leap between metaphor and reality and understand how CSS works.
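A minimal sketch of inheritance down the tree (border is one of the properties that does not inherit):

    <style type="text/css">
      body {
        color: green;             /* inherited: descendant text turns green */
        border: 1px solid black;  /* not inherited: only body gets the border */
      }
    </style>
    <body>
      <h1>A green heading</h1>    <!-- this "child" inherits color from body -->
      <p>Green text, <em>green emphasis</em> - but no borders of their own.</p>
    </body>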
By emphasizing the organic metaphors, Lie and Bos almost imply that CSS documents are alive. They have "organs" which determine whether they function fully and how well they accomplish basic tasks. They have traits that can be passed on to others "genetically." This is an interesting method for them to use. They could be doing it accidentally, for poetic reasons, to make their concepts more relatable, or to make a point about CSS's complexity.

Thursday, October 17, 2013

October 15's Muddiest Point


Different groups of people manage the Internet, such as the Internet Society (ISOC) and the Regional Internet Registries. Who do they answer to? Who or what determines the standards and policies they go by in overseeing the Internet? Or are they their own organizations, creating their own guidelines?

Week 8 Readings


Articles

“HTML Tutorial.” (n.d.). W3Schools. Retrieved from http://www.w3schools.com/HTML/

Overall, the document "HTML Tutorial" (n.d.) provided straightforward information on using HTML, much of it new and unfamiliar to me. For example, under the chapter "HTML Editors," the tutorial recommends different editors for writing HTML – even suggesting that a basic text editor would help new users learn HTML ("HTML Tutorial," n.d., under "HTML Editors"). I did not know that such editors existed or that anyone would need them. In retrospect, though, they would make writing HTML quicker and more efficient. I do wonder whether the claim holds, however. It would depend on the user; some people learn more easily from firsthand experience, while others might need a more human guide to learning HTML.
There were some parts, though, I want to know more about. In the introduction, the document states that HTML tags and HTML elements usually describe the same things, "but strictly speaking, an HTML element is everything between the start tag and the end tag, including the tags" ("HTML Tutorial," n.d., under "Introduction"). Usually the phrase "but strictly speaking" signals a distinction; in this case, it implies that although the two terms are used interchangeably, tags and elements are not the same thing and do not act the same way. What confuses me is what the difference actually is, since the tutorial's description of HTML tags describes nearly the same thing (ibid.). So is there a difference? Should there be? My current reading is sketched below. Additionally, I understand the basic idea of why the tutorial recommends writing attributes and attribute values in lower case (ibid., under "HTML Attributes"), but I feel like there is more to it than that. If I understood the logistics better, I would probably know why lower case is preferred over upper case – such knowledge would hopefully clear things up a little. If both can theoretically work, then maybe allowing both could expand the list of attributes to encompass new kinds, or help organize the ones in existence.
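The distinction clicked a little once I wrote it out; under my reading of the Tutorial's definition:

    <p>Hello, world.</p>
    <!-- tags:    <p> (the start tag) and </p> (the end tag)      -->
    <!-- element: everything from <p> through </p>, tags included -->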

 
“HTML Cheatsheet Guide.” (2008). Webmonkey.com. Retrieved from http://www.wired.com/images/multimedia/webmonkeycheatsheet_full.pdf

The "HTML Cheatsheet Guide" (2008) seems like it would be suitable for quick reference. It provides tags and descriptions for much of the most basic HTML a person might need, such as creating an HTML document with <html></html> ("HTML Cheatsheet Guide," 2008, under "Basic Tags") or forming new paragraphs with <p></p> (ibid., under "Formatting"). I've never had to use HTML like this before, so I cannot say for sure whether the guide covers all of the basic tags or elements a user would need. From what I can tell, though, it accomplishes its goals.
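Combining the kinds of entries it lists under "Basic Tags" and "Formatting," a complete, if bare, document might look like this (the title and text are my own placeholders):

    <html>
      <head>
        <title>My First Page</title>
      </head>
      <body>
        <p>First paragraph.</p>
        <p>Second paragraph, made with the same formatting tag.</p>
      </body>
    </html>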
If I ever had to create an HTML document by hand, I would like to experiment with the tags offered. I would need to get the basics down first, but the later sections – "Forms," "Graphical Elements," and "Links" – would prove a good basic challenge. Most interesting to try would be the tags for creating Submit buttons ("HTML Cheatsheet Guide," 2008, under "Forms") and for adding images and their descriptions (ibid., under "Graphical Elements"). The former seems a little more complicated than the other, more straightforward tags provided, and the latter would be fun to learn, especially since having an option for adding pictures to a document would be helpful in the future. A sketch of both follows.
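A sketch of the two, assuming standard form and image tags (the action URL and file name are placeholders of mine):

    <!-- a Submit button inside a form -->
    <form action="/guestbook" method="post">
      <input type="text" name="entry">
      <input type="submit" value="Sign">
    </form>

    <!-- an image plus its description -->
    <img src="photo.jpg" alt="A short description of the picture">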

 
Pratter, F. E. (2011). “Introduction to HTML,” Chapter 2 of Web Development with SAS by Example. Retrieved from http://books.google.com/books?id=GQxv8xaIPFYC&printsec=frontcover&dq=inauthor:%22Frederick+E.+Pratter%22&hl=en&sa=X&ei=Mr9eUtnXEdSp4APItICACQ&ved=0CDoQ6AEwAg#v=onepage&q&f=false

Reading this chapter helped answer some of the questions I posed about the "HTML Tutorial." For example, concerning lower versus upper case, F. E. Pratter (2011) notes that HTML 4.0 tags aren't case-sensitive but that the standard calls for lower case (20). While this does not answer why lower case is preferred, it does fill in the background of my question. However, the chapter disagrees somewhat with the other readings. Apparently professional Web developers prefer to write HTML from scratch using text editors such as Notepad or KEDIT (Pratter, 2011, 16) – which somewhat complicates the "HTML Tutorial's" recommendation that a basic editor would help beginners learn HTML ("HTML Tutorial," n.d., under "HTML Editors"). Based on the context, each might be partly right; some text editors are basic enough for a novice to learn from, while more advanced versions exist for professionals. It seems odd that Pratter does not categorize the available editors by expertise, or by some other evaluation; in this regard, the "HTML Tutorial" at least implies such a difference even though it does not state it explicitly. This might have to do with the audiences they each target; both introduce users to HTML, but whereas the "HTML Tutorial" reads more like shorthand notes – listing differences and steps – Pratter's chapter focuses on explanations.

 
Goans, D., G. Leach, and T. M. Vogel. (2006). “Beyond HTML: Developing and re-imagining library web guides in a content management system.” Library Hi Tech 24(1): 29-53. DOI:10.1108/0737883061065209.

The article overall provides a good analysis of web guides. In particular, D. Goans and his co-authors' (2006) look at what counts as "content" in a CMS was interesting. They note that content spans a broad spectrum of forms depending on the organization, usually including resource links, webpages, image files, PDFs, PowerPoint presentations, and Word documents (Goans, Leach, and Vogel, 2006, 31-2). As such, there is no single standard for what "content" is; it can include a wide variety of types of information as long as it is part of the CMS. Additionally, the content itself "is disconnected from the layout and design elements of the page" (ibid., 31). That makes sense; the content is the information itself and relatively permanent, while the layout and design elements can change. However, this can't be entirely true. While the two can be separated, content and layout/design elements depend on each other to transfer knowledge. Layout and design also determine the appearance of the content, influencing how a user interprets the information, so the two remain interconnected.
Reading the article also offers an inside look into the development and consideration of web guides, which was informative. For example, the explanation of how they decided among solutions to their problem – commercial software such as Dreamweaver (Goans, Leach, and Vogel, 2006, 33), open source web site systems or "Frankensteining" products together (ibid., 34), or an in-house web development project (ibid., 34) – not only supplied information on what options are out there for institutions in similar situations, but also gave an idea of how library management works, evaluating and determining options as a group.

Thursday, October 10, 2013

Sorry about the multiple postings. My internet connection is a little off today, so when I was making my posts, I wasn't sure if my Muddiest Point came through. So my answer to the problem? Push update repeatedly, until I got the oh-so-bright idea to actually check my blog to see if anything came through. Did that - ergo the multiple postings. I tried to fix it up, but I wasn't able to erase any posts except their content. So ignore these extra posts, just focus on the Week 7 readings and the true Muddiest Point.

October 7's Muddiest Point

I still don't understand the difference between IPv4 and IPv6. Is it only the number of bits each has, or something more? Why is IPv4 more popular than IPv6?

Week 7 Readings


Articles

Tyson, J. (n.d.). “How Internet Infrastructure Works.” Retrieved October 8, 2013, from http://computer.howstuffworks.com/internet/basics/internet-infrastructure.htm

Overall, the article provides a good recap of this past Monday's topics. It covers basic information such as how routers connect devices to the Internet (Tyson, n.d., p. 4), the use of IP addresses (ibid., p. 6) and the Domain Name System (DNS) (ibid., p. 7), and the Uniform Resource Locator (URL) and its connection to the domain name (ibid., p. 8). J. Tyson (n.d.) offers other details as well, such as how the octets of an IP address do more than separate numbers: they split into two sections, Net, which contains the first octet, and Host, or Node, which contains the last octet (ibid., p. 6) – so in an address like 216.27.61.137, for example, the 216 falls in the Net section and the 137 in the Host section. As such, the article was a good review, reinforcing this newly learned information.
Tyson (n.d.) brings up a crucial thought near the beginning of the article when describing the Internet. Specifically, he notes that no one owns the Internet itself (Tyson, n.d., p. 1) – a state of affairs I am personally happy about, since an owner could limit the content of the Internet, hold more global power than anyone should have, and/or charge a fee for its use. Tyson, however, adds that the Internet is monitored and maintained; for example, the Internet Society "oversees the formation of the policies and protocols that define how [users] use and interact with the Internet" (ibid., p. 1). This raises some questions. Who authorizes such monitoring? Does the Internet Society abide by a neutral, unbiased stance in its duties?
  

Pace, A.K. (2004, February 1). “Dismantling Integrated Library Systems.” Library Journal, 129(2), 34-36. Retrieved from http://lj.libraryjournal.com/2004/02/ljarchives/dismantling-integrated-library-systems/

A. K. Pace (2004) firmly situates the article in the current status of integrated library systems (ILSs). Based on his descriptions, the situation seems to be a seesaw of construction and destruction; Pace notes that even as librarians dismantle systems, they develop new modules (under "Librarians and their vendors," para. 2). However, since "the interoperability in library automation is more myth than reality" (ibid., para. 3), a balance cannot be attained at this time. The field is too chaotic for librarians to settle on any one system or module. In this regard, then, steadiness matters more than fueling new creations that will quickly become obsolete.
I do not entirely agree, though, with Innovative Interfaces' argument. Pace reports their position that if the core functionality of the established ILS has not changed, it is better to retain the business logic and workflow standards supporting it than to replace them with new ones (Pace, 2004, under "More alike than not," para. 3). If such a model has been successful so far, it would make sense not to change too much. Such a stance, however, implies a stubborn refusal to accept that the world and its technologies are constantly changing. Keeping the basic ideas is acceptable, but there should be some flexibility so that the functionality of the ILS can adapt more easily to arising issues.


Brin, S. and Page, L. (n.d.). "Sergey Brin and Larry Page: Inside the Google Machine." TED Talks video, 20:36. Accessed October 8, 2013. http://www.tv.com/web/ted-talks/watch/sergey-brin-and-larry-page-inside-the-google-machine-1545457/

S. Brin (n.d.) opens the talk with a look at how Google affects the world, and his methods of presenting it were impactful. Showing the world and the travel of queries in real time (Brin and Page, n.d., 0:40-3:58) makes their work tangible, something that can be seen rather than imagined. There is almost an illusion that their job has a physical presence in the world, that they manipulate and produce physical things rather than digital ones. Additionally, the use of lights and colors to represent the flow of queries plays off of human psychology. In Western thought at least, light – particularly white – represents goodness, purity, and truth. Combined with images of parts of the world black, or empty of light, it reinforces the assumption that Google provides information that acts as a beacon in a world dark with ignorance.
L. Page (n.d.) continues the talk by summarizing the small projects Google has invested in for developing web tools and describing how staff work within the company. One remark in particular caught my attention: he acknowledges that a person has to be smart about how they search with a search engine, and that the ideal search engine would have artificial intelligence (Brin and Page, n.d., 16:35-17:02). He doesn't elaborate, however, on what kind of "smart" is necessary. In fact, I think it takes more than intelligence to search the web successfully. Anyone who has never spent much time searching for anything will have trouble no matter how intelligent they are. Additionally, since everyone uses the Internet, there are different standards and methods of organizing information, and terminology varies with each field. Being smart helps in figuring out the patterns and routes to take, but other factors – experience and good judgment, for example – should also be taken into account.

Thursday, October 3, 2013

September 30's Muddiest Point


The Dublin Core has two types – Simple and “Qualified” (nowadays “Refinement”). Why was “Qualified” changed to “Refinement” Dublin Core? Does the “Qualified” type provide any more features besides extensibility?
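(If I sketch my current understanding in HTML <meta> tags – a hedged illustration, since the exact naming conventions vary by guideline – Simple Dublin Core uses the fifteen bare elements, while Qualified/"Refinement" Dublin Core narrows an element with a refinement and can name an encoding scheme:)

    <!-- Simple Dublin Core: the bare element -->
    <meta name="DC.date" content="2013-09-30">

    <!-- Qualified ("Refinement") Dublin Core: a refinement plus an encoding scheme -->
    <meta name="DC.date.created" scheme="DCTERMS.W3CDTF" content="2013-09-30">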

Week 6 Readings


Articles

“Local area network.” (2013, September 30). Retrieved September 30, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Local_Area_Network

I was able to understand the article "Local area network" (2013) overall; I was familiar with the concept before reading the Wikipedia entry, so the more in-depth inspection of the local area network (LAN) was interesting. The description of what makes up a LAN provided some remarkable details that I didn't know about. For me, a LAN was just a network. Learning what actually goes into it – switches, firewalls, sensors, load balancers, and routers ("Local area network," 2013, under "Technical Aspects," para. 2) – shows how complex the operation really is.
The driving force behind networking reveals how much digitization has meant to, and improved, modern life. According to the article, the desire to share storage and printers – both of which were costlier back in the 1970s – initially encouraged people to network their computers (ibid., under "Standards evolution," para. 1). On a basic level, this motivation exemplifies the human tendency to avoid unnecessary costs and to find ways to ameliorate conditions. That people turned to one another exhibits the tendency to seek others for aid and to communicate. This fits the current uses of computers and the Internet, so I suppose the LAN is one of the "founding" technologies behind current attitudes toward technology.

 
"Computer network.” (2013, October 1). Retrieved October 1, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Computer_network

The article "Computer network" (2013) was a good piece to read after the Wikipedia article "Local area network" (2013). It touched on some of the same ideas but went into further detail – for example, it elaborated more on the different types of wired technologies, such as twisted pair, coaxial cable, and optical fiber ("Computer network," 2013, under "Wired technologies"), as well as the differences between a LAN and other networks such as a PAN or WAN, based on their scopes, areas of influence, capabilities, and content (ibid., under "Geographic scale"). As such, the article provided good parallels for comparison with the other Wikipedia article.
One part – particularly the terminology – has got me thinking. The article relates how "exotic technologies" exist, wherein people use unorthodox methods of sending information. The examples provided include IP over Avian Carriers (IPoAC), a joke proposal for carrying IP traffic by homing pigeons, and extending the Internet to interplanetary dimensions (ibid., under "Exotic technologies"). In this context, "exotic" – as compared with wired or wireless technologies – means unusual technologies, based more on jokes than serious ideas. But this may change. The pigeon idea is ludicrous, of course. If, in the future, though, more complex technologies are sent into space, or astronauts are able to travel farther, there will be a greater need to ensure they can keep in contact with people on Earth or with others throughout space. Setting up Internet systems elsewhere may become necessary. Although this might be impossible, it is interesting to consider theoretically. Could the systems work on their own? Could they work with other Internets, and if so, how? Would they be the same as, or modeled on, the Internet we have now, or would they work in different ways? How quickly could the technology share information between planets?

 
Coyle, K. (2005, September). “Management of RFID in Libraries.” The Journal of Academic Librarianship, 31(5): 486-9. Retrieved from http://www.sciencedirect.com/science/article/pii/S0099133305000856

Costs overshadow the article as a whole. Although the article is aimed at librarians and other information professionals, K. Coyle (2005) keeps circling back to the commercial aspects of using RFID in libraries: the varieties of RFID technologies based on such factors as radio frequency and "of course its cost" (Coyle, 2005, under introduction, para. 3); RFID's developing payment systems and "micro-payment" plans (ibid., under "Should Libraries use RFID?," para. 3); and RFID functions being better cost-suited to libraries' "renting" model than to retail, where tagging items that continually come in and go out stacks up more costs (ibid., under "RFID and Library Functions," para. 1). Such a preoccupation is understandable; RFID is used mainly in the retail sector, so cost is a major factor in adopting it. It also touches on a major concern of libraries; these institutions have had funding problems for years, and knowing more about the products would help librarians find cost-effective solutions to current and new problems.
Some of the problems posed, though, seem more like common-sense issues than actual barriers. For example, Coyle notes that less sturdy items may not have enough space for the two-inch-square tag and may require a different checkout system altogether (ibid., under "Some Problems Remain," para. 2), and that oddly shaped and metal-accessorized items pose similar problems (ibid., para. 3). I'm sure that if I knew more about the issue, I would not be arguing what I'm about to suggest. But based on the available knowledge, I think the problems could be bypassed. If the issue is the structure of the items in question, why not change it? Perhaps store the items in plastic slips or small "boxes" that provide space for the tags. While this would add costs, it could prevent future problems once installed and be less expensive than maintaining two systems. Otherwise, this could just be a contemporary problem. The technology itself is advancing; in a few years, there could be smaller, lighter, more efficient tags for such items, or ways to combine different RFID types so that they all operate on the same system. It all depends on whether libraries can wait for that to appear or whether the problem is immediate.