Articles
Galloway, Edward A. (2004, May 3). Imaging Pittsburgh: Creating a shared gateway to digital image collections of the Pittsburgh region. First Monday, 9(5). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/1141/1061
When reading the article, I was struck by the benefits of digitization that Edward A. Galloway voiced. Many of the advantages appear to be for users: the website that provides direct access to the collections had “greatly increased public access to significant collections of historic material documenting the growth and development of Pittsburgh and the surrounding western Pennsylvania region during the nineteenth and early twentieth centuries” (Galloway, 2004, Project Summary, para. 2), and users could gain a deeper understanding of overall events, localities, infrastructure, land use, and populations (Galloway, 2004, The Online collection, para. 2). As such, the focus is on the users: they learn more and gain more information through the projects. While such benefits are explicit, that does not mean the content partners of the project gain no advantages of their own. Although Galloway notes only one benefit, income and financial funding (Galloway, 2004, Characteristics, para. 2), they stand to gain more than that. By attracting more people to the site, they would gain potential visitors for their main sites, the attention of other institutions interested in collaborating on projects, and expertise in digitization, communication, and partnerships. Such results pay off in the long term.
Webb, Paula L. (2007, June). YouTube and libraries: It could be a beautiful relationship. College & Research Libraries News, 68(6), 354-355. Retrieved from http://crln.acrl.org/content/68/6/354.full.pdf
Overall, the article presents an interesting point of view. Usually when I access YouTube, I see it only from a consumer's point of view: I try to find clips and videos to watch for fun rather than for business pursuits. Paula L. Webb, though, analyzes it from a career point of view. She addresses librarians as her core audience and attempts to transform YouTube into a tool for libraries to use, describing how to sign up (Webb, 2007, p. 354), the advantages (Webb, 2007, pp. 354-355), and suggestions for how librarians can use the medium at hand (Webb, 2007, p. 355). Such a stance implies confidence: those involved in the library sciences should not fear or dismiss the internet but embrace it, which I think works better than remaining afraid of change.
One claim, though, pushed me to investigate. When describing the advantages of using YouTube, Webb (2007) notes that some of the regulations include a maximum file size of 100 MB and at most 10 minutes' worth of footage per video (p. 354). At first I (having never operated a YouTube account before) thought she meant that the site imposed these limits because of its own technical constraints, that it simply could not store files any larger or any longer than 10 minutes. But this could not be true; in undergrad, I had taken a couple of film courses that required watching movies outside of class. In some cases, I could find whole movies on YouTube, such as His Girl Friday, around 1 hour and 31 minutes in a single continuous viewing, and Hedwig and the Angry Inch, over 1 hour and 31 minutes as well, but with a fee. I realized then that the regulations had more to do with copyright infringement than technological limitations (the movies listed would pass: His Girl Friday was first produced in 1940, and the latter has fees involved). It was enlightening, and it makes me wonder what sorts of tension must exist between digitization, technological advances, and commercial reality.
Data compression. (2013, September 9). Retrieved September 10, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Data_compression
The article (at the time of my first viewing) was very informative, though heavy at times, in explaining data compression. I got confused at some points in its explanation, such as the descriptions of the underlying theories, including machine learning and data differencing (Data compression, 2013, Theory). After Monday's course, though, I think I have a better understanding.

Looking over the text again, I am drawn to the examination of lossy data compression. The article not only describes it but links it to devices. In particular, it claims that digital cameras can use lossy image compression “to increase storage capacity with minimal degradation of picture quality” (Data compression, 2013, para. 2). This reminded me of the first assignment. The images we will be working on will deteriorate when we use digital cameras (or, presumably, scanners). That is understandable; we are taking pictures of objects, so the images are not exact replicas of their subjects. By how much, though, will the images deteriorate? What is “minimal degradation”? Will we notice?
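Out of curiosity, I tried to see how one might measure this. Below is a minimal sketch using the Pillow imaging library (my own experiment, not something the article suggests; "scan.jpg" is a placeholder filename and the quality levels are arbitrary). It re-saves an image at several JPEG quality settings and reports each version's file size and average pixel error against the original:

    from io import BytesIO
    from PIL import Image, ImageChops, ImageStat

    def jpeg_degradation(path, qualities=(95, 75, 50, 25)):
        # Re-save the image as JPEG at each quality level, then measure
        # the mean absolute per-channel difference against the original.
        original = Image.open(path).convert("RGB")
        for quality in qualities:
            buffer = BytesIO()
            original.save(buffer, format="JPEG", quality=quality)
            compressed = Image.open(buffer).convert("RGB")
            diff = ImageChops.difference(original, compressed)
            mean_error = sum(ImageStat.Stat(diff).mean) / 3  # average of R, G, B
            print(f"quality={quality}: {len(buffer.getvalue())} bytes, "
                  f"mean pixel error = {mean_error:.2f}")

    jpeg_degradation("scan.jpg")

Running something like this on a test scan would at least put a number on “minimal degradation,” even if it cannot answer whether our eyes will notice.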
del-Negro, Rui. (2013). Data compression basics. DVD-HQ. Retrieved from http://dvd-hq.info/data_compression_1.php
Rui del-Negro (2013) writes in a clear fashion with lots of examples, so I was able to follow along most of the time. The “Note…” sections were especially interesting. They provided details I would not have thought of, such as how the name Huffman coding has come to encompass other prefix-free entropy codings beyond its original form (del-Negro, 2013, Entropy coding, para. 9). Del-Negro's note on the reference to RLE, the “squeezing algorithm,” in Monstrous Regiment by Terry Pratchett (del-Negro, 2013, Run-length encoding, para. 31) caught my attention. I am a fan of his series, so when I reread the book, I will definitely keep the encoding in mind.
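To fix the idea in my mind, here is a toy sketch of run-length encoding in Python (my own illustration, not del-Negro's code), collapsing runs of repeated values into value-count pairs:

    def rle_encode(data):
        # Collapse runs of repeated values into [value, count] pairs.
        pairs = []
        for value in data:
            if pairs and pairs[-1][0] == value:
                pairs[-1][1] += 1
            else:
                pairs.append([value, 1])
        return pairs

    def rle_decode(pairs):
        # Expand [value, count] pairs back into the original sequence.
        return [value for value, count in pairs for _ in range(count)]

    # A run-heavy sequence shrinks: ten symbols become four pairs.
    print(rle_encode("AAAABBBCCD"))  # [['A', 4], ['B', 3], ['C', 2], ['D', 1]]
    print("".join(rle_decode(rle_encode("AAAABBBCCD"))))  # AAAABBBCCD

It only pays off on run-heavy data, of course; on highly varied data the pairs take more space than the original.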
One part, however, confused me. When describing the prediction algorithm, del-Negro relates what it actually is in a roundabout way (at least for me). I understand that the algorithm studies only two values at a time while assuming linear variation (del-Negro, 2013, Prediction, para. 9), but the specifics are unclear. From what I can read, the goal is efficient compression of an image file (del-Negro, 2013, Prediction, para. 3). The procedure involves storing errors, that is, the result of subtracting predictions from the real values (whatever those are) of the pixels that come after two known pixels (del-Negro, 2013, Prediction, para. 4). The resulting error values might fit in fewer bits, and, if they fall within a small range, there will be repeated values and therefore repeated sequences, which would allow a person to apply other compression techniques to the list of errors (del-Negro, 2013, Prediction, para. 7). Since the description is spread out over several paragraphs, I do not know if this is the right impression.
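To check that impression, I tried sketching the procedure in Python. This is only my reading of del-Negro's description, not his code: I assume the predictor extrapolates linearly from the two preceding pixel values, and the sample row of pixel values is made up.

    def predict_errors(pixels):
        # Keep the first two pixels as-is; for every later pixel, predict
        # it by extending the line through the two before it (the linear
        # variation assumption) and store only the error: actual - predicted.
        errors = list(pixels[:2])
        for i in range(2, len(pixels)):
            predicted = 2 * pixels[i - 1] - pixels[i - 2]
            errors.append(pixels[i] - predicted)
        return errors

    def reconstruct(errors):
        # Invert the transform: rebuild each pixel from its stored error.
        pixels = list(errors[:2])
        for i in range(2, len(errors)):
            predicted = 2 * pixels[i - 1] - pixels[i - 2]
            pixels.append(errors[i] + predicted)
        return pixels

    # A smoothly varying row produces mostly zero errors.
    row = [10, 12, 14, 16, 19, 22, 25]
    print(predict_errors(row))                      # [10, 12, 0, 0, 1, 0, 0]
    print(reconstruct(predict_errors(row)) == row)  # True

If this reading is right, a smooth gradient turns into a list dominated by zeros and other small values, exactly the kind of small-range, repetitive data that RLE or entropy coding could then compress further, which would match del-Negro's point about applying other techniques to the errors list.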