Thursday, November 20, 2008

Muddiest Point

I do really love the idea that libraries are integrating loads of 2.0 technology into their operations, but I can't shake the feeling that blogs & wikis still seem kind of...unprofessional. Could that change? Will that change? I don't know. Blogs to me still feel sort of LiveJournal-y, and thus kind of emo and college-y. A.k.a. unprofessional.

Unit 12 readings

Wikis are inarguably quite useful. They're collaborative and interactive and really, really easy to use. I think they're a little annoying, but ultimately good to know and use. I see no reason why wikis and libraries can't coexist & cohabitate, with really successful results.

I love social tagging, and I use folksonomies like crazy. My flickr relies intrinsically on tags, as does my del.icio.us. Incorporating them into library catalogs scares me at times, but Penn has had such dazzling success that I don't see why it wouldn't work everywhere. And probably with really great results.
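
Since I keep gushing about tags, here is a toy sketch (invented record IDs and invented tags, not anyone's real system) of what a folksonomy actually is under the hood: user-assigned labels are just a many-to-many mapping between catalog records and free-form words, and inverting that mapping is what lets you browse by tag. Mechanically, that's roughly what layering tagging onto an OPAC amounts to.

from collections import defaultdict

# tags users have attached to (hypothetical) catalog record IDs
tags_by_record = {
    "rec001": {"zines", "diy", "punk"},
    "rec002": {"cataloging", "metadata"},
    "rec003": {"diy", "gardening"},
}

# invert it: for each tag, which records carry it
records_by_tag = defaultdict(set)
for record, tags in tags_by_record.items():
    for tag in tags:
        records_by_tag[tag].add(record)

print(sorted(records_by_tag["diy"]))   # -> ['rec001', 'rec003']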

Friday, November 14, 2008

Comments

Commented on:

Kerri's blog and Maggie's blog

Muddiest Point

For all of the plain, visible quandaries with digital libraries, they still exist in astonishing numbers. Do we owe this to the laziness of users, or is it that librarians and computer scientists truly are bedfellows? I think it's the former.

Unit 11 readings

Mischo writes, "The goal of seamless federation across distributed, heterogeneous resources remains the holy grail of digital library work." How could we even achieve this? Not all authors are going to agree to this equal distribution. They want money, right? Not. Gonna. Happen. Also, this point really intrigued me, as it's definitely something I've noticed working reference at Hillman: "It is interesting that Google Scholar is being held up as the competition for both campus institutional repository systems (at least in terms of search and discovery) and academic library federated searching." This is tangential to the Digital Library issue, but I think catalogers will have to totally revamp catalogs to better reflect and serve the kind of searching that both students and the public will likely be doing as a result of using & loving Google.
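
To make Mischo's "seamless federation" a little more concrete, here is a toy sketch of what federated search means mechanically, with entirely hypothetical sources and made-up records: fan the same query out to several heterogeneous sources, normalize each source's records into one common shape, and merge. Everything hard that Mischo is pointing at (licensing, mismatched metadata, ranking across sources) is exactly what this glosses over.

from concurrent.futures import ThreadPoolExecutor

def search_catalog(query):
    # stand-in for the library OPAC; returns records in its own field names
    return [{"ti": "Information Architecture", "src": "catalog"}]

def search_repository(query):
    # stand-in for an institutional repository (e.g. harvested via OAI-PMH)
    return [{"title": "Preprint on " + query, "source": "repository"}]

def normalize(record):
    # map each source's fields onto one common shape
    return {
        "title": record.get("ti") or record.get("title"),
        "source": record.get("src") or record.get("source"),
    }

def federated_search(query):
    sources = [search_catalog, search_repository]
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(lambda s: s(query), sources)
    # real systems also have to de-duplicate and rank across sources,
    # which is where the "holy grail" part comes in
    return [normalize(r) for results in result_sets for r in results]

print(federated_search("digital libraries"))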

The "Dewey Meets Turing" article brings up some really good points. The authors wrote, "The disruption to the library community was greatly exacerbated by many journal publishers' business decision to charge at a premium for digital content. This decision has been forcing academic libraries to cancel subscriptions, undermining their role as conduits to scholarly work," a point which so greatly frustrates me. Journal flipping, at the rate which we're going, makes me immeasurably nervous, and similarly, I think it does a great disservice to public patrons who will be unable to access things they otherwise could have accessed.

Maybe I don't yet trust digitization and digital libraries enough. Who knows.

Friday, November 7, 2008

Unit 10 readings

David Hawking's first article made my little brain explode (this is a recurring theme in my reading notes!! brain explosion!!). I really like learning about web crawlers. I also secretly like learning about how Google and other search engines index search results. Hawking writes that "It is not uncommon to find that a crawler has locked up, ground to a halt, crashed, burned up an entire network traffic budget, or unintentionally inflicted a denial-of-service attack on a Web server whose operator is now very irate." This is kind of awesome. WHOA. Crawlers are totally awesome, and Hawking's writing has only served to reinforce my belief in this.
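
Because I apparently can't stop thinking about misbehaving crawlers, here is a hedged toy sketch of the safeguards that keep a crawler from doing the things Hawking lists: check robots.txt, space out requests to the same host, and cap the total crawl. Skip any of those and you get the runaway-bandwidth, accidental denial-of-service scenario he describes. The URLs, delays, and limits here are made up for illustration.

import time
import urllib.parse
import urllib.request
import urllib.robotparser

def polite_fetch(urls, delay_seconds=2, max_pages=10):
    robots = {}                                      # cache of robots.txt rules per host
    fetched = []
    for url in urls[:max_pages]:                     # hard cap = crawl budget
        parts = urllib.parse.urlsplit(url)
        host = parts.scheme + "://" + parts.netloc
        if host not in robots:
            parser = urllib.robotparser.RobotFileParser(host + "/robots.txt")
            try:
                parser.read()
            except OSError:
                continue                             # couldn't reach the host at all
            robots[host] = parser
        if not robots[host].can_fetch("*", url):
            continue                                 # robots.txt says stay out
        try:
            with urllib.request.urlopen(url) as response:
                fetched.append((url, response.read()))
        except OSError:
            continue
        time.sleep(delay_seconds)                    # politeness delay between requests
    return fetched

# e.g. polite_fetch(["https://example.com/"]) fetches at most ten pages, slowly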

In his second article, Hawking writes that "The major problem with the simple-query processor is that it returns poor results. In response to the query "the Onion" (seeking the satirical newspaper site), pages about soup and gardening would almost certainly swamp the desired result." How can we avoid this, especially in library catalogs? It seems to me we somehow need to create even smarter search engines (if that's possible?). This also reminds me of the time I wanted to find information about a band called Condominium, and all I knew about them was that they were from Minneapolis, so I (foolishly) googled "Condominium Minneapolis" and was like, "Uh, I'm not interested in Minneapolis real estate...now what?"
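
Here is a hedged toy sketch of the "the Onion" problem, with invented documents, anchor text, and link counts: a simple query processor that only counts matching words in the page text lets the soup and gardening pages win, because "the" shows up everywhere, while adding evidence from outside the page itself (query terms in the anchor text of incoming links, weighted by how many pages link in) pulls the satire site to the top. Real engines use far more signals than this, but the flavor is the same.

from collections import Counter

pages = {
    "soup-recipe":  {"text": "the best french onion soup the broth the onions",
                     "anchors": ["onion soup recipe"], "inlinks": 12},
    "garden-guide": {"text": "planting the onion sets in the spring the garden",
                     "anchors": ["growing onions"], "inlinks": 8},
    "theonion.com": {"text": "america's finest news source",
                     "anchors": ["the onion", "the onion", "satirical newspaper the onion"],
                     "inlinks": 50000},
}

def naive_score(query, page):
    # simple query processor: count query-term occurrences in the page text only
    words = Counter(page["text"].split())
    return sum(words[t] for t in query.split())

def better_score(query, page):
    # same term count, plus credit for query terms appearing in the anchor text
    # of incoming links, scaled (roughly) by how many pages link here
    anchor_hits = sum(anchor.count(t)
                      for t in query.split()
                      for anchor in page["anchors"])
    return naive_score(query, page) + anchor_hits * (1 + page["inlinks"] ** 0.5)

query = "the onion"
for scorer in (naive_score, better_score):
    ranked = sorted(pages, key=lambda p: scorer(query, pages[p]), reverse=True)
    # naive_score puts the soup and gardening pages first; better_score puts theonion.com first
    print(scorer.__name__, "->", ranked)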

WHY DOESN'T HAWKING TALK ABOUT SEARCH-RELATED ADS??? I hate those. They creep me out. Why do they do that?!

The Deep Web! Bergman's article is some scary stuff!!!!
Bergman writes, "Internet searchers are therefore searching only 0.03% — or one in 3,000 — of the pages available to them today." I seriously never considered that this might even be so. Because I am a dummy and foolishly trust & love Google, I assumed it was able to search and find everything. Foiled!

Muddiest Point

In David Hawking's article "Web Search Engines: Part 1," he writes, "Search engines cannot and should not index every page on the Web." My question: why shouldn't they index every page? I mean, why not? Is this a question of ethics or merely a question of time?