On 30/08/2011 20:04, Joseph Montibello wrote:
<snip>Yes, thanks for clarifying that for me. Patrons should be able to work as they wish with the search results, but the results themselves, at least a part of them, should be standardized in some way to permit guaranteed access, i.e. a search that worked yesterday should work today and tomorrow as well.
Jim W. wrote:
Google does not allow any kind of "guaranteed" or "standardized" access--just the opposite. The results vary for you and me, and even for each of us depending on where we are searching from, and the algorithm is tweaked almost twice a day. I think the public could understand the argument for a more standardized means of access.

I think personalized is better, from the perspective of most patrons. If you're doing research in medicine, you probably want to privilege recent material over older material. However, this doesn't mean the metadata needs to be personalized. The underlying data needs to be standardized, but that doesn't mean the presentation of the data (including search result ranking) should be one-size-fits-all.
Why does Google tweak their algorithm constantly? Lots of reasons, I'm sure, and not all of them would be comforting to us. But I do think that they've shown an ability to produce useful results. So I'd argue against aiming at standardized access for all patrons. Returning personalized results sends a message to the patron - "we're trying to help you." In many cases, our standardized results tell the patrons "We think we have the answers, and one of those answers is that there's a whole skillset that you need to learn before you can do what you thought you wanted to do."
<snip>It takes a lot of resources and control over your own systems. A single, rich corporation like Google can do it, but for a diverse, loosely-organized group such as librarians, it would be much more difficult. Related to your previous comment, I think it's important to show our patrons that we are *trying* to improve matters *for them*, and that means there will be experiments, of which some might fail. Although failure is not such a great thing, I think the general populace understands that nothing is perfect and everything can be improved. That's how Google etc. work, and perhaps that is the lesson we should take: gradual, tiny improvements.
The best part of the video was its emphasis on big-time systematic testing and evidence-based decision making. One guy mentioned that for every time a certain feature didn't work, they wanted to be sure it worked 50 times. I suspect there's no sound reason for that exact ratio; it's just a practical line in the sand they can shoot for.
How can we get that kind of production testing?
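Even without Google-scale infrastructure, the basic evidence-based comparison they describe is within reach. A minimal sketch (the counts and the A/B split of search-result rankings here are entirely hypothetical, invented for illustration) of comparing two result rankings by click-through rate with a standard two-proportion z-test, using only Python's standard library:

```python
import math

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Two-proportion z-test: is variant B's success rate
    significantly different from variant A's?"""
    p_a = success_a / total_a
    p_b = success_b / total_b
    # Pooled success rate under the null hypothesis (no difference)
    p_pool = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical click counts: 1000 searches served each ranking,
# counting how many led to a clicked result.
z, p = two_proportion_z(success_a=480, total_a=1000,
                        success_b=540, total_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With made-up numbers like these, p comes out well under 0.05, so the new ranking would look like a real improvement rather than noise. The point isn't the statistics so much as the habit: instrument the catalog, split the traffic, and let patron behavior, not intuition, decide which experiments to keep.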