RE: The next generation of discovery tools (new LJ article)

Posting to NGC4LIB

Jonathan Rochkind wrote: (concerning relevance ranking being a “crapshoot”)

<snip>
Well, it depends on what you mean. That’s a dangerous statement, because it sounds like the kind some hardcore old school librarians use to say we shouldn’t do relevance ranking at all, I mean why provide a feature that’s “a crapshoot”, just sort by year instead. I don’t think it’s true that relevancy ranking provides no value, which is what “a crapshoot” implies.

Instead, relevance ranking, in actual testing (in general, not sure about library domain specifically), does _very well_ at putting the documents most users most of the time will find most valuable first. It does very well at providing an _order_. Thus the name “ranking”.
</snip>
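The “ranking” Rochkind describes can be illustrated with a minimal sketch: a toy TF-IDF scorer (all documents and queries invented for illustration) that assigns every document a score against the query and sorts best-first, rather than filtering anything out. Real search engines use far more elaborate signals, but the shape is the same: an order, not a yes/no.

```python
import math
from collections import Counter

def rank(query, docs):
    """Toy TF-IDF ranker: score each document against the query terms
    and return the documents ordered best-first."""
    terms = query.lower().split()
    n = len(docs)
    # document frequency of each query term across the collection
    df = {t: sum(1 for d in docs if t in d.lower().split()) for t in terms}

    def score(doc):
        tf = Counter(doc.lower().split())
        # rarer terms (lower df) contribute more to the score
        return sum(tf[t] * math.log((n + 1) / (df[t] + 1)) for t in terms)

    return sorted(docs, key=score, reverse=True)

docs = [
    "naval warfare in the great war",
    "air warfare over the western front in WWI",
    "gardening for beginners",
]
print(rank("air warfare wwi", docs)[0])
# → air warfare over the western front in WWI
```

Every document gets *some* score, which is why a result list of this kind always trails off into marginal matches rather than stopping cleanly.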

In these economically very troubled times, I don’t think there is much chance that we won’t do relevance ranking at all. My concern is quite the opposite: administrators are indeed *desperate* to save money wherever they can, and today computerization is seen as a way to save money because people are “so expensive” (a curious idea, by the way). If there is a danger, it is much more that the practice of cataloging will be tossed overboard, not the computerized relevance ranking.

Perhaps cataloging really will be thrown overboard (I don’t know), and the millennia-old practice will be done automatically or semi-automatically, by students and secretaries with only a few minutes or hours of training, following few standards, if any at all. I am sure that if there is a danger, it is to cataloging and not to any kind of computerized ranking. In any case, it should not be abandoned without a full understanding of what we would be losing.

Let’s discuss practice and consider whether Google-type relevance ranking really does do “very well at putting the documents most users want” first. The only way to determine whether this is true is to compare it with some kind of alternative. Do we have one? How about the library catalog?

Let’s take as an example that I want to do some kind of research (not for publication, just an undergraduate paper) on air warfare in WWI. Doing this search in Google retrieves 65,400 results http://tinyurl.com/5tbouru (at least on my machine), and the first hits are Wikipedia, something from Firstworldwar.com, Britannica, answers.com, life123.com, pages about games, and so on. In the “Wonder Wheel” I see synonyms for “air warfare wwi”, except for naval warfare and surface warfare. Is this relevant to my search? (As an aside, Google’s menus letting people re-sort the results in several ways imply an admission that the single relevance ranking is not enough.)

To me, this is similar to my own experiences with very poor reference librarians: you ask them for information on a topic, they run off into the stacks and come back with a book, often an encyclopedia, open to a chapter or article more or less on your topic. Then they leave you and return to their other work.

To decide whether the Google result is relevant, we need to compare it with the correct, expert search in a library catalog (one that, I admit, no regular person would ever do): the subject search “World War, 1914-1918–Aerial operations” http://tinyurl.com/6hkgrcq. There I see the results grouped by American, Australian, Austrian, Belgian, etc., i.e. concepts I would not have considered on my own.
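That grouped subject browse can be sketched in a few lines. The miniature catalog and the `browse` helper below are invented for illustration, assuming the usual LCSH pattern of a base heading followed by an optional subdivision:

```python
from collections import defaultdict

# Hypothetical miniature catalog: (title, subject heading) pairs.
records = [
    ("The achievements of the Zeppelins", "World War, 1914-1918--Aerial operations"),
    ("Over the front", "World War, 1914-1918--Aerial operations, American"),
    ("Flying Fury", "World War, 1914-1918--Aerial operations, British"),
    ("Eagles over the trenches", "World War, 1914-1918--Aerial operations, American"),
]

def browse(records, base_heading):
    """Group records under a base heading by their subdivision,
    mimicking the grouped display of a catalog subject browse."""
    groups = defaultdict(list)
    for title, heading in records:
        if heading.startswith(base_heading):
            # whatever follows the base heading (", American" etc.) is the subdivision
            sub = heading[len(base_heading):].lstrip(", ") or "(general)"
            groups[sub].append(title)
    return dict(groups)

for sub, titles in sorted(browse(records, "World War, 1914-1918--Aerial operations").items()):
    print(sub, titles)
```

The point of the grouping is exactly the one made above: a searcher can dismiss a whole subdivision at a glance instead of inspecting every record in it.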

Now, if we look at the very first record:

Main title: The achievements of the Zeppelins, by a Swede.
Published/Created: London, T. F. Unwin, ltd. [1916]
Description: 16 p. incl. pl. 15 cm.
Notes: “Reprinted from the Stockholms Dagblad of 19th March, 1916.”
Subjects: World War, 1914-1918 –Aerial operations.
LC classification: D604 .A3

Not a single word of the subject heading appears anywhere in the description of the item; therefore, without the subject heading, a person interested in air warfare would not have found this record and would have had to come up with “Zeppelins” independently somehow. Full-text searching would not have helped either, since this publication is from 1916, and WWI was not called WWI until WWII broke out.
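To make this concrete, here is a small sketch (the record fields are copied from the example above; the two search functions are invented for illustration): a naive keyword search over the descriptive fields misses the record entirely, while retrieval via the assigned heading finds it.

```python
# The 1916 Zeppelin record from the catalog example above.
record = {
    "title": "The achievements of the Zeppelins, by a Swede.",
    "notes": "Reprinted from the Stockholms Dagblad of 19th March, 1916.",
    "subjects": ["World War, 1914-1918--Aerial operations"],
}

def keyword_match(query, record):
    """Naive keyword search over the descriptive fields only."""
    text = (record["title"] + " " + record["notes"]).lower()
    return all(term in text for term in query.lower().split())

def subject_match(heading, record):
    """Retrieval via the assigned subject heading."""
    return heading in record["subjects"]

print(keyword_match("world war aerial", record))
# False: none of the query terms appear in the description
print(subject_match("World War, 1914-1918--Aerial operations", record))
# True: the cataloger's heading is what connects this record to the topic
```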

What I am trying to show is that the subject heading arrangement, when used as it is designed to be used, is an incredible time saver for searchers, since they can very quickly get a nice overview of what is in the local collection and decide, e.g., “I am not interested in World War, 1914-1918–Aerial operations, Italian,” and not have to look at any of those. This system is far from perfect, but there is real power in the traditional subject headings that *is not replicated* in relevance ranking, at least so long as the library catalogers do their jobs satisfactorily.

This traditional method was designed for printed catalogs, and I readily admit that it does not work in the world of the web. But the question naturally arises: could these clear sorts of result sets be repurposed to function on the web? Of course they could, if the powers that be decided to devote the resources. Yet I fear there is little chance that we will devote the resources to this task; instead we will put our faith in “relevance” ranking, which I think is, in reality, a search for a “perfect algorithm” that I do not believe exists.

*Everybody*, from provider to searcher, has an interest in maintaining the idea that relevance ranking really does give us what is “relevant” (in the normal meaning of the term), and not actually the output of some incredibly complex mathematical algorithm that produces rankings no human being could ever explain, results that can only be accepted at face value. Yet we must accept this, since the only other choice would be to look at all 65,400 hits, where the “relevance” really does trail down to .0001 sooner or later (and of course, we know this is only the tip of the iceberg of what is really on the web on this topic).

It would be nice if we could get the two methods to work in tandem, because where the subject headings are strong, relevance ranking is weak, and where the subject headings are weak, relevance ranking is strong.
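One hypothetical way the tandem might look in practice, sketched in Python: start from an ordinary keyword relevance score, then boost records whose controlled headings overlap the query vocabulary. The `hybrid_score` function and the boost weight are invented assumptions, not any existing system’s method.

```python
def hybrid_score(query, record, keyword_score):
    """Sketch of the tandem approach: combine an ordinary keyword
    relevance score with a boost for matching controlled subject
    headings. The boost weight is an assumed value, to be tuned."""
    SUBJECT_BOOST = 2.0
    vocab = set(query.lower().split())
    score = keyword_score
    for heading in record.get("subjects", []):
        # split the heading on punctuation so its words are comparable
        heading_terms = set(heading.lower().replace(",", " ").replace("--", " ").split())
        if vocab & heading_terms:
            score += SUBJECT_BOOST
    return score

record = {"subjects": ["World War, 1914-1918--Aerial operations"]}
print(hybrid_score("aerial operations wwi", record, keyword_score=1.0))
# → 3.0: the heading overlaps the query, so the record is boosted
```

A real implementation would also want the “see from” lead-in terms mentioned in the comments below the post, so that “wwi” itself could match the heading’s vocabulary.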

Something tells me that will be a very hard case to make, however, since the administrators will no doubt claim it is double work, although they may say it would be nice in a different economic environment, and so on and so on.



4 Comments

  1. Since you are responding to my comments, I feel an obligation to defend them.

     You’re not comparing apples to apples. Google does not contain the same collection of things as your catalog (which is sometimes good, and sometimes not, depending on what the user is looking for).

     In your catalog, you are assuming the user correctly finds that subject heading — how do they

    March 31, 2011
  2. Continued…

     But in fact, relevancy ranking and subject cataloging can work great together. One thing missing from my first new catalog example up there is including lead-in (“see from”) terms from LCSH in the keyword index. That’s not there yet, but really should be — if it was, maybe “wwi” and “air warfare” would have matched some lead-in terms?

    March 31, 2011
  3. Thanks for your comments. I would like to make a few observations of my own, but I think we are substantially in agreement: best would be for the traditional subject headings to work in tandem with relevance ranking. For this to happen, however, the system of finding the subject headings must be improved because, as I mentioned, no one (other than a cataloger!) would ever come up with terms like

    March 31, 2011
