ACAT An amazing record

Posting to Autocat

On 23/07/2014 16.33, Mary Beth Weber wrote:

I shared the original posting with a colleague who’s not on Autocat, and I would like to share her response. She’s been charged with implementing Rutgers’ Open Access policy:

This is something we have been asked about in terms of the OA initiative as it was making its way through the Senate. Apparently it is quite typical for physics scholarship (much of which is in arxiv.org) to list literally hundreds of authors, all of them “primary”! I never thought about the fact that some would make their way to OCLC! Yikes.

As catalogs strive to become more “inclusive” in various ways, this will be the reality everyone will have to live with, whether the records are physically added to the catalog or a federated search is implemented. If, as was suggested, these records were created automatically, then records of this kind can be pumped out by computers at a rate that makes our collective efforts look puny. These generated records may follow other rules, or no rules at all, and often serve all kinds of purposes. Records such as these, which appear to add each and every person who has ever been attached to a project (that is my assumption), exist not so much for finding purposes (especially hard to demonstrate with names such as “Alison, J.” or “Becker, S.”); I suppose they attach a certain prestige to the records, since they show how big and important the project is. Obviously, all of these people did not write all of these articles and the real authors are far fewer, but since the names appear to be in alphabetical order, there is no way to know which ones they are.

I don’t think we can change this trend, but it has the potential to change searching in some fundamental ways. If this goes on, almost any name, or group of names, that someone searches will bring up these records. At the same time, there is this push/need/compulsion to be able to search “everything” in one search (whatever the word “everything” happens to mean). These records could be handled the way Google handles its data, by pushing “unwanted” results down in the list, but I don’t know whether that is any real solution.
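Purely as an illustration of that “push unwanted results down” idea, here is a minimal Python sketch. It is not anyone’s actual discovery-layer or Google code; the field names, the author-count threshold, and the penalty factor are my own assumptions, chosen only to show how a ranking could demote, without hiding, records whose enormous author lists suggest they were machine-generated.

```python
# Hypothetical re-ranking sketch: demote records with very long author
# lists (a sign of auto-generated, project-wide authorship) rather than
# removing them from the result set.

def rerank(records, author_threshold=50, penalty=0.5):
    """Return records sorted by relevance, with scores of records whose
    author count exceeds the threshold multiplied by a penalty factor."""
    adjusted = []
    for rec in records:
        score = rec["score"]
        if len(rec.get("authors", [])) > author_threshold:
            score *= penalty  # push the record down the list, but keep it
        adjusted.append({**rec, "score": score})
    return sorted(adjusted, key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    # Invented sample data for demonstration only.
    sample = [
        {"title": "A collaboration paper", "authors": ["Alison, J."] * 300, "score": 0.9},
        {"title": "A single-author monograph", "authors": ["Becker, S."], "score": 0.8},
    ]
    for rec in rerank(sample):
        print(f'{rec["score"]:.2f}  {rec["title"]}')
```

Whether such demotion is a real solution is exactly the open question above: it hides the symptom in the ranking without telling the searcher which of the hundreds of names actually wrote the article.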

Is there anything we can do about it? Or do we just keep doing what we have always done?
