What happens when all articles are easy to find?

Posting to Autocat

On 10/27/2015 12:49 AM, Mary Beran wrote:
I agree that subject indexing is the ultimate, but is it necessary these days to have a subject specialist to bring together the scholarly and non-scholarly verbiage of the same article? We don’t see it in our library catalogs today when we dump in records from various vendors/sources, and yet we expect users to click on the Subject facet when some articles have no descriptors at all. Certainly, authors should be well aware of the necessity of their articles being accessed, both for the sake of the author’s being cited and for the less scholarly user who would be interested as well. With more being published, we have neither time nor money to provide additional access. For any publication, the publisher should ensure that the author submits both technical and nontechnical versions of the abstract, as well as inclusive technical and nontechnical keywords.

You bring up a couple of excellent issues. It seems to me that the most important obstacle is, as you say, that libraries “dump in records from various vendors” into our catalogs, while we expect users to click on the Subject facet even though many of these records have no subjects at all. How are searchers supposed to understand what they are (and aren’t) searching, and therefore what they are missing?
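To make this concrete, here is a minimal Python sketch of how a typical discovery-layer Subject facet behaves over a mixed set of vendor records. The record structure and field names are invented for illustration only; this is not any vendor’s actual code.

from collections import Counter

records = [
    {"title": "Article A", "subjects": ["Climate change"]},
    {"title": "Article B", "subjects": ["Climate change", "Oceanography"]},
    {"title": "Article C", "subjects": []},   # vendor record with no descriptors
    {"title": "Article D", "subjects": []},   # another unindexed record
]

# Build the facet the way most discovery layers do: count each subject value.
facet = Counter(s for r in records for s in r["subjects"])
print(facet)  # Counter({'Climate change': 2, 'Oceanography': 1})

# Click any facet value and the unindexed records vanish silently:
filtered = [r for r in records if "Climate change" in r["subjects"]]
print(len(filtered), "of", len(records))  # 2 of 4 -- C and D are invisible

# Nothing in the interface tells the user that half the result set
# was never subject-indexed at all.
unindexed = [r["title"] for r in records if not r["subjects"]]
print("never reachable via the Subject facet:", unindexed)

The records with no descriptors simply do not exist as far as the facet is concerned, and the interface gives the user no hint of that.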

Libraries implement the “single search box” that searches “everything at once,” but that leads to tremendous inconsistency for any search in the catalog, and it can actually hide as much as, or more than, it finds. That must have huge impacts on users, but it has been little discussed. (Not every vendor uses LCNAF or even VIAF forms of names, either.) I discussed this with Julie Moore in a book chapter: Moore, Julie, and Weinheimer, James. “The Future of Traditional Technical Services.” In Rethinking Library Technical Services: Redefining Our Profession for the Future. Rowman & Littlefield, 2015, pp. 1-16. http://eprints.rclis.org/25338/
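The name-form problem works the same way. Here is a small sketch, with invented name strings and record layout, of how an exact-string author facet splits one person into several when vendors do not supply LCNAF or VIAF forms:

from collections import Counter

records = [
    {"title": "Paper 1", "author": "Tolstoy, Leo, graf, 1828-1910"},  # LCNAF form
    {"title": "Paper 2", "author": "Tolstoi, L. N."},                 # one vendor's form
    {"title": "Paper 3", "author": "Lev Tolstoy"},                    # another vendor's form
]

# An exact-string author facet treats these as three different people:
author_facet = Counter(r["author"] for r in records)
print(author_facet)
# Counter({'Tolstoy, Leo, graf, 1828-1910': 1, 'Tolstoi, L. N.': 1, 'Lev Tolstoy': 1})

# A user who clicks the authorized form retrieves one record of three;
# without authority control, nothing ties the variant forms together.
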

I don’t agree, however, that you need to be a subject specialist to do subject analysis. Assignment of subjects should be done by specialists in the catalog, who do as much as they can to maintain consistency within that catalog. It turns out that the authors of a resource, while they understand the content of the resource they have created, do not understand the overriding importance of consistency and often, as one colleague put it to me, “consider assigning metadata as an extension of their own creativity.” When he told me that, it explained a lot about what I had seen with author-created metadata, and that’s why I wrote “Metadata Creation–Down and Dirty” to show how subjects rely, in a natural fashion, on consistency over almost everything else. http://blog.jweinheimer.net/2014/11/metadata-creation-down-and-dirty-updated.html

The issue is really one of funding. With enough money, almost anything can be done, but those who control the funding must be convinced they are allocating it wisely. Subject access in our catalogs has been broken for a long time, as I have demonstrated several times, and this should be fixed. Moreover, the tools that catalogers use are still stuck primarily in the 19th century, and many new ones could be built to make catalogers far more productive than they are today, and with less pain! Linked data may help solve a couple of the problems catalogers face, but the biggest ones will remain.

At least with the work of people such as Anurag Acharya on Google Scholar, we can proclaim confidently that people want subject access. That is a huge step. The next step is to provide it.

Should it be done in a way that is consistent, or in some other way? What would those other ways be? Google has its way; libraries should come up with their own.

