Thursday, December 23, 2010

RE: ONIX data

Posting to NGC4LIB

Cory Rockliff wrote:
<snip>
This is actually what I imagined would be the primary use case for ONIX in libraries. Even if there's a lot more to add in order to arrive at a "full" record (however we're to define that), deriving MARC record "stubs" from ONIX should significantly lessen the burden of transcription from the item-in-hand by catalogers.
</snip>
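
To make the quoted idea concrete, here is a minimal sketch of what deriving such a stub might look like, in Python using only the standard library. The element names follow the ONIX 2.1 reference-tag style, and the handful of field mappings are my own illustrative assumptions, not one of the published ONIX-to-MARC crosswalks:

```python
# A minimal sketch of deriving a MARC-like stub from an ONIX record.
# Element names follow ONIX 2.1 reference-tag style; the field mapping
# is an illustrative assumption, not a published crosswalk.
import xml.etree.ElementTree as ET

ONIX_SAMPLE = """
<Product>
  <ProductIdentifier>
    <ProductIDType>15</ProductIDType>
    <IDValue>9780000000002</IDValue>
  </ProductIdentifier>
  <Title>
    <TitleText>An Example Title</TitleText>
  </Title>
  <Contributor>
    <PersonNameInverted>Doe, Jane</PersonNameInverted>
  </Contributor>
  <Publisher>
    <PublisherName>Example Press</PublisherName>
  </Publisher>
  <PublicationDate>2010</PublicationDate>
</Product>
"""

def onix_to_marc_stub(product: ET.Element) -> list[str]:
    """Map a handful of ONIX elements to MARC-like field strings."""
    def text(path: str):
        node = product.find(path)
        return node.text.strip() if node is not None and node.text else None

    fields = []
    isbn = text("ProductIdentifier/IDValue")  # assumes ProductIDType 15 (ISBN-13)
    if isbn:
        fields.append(f"020 ## $a {isbn}")
    author = text("Contributor/PersonNameInverted")
    if author:
        fields.append(f"100 1# $a {author}")
    title = text("Title/TitleText")
    if title:
        fields.append(f"245 10 $a {title}")  # real indicator logic would go here
    publisher = text("Publisher/PublisherName")
    date = text("PublicationDate")
    if publisher or date:
        fields.append(f"260 ## $b {publisher or ''} $c {date or ''}".strip())
    return fields

for line in onix_to_marc_stub(ET.fromstring(ONIX_SAMPLE)):
    print(line)
```

Even a stub this thin shows where the real work remains: the indicators, the headings, and the ISBD punctuation all still have to come from somewhere.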
The word "should" here holds the entire point of whether to use ONIX records for library cataloging. There are certain levels of standards that must be adhered to if the entire system is not to dissolve into complete chaos. These standards must be linked to a certain level of assurance that the records actually *do* conform to those standards, i.e. while you can never get 100% compliance in anything, what is acceptable? 98%? 90%? 75%? 50%? If there is no assurance (within tolerable limits) that a specific record will conform to the standards, there are essentially two options:
  1. to give up, accept everything, and admit that there are no standards; or
  2. recheck each and every record received to ensure the standards are met within your own catalog (a sketch of what this rechecking amounts to follows below).
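
As a rough sketch of what option 2 means in practice, here is what checking a batch against a tolerance might look like. The record shape and the individual checks are hypothetical placeholders, not a real AACR2/ISBD validator:

```python
# A rough sketch of option 2: recheck every incoming record against local
# standards and see whether the batch clears an acceptable rate. The record
# shape and the checks are hypothetical placeholders, not a real validator.

def record_conforms(record: dict) -> bool:
    """Stand-in for whatever the local standard (AACR2/ISBD) actually demands."""
    return all([
        bool(record.get("title")),             # placeholder for a real title check
        bool(record.get("publication_info")),  # placeholder for an ISBD-area check
    ])

def acceptance_rate(records: list[dict]) -> float:
    if not records:
        return 0.0
    return sum(record_conforms(r) for r in records) / len(records)

TOLERANCE = 0.90  # 98%? 90%? 75%? That is exactly the open question.

batch = [
    {"title": "Some title", "publication_info": "London : Example, 2010"},
    {"title": None, "publication_info": None},  # a lousy record
]
rate = acceptance_rate(batch)
print(f"{rate:.0%} of the batch conforms (tolerance {TOLERANCE:.0%})")
if rate < TOLERANCE:
    print("No assurance: every record must be rechecked by hand.")
```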
Of course, this problem is nothing new in the library cataloging world and has been going on since the beginning. It's no secret that some libraries produce "higher-quality" records than others, and that there is a lot of junk requiring mounds of reworking. Perhaps those other libraries *should* produce higher-quality records, but they just don't, for all kinds of reasons, legitimate and less so. What is a solitary library supposed to do when it finds a lousy copy record? Accept it or revise it?

[There is potentially a 3rd option: quality control done retrospectively on random samples. That has always seemed dangerous to me, since what would you do if you found a major problem? To relate it to food: what happens when inspectors discover retrospectively that x% of the canned tomatoes on the market contain ptomaine? One thing that happens is that the producer responsible is severely punished. Other librarians may have some experience with such an option for catalog records.]
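
For what such retrospective sampling might look like, a sketch under the same assumptions, with a hypothetical conformance test run over a toy catalog:

```python
# A sketch of the "3rd option": retrospective quality control on a random
# sample, estimating the error rate for the whole file. The conformance
# test is again a hypothetical stand-in.
import random

def record_conforms(record: dict) -> bool:
    return bool(record.get("title"))  # placeholder for real checks

def estimated_error_rate(catalog: list[dict], sample_size: int) -> float:
    """Estimate the share of nonconforming records from a random sample."""
    sample = random.sample(catalog, min(sample_size, len(catalog)))
    return sum(not record_conforms(r) for r in sample) / len(sample)

catalog = [{"title": "fine"}] * 950 + [{"title": None}] * 50  # toy data: 5% bad
print(f"Estimated error rate: {estimated_error_rate(catalog, 100):.1%}")
# The danger described above: even a good estimate only tells you roughly
# how many unsampled records are bad, not which ones.
```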

So, if taking ONIX records really were only a matter of *adding* information, essentially the headings, that would be one thing, but it assumes quite a bit: that the rest of the record already conforms to your standards (AACR2, ISBD). From what I have read from others with more experience working with these records, this is not at all the case; taking ONIX records will therefore just mean having more lousy copy cataloging available. As a result, the cataloger has to recheck and/or redo the entire record anyway, saving very little or nothing, or even incurring the additional labor of fixing everything. The only other choice is to hold your nose, do nothing, and accept whatever you find.

This is ostensibly one of the reasons for RDA: that we could receive RDA-compatible records from publishers through ONIX. I haven't seen any evidence for this. Why, if publishers won't give us AACR2-compatible records, would they be more willing to give us RDA-compatible records? I find that totally inconceivable.

One good point of having the ONIX records (along with other metadata records) is that we may be forced to confront what I think is reality: I have mentioned before that perhaps our current AACR2 standards are too high, since so many libraries have problems reaching them. How could we design standards that these different bibliographic agencies could actually reach? The result would be a reliable, assured, minimal level of quality.

In my own opinion, ISBD could be a very good beginning.
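
As a sketch of what checking against such a minimal, ISBD-based level might involve: simple presence of the ISBD areas. The area names and the record shape here are my own illustrative assumptions, not a formal ISBD validator, and several areas are of course optional when they do not apply:

```python
# A sketch of a minimal, assured level built on ISBD: check simple presence
# of the ISBD areas. The area list and record shape are illustrative
# assumptions, not a formal ISBD validator; several areas are optional
# whenever they do not apply.
ISBD_AREAS = [
    "title_and_statement_of_responsibility",
    "edition",
    "material_or_type_specific",
    "publication_distribution",
    "physical_description",
    "series",
    "notes",
    "standard_number",
]

def missing_isbd_areas(record: dict) -> list[str]:
    """Return the ISBD areas the record lacks; an empty list means the
    record meets this (hypothetical) minimal level."""
    return [area for area in ISBD_AREAS if not record.get(area)]

record = {
    "title_and_statement_of_responsibility": "An example / by Jane Doe.",
    "publication_distribution": "London : Example Press, 2010.",
    "standard_number": "9780000000002",
}
print("Missing areas:", missing_isbd_areas(record))
```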
