RE: ONIX data

Posting to NGC4LIB

Charles Ledvina wrote:

The Amazon To MARC Converter takes Amazon’s ONIX data and creates a MARC record in which you can verify names via the VIAF API and add call numbers and subject headings using OCLC’s Classify API.

While this could potentially become a very useful tool, it also exemplifies the problems I mentioned about standards. Here is a record taken at random from this tool. I’ll assume everyone on this list knows basic MARC:
245 10 Central and eastern europe after transition / |c Alberto Febbrajo.
260 [S.l.] : |b Ashgate, |c 2010.
300 374 p. ; |c 24 cm.
490 1 Studies in modern law and policy.
[Sadurski exists as an added entry. Febbrajo is main entry.–JW]

Here is the LC ISBD information:
Central and Eastern Europe after transition : towards a new socio-legal semantics / edited by Alberto Febbrajo and Wojciech Sadurski.
Farnham, Surrey, England ; Burlington, VT : Ashgate Pub., c2010.
xi, 362 p. ; 25 cm.
[Both names as added entries]
It is interesting to note that when you search by the subtitle in Amazon (“towards a new socio-legal semantics”), you do not get this item.
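Incidentally, the “verify names via the VIAF API” step mentioned above can be sketched concretely. The AutoSuggest endpoint below is VIAF’s public one, but the response field names (“term”, “viafid”) and the sample JSON are my own assumptions for illustration, not the converter’s actual code:

```python
import json
from urllib.parse import quote

# VIAF's public AutoSuggest endpoint. NOTE: the response field names
# used below ("term", "viafid") and the sample JSON are assumptions
# made for illustration; check the live service before relying on them.
VIAF_AUTOSUGGEST = "https://viaf.org/viaf/AutoSuggest?query={}"

def viaf_query_url(name):
    """Build an AutoSuggest URL for a personal name."""
    return VIAF_AUTOSUGGEST.format(quote(name))

def pick_heading(response_text):
    """Return (term, viafid) for the first suggestion, or None.

    In real use, response_text would come from an HTTP GET on the
    URL above; here it is simply parsed from a string.
    """
    data = json.loads(response_text)
    for hit in data.get("result") or []:
        return hit.get("term"), hit.get("viafid")
    return None

# Invented sample response, shaped like what AutoSuggest returns:
sample = ('{"query": "Febbrajo, Alberto", '
          '"result": [{"term": "Febbrajo, Alberto", "viafid": "12345"}]}')
print(pick_heading(sample))  # ('Febbrajo, Alberto', '12345')
```

Even with such a lookup automated, of course, a human still has to decide whether the suggested heading is the right person, which is exactly the kind of checking I discuss below.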

Here is another one:
245 00 Picasso : |b the mediterranean years 1945-1962.
260 [S.l.] : |b Rizzoli, |c 2010.
300 390 p. ; |c 32 cm.
[Richardson, Cowling, Arnaud as 700s. Nothing for the gallery]

Here is the copy from Columbia:
245 10 |a Picasso : |b the Mediterranean years 1945-1962 / |c curated by John Richardson ; [with contributions by Elizabeth Cowling, Claude Arnaud].
260__ |a London : |b Gagosian Gallery ; |a New York : |b Distributed by Rizzoli International Publications, |c c2010.
300__ |a 386 p. (some folded) : |b ill. (some col.), maps, ports. ; |c 31 cm.
[Main entry under the gallery–JW]

These two examples (with zillions more very easy to find) illustrate the problem of standards I keep pointing out: almost every single field, *even in the ISBD areas*, differs. So, while I agree that there is a type of “copy” here, its existence is essentially useless: it saves the cataloger no time at all, since every field must be redone. Worse, when faced with such records in the aggregate, each field of each record must be checked, even when no editing turns out to be needed, because it is obvious that nothing can be taken for granted. That is the only way to ensure that a certain level of quality is achieved; otherwise, all we can do is give up and accept everything. The cataloger dealing with records like these might as well start from scratch, and experience shows that when confronting such records, it is often best to ignore them completely.
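To make the field-by-field checking concrete, here is a small sketch comparing the two Febbrajo records above, reduced to tag/value pairs (this is an illustration of the checking burden, not a real MARC parser; the values are transcribed from the records shown):

```python
# The ONIX-derived record and the LC record from the first example,
# reduced to tag -> value pairs (transcribed from the records above).
onix_record = {
    "245": "Central and eastern europe after transition / Alberto Febbrajo.",
    "260": "[S.l.] : Ashgate, 2010.",
    "300": "374 p. ; 24 cm.",
    "490": "Studies in modern law and policy.",
}

lc_record = {
    "245": ("Central and Eastern Europe after transition : towards a new "
            "socio-legal semantics / edited by Alberto Febbrajo and "
            "Wojciech Sadurski."),
    "260": "Farnham, Surrey, England ; Burlington, VT : Ashgate Pub., c2010.",
    "300": "xi, 362 p. ; 25 cm.",
}

def diff_fields(a, b):
    """Return the tags whose values differ, or exist on only one side."""
    differing = []
    for tag in sorted(set(a) | set(b)):
        if a.get(tag) != b.get(tag):
            differing.append(tag)
    return differing

print(diff_fields(onix_record, lc_record))  # ['245', '260', '300', '490']
```

Every single field comes back as differing, which is precisely the point: the “copy” gives the cataloger nothing that can be accepted without inspection.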

This is why I say that if a grocery store had to do this kind of work with every loaf of bread or any other item it sold, or if every business in the world that sells anything had to recheck every single item it sold, our society would completely fall apart. Such a situation would rightly be called insanely inefficient. That is why standards exist and why they absolutely must be enforced, by law if necessary. These kinds of standards are achieved in almost every industry except, it seems, bibliographic information. This has always struck me as very strange.

Any standards must not only ensure a certain minimal level of quality; they must also be readily achievable. I, and many other catalogers, have noticed that record quality from other libraries has gone down significantly. I think we must honestly ask whether our current standards are set too high, since apparently so few can achieve them. And now, with the idea of incorporating other standards such as what we see here, I think we must completely reconsider what the purpose of our standards is and how they can *really* be achieved, so that some level of assurance can be gained by all. Otherwise, all we will get is hash, or the insane inefficiency I mentioned.

Unfortunately, RDA heads in exactly the opposite direction.