On 18/03/2013 16:28, Kevin M Randall wrote:
This is making the assumption that existing data cannot be modified to meet the new standards, or that systems will never be able to deal with variant data. If that were true, there would be no point in trying to make any new developments at all. We should have just stuck with book catalogs centuries ago…
The real question should have been: what will we be able to do after the new standards that we cannot do now, and how much will it cost to get there? In other words, will the new standards be an improvement in practical terms or only in theoretical terms? And what will the practical effects be? But RDA has wanted to stay theoretical…
Of course, it is dangerous simply to assume that the so-called “legacy data” will be redone by magic. I haven’t yet seen any plans for spending money to update those millions of records! But I am sure it will be a piece of cake! 🙂 And in the meantime, the public will experience decreased access, as I demonstrated; that is, if these new codes for relators and relationships are actually used for searching, and not just for display. If they are used only for display, people won’t notice any change at all. That’s a real step forward! All of this should at least be discussed.
Still, not going with these so-called “higher standards” certainly doesn’t mean that nothing can change. Catalogs have changed tremendously already. Look at the facets and federated searching of all different types of content–not only of library records but of other kinds of records. These developments seem to have been pretty much ignored by the cataloging community. None of it needs RDA, FRBR, or RDF. Science.gov is a great example of what can be done. http://www.science.gov/ And all of the technology is open source, downloadable for free. Rather amazing.
This is only the tip of the iceberg of what could be done if the emphasis were on changing the catalog and not the catalog records.