On 4/29/10 12:32 PM, MJ Suhonos wrote:
> What I hope for is that OpenURL 1.0 eventually takes a place alongside SGML as
> a too-complex standard that directly paves the way for a universally adopted
> foundational technology like XML. What I fear is that it takes a place
> alongside MARC as an anachronistic standard that paralyzes an entire industry.
Hear hear.

I'm actually encouraged by Benjamin's linking (har har) to the httpRange-14 issue as being relevant 
to the concept of "link resolution", or at least redirection (indirection?) using URL 
surrogates for resources.  Many are critical of the TAG's "resolution" (har har har) of 
the issue, and think it places too much weight on the 303 redirect.

I'm afraid I still don't understand the issue well enough to comment, though I'd love 
to hear from anyone who can.  I agree with Eric's hope that the library world can look to 
the W3C's thinking to inform a "better way" forward for link resolving, though.
One key thing to remember with the W3C work is that URIs have to be dereferenceable. I can't look up urn:isbn:{whatever} (without an OpenURL resolver or Google or the like), but I can dereference http://dbpedia.org/resource/The_Lord_of_the_Rings -- which 303s to http://dbpedia.org/page/The_Lord_of_the_Rings -- which is in turn full of more /resource/ URLs that are themselves dereferenceable (via 303 See Other).
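To make the dereferenceability point concrete, here's a trivial sketch (not anyone's official API, just an illustration): a urn:isbn identifier is a perfectly good *name*, but only http(s) URIs come with a built-in lookup mechanism.

```python
from urllib.parse import urlparse

def is_dereferenceable(uri: str) -> bool:
    """Crude check: http(s) URIs can be fetched over the web;
    urn: identifiers name a thing but provide no way to retrieve
    a representation of it."""
    return urlparse(uri).scheme in ("http", "https")

print(is_dereferenceable("urn:isbn:9780261103252"))
print(is_dereferenceable("http://dbpedia.org/resource/The_Lord_of_the_Rings"))
```

Obviously a real check involves more than the scheme, but that's the asymmetry in a nutshell.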

The main thing the W3C was trying to avoid was RDF that inadvertently talks about online documents when what it really wants to talk about is the "real thing." Real things (like books) need a URI, but ideally a URI that can be dereferenced (via HTTP in this case) to yield some information about that real thing, which isn't possible with urn:isbn-style schemes.
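As I understand it, the 303 pattern works roughly like this toy sketch (an in-process simulation, not a real server; the dbpedia-style /resource/ and /page/ paths are just borrowed for illustration): a URI for the real thing 303-redirects to a document *about* the thing, while the document URI answers 200 directly.

```python
# Documents *about* things; only these answer 200 with a body.
DOCS = {
    "/page/The_Lord_of_the_Rings": "<html>facts about the novel</html>",
}

def resolve(path):
    """Return (status, location_or_body) for a request path."""
    if path.startswith("/resource/"):
        # Non-information resource: 303 See Other to its description,
        # so the RDF can keep talking about the book, not the web page.
        return 303, path.replace("/resource/", "/page/", 1)
    if path in DOCS:
        return 200, DOCS[path]
    return 404, None

status, location = resolve("/resource/The_Lord_of_the_Rings")
# status == 303; a client then follows `location` to get the document
status2, body = resolve(location)
# status2 == 200
```

The point of the indirection is that the thing-URI never returns a representation itself, so nothing ever asserts that the book *is* a web page.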

That's my primitive understanding of it anyway. Apologies if any overlaps with library tech are off. :)
