Okay, I know it's cool to hate on OpenURL, but I feel I have to clarify a few 
things.

> OpenURL is of no use if you seperate it from the existing infrastructure 
> which is mainly held by companies. No sane person will try to build an open 
> alternative infrastructure because OpenURL is a crapy library-standard like 
> MARC etc.

OpenURL is mostly implemented by libraries, yes, but it isn't necessarily 
*just* a library standard - this is akin to saying that Dublin Core is a 
library standard.  Only sort of.

The other issue I have is that — although Jonathan used the term to make a 
point — OpenURL is *not* an infrastructure, it is a protocol.  Condemning the 
current OpenURL infrastructure (which is mostly a vendor-driven oligopoly) is 
akin to saying in 2004 that HTTP and HTML suck because Firefox hadn't been 
released yet and all we had was IE6.  Don't condemn the standard because of the 
current implementations.

> The OpenURL specification is a 119 page PDF - that alone is a reason to run 
> away as fast as you can.

The main reason for this is because OpenURL can do much, much, much more than 
the simple "resolve a unique copy" use case that libraries use it for.  We're 
using maybe 1% of the spec for 99% of our practice, probably because librarians 
weren't imaginative enough (as Jim Weinheimer would say) to think of other use 
cases beyond that most pressing one.

I'd contend that OpenURL, like other technologies (<cough> XML) is greatly 
misunderstood, and therefore abused, and therefore discredited.  I think there 
is also often confusion between the KEV schemas and OpenURL itself (which is 
really what Dorothea's blog rant is about); I'm certainly guilty of this 
myself, as Jonathan can attest.

You don't *have* to use the KEVs with OpenURL; you can use anything, including, 
e.g., Dublin Core.
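To make that concrete, here's a rough sketch of the same citation expressed two 
ways against a hypothetical resolver (the base URL and the citation details are 
invented; the key names follow the Z39.88-2004 KEV conventions as I understand 
them):

```python
from urllib.parse import urlencode

BASE = "http://resolver.example.edu/openurl"  # hypothetical resolver

# The citation as a KEV ContextObject using the journal metadata format...
kev_journal = urlencode({
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": "An Example Article",
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "34",
})

# ...and the same referent described with the Dublin Core metadata format
# instead of the journal one.
kev_dc = urlencode({
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:dc",
    "rft.title": "An Example Article",
    "rft.identifier": "urn:issn:1234-5678",
})

print(f"{BASE}?{kev_journal}")
print(f"{BASE}?{kev_dc}")
```

Same protocol, same resolver, two different metadata formats for the referent — 
which is the point: the KEVs are pluggable, not baked in.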

> If a twitter annotation setup wants to get adopted than it should not be 
> build on a crapy complex library standard like OpenURL.

I don't quite understand this (but I think I agree) — twitter annotation should 
be built on a data model, and then serialized via whatever protocols make sense 
(which may or may not include OpenURL).

> I must admit that this solution is based on the open assumption that CSL 
> record format contains all information needed for OpenURL which may not the 
> case.
> …

A good example.  And this is where you're exactly right that we need better 
tools, namely OpenURL resolvers which can do much more than they do now.  I've 
had the idea for a number of years now that OpenURL functionality should be 
merged into aggregation / discovery-layer systems (e.g. OAI harvesters), 
because, like OAI-PMH, OpenURL can *transport metadata*; we just don't use it 
for that in practice.
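A minimal sketch of what I mean (the URL below is made up): any harvester-style 
tool could pull the metadata payload straight out of a ContextObject, because on 
the wire it's just key/value pairs:

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical OpenURL; the referent's metadata rides along in the rft.* keys.
openurl = ("http://resolver.example.edu/openurl?"
           "url_ver=Z39.88-2004"
           "&rft_val_fmt=info:ofi/fmt:kev:mtx:journal"
           "&rft.jtitle=Journal+of+Examples"
           "&rft.issn=1234-5678")

qs = parse_qs(urlparse(openurl).query)

# Harvest the referent's metadata into a record, much as an OAI-PMH-style
# aggregator harvests records from a repository.
record = {k.split(".", 1)[1]: v[0] for k, v in qs.items()
          if k.startswith("rft.")}
print(record)  # → {'jtitle': 'Journal of Examples', 'issn': '1234-5678'}
```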

A ContextObject is just a triple that makes a single assertion about two 
entities (resources): that A "references" B.  Just like an RDF statement using 
<http://purl.org/dc/terms/references>, but with more focus on describing the 
entities rather than the assertion.
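Putting that side by side (the identifiers here are invented): the core 
assertion of a ContextObject — the referring entity cites the referent — maps 
directly onto a dcterms:references triple:

```python
from urllib.parse import parse_qs

# Hypothetical ContextObject: the referring entity (rfe_id) cites the
# referent (rft_id), each identified by value.
query = ("url_ver=Z39.88-2004"
         "&rfe_id=info:doi/10.1000/paper-a"
         "&rft_id=info:doi/10.1000/paper-b")

qs = parse_qs(query)
a, b = qs["rfe_id"][0], qs["rft_id"][0]

# The single assertion the ContextObject makes, as an N-Triples statement.
triple = f"<{a}> <http://purl.org/dc/terms/references> <{b}> ."
print(triple)
```

The difference in emphasis is that the ContextObject can carry a full 
description of A and B alongside the assertion, where the bare triple carries 
only their identifiers.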

Maybe if I put it that way, OpenURL sounds a little less crappy.
