Thanks Alex,

This makes sense, and yes, I see what you're saying - if you end up
going back to custom coding because it's easier, that does seem to defeat the
purpose.

However, I'd argue that OpenURL actually 'succeeded' because it did manage
to gain some level of acceptance (ignoring the question of whether that was
v0.1 or v1.0) - the cost of developing 'link resolvers' would have been much
higher if we'd had to do something different for each publisher/platform. In
this sense (I'd argue) sometimes crappy standards are better than none.

We used OpenURL v1.0 in a recent project, and because we were able to
simply pick up code already written for Zotero, and we already had an OpenURL
resolver, the amount of new code we needed was minimal.
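(For anyone following along who hasn't looked at one, an OpenURL 1.0 request in the KEV format is just a set of standard key-value pairs tacked onto the resolver's URL. A rough sketch in Python - the resolver address and article metadata here are made up for illustration:)

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL - not a real service
resolver_base = "https://resolver.example.edu/openurl"

# OpenURL 1.0 (Z39.88-2004) KEV context object describing a journal article;
# the bibliographic values are invented for the example
params = {
    "url_ver": "Z39.88-2004",                       # OpenURL version
    "ctx_ver": "Z39.88-2004",                       # context object version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent is a journal article
    "rft.jtitle": "Journal of Examples",
    "rft.atitle": "A Sample Article",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "34",
}

openurl = resolver_base + "?" + urlencode(params)
print(openurl)
```

Because it's all just query parameters, any code that can build a URL (as the Zotero code does) can target any resolver by swapping the base address.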

I think the point about Link Resolvers doing stuff that Apache and CGI
scripts were already doing is a good one - and I've argued before that what
we should actually do is separate some of this out (a bit like Jonathan did
with Umlaut) into an application that can answer questions about location
(what is generally called the KnowledgeBase in link resolvers) and the
applications that deal with analysing the context and the redirection.
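(To make that separation concrete, here's a minimal sketch - all the names and data are invented - of a KnowledgeBase component that only answers "where can this be found?", with a separate resolver handling the decision and redirect:)

```python
class KnowledgeBase:
    """Answers location questions only; knows nothing about users or redirects."""

    def __init__(self):
        # ISSN -> list of (provider, base_url) holdings; toy data for illustration
        self._holdings = {
            "1234-5678": [("Example Provider", "https://journals.example.com/ex")],
        }

    def locations(self, issn):
        return self._holdings.get(issn, [])


class Resolver:
    """Interprets the incoming context and decides where to redirect."""

    def __init__(self, kb):
        self.kb = kb

    def resolve(self, issn):
        holdings = self.kb.locations(issn)
        if holdings:
            provider, url = holdings[0]
            return 303, url          # redirect to the first matching holding
        return 404, None             # no known location


kb = KnowledgeBase()
status, target = Resolver(kb).resolve("1234-5678")
print(status, target)  # 303 https://journals.example.com/ex
```

The point of splitting them is that the KnowledgeBase could then answer location questions for other applications too, not just the link resolver's redirect flow.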

(To introduce another tangent in a tangential thread, interestingly (I
think!) I'm having a not dissimilar debate about Linked Data at the moment -
there are many who argue that it is too complex, and that as long as you have
a nice RESTful interface you don't need to get bogged down in ontologies,
RDF, etc. I'm still struggling with this one - my instinct is that it will
pay to standardise, but so far I've not managed to convince even myself that
this is more than wishful thinking.)


On Fri, Apr 30, 2010 at 10:33 AM, Alexander Johannesen <> wrote:

> On Fri, Apr 30, 2010 at 18:47, Owen Stephens <> wrote:
> > Could you expand on how you think the problem that OpenURL tackles would
> > have been better approached with existing mechanisms?
> As we all know, it's pretty much a spec for a way to template incoming
> and outgoing URLs, defining some functionality along the way. As such,
> URLs with basic URI templates and rewriting have been around for a
> long time. Even longer than that is just the basics of HTTP which have
> status codes and functionality to do exactly the same. We've been
> doing link resolving since the mid-'90s, either as CGI scripts or as
> Apache modules, so none of this was new. A URI comes in, you look it up
> in a database, you cross-check with other REQUEST parameters (or
> sessions, if you must, as well as IP addresses) and pop out a 303
> (with some possible rewriting of the outgoing URL) (with the hack we
> needed at the time to also create dummy pages with META tags
> *shudder*).
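(For readers who didn't live through this: the flow Alex describes really is only a few lines. A rough sketch, with an invented in-memory table standing in for the real database and a made-up 'on-campus' IP range:)

```python
import sqlite3
from urllib.parse import parse_qs

# Toy in-memory knowledge base; a real CGI script or Apache module would
# query a persistent database. All data here is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE holdings (issn TEXT, target_url TEXT)")
conn.execute(
    "INSERT INTO holdings VALUES ('1234-5678', 'https://journals.example.com/ex')"
)

def resolve(query_string, client_ip):
    """URI comes in, look it up, cross-check the client IP, 303 out."""
    qs = parse_qs(query_string)
    issn = qs.get("issn", [""])[0]
    row = conn.execute(
        "SELECT target_url FROM holdings WHERE issn = ?", (issn,)
    ).fetchone()
    # Only 'on-campus' clients (192.0.2.x, a documentation range) get redirected
    if row and client_ip.startswith("192.0.2."):
        return "303 See Other", row[0]
    return "404 Not Found", None

print(resolve("issn=1234-5678", "192.0.2.10"))
print(resolve("issn=9999-0000", "192.0.2.10"))
```
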
> So the idea was to standardize on a way to do this, and it was a good
> idea as such. OpenURL *could* have had great potential if it
> actually defined something tangible, something concrete like a model
> of interaction or basic rules for fishing and catching tokens and the
> like, and as someone else mentioned, the 0.1 version was quite a good
> start. But by the time 1.0 came out, all the goodness had turned
> so generic and flexible, in such a complex way, that handling it turned
> you right off it. The standard was also written in very difficult language,
> and more specifically didn't use enough of the normal geeky language used
> by sysadmins. The more I tried to wrap my head around it, the
> more I felt like just going back to CGI scripts that looked stuff up
> in a database. It was easier to hack legacy code, which, well, defeats
> the purpose, no?
> Also, forgive me if I've forgotten important details; I've suppressed
> this part of my life. :)
> Kind regards,
> Alex
> --
>  Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps

Owen Stephens
Owen Stephens Consulting
