On Fri, Apr 30, 2010 at 20:29, Owen Stephens <o...@ostephens.com> wrote:
> However I'd argue that actually OpenURL 'succeeded' because it did manage to
> get some level of acceptance (ignoring the question of whether it is v0.1 or
> v1.0) - the cost of developing 'link resolvers' would have been much higher
> if we'd been doing something different for each publisher/platform. In this
> sense (I'd argue) sometimes crappy standards are better than none.

Well, perhaps. I see OpenURL as the natural progression from PURL;
both have had their degree of "success", though I'm careful about
using that word as I live on the outside of the library world. It may
well be a success on the inside. :)

> I think the point about Link Resolvers doing stuff that Apache and CGI
> scripts were already doing is a good one - and I've argued before that what
> we actually should do is separate some of this out (a bit like Johnathan did
> with Umlaut) into an application that can answer questions about location
> (what is generally called the KnowledgeBase in link resolvers) and the
> applications that deal with analysing the context and the redirection

Yes, splitting things into smaller chunks is always smart, especially
with complex issues. For example, in the Topic Maps world, the whole
standard (reference model, data model, query language, constraint
language, XML exchange language, various notational languages) is
wrapped up with a guide in the middle. Break them into smaller
parcels, and you can make your flexibility point there. If you pop it
all into one, no one will read it and fully understand it. (And don't
get me started on the WS-* set of standards and the same issues ...)

> (To introduce another tangent in a tangential thread, interestingly (I
> think!) I'm having a not dissimilar debate about Linked Data at the moment -
> there are many who argue that it is too complex and that as long as you have
> a nice RESTful interface you don't need to get bogged down in ontologies and
> RDF etc. I'm still struggling with this one - my instinct is that it will
> pay to standardise but so far I've not managed to convince even myself this
> is more than wishful thinking at the moment)

Ah, now this is certainly up my alley. As you might have seen, I'm a
Topic Maps guy, and we have in our model a distinction between three
different kinds of identity: internal identifiers, external
indicators, and published subject identifiers. The RDF world has only
rdf:about, so when you use "www.somewhere.org", are you talking about
that thing itself, or does that thing represent something else you're
talking about? Tricky stuff, which has these days become a *huge*
problem in Linked Data. And yes, they're trying to solve it by
issuing an HTTP 303 status code as a means of declaring that the URI
identifies a thing rather than a document about it, which is a *lot*
of resolving to do on any substantial set of data, and in my eyes a
huge ugly hack. (And what if your Internet connection falls down?
Tough.)
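To make that 303 dance concrete, here's a minimal sketch of the
server-side rule (the /id/ and /doc/ paths are hypothetical; real
sites lay this out differently): a URI naming the thing itself
answers with 303 See Other, pointing at a separate URI for a
document that describes the thing.

```python
# Minimal sketch of the Linked Data 303 pattern. Hypothetical
# convention: URIs under /id/ name real-world things, URIs under
# /doc/ name documents about them. A server can't return the
# thing itself, so a request for a thing-URI gets a 303 See
# Other redirect to the describing document.

def resolve(path):
    """Return (status, location) for a requested path."""
    if path.startswith("/id/"):
        # Thing-URI: redirect the client to the document about it.
        return 303, "/doc/" + path[len("/id/"):]
    # Document-URI (or anything else): serve it directly.
    return 200, None

# A client dereferencing a thing-URI therefore makes two round
# trips per identifier -- the extra resolving complained about above.
status, location = resolve("/id/some-subject")
```

Every identifier in a dataset costs an extra round trip before you
ever see data, which is the scaling complaint.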

Anyway, here's more on these identity problems:

As to the RESTful notions, they only take you as far as content-types
can take you. Sure, you can glean semantics from them, but I reckon
there's an impedance mismatch right at the thing librarians have got
down pat: metadata vs. data. CRUD or, in this case, GPPD
(get/post/put/delete), which aren't a dichotomy btw, can only
determine behaviour that enables certain semantic paradigms; it
cannot speak about more complex relationships or even modest models.
(Very often models aren't actionable :)
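For what it's worth, here's roughly what I mean by content-types only
taking you so far -- a hypothetical sketch of HTTP content
negotiation (the representations are placeholders): the Accept
header selects a serialisation of a resource, but nothing in the
mechanism says what the data inside it *means*.

```python
# Hypothetical content-negotiation sketch. The Accept header
# picks which serialisation of the same resource you get back;
# the mechanism conveys the *format* (media type) only, not the
# model or the relationships inside it.

REPRESENTATIONS = {
    "application/rdf+xml": "<rdf:RDF>...</rdf:RDF>",
    "text/html": "<html>...</html>",
}

def negotiate(accept_header):
    """Return (media_type, body) for the first acceptable type."""
    for media_type in accept_header.split(","):
        # Strip whitespace and any quality parameter like ";q=0.9".
        media_type = media_type.strip().split(";")[0]
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    # Fall back to HTML if nothing matched.
    return "text/html", REPRESENTATIONS["text/html"]

media_type, body = negotiate("application/rdf+xml, text/html")
```

You get RDF/XML back, but knowing it's RDF/XML tells you nothing
about the ontology or the model it carries; that knowledge lives
outside the negotiation.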

The funny thing is that after all these years of working with Topic
Maps I find that these hard issues have been solved years ago, and the
rest of the world is slowly catching up to it. I blame the lame
DAML+OIL background of RDF and OWL, to be honest; a model too simple
to be elegantly advanced and too complex to be easily useful.

Kind regards,

 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
