-----Original Message-----
From: Code for Libraries [mailto:[email protected]] On
Behalf Of Ross Singer
Sent: 15 September 2009 15:56
To: [email protected]
Subject: Re: [CODE4LIB] Implementing OpenURL for simple web resources
Owen, I might have missed it in this message -- my eyes are
starting to glaze over at this point in the thread -- but can you
describe how the input of these resources would work?
What I'm basically asking is: what would the professor need to do
to add a new citation for a 70-year-old book, a journal on PubMed,
or a URL to CiteSeer? How does their input make it into your
database?
-Ross.
On Tue, Sep 15, 2009 at 5:04 AM, O.Stephens
<[email protected]> wrote:
> True. How, from the OpenURL, are you going to know that the rft is
> meant to represent a website?
I guess that was part of my question. But no one has suggested
defining a new metadata profile for websites (which I would
probably avoid tbh). DC doesn't seem to offer a nice way of doing
this (that is, saying 'this is a website'), although there are
perhaps some bits and pieces (format, type) that could be used to
give some indication (though I suspect not unambiguously).
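To illustrate the 'bits and pieces': with the KEV Dublin Core
format you could send something like the following (the resolver
address and all the values here are invented for illustration),
but nothing in rft.type/rft.format says 'this is a website'
unambiguously:

http://resolver.example.ac.uk/?url_ver=Z39.88-2004
    &rft_val_fmt=info:ofi/fmt:kev:mtx:dc
    &rft.title=Example+site
    &rft.type=InteractiveResource
    &rft.format=text/html
    &rft.identifier=http%3A%2F%2Fwww.example.org%2F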
> But I still think what you want is simply a purl server. What
> makes you think you want OpenURL in the first place? But I still
> don't really understand what you're trying to do: "deliver
> consistency of approach across all our references" -- so are you
> using OpenURL for its more "conventional" use too, but you want
> to tack on a purl-like functionality to the same software that's
> doing something more like a conventional link resolver? I don't
> completely understand your use case.
I wouldn't use OpenURL just to get a persistent URL - I'd almost
certainly look at PURL for this. But I want something slightly
different: I want our course authors to be able to use whatever
URL they know for a resource, while we still try to ensure that
the link keeps working over time. I don't think it is reasonable
to expect a user to know a 'special' URL for a resource - and the
PURL approach would mean establishing a PURL for every resource
used in our teaching material, whether or not it ever moves, which
is an overhead it would be nice to avoid.
You can hit delete now if you aren't interested, but ...
... perhaps if I just say a little more about the project
I'm working on it may clarify...
The project I'm working on is concerned with referencing
and citation. We are looking at how references appear in
teaching material (esp. online) and how they can be reused by
students in their personal environment (in essays, later
study, or something else). The references that appear can be
to anything - books, chapters, journals, articles, etc.
Increasingly of course there are references to web-based materials.
For print material, references generally describe the resource and
nothing more, but for digital material references are expected not
only to describe the resource but also to state a route of access
to it. This tends to be a bad idea when (for example) referencing
e-journals, as we know the problems that surround this - many
different routes of access to the same item. OpenURLs work well in
this situation and seem to me like a sensible (and perhaps the only
viable) solution. So we can say that for journals/articles it is
sensible to ignore any URL supplied as part of the reference and to
form an OpenURL instead. If there is a DOI in the reference (which
is increasingly common) then that can be used to form a URL via DOI
resolution, but it makes more sense to me to hand this off to
another application rather than bake it into the reference - and
OpenURL resolvers are reasonably well placed to do this.
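As a concrete example (the resolver hostname and the DOI are just
placeholders), an article with a DOI might travel as something
like:

http://resolver.example.ac.uk/?url_ver=Z39.88-2004
    &rft_val_fmt=info:ofi/fmt:kev:mtx:journal
    &rft_id=info:doi/10.1000/xyz123
    &rft.atitle=Some+article&rft.jtitle=Some+journal

and the resolver can then decide whether to send the user via DOI
resolution (dx.doi.org) or to a more appropriate local copy.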
If we look at a website, it is pretty difficult to reference it
without including the URL - it seems to be the only good way of
describing what you are actually talking about (how many people
think of websites by 'title', 'author' and 'publisher'?). For me,
this leads to an immediate confusion between the description of
the resource and the route of access to it. So, to differentiate,
I'm starting to think of the http URI in a reference like this as
a URI, but not necessarily a URL. We then need some mechanism to
check, given a URI, what the URL currently is.
Now I could do this with a script - just pass the URI to a script
that checks what URL to use against a list and redirects the user
if necessary. On this point Jonathan said "if the usefulness of
your technique does NOT count on being inter-operable with existing
link resolver infrastructure... PERSONALLY I would be using
OpenURL, I don't think it's worth it" - but it struck me that if we
were passing a URI to a script anyway, why not pass it in an
OpenURL? I could see a number of advantages to this in the local
context (there's a rough sketch of what I mean after the list):
- Consistency: references to websites get treated the same as
  references to journal articles, which means a single approach on
  the course side, with flexibility.
- Usage stats: we could collect these whichever way we do it, but
  if we do it via OpenURL we get them in the same place as the
  stats on usage of other scholarly material, and could consider
  driving personalisation services off the data (like the bX
  product from Ex Libris).
- Appropriate copy: for resources we subscribe to with
  authentication mechanisms there is (I think) an equivalent of the
  'appropriate copy' issue we have with journal articles - we can
  resolve a URI for 'Web of Science' to the correct version of Web
  of Science via a local authentication method (EZproxy in our
  case).
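To make the 'script' idea concrete, here is a rough sketch (in
Python; every hostname, table entry and parameter choice below is
invented) of the resolver-side logic I have in mind: pull the cited
URI out of the OpenURL, check it against a list of known moves, add
our EZproxy prefix where authentication is needed, and redirect.

# Rough sketch only: a minimal WSGI app standing in for the resolver.
from urllib.parse import parse_qs, quote
from wsgiref.simple_server import make_server

# URIs as cited by course authors -> current URLs (maintained by the library)
MOVED = {
    "http://www.example.org/old/report.html":
        "http://www.example.org/archive/2009/report.html",
}

# Subscribed resources that should be reached via our EZproxy instance
AUTHENTICATED = {"http://database.example.com/wos"}
EZPROXY_PREFIX = "http://ezproxy.example.ac.uk/login?url="

def resolver(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    # the cited URI travels as the referent identifier (rft_id)
    uri = params.get("rft_id", [""])[0]
    if not uri:
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"Missing rft_id"]

    target = MOVED.get(uri, uri)      # use the current URL if it has moved
    if uri in AUTHENTICATED:          # 'appropriate copy' for websites
        target = EZPROXY_PREFIX + quote(target, safe="")

    # usage stats could be logged here, alongside other OpenURL traffic
    start_response("302 Found", [("Location", target)])
    return [b""]

if __name__ == "__main__":
    make_server("", 8000, resolver).serve_forever()

The course side would then just emit links of the form
http://resolver.example.ac.uk/?url_ver=Z39.88-2004&rft_id=http%3A%2F%2Fwww.example.org%2Fold%2Freport.html
- the same pattern as for articles, which is the consistency point
above.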
The problem with the approach (as Nate and Eric mention) is that
anything relying on the URI as an identifier (whether using OpenURL
or a script) is going to have problems, as the same URI could be
used to identify different resources over time. I think Eric's
suggestion of using additional information to help differentiate is
worth looking at, though I suspect this is still going to cause us
problems - although I'd say it is likely to cause us much less work
than the alternative, which is allocating every single web resource
referenced in our course material its own persistent URL.
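If we did go down Eric's route then - and this is only my guess at
what the 'additional information' might be - the lookup in the
sketch above would probably end up keyed on the URI plus something
like the date the reference was created, rather than the bare URI,
e.g.

MOVED = {
    ("http://www.example.org/", "2007-09"): "http://old.example.org/2007/",
    ("http://www.example.org/", "2009-02"): "http://www.example.org/",
}

so that the same URI can resolve differently depending on when it
was cited.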
The use case we are currently looking at sits entirely within our
own (authenticated) learning environment, so these OpenURLs are not
going to appear in the wild; to some extent perhaps it doesn't
matter what we do, but it still seems sensible to me to look at
what 'good practice' might look like.
I hope this is clear - I'm still struggling with some of
this, and sometimes it doesn't make complete sense to me, but
that's my best stab at explaining my thinking at the moment.
Again, I appreciate the comments. Jonathan said "But you seem to
understand what's up". I wish I did! I guess I'm reasonably
confident that the approach I'm describing has some chance of
doing the job; whether it is the best approach, I'm less sure.
Owen