I do like Ross's solution, if you really wanna use OpenURL. I'm much more comfortable with the idea of including a URI based on your own local service in rft_id than with including any old public URL in rft_id.

Then at least your link resolver can say "if what's in rft_id begins with (e.g.) http://telstar.open.ac.uk/, THEN I know this is one of these purl-type things, and I know that sending the user to it will result in a redirect to an end-user-appropriate access URL." That's my concern with putting random URLs in rft_id: there's no way to know whether they are intended as end-user-appropriate access URLs, and you end up putting things in rft_id that aren't really good "identifiers" for the referent at all. But using your own local service ID, now you really DO have something that's appropriately considered a "persistent identifier" for the referent, AND you have a straightforward way to tell when the rft_id of this context is intended as an access URL.
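A minimal sketch of that check, assuming a resolver hook that receives the parsed context object (the telstar.open.ac.uk prefix is from this thread; the function and type names are purely illustrative, not from any real resolver):

// Decide whether an incoming rft_id is one of "our" local
// persistent identifiers (safe to send the user to) or just
// some random URL.
const LOCAL_ID_PREFIX = "http://telstar.open.ac.uk/";

interface ContextObject {
  rft_id?: string;
  // ...other KEV fields omitted
}

function accessUrlFor(ctx: ContextObject): string | null {
  if (ctx.rft_id && ctx.rft_id.startsWith(LOCAL_ID_PREFIX)) {
    // A local purl-type identifier: dereferencing it redirects
    // to an end-user-appropriate access URL.
    return ctx.rft_id;
  }
  // Anything else is just an identifier; we can't assume it's
  // safe to send the user there directly.
  return null;
}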

Jonathan

Ross Singer wrote:
Oh yeah, one thing I left off --

In Moodle, it would probably make sense to link to the URL in the <a> tag:
<a href="http://bbc.co.uk/">The Beeb!</a>
but use a JavaScript onMouseDown action to rewrite the link to route
through your funky link resolver path, a la Google.

That way, the page works like any normal webpage, "right mouse
click->Copy Link Location" gives the user the "real" URL to copy and
paste, but normal behavior funnels through the link resolver.
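Roughly like this (a sketch in TypeScript; the resolver base URL and the data-resolve attribute are made up for illustration):

// Keep the "real" URL in href so the status bar and "Copy Link
// Location" behave normally, but swap in the resolver route just
// before a left-click navigates.
const RESOLVER_BASE = "http://res.open.ac.uk/?rft_id=";

document.querySelectorAll<HTMLAnchorElement>("a[data-resolve]").forEach((a) => {
  const realUrl = a.href;
  a.addEventListener("mousedown", (e) => {
    if (e.button === 0) { // left button only, so right-click copy keeps the real URL
      a.href = RESOLVER_BASE + encodeURIComponent(realUrl);
    }
  });
});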

-Ross.

On Tue, Sep 15, 2009 at 11:41 AM, Ross Singer <[email protected]> wrote:
Given that the burden of creating these links is entirely on RefWorks
& Telstar, OpenURL seems as good a choice as anything (since anything
would require some other service, anyway).  As long as the profs
aren't expected to mess with it, I'm not sure that *how* you do the
indirection matters all that much and, as you say, there are added
bonuses to keeping it within SFX.

It seems to me, though, that your rft_id should be a URI to the db
you're using to store their references, so your CTX would look
something like:

http://res.open.ac.uk/?rfr_id=info:sid/telstar.open.ac.uk&rft_id=http://telstar.open.ac.uk/1234&dc.identifier=http://bbc.co.uk/
# not url encoded because I have, you know, a life.

I can't remember if you can include both metadata-by-reference keys
and metadata-by-value, but you could have by-reference
(&rft_ref=http://telstar.open.ac.uk/1234&rft_ref_fmt=RIS or something)
point at your citation db to return a formatted citation.
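For what it's worth, here's the same context object built with proper URL encoding (a sketch only: the rft_ref_fmt value is a guess -- "application/x-research-info-systems" is the MIME type usually used for RIS -- and whether a resolver accepts by-value and by-reference together is exactly the open question above):

// Build the OpenURL from the example above, URL-encoded this time.
const params = new URLSearchParams({
  rfr_id: "info:sid/telstar.open.ac.uk",
  rft_id: "http://telstar.open.ac.uk/1234",
  "dc.identifier": "http://bbc.co.uk/",
  rft_ref: "http://telstar.open.ac.uk/1234",          // metadata-by-reference
  rft_ref_fmt: "application/x-research-info-systems", // guessed RIS MIME type
});
const openUrl = `http://res.open.ac.uk/?${params.toString()}`;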

This way your citations are unique -- somebody pointing at today's
London Times front page isn't the same as somebody else pointing
at it on a different day.

While I'm shocked that I agree with using OpenURL for this, it seems
as reasonable as any other solution.  That being said, unless you can
definitely offer some other service besides linking to the resource,
I'd avoid the resolver menu completely.

-Ross.

On Tue, Sep 15, 2009 at 11:17 AM, O.Stephens <[email protected]> wrote:
Ross - no, you didn't miss it.

There are 3 ways that references might be added to the learning environment:

1. An author (or realistically a proxy on behalf of the author) can insert a
reference into a structured Word document from an RIS file. This structured
document (XML) then goes through a 'publication' process which pushes the
content to the learning environment (Moodle), including rendering the
references from RIS format into a specified style, with links.
2. An author/librarian/other can import references to a 'resources' area in our
learning environment (Moodle) from an RIS file.
3. An author/librarian/other can subscribe to an RSS feed from a RefWorks
'RefShare' folder within the 'resources' area of the learning environment.

In general the project is focussing on the use of RefWorks - so although the 
RIS files could be created by any suitable s/w, we are looking specifically at 
RefWorks.

How you get the reference into RefWorks is something we are looking at 
currently. The best approach varies depending on the type of material you are 
looking at:

- For websites it looks like the 'RefGrab-it' bookmarklet/browser plugin
(depending on your browser) is the easiest way of capturing website details.
- For books, probably a Union catalogue search from within RefWorks.
- For journal articles, probably a federated search engine (SS 360 is what
we've got).
- Any of these could be entered by hand of course, as could several other
kinds of reference.

Entering the references into RefWorks could be done by an author, but it is
more likely to be done by a member of clerical staff or a librarian/library
assistant.

Owen

Owen Stephens
TELSTAR Project Manager
Library and Learning Resources Centre
The Open University
Walton Hall
Milton Keynes, MK7 6AA

T: +44 (0) 1908 858701
F: +44 (0) 1908 653571
E: [email protected]


-----Original Message-----
From: Code for Libraries [mailto:[email protected]] On
Behalf Of Ross Singer
Sent: 15 September 2009 15:56
To: [email protected]
Subject: Re: [CODE4LIB] Implementing OpenURL for simple web resources

Owen, I might have missed it in this message -- my eyes are
starting to glaze over at this point in the thread -- but can you
describe how the input of these resources would work?

What I'm basically asking is -- what would the professor need
to do to add a new citation for a 70-year-old book, a journal
on PubMed, or a URL to CiteSeer?

How does their input make it into your database?

-Ross.

On Tue, Sep 15, 2009 at 5:04 AM, O.Stephens
<[email protected]> wrote:
> True. How, from the OpenURL, are you going to know that the rft
> is meant to represent a website?

I guess that was part of my question. But no one has suggested
defining a new metadata profile for websites (which I probably
would avoid tbh). DC doesn't seem to offer a nice way of doing
this (that is, saying 'this is a website'), although there are
perhaps some bits and pieces (format, type) that could be used to
give some indication (but I suspect not unambiguously).

> But I still think what you want is simply a purl server. What
> makes you think you want OpenURL in the first place? But I still
> don't really understand what you're trying to do: "deliver
> consistency of approach across all our references" -- so are you
> using OpenURL for its more "conventional" use too, but you want
> to tack on a purl-like functionality to the same software that's
> doing something more like a conventional link resolver? I don't
> completely understand your use case.

I wouldn't use OpenURL just to get a persistent URL - I'd almost
certainly look at PURL for this. But I want something slightly
different. I want our course authors to be able to use whatever
URL they know for a resource, but still try to ensure that the
link works persistently over time. I don't think it is reasonable
for a user to have to know a 'special' URL for a resource - and
the PURL approach means establishing a PURL for every resource
used in our teaching material whether or not it moves in the
future - which is an overhead it would be nice to avoid.
You can hit delete now if you aren't interested, but ...

... perhaps if I just say a little more about the project
I'm working on it may clarify...
The project I'm working on is concerned with referencing and
citation. We are looking at how references appear in teaching
material (esp. online) and how they can be reused by students in
their personal environment (in essays, later study, or something
else). The references that appear can be to anything - books,
chapters, journals, articles, etc. Increasingly, of course, there
are references to web-based materials.

For print material, references generally describe the resource
and nothing more, but for digital material references are
expected not only to describe the resource but also to state a
route of access to it. This tends to be a bad idea when (for
example) referencing e-journals, as we know the problems that
surround this - many different routes of access to the same item.
OpenURLs work well in this situation and seem to me like a
sensible (and perhaps the only viable) solution. So we can say
that for journals/articles it is sensible to ignore any URL
supplied as part of the reference, and to form an OpenURL
instead. If there is a DOI in the reference (which is
increasingly common) then that can be used to form a URL using
DOI resolution, but it makes more sense to me to hand this off to
another application rather than bake it into the reference - and
OpenURL resolvers are reasonably well placed to do this.
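As a concrete example of that hand-off: a DOI becomes an
actionable URL just by prefixing the public DOI resolver (and a
resolver can do the same internally from rft_id=info:doi/...).
The DOI below is the example one from the DOI Handbook, not one
from our material:

const doi = "10.1000/182"; // example DOI from the DOI Handbook
const doiUrl = `http://dx.doi.org/${doi}`; // via the public DOI resolver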
If we look at a website, it is pretty difficult to reference it
without including the URL - it seems to be the only good way of
describing what you are actually talking about (how many people
think of websites by 'title', 'author' and 'publisher'?). For me,
this leads to an immediate confusion between the description of
the resource and the route of access to it. So, to differentiate,
I'm starting to think of the http URI in a reference like this as
a URI, but not necessarily a URL. We then need some mechanism to
check, given the URI, what the URL is.
Now I could do this with a script - just pass the URI to a script
that checks what URL to use against a list and redirects the user
if necessary (there's a sketch of this after the list below). On
this point Jonathan said "if the usefulness of your technique
does NOT count on being inter-operable with existing link
resolver infrastructure... PERSONALLY I would not be using
OpenURL, I don't think it's worth it" - but it struck me that if
we were passing a URI to a script anyway, why not pass it in an
OpenURL? I could see a number of advantages to this in the local
context:

- Consistency: references to websites get treated the same as
references to journal articles - this means a single approach on
the course side, with flexibility
- Usage stats: we could collect these anyway, but if we do it via
OpenURL we get them in the same place as the stats about usage of
other scholarly material, and could consider driving
personalisation services off the data (like the bX product from
Ex Libris)
- Appropriate copy problem: for resources we subscribe to with
authentication mechanisms there is (I think) an equivalent of the
'appropriate copy' issue for journal articles - we can push a URI
to 'Web of Science' to the correct version of Web of Science via
a local authentication method (using ezproxy for us)
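Here is the sketch of that script, for the sake of argument (the
lookup table contents, the "uri" parameter name and the port are
invented; Node's built-in http module provides the HTTP framing):

import * as http from "http";

// URI as cited in the reference -> current access URL. Only
// resources that have moved need an entry; everything else
// falls through to the URI itself.
const movedResources = new Map<string, string>([
  ["http://bbc.co.uk/oldpage", "http://bbc.co.uk/newpage"], // invented example
]);

http.createServer((req, res) => {
  const uri = new URL(req.url ?? "/", "http://localhost").searchParams.get("uri");
  if (!uri) {
    res.writeHead(400);
    res.end("missing uri parameter");
    return;
  }
  // Redirect to the updated URL if we know one, else to the URI as given.
  res.writeHead(302, { Location: movedResources.get(uri) ?? uri });
  res.end();
}).listen(8080);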

The problem with the approach (as Nate and Eric mention) is that
anything that relies on the URI as an identifier (whether using
OpenURL or a script) is going to have problems, as the same URI
could be used to identify different resources over time. I think
Eric's suggestion of using additional information to help
differentiate is worth looking at, but I suspect that this is
going to cause us problems - although I'd say it is likely to
cause us much less work than the alternative, which is allocating
every single reference to a web resource used in our course
material its own persistent URL.
The use case we are currently looking at is only within our own
(authenticated) learning environment - so these OpenURLs are not
going to appear in the wild, and to some extent perhaps it
doesn't matter what we do - but it still seems sensible to me to
look at what 'good practice' might look like.

I hope this is clear - I'm still struggling with some of this,
and sometimes it doesn't make complete sense to me, but that's my
best stab at explaining my thinking at the moment.

Again, I appreciate the comments. Jonathan said "But you seem to
understand what's up". I wish I did! I guess I'm reasonably
confident that the approach I'm describing has some chance of
doing the job - whether it is the best approach I'm not so sure.
Owen


The Open University is incorporated by Royal Charter (RC
000391), an exempt charity in England & Wales and a charity
registered in Scotland (SC 038302).

