On 24 Feb 2012, at 18:20, Joe Hourcle wrote:

> On Feb 24, 2012, at 9:25 AM, Kyle Banerjee wrote:
> I see it like the people who request that their pages not be cached elsewhere 
> -- they want to make their object 'discoverable', but they want to control 
> the access to those objects -- so it's one thing for a search engine to get a 
> copy, but they don't want that search engine being an agent to distribute 
> copies to others.
> 
That may be true - certainly some repositories publish policy statements that
imply this type of thinking - e.g. a typical phrase used is "Full items must
not be harvested by robots except transiently for full-text indexing or
citation analysis". This type of policy is usually made available via the
OAI-PMH 'Identify' response. There are some issues with this. Firstly, textual
policy statements like this don't help when you want to machine-harvest many
repositories. Secondly, these statements will never be seen by a web crawler,
which speaks HTTP and robots.txt rather than OAI-PMH. Thirdly, 'transiently'
is not defined. Lastly, the limitation to two specific uses seems odd - for
instance, it would seem to me that semantic analysis of the text is not
strictly covered by this - but was that the intention of those framing the
policy, or did they just want to say "don't copy our stuff and serve it up
from your own application"? (Of course, different repositories will have
different views on this.)
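For what it's worth, because these policies sit as free text inside the
<description> blocks of the Identify response, a harvester can at least
surface them mechanically, even if it can't interpret them. A minimal sketch
(the sample XML below is illustrative - real repositories use varying
description schemas, so this just grabs any namespaced <text> element):

```python
import xml.etree.ElementTree as ET

# A trimmed, illustrative OAI-PMH Identify response. Real repositories
# embed free-text policy statements in <description> blocks like this.
SAMPLE_IDENTIFY = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <Identify>
    <repositoryName>Example Repository</repositoryName>
    <description>
      <eprints xmlns="http://www.openarchives.org/OAI/1.1/eprints">
        <content>
          <text>Full items must not be harvested by robots except
transiently for full-text indexing or citation analysis.</text>
        </content>
      </eprints>
    </description>
  </Identify>
</OAI-PMH>
"""

def extract_policy_text(identify_xml):
    """Collect free-text statements from an Identify response.

    Matches <text> elements regardless of namespace, since different
    repositories use different description schemas (eprints, rightsManifest,
    and so on) - we can find the text, but not machine-interpret it.
    """
    root = ET.fromstring(identify_xml)
    return [el.text.strip() for el in root.iter()
            if el.tag.endswith('}text') and el.text and el.text.strip()]

for policy in extract_policy_text(SAMPLE_IDENTIFY):
    print(policy)
```

Which rather proves the point: the best a bulk harvester can do is show the
policy text to a human, one repository at a time.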

Also, some of the policies go further than this. For example, the University
of Cambridge policy states that *for metadata* "The metadata must not be
re-used in any medium for commercial purposes without formal permission" - yet
the repository does not block search engines from crawling in robots.txt -
this is the kind of thing I see as inconsistent. I realise robots.txt is just
a request to search engines, and isn't equivalent to a policy on reuse (e.g. a
permissive robots.txt doesn't imply there is no copyright in the content being
made available) - but there is no doubt that Google use the content they
harvest for commercial purposes. So this is a mixed message to some extent -
meaning a well-behaved OAI-PMH harvester might feel more constrained than a
well-behaved web crawler (even though I guess the legal situation would be
pretty much the same for both in terms of actual rights to use the harvested
data).
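The asymmetry is easy to see if you put the two side by side: the crawler's
view of the world is just robots.txt, which it can evaluate mechanically. A
sketch using Python's stdlib robotparser (the robots.txt content and URL here
are made up for illustration - they stand in for the permissive setup
described above):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical fully permissive robots.txt, of the kind a repository
# might serve alongside a "must not be harvested by robots" OAI-PMH policy.
# An empty Disallow rule means everything may be fetched.
ROBOTS_TXT = """\
User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A web crawler consulting robots.txt sees no restriction at all on the
# full-text item...
print(rp.can_fetch('*', 'https://repository.example.org/item/123/full.pdf'))

# ...so the well-behaved web crawler is effectively *less* constrained than
# the well-behaved OAI-PMH harvester honouring the textual policy.
```

That's the mixed message in a nutshell: one channel says "take everything",
the other says "don't".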

Again, I don't mean to pick on Cambridge - they aren't the only institution to 
run this kind of policy, but they are one everyone will have heard of :)

> Eg, all of the journal publishers who charge access fees -- they want people 
> to find that they have a copy of that article that you're interested in ... 
> but they want to collect their $35 for you to read it.

Agreed - this type of issue came up with Google News and led to the
introduction of the 'First Click Free' programme
(http://googlenewsblog.blogspot.com/2009/12/update-to-first-click-free.html) -
although I'm not sure whether it is still in operation.

> 
> In the case of scientific data, the problem is that to make stuff 
> discoverable, we often have to perform some lossy transformation to fit some 
> metadata standard, and those standards rarely have mechanisms for describing 
> error (accuracy, precision, etc.).  You can do some science with the catalog 
> records, but it's going to introduce some bias into your results, so you're 
> typically better off getting the data from the archive.  (and sometimes, they 
> have nice clean catalogs in FITS, VOTable, CDF, NetCDF, HDF or whatever their 
> discipline's preferred data format is)

This is going into areas I'm not so familiar with - at the moment the project 
I'm working on is looking at article-level data only (so mostly PDFs with 
straightforward metadata).
> 
> ...
> 
> Also, I don't know if things have changed in the last year, but I seem to 
> remember someone mentioning at last year's RDAP (Research Data Access & 
> Preservation) summit that Google had coordinated with some libraries for 
> feeds from their catalogs, but was only interested in books, not other 
> objects.
> 
> I don't know how other search engines might use data from OAI-PMH, or if 
> they'd filter it because they didn't consider it to be information they cared 
> about.
> 
I don't think that Google ever used OAI-PMH to harvest metadata like this,
although they did support it for sitemaps for a short time
(http://googlewebmastercentral.blogspot.com/2008/04/retiring-support-for-oai-pmh-in.html).
It may be that they have used it in specific cases to get library catalogue
records, but I'm not aware of it.

Thanks

Owen
