Re: [CODE4LIB] internet archive experiment -- bad metadata

2010-05-19 Thread Barnett, Jeffrey
How common is the kind of metadata mismatch* associated with this record?
http://openlibrary.org/books/OL23383343M/Cisco_Networking_Academy_Program
What is the point of contact for making corrections?

*The metadata is about Unix (2004); the book is about Ben Franklin (1908).
"Contributed by Google"

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Eric 
Lease Morgan
Sent: Friday, May 14, 2010 2:05 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] internet archive experiment

We are doing a tiny experiment here at Notre Dame with the Internet Archive; 
specifically, we are determining whether or not we can supplement a special 
collection with full text content.

We are hosting a site colloquially called the Catholic Portal -- a collection 
of rare, infrequently held, and uncommon materials of a Catholic nature. [1] 
Much of the content of the Portal is metadata -- MARC and EAD records/files. I 
think the Portal would be more useful if it contained full text content. If it 
did, then indexing would be improved and services against the texts could be 
implemented.

How can we get full text content? This is what we are going to try (a sketch of 
the query-and-parse steps follows the list):

  1. parse out identifying information from
 metadata (author names, titles, dates,
 etc.)

  2. construct a URL in the form of an
     Advanced Search query [2] and send it
     to the Archive

  3. get back a list of matches in an XML
 format

  4. parse the result looking for the "best"
 matches

  5. save Internet Archive keys identifying
 full text items

  6. mirror Internet Archive content locally
 using keys as pointers

  7. update local metadata files pointing to
 Archive content as well as locally
 mirrored content

  8. re-index local metadata
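
To make steps 2 through 4 concrete, here is a minimal sketch in Python, 
assuming the public advancedsearch.php interface [2]; the query fields, the 
fl[]/rows/output parameters, and the JSON response layout are assumptions 
about that interface rather than our actual code (the same endpoint can also 
return the XML mentioned in step 3).

```python
# Minimal sketch of steps 2-4: build an Advanced Search query, send it
# to the Archive, and pull item identifiers (IA keys) out of the reply.
# Field names and response layout are assumptions about advancedsearch.php.
import json
import urllib.parse
import urllib.request

def find_candidate_keys(author, title, rows=10):
    """Return Internet Archive identifiers that look like matches."""
    query = f'creator:("{author}") AND title:("{title}") AND mediatype:texts'
    params = urllib.parse.urlencode(
        {"q": query, "fl[]": "identifier", "rows": rows, "output": "json"}
    )
    url = "http://www.archive.org/advancedsearch.php?" + params
    with urllib.request.urlopen(url) as response:
        result = json.load(response)
    # Choosing the "best" match (step 4) still needs local ranking against
    # our metadata; here we simply return every key the Archive sent back.
    return [doc["identifier"] for doc in result["response"]["docs"]]

if __name__ == "__main__":
    print(find_candidate_keys("Benjamin Franklin", "Autobiography"))
```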

If we are (somewhat) successful, then search results would not only have 
pointers to the physical items, but they would also have pointers to the 
digitized items. Not only could they have pointers to the digitized items, but 
they could also have pointers to "services against the texts" such as make word 
cloud, display concordance, plot word/phrase frequency, etc. These latter 
services are spaces where I think there is great potential for librarianship.

Frankly, because of the Portal's collection policy, I don't expect to find very 
much material. On the other hand, the same process could be applied to more 
generic library collections where more content may have already been digitized. 

Wish us luck.

[1] Catholic Portal - http://www.catholicresearch.net/
[2] Advanced search - http://www.archive.org/advancedsearch.php

-- 
Eric Lease Morgan
University of Notre Dame


Re: [CODE4LIB] internet archive experiment

2010-05-15 Thread Markus Wust
Hi,

When the NCSU Libraries decided to locally host content that we created for 
OCA, I put together a set of PHP scripts similar to, but more basic than, what 
Graham described. Based on a list of IDs, one script calculated the amount of 
storage that we needed. This does not work for the different types of e-reader 
files, since those only provide page numbers, but the size of such files is 
rather negligible.
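
In Python, that storage-estimate step might look roughly like the sketch 
below; the per-item listing at .../{id}_files.xml and its <size> elements are 
assumptions about the Archive's public layout rather than the actual PHP, and 
the identifier at the bottom is only a placeholder.

```python
# Rough sketch of the storage estimate: sum the declared sizes of every
# file attached to each item in the ID list.
import urllib.request
import xml.etree.ElementTree as ET

def estimate_bytes(identifiers):
    total = 0
    for item_id in identifiers:
        url = f"http://www.archive.org/download/{item_id}/{item_id}_files.xml"
        with urllib.request.urlopen(url) as response:
            root = ET.parse(response).getroot()
        for file_node in root.findall("file"):
            size = file_node.findtext("size")
            if size is not None:  # some entries carry no size at all
                total += int(size)
    return total

print(estimate_bytes(["someidentifier"]) / 1e9, "GB, roughly")  # placeholder ID
```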

Another script downloaded the files, stored them in a separate folder, 
created checksums, and recorded the filenames, types, and original and 
post-download checksums in a MySQL database so that we could check for any 
discrepancies.
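
A condensed sketch of that second script, with sqlite3 standing in for the 
MySQL database; the table layout, download URL pattern, and identifier are 
illustrative assumptions, not the production PHP.

```python
# Sketch of the download-and-record step: fetch a file, store it locally,
# compute an md5, and record everything so discrepancies can be spotted.
import hashlib
import os
import sqlite3
import urllib.request

def mirror_file(item_id, filename, conn):
    url = f"http://www.archive.org/download/{item_id}/{filename}"
    os.makedirs("mirror", exist_ok=True)
    local_path = os.path.join("mirror", f"{item_id}_{filename}")
    with urllib.request.urlopen(url) as response:
        data = response.read()
    with open(local_path, "wb") as out:
        out.write(data)
    conn.execute(
        "INSERT INTO downloads (item, filename, post_download_md5) VALUES (?, ?, ?)",
        (item_id, filename, hashlib.md5(data).hexdigest()),
    )
    conn.commit()

conn = sqlite3.connect("mirror.db")
conn.execute("CREATE TABLE IF NOT EXISTS downloads "
             "(item TEXT, filename TEXT, post_download_md5 TEXT)")
mirror_file("someidentifier", "someidentifier.pdf", conn)  # placeholder names
```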

Let me know if you have any questions,
Markus


Re: [CODE4LIB] internet archive experiment

2010-05-14 Thread Graham Stewart

Hi,

I may be able to assist you with the content mirroring part of this. 
The University of Toronto Libraries hosts one of the Internet Archive 
scanning operations through the Open Content Alliance and we host 
content originally scanned by the Archive through the OCUL 
Scholarsportal project at this URL:  http://books.scholarsportal.info


In order to retrieve content from the IA (since it is sent immediately 
to San Francisco as it is scanned), I've written a set of scripts that 
download content based on various parameters.


-The starting point is a list of IA identifiers and other metadata 
pulled from an advanced search query.


-From those, the file types you want to download (*.pdf, *_marc.xml, 
*.djvu, *_meta.xml, etc.) can be specified (see the sketch after this 
list).


-The downloads are then queued and retrieved to specified local file 
systems.
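
A rough sketch of that selection step, assuming each item's file listing is 
published at .../{identifier}_files.xml; this illustrates the filtering idea 
only and is not the actual Perl.

```python
# Sketch of the selection step: walk a list of IA identifiers and yield
# the files whose names end with the wanted suffixes, ready for the queue.
import urllib.request
import xml.etree.ElementTree as ET

WANTED = (".pdf", "_marc.xml", ".djvu", "_meta.xml")

def queue_downloads(identifiers, wanted=WANTED):
    for item_id in identifiers:
        listing = f"http://www.archive.org/download/{item_id}/{item_id}_files.xml"
        with urllib.request.urlopen(listing) as response:
            root = ET.parse(response).getroot()
        for file_node in root.findall("file"):
            name = file_node.get("name")
            if name and name.endswith(wanted):
                yield item_id, name, f"http://www.archive.org/download/{item_id}/{name}"
```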


The system uses a MySQL backend, Perl, and curl for HTTP downloads, with 
an option for rsync.  It is designed to run on Linux systems.  It contains 
fairly sophisticated tools for checking download success, comparing file 
sizes with the Archive, md5 error checking, and re-running against the 
Archive in case content changes, and it can be adapted to a variety of needs.
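
The checking idea might look something like the sketch below; the <size> and 
<md5> values in _files.xml are assumptions about the Archive's listing format, 
and the production system is Perl and curl against MySQL rather than this.

```python
# Sketch of the checking step: compare a mirrored file's size and md5
# with what the Archive reports for it in the item's _files.xml listing.
import hashlib
import os
import urllib.request
import xml.etree.ElementTree as ET

def verify_download(item_id, filename, local_path):
    listing = f"http://www.archive.org/download/{item_id}/{item_id}_files.xml"
    with urllib.request.urlopen(listing) as response:
        root = ET.parse(response).getroot()
    expected = next(f for f in root.findall("file") if f.get("name") == filename)
    size_ok = os.path.getsize(local_path) == int(expected.findtext("size", "-1"))
    with open(local_path, "rb") as handle:
        md5_ok = hashlib.md5(handle.read()).hexdigest() == expected.findtext("md5")
    return size_ok and md5_ok  # re-queue the download when this is False
```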


So far we've downloaded about 400,000 PDFs and associated metadata 
(about 14 TB altogether).  It could also be used, for example, to 
download just the MARC records for integration into an ILS (a separate 
challenge, of course), and to build pointers to the Archive's content 
for the full text.


I've had plans to open source it for some time, but other work always 
gets in the way.  If you (or anyone) want to take a look and try it out, 
just let me know.


--
Graham Stewart  graham.stew...@utoronto.ca  416-550-2806
Network and Storage Services Manager, Information Technology Services
University of Toronto Libraries
130 St. George Street
Toronto, Ontario, Canada M5S 1A5
