Re: [CODE4LIB] Models of MARC in RDF

2011-12-08 Thread Westbrook, Bradley
Hi, folks, 

I am just back into the office from a workshop and wanted to add to this 
thread.  As Declan noted, we at UC San Diego wrangle a lot of source metadata 
into RDF statements for our digital assets.  MARC records form a goodly portion 
of the source metadata that we transform.  We leave bits, sometimes big bits, of
the MARC record behind (we do not use 040, 041, 09x data in our DAMS), and we 
sometimes change some of the MARC data we do retain (300 extent statements are 
typically changed to reflect the digital object and no longer the analog object 
that was digitized).  And we add technical and rights data to the RDF that does 
not appear at all in the source MARC record.  The RDF we transform from MARC 
records does, however, carry one or more traces back to its source MARC record, 
often in the form of a link to the record in our MARC catalog, the record
identifier for that MARC record, and the OCLC record identifier.  
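
(For concreteness, a minimal sketch in Python/rdflib of what such traces might look
like as triples.  The dams: namespace, the property names, the catalog URL pattern,
and the OCLC number below are illustrative placeholders, not our actual DAMS
vocabulary.)

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    DAMS = Namespace("http://example.org/dams/")  # hypothetical local namespace

    g = Graph()
    obj = URIRef("http://libraries.ucsd.edu/ark:/20775/bb0648473d")

    # Trace 1: link back to the record in the MARC catalog (URL pattern assumed)
    g.add((obj, DCTERMS.source, URIRef("http://roger.ucsd.edu/record=b4827884")))
    # Trace 2: the local MARC record identifier
    g.add((obj, DAMS.marcRecordId, Literal("b4827884")))
    # Trace 3: the OCLC record identifier (placeholder value)
    g.add((obj, DAMS.oclcNumber, Literal("123456789")))

    print(g.serialize(format="turtle"))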

Brad W.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Fleming, Declan
Sent: Tuesday, December 06, 2011 2:43 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

Hi - point at it where?  We could point back to the library catalog that we 
harvested in the MARC to MODS to RDF process, but what if that goes away?  Why 
not write ourselves a 1K insurance policy that sticks with the object for its 
life?

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Owen 
Stephens
Sent: Tuesday, December 06, 2011 8:06 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I'd suggest that rather than shove it in a triple it might be better to point 
at alternative representations, including MARC if desirable (keep meaning to 
blog some thoughts about progressively enhanced metadata...)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 15:44, Karen Coyle wrote:

 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California 
 union catalog on 35 megabytes, something that would now be considered 
 a smallish email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MODS21slim2MODS.xsl stylesheet, but don't use that.  Instead, 
 we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video

Re: [CODE4LIB] Models of MARC in RDF

2011-12-07 Thread Owen Stephens
Fair point. Just instinct on my part that putting it in a triple is a bit ugly 
:)

It probably doesn't make any difference, although I don't think storing in a 
triple ensures that it sticks to the object (you could store the triple 
anywhere as well)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 22:43, Fleming, Declan wrote:

 Hi - point at it where?  We could point back to the library catalog that we 
 harvested in the MARC to MODS to RDF process, but what if that goes away?  
 Why not write ourselves a 1K insurance policy that sticks with the object for 
 its life?
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Owen 
 Stephens
 Sent: Tuesday, December 06, 2011 8:06 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I'd suggest that rather than shove it in a triple it might be better to point 
 at alternative representations, including MARC if desirable (keep meaning to 
 blog some thoughts about progressively enhanced metadata...)
 
 Owen
 
 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936
 
 On 6 Dec 2011, at 15:44, Karen Coyle wrote:
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California 
 union catalog on 35 megabytes, something that would now be considered 
 a smallish email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MODS21slim2MODS.xsl stylesheet, but don't use that.  
 Instead, we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
    <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
    <mods:authority>lcsh</mods:authority>
    <mods:topic>Ranching</mods:topic>
</mods:subject>
 
 etc.
 
 
 There is definitely some loss in the conversion process -- I don't know 
 enough about the MARC leader and control fields to know if they are 
 captured in the MODS and/or RDF in some way.  But there are quite

Re: [CODE4LIB] Models of MARC in RDF

2011-12-07 Thread Owen Stephens
When I did a project converting records from UKMARC to MARC21 we kept the
UKMARC records for a period (about 5 years I think) while we assured ourselves 
that we hadn't missed anything vital. We did occasionally refer back to the 
older record to check things, but having not found any major issues with the 
conversion after that period we felt confident disposing of the record. This is 
the type of usage I was imagining for a copy of the MARC record in this 
scenario.

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 7 Dec 2011, at 01:52, Montoya, Gabriela wrote:

 One critical thing to consider with MARC records (or any metadata, for that 
 matter) is that they are not stagnant, so what is the value of storing 
 entire record strings into one triple if we know that metadata is volatile? 
 As an example, UCSD has over 200,000 art images that had their metadata 
 records ingested into our local DAMS over five years ago. Since then, many of 
 these records have been edited/massaged in our OPAC (and ARTstor), but these 
 updated records have not been refreshed in our DAMS. Now we find ourselves 
 needing to desperately have the "What is our database of record?" 
 conversation.
 
 I'd much rather see resources invested in data synching than spending it in 
 saving text dumps that will most likely not be referred to again.
 
 Dream Team for Building a MARC-to-RDF Model: Karen Coyle, Alistair Miles, 
 Diane Hillman, Ed Summers, Bradley Westbrook.
 
 Gabriela
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
 Coyle
 Sent: Tuesday, December 06, 2011 7:44 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California union 
 catalog on 35 megabytes, something that would now be considered a smallish 
 email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a 
 little more complicated than I remembered.  We built support for 
 transforming to MODS using the MODS21slim2MODS.xsl stylesheet, but 
 don't use that.  Instead, we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Karen Coyle

Quoting Fleming, Declan dflem...@ucsd.edu:

Hi - I'll note that the mapping decisions were made by our metadata  
services (then Cataloging) group, not by the tech folks making it  
all work, though we were all involved in the discussions.  One idea  
that came up was to do a, perhaps, lossy translation, but also stuff  
one triple with a text dump of the whole MARC record just in case we  
needed to grab some other element out we might need.  We didn't do  
that, but I still like the idea.  Ok, it was my idea.  ;)


I like that idea! Now that disk space is no longer an issue, it  
makes good sense to keep around the original state of any data that  
you transform, just in case you change your mind. I hadn't thought  
about incorporating the entire MARC record string in the  
transformation, but as I recall the average size of a MARC record is  
somewhere around 1K, which really isn't all that much by today's  
standards.


(As an old-timer, I remember running the entire Univ. of California  
union catalog on 35 megabytes, something that would now be considered  
a smallish email attachment.)


kc



D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf  
Of Esme Cowles

Sent: Monday, December 05, 2011 11:22 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I looked into this a little more closely, and it turns out it's a  
little more complicated than I remembered.  We built support for  
transforming to MODS using the MODS21slim2MODS.xsl stylesheet, but  
don't use that.  Instead, we use custom Java code to do the mapping.


I don't have a lot of public examples, but there's at least one  
public object which you can view the MARC from our OPAC:


http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,

The public display in our digital collections site:

http://libraries.ucsd.edu/ark:/20775/bb0648473d

The RDF for the MODS looks like:

<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
    <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
    <mods:authority>lcsh</mods:authority>
    <mods:topic>Ranching</mods:topic>
</mods:subject>

etc.


There is definitely some loss in the conversion process -- I don't  
know enough about the MARC leader and control fields to know if they  
are captured in the MODS and/or RDF in some way.  But there are  
quite a few local and note fields that aren't present in the RDF.   
Other fields (e.g. 300 and 505) are mapped to MODS, but not  
displayed in our access system (though they are indexed for  
searching).


I agree it's hard to quantify lossy-ness.  Counting fields or  
characters would be the most objective, but has obvious problems  
with control characters sometimes containing a lot of information,  
and then the relative importance of different fields to the overall  
description.  There are other issues too -- some fields in this  
record weren't migrated because they duplicated collection-wide  
values, which are formulated slightly differently from the MARC  
record.  Some fields weren't migrated because they concern the  
physical object, and therefore don't really apply to the digital  
object.  So that really seems like a morass to me.


-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It  
is the  argument of tyrants; it is the creed of slaves. -- William  
Pitt, 1783


On 12/3/2011, at 10:35 AM, Karen Coyle wrote:


Esme, let me second Owen's enthusiasm for more detail if you can
supply it. I think we also need to start putting these efforts along a
loss continuum - MODS is already lossy vis-a-vis MARC, and my guess
is that some of the other

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen Stephens
I'd suggest that rather than shove it in a triple it might be better to point 
at alternative representations, including MARC if desirable (keep meaning to 
blog some thoughts about progressively enhanced metadata...)
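
(As a rough sketch of what I mean, in Python/rdflib, using dcterms:hasFormat to
point at a MARC serialization rather than embedding it -- the item and MARCXML
URLs and the format identifier below are made-up examples, not a recommendation
of specific URIs.)

    from rdflib import Graph, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    work = URIRef("http://example.org/items/3013197")              # hypothetical item URI
    marcxml = URIRef("http://example.org/items/3013197.marcxml")   # hypothetical alternative representation

    # The RDF description points at the MARC representation instead of containing it
    g.add((work, DCTERMS.hasFormat, marcxml))
    g.add((marcxml, DCTERMS.format, URIRef("http://www.loc.gov/MARC21/slim")))  # assumed format identifier

    print(g.serialize(format="turtle"))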

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 15:44, Karen Coyle wrote:

 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata services 
 (then Cataloging) group, not by the tech folks making it all work, though we 
 were all involved in the discussions.  One idea that came up was to do a, 
 perhaps, lossy translation, but also stuff one triple with a text dump of 
 the whole MARC record just in case we needed to grab some other element out 
 we might need.  We didn't do that, but I still like the idea.  Ok, it was my 
 idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California union 
 catalog on 35 megabytes, something that would now be considered a smallish 
 email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Esme 
 Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MODS21slim2MODS.xsl stylesheet, but don't use that.  Instead, 
 we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
 http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
    <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
    <mods:authority>lcsh</mods:authority>
    <mods:topic>Ranching</mods:topic>
</mods:subject>
 
 etc.
 
 
 There is definitely some loss in the conversion process -- I don't know 
 enough about the MARC leader and control fields to know if they are captured 
 in the MODS and/or RDF in some way.  But there are quite a few local and 
 note fields that aren't present in the RDF.  Other fields (e.g. 300 and 505) 
 are mapped to MODS, but not displayed in our access system (though they are 
 indexed for searching).
 
 I agree it's hard to quantify lossy-ness.  Counting fields or characters 
 would be the most objective, but has obvious problems with control 
 characters sometimes containing a lot of information, and then the relative 
 importance of different fields to the overall description.  There are other 
 issues too -- some fields in this record weren't migrated because they 
 duplicated collection-wide values, which are formulated slightly differently 
 from the MARC record.  Some fields weren't migrated because they concern the 
 physical object, and therefore don't really apply to the digital object.  So 
 that really seems like a morass to me.
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen Stephens
I think the strength of adopting RDF is that it doesn't tie us to a single 
vocab/schema. That isn't to say it isn't desirable for us to establish common 
approaches, but that we need to think slightly differently about how this is 
done - more application profiles than 'one true schema'.

This is why RDA worries me - because it (seems to?) suggest that we define a 
schema that stands alone from everything else and that is used by the library 
community. I'd prefer to see the library community adopting the best of what 
already exists and then enhancing where the existing ontologies are lacking. If 
we are going to have a (web of) linked data, then re-use of ontologies and IDs 
is needed. For example in the work I did at the Open University in the UK we 
ended up using only a single property from a specific library ontology (the draft 
ISBD http://metadataregistry.org/schemaprop/show/id/1957.html has place of 
publication, production, distribution).

I think it is interesting that many of the MARC-RDF mappings so far have 
adopted many of the same ontologies (although no doubt partly because there is 
a 'follow the leader' element to this - or at least there was for me when 
looking at the transformation at the Open University).

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 5 Dec 2011, at 18:56, Jonathan Rochkind wrote:

 On 12/5/2011 1:40 PM, Karen Coyle wrote:
 
 This brings up another point that I haven't fully grokked yet: the use of 
 MARC kept library data consistent across the many thousands of libraries 
 that had MARC-based systems. 
 
 Well, only somewhat consistent, but, yeah.
 
 What happens if we move to RDF without a standard? Can we rely on linking to 
 provide interoperability without that rigid consistency of data models?
 
 Definitely not. I think this is a real issue.  There is no magic to linking 
 or RDF that provides interoperability for free; it's all about the 
 vocabularies/schemata -- whether in MARC or in anything else.   (Note 
 different national/regional  library communities used different schemata in 
 MARC, which made interoperability infeasible there. Some still do, although 
 gradually people have moved to Marc21 precisely for this reason, even when 
 Marc21 was less powerful than the MARC variant they started with).
 
 That is to say, if we just used MARC's own implicit vocabularies, but output 
 them as RDF, sure, we'd still have consistency, although we wouldn't really 
 _gain_ much. On the other hand, if we switch to a new better vocabulary -- 
 we've got to actually switch to a new better vocabulary.  If it's just 
 whatever anyone wants to use, we've made it VERY difficult to share data, 
 which is something pretty darn important to us.
 
 Of course, the goal of the RDA process (or one of em) was to create a new 
 schema for us to consistently use. That's the library community effort to 
 maintain a common schema that is more powerful and flexible than MARC.  If 
 people are using other things instead, apparently that failed, or at least 
 has not yet succeeded.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen, Will
This is a *very* tangential rant, but it makes me mental when I hear
people say that 'disk space' is no longer an issue.  While it's true that
the costs of disk drives continue to drop, my experience is that the cost
of managing storage and backups is rising almost exponentially as
libraries continue to amass enormous quantities of digital data and
metadata.  Again, I recognize that text files are a small portion of our
library storage these days, but to casually suggest that doubling any
amount of data storage is an inconsiderable consideration strikes me as
the first step down a dangerous path.  Sorry for the interruption to an
interesting thread.

Will



On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:

Quoting Fleming, Declan dflem...@ucsd.edu:

Hi - I'll note that the mapping decisions were made by our metadata
services (then Cataloging) group, not by the tech folks making it
all work, though we were all involved in the discussions.  One idea
that came up was to do a, perhaps, lossy translation, but also stuff
one triple with a text dump of the whole MARC record just in case we
needed to grab some other element out we might need.  We didn't do
that, but I still like the idea.  Ok, it was my idea.  ;)

I like that idea! Now that disk space is no longer an issue, it
makes good sense to keep around the original state of any data that
you transform, just in case you change your mind. I hadn't thought
about incorporating the entire MARC record string in the
transformation, but as I recall the average size of a MARC record is
somewhere around 1K, which really isn't all that much by today's
standards.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Mark Jordan
Well said Will,

Mark

- Original Message -
 This is a *very* tangential rant, but it makes me mental when I hear
 people say the 'disk space' is no longer an issue. While it's true
 that
 the costs of disk drives continue to drop, my experience is that the
 cost
 of managing storage and backups is rising almost exponentially as
 libraries continue to amass enormous quantities of digital data and
 metadata. Again, I recognize that text files are a small portion of
 our
 library storage these days, but to casually suggest that doubling any
 amount of data storage is an inconsiderable consideration strikes me
 as
 the first step down a dangerous path. Sorry for the interruption to an
 interesting thread.
 
 Will
 
 
 
 On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata
 services (then Cataloging) group, not by the tech folks making it
 all work, though we were all involved in the discussions. One idea
 that came up was to do a, perhaps, lossy translation, but also stuff
 one triple with a text dump of the whole MARC record just in case we
 needed to grab some other element out we might need. We didn't do
 that, but I still like the idea. Ok, it was my idea. ;)
 
 I like that idea! Now that disk space is no longer an issue, it
 makes good sense to keep around the original state of any data that
 you transform, just in case you change your mind. I hadn't thought
 about incorporating the entire MARC record string in the
 transformation, but as I recall the average size of a MARC record is
 somewhere around 1K, which really isn't all that much by today's
 standards.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Fleming, Declan
Well, we didn't end up doing it (although we still could).

When I look across the storage load that our asset management system is 
overseeing, metadata space pales in comparison to the original data file 
itself.  Even access derivatives like display JPGs are tiny compared to their 
TIFF masters.  WAV files are even bigger.

I agree that we shouldn't just assume disk is free, but when looking at the 
orders of magnitude of metadata to originals, I'd err on the side of keeping 
all the metadata.

Do you really feel that the cost of management of storage is going up?  I do 
find that the bulk of the ongoing cost of digital asset management is in the 
people to manage the assets, but over time I'm seeing the management cost per 
asset drop as we need about the same number of people to run ten racks of 
storage as it takes to run two.  And all of those racks are getting denser as 
storage media costs go down (Lord willin' and the creek don't flood.  Again).  
I expect at some point the cost to store the assets in the cloud, rather than 
in local racks, will hit a sweet spot, and we'll move to that.  We'll still 
need good management of the assets, but the policies it takes to track 300k 
assets will probably scale to millions, especially if the metadata is stored in 
a very accessible, linkable way.

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark 
Jordan
Sent: Tuesday, December 06, 2011 10:51 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

Well said Will,

Mark

- Original Message -
 This is a *very* tangential rant, but it makes me mental when I hear 
 people say the 'disk space' is no longer an issue. While it's true 
 that the costs of disk drives continue to drop, my experience is that 
 the cost of managing storage and backups is rising almost 
 exponentially as libraries continue to amass enormous quantities of 
 digital data and metadata. Again, I recognize that text files are a 
 small portion of our library storage these days, but to casually 
 suggest that doubling any amount of data storage is an inconsiderable 
 consideration strikes me as the first step down a dangerous path. 
 Sorry for the interruption to an interesting thread.
 
 Will
 
 
 
 On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it 
 all work, though we were all involved in the discussions. One idea 
 that came up was to do a, perhaps, lossy translation, but also stuff 
 one triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need. We didn't do 
 that, but I still like the idea. Ok, it was my idea. ;)
 
 I like that idea! Now that disk space is no longer an issue, it 
 makes good sense to keep around the original state of any data that 
 you transform, just in case you change your mind. I hadn't thought 
 about incorporating the entire MARC record string in the 
 transformation, but as I recall the average size of a MARC record is 
 somewhere around 1K, which really isn't all that much by today's 
 standards.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Fleming, Declan
Hi - point at it where?  We could point back to the library catalog that we 
harvested in the MARC to MODS to RDF process, but what if that goes away?  Why 
not write ourselves a 1K insurance policy that sticks with the object for its 
life?

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Owen 
Stephens
Sent: Tuesday, December 06, 2011 8:06 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I'd suggest that rather than shove it in a triple it might be better to point 
at alternative representations, including MARC if desirable (keep meaning to 
blog some thoughts about progressively enhanced metadata...)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 15:44, Karen Coyle wrote:

 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California 
 union catalog on 35 megabytes, something that would now be considered 
 a smallish email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MODS21slim2MODS.xsl stylesheet, but don't use that.  Instead, 
 we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
    <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
    <mods:authority>lcsh</mods:authority>
    <mods:topic>Ranching</mods:topic>
</mods:subject>
 
 etc.
 
 
 There is definitely some loss in the conversion process -- I don't know 
 enough about the MARC leader and control fields to know if they are captured 
 in the MODS and/or RDF in some way.  But there are quite a few local and 
 note fields that aren't present in the RDF.  Other fields (e.g. 300 and 505) 
 are mapped to MODS, but not displayed in our access system (though they are 
 indexed for searching).
 
 I agree it's hard to quantify lossy-ness.  Counting fields or characters 
 would be the most objective, but has obvious problems with control 
 characters sometimes containing a lot of information, and then the relative

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Montoya, Gabriela
One critical thing to consider with MARC records (or any metadata, for that 
matter) is that they are not stagnant, so what is the value of storing 
entire record strings into one triple if we know that metadata is volatile? As 
an example, UCSD has over 200,000 art images that had their metadata records 
ingested into our local DAMS over five years ago. Since then, many of these 
records have been edited/massaged in our OPAC (and ARTstor), but these updated 
records have not been refreshed in our DAMS. Now we find ourselves needing to 
desperately have the "What is our database of record?" conversation.

I'd much rather see resources invested in data synching than spending it in 
saving text dumps that will most likely not be referred to again.

Dream Team for Building a MARC-to-RDF Model: Karen Coyle, Alistair Miles, Diane 
Hillman, Ed Summers, Bradley Westbrook.

Gabriela

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
Coyle
Sent: Tuesday, December 06, 2011 7:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

Quoting Fleming, Declan dflem...@ucsd.edu:

 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)

I like that idea! Now that disk space is no longer an issue, it makes good 
sense to keep around the original state of any data that you transform, just 
in case you change your mind. I hadn't thought about incorporating the entire 
MARC record string in the transformation, but as I recall the average size of a 
MARC record is somewhere around 1K, which really isn't all that much by today's 
standards.

(As an old-timer, I remember running the entire Univ. of California union 
catalog on 35 megabytes, something that would now be considered a smallish 
email attachment.)

kc


 D

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF

 I looked into this a little more closely, and it turns out it's a 
 little more complicated than I remembered.  We built support for 
 transforming to MODS using the MODS21slim2MODS.xsl stylesheet, but 
 don't use that.  Instead, we use custom Java code to do the mapping.

 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:

http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,

 The public display in our digital collections site:

 http://libraries.ucsd.edu/ark:/20775/bb0648473d

 The RDF for the MODS looks like:

<mods:classification rdf:parseType="Resource">
    <mods:authority>local</mods:authority>
    <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
    <mods:type>ARK</mods:type>
    <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Brown, Victor W</mods:namePart>
    <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
    <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
    <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
    <mods:dateIssued>2005</mods:dateIssued>
    <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
    <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
    <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
    <mods:authority>lcsh</mods:authority>
    <mods:topic>Ranching</mods:topic>
</mods:subject>

 etc.


 There is definitely some loss in the conversion process -- I don't 
 know enough about the MARC leader and control fields to know if they 
 are captured in the MODS and/or RDF in some way.  But there are
 quite a few local and note fields that aren't present in the RDF.   
 Other fields (e.g. 300 and 505) are mapped to MODS, but not displayed 
 in our access system (though

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread BRIAN TINGLE
On Dec 6, 2011, at 5:52 PM, Montoya, Gabriela wrote:

 ...
 I'd much rather see resources invested in data synching than spending it in 
 saving text dumps that will most likely not be referred to again.
 ...

In a MARC-as-the-record-of-record scenario, storing the original raw MARC might 
be helpful for the syncing -- when a sync was happening, the new MARC of record 
could maybe be compared against the old MARC of record to know which RDF triples 
needed to be updated?
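
(A quick sketch of that comparison with pymarc, assuming the stored and incoming
records are available as files -- the file names and the idea of diffing by tag
are just illustrative.)

    from pymarc import MARCReader

    def field_values(record):
        # Map of tag -> list of field values for a pymarc Record
        values = {}
        for field in record.get_fields():
            values.setdefault(field.tag, []).append(field.value())
        return values

    def changed_tags(old_record, new_record):
        # Tags whose contents differ between the stored and the incoming record
        old, new = field_values(old_record), field_values(new_record)
        return sorted(tag for tag in set(old) | set(new) if old.get(tag) != new.get(tag))

    with open("old.mrc", "rb") as f:   # hypothetical path to the stored MARC of record
        old = next(MARCReader(f))
    with open("new.mrc", "rb") as f:   # hypothetical path to the incoming MARC of record
        new = next(MARCReader(f))

    print(changed_tags(old, new))  # e.g. ['245', '300'] -> regenerate the triples mapped from those tags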


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Michael J. Giarlo
On Tue, Dec 6, 2011 at 20:52, Montoya, Gabriela gamont...@ucsd.edu wrote:
 One critical thing to consider with MARC records (or any metadata, for that 
 matter) is that they are not stagnant, so what is the value of storing 
 entire record strings into one triple if we know that metadata is volatile? 
 As an example, UCSD has over 200,000 art images that had their metadata 
 records ingested into our local DAMS over five years ago. Since then, many of 
 these records have been edited/massaged in our OPAC (and ARTstor), but these 
 updated records have not been refreshed in our DAMS. Now we find ourselves 
 needing to desperately have the "What is our database of record?" 
 conversation.

 I'd much rather see resources invested in data synching than spending it in 
 saving text dumps that will most likely not be referred to again.


I don't disagree with your rationale, and I love your Dream Team, but
there's a false equivalence here between the cost of sucking in a
record and stuffing it away, and dealing with the very tricky problem
of interop with the OPAC, ARTstor, and other systems.

-Mike


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread stuart yeates

On 07/12/11 14:52, Montoya, Gabriela wrote:


Dream Team for Building a MARC-to-RDF Model: Karen Coyle, Alistair Miles, Diane 
Hillman, Ed Summers, Bradley Westbrook.


As much as I have nothing against anyone on this list, isn't it a little 
US-centric? Didn't we make that mistake before?


cheers
stuart
--
Stuart Yeates
Library Technology Services http://www.victoria.ac.nz/library/


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Alexander Johannesen
On Wed, Dec 7, 2011 at 1:49 PM, stuart yeates stuart.yea...@vuw.ac.nz wrote:
 As much as I have nothing against anyone on this list, isn't it a little
 US-centric? Didn't we make that mistake before?

I wouldn't worry. A dream-team has no basis in reality, hence the
dream part. I'd like to see a Real Team instead, an international
collaboration of people, including international smarts and
non-librarians. (Realistically, an international [or semi] library
conference should have a three-day session with smart people first on
this very issue, and that would make a fine place to get this thing
working, even to some degree of speed)


Alex
-- 
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ --
-- http://www.google.com/profiles/alexander.johannesen ---


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Mark A. Matienzo
On Tue, Dec 6, 2011 at 10:23 PM, Alexander Johannesen
alexander.johanne...@gmail.com wrote:

 A dream-team have no basis in reality, hence the dream part.

Tell that to the 1992 U.S. Men's Olympic Basketball Team.

Mark


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Stuart Yeates
  A dream-team have no basis in reality, hence the dream part.
 
 Tell that to the 1992 U.S. Men's Olympic Basketball Team.

So, the response to my suggestion of an unhelpful US bias is a US-based 
metaphor? 

I'll just consider my point proved.

cheers
stuart


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Michael J. Giarlo

I mean, have you *seen* Drexler dunk?

-Original message-
From: Stuart Yeates stuart.yea...@vuw.ac.nz
To: CODE4LIB@listserv.nd.edu
Sent: Wed, Dec 7, 2011 06:50:28 GMT+00:00
Subject: Re: [CODE4LIB] Models of MARC in RDF


 A dream-team have no basis in reality, hence the dream part.

Tell that to the 1992 U.S. Men's Olympic Basketball Team.


So, the response to my suggestion of an unhelpful US bias is a US-based  
metaphor? 


I'll just consider my point proved.

cheers
stuart


Re: [CODE4LIB] Models of MARC in RDF

2011-12-05 Thread Matt Machell
Owen mentioned the Talis (now Capita Libraries) model. If you'd like
more info on that, our tech lead put his slides from the Linked Data
in Libraries event online at:

http://www.slideshare.net/philjohn/linked-library-data-in-the-wild-8593328

They cover some of the work we've done, approaches taken and some of
the challenges (in both released and as yet unreleased versions of the
model).

For some context, the Prism data model is used on some 70 or so
University and local authority catalogues in the UK and Ireland. Any
item in those catalogues can be accessed as linked data by appending
the appropriate file type (.nt, .rdf or .json) to the item uris (or
.rss to search uris), for example:
http://catalogue.library.manchester.ac.uk/items/3013197.rdf
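
(If the service still responds, consuming one of those item URIs as RDF is a
short exercise with Python/rdflib -- just a sketch:)

    from rdflib import Graph

    g = Graph()
    # Fetch and parse the .rdf representation of the item mentioned above
    g.parse("http://catalogue.library.manchester.ac.uk/items/3013197.rdf", format="xml")

    for s, p, o in g:
        print(s, p, o)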

Hope that's helpful.

Matt Machell

Senior Developer, Prism 3 - Capita Libraries

Me: http://eclecticdreams.com
Work: http://blogs.talis.com/prism


Re: [CODE4LIB] Models of MARC in RDF

2011-12-05 Thread Karen Coyle
Thanks, Matt. The RDF here uses BIBO and DC, and is therefore  
definitely lossy. I'm not saying that's a bad thing -- loss from MARC  
may well be the only way to save library metadata. What I would be  
interested in learning is how one decides WHAT to lose. Im also  
curious to know if any folks have started out with a minimum set of  
elements from MARC and then later pulled in other dat elements that  
were needed.


This brings up another point that I haven't fully grokked yet: the use  
of MARC kept library data consistent across the many thousands of  
libraries that had MARC-based systems. What happens if we move to RDF  
without a standard? Can we rely on linking to provide interoperability  
without that rigid consistency of data models?


kc

Quoting Matt Machell mattmach...@googlemail.com:


Owen mentioned the Talis (now Capita Libraries) model. If you'd like
more info on that, our tech lead put his slides from the Linked Data
in Libraries event online at:

http://www.slideshare.net/philjohn/linked-library-data-in-the-wild-8593328

They cover some of the work we've done, approaches taken and some of
the challenges (in both released and as yet unreleased versions of the
model).

For some context, the Prism data model is used on some 70 or so
University and local authority catalogues in the UK and Ireland. Any
item in those catalogues can be accessed as linked data by appending
the appropriate file type (.nt, .rdf or .json) to the item uris (or
.rss to search uris), for example:
http://catalogue.library.manchester.ac.uk/items/3013197.rdf

Hope that's helpful.

Matt Machell

Senior Developer, Prism 3 - Capita Libraries

Me: http://eclecticdreams.com
Work: http://blogs.talis.com/prism





--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-05 Thread Jonathan Rochkind

On 12/5/2011 1:40 PM, Karen Coyle wrote:


This brings up another point that I haven't fully grokked yet: the use 
of MARC kept library data consistent across the many thousands of 
libraries that had MARC-based systems. 


Well, only somewhat consistent, but, yeah.

What happens if we move to RDF without a standard? Can we rely on 
linking to provide interoperability without that rigid consistency of 
data models?


Definitely not. I think this is a real issue.  There is no magic to 
linking or RDF that provides interoperability for free; it's all about 
the vocabularies/schemata -- whether in MARC or in anything else.   
(Note different national/regional  library communities used different 
schemata in MARC, which made interoperability infeasible there. Some 
still do, although gradually people have moved to Marc21 precisely for 
this reason, even when Marc21 was less powerful than the MARC variant 
they started with).


That is to say, if we just used MARC's own implicit vocabularies, but 
output them as RDF, sure, we'd still have consistency, although we 
wouldn't really _gain_ much. On the other hand, if we switch to a new 
better vocabulary -- we've got to actually switch to a new better 
vocabulary.  If it's just whatever anyone wants to use, we've made it 
VERY difficult to share data, which is something pretty darn important 
to us.


Of course, the goal of the RDA process (or one of em) was to create a 
new schema for us to consistently use. That's the library community 
effort to maintain a common schema that is more powerful and flexible 
than MARC.  If people are using other things instead, apparently that 
failed, or at least has not yet succeeded.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-05 Thread Peter Noerr
See historical comment in text below. But, to look forward -

It seems to me that we should be able to design a model with graceful 
degradation from full MARC data element set (vocabulary if you insist) to a 
core set which allows systems to fill in what they have and, on the receiving 
end, extract what they can find. Each system can work with its own schema, if 
it must, as long as the mapping for its level of detail against whatever 
designated level of detail it wishes to accept in the exchange format is 
created first. Obviously greater levels of detail cannot be inferred from 
lesser, and so many systems would be working with less than the data they would 
like, or create locally, but that is the nature of bibliographic data - it is 
never complete, or it must be processed assuming that is the case.

Using RDF and entity modeling it should be possible to devise a (small) number 
of levels from a basic core set (akin to DC, if not semantically identical) 
through to a 2,500 attribute* person authority record (plus the other bib 
entities), and produce pre-parsers which will massage these to what the ILS (or 
other repository/system) is comfortable with. Since the receiving system is 
fixed for any one installation it does not need the complexity we build into 
our fed search platforms, and converters would be largely re-usable.

So, what about a Russian doll bibliographic schema? (Who gets to decide on what 
goes in which level is for years of committee work - unemployment solved!)


* number obtained from a line count from 
http://www.loc.gov/marc/authority/ecadlist.html - so rather approximate.
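
(A toy sketch of the Russian doll idea in Python -- the level names and element
sets below are invented purely for illustration, not a proposal for what goes in
which level.)

    CORE = {"title", "creator", "date", "identifier"}                 # innermost doll, DC-ish
    EXTENDED = CORE | {"publisher", "extent", "subject", "note"}      # middle doll
    FULL = EXTENDED | {"leader", "control_fields", "authority_ids"}   # outermost doll

    LEVELS = {"core": CORE, "extended": EXTENDED, "full": FULL}

    def degrade(record, level):
        # Keep only the elements the receiving system's declared level can handle
        allowed = LEVELS[level]
        return {k: v for k, v in record.items() if k in allowed}

    record = {"title": "Ranching films", "creator": "Brown, Victor W",
              "extent": "1 film reel (25 min.)", "leader": "00000cam a2200000 a 4500"}

    print(degrade(record, "core"))      # title and creator only
    print(degrade(record, "extended"))  # adds extent
    print(degrade(record, "full"))      # everything the sender has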

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Jonathan Rochkind
 Sent: Monday, December 05, 2011 10:57 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 On 12/5/2011 1:40 PM, Karen Coyle wrote:
 
  This brings up another point that I haven't fully grokked yet: the use
  of MARC kept library data consistent across the many thousands of
  libraries that had MARC-based systems.
 
 Well, only somewhat consistent, but, yeah.
 
  What happens if we move to RDF without a standard? Can we rely on
  linking to provide interoperability without that rigid consistency of
  data models?
 
 Definitely not. I think this is a real issue.  There is no magic to linking 
 or RDF that provides
 interoperability for free; it's all about
 the vocabularies/schemata -- whether in MARC or in anything else.
 (Note different national/regional  library communities used different 
 schemata in MARC, which made
 interoperability infeasible there. Some still do, although gradually people 
 have moved to Marc21
 precisely for this reason, even when Marc21 was less powerful than the MARC 
 variant they started with).

Just a comment about the good old days when we had to work with USMARC, 
UKMARC, DANMARC, MAB1, AUSMARC, and so on. "Interoperability infeasible" was 
not the situation. It was perfectly possible to convert records from one format 
to another - with some loss of data into the less specific format of course. 
Which meant that a round trip was not possible. But major elements were 
present in all and that meant it was practically useful to do it. We did this 
at the British Library when I was there, and we did it commercially as a 
service for OCLC (remember them?) as a commercial ILS vendor. It did involve 
specific coding, and an internal database system built to accommodate the 
variability. 

 
 That is to say, if we just used MARC's own implicit vocabularies, but output 
 them as RDF, sure, we'd
 still have consistency, although we
 wouldn't really _gain_ much. On the other hand, if we switch to a new
 better vocabulary -- we've got to actually switch to a new better vocabulary. 
  If it's just whatever
 anyone wants to use, we've made it VERY difficult to share data, which is 
 something pretty darn
 important to us.
 
 Of course, the goal of the RDA process (or one of em) was to create a new 
 schema for us to
 consistently use. That's the library community effort to maintain a common 
 schema that is more
 powerful and flexible than MARC.  If people are using other things instead, 
 apparently that failed, or
 at least has not yet succeeded.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-05 Thread Fleming, Declan
Hi - I'll note that the mapping decisions were made by our metadata services 
(then Cataloging) group, not by the tech folks making it all work, though we 
were all involved in the discussions.  One idea that came up was to do a, 
perhaps, lossy translation, but also stuff one triple with a text dump of the 
whole MARC record just in case we needed to grab some other element out we 
might need.  We didn't do that, but I still like the idea.  Ok, it was my idea. 
 ;)
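
A minimal sketch of the idea, assuming pymarc and rdflib are available; the 
predicate namespace, subject URI pattern and file names are placeholders:

from pymarc import MARCReader
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")        # hypothetical vocabulary

g = Graph()
with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # one triple carrying the full MARC transmission record as a literal
        # (assumes every record has an 001 control field)
        subject = URIRef("http://example.org/item/" + record["001"].value())
        marc_dump = record.as_marc().decode("utf-8", "replace")
        g.add((subject, EX.sourceMarc, Literal(marc_dump)))

g.serialize("marc-dumps.ttl", format="turtle")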

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Esme 
Cowles
Sent: Monday, December 05, 2011 11:22 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I looked into this a little more closely, and it turns out it's a little more 
complicated than I remembered.  We built support for transforming to MODS using 
the MARC21slim2MODS.xsl stylesheet, but don't use that.  Instead, we use custom 
Java code to do the mapping.
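
For reference, the stylesheet route is roughly the following, assuming lxml 
and local copies of the LoC MARC21slim2MODS.xsl stylesheet and a MARCXML 
record (file names are placeholders):

from lxml import etree

transform = etree.XSLT(etree.parse("MARC21slim2MODS.xsl"))
mods = transform(etree.parse("record-marcxml.xml"))   # MODS ready for RDF wrapping
print(etree.tostring(mods, pretty_print=True).decode("utf-8"))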

I don't have a lot of public examples, but there's at least one public object 
for which you can view the MARC in our OPAC:

http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,

The public display in our digital collections site:

http://libraries.ucsd.edu/ark:/20775/bb0648473d

The RDF for the MODS looks like:

<mods:classification rdf:parseType="Resource">
  <mods:authority>local</mods:authority>
  <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
  <mods:type>ARK</mods:type>
  <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Brown, Victor W</mods:namePart>
  <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
  <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateIssued>2005</mods:dateIssued>
  <mods:publisher>Film and Video Library, University of California, 
San Diego, La Jolla, CA 92093-0175 
http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
  <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
  <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
  <mods:authority>lcsh</mods:authority>
  <mods:topic>Ranching</mods:topic>
</mods:subject>

etc.


There is definitely some loss in the conversion process -- I don't know enough 
about the MARC leader and control fields to know if they are captured in the 
MODS and/or RDF in some way.  But there are quite a few local and note fields 
that aren't present in the RDF.  Other fields (e.g. 300 and 505) are mapped to 
MODS, but not displayed in our access system (though they are indexed for 
searching).

I agree it's hard to quantify lossy-ness.  Counting fields or characters would 
be the most objective measure, but it has obvious problems: control fields can 
pack a lot of information into a few characters, and fields vary in how much 
they matter to the overall description.  There are other issues too -- some 
fields in this record weren't migrated because they duplicated collection-wide 
values, which are formulated slightly differently from the MARC record.  Some 
fields weren't migrated because they concern the physical object, and 
therefore don't really apply to the digital object.  So that really seems 
like a morass to me.
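
One crude way to put a number on it, assuming pymarc; the set of carried-over 
tags is a placeholder for whatever a given transform keeps, and this counts 
every field equally, which is exactly the weighting problem noted above:

from pymarc import MARCReader

CARRIED_OVER = {"100", "110", "245", "260", "300", "505", "650"}  # placeholder

def loss_ratio(path):
    """Fraction of fields (by simple count) left behind by the transform."""
    total = kept = 0
    with open(path, "rb") as fh:
        for record in MARCReader(fh):
            for field in record.get_fields():
                total += 1
                if field.tag in CARRIED_OVER:
                    kept += 1
    return 1 - kept / total if total else 0.0

print(loss_ratio("records.mrc"))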

-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It is the  
argument of tyrants; it is the creed of slaves. -- William Pitt, 1783

On 12/3/2011, at 10:35 AM, Karen Coyle wrote:

 Esme, let me second Owen's enthusiasm for more detail if you can 
 supply it. I think we also need to start putting these efforts along a 
 loss continuum - MODS is already lossy vis-a-vis MARC, and my guess 
 is that some of the other MARC-RDF transforms don't include all of 
 the warts and wrinkles of MARC. LC's new bibliographic framework 
 document sets as a goal to bring along ALL of MARC (a decision that I 
 think isn't obvious, as we have already discussed here). If we say we 
 are going from MARC to RDF, how much is actually captured in the 
 transformed data set? (Yes, that's going to be hard to quantify.)
 
 kc
 
 Quoting Esme Cowles escow...@ucsd.edu:
 
 Owen-
 
 Another strategy for capturing MARC data in RDF is to convert it to MODS (we 
 do this using the LoC MARC to MODS stylesheet: 
 http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).  From there, 
 it's pretty easy

Re: [CODE4LIB] Models of MARC in RDF

2011-12-03 Thread Karen Coyle
Esme, let me second Owen's enthusiasm for more detail if you can  
supply it. I think we also need to start putting these efforts along a  
loss continuum - MODS is already lossy vis-a-vis MARC, and my guess  
is that some of the other MARC-RDF transforms don't include all of  
the warts and wrinkles of MARC. LC's new bibliographic framework  
document sets as a goal to bring along ALL of MARC (a decision that I  
think isn't obvious, as we have already discussed here). If we say we  
are going from MARC to RDF, how much is actually captured in the  
transformed data set? (Yes, that's going to be hard to quantify.)


kc

Quoting Esme Cowles escow...@ucsd.edu:


Owen-

Another strategy for capturing MARC data in RDF is to convert it to  
MODS (we do this using the LoC MARC to MODS stylesheet:  
http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).   
From there, it's pretty easy to incorporate into RDF.  There are  
some issues to be aware of, such as how to map the MODS XML names to  
predicates and how to handle elements that can appear in multiple  
places in the hierarchy.


-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It is the
 argument of tyrants; it is the creed of slaves. -- William Pitt, 1783

On 11/28/2011, at 8:25 AM, Owen Stephens wrote:

It would be great to start collecting transforms together - just a  
quick brain dump of some I'm aware of


MARC21 transformations
Cambridge University Library - http://data.lib.cam.ac.uk -  
transformation made available (in code) from same site
Open University - http://data.open.ac.uk - specific transform for  
materials related to teaching, code available at  
http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java (MARC transform is in libraryRDFExtraction  
method)
COPAC - small set of records from the COPAC Union catalogue - data  
and transform not yet published
Podes Projekt - LinkedAuthors - documentation at  
http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf -  
2 stage transformation firstly from MARC to FRBRized version of  
data, then from FRBRized data to RDF. These linked from documentation
Podes Project - LinkedNonFiction - documentation at  
http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf - MARC data transformed using xslt  
https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl


British Library British National Bibliography -  
http://www.bl.uk/bibliographic/datafree.html - data model  
documented, but no code available
Libris.se - some notes in various presentations/blogposts (e.g.  
http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't  
find explicit transformation
Hungarian National library -  
http://thedatahub.org/dataset/hungarian-national-library-catalog  
and http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some  
information on ontologies used but no code or explicit  
transformation (not 100% sure this is from MARC)
Talis - implemented in several live catalogues including  
http://catalogue.library.manchester.ac.uk/  - no documentation or  
code afaik although some notes in


MAB transformation
HBZ - some of the transformation documented at  
https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO, don't think any code  
published?


Would be really helpful if more projects published their  
transformations (or someone told me where to look!)


Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 26 Nov 2011, at 15:58, Karen Coyle wrote:

A few of the code4lib talk proposals mention projects that have or  
will transform MARC records into RDF. If any of you have  
documentation and/or examples of this, I would be very interested  
to see them, even if they are under construction.


Thanks,
kc

--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet






--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-02 Thread Esme Cowles
Owen-

Another strategy for capturing MARC data in RDF is to convert it to MODS (we do 
this using the LoC MARC to MODS stylesheet: 
http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).  From there, 
it's pretty easy to incorporate into RDF.  There are some issues to be aware 
of, such as how to map the MODS XML names to predicates and how to handle 
elements that can appear in multiple places in the hierarchy.
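
To make the naming issue concrete, a naive sketch assuming lxml and rdflib; 
the predicate namespace, subject and file name are placeholders, and repeated 
or nested elements are handled simply by emitting one triple per leaf element:

from lxml import etree
from rdflib import Graph, Literal, Namespace, URIRef

MODS_NS = "http://www.loc.gov/mods/v3"
MODS = Namespace(MODS_NS + "#")                 # assumed predicate namespace

g = Graph()
subject = URIRef("http://example.org/item/1")   # placeholder subject
doc = etree.parse("record-mods.xml")

for el in doc.iter("{%s}*" % MODS_NS):
    if len(el) == 0 and el.text and el.text.strip():
        local = etree.QName(el).localname       # e.g. 'namePart', 'topic'
        g.add((subject, MODS[local], Literal(el.text.strip())))

print(g.serialize(format="turtle"))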

-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It is the
 argument of tyrants; it is the creed of slaves. -- William Pitt, 1783

On 11/28/2011, at 8:25 AM, Owen Stephens wrote:

 It would be great to start collecting transforms together - just a quick 
 brain dump of some I'm aware of
 
 MARC21 transformations
 Cambridge University Library - http://data.lib.cam.ac.uk - transformation 
 made available (in code) from same site
 Open University - http://data.open.ac.uk - specific transform for materials 
 related to teaching, code available at 
 http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java
  (MARC transform is in libraryRDFExtraction method)
 COPAC - small set of records from the COPAC Union catalogue - data and 
 transform not yet published
 Podes Projekt - LinkedAuthors - documentation at 
 http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 2 
 stage transformation firstly from MARC to FRBRized version of data, then from 
 FRBRized data to RDF. These linked from documentation
 Podes Project - LinkedNonFiction - documentation at 
 http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf
  - MARC data transformed using xslt 
 https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl
 
 British Library British National Bibliography - 
 http://www.bl.uk/bibliographic/datafree.html - data model documented, but no 
 code available
 Libris.se - some notes in various presentations/blogposts (e.g. 
 http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't find 
 explicit transformation
 Hungarian National library - 
 http://thedatahub.org/dataset/hungarian-national-library-catalog and 
 http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some information on 
 ontologies used but no code or explicit transformation (not 100% sure this is 
 from MARC)
 Talis - implemented in several live catalogues including 
 http://catalogue.library.manchester.ac.uk/  - no documentation or code afaik 
 although some notes in 
 
 MAB transformation
 HBZ - some of the transformation documented at 
 https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO,
  don't think any code published?
 
 Would be really helpful if more projects published their transformations (or 
 someone told me where to look!)
 
 Owen
 
 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936
 
 On 26 Nov 2011, at 15:58, Karen Coyle wrote:
 
 A few of the code4lib talk proposals mention projects that have or will 
 transform MARC records into RDF. If any of you have documentation and/or 
 examples of this, I would be very interested to see them, even if they are 
 under construction.
 
 Thanks,
 kc
 
 -- 
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-02 Thread Owen Stephens
Hi Esme - thanks for this. Do you have any documentation on which predicates 
you've used and on the MODS-to-RDF transformation?

Owen

On 2 Dec 2011, at 16:07, Esme Cowles escow...@ucsd.edu wrote:

 Owen-
 
 Another strategy for capturing MARC data in RDF is to convert it to MODS (we 
 do this using the LoC MARC to MODS stylesheet: 
 http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).  From there, 
 it's pretty easy to incorporate into RDF.  There are some issues to be aware 
 of, such as how to map the MODS XML names to predicates and how to handle 
 elements that can appear in multiple places in the hierarchy.
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity is the plea for every infringement of human freedom. It is the
 argument of tyrants; it is the creed of slaves. -- William Pitt, 1783
 
 On 11/28/2011, at 8:25 AM, Owen Stephens wrote:
 
 It would be great to start collecting transforms together - just a quick 
 brain dump of some I'm aware of
 
 MARC21 transformations
 Cambridge University Library - http://data.lib.cam.ac.uk - transformation 
 made available (in code) from same site
 Open University - http://data.open.ac.uk - specific transform for materials 
 related to teaching, code available at 
 http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java
  (MARC transform is in libraryRDFExtraction method)
 COPAC - small set of records from the COPAC Union catalogue - data and 
 transform not yet published
 Podes Projekt - LinkedAuthors - documentation at 
 http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 2 
 stage transformation firstly from MARC to FRBRized version of data, then 
 from FRBRized data to RDF. These linked from documentation
 Podes Project - LinkedNonFiction - documentation at 
 http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf
  - MARC data transformed using xslt 
 https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl
 
 British Library British National Bibliography - 
 http://www.bl.uk/bibliographic/datafree.html - data model documented, but no 
 code available
 Libris.se - some notes in various presentations/blogposts (e.g. 
 http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't find 
 explicit transformation
 Hungarian National library - 
 http://thedatahub.org/dataset/hungarian-national-library-catalog and 
 http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some information on 
 ontologies used but no code or explicit transformation (not 100% sure this 
 is from MARC)
 Talis - implemented in several live catalogues including 
 http://catalogue.library.manchester.ac.uk/  - no documentation or code afaik 
 although some notes in 
 
 MAB transformation
 HBZ - some of the transformation documented at 
 https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO,
  don't think any code published?
 
 Would be really helpful if more projects published their transformations (or 
 someone told me where to look!)
 
 Owen
 
 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936
 
 On 26 Nov 2011, at 15:58, Karen Coyle wrote:
 
 A few of the code4lib talk proposals mention projects that have or will 
 transform MARC records into RDF. If any of you have documentation and/or 
 examples of this, I would be very interested to see them, even if they are 
 under construction.
 
 Thanks,
 kc
 
 -- 
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-02 Thread Owen Stephens
Oh - and perhaps just/more importantly - how do you create URIs for your data 
and how do you reconcile against other sources?

Owen

On 2 Dec 2011, at 16:07, Esme Cowles escow...@ucsd.edu wrote:

 Owen-
 
 Another strategy for capturing MARC data in RDF is to convert it to MODS (we 
 do this using the LoC MARC to MODS stylesheet: 
 http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).  From there, 
 it's pretty easy to incorporate into RDF.  There are some issues to be aware 
 of, such as how to map the MODS XML names to predicates and how to handle 
 elements that can appear in multiple places in the hierarchy.
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity is the plea for every infringement of human freedom. It is the
 argument of tyrants; it is the creed of slaves. -- William Pitt, 1783
 
 On 11/28/2011, at 8:25 AM, Owen Stephens wrote:
 
 It would be great to start collecting transforms together - just a quick 
 brain dump of some I'm aware of
 
 MARC21 transformations
 Cambridge University Library - http://data.lib.cam.ac.uk - transformation 
 made available (in code) from same site
 Open University - http://data.open.ac.uk - specific transform for materials 
 related to teaching, code available at 
 http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java
  (MARC transform is in libraryRDFExtraction method)
 COPAC - small set of records from the COPAC Union catalogue - data and 
 transform not yet published
 Podes Projekt - LinkedAuthors - documentation at 
 http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 2 
 stage transformation firstly from MARC to FRBRized version of data, then 
 from FRBRized data to RDF. These linked from documentation
 Podes Project - LinkedNonFiction - documentation at 
 http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf
  - MARC data transformed using xslt 
 https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl
 
 British Library British National Bibliography - 
 http://www.bl.uk/bibliographic/datafree.html - data model documented, but no 
 code available
 Libris.se - some notes in various presentations/blogposts (e.g. 
 http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't find 
 explicit transformation
 Hungarian National library - 
 http://thedatahub.org/dataset/hungarian-national-library-catalog and 
 http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some information on 
 ontologies used but no code or explicit transformation (not 100% sure this 
 is from MARC)
 Talis - implemented in several live catalogues including 
 http://catalogue.library.manchester.ac.uk/  - no documentation or code afaik 
 although some notes in 
 
 MAB transformation
 HBZ - some of the transformation documented at 
 https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO,
  don't think any code published?
 
 Would be really helpful if more projects published their transformations (or 
 someone told me where to look!)
 
 Owen
 
 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936
 
 On 26 Nov 2011, at 15:58, Karen Coyle wrote:
 
 A few of the code4lib talk proposals mention projects that have or will 
 transform MARC records into RDF. If any of you have documentation and/or 
 examples of this, I would be very interested to see them, even if they are 
 under construction.
 
 Thanks,
 kc
 
 -- 
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-02 Thread Esme Cowles
Owen-

We assign ARKs[1] to our objects (and predicates, for that matter).  The issue 
of reconciling against other sources hasn't come up as much, since we have 
mostly focused on our unique objects.  But we have worked on that issue some.  
For example, several years ago I worked on the UCAI project, where we mapped 
several slide collections to a common schema[2] and did quite a bit of work 
trying to build work records for the collections that didn't have them, and to 
match work records across collections.  That project didn't produce the 
copy-cataloging service we'd hoped for, though the Getty is now working on a 
registry[3] of works of art, which would make the task of matching records a 
lot simpler.

1. https://wiki.ucop.edu/display/Curation/ARK
2. http://www.loc.gov/standards/vracore/
3. http://www.getty.edu/research/tools/vocabularies/cona/index.html
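
For the URI part, a tiny sketch assuming rdflib; the ARK base is copied from 
the example object above, and the predicate namespace is an assumption rather 
than our actual vocabulary:

from rdflib import Graph, Literal, Namespace, URIRef

MODS = Namespace("http://www.loc.gov/mods/v3#")    # assumed namespace URI
ARK_BASE = "http://libraries.ucsd.edu/ark:/20775/"

g = Graph()
subject = URIRef(ARK_BASE + "bb0648473d")          # ARK-based subject URI
g.add((subject, MODS.classification, Literal("FVLP 222-1")))
print(g.serialize(format="turtle"))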

-Esme
--
Esme Cowles escow...@ucsd.edu

In the old days, an operating system was designed to optimize the
 utilization of the computer's resources. In the future, its main goal
 will be to optimize the user's time. -- Jakob Nielsen

On 12/2/2011, at 1:37 PM, Owen Stephens wrote:

 Oh - and perhaps just/more importantly - how do you create URIs for your data 
 and how do you reconcile against other sources?
 
 Owen
 
 On 2 Dec 2011, at 16:07, Esme Cowles escow...@ucsd.edu wrote:
 
 Owen-
 
 Another strategy for capturing MARC data in RDF is to convert it to MODS (we 
 do this using the LoC MARC to MODS stylesheet: 
 http://www.loc.gov/standards/marcxml/xslt/MARC21slim2MODS.xsl).  From there, 
 it's pretty easy to incorporate into RDF.  There are some issues to be aware 
 of, such as how to map the MODS XML names to predicates and how to handle 
 elements that can appear in multiple places in the hierarchy.
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity is the plea for every infringement of human freedom. It is the
 argument of tyrants; it is the creed of slaves. -- William Pitt, 1783
 
 On 11/28/2011, at 8:25 AM, Owen Stephens wrote:
 
 It would be great to start collecting transforms together - just a quick 
 brain dump of some I'm aware of
 
 MARC21 transformations
 Cambridge University Library - http://data.lib.cam.ac.uk - transformation 
 made available (in code) from same site
 Open University - http://data.open.ac.uk - specific transform for materials 
 related to teaching, code available at 
 http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java
  (MARC transform is in libraryRDFExtraction method)
 COPAC - small set of records from the COPAC Union catalogue - data and 
 transform not yet published
 Podes Projekt - LinkedAuthors - documentation at 
 http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 
 2 stage transformation firstly from MARC to FRBRized version of data, then 
 from FRBRized data to RDF. These linked from documentation
 Podes Project - LinkedNonFiction - documentation at 
 http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf
  - MARC data transformed using xslt 
 https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl
 
 British Library British National Bibliography - 
 http://www.bl.uk/bibliographic/datafree.html - data model documented, but 
 no code available
 Libris.se - some notes in various presentations/blogposts (e.g. 
 http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't find 
 explicit transformation
 Hungarian National library - 
 http://thedatahub.org/dataset/hungarian-national-library-catalog and 
 http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some information 
 on ontologies used but no code or explicit transformation (not 100% sure 
 this is from MARC)
 Talis - implemented in several live catalogues including 
 http://catalogue.library.manchester.ac.uk/  - no documentation or code 
 afaik although some notes in 
 
 MAB transformation
 HBZ - some of the transformation documented at 
 https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO,
  don't think any code published?
 
 Would be really helpful if more projects published their transformations 
 (or someone told me where to look!)
 
 Owen
 
 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936
 
 On 26 Nov 2011, at 15:58, Karen Coyle wrote:
 
 A few of the code4lib talk proposals mention projects that have or will 
 transform MARC records into RDF. If any of you have documentation and/or 
 examples of this, I would be very interested to see them, even if they are 
 under construction.
 
 Thanks,
 kc
 
 -- 
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-11-28 Thread Owen Stephens
It would be great to start collecting transforms together - just a quick brain 
dump of some I'm aware of

MARC21 transformations
Cambridge University Library - http://data.lib.cam.ac.uk - transformation made 
available (in code) from same site
Open University - http://data.open.ac.uk - specific transform for materials 
related to teaching, code available at 
http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java
 (MARC transform is in libraryRDFExtraction method)
COPAC - small set of records from the COPAC Union catalogue - data and 
transform not yet published
Podes Projekt - LinkedAuthors - documentation at 
http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 2 
stage transformation firstly from MARC to FRBRized version of data, then from 
FRBRized data to RDF. These linked from documentation
Podes Project - LinkedNonFiction - documentation at 
http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf 
- MARC data transformed using xslt 
https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl

British Library British National Bibliography - 
http://www.bl.uk/bibliographic/datafree.html - data model documented, but no 
code available
Libris.se - some notes in various presentations/blogposts (e.g. 
http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't find 
explicit transformation
Hungarian National library - 
http://thedatahub.org/dataset/hungarian-national-library-catalog and 
http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some information on 
ontologies used but no code or explicit transformation (not 100% sure this is 
from MARC)
Talis - implemented in several live catalogues including 
http://catalogue.library.manchester.ac.uk/  - no documentation or code afaik 
although some notes in 

MAB transformation
HBZ - some of the transformation documented at 
https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO,
 don't think any code published?

Would be really helpful if more projects published their transformations (or 
someone told me where to look!)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 26 Nov 2011, at 15:58, Karen Coyle wrote:

 A few of the code4lib talk proposals mention projects that have or will 
 transform MARC records into RDF. If any of you have documentation and/or 
 examples of this, I would be very interested to see them, even if they are 
 under construction.
 
 Thanks,
 kc
 
 -- 
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-11-28 Thread Jon Stroop
You may know about this one already, but the BL exposed the British 
National Bibliography as RDF last summer. The project has a page[1] with 
a good amount of info--the data model[2] might be a good place to start.

-Jon

1. http://www.bl.uk/bibliographic/datafree.html
2. http://www.bl.uk/bibliographic/pdfs/datamodelv1_01.pdf

On 11/26/2011 10:58 AM, Karen Coyle wrote:
A few of the code4lib talk proposals mention projects that have or 
will transform MARC records into RDF. If any of you have documentation 
and/or examples of this, I would be very interested to see them, even 
if they are under construction.


Thanks,
kc



--
Jon Stroop
Metadata Analyst
Firestone Library
Princeton University
Princeton, NJ 08544

Email: jstr...@princeton.edu
Phone: (609)258-0059
Fax: (609)258-0441

http://pudl.princeton.edu
http://findingaids.princeton.edu
http://www.cpanda.org


Re: [CODE4LIB] Models of MARC in RDF

2011-11-28 Thread Karen Coyle
Mike, that's what I suspected is going on. It might be good to mine  
those efforts, contrast and compare. Maybe not the details, but the  
general models.


kc

Quoting Mike Taylor m...@indexdata.com:


I was at a one-day conference hosted by the British Library a few
months ago, on the use of Linked Data in libraries.  There were about
50 people there in total.  It became apparent that between us we
represented AT LEAST ten separate projects (or parts of bigger projects)
for converting MARC data into LD-friendly RDF.

-- Mike.


On 26 November 2011 09:58, Karen Coyle li...@kcoyle.net wrote:

A few of the code4lib talk proposals mention projects that have or will
transform MARC records into RDF. If any of you have documentation and/or
examples of this, I would be very interested to see them, even if they are
under construction.

Thanks,
kc

--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet








--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-11-28 Thread Karen Coyle
Wow. Thank you, Owen! As a way not to lose these, I have done a crude  
page on the futurelib wiki with the contents of your mail, and promise  
to clean it up at some not too distant date:

  http://futurelib.pbworks.com/w/page/48408645/MARC%20in%20RDF

When/if I get the time, I will try to dig into the details of some of  
these and see how one could do a comparison. Obviously, if someone  
else is able to do that before I get to it, *please* post here!


kc


Quoting Owen Stephens o...@ostephens.com:

It would be great to start collecting transforms together - just a  
quick brain dump of some I'm aware of


MARC21 transformations
Cambridge University Library - http://data.lib.cam.ac.uk -  
transformation made available (in code) from same site
Open University - http://data.open.ac.uk - specific transform for  
materials related to teaching, code available at  
http://code.google.com/p/luceroproject/source/browse/trunk%20luceroproject/OULinkedData/src/uk/ac/open/kmi/lucero/rdfextractor/RDFExtractor.java (MARC transform is in libraryRDFExtraction  
method)
COPAC - small set of records from the COPAC Union catalogue - data  
and transform not yet published
Podes Projekt - LinkedAuthors - documentation at  
http://bibpode.no/linkedauthors/doc/Pode-LinkedAuthors-Documentation.pdf - 2  
stage transformation firstly from MARC to FRBRized version of data,  
then from FRBRized data to RDF. These linked from documentation
Podes Project - LinkedNonFiction - documentation at  
http://bibpode.no/linkednonfiction/doc/Pode-LinkedNonFiction-Documentation.pdf - MARC data transformed using xslt  
https://github.com/pode/LinkedNonFiction/blob/master/marcslim2n3.xsl


British Library British National Bibliography -  
http://www.bl.uk/bibliographic/datafree.html - data model  
documented, but no code available
Libris.se - some notes in various presentations/blogposts (e.g.  
http://dc2008.de/wp-content/uploads/2008/09/malmsten.pdf) but can't  
find explicit transformation
Hungarian National library -  
http://thedatahub.org/dataset/hungarian-national-library-catalog and  
http://nektar.oszk.hu/wiki/Semantic_web#Implementation - some  
information on ontologies used but no code or explicit  
transformation (not 100% sure this is from MARC)
Talis - implemented in several live catalogues including  
http://catalogue.library.manchester.ac.uk/  - no documentation or  
code afaik although some notes in


MAB transformation
HBZ - some of the transformation documented at  
https://wiki1.hbz-nrw.de/display/SEM/Converting+the+Open+Data+from+the+hbz+to+BIBO, don't think any code  
published?


Would be really helpful if more projects published their  
transformations (or someone told me where to look!)


Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 26 Nov 2011, at 15:58, Karen Coyle wrote:

A few of the code4lib talk proposals mention projects that have or  
will transform MARC records into RDF. If any of you have  
documentation and/or examples of this, I would be very interested  
to see them, even if they are under construction.


Thanks,
kc

--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet






--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-11-27 Thread Mike Taylor
I was at a one-day conference hosted by the British Library a few
months ago, on the use of Linked Data in libraries.  There were about
50 people there in total.  It became apparent that between us we
represented AT LEAST ten separate projects (or parts of bigger projects)
for converting MARC data into LD-friendly RDF.

-- Mike.


On 26 November 2011 09:58, Karen Coyle li...@kcoyle.net wrote:
 A few of the code4lib talk proposals mention projects that have or will
 transform MARC records into RDF. If any of you have documentation and/or
 examples of this, I would be very interested to see them, even if they are
 under construction.

 Thanks,
 kc

 --
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 ph: 1-510-540-7596
 m: 1-510-435-8234
 skype: kcoylenet