I have created an initial pile of RDF, mostly.
I am in the process of experimenting with linked data for archives. My goal is
to use existing (EAD and MARC) metadata to create RDF/XML, and then to expose
this RDF/XML using linked data principles. Once I get that far I hope to slurp
up the
On 12/5/13 8:11 AM, Eric Lease Morgan wrote:
Where will I get the URIs from? I will get them by combining some sort
of unique code (like an OCLC symbol) or namespace with the value of
the MARC records' 001 fields.
You actually need 3 URIs per triple:
subject URI (which is what I believe you
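A minimal sketch of this URI-minting idea in Python, combining an institutional code with a MARC 001 value and emitting one N-Triples line. The base namespace, the "IND" symbol, and the Dublin Core predicate are illustrative assumptions, not part of Eric's actual scheme:

```python
# Hypothetical sketch: mint a subject URI from an institutional
# code (e.g. an OCLC symbol) plus a MARC 001 control-field value,
# then serialize one triple. All names/URLs here are assumptions.

BASE = "http://example.org/marc"  # assumed namespace

def mint_uri(oclc_symbol: str, field_001: str) -> str:
    """Return a subject URI for one MARC record."""
    return f"{BASE}/{oclc_symbol}/{field_001.strip()}"

def triple(subject: str, predicate: str, obj: str) -> str:
    """Serialize one triple as an N-Triples line (URI object)."""
    return f"<{subject}> <{predicate}> <{obj}> ."

subject = mint_uri("IND", "001234567")
# As Ross notes, a triple needs URIs for subject and predicate;
# the object may be a URI or a literal. Here, a dcterms:type.
line = triple(subject,
              "http://purl.org/dc/terms/type",
              "http://purl.org/dc/dcmitype/Text")
print(line)
```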
I have successfully begun the systematic transformation of EAD and MARC to
RDF/XML, and have consequently been able to literally illustrate the
resulting triples. [1, 2] From the blog posting [3]:
The resulting images are huge, and the astute/diligent reader
will see a
On Dec 5, 2013, at 12:35 PM, Ross Singer rossfsin...@gmail.com wrote:
You still haven't really answered my question about what you're hoping to
achieve and who stands to benefit from it. I don't see how assigning a
bunch of arbitrary identifiers, properties, and values to a description of
a
On Dec 5, 2013, at 1:17 PM, Kevin Ford k...@3windmills.com wrote:
Frankly, I don't see how you can generate RDF that anybody would want to
use from XSLT: where would your URIs come from? What, exactly, are you
modeling?
-- Our experience getting to good, URI-rich RDF has been basically a
Hi Eric,
you seem to have missed the Catmandu tutorial at SWIB13. Luckily there
is a basic tutorial and a demo online: http://librecat.org/
The demo happens to be about transforming MARC to RDF using the
Catmandu Perl framework. It gives you full flexibility by separating
the importer from the
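The importer/exporter separation Catmandu offers (in Perl) can be sketched in Python to show why it matters: the exporter needs no knowledge of where records came from. Record shape, function names, and the predicate namespace below are illustrative assumptions, not Catmandu's API:

```python
# Sketch of Catmandu-style importer/exporter separation in plain
# Python. The flat "tag value" input format is a toy assumption.

from typing import Dict, Iterable, Iterator

def import_marc(lines: Iterable[str]) -> Iterator[Dict[str, str]]:
    """Importer: yield one dict per flat 'tag value' line."""
    for line in lines:
        tag, _, value = line.partition(" ")
        yield {"tag": tag, "value": value.strip()}

def export_ntriples(records: Iterable[Dict[str, str]],
                    subject: str) -> str:
    """Exporter: turn records into N-Triples. It knows nothing
    about the importer, so either side can be swapped out."""
    out = []
    for rec in records:
        pred = f"http://example.org/marc/{rec['tag']}"  # assumed
        out.append(f'<{subject}> <{pred}> "{rec["value"]}" .')
    return "\n".join(out)

raw = ["245 Walden; or, Life in the woods",
       "100 Thoreau, Henry David"]
nt = export_ntriples(import_marc(raw), "http://example.org/record/1")
print(nt)
```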
On Dec 4, 2013, at 10:29 PM, Corey A Harper corey.har...@nyu.edu wrote:
Have you had a look at Ed Chamberlain's work on COMET:
https://github.com/edchamberlain/COMET
It's been a while since I've run this, but if I remember correctly, it was
fairly easy to use.
Thank you for the pointer. I
On Dec 4, 2013, at 10:29 PM, Corey A Harper corey.har...@nyu.edu wrote:
Also, though much older, I seem to remember the Simile MARC RDFizer being
a pretty straightforward one to run:
http://simile.mit.edu/wiki/MARC/MODS_RDFizer
MODS aficionados will point to some problems with some of its
On Dec 5, 2013, at 6:54 AM, Eric Lease Morgan emor...@nd.edu wrote:
http://simile.mit.edu/wiki/MARC/MODS_RDFizer
...The distribution includes a possibly cool stylesheet — mods2rdf.xslt.
Ah ha! The MODS_RDFizer’s mods2rdf.xslt file functioned very well against one
of my MODS files:
$
On Dec 5, 2013, at 3:07 AM, Christian Pietsch
chr.pietsch+web4...@googlemail.com wrote:
you seem to have missed the Catmandu tutorial at SWIB13. Luckily there
is a basic tutorial and a demo online: http://librecat.org/
I did attend SWIB13, and I really wanted to go to the Catmandu workshop,
Eric, I'm having a hard time figuring out exactly what you're hoping to get.
Going from MARC to RDF was my great white whale for years while Talis' main
business interests involved both of those (although not archival
collections). Anything that will remodel MARC to (decent) RDF is going to be:
When exposing sets of MARC records as linked data, do you think it is better to
expose them in batch (collection) files or as individual RDF serializations? To
bastardize the Bard — “To batch or not to batch? That is the question.”
Suppose I am a medium-sized academic research library. Suppose
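One way to frame the batch-or-not question: a batch file is one big dump, while individual serializations group triples by subject so each record resolves at its own URI, which is the unit a linked-data client dereferences. A stdlib sketch under those assumptions (the triples themselves are made up):

```python
# Toy comparison of batch vs. per-record serialization.
from collections import defaultdict

triples = [  # (subject, predicate, object) -- illustrative data
    ("http://example.org/record/1",
     "http://purl.org/dc/terms/title", '"Walden"'),
    ("http://example.org/record/2",
     "http://purl.org/dc/terms/title", '"Cape Cod"'),
]

def as_batch(trips) -> str:
    """Batch: every triple in one N-Triples document."""
    return "\n".join(f"<{s}> <{p}> {o} ." for s, p, o in trips)

def as_individual(trips) -> dict:
    """Individual: one document per subject URI, i.e. the unit
    a linked-data client would dereference."""
    docs = defaultdict(list)
    for s, p, o in trips:
        docs[s].append(f"<{s}> <{p}> {o} .")
    return {s: "\n".join(lines) for s, lines in docs.items()}

print(len(as_individual(triples)))  # one document per record
```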
On Dec 5, 2013, at 8:55 AM, Ross Singer rossfsin...@gmail.com wrote:
Eric, I'm having a hard time figuring out exactly what you're hoping to get.
Going from MARC to RDF was my great white whale for years while Talis' main
business interests involved both of those (although not archival
On Thu, Dec 5, 2013 at 11:11 AM, Eric Lease Morgan emor...@nd.edu wrote:
I’m hoping to articulate and implement a simple and functional method for
exposing EAD and MARC metadata as linked data.
Isn't the point of this to expose archival description as linked data? What
about description
On Dec 5, 2013, at 11:17 AM, Mark A. Matienzo mark.matie...@gmail.com wrote:
I’m hoping to articulate and implement a simple and functional method for
exposing EAD and MARC metadata as linked data.
Isn't the point of this to expose archival description as linked data? What
about description
On Thu, Dec 5, 2013 at 11:26 AM, Eric Lease Morgan emor...@nd.edu wrote:
Good question! At the very least, these applications (ArchivesSpace,
Archivists’ Toolkit, etc.) can regularly and systematically export their
data as EAD, and the EAD can be made available as linked data. It would be
On Dec 5, 2013, at 11:33 AM, Mark A. Matienzo mark.matie...@gmail.com wrote:
At the very least, these applications (ArchivesSpace,
Archivists’ Toolkit, etc.) can regularly and systematically export their
data as EAD, and the EAD can be made available as linked data.
Wouldn't it make more
With apologies to Eric and others from the LiAM project, I feel like I
want to jump in here with a little more context.
Eric, or Aaron, or Anne, please feel free to correct any of what I say
below.
I agree with the points made and concerns raised by both Ross and Mark --
most significantly, that a
Re: [CODE4LIB] transforming marc to rdf
On Dec 5, 2013, at 11:33 AM, Mark A. Matienzo mark.matie...@gmail.com wrote:
At the very least, these applications (ArchivesSpace,
Archivists’ Toolkit, etc.) can regularly and systematically export their
data as EAD, and the EAD can be made available as linked
On Thu, Dec 5, 2013 at 11:57 AM, Eric Lease Morgan emor...@nd.edu wrote:
On Dec 5, 2013, at 11:33 AM, Mark A. Matienzo mark.matie...@gmail.com
wrote:
Wouldn't it make more sense, especially with a system like ArchivesSpace,
which provides a backend HTTP API and a public UI, to publish
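Mark's suggestion can be sketched with the ArchivesSpace backend REST API, which authenticates via POST /users/:user/login and a session token sent in an X-ArchivesSpace-Session header. The host, port, credentials, and repository/resource ids below are assumptions about a local install; the login helper is defined but not called, since it needs a running backend:

```python
# Sketch: pull a resource description straight from an
# ArchivesSpace backend rather than round-tripping through an
# EAD export. Host/ids are assumed; endpoints follow the
# ArchivesSpace REST API.

import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8089"  # assumed backend host/port

def resource_url(repo_id: int, resource_id: int) -> str:
    """Build the backend URL for one archival resource."""
    return f"{BASE}/repositories/{repo_id}/resources/{resource_id}"

def login(user: str, password: str) -> str:
    """POST /users/:user/login; the JSON reply carries a session
    token. Not invoked here -- it needs a live backend."""
    data = urllib.parse.urlencode({"password": password}).encode()
    with urllib.request.urlopen(f"{BASE}/users/{user}/login",
                                data) as resp:
        return json.load(resp)["session"]

def fetch_resource(session: str, repo_id: int,
                   resource_id: int) -> dict:
    """GET one resource as JSON, passing the session header."""
    req = urllib.request.Request(
        resource_url(repo_id, resource_id),
        headers={"X-ArchivesSpace-Session": session})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(resource_url(2, 1))
```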
On Thu, Dec 5, 2013 at 11:57 AM, Eric Lease Morgan emor...@nd.edu wrote:
“There is more than one way to skin a cat.” There are advantages and
disadvantages to every software solution.
I think what Mark and I are trying to say is that the first step to this
solution is not by applying
* BIBFRAME Tools [6] - sports nice ontologies, but
the online tools won’t scale for large operations
-- The code running the transformation at [6] is available here:
https://github.com/lcnetdev/marc2bibframe
We've run several million records through it at one time. As with
Anything that will remodel MARC to (decent) RDF is going to be:
- Non-trivial to install
- Non-trivial to use
- Slow
- Require massive amounts of memory/disk space
Choose any two.
-- I'll second this.
Frankly, I don't see how you can generate RDF that anybody would want to
Eric,
Have you had a look at Ed Chamberlain's work on COMET:
https://github.com/edchamberlain/COMET
It's been a while since I've run this, but if I remember correctly, it was
fairly easy to use.
Also, though much older, I seem to remember the Simile MARC RDFizer being
a pretty straightforward