I’m not for bubble gum and duct tape. But I also realize that when I’ve got a 
hammer everything begins to look like a nail. 

Having made those two ambiguous statements, I would ask myself, “What is 
the problem I am trying to solve?” If you want to make your data available in a 
linked data (Semantic Web) fashion, then starting out with a small set of 
content is a good thing. The process can be as simple as exporting the metadata 
from your existing store, converting it into some flavor of serialized RDF 
(RDF/XML, Turtle, etc.), and saving it on an HTTP file system. After that you 
can begin to play with content negotiation to support harvesting by humans as 
well as robots, triple stores for managing the RDF as a whole, enhancing the 
RDF’s URIs to point to other people’s RDF, or creating “mash-ups” and “graphs” 
(essentially services) against your metadata. It is a never-ending process, 
but linked data works with other people’s data and is all but API-independent; 
it is pure HTTP.  
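
For what it is worth, the export-and-serialize step can be quite small. Below 
is a minimal sketch in Python using rdflib; the records, the base URI, and the 
output file name are all made up for the sake of illustration.

  # A minimal sketch: turn exported metadata records into Turtle with rdflib.
  # The records, base URI, and file name below are hypothetical.
  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import DC

  records = [
      {"id": "record-001", "title": "Walden", "creator": "Thoreau, Henry David"},
      {"id": "record-002", "title": "Leaves of Grass", "creator": "Whitman, Walt"},
  ]

  BASE = Namespace("http://example.org/catalog/")

  g = Graph()
  g.bind("dc", DC)
  for record in records:
      subject = BASE[record["id"]]
      g.add((subject, DC.title, Literal(record["title"])))
      g.add((subject, DC.creator, Literal(record["creator"])))

  # Save the Turtle where an HTTP server can dish it out.
  g.serialize(destination="catalog.ttl", format="turtle")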
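
Content negotiation need not be exotic either. Here is a sketch using Flask, 
assuming the catalog.ttl file from above; the route and file names are again 
illustrative only.

  # A sketch of content negotiation: HTML for humans, Turtle for robots.
  # Flask, the route, and the file name are assumptions for illustration.
  from flask import Flask, Response, request

  app = Flask(__name__)

  @app.route("/catalog/<record_id>")
  def catalog(record_id):
      # Pick the best match between what the client accepts and what we serve.
      best = request.accept_mimetypes.best_match(["text/html", "text/turtle"])
      if best == "text/turtle":
          with open("catalog.ttl") as handle:
              return Response(handle.read(), mimetype="text/turtle")
      return Response("<h1>Record %s</h1>" % record_id, mimetype="text/html")

  if __name__ == "__main__":
      app.run()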

On the other hand, if the desire is to learn how to take your data to another 
level, and not necessarily through linked data, then there are many additional 
options from which to choose. Indexing your data with Solr. Figuring out ways 
to do massive find/replace operations. Figuring out ways to do massive 
enhancements, maybe with full-text content or images. Using something like 
Elasticsearch to index your data not only by itself but in combination with 
other data that is not so bibliographic in nature, yet not dumbed down to 
Dublin Core or saddled with huge database schemas. 
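
For the Solr route, the indexing itself can be a handful of lines. Here is a 
sketch using the pysolr client, assuming a Solr core named "catalog" running 
locally and a schema that accepts the fields as given; everything here is 
illustrative.

  # A sketch of indexing metadata records into Solr via pysolr.
  # The Solr URL, core name, and records are assumptions for illustration.
  import pysolr

  solr = pysolr.Solr("http://localhost:8983/solr/catalog", always_commit=True)

  solr.add([
      {"id": "record-001", "title": "Walden", "creator": "Thoreau, Henry David"},
      {"id": "record-002", "title": "Leaves of Grass", "creator": "Whitman, Walt"},
  ])

  # Query the index and print the matching identifiers.
  for result in solr.search("title:walden"):
      print(result["id"])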

But back to the hammer and nail, I think time spent exploring the possibilities 
of exposing content as linked data will not be time wasted. 

— 
Eric Morgan
