I think there are many good reasons why XSLT is absolutely the wrong tool for 
the job of indexing MARC records for Solr.
 
1) Performance/Speed: In my experience, even just transforming from MARCXML to 
MODS takes a second or two (using the LoC stylesheet), due to the stylesheet's 
complexity and the inefficiency of heavy-duty string manipulation in XSL.  
That means you're looking at an indexing speed of around 1 record/second.  If 
you've got 1,000,000 bib records, it'll take a couple of weeks just to index 
your data.  For comparison, the indexer of our commercial OPAC does about 50 
records per second (~6 hours for a million records) and the one I've written in 
Jython (by no means the fastest language out there) that doesn't use XSL can do 
about 150 records a second (about 2 hours for 1 million records).  
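To make those back-of-the-envelope numbers concrete, here's the arithmetic 
worked out in a few lines of Python (the labels are just shorthand for the 
three indexers I mentioned):

```python
# Time to index 1,000,000 records at each observed rate (records/second).
records = 1_000_000
for label, rate in [("XSLT", 1), ("commercial OPAC", 50), ("Jython indexer", 150)]:
    hours = records / rate / 3600.0
    print("%s: %.1f hours (~%.1f days)" % (label, hours, hours / 24))
```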
 
2) Reusability:  What if you want to change how a field is indexed?  You would 
have to edit the XSLT directly (or have the XSL stylesheet automatically 
generated based on settings stored elsewhere).  
 
a) Users of the indexer shouldn't have to touch programming logic to change how 
it indexes.  You shouldn't need to know anything about programming to change 
the setup of an index.
 
b) It should be easy for an external application to know how your indexes have 
been built.  This would be very difficult with an XSL stylesheet.  Burying 
configuration inside of programming logic is a bad idea.  
 
c) The Solr schema should be automatically generated from your index setup so 
all your index configuration is in one place.  I guess you could write 
*another* XSL stylesheet that would transform your indexing stylesheet into the 
Solr schema file, but that seems ridiculous.
 
d) Automatic code generation is evil.  Blanchard's law: "Systems that require 
code generation lack sufficient power to tackle the problem at hand."  If you 
find yourself considering automatic code generation, you should instead be 
considering a more dynamic programming language.
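Here's a rough sketch of what I mean by keeping the index setup in one place: 
define the indexes as plain data, then drive both the indexer and the Solr 
schema generation from that one structure.  (The field names, MARC tags, and 
config format here are made up for illustration, not a proposal for a real 
format.)

```python
# Index definitions as data, not code: the indexer reads the MARC tags
# and subfields from this table, and the Solr schema is generated from
# the same table, so the configuration lives in exactly one place.
INDEXES = {
    # index name -> (MARC tag, subfields, Solr field type)
    "title":    ("245", "ab", "text"),
    "author":   ("100", "a",  "text"),
    "language": ("008", None, "string"),
}

def solr_schema_fields(indexes):
    """Generate the <field> entries of a Solr schema from the index config."""
    lines = []
    for name, (tag, subfields, ftype) in sorted(indexes.items()):
        lines.append('<field name="%s" type="%s" indexed="true" stored="true"/>'
                     % (name, ftype))
    return "\n".join(lines)

print(solr_schema_fields(INDEXES))
```

An external application (or a user with no programming background) can read or 
edit that table without ever touching the transformation logic.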
 
3) Ease of programming.  
 
a) Heavy-duty string manipulation is a pain in pure XSLT.  To index MARC 
records you have to normalize dates and names, and you probably want to do 
some translation between MARC codes and their meanings (the audience and 
language codes, for instance).  Is it doable?  Yes, especially if you use XSL
extension functions.  But if you're going to have huge chunks of your logic 
buried in extension functions, why not go whole hog and do it all outside of 
XSLT, instead of having half your programming logic in an extension function 
and half in the XSLT itself?
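For comparison, here's how trivial that kind of normalization is in a 
general-purpose language.  (The code table is truncated and the date heuristic 
is deliberately naive; both are just illustrations.)

```python
# MARC code translation and date normalization, done outside XSLT.
import re

# Truncated sample of the MARC language code list.
LANGUAGE_CODES = {"eng": "English", "fre": "French", "ger": "German"}

def translate_language(code):
    """Map a MARC language code to its human-readable meaning."""
    return LANGUAGE_CODES.get(code.strip().lower(), "Unknown")

def normalize_date(raw):
    """Pull the first four-digit year out of a messy MARC date string."""
    match = re.search(r"\d{4}", raw)
    return match.group(0) if match else None

print(translate_language("eng"))   # English
print(normalize_date("c1999."))    # 1999
```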
 
b) Using XSLT makes object-oriented programming with your data harder.  Your 
indexer should be able to give you a nice object representation of a record (so 
you can use that object representation within other code).  If you go the XSLT 
route, you'd have to parse the MARC record, transform it to your Solr record 
XML format, then parse that XML and map the XML to an object.  If you avoid 
XSLT, you just parse the MARC record and transform it to an object 
programmatically (with the object having a method to print itself out as a Solr 
XML record).
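A sketch of that object-centric approach (the class and field names are 
invented for the example; a real record object would carry much more):

```python
# Parse the MARC record once into an object, then let the object render
# itself as a Solr <doc> -- no intermediate XML transformation step.
from xml.sax.saxutils import escape

class IndexedRecord:
    def __init__(self, fields):
        # fields: {"title": "...", "author": "...", ...}
        self.fields = fields

    def to_solr_xml(self):
        """Print this record out as a Solr add-document fragment."""
        parts = ["<doc>"]
        for name, value in sorted(self.fields.items()):
            parts.append('  <field name="%s">%s</field>' % (name, escape(value)))
        parts.append("</doc>")
        return "\n".join(parts)

rec = IndexedRecord({"title": "Moby Dick", "author": "Melville, Herman"})
print(rec.to_solr_xml())
```

The same object can be handed to any other code that needs the record, which 
is the whole point: the Solr XML is just one of its output formats.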
 
Honestly, all this talk of using XSLT for indexing MARC records reminds me of 
that guy who rode across the United States on a riding lawnmower.  I am looking 
forward to there being a standard, well-tested MARC record indexer for Solr 
(and would be excited to contribute to such a project), but I don't think that 
XSL is the right tool to use.
 
 
--Casey