BTW, my Binary XML project (http://d-haven.org/bxml) can compile an XML document into a Java class (I know this is nothing extraordinary, as XSP has been doing it for years). However, what is very different from XSP is the following:
1) No Java file is ever written (it uses BCEL).
2) No class file *needs* to be written (although that is an option).
3) The original document name and the line numbers are part of the generated code.
   * That means the stack trace has debug information you can use.
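To make it concrete, the trick looks roughly like this with the current BCEL API (a simplified sketch, not the real bxml code; the CompiledDocument class, the toSAX() signature and the example.xml path are just placeholders): the document name ends up in the class's SourceFile attribute and every unrolled event gets a LineNumberTable entry, which is exactly what a stack trace reports.

import org.apache.bcel.Const;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.generic.ALOAD;
import org.apache.bcel.generic.ClassGen;
import org.apache.bcel.generic.ConstantPoolGen;
import org.apache.bcel.generic.InstructionFactory;
import org.apache.bcel.generic.InstructionHandle;
import org.apache.bcel.generic.InstructionList;
import org.apache.bcel.generic.MethodGen;
import org.apache.bcel.generic.ObjectType;
import org.apache.bcel.generic.Type;

// Simplified sketch: build a class in memory whose toSAX(ContentHandler)
// method replays the document's events, with the original XML file name in
// the SourceFile attribute and one LineNumberTable entry per unrolled event.
public class XmlToBytecodeSketch {

    private static final ObjectType HANDLER =
            new ObjectType("org.xml.sax.ContentHandler");

    public static byte[] compile() {
        // "example.xml" becomes the SourceFile attribute, so a stack trace
        // reads "at CompiledDocument.toSAX(example.xml:1)".
        ClassGen cg = new ClassGen("CompiledDocument", "java.lang.Object",
                "example.xml", Const.ACC_PUBLIC | Const.ACC_SUPER, new String[0]);
        cg.addEmptyConstructor(Const.ACC_PUBLIC);

        ConstantPoolGen cp = cg.getConstantPool();
        InstructionFactory f = new InstructionFactory(cg, cp);
        InstructionList il = new InstructionList();

        MethodGen mg = new MethodGen(Const.ACC_PUBLIC, Type.VOID,
                new Type[] { HANDLER }, new String[] { "handler" },
                "toSAX", "CompiledDocument", il, cp);
        mg.addException("org.xml.sax.SAXException");

        // handler.startDocument();  -- tagged with line 1 of example.xml
        InstructionHandle start = il.append(new ALOAD(1));
        il.append(f.createInvoke("org.xml.sax.ContentHandler", "startDocument",
                Type.VOID, Type.NO_ARGS, Const.INVOKEINTERFACE));
        mg.addLineNumber(start, 1);

        // ... one unrolled block like the above per SAX event of the source ...

        // handler.endDocument();
        InstructionHandle end = il.append(new ALOAD(1));
        il.append(f.createInvoke("org.xml.sax.ContentHandler", "endDocument",
                Type.VOID, Type.NO_ARGS, Const.INVOKEINTERFACE));
        mg.addLineNumber(end, 1);

        il.append(InstructionFactory.createReturn(Type.VOID));

        mg.setMaxStack();
        mg.setMaxLocals();
        cg.addMethod(mg.getMethod());
        il.dispose();

        JavaClass jc = cg.getJavaClass();
        return jc.getBytes(); // can be fed straight to a ClassLoader, no .class file
    }
}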
This has nothing to do with my rants about how stupid the XSLT syntax is, but I would be interested to see an actual performance comparison between your approach to compiled XML and mine (the one that Cocoon currently uses for the cache).
This is in light of the compiled/interpreted question. I think it would be kind of cool to have a serious benchmark between the two approaches, because it would be very helpful for the recent discussion on XSP.
I mean, the fact that you aren't generating the source code, well, that's nice from an implementation perspective (I thought about making a BCEL Java assembler for XSP but stopped because it was simply too complex and not really needed now that we have the superb Eclipse compiler), but it's nothing different from XSP.
So, you are, in fact, unrolling a big loop of SAX events, while my XML compilation approach works by serializing the SAX events into a binary format that is easy to parse later.
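Just to be explicit about what I mean by that, the replay side boils down to something like this (a simplified sketch, not the actual Cocoon compiled-SAX code; the event opcodes and record layout are invented for the example):

import java.io.DataInputStream;
import java.io.IOException;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;

// Simplified sketch of replaying a serialized SAX event stream: one tight
// loop dispatching on an event opcode, instead of one generated call site
// per event of the document.
public class SaxReplaySketch {

    private static final byte START_DOCUMENT = 0;
    private static final byte START_ELEMENT  = 1;
    private static final byte CHARACTERS     = 2;
    private static final byte END_ELEMENT    = 3;
    private static final byte END_DOCUMENT   = 4;

    public void replay(DataInputStream in, ContentHandler handler)
            throws IOException, SAXException {
        while (true) {
            byte event = in.readByte();
            switch (event) {
                case START_DOCUMENT:
                    handler.startDocument();
                    break;
                case START_ELEMENT: {
                    String name = in.readUTF();
                    int count = in.readInt();
                    AttributesImpl atts = new AttributesImpl();
                    for (int i = 0; i < count; i++) {
                        String attName = in.readUTF();
                        String attValue = in.readUTF();
                        atts.addAttribute("", attName, attName, "CDATA", attValue);
                    }
                    handler.startElement("", name, name, atts);
                    break;
                }
                case CHARACTERS: {
                    char[] text = in.readUTF().toCharArray();
                    handler.characters(text, 0, text.length);
                    break;
                }
                case END_ELEMENT: {
                    String name = in.readUTF();
                    handler.endElement("", name, name);
                    break;
                }
                case END_DOCUMENT:
                    handler.endDocument();
                    return;
                default:
                    throw new SAXException("Unknown event code: " + event);
            }
        }
    }
}

The whole document goes through this one small loop, which is exactly the kind of code HotSpot likes to compile.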
I would expect my approach to be faster than yours on modern virtual machines, because HotSpot can optimize that tight binary SAX reparsing loop, while your approach spreads the work over long runs of straight-line generated code and will never reveal any hotspots for the JIT to focus on.
I'd also be interested to see how the performance differs between HotSpot client and server, and how much it changes over several subsequent runs.
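Even a dumb harness along these lines would already tell us a lot (names are made up; Replayer.toSAX() stands for whichever of the two replay mechanisms is being measured; run it once with -client and once with -server):

import org.xml.sax.ContentHandler;
import org.xml.sax.helpers.DefaultHandler;

// Minimal benchmark skeleton: run the same replay many times so HotSpot gets
// a chance to compile it, and report each pass separately to see how the
// numbers evolve across subsequent runs.
public class ReplayBenchmark {

    interface Replayer {
        void toSAX(ContentHandler handler) throws Exception;
    }

    static void benchmark(String label, Replayer replayer, int passes,
                          int iterationsPerPass) throws Exception {
        ContentHandler sink = new DefaultHandler(); // swallow the events
        for (int pass = 0; pass < passes; pass++) {
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterationsPerPass; i++) {
                replayer.toSAX(sink);
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(label + " pass " + pass + ": " + elapsed + " ms");
        }
    }
}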
Too bad I don't have any time for this right now... but if you want to do it, it would be *extremely* useful, not only for you but also for the future of compiled/interpreted stuff in Java on the server side.
Stefano.