Hello,

I'm working on a project that uses Xalan to transform XML files into other XML files.

We are currently wondering whether this product can fit our needs in terms 
of robustness, performance, and ability to handle bulk documents.
My questions are the following:

*** Document size ***

I recently encountered the "No more DTM IDs are available" error message 
(using Xalan 2.5.0 on a JRE 1.3.1, so this is not the Xalan-embedded-in-JRE-1.4.X 
problem mentioned in the FAQ) while processing big documents (about 75K nodes 
in the source document).
I glanced through the source code and saw code related to the maximum number 
of DTM IDs (IDENT_DTM_NODE_BITS = 16). Does this number mean that the maximum 
number of nodes is about 65K? Apparently not, because I succeeded in 
transforming documents containing 100K nodes using a simpler stylesheet.
Is there an absolute limit on the size of document that can be processed, or 
does this limit depend on both the XML document AND its associated stylesheet?
How can the stylesheet be structured to optimize the allocation of DTM IDs?
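
For context, here is a minimal sketch of the kind of transformation setup 
involved (plain interpretive Xalan through the standard JAXP API; the file 
names are just placeholders for our real documents):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class BulkTransform {
    public static void main(String[] args) throws Exception {
        // Default (interpretive) Xalan processor obtained via JAXP.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer =
            factory.newTransformer(new StreamSource("stylesheet.xsl"));
        // The "No more DTM IDs are available" error is raised during this
        // call on the large documents.
        transformer.transform(new StreamSource("big-input.xml"),
                              new StreamResult("big-output.xml"));
    }
}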

*** Performance ***

Could the use of XSLTC improve the performance of my application? I notice 
tiny improvements, but as soon as I try to use that mode on large documents 
I encounter exceptions (the setup I use is sketched below).
Does anybody have hints on how to optimize a stylesheet (more precise than 
the ones in the FAQ)?
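
This is roughly how I select XSLTC (a sketch with placeholder file names; 
the factory class name is the XSLTC TransformerFactory shipped with 
Xalan 2.5.0, and the stylesheet is compiled once into a Templates object 
so it can be reused across documents):

import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltcTransform {
    public static void main(String[] args) throws Exception {
        // Ask JAXP for the XSLTC factory instead of the interpretive one.
        System.setProperty("javax.xml.transform.TransformerFactory",
            "org.apache.xalan.xsltc.trax.TransformerFactoryImpl");
        TransformerFactory factory = TransformerFactory.newInstance();

        // Compile the stylesheet once; each document then gets its own
        // Transformer from the compiled Templates object.
        Templates templates =
            factory.newTemplates(new StreamSource("stylesheet.xsl"));

        String[] inputs = { "doc1.xml", "doc2.xml" };
        for (int i = 0; i < inputs.length; i++) {
            Transformer transformer = templates.newTransformer();
            transformer.transform(new StreamSource(inputs[i]),
                                  new StreamResult(inputs[i] + ".out"));
        }
    }
}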

*** Experience and alternatives ***

Has anybody used Xalan in a similar context (bulk XML documents)? Any hints 
or tricks?
Are there any other Java XSLT processors offering better performance?



For information: on average, the ratio NodeGeneratedDoc / NodeSourceDoc is 
about 10.


Regards
