I'm planning to do XML -> text transformations (for tab-delimited output) and XML -> XSL-FO transformations (for FOP) on large XML datasets. The XML I'll be processing is 10-12 MB in size and will grow from there. Based on my planning, the XSLT will involve around 50 node traversals and will iterate over my XML dataset around 46,000 times. Prior to this, my Cocoon transformations haven't been nearly this big.

The amount of JVM memory I have to work with is limited (<256 MB), and the transformation will need to run in real time.
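For context, here's a minimal sketch of the equivalent standalone JAXP invocation I have in mind (file names are placeholders; Cocoon drives this through its own pipeline, so this is just to illustrate the shape of the problem):

    import javax.xml.transform.Templates;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class TabDelimitedTransform {

        public static void main(String[] args) throws Exception {
            // Compile the stylesheet once; a Templates object is
            // thread-safe and can be reused across requests, so the
            // stylesheet is parsed and compiled a single time even
            // though it's applied repeatedly.
            Templates templates = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource("toTabs.xsl"));  // placeholder name

            Transformer t = templates.newTransformer();

            // Stream the source and result so the application never
            // builds its own DOM. Note the XSLT engine still constructs
            // an internal source tree, which for a 10-12 MB document
            // will typically take several times that in heap; that is
            // the core of my memory concern.
            t.transform(new StreamSource("dataset.xml"),   // placeholder name
                        new StreamResult("dataset.txt"));  // placeholder name
        }
    }

My understanding is that Cocoon's TRAX transformer already caches the compiled stylesheet along these lines, so the real question is the per-request source tree for an input this size.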

Does anyone have experience dealing with large datasets like this?

TIA,
Tom