I'm generating reports from a 4 MB XML document. Even after
upping the freememory and heapsize parameters, performance is
disappointing.
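
For reference, here is roughly what I raised, assuming the stock
store-janitor block in cocoon.xconf (exact values from memory,
so treat the numbers as approximate):

    <store-janitor class="org.apache.cocoon.components.store.impl.StoreJanitorImpl">
        <!-- both raised well above the defaults -->
        <parameter name="freememory" value="4000000"/>
        <parameter name="heapsize"   value="128000000"/>
        <!-- how often, in seconds, the janitor checks memory -->
        <parameter name="cleanupthreadinterval" value="10"/>
        <parameter name="percent_to_free" value="10"/>
    </store-janitor>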

My pipeline starts with the big document and transforms it to a
different format, just as big. It then filters out much smaller
documents and transforms those to XHTML.
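
In sitemap terms it looks roughly like this (a sketch only; the
match pattern and file names are made up):

    <map:pipeline type="caching">
        <map:match pattern="report/*.html">
            <!-- the 4 MB source document -->
            <map:generate src="big.xml"/>
            <!-- restructure: different format, just as big -->
            <map:transform src="restructure.xsl"/>
            <!-- pull out one much smaller document, keyed off the URI -->
            <map:transform src="filter.xsl">
                <map:parameter name="id" value="{1}"/>
            </map:transform>
            <!-- render the small document as XHTML -->
            <map:transform src="to-xhtml.xsl"/>
            <map:serialize type="xhtml"/>
        </map:match>
    </map:pipeline>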

My guess is that even when the 4 MB document is cached, the DOM
has to be rebuilt before the filter transforms are applied. Is it
possible to cache the big document as a DOM?
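
What I'm imagining is something like this (the "dom" generator
type is purely hypothetical, just to illustrate what I mean):

    <!-- hypothetical: hand the cached document to the transformer
         as an in-memory DOM instead of re-parsing/rebuilding it -->
    <map:generate type="dom" src="big.xml"/>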

Or is that happening already?

-- 
Alan / [EMAIL PROTECTED] / http://engrm.com/
