Hi,

If you want to help improve the memory usage in a future version of FOP,
I suggest you follow the current CVS development. That work attempts to
address this issue along with a number of others.

As far as memory is concerned, the ideas are:
- have the smallest set of variables and objects required to represent
  the data
- keep the processing straightforward
- use and dispose of objects as soon as possible
- process output (i.e. pages) as soon as possible
- if objects need to be kept, make it possible to cache them
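Lloyd's technique below - emitting one fo:page-sequence per independent
section instead of a single huge sequence - can be sketched in XSLT. This
is only an illustration: the source element names (doc, section) and the
page-master geometry are assumptions, not taken from his actual stylesheet.

```
<!-- Hypothetical source structure: <doc><section>...</section>...</doc> -->
<xsl:template match="doc">
  <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
    <fo:layout-master-set>
      <fo:simple-page-master master-name="page"
          page-height="29.7cm" page-width="21cm" margin="2cm">
        <fo:region-body/>
      </fo:simple-page-master>
    </fo:layout-master-set>
    <!-- One page-sequence per section, so FOP can render and discard
         each sequence's areas before parsing the next one. -->
    <xsl:for-each select="section">
      <fo:page-sequence master-reference="page">
        <fo:flow flow-name="xsl-region-body">
          <xsl:apply-templates/>
        </fo:flow>
      </fo:page-sequence>
    </xsl:for-each>
  </fo:root>
</xsl:template>
```

This only helps when the sections really are independent, since page
numbering and layout state reset at each page-sequence boundary.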


On 2001.11.09 19:31 Lloyd McKenzie/CanWest/IBM wrote:
> 
> I'm not sure if this will help or not, but it worked well for me.
> 
> I was trying to process a 64 MB document, and it was taking DAYS and
> eating gobs of memory.  I did some wading through the code, looking for
> ways to optimize.  I found a couple of places to reduce memory use, but
> nothing substantial.  (I plan to run some analysis on my changes, and
> if they make a difference of more than 5%, I'll submit them for
> inclusion in a future release.)  However, in my wandering through the
> code, I realized that FOP parses and stores everything until it reaches
> the end of a page-sequence.  My XML document was one BIG page sequence,
> so FOP was parsing the entire thing before it would start to generate
> output.  As my XML consisted of a large number of fairly independent
> sections, I modified my XSLT to put each section into a different
> page-sequence.  The result is that FOP only parses objects to the end
> of the page-sequence, spits out the pages for that sequence, and
> garbage-collects the objects before moving on.  The only data retained
> are link references.  These eat up a bit of memory, but nothing as bad
> as all of the area references needed to draw the page :>
> 
> Hope  this helps,
> 
> 
> Lloyd

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]