On Nov 20, 2007, at 17:53, Chris Bowditch wrote:

Hi Chris

Chris Bowditch wrote:

<snip/>

Tested it here, and I don't see any immediate leakage anymore.
One 71-page document, and with -Xmx256M, the heap never exceeded +/-85M. Each document rendered consistently within 3-4s (ran up to 1000 consecutive renderings).
Well, memory seems okay for the first 1000 or so. I hate to be the bearer of bad news, but your latest patch doesn't seem any better. It's taking about 5s for 2-3 page documents on a Quad Core 2.66GHz processor! It's running at the heap limit too, currently at 13000 documents. I'm sure it will reach an OOM Error at some point during the evening :(

The OOM Error did indeed occur, at 20900 documents, with the latest patch. I need a memory-leak-free build of FOP for use in my own project, so I have committed Jeremias' patch for now. If you want me to test any further changes, I will be happy to help.


I've made yet another attempt to simplify and correct the design a bit (and hopefully fix the leak as well).

The implementation of the CacheEntry was not what it should have been...
By using a WeakReference as a member of the entry, I was making things more difficult than they needed to be. The right way to go is to subclass WeakReference and to perform the cleanup via ReferenceQueues (as Jeremias suggested earlier). So that's what happens now: each CacheSegment has its own queue of stale entries. Cleanup is triggered unconditionally with each put(), and simply polls the reference queue. I removed the threading altogether; the cleanup/rehash is now always performed in the main thread.


Let me know if it works on your end.

Cheers

Andreas



Attachment: propcache.diff
