Anders,
Thank you for your tests and your time.
At the moment I have no possibility to investigate this issue further. My deadline is approaching and I cannot afford to spend more time on it. Moreover, since lazy loading of blobs is not possible with JPA/TopLink, I will have to resort to loading them by hand, which also lets me write them through a stream. Since I have verified that this approach does not cause me any problems, I will stick with it.
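For the record, this is roughly the shape of the plain JDBC code I intend to fall back to, writing and reading the blob through streams instead of going through JPA. The connection URL, table and column names below are only placeholders, not my real schema:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobByHand {
    public static void main(String[] args) throws Exception {
        // Placeholder database, table and file; assumes something like
        // CREATE TABLE FILEDATA (ID INT PRIMARY KEY, CONTENT BLOB(100M)) already exists.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
        conn.setAutoCommit(false);
        File file = new File("some-big-file.bin");

        // Write: stream the file into the BLOB column instead of building a byte[] in memory.
        PreparedStatement insert =
                conn.prepareStatement("INSERT INTO FILEDATA (ID, CONTENT) VALUES (?, ?)");
        InputStream in = new FileInputStream(file);
        try {
            insert.setInt(1, 1);
            insert.setBinaryStream(2, in, (int) file.length());
            insert.executeUpdate();
            conn.commit();
        } finally {
            in.close();
        }

        // Read "by hand": fetch the BLOB as a stream only when it is actually needed.
        PreparedStatement select =
                conn.prepareStatement("SELECT CONTENT FROM FILEDATA WHERE ID = ?");
        select.setInt(1, 1);
        ResultSet rs = select.executeQuery();
        if (rs.next()) {
            InputStream content = rs.getBinaryStream(1);
            // ... consume the stream in chunks here ...
            content.close();
        }
        rs.close();
        conn.commit();
    }
}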
Thanks again.

Anders Morken wrote:
PS. My small, slightly modified test case based on yours successfully
churned through >1500 iterations of 6MB blob insertion with Derby
10.2.2.0 in the classpath and a 100-page page cache while I wrote this
email. I moved the prepareStatement outside the loop, replaced your
copy/paste with a single while(true) {...} loop, and added a counter to
count the number of iterations, but it is otherwise cut-and-pasted from
what you sent.
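
For clarity, the modified loop looks roughly like this - the database URL, table layout and the name of the 6MB source file are placeholders, so treat it as a sketch of the structure rather than the literal test code:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobInsertLoop {
    public static void main(String[] args) throws Exception {
        // Placeholders: the ~6MB source file and the table from the original test.
        File blobFile = new File("blob-6mb.bin");
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
        conn.setAutoCommit(false);

        // prepareStatement hoisted out of the loop.
        PreparedStatement ps =
                conn.prepareStatement("INSERT INTO FILEDATA (ID, CONTENT) VALUES (?, ?)");

        int iterations = 0;
        while (true) {
            InputStream in = new FileInputStream(blobFile);
            try {
                ps.setInt(1, iterations);
                ps.setBinaryStream(2, in, (int) blobFile.length());
                ps.executeUpdate();
                conn.commit();
            } finally {
                in.close();
            }
            // Counter so we can see how far the test gets before it fails (if it does).
            iterations++;
            System.out.println("Inserted blob #" + iterations);
        }
    }
}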

Just slightly related: a 1000-page page cache bombs after 5 iterations,
a 500-page page cache seems to survive at least 100 iterations, and a
750-page page cache also survives at least 200 iterations.

Taking a look at the heap dump (produced by giving Java the
-XX:+HeapDumpOnOutOfMemoryError parameter on a 1000-page run and then
waiting for the expected crash) with jhat, I can't find any big memory
eaters other than the page cache. In total the cache manager referenced
about 50MB of heap when Java died. That includes every object reachable
from the page objects in the cache, so it's not really the "size of the
cache", but...
Well, at least it indicates that you can't just multiply the page cache
size by the page size (it seemed to be 32K in this case - at least that was
the size of the page data byte arrays) to find how much memory the cache
is referencing: 1000 pages times 32KB is only about 31MB, while the heap
dump showed roughly 50MB referenced. =)

