Hi all,

I'm running into a problem in a sequential image processing application that
seems to exercise a particularly bad operation mode of the RTL heap manager (on
Windows, but I don't think that matters here).
The workload looks like this: load a "normal sized" image, do some processing,
calculate a few values, create a thumbnail in memory, move on to the next. By
"normal sized" I mean something like 35MB uncompressed (regular DSLR
resolution) and smaller. The code is threaded, but I'm describing a single
worker here.
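
Distilled down to just the allocation pattern, one worker does roughly this
(an untested simplification; the sizes are representative guesses, the real
code of course decodes actual files):

program FragTest;
{$mode objfpc}

const
  FileCount = 40;
  ImageSize = 35 * 1024 * 1024;   // "normal sized" uncompressed image
  ThumbSize = 12 * 1024;          // small thumbnail, kept for the whole run

var
  i: Integer;
  Img: Pointer;
  Thumbs: array[1..FileCount] of Pointer;
begin
  for i := 1 to FileCount do
  begin
    GetMem(Img, ImageSize);         // large, short-lived buffer
    FillChar(Img^, ImageSize, 0);   // touch the pages so they get committed
    GetMem(Thumbs[i], ThumbSize);   // small, long-lived block in between
    FreeMem(Img);                   // big block freed, thumbnail stays behind
  end;
  WriteLn('done - check the working set in Task Manager now');
  ReadLn;
end.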

This appears to trigger insane memory fragmentation: after a run over 40 test
files, I'm left with a 250MB working set, while GetHeapStatus reports only 600k
used (which seems correct; my data structures add up to 500k). That memory is
never released to the OS. In fact, I've had to set the LARGEADDRESSAWARE (LAA)
flag just so the program can work on the real data at all, with some 2.6GB
working set for 1.07MB of used memory.
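
For reference, I'm watching it with a little helper like this (the working set
figure itself comes from Task Manager); CurrHeapUsed is what my code still
holds, CurrHeapSize is what the heap manager keeps reserved from the OS:

procedure DumpHeap(const Tag: string);
var
  hs: TFPCHeapStatus;   // GetFPCHeapStatus lives in the System unit
begin
  hs := GetFPCHeapStatus;
  WriteLn(Tag, ': used=', hs.CurrHeapUsed div 1024,
    'k size=', hs.CurrHeapSize div 1024,
    'k maxused=', hs.MaxHeapUsed div 1024, 'k');
end;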

Is there any tuning option that could maybe fix this? Some growing or
reallocation flag, or something similar?


As a quick test, I've tried my partial port of FastMM4, which works just fine:
no fragmentation, and memory use peaks at 40MB, which fits the largest image.
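
For anyone who wants to experiment: an alternative manager plugs in via
SetMemoryManager, the same mechanism the stock cmem unit uses. Here is a
minimal sketch that simply delegates everything to the Win32 process heap (not
FastMM4, just to show the wiring; such a unit has to come first in the
program's uses clause, before anything allocates):

unit WinHeapMM;
{$mode objfpc}

interface

implementation

uses
  Windows;

function WGetMem(Size: PtrUInt): Pointer;
begin
  Result := HeapAlloc(GetProcessHeap, 0, Size);
end;

function WFreeMem(P: Pointer): PtrUInt;
begin
  if P = nil then
    Exit(0);
  Result := HeapSize(GetProcessHeap, 0, P);
  HeapFree(GetProcessHeap, 0, P);
end;

function WFreeMemSize(P: Pointer; Size: PtrUInt): PtrUInt;
begin
  Result := WFreeMem(P);
end;

function WAllocMem(Size: PtrUInt): Pointer;
begin
  // HEAP_ZERO_MEMORY gives the zero-filled block AllocMem promises
  Result := HeapAlloc(GetProcessHeap, HEAP_ZERO_MEMORY, Size);
end;

function WReAllocMem(var P: Pointer; Size: PtrUInt): Pointer;
begin
  if P = nil then
    P := WGetMem(Size)
  else if Size = 0 then
  begin
    WFreeMem(P);
    P := nil;
  end
  else
    P := HeapReAlloc(GetProcessHeap, 0, P, Size);
  Result := P;
end;

function WMemSize(P: Pointer): PtrUInt;
begin
  Result := HeapSize(GetProcessHeap, 0, P);
end;

var
  MM: TMemoryManager;

initialization
  GetMemoryManager(MM);   // status callbacks stay pointing at the RTL heap
  MM.GetMem      := @WGetMem;
  MM.FreeMem     := @WFreeMem;
  MM.FreeMemSize := @WFreeMemSize;
  MM.AllocMem    := @WAllocMem;
  MM.ReAllocMem  := @WReAllocMem;
  MM.MemSize     := @WMemSize;
  SetMemoryManager(MM);

end.

Used as "program Foo; uses WinHeapMM, SysUtils, ...;" - the FastMM4 port hooks
in exactly the same way, only the functions behind the pointers differ.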

But since this is such a reproducible test case, maybe there is something that
can be done to improve the RTL MM?

-- 
Regards,
Martok

