On 10/19/2012 10:29 PM, Liam R E Quin wrote:
so if one would allocate a target memory area in RAM first and then fill that
memory map with the to-be-saved data, one could open a file handle right from
the start and copy the memory portions to disk as they get filled by the compression
The problem with this is that you don't know in advance how much memory
to allocate or where to write, because the compression ratio varies depending
on patterns of light and dark (for example) in the image.
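That variability is easy to demonstrate with zlib (the DEFLATE implementation
PNG uses); the buffer contents and sizes below are made up for illustration:

```python
# Compressed size depends heavily on content: a flat region shrinks to
# almost nothing, while high-entropy detail barely compresses at all,
# so the output size cannot be known before compressing.
import os
import zlib

flat = bytes(100_000)        # uniform region, e.g. solid black
noisy = os.urandom(100_000)  # high-frequency detail / noise

small = len(zlib.compress(flat))
large = len(zlib.compress(noisy))
print(small, large)  # small is a few hundred bytes, large ~100 KB
```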
I've never actually coded PNG compression, so correct me if I'm wrong.
If you have n cores, you allocate n blocks of memory, each sized for the
worst-case compression outcome, and start compressing blocks in parallel;
that way you use only a reasonably small amount of memory (n times the
worst-case size of a PNG tile). For each block that is ready and can be
appended to the disk file, you write out the block and start compressing
the next unhandled block. Of course this is asynchronous, as some blocks
will finish compressing while the next-to-be-written-out one is still
being worked on, but you still speed up compression by a big factor and
still have a small, static compression buffer.
If you add a write-out queue you can achieve more effective parallelism
while still limiting memory usage.
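The scheme described above can be sketched in Python (a hypothetical
illustration, not GIMP code): n workers compress tiles in parallel, at most
n compressed blocks are held in memory at once, and blocks are written to
the file strictly in order even though they finish asynchronously. The tile
data here is fake, and real PNG encoding (filtering, IDAT framing) is omitted:

```python
import io
import zlib
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def compress_tile(tile: bytes) -> bytes:
    # Stand-in for per-tile PNG compression.
    return zlib.compress(tile)

def parallel_save(tiles, out, n_workers=4):
    """Compress tiles on n_workers threads; write blocks in file order.

    At most n_workers compressed blocks are in flight at once, so the
    compression buffer stays small and effectively static."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pending = deque()  # futures, in submission (i.e. file) order
        for tile in tiles:
            pending.append(pool.submit(compress_tile, tile))
            if len(pending) >= n_workers:
                # The oldest block must be written next; blocking on it
                # bounds memory even though later blocks may already
                # have finished compressing.
                out.write(pending.popleft().result())
        while pending:
            out.write(pending.popleft().result())

# Fake 4 KiB tiles standing in for image data.
tiles = [bytes([i % 256]) * 4096 for i in range(8)]
buf = io.BytesIO()
parallel_save(tiles, buf, n_workers=3)
```

The deque acts as the write-out queue: completed blocks wait their turn
rather than being flushed out of order.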
Just my 2c.
Or another approach would be to make a copy of the image in RAM and do the save
in the background. That way, when using the same file name, one would even
narrow the window during which the file is in a transitional state to a minimum.
It's done in a separate process right now, but copying the image in
memory, if it's, say, a one gigabyte image, might be problematic. And
the images that need to be sped up are the fast ones.
It might be faster in some cases for gimp to do a "merge visible layers"
before a save; I don't know.
gimp-user-list mailing list