On 2012-10-09 John Reiser wrote:
> I'm interested in speeding up compression for mksquashfs, which uses
> independent blocks of input length 2**17 bytes.  I have in mind a
> specialized match finder which would take advantage of the small
> fixed block size, and tailor its memory usage to the common L2 cache
> size of 256KB.  Is anyone else looking into this?

I'm not aware of anyone working on something like this.

I think one needs to modify more than the match finder to fit all data
structures into 256 KiB. For example, the dictionary buffer has some
fixed extra size to prevent too frequent memmove calls. Even then,
256 KiB might be hard to achieve without affecting the compression ratio
much. You may also need to use mode=fast, since mode=normal uses
slightly more memory; maybe that is acceptable for you, since you are
looking for fast compression anyway.
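As a rough sketch of the settings discussed above, here is how a filter
chain along these lines could look using Python's stdlib lzma module
(which wraps liblzma); the specific nice_len and match finder values are
my own illustrative choices, not anything mksquashfs actually uses:

```python
import lzma

BLOCK_SIZE = 1 << 17  # 128 KiB, the independent block size in question

# Illustrative LZMA2 filter chain: clamp the dictionary to one block,
# use mode=fast, and pick a hash-chain match finder, which needs less
# memory than the binary-tree finders used by mode=normal presets.
FILTERS = [{
    "id": lzma.FILTER_LZMA2,
    "dict_size": BLOCK_SIZE,  # a larger dictionary cannot help with
                              # independently compressed blocks
    "mode": lzma.MODE_FAST,
    "mf": lzma.MF_HC4,        # hash chains instead of binary trees
    "nice_len": 64,
}]

def compress_block(block: bytes) -> bytes:
    # The .xz container records the filter chain, so plain
    # lzma.decompress() can decode the result.
    return lzma.compress(block, format=lzma.FORMAT_XZ, filters=FILTERS)
```

Note that this only tunes the encoder parameters; it does not shrink the
fixed overhead in the dictionary buffer mentioned above, which would
require changing liblzma itself.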

If you have trouble reading the code, see also XZ for Java, which I
think is currently the most readable version. (I'm not suggesting that
you should use the Java code; I just mean that it might help in
understanding liblzma.) liblzma should be made more readable too.

-- 
Lasse Collin  |  IRC: Larhzu @ IRCnet & Freenode