On Sat, Apr 16, 2011 at 02:46, Zeev Tarantov <[email protected]> wrote:
> On Sat, Apr 16, 2011 at 02:11, Dan Magenheimer <[email protected]> wrote:
>>> > On Fri, Apr 15, 2011 at 05:21, Greg KH <[email protected]> wrote:
>>> You need to show a solid use case for why to switch to this code in
>>> order to have it accepted.
>>
>> In particular, zram and zcache both operate at page (4K) granularity,
>> so it would be interesting to see the range of performance vs.
>> compression ratio for Snappy vs. LZO on a large test set of binary and
>> text pages.  I mean one page per test... I'm no expert, but I believe
>> some compression algorithms have a larger startup overhead and so may
>> not be as useful for compressing "smaller" (e.g. 4K) streams.
>
> Neither LZO nor Snappy does anything like transmitting Huffman trees
> before the data itself, so there are no startup costs. They're simply
> too fast to afford Huffman coding.
> I can quickly make a user space tester that compresses the input 4KB
> at a time.

You can get block_compressor.c from:
https://github.com/zeevt/csnappy/
It works with libcsnappy.so in the same directory if you export
LD_LIBRARY_PATH (that's what I get for not using libtool).
It compresses each page of a single file separately and reports the
compressed length of each page. It also writes the compressed data and
a header with all the lengths, so the file can be restored later. Both
LZO and CSnappy are supported.
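
For anyone who wants to reproduce the LZO side of such a per-page
measurement without pulling the repo, here is a minimal sketch using
liblzo2's lzo1x_1_compress (LZO1X-1 is the variant zram uses). This is
illustrative only, not the actual block_compressor.c, and the CSnappy
side would look analogous:

/* Compress a file one 4 KiB page at a time with LZO1X-1 and print the
 * compressed length of each page.  Link with -llzo2.  A trailing
 * partial page is ignored, matching whole-page semantics. */
#include <stdio.h>
#include <lzo/lzo1x.h>

#define PAGE_SZ 4096

int main(int argc, char **argv)
{
	unsigned char in[PAGE_SZ];
	/* LZO worst case: incompressible input can grow slightly */
	unsigned char out[PAGE_SZ + PAGE_SZ / 16 + 64 + 3];
	/* lzo_align_t keeps the work memory suitably aligned */
	lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
			   / sizeof(lzo_align_t)];
	unsigned long page = 0;
	FILE *f;

	if (argc != 2 || lzo_init() != LZO_E_OK)
		return 1;
	f = fopen(argv[1], "rb");
	if (!f)
		return 1;
	while (fread(in, 1, PAGE_SZ, f) == PAGE_SZ) {
		lzo_uint out_len = sizeof(out);
		if (lzo1x_1_compress(in, PAGE_SZ, out, &out_len,
				     wrkmem) != LZO_E_OK)
			return 1;
		printf("page %lu: %d -> %lu bytes\n",
		       page++, PAGE_SZ, (unsigned long)out_len);
	}
	fclose(f);
	return 0;
}

Summing the per-page output over a representative set of pages gives
the compression-ratio side of the numbers Dan asked for; wrapping the
loop with a timer gives the throughput side.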

-Z.T.