On Wed, Oct 16, 2013 at 01:42:34PM +0900, KONDO Mitsumasa wrote:
> (2013/10/15 22:01), k...@rice.edu wrote:
> >Google's lz4 is also a very nice algorithm with 33% better compression
> >performance than snappy and 2X the decompression performance in some
> >benchmarks also with a bsd license:
> >
> >https://code.google.com/p/lz4/
> If we judge only by performance, we will select lz4. However, we should
> also consider other important factors: software robustness, track record,
> bug-fix history, and so on. If unknown bugs appear, can we fix them or
> improve the algorithm? That seems very difficult, because we would only
> be using the code without understanding its algorithms. Therefore, I
> think we had better select software that is robust and has a larger user
> base.
> Regards,
> --
> Mitsumasa KONDO
> NTT Open Source Software

Those are all very good points. lz4, however, is already being used by
Hadoop, it is implemented natively in the Linux 3.11 kernel, and the BSD
port of the ZFS filesystem supports the lz4 algorithm for on-the-fly
compression. With more and more CPU cores available in modern systems, an
algorithm with very fast decompression speeds makes it practical to store
data, even in memory, in compressed form, reducing space requirements in
exchange for a higher CPU-cycle cost. The ability to make those sorts of
trade-offs would really benefit from a pluggable compression algorithm
interface.
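To make the trade-off concrete, here is a minimal sketch of keeping a value
compressed in memory and paying the decompression cost only on access. It
uses Python's stdlib zlib purely as a stand-in codec; in a pluggable design,
an lz4 binding (or any other codec exposing a compress/decompress pair)
would slot into the same two-function interface.

```python
# Sketch: store data compressed in memory, trading CPU cycles on access
# for reduced space. zlib is a stand-in; any codec with a compress()/
# decompress() pair (e.g. an lz4 binding) could be injected instead.
import zlib


class CompressedCell:
    """Holds one bytes value in compressed form, decompressing on read."""

    def __init__(self, data: bytes, compress=zlib.compress,
                 decompress=zlib.decompress):
        self._decompress = decompress
        self._blob = compress(data)  # CPU cost paid once, at store time

    def get(self) -> bytes:
        # CPU cost paid here, on every access.
        return self._decompress(self._blob)

    def stored_size(self) -> int:
        return len(self._blob)


if __name__ == "__main__":
    raw = b"highly repetitive payload " * 1000
    cell = CompressedCell(raw)
    assert cell.get() == raw              # round-trip is lossless
    assert cell.stored_size() < len(raw)  # space saved on repetitive data
```

With a fast-decompression codec like lz4, the per-access cost of `get()`
stays small enough that this pattern is viable even for hot in-memory data.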


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)