2012-07-10 15:49, Edward Ned Harvey wrote:
> If you use compression=on, or lzjb, then you're using very fast compression.
> Should not hurt performance, in fact, may gain performance for highly
> compressible data.
>
> If you use compression=gzip (or any gzip level 1 thru 9) then you're using a
> fairly expensive compression algorithm.  It will almost certainly hurt
> performance, but you may gain more disk space. (Probably not.)

Well, as far as the discussion relates to zones: "WORM"
(write once, read many) data, such as the OS image of the
local zone, can "suffer" gzip-9 compression during
installation of the zone and its applications. This makes
the files consume fewer disk sectors, so subsequent reads
can be faster. Afterwards you can enable lzjb on the same
dataset, so that further writes (of logs, etc.) are
compressed faster.
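The install-then-switch idea above could look something like this (pool and dataset names are hypothetical; this relies on the fact that ZFS compression settings only affect newly written blocks):

```shell
# Heavy compression while the zone's mostly-read-only image is written once:
zfs set compression=gzip-9 rpool/zones/myzone

# ... install the zone and its applications here ...

# Cheaper compression for everything written afterwards (logs, temp files).
# The already-installed data stays gzip-9-compressed on disk, since
# changing the property does not rewrite existing blocks:
zfs set compression=lzjb rpool/zones/myzone
```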

In fact, you might want to "delegate" a dataset to the
zone and create several filesystems in it with different
compression options for your logs, application data and
perhaps databases (which may be sensitive to I/O block size
and dislike external compression in favor of speed). For
databases there is the "zle" compression, which only
compresses blocks filled with zeroes - that saves some
space when your DB preallocates a huge storage file but
only uses a few kilobytes of it.
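A sketch of such a layout, again with hypothetical names (the zone-side attachment is done with zonecfg's "add dataset"):

```shell
# One parent dataset delegated to the zone, children tuned per workload:
zfs create rpool/zones/myzone/data
zfs create -o compression=lzjb   rpool/zones/myzone/data/logs
zfs create -o compression=gzip-9 rpool/zones/myzone/data/app
# zle only squashes all-zero blocks - near-free CPU-wise, and it reclaims
# the empty space in preallocated database files; recordsize matched to
# the DB page size is a common companion tweak:
zfs create -o compression=zle -o recordsize=8k rpool/zones/myzone/data/db
# Mark the tree as zone-managed so the zone administers it:
zfs set zoned=on rpool/zones/myzone/data
```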

I am not qualified to state whether gzip decompression is
slower during reads than lzjb, but remember that all of this
relies on the general assumption that current CPUs are
overqualified for their jobs and have lots of spare cycles,
so (de)compression has little impact on real work anyway.
Also, decompression tends to be faster than compression,
because there is little to no analysis to do - only matching
compressed tags against a dictionary of original data snippets.

//Jim Klimov
zfs-discuss mailing list
