On 2017-08-10 07:32, Austin S. Hemmelgarn wrote:
Also didn't think to mention this, but I could see the max level being
very popular for use with SquashFS root filesystems used in LiveCDs.
Currently, they have to choose between read performance and image size,
while zstd would provide both.
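For concreteness, assuming the in-flight squashfs-tools zstd support
(the option spelling below follows mksquashfs's existing per-compressor
-X convention, so treat it as an assumption until that lands), picking
the maximum level at image-build time would look something like:

    mksquashfs rootfs/ root.sfs -comp zstd -Xcompression-level 22

with 22 being zstd's current maximum level.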
On 2017-08-10 04:30, Eric Biggers wrote:
> On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
>> It can compress at speeds approaching lz4, and quality approaching lzma.
> Well, for a very loose definition of "approaching", and certainly not
> at the same time. I doubt there's a use case for using the highest
> compression levels in kernel mode --- especially the ones using
> zstd_opt.h.
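To make that tradeoff concrete, here is a rough userspace sketch using
the libzstd simple API (deliberately not the kernel wrappers from this
patch set): it compresses one synthetic buffer at the lowest and
highest levels, the highest being the range that goes through the
zstd_opt.h code Eric mentions. The input is a toy, so the resulting
numbers are illustrative only.

/* Compress the same buffer at zstd's lowest and highest levels and
 * print the output sizes, to show the two ends of the speed/ratio
 * range. Build with: gcc demo.c -lzstd */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

static void try_level(const void *src, size_t src_size, int level)
{
	size_t bound = ZSTD_compressBound(src_size);
	void *dst = malloc(bound);
	size_t n;

	if (!dst)
		return;
	n = ZSTD_compress(dst, bound, src, src_size, level);
	if (ZSTD_isError(n))
		fprintf(stderr, "level %d: %s\n", level,
			ZSTD_getErrorName(n));
	else
		printf("level %2d: %zu -> %zu bytes\n", level,
		       src_size, n);
	free(dst);
}

int main(void)
{
	static char buf[1 << 16];
	size_t i;

	/* Synthetic, mildly repetitive input. */
	for (i = 0; i < sizeof(buf); i++)
		buf[i] = "abcabcabd"[i % 9];

	try_level(buf, sizeof(buf), 1);                /* fastest */
	try_level(buf, sizeof(buf), ZSTD_maxCLevel()); /* slowest */
	return 0;
}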
Large data-sets with WORM access patterns and infrequent writes
immediately come to mind as a use case for the highest compression
level.

As a more specific example, the company I work for has a very large
amount of documentation, and we keep all old versions. This is all
stored on a file server which is currently using BTRFS. Once a
document is written, it's almost never rewritten, so write performance
only matters for the first write. However, documents are read back
pretty frequently, so we need good read performance. As of right now,
the system is set to use LZO compression by default, and when a new
version of a document is added, the previous version gets re-compressed
with zlib, which actually results in pretty significant space savings
most of the time. I would absolutely love to use zstd at the highest
compression level on this system, because most people don't care how
long it takes to write a file out, but they do care how long it takes
to read one back (even if it's an older version).
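For what it's worth, once zstd lands in btrfs and btrfs-progs, the
fast-default-plus-heavy-recompression scheme above should map onto the
same commands we already use for LZO and zlib; the device and paths
below are made up, and "zstd" as an accepted value is an assumption
until the userspace bits catch up:

    # fast default for newly written documents
    mount -o compress=zstd /dev/sdb /srv/docs

    # later, recompress an old version with the heavier codec
    btrfs filesystem defragment -czstd /srv/docs/old/manual-v1.odt

The defragment -c trick is the same one currently used to migrate
files from LZO to zlib, so only the compressor name would change.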