> On Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) 
> <henrik.cedn...@filmlance.se> wrote:
> What's the normal way to deal with compression? Dump uncompressed and use 
> something that threads better to compress the dump?

Most likely your zlib is screwed up somehow: maybe it wasn't optimized 
properly by the C compiler, or something is off with the compression 
settings. The CPU should easily compress faster than the disks can read.
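A quick sanity check for that (my own sketch, not something from the thread): time gzip -6 on a fixed local input, independent of the database, to see whether the compressor itself is pathologically slow. /dev/zero is just a stand-in input; real dump data compresses more slowly, but this shows the order of magnitude your build can manage.

```shell
# Generate a fixed-size test input (64 MiB of zeros, a stand-in for real data).
head -c $((64 * 1024 * 1024)) /dev/zero > sample.bin

# Time a single gzip -6 pass; a healthy zlib build should finish this
# in a few seconds at most on modern hardware.
start=$(date +%s)
gzip -6 -c sample.bin > sample.bin.gz
end=$(date +%s)

echo "compressed $(wc -c < sample.bin) bytes in $((end - start))s -> $(wc -c < sample.bin.gz) bytes"
```

If that runs far slower than the disk's read rate, the zlib/gzip build is the suspect, not the dump tool.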

Some years ago I studied this across a very wide range of formats, and 
found that gzip -6 offered the best ratio of size reduction to CPU time; 
xz was not yet available at the time, though.

If I were you, I would first pipe the uncompressed output through a 
separate compression command. That lets you experiment with flags and 
threads, and you automatically get another process for the kernel to 
schedule on other CPUs: an easy multi-core win with minimal work.
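A sketch of that pipeline, assuming mysqldump as the dump tool (substitute pg_dump or whatever you use); here a seq-based stand-in replaces the real dump so the example runs without a database:

```shell
# Stand-in for the real dump command, e.g.: mysqldump --single-transaction mydb
dump() { seq 1 100000; }

# The compressor runs as its own process, so the kernel can put it on
# another core while the dump keeps reading from disk.
dump | gzip -6 > dump.sql.gz

# Round-trip check: the last line of the dump should survive compression.
gunzip -c dump.sql.gz | tail -n 1
```

Swapping gzip for another compressor, or changing its flags, then requires no change to the dump side at all.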

After that, xz is effectively a standard tool now and has xz -T for 
cranking up threads with little extra effort. But it can be fairly slow, 
so you will probably need to lower the compression level; do some timing 
tests first. I would benchmark on a medium-sized table, a bit over the 
size of system RAM, rather than dumping the whole big DB, and run it a 
couple of times until the numbers look right.
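That benchmarking step could look something like the following sketch. The dump function is again a stand-in for dumping one table, and the level choices are just starting points, not recommendations:

```shell
# Stand-in for dumping one medium-sized table to a file.
dump() { seq 1 200000; }
dump > table.sql

# Try a few xz presets with threading and report time and output size.
for level in 1 3 6; do
  start=$(date +%s)
  xz -T0 -"$level" -k -f table.sql   # -T0: one thread per core; -k keeps the input
  end=$(date +%s)
  echo "level=$level seconds=$((end - start)) bytes=$(wc -c < table.sql.xz)"
done
```

Pick the lowest level whose output size you can live with; on text-heavy SQL dumps the size difference between presets is often smaller than the time difference.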

