Achim Gratz writes:
> I've had 16 cores to run that on and those stayed at 2.8GHz all the
> time, so it didn't take too long. Here at home I've tasked two slower
> 4-core machines with it, so they were running a few hours of wall-time.
> I might try long-range matching to get even better compression at work
> since I've got 128GB memory now to see if there's an improvement. At
> home I've run out of 8GiB memory more than once even with a single
> compression for the largest debuginfo packages.
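For reference, the long-range matching mentioned above corresponds to zstd's `--long` option. A minimal sketch of the invocations involved (filenames are hypothetical, not the actual packages discussed here):

```shell
# Compress with long-range matching enabled. --long=27 uses a 128 MiB
# match window (the default for --long); windows up to --long=31 (2 GiB)
# are possible but raise memory requirements sharply, for both the
# compressor and the decompressor.
zstd -q -f -19 --long=27 -T4 -o packages.tar.zst packages.tar

# Window logs above 27 must also be allowed at decompression time,
# either with a matching --long or with --memory:
zstd -q -d -f --long=27 -o packages.tar packages.tar.zst
```

The `-T4` flag runs four worker threads per process, which is how several such compressions can be spread across a multi-core machine.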
I used some spare cycles today to look into that. It turns out that if you enable long-range matching, each zstd process reserves about 12GB of memory, which Windows insists on "committing", i.e. your page file must be large enough to hold all committed memory. If the page file is dynamically sized, it will never grow larger than some fixed percentage of the free space on that drive, so I promptly ran into that limit. As best I can tell, that memory is never actually used; the actual VM in use stays below 4GB per zstd process.

So I just blew up pagefile.sys manually to 384GiB (224GiB would have sufficed, but anyway) and started 20 parallel compressions. Peak usage was 48GiB of the 128GiB in the machine, and everything was done in less than 50 minutes (the four extra threads not backed by a CPU helped keep the machine loaded while it was spinning up new compressions for the smaller files, which are dominated by I/O).

The compression ratio only improved by about 1% over the non-long-range option, to 103% vs. the original data, so using that option isn't really necessary or recommended.

Regards,
Achim.
--
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+

Wavetables for the Terratec KOMPLEXER:
http://Synth.Stromeko.net/Downloads.html#KomplexerWaves