On 17.08.2018 at 20:49, Achim Gratz wrote:
While it seems like I'm talking to myself, in case anybody is listening: I've had 16 cores to run that on and those stayed at 2.8GHz all the time, so it didn't take too long. Here at home I've tasked two slower 4-core machines with it, so they were running a few hours of wall-time. I might try long-range matching to get even better compression at work since I've got 128GB memory now to see if there's an improvement. At home I've run out of 8GiB memory more than once even with a single compression for the largest debuginfo packages. Incidentally, these packages always take the longest time to compress, so if you want to do a parallel mass conversion it's advisable to start these first (keeping an eye on available memory) and have the faster compression of the smaller package archives fill the remaining CPU/time.
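The scheduling advice above (start the large, slow-to-compress debuginfo packages first and let the smaller archives fill the remaining CPU time) can be sketched roughly as below. This is only an illustration, not the actual setup conversion: gzip stands in for the real compressor, and the directory layout and job count are assumptions.

```shell
# compress_largest_first: compress every *.tar in a directory,
# largest file first, running up to 4 jobs in parallel.
# gzip is a stand-in for the real recompression; "$1" is assumed
# to be a directory of package archives.
compress_largest_first() {
  dir=$1
  # ls -S sorts by size, descending, so the biggest (slowest) archives
  # start first; xargs -P 4 keeps four compression jobs running.
  ls -S "$dir"/*.tar | xargs -r -P 4 -n 1 gzip -9
}
```

Starting largest-first avoids the situation where one huge debuginfo package begins near the end of the batch and becomes a straggler that the remaining idle cores cannot help with.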
The memory and time requirements don't seem so appealing to me, as my machine has only 8GiB and the debuginfo on 32-bit is already a time-consuming effort.
I've installed and updated over a dozen machines by now with this, so I'm reasonably certain that the implementation in setup is OK. Regards, Achim.
