Have you experimented with dividing those target disks into smaller pieces,
using tar, so that Amanda isn't doing a level 0 on all the parts on the same day?

I've divided some disks as far as a*, b*, c*, ... z*, plus Other (to catch
capitals, numbers, or future additions). I've found that each piece must have
SOME content, or tar fails. So Other always contains some small portion, and
non-existent letters are skipped and get caught by Other if they're created later.

It’s a pain for restoring a whole disk, but it helps backups.
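One way to express a split like this is one disklist entry (DLE) per pattern, each with its own include, plus a catch-all that excludes the lettered slices. A minimal sketch; the hostname, paths, and dumptype name here are placeholders, and the exact include/exclude syntax should be checked against your Amanda version's disklist(5) man page:

```
# disklist -- hypothetical host and path names.
# One DLE per slice, so Amanda can schedule their level 0s on different days.
fileserver /bigdisk-a /bigdisk {
    comp-user-tar
    include "./a*"
}
fileserver /bigdisk-b /bigdisk {
    comp-user-tar
    include "./b*"
}
# ... one entry per letter ...
fileserver /bigdisk-other /bigdisk {
    comp-user-tar
    exclude "./[a-z]*"   # everything NOT starting with a lowercase letter
}
```

The diskname (second field) must be unique per DLE even though the device (third field) is the same, which is also what you'll name at restore time.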

Deb Baddorf
Fermilab

> On Sep 21, 2021, at 8:55 AM, Kees Meijs | Nefos <[email protected]> wrote:
> 
> Hi list,
> 
> We've got some backup targets with lots (and lots, and then some) of files.
> There are so many of them that making backups is becoming a problem.
> 
> During the backup process, tar(1) is eating up a CPU core. There's little or
> no I/O wait to be seen. Very likely tar is single-threaded, so there's that.
> The additional gzip(1) process is doing next to nothing.
> 
> Any thoughts on speeding this up? Maybe an alternative for GNU tar, or...?
> 
> Thanks all!
> 
> Cheers,
> Kees
> 
> -- 
> https://nefos.nl/contact 
> 
> Nefos IT bv
> Ambachtsweg 25 (industrienummer 4217)
> 5627 BZ Eindhoven
> Nederland
> 
> KvK 66494931
> 
> Available on Monday, Tuesday, Thursday and Friday between 09:00 and 17:00.