Is the file storage medium a hard disk, and are the files mostly
small and IOPS/latency bound?  If so, you may have to live with slow
backups.  We eventually added NVMe SSDs to our ZFS array as special devices
and sent files smaller than 128KB to NVMe, which, along with the metadata,
helped speed up backups quite a bit.
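For reference, routing small file blocks (not just metadata) to the special
vdev is controlled by the ZFS special_small_blocks property.  A rough sketch;
the pool/dataset and device names here are placeholders for your own:

```shell
# Add a mirrored NVMe special vdev to an existing pool
# (device names are examples; substitute your own)
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Send data blocks up to 128K to the special vdev as well,
# matching the 128KB cutoff mentioned above
zfs set special_small_blocks=128K tank/backups

# Verify the setting
zfs get special_small_blocks tank/backups
```

Note that once a special vdev is added it generally can't be removed from a
raidz pool, so it's worth testing on a scratch pool first.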

It may be worth trying star instead of tar, but I think it too is
single-threaded.  There's something called tarsplitter on GitHub which is
multi-threaded, if you don't mind experimenting:

https://github.com/AQUAOSOTech/tarsplitter
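The same idea can be roughed out with plain GNU tar: split the file list
into N chunks and run one tar+gzip pipeline per chunk in parallel.  A
minimal sketch (paths and chunk count are made up for the demo):

```shell
# Create a small demo tree to archive
mkdir -p /tmp/tardemo/src
cd /tmp/tardemo
for i in 1 2 3 4; do echo "data $i" > "src/file$i.txt"; done

# Build a file list and split it into two roughly equal chunks
# (split -n l/2 is GNU split; it won't break lines mid-filename)
find src -type f | sort > filelist
split -n l/2 filelist chunk.

# One tar+gzip pipeline per chunk, running concurrently
for c in chunk.*; do
  tar czf "archive.$c.tar.gz" -T "$c" &
done
wait
ls archive.chunk.*.tar.gz
```

If you need a single archive instead, attacking the compression side with
pigz (a parallel gzip drop-in, via tar's -I/--use-compress-program option)
may help, though in the CPU-bound case described below tar itself, not gzip,
appears to be the bottleneck.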

Thus spoke Kees Meijs | Nefos:

Hi list,

We've got some backup targets with lots (and lots, and then some) of files. There are so many of them that making backups is becoming a problem.

During the backup process, tar(1) is eating up a CPU core. There's little or no I/O wait to be seen. Very likely tar is single-threaded, so there's that. The additional gzip(1) process is doing next to nothing.

Any thoughts on speeding this up? Maybe an alternative for GNU tar, or...?

Thanks all!

Cheers,
Kees

--
https://nefos.nl/contact

Nefos IT bv
Ambachtsweg 25 (industrienummer 4217)
5627 BZ Eindhoven
Nederland

KvK 66494931

/Available on Monday, Tuesday, Thursday, and Friday between 09:00 and 17:00./




--
C. Chan <c-chan at uchicago.edu>
GPG Public Key registered at pgp.mit.edu
