On Monday, 27 September 2021 14:30:36 BST Peter Humphrey wrote:
> On Monday, 27 September 2021 02:39:19 BST Adam Carter wrote:
> > On Sun, Sep 26, 2021 at 8:57 PM Peter Humphrey <pe...@prh.myzen.co.uk>
> > wrote:
> > > Hello list,
> > > 
> > > I have an external USB-3 drive with various system backups. There are
> > > 350 .tar files (not .tar.gz etc.), amounting to 2.5TB. I was sure I
> > > wouldn't need to compress them, so I didn't, but now I think I'm going
> > > to have to. Is there a reasonably efficient way to do this?
> > 
> > find <mountpoint> -name \*.tar -exec zstd -TN {} \;
> > 
> > Where N is the number of cores you want to allocate. zstd -T0 (or just
> > zstdmt) if you want to use all the available cores. I use zstd for
> > everything now as it's as good as or better than all the others in the
> > general case.
> > 
> > Parallel means it uses more than one core, so on a modern machine it is
> > much faster.
> 
> Thanks to all who've helped. I can't help feeling, though, that the main
> bottleneck has been missed: that I have to read and write on a USB-3 drive.
> It's just taken 23 minutes to copy the current system backup from USB-3 to
> SATA SSD: 108GB in 8 .tar files.

I was premature. In contrast to the 23 minutes to copy the files from USB-3 to 
internal SSD, zstd -T0 took 3:22 to compress them onto another internal SSD. I 
watched /bin/top and didn't see more than 250% CPU (this is a 24-CPU box) with 
next-to-nothing else running. The result was 65G of .tar.zst files.
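For anyone wanting to repeat this, the whole pass boils down to something like the sketch below. The mount point /mnt/backup is only a placeholder for wherever your drive lives, and --rm is my choice (it deletes each .tar only after a successful compression; leave it out to keep the originals):

```shell
# Compress every .tar under the mount point, one file at a time,
# each file using all available cores (-T0). --rm removes the
# original .tar after a successful compression.
find /mnt/backup -name '*.tar' -exec zstd -T0 --rm {} \;

# Verify the results before trusting them (zstd -t test mode):
find /mnt/backup -name '*.tar.zst' -exec zstd -t {} \;
```

Note that -exec runs one zstd per file, so -T0 only parallelises within each file; for many small archives, running several zstd processes at once would help more than -T0 does.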

So, at negligible cost in CPU load*, I can achieve a 40% saving in space. Of 
course, I'll have to manage the process myself, and I still have to copy the 
compressed files back to USB-3 - but then I am retired, so what else do I have 
to do? :)

Thanks again, all who've helped.

*  ...so I can continue running my 5 BOINC projects at the same time.

-- 
Regards,
Peter.