Maybe it seems a bit esoteric, but in practice I have found it to be very useful. Very often I have restored files for an IRIX workstation to the holding disk of the Amanda server (which is Linux), and then picked and chosen which files I really want to transfer to the workstation with rsync, rather than restoring everything to the IRIX workstation directly. That way I can be careful not to overwrite files, or force overwriting of corrupt files that happen to have newer timestamps, or whatever else I need to do. It gives me an extra level of control that helps me avoid mistakes.
On another note, maybe things have changed, but I once found that gnutar incremental backups performed badly enough to make machines pretty much unusable during estimates and dumps. Normally this would not matter, but we're talking about a university, with eccentric grad students working at 3am who complain about these things. I have migrated most filesystems to XFS and use xfsdump on both Linux and IRIX, a process I started when XFS went open source (around Red Hat 7.0) and I got tired of waiting for the problems with dump on ext2fs to get sorted out. Machines remain very usable with xfsdump and software compression running in the background, and dumps finish faster than gnutar's. xfsdump estimates are also very fast, comparatively speaking.
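For anyone wanting to try the same switch: choosing the native dumper is a dumptype setting in amanda.conf. With program "DUMP", Amanda runs the filesystem's own dump tool, which on an XFS filesystem is xfsdump. A sketch, where the dumptype name and the compression choice are just assumptions for illustration:

```
# amanda.conf sketch: route these disklist entries to the native dumper.
define dumptype comp-user-xfs {
    global
    program "DUMP"          # native dumper (xfsdump on XFS filesystems)
    compress client fast    # software compression on the client
}
```

Any disklist entry pointed at this dumptype then gets xfsdump's fast estimates instead of gnutar's.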
However, with faster CPUs, faster disk interfaces, and filesystems like ReiserFS, perhaps the performance of gnutar has improved.
--jonathan
Frank Smith wrote:
Then you can run configure --with-gnutar=/usr/local/bin/tar, and make
sure that that path exists on your clients, and is gnu tar of the
proper version on all of them as well.
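A quick way to sanity-check the clients is to run the configured tar and look at its banner; GNU tar identifies itself on the first line of --version output. The path here is taken from the configure line above, with a fallback so the snippet can be tried anywhere:

```shell
# TAR defaults to the path given to configure --with-gnutar.
TAR=${TAR:-/usr/local/bin/tar}
command -v "$TAR" >/dev/null || TAR=tar   # fall back for local testing
"$TAR" --version | head -n 1              # should begin "tar (GNU tar)"
```

Running that on each client (by hand or over ssh) confirms both that the path exists and which version you actually have.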
