I think this recommendation is counterproductive.
The whole purpose of rsnapshot is to have a directory for each snapshot
time, linked together with hard links - so each file name in each
directory is just a pointer to a single inode for the unchanged file.
Run an ls -li on some rsnapshot files and you will see link counts
matching the number of snapshots in which the file is unchanged.
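(A quick way to see this, assuming your snapshot root is
/path/to/rsnapshot/backups and picking any file that hasn't changed
between runs - the path below is only an example:

    ls -li /path/to/rsnapshot/backups/daily.*/etc/hosts

The first column is the inode number and the second is the link count;
for an unchanged file the inode is identical across the daily.*
directories and the link count equals the number of snapshots sharing
that one copy.)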
Compressing any one directory entry creates the corresponding
compressed file, but that file has to have a new unique inode, which
breaks the chain of hard-linked pointers to the original inode, so you
now take up more space than before. And if you happen to compress the
most recent directory structure - the one rsync links against on the
next run - rsync will not find the pre-existing files and will have to
copy them over again, taking up even more space.
--
Steve Herber [email protected] work: 206-221-7262
Software Engineer, UW Medicine, IT Services home: 425-454-2399
On Wed, 14 Oct 2009, Mark Foster wrote:
Derek Simkowiak wrote:
Q1: Has anyone used rsnapshot on a compressed filesystem?
I only looked into it recently. ZFS and btrfs seemed like the only
viable choices, and btrfs would not compile/load for me (yet). ZFS is
not available on Linux, but you can run it on Solaris, FreeBSD and
Nexenta.
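(For reference, turning on compression in ZFS is just a dataset
property - something along the lines of

    zfs set compression=on tank/rsnapshot

where tank/rsnapshot stands in for whatever dataset holds your snapshot
root; the dataset name here is only an example. On btrfs it is the
compress mount option.)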
Another approach, if you need to squeeze every last byte out of your
rsnapshot storage, is to compress the delta. It's a bit of a hack, but
something like

    find /path/to/rsnapshot/backups/daily.1 -type f -links 1 ! -name "*.bz2" -print0 | xargs -0 bzip2 -v

run *after* your daily rsnapshot run (or on hourly.1 after your hourly
run, if you use hourlies) will do the trick. The -links 1 test matches
files that are not hard-linked into any other snapshot, i.e. files that
are new or changed in that snapshot, so only the delta gets touched -
and only the delta is safe to compress this way. In practice I found it
doesn't save that much, but it depends on what is there.
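(One caveat, and this is just a sketch of the reverse of the same hack:
anything you restore from such a snapshot needs a bunzip2 first, e.g.

    find /path/to/rsnapshot/backups/daily.1 -type f -name "*.bz2" -print0 | xargs -0 bunzip2 -v

would undo the compression for that directory before a restore.)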