On 2023-06-04 09:48, Lennart Sorensen via talk wrote:
> Personally I use rsnapshot for backups, with the target being a Linux
> server at my parents' house, and then they back up to mine the same way.
>
> No cloud providers, nothing complicated; it just works and it's
> automatically offsite.
>
> Of course it's great for making a backup of your data, it is not for
> making a system backup should you need to restore the system, but I
> don't consider that to be a big task in general.  I also tend to use at
> least RAID1.

I found that rsnapshot does a lot of filesystem churn on the target system, which can become an issue when a data set with a huge number of small files wants hourly snapshots, all landing on RAID6. On each rotation it does an rm -rf of the oldest snapshot, so every directory has to be cleaned out entry by entry and every file has its link count decremented, which is a lot of inode activity; then it builds the new snapshot and has to increment link counts and create new directories all over again.
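Roughly, each hourly run on the backup target looks something like this (a sketch of the classic cp -al style rotation, not rsnapshot's exact internals; the retain count of 6 and the paths are assumptions):

    rm -rf /backup/hourly.5                   # oldest snapshot: every entry unlinked, every link count touched
    mv /backup/hourly.4 /backup/hourly.5      # the renames in between are cheap
    mv /backup/hourly.3 /backup/hourly.4
    mv /backup/hourly.2 /backup/hourly.3
    mv /backup/hourly.1 /backup/hourly.2
    cp -al /backup/hourly.0 /backup/hourly.1  # rebuild the hard-link farm: every link count touched again
    rsync -a --delete /source/ /backup/hourly.0/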

The solution I found was to recycle older snapshots: let rsync bring a recently-retired snapshot up to date, making only the new changes, and then call that the current snapshot.
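A minimal sketch of that recycling, assuming six snapshots named snap.0 through snap.5 (the names, paths and the --link-dest refinement are mine, not stock rsnapshot):

    mv /backup/snap.5 /backup/snap.tmp        # recycle the retiree instead of rm -rf'ing it
    mv /backup/snap.4 /backup/snap.5
    mv /backup/snap.3 /backup/snap.4
    mv /backup/snap.2 /backup/snap.3
    mv /backup/snap.1 /backup/snap.2
    mv /backup/snap.0 /backup/snap.1
    # Only files that changed since the recycled snapshot was taken get touched;
    # --link-dest keeps those that match the newest snapshot as hard links.
    rsync -a --delete --link-dest=/backup/snap.1 /source/ /backup/snap.tmp/
    mv /backup/snap.tmp /backup/snap.0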

For absolutely vital data there's also a lot to be said for keeping it in a git repo, so you can track changes and revert damage, and then a git push of a packfile can be your backup.
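For example (the repo path, remote name and host are made up; git bundle is one way to get the whole history into a single packfile you can ship anywhere):

    # Everything (all branches) in one file:
    git bundle create /tmp/vital-$(date +%F).bundle --all
    scp /tmp/vital-$(date +%F).bundle backuphost:backups/

    # Or push straight to a bare repo on the backup machine:
    git remote add offsite backuphost:/srv/git/vital.git
    git push offsite --all            # add a --tags push too if you use tags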

There's still a lot to be said for tarball backups, since they hit the backup target's storage as a single coherent file and avoid all that disk thrashing. Nowadays, packing the data into a squashfs image instead would let you mount the image and copy out individual files without having to restore the whole archive, so that's an interesting direction too.
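A sketch of the squashfs variant (paths are illustrative, and zstd compression assumes a reasonably recent squashfs-tools):

    # Build a compressed, read-only image of the data set:
    mksquashfs /srv/data /backup/data-2023-06-04.sqsh -comp zstd

    # Later, pull out one file without unpacking the whole image:
    mkdir -p /mnt/restore
    mount -o loop,ro /backup/data-2023-06-04.sqsh /mnt/restore
    cp /mnt/restore/path/to/file /srv/data/
    umount /mnt/restore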

Anthony
