The scripts I use analyze the rsync log after it completes and then sftp a summary to the root of the just-completed rsync. If no summary is found, or the summary says the run failed, the folder rotation for that set is skipped and that folder is reused on the subsequent rsync. The key here is that the folder rotation script runs separately from the rsync script(s). For each entity I want to rsync, I create a named folder to identify it, and the rsync'd data is held in sub-folders:
daily.[1-7] and monthly.[1-3]
When I rsync, I rsync into daily.0 using daily.1 as the link-dest.
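Roughly, the rsync step per set looks something like this (run from the server being
backed up; the host and path names are just placeholders, and the usual excludes for
/proc, /sys, etc. are trimmed out):

    rsync -aHAX --numeric-ids --delete \
        --link-dest=../daily.1 \
        / backup@backuphost:/backups/myserver/daily.0/ \
        > /var/log/rsync-backup.log 2>&1
    # afterwards the log gets analyzed and a one-line pass/fail summary
    # is sftp'd up as daily.0/rsync.summary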
Then the rotation script checks daily.0/rsync.summary - and if it worked, it removes daily.7 and renames the daily folders. On the first of the month, the rotation script removes monthly.3, renames the other two and makes a complete hard-link copy of daily.1 to monthly.1. It's been running now for about 4 years and, in my environment, the 10 copies take about 4 times the space of a single copy.
(We do complete copies of Linux servers, starting from /.)
If there's a good spot to post the scripts, I'd be glad to put them up.
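In the meantime, the rotation side boils down to roughly this - a simplified sketch
only, with placeholder paths, no locking or error handling, and it assumes the
summary contains the word SUCCESS when the run was good:

    #!/bin/sh
    # runs on the backup host, separately from (and after) the rsync jobs
    BASE=/backups/myserver            # placeholder path

    # only rotate if the last rsync left a good summary behind;
    # otherwise daily.0 simply gets reused by the next rsync
    grep -q SUCCESS "$BASE/daily.0/rsync.summary" 2>/dev/null || exit 0

    rm -rf "$BASE/daily.7"
    for i in 6 5 4 3 2 1 0; do
        [ -d "$BASE/daily.$i" ] && mv "$BASE/daily.$i" "$BASE/daily.$((i+1))"
    done

    # on the first of the month, roll the monthlies and make a
    # complete hard-link copy of the freshest daily
    if [ "$(date +%d)" = "01" ]; then
        rm -rf "$BASE/monthly.3"
        [ -d "$BASE/monthly.2" ] && mv "$BASE/monthly.2" "$BASE/monthly.3"
        [ -d "$BASE/monthly.1" ] && mv "$BASE/monthly.1" "$BASE/monthly.2"
        cp -al "$BASE/daily.1" "$BASE/monthly.1"
    fi

Because daily.0 only becomes daily.1 after a verified success, a failed or
interrupted rsync just gets picked up and freshened on the next run instead of
triggering a new full copy.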

--
Larry Irwin
Cell: 864-525-1322
Email: lrir...@alum.wustl.edu
Skype: larry_irwin
About: http://about.me/larry_irwin

On 06/19/2016 01:27 PM, Simon Hobson wrote:
Dennis Steinkamp <den...@lightandshadow.tv> wrote:

I tried to create a simple rsync script that should create daily backups from a 
ZFS storage array and put them into a timestamped folder.
After creating the initial full backup, the following backups should only contain 
"new data" and the rest will be referenced via hardlinks (--link-dest)
...
Well, it works, but there is a huge flaw with this approach and I am not able to 
solve it on my own, unfortunately.
As long as the backups finish properly, everything is fine, but as soon 
as one backup job can't be finished for some reason (e.g. it is aborted 
accidentally or a power cut occurs),
the whole backup chain is messed up and usually the script creates a new full 
backup, which fills up my backup storage.
Yes indeed, this is a typical flaw with many systems - you often need to throw 
away the partial backup.
One option that comes to mind is this:
Create the new backup in a directory called (for example) "new" or 
"in-progress". If, and only if, the backup completes, rename this to a timestamp. 
When you start a new backup, if the in-progress folder exists, then use that and it'll be freshened 
to the current source state.
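Something along these lines, roughly - a minimal sketch with placeholder paths and
source, so the names here are nothing more than illustration:

    DEST=/backups/myhost                                # placeholder destination
    PREV=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)  # newest completed backup, if any

    rsync -aH --delete ${PREV:+--link-dest="$PREV"} \
          root@zfsbox:/tank/data/ "$DEST/in-progress/" \
    && mv "$DEST/in-progress" "$DEST/$(date +%Y-%m-%d_%H%M%S)"
    # on failure the in-progress folder is left behind, and the next run
    # just freshens it rather than starting a new full copy
    # (you may also want to treat rsync exit code 24, vanished source
    # files, as a success)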

Also, have you looked at StoreBackup? http://storebackup.org
It does most of this automagically, keeps a definable history (e.g. one/day for 14 
days, one/week for x weeks, one/30 days for y years), plus it keeps file hashes so it 
can detect bit-rot in your backups.



