Dear rdiff-backup Experts
Pardon my naïve query, but I need to understand the difference between the
rdiff-backup approach and the following steps:
1) we take a remote sync of the primary data store onto a mirror server using
rsync. This is automated to run every hour via cron.
2) to get a poi
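Step 1 above, as described, might look something like this in a crontab (host and paths are placeholders of mine, not from the original mail):

```shell
# Hourly one-way mirror of the primary data store to the mirror server
0 * * * * rsync -a --delete /srv/data/ mirror:/srv/backup/data/
```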
Hi Robert,
On 6. 6. 2012 17:43, Robert Nichols wrote:
> [...]
> The way I handle it for the dailies is that once a week I do a verify for
> each of the 8 most recent daily backups. That is enough to verify that the
> most recent part of the increments chain merges properly with the older
> increme
After reading the obnam documentation I think a combination of
1) several rsync --link-dest backup slots (for easy accessibility) and
2) an additional obnam repository (for more generation flexibility while at the
same time avoiding nearly all the drawbacks of rdiff-backup mentioned in this
thread)
m
So I've been giving obnam a quick test drive, and although it does look very
interesting, it seems that the backups are not accessible in the way an rsync
backup is.
The advantage with rsync, at least in my case, is that in the event of a total
disaster, I am able to serve my backups as a usable re
@David: if you try obnam please let us know how you get on with it.
I guess one could store rdiff-backup repositories on a deduplicating
file system such as lessfs or ZFS and get the deduplication
benefits. And using the --no-compression switch might or might not
br
Fri, 08 Jun 2012 13:02:43 -0700 shorvath
> Florian, you mention you use rdiff-backup (for max 20G) and rsync for
> larger. Is it not recommended to use rdiff-backup on large backups?
> Mine are several TB with possibly GBs of daily increments. Would it be
> recommended to stick to rsync?
There ar
If I get it right, there is no such thing as deltas. Every generation seems to
stand on its own. However, duplicate chunks of data (no matter whether they are
caused by different machines or generations or whatever) are deduplicated in a
generalized way. Nice design! As a check for duplicate data
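A minimal sketch of the content-addressed deduplication described above (all names here are hypothetical, and obnam's actual chunking and storage format differ): chunks are keyed by their SHA-256 hash, so a repeated chunk is stored once, regardless of which generation or machine it came from.

```python
import hashlib

CHUNK = 64 * 1024  # fixed chunk size for this sketch

def store(data, repo):
    """Split data into chunks and store each under its SHA-256 key.
    A chunk already present in the repo costs no extra space."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        repo.setdefault(key, chunk)  # dedup: existing keys are left alone
        refs.append(key)
    return refs  # the "generation" is just this list of chunk keys

def restore(refs, repo):
    """Reassemble the original data from its list of chunk keys."""
    return b"".join(repo[k] for k in refs)
```

With this layout, storing the same data a second time adds only a new list of references, not new chunks, which is the sense in which every generation "stands on its own" without per-generation deltas.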
Thanks for mentioning obnam, which I had not known of so far. It seems to have
some very nice features, even though I would be interested in how it works
internally: what kind of deltas are used, and how efficient is the transmission
(comparable to rsync?).
Maybe I will give it a shot :-)
Nicolas Jun
Obnam? It's the first I've heard of it but it definitely looks interesting and
well worth further consideration.
I'm not sure if I should be thanking you or cursing you for adding more
research to my list ;)
No, seriously: thanks for the suggestion.
My only concern (so far) is that there see