We are rsync'ing large (hundreds of GB) and constantly changing Berkeley
DB (aka Sleepycat) datasets. (The RPM database uses the same format,
but its dataset is tiny by comparison.) When a change occurs (insert,
update, delete, etc.) in a BDB, the modification tends to ripple
through the binary database files, so rsync ends up re-transferring a
large quantity of data that hasn't logically changed (much like the
recent example of why you shouldn't gzip large files before rsync'ing
them).
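
(If I remember right, the fix in the gzip case was the --rsyncable
patch, which flushes the compressed stream at regular intervals so that
a local change stays local:

    # Needs a gzip carrying the --rsyncable patch (e.g. Debian's);
    # stock gzip will reject the flag.
    gzip --rsyncable -c bigfile > bigfile.gz

I'm not aware of an equivalent knob for BDB's on-disk format.)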

Are there any known methods for making rsync backups of these databases 
more efficient?
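
The best we've come up with so far (untested at our scale; the paths
below are invented for illustration) is to rsync a db_dump of each
database rather than the raw files, on the theory that the flat dump
format keeps unchanged records byte-identical between runs:

    # db_dump ships with Berkeley DB; -f writes the dump to a file.
    db_dump -f /var/tmp/customers.dump /data/bdb/customers.db
    rsync -avz --partial /var/tmp/customers.dump backuphost:/backups/

That roughly doubles the local I/O and disk footprint, though, so I'm
hoping someone knows a better trick.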

-Chuck

-- 
http://www.quantumlinux.com 
 Quantum Linux Laboratories, LLC.
 ACCELERATING Business with Open Technology

 "The measure of the restoration lies in the extent to which we apply 
  social values more noble than mere monetary profit." - FDR