Thanks Matt, it seems there has been a lot of attention to this
subject in the last few days.
My pre-processor approach helped a lot, but checksumming is very
CPU-intensive. For that reason I sorted first on timestamp to
determine which files would normally be deleted, thus minimizing
the number of files to checksum.
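Roughly, the idea looks like this in Python (a sketch only; the
function and variable names are made up, and it assumes both the
source tree and the destination tree are locally accessible so they
can be stat'ed and read):

import hashlib
import os

def checksum(path, bufsize=1 << 20):
    # Only called for candidate pairs, never for the whole tree.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def find_renames(new_on_source, doomed_on_dest):
    # Index the files rsync would delete by (size, mtime): cheap keys
    # that need no file I/O beyond a stat().
    by_key = {}
    for path in doomed_on_dest:
        st = os.stat(path)
        by_key.setdefault((st.st_size, int(st.st_mtime)), []).append(path)
    renames = []
    for src in new_on_source:
        st = os.stat(src)
        for dst in by_key.get((st.st_size, int(st.st_mtime)), []):
            # Only now pay for checksums, and only on this small set.
            if checksum(src) == checksum(dst):
                renames.append((dst, src))  # move dst instead of deleting
                break
    return renames

Each matched pair can then be renamed on the destination before rsync
runs, so rsync finds the file already in place instead of deleting
and re-sending it.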
On 10/5/07, N.J. van der Horn (Nico) [EMAIL PROTECTED] wrote:
It is a tricky problem to deal with, I think. It is tempting to keep
a checksummed file/directory list on both sides (see the sketch after
this list) with information like:
* a fingerprint/signature/checksum to identify each file or directory
* inode number
*
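A rough sketch of such an index in Python (the names and the record
layout are illustrative only, and the inode matching assumes a
filesystem where a rename keeps the inode):

import hashlib
import os

def fingerprint(path, bufsize=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_index(root):
    # One record per file: relative path -> inode and content checksum.
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            index[os.path.relpath(path, root)] = {
                "inode": st.st_ino,
                "sum": fingerprint(path),
            }
    return index

def detect_renames(old_index, new_index):
    # A file whose inode survived under a new path was renamed, not new.
    old_by_inode = {v["inode"]: p for p, v in old_index.items()}
    return [(old_by_inode[v["inode"]], p)
            for p, v in new_index.items()
            if v["inode"] in old_by_inode and old_by_inode[v["inode"]] != p]

Saving the index after each run (e.g. as JSON) and diffing it against
the next run's index would turn a renamed directory into a list of
cheap lookups instead of a full re-transfer.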
We have been using rsync for several years, but since a couple of
months ago we use it to back up remote servers, some with more than
200GB of data. Windows users especially sometimes have the (bad)
habit of renaming a directory with huge amounts of data below it.
We see the same nasty results.