>  - An rsync "full" backup now uses --checksum (instead of
> --ignore-times), which is much more efficient on the server side -
> the server just needs to check the full-file checksum computed by the
> client, together with the mtime, nlinks, size attributes, to see if
> the file has changed.  If you want a more conservative approach, you
> can change it back to --ignore-times, which requires the server to
> send block checksums to the client.
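
For comparison, the more conservative behaviour mentioned in the quote
above would be selected through the same config option discussed further
down; a minimal sketch (option name as quoted below in this mail, exact
defaults may differ between BackupPC versions):

$Conf{RsyncFullArgsExtra} = [
            # conservative variant: the server sends block checksums
            # to the client for every file
            '--ignore-times',
];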

--checksum is quite paranoid and causes substantial IO on both sides.

Basically, it causes an MD5 checksum to be calculated for each and every
file, on both sides.
If the archive is dozens of gigabytes or more, it means:

- extra CPU usage,

- extra IO,

- since all the files have to be read, anything the server had in
  cache/buffers will be purged from there (the newly read files go into
  cache/buffers instead, which is not very useful, since the backup will
  most likely be made daily or less often).

All of the above cause a visible slowdown.
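
To put a rough number on it -- the figures below are purely illustrative
assumptions, not measurements:

#!/usr/bin/perl
# Back-of-the-envelope estimate of the extra read time caused by
# checksumming every file on both sides.  All numbers are assumptions.
use strict;
use warnings;

my $archive_gb    = 50;    # assumed archive size ("dozens of gigabytes")
my $read_mb_per_s = 100;   # assumed sequential read throughput per host
my $sides         = 2;     # both client and server read every file

my $seconds = ($archive_gb * 1024 * $sides) / $read_mb_per_s;
printf "extra read time per full backup: ~%.0f minutes\n", $seconds / 60;

With those numbers that is roughly 17 minutes of pure reading per full
backup, before counting the CPU time for the checksums or the
cache-eviction effects.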


I'm pretty sure that my clients don't secretly change file contents
while preserving the size and timestamp.
In that case, will BackupPC still work correctly if I change the
default:

$Conf{RsyncFullArgsExtra} = [
            '--checksum',
];


to:

$Conf{RsyncFullArgsExtra} = [
            '',
];

?
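
A side note on the syntax: I'm not sure how BackupPC assembles the final
rsync command line, so an array holding a single empty string might end
up passed to rsync as a literal empty argument.  An empty list would
avoid the question entirely:

$Conf{RsyncFullArgsExtra} = [
];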


-- 
Tomasz Chmielewski
http://wpkg.org
