On 11/17 08:59 , [EMAIL PROTECTED] wrote:
> First off, with "ssh -C -o CompressionLevel=9" you get decent
> compression. Secondly, there is no need to compare files to and fro
> between machines. I imagine the whole 'do you have this' - 'no I dont' -
> 'this?' - 'yes I do' process rsync does will in the end be a lot more
> traffic than just doing 'send me everything modified after <date>'
The only time I've seen rsync be slower than tar is when none of the files
already exist on the destination (in which case tar is easily twice as
fast). Otherwise rsync just compares checksums of the files at each end,
which is much faster for anything but trivial file sets.

Even if you come out ahead transferring incremental backups via tar (and I
can conceive of that happening when the number of changed files is small
but the total file list is large), you're still losing ground on your full
backups, which must transfer *all* of the files all over again -- and I
hate to do something like that on my workstation with 52GB of files.

Or am I missing something in your argument? Or are you proposing a
mixed-transport scheme, where full backups are done with rsync and
incrementals with tar?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
