On Fri, Oct 11, 2002 at 03:26:45PM -0700, Terry Reed wrote:
>
> > -----Original Message-----
> > From: Derek Simkowiak [mailto:dereks@;itsite.com]
> > Sent: Friday, October 11, 2002 1:51 PM
> > To: Terry Reed
> > Cc: '[EMAIL PROTECTED]'
> > Subject: Re: Problem with checksum failing on large files
> >
> > > I'm having a problem with large files being rsync'd twice
> > > because of the checksum failing.
> >
> > I think this was reported recently.
> >
> > Please try using the "-c" option ("always checksum")
> > and see if that makes the problem go away.
> >
> > This is a high priority bug for me (although I have not
> > yet experienced it).
> >
> > --Derek
>
> Using -c helps for the smallest file (900 MB), but has no effect on the
> larger files (e.g., 2.7 GB). Most of my files are between 1.5 GB & 3 GB.
I wonder if this is related to the rsync md4sum not producing "correct"
md4sums for files larger than 512M? The existing rsync md4sum
implementation does not produce the same md4sums as the RSA
implementation for files larger than 512M... but I thought it was
consistent with itself, so this didn't affect anything.

I posted a "fixed" md4sum implementation, but it would have required a
new protocol version number and various hacks for backwards
compatibility with the old implementation before it could be used in
rsync.

Has someone rolled a "fixed" md4sum implementation into rsync without
checking to make it backwards compatible? If so, then ensuring you have
the same version of rsync at both ends would avoid the problem. If not,
then I dunno :-)

--
----------------------------------------------------------------------
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
----------------------------------------------------------------------
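P.S. For anyone wondering why 512M would be the magic threshold: this is a
sketch under the assumption that the divergence comes from tracking the MD4
message length in a 32-bit counter. MD4 appends the input length *in bits*
to the padded message, so a 32-bit bit count wraps at exactly 2^32 bits,
which works out to 512 MiB:

```python
# Assumed explanation: a 32-bit bit counter in an md4 implementation
# overflows once the input exceeds 2**32 bits of data. Converting that
# limit to bytes gives the 512M threshold where digests would start to
# disagree with a reference implementation using the full 64-bit count.
overflow_bytes = 2**32 // 8          # bits -> bytes
print(overflow_bytes)                # 536870912 bytes
print(overflow_bytes // 2**20)       # 512 MiB
```

If that assumption holds, any file at or below 512 MiB hashes identically in
both implementations, which would match the behaviour reported above.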
