Tobias Weingartner writes:
>
> Unless you can point me at a definite article that explains the coding
> and complexity theory behind using 2 different algorithms, and that
> proves that it actually does reduce the chance of errors, I'm going to
> say, that the *BEST* you can do, is as well as a single algorithm with
> N+M bits worth of a sum. In other words, I'm sure I can replace the
> 2 algorithms with 1 having the same "chance of errors" properties.
That's true, but it's still much better than a single N-bit sum, and
applying it to fixed-size blocks (which are presumably much smaller than
the average file size) also reduces the chance of error significantly,
since each checksum now covers a much smaller set of possible inputs.
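To illustrate the point about per-block checksum pairs, here is a minimal
sketch of the idea in Python. The choice of Adler-32 as the cheap sum and
MD5 as the stronger hash, and the 4096-byte block size, are illustrative
assumptions, not details from this thread:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative block size; not specified in the thread

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE):
    """Return a (weak, strong) checksum pair for each fixed-size block.

    Pairs a fast 32-bit sum (Adler-32 here) with a stronger hash (MD5
    here), in the spirit of rsync's two-checksum scheme.  A mismatch in
    either member of a pair flags the corresponding block as changed.
    """
    sums = []
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        weak = zlib.adler32(block)            # cheap N-bit sum
        strong = hashlib.md5(block).digest()  # stronger M-bit sum
        sums.append((weak, strong))
    return sums
```

Comparing two files block by block with such pairs localizes a mismatch
to a single block rather than the whole file, which is what makes the
per-block approach attractive.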
> Also, the rsync algorithm does not help much here. It makes a lot of
> sense in a "transmission" scenario. In a "checking" or "checksum"
> scenario, it makes less sense. In some sense, checksums are meant to
> be "fixed size" representations of a file. Quick to look up, quick to
> manage, compare, etc.
I guess I wasn't clear -- I meant that the rsync algorithm should be
used to send the (possibly) modified file to the server, not that it
should be used simply to determine whether the file was suspected of
changing.
-Larry Jones
Talk about someone easy to exploit! -- Calvin