> what I'm talking about is the chance that somewhere, sometime there will
> be two different documents that end up with the same hash
I have a vastly greater chance of a file being corrupted by a hardware or
software glitch than of a random message digest collision between two
legitimate files. I've lost quite a few files in 25 years of computing to
just such glitches, sometimes without knowing it until months or years later.
We've already computed the chances of a random pure hash collision
with SHA1 - it's something like an average of 1 collision every
10 billion years if we have 10,000 coders generating 1 new file
version every minute, non-stop, 24 hours a day, 365 days a year.
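For the curious, that kind of figure falls out of the standard birthday
bound. The sketch below is a back-of-the-envelope estimate only, using the
workload assumed above (10,000 coders, one new file version per minute); the
exact number depends on the assumptions, but any reasonable variant lands
many orders of magnitude beyond 10 billion years.

```python
import math

# Hypothetical workload from the text: 10,000 coders, each hashing
# 1 new file version per minute, non-stop, all year.
BITS = 160                                  # SHA-1 digest length in bits
hashes_per_year = 10_000 * 60 * 24 * 365    # new objects hashed per year

# Birthday bound: the expected number of uniformly random digests drawn
# before the first collision is roughly sqrt(pi/2 * 2^BITS).
expected_hashes = math.sqrt(math.pi / 2 * 2.0 ** BITS)

years = expected_hashes / hashes_per_year
print(f"expected first collision after ~{years:.1e} years")
```

Under these assumptions the expected wait is on the order of 10^14 years,
which only strengthens the point: random digest collisions are not the risk
worth losing sleep over.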
Get real. There are _many_ sources of random error in our
tools. When some sources are billions of billions of times
more likely to occur, it makes sense to worry about them first.
Reminds me of the drunk looking under the lamp post for the
house keys he dropped - because that's where the light is.
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <[EMAIL PROTECTED]> 1.650.933.1373,