Jerry Leichter writes: >> However ... *any* on-line algorithm falls to a Joux-style attack.
Hal Finney wrote:
... hashes that support incremental hashing, as any useful hash surely must,
If you insist on being able to hash exceedingly long strings (i.e. longer than your storage capacity), here is a modification of the "splint" I proposed earlier.
Run two hash-machines in parallel. The first one operates normally. The second one puts the first block of the string into a buffer (bounded size!) and then proceeds to hash the rest of the string, starting with the second block. At the end of the message, it appends the saved block to the tail of the message and hashes it.
The two results are combined in some nonlinear way, perhaps by appending the other machine's hash onto this machine's input string.
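The scheme above can be sketched as follows. This is a minimal illustration, not a vetted construction: the block size, the choice of SHA-256 for the two machines, and the particular combination step (feeding machine B's digest into machine A's stream) are all assumptions filled in for concreteness, since the post leaves them open.

```python
import hashlib

BLOCK = 64  # assumed block size in bytes (SHA-256's block size)

def deferred_first_block_hash(chunks):
    """Hash a stream with two incremental machines.

    Machine A hashes the stream normally.  Machine B buffers the first
    block (bounded storage!), hashes the rest of the stream, then
    appends the saved block at the end.  The two results are combined
    by feeding B's digest into A's input stream.
    """
    a = hashlib.sha256()
    b = hashlib.sha256()
    saved = None    # machine B's buffer: the first block, once seen
    pending = b""   # bytes not yet assigned to saved or fed to B
    for chunk in chunks:
        a.update(chunk)
        pending += chunk
        if saved is None and len(pending) >= BLOCK:
            saved, pending = pending[:BLOCK], pending[BLOCK:]
        if saved is not None and pending:
            b.update(pending)
            pending = b""
    if saved is None:        # message shorter than one block
        saved, pending = pending, b""
    b.update(pending)
    b.update(saved)          # append the deferred first block
    a.update(b.digest())     # nonlinear combination (assumed form)
    return a.hexdigest()
```

Note that both machines use only bounded storage, so the online property survives: machine B holds at most one block plus its internal chaining state.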
The strength here comes from the idea that you cannot engineer a collision in the "saved" block in the second machine until you have engineered the collision in the first machine; and if you then change the saved block to engineer a collision in the second machine, you have to redo everything you did in the first machine.
Of course it would be nice to have hash functions that were strong on a block-by-block basis ... but still I think there is value in destroying the prefix property without destroying the online property, i.e. not destroying the ability to hash strings that are too long to store.
=============
Small point: I'm not entirely convinced that *any* useful hash must have the online property. That's not a defining property of hash functions according to most authorities. I've done lots of hashing in situations where I would have been happy to store the entire input-string.
Still we can agree that the online property is *sometimes* nice, and I'm happy to be able to provide it.
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]