All the hash functions I'm aware of consist of an inner compression function, which hashes a fixed-size block of data into a smaller fixed-size block, and an outer composition function, which applies the inner function iteratively to the variable-length data to be hashed. Essentially, you're proposing a modification to the outer layer of the hash construction.
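As a minimal sketch of that outer layer (the iterated, Merkle-Damgård-style construction), the following uses a toy `compress` function and simplified length-padding — both are stand-ins for illustration, not a real primitive:

```python
import struct

BLOCK = 64  # assumed block size in bytes, as in MD5/SHA-1

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in for the real fixed-size compression function:
    # a toy byte-wise XOR mix, NOT a secure primitive.
    out = bytearray(len(state))
    for i in range(len(state)):
        out[i] = state[i] ^ block[i % BLOCK]
    return bytes(out)

def md_hash(msg: bytes, iv: bytes = b"\x00" * 16) -> bytes:
    # Outer composition: pad the message with its length, then
    # iterate the compression function over fixed-size blocks.
    padded = msg + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += struct.pack("<Q", 8 * len(msg))  # length in bits
    state = iv
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state
```

The proposal under discussion changes only this outer loop; the inner `compress` is untouched.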
All of the standard hash functions since MD4 have been constructed so that a collision in the inner compression function is likely to lead to a collision in the hash function. MD2 did not have that property: it computed a cheap checksum of the variable-length data in parallel with the digesting process and digested the checksum after the data. I have often wondered whether such a cheap addition would strengthen the newer hashes. (It would fix the suffixing attacks that motivated the development of HMAC.)
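To make the MD2 idea concrete, here is a sketch of retrofitting a cheap trailing checksum onto a modern hash — SHA-256 is an assumption for illustration, and the XOR-based mixing stands in for MD2's S-box lookup; this is not MD2 itself:

```python
import hashlib

def hash_with_checksum(msg: bytes) -> bytes:
    # MD2-style idea on top of SHA-256: compute a cheap running
    # checksum alongside the digesting pass, then digest the
    # checksum after the data. Still a single pass over msg.
    h = hashlib.sha256()
    checksum = bytearray(16)
    l = 0  # carry byte between checksum positions, as in MD2's scheme
    for i, byte in enumerate(msg):
        h.update(bytes([byte]))
        # Cheap byte-wise mixing; real MD2 uses an S-box here.
        l = checksum[i % 16] = (checksum[i % 16] ^ byte ^ l) & 0xFF
    h.update(bytes(checksum))  # digest the checksum after the data
    return h.digest()
```

Because the checksum depends on every input byte, appending a chosen suffix no longer extends the hashed stream in a way the attacker fully controls.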
It's not obvious whether this would make the functions more secure or just make them harder to analyze. Perhaps someone from the research community could comment on why the checksum was removed in the evolution from MD2 to MD4.
Your proposed encoding has the disadvantage that it would require two passes over the message being digested. This would be bad news for hardware implementations and should be avoided if possible.
In a later version of this proposal I suggested these two constructions:
H'(x) = H(H(x || H(0 || x) || H(0 || x)))
or:
H'(x) = H(H(x || H(0 || x) || H(1 || x)))
which require only a single pass over the message (but, unfortunately, two or three concurrent instances of the hash). This seems similar to the mechanism used in MD2, except that here the checksum is expensive.
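As a concrete sketch of the second construction — taking SHA-256 for H, which is my assumption, since no particular hash is fixed above:

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in for the underlying hash H; SHA-256 assumed here.
    return hashlib.sha256(data).digest()

def H_prime(x: bytes) -> bytes:
    # H'(x) = H(H(x || H(0 || x) || H(1 || x)))
    # The three inner hashes can be computed concurrently during one
    # pass over x; only the one-byte prefix distinguishes them.
    inner = H(x + H(b"\x00" + x) + H(b"\x01" + x))
    return H(inner)
```

Note that a streaming implementation would feed each byte of x to all three inner hash instances as it arrives, which is what keeps this a single pass.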
Cheers,
Ben.
-- http://www.apache-ssl.org/ben.html http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
