Whether these various tricks help depends on the technical details of the attacks found. I hope the bit-twiddling crypto types who are finding the attacks will propose something to fix them.
There are probably cheaper fixes than the 2x or 3x performance loss of your algorithm down in the inner loops of these algorithms (such as the change from SHA to SHA-1), and I expect these will come out. I'm reluctant to jump on the SHA-256 bandwagon or to come up with some ad hoc fix until a more thorough analysis is done. SHA-256 was designed before these attacks were known and probably has related flaws (though they are even less likely to be practically exploitable). We have the luxury that the current break is largely theoretical, so waiting even a year for the mathematicians is probably OK. But it's never too early to start preparing for a new algorithm - perhaps with a new hash size - in our protocols.

Further, given that lots of attacks (past and present) are not exploitable if every hashed quantity includes some value chosen by a trusted party and unpredictable by an attacker, it seems reasonable to consider that a desirable characteristic as we design our protocols.

	--Charlie Kaufman

p.s. Your formulae below have unbalanced parentheses, but I can guess what you probably meant.

-----Original Message-----
From: Ben Laurie [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 24, 2005 2:39 AM
To: Charlie Kaufman
Cc: Cryptography; [EMAIL PROTECTED]
Subject: Re: Propping up SHA-1 (or MD5)

Charlie Kaufman wrote:
> All hash functions I'm aware of consist of an inner compression function
> that hashes a fixed-size block of data into a smaller fixed-size block,
> and an outer composition function that applies the inner function
> iteratively to the variable-length data to be hashed. Essentially you're
> proposing a modification to the outer layer of the hash construction.
>
> All of the standard hash functions since MD4 have been constructed so
> that a collision in the inner compression function is likely to lead to
> a collision in the hash function. MD2 did not have that property. It
> computed a cheap checksum of the variable-length data in parallel with
> the digesting process and digested the checksum following the data. I
> have often wondered whether such a cheap addition would strengthen the
> newer hashes. (It would fix the suffixing attacks that motivated the
> development of HMAC.)
>
> It's not obvious whether this would make the functions more secure or
> just make them harder to analyze. Perhaps someone from the research
> community could comment on why the checksum was removed in the evolution
> from MD2 to MD4.
>
> Your proposed encoding has the disadvantage that it would require two
> passes over the message being digested. This would be bad news for
> hardware implementations and should be avoided if possible.

I suggested in a later version these two constructions:

H'(x)=H(H(x || H(0 || x) || H(0 || x))

or:

H'(x)=H(H(x || H(0 || x) || H(1 || x))

which only require a single pass (but, unfortunately, two or three
different instances of the hash). This seems similar to the mechanism
used in MD2, except the checksum is expensive.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
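[Editor's note: for concreteness, here is a minimal Python sketch of the single-pass idea in Ben's second construction. Because the posted formulae have unbalanced parentheses, the exact nesting is an assumption; this sketch reads it as H'(x) = H(x || H(0 || x) || H(1 || x)), with SHA-1 standing in for H. The point it illustrates is the one Ben makes: three hash instances can consume the message concurrently, so the data is streamed only once.]

```python
import hashlib

def propped_sha1(x: bytes) -> bytes:
    """Sketch of H'(x) = H(x || H(0 || x) || H(1 || x)) with H = SHA-1.
    (The nesting is an assumption; the original formulae are unbalanced.)
    All three hash instances consume x in a single pass over the data."""
    h_main = hashlib.sha1()      # outer instance: digests x, then the two checksums
    h0 = hashlib.sha1(b"\x00")   # inner instance seeded with the byte 0
    h1 = hashlib.sha1(b"\x01")   # inner instance seeded with the byte 1
    # Stream the message once, feeding all three instances in parallel.
    for i in range(0, len(x), 8192):
        chunk = x[i:i + 8192]
        h_main.update(chunk)
        h0.update(chunk)
        h1.update(chunk)
    # Append the two "expensive checksums" and finish the outer hash.
    h_main.update(h0.digest())
    h_main.update(h1.digest())
    return h_main.digest()
```

This mirrors the MD2 analogy in the thread: a checksum computed in parallel with digesting and folded in at the end - except here the checksum is itself a full (expensive) hash.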