Hal Finney wrote:
Another of the Crypto talks that was relevant to hash function security
was by Antoine Joux, discoverer of the SHA-0 collision that required
2^51 work. Joux showed how most modern hash functions depart from the
ideal of a random function.
The problem is with the iterative nature of most hash functions, which
are structured like this (view with a fixed-width font):
IV ---> COMP ---> COMP ---> COMP ---> Output
          ^         ^         ^
          |         |         |
          |         |         |
       Block 1   Block 2   Block 3
The idea is that there is a compression function COMP, which takes
two inputs: a state vector, which is the size of the eventual hash
output; and an input block. The hash input is padded and divided
into fixed-size blocks, and they are run through the hash function
via the pattern above.
This pattern applies to pretty much all the hashes in common use,
including MDx, RIPEMDx, and SHA-x.
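The iterated construction described above can be sketched in a few lines of Python. This is an illustrative toy, not any real hash: the compression function here is simulated by truncating SHA-256, the IV is arbitrary, and the padding omits the length encoding that real schemes such as MDx and SHA-x append.

```python
import hashlib

BLOCK_SIZE = 4   # toy block size in bytes; real hashes use 64 or 128
STATE_SIZE = 8   # toy state/output size in bytes

def comp(state: bytes, block: bytes) -> bytes:
    """Toy compression function COMP: maps (state, block) to a new
    state of the same size. Stands in for the real internals of a
    hash; illustrative only."""
    return hashlib.sha256(state + block).digest()[:STATE_SIZE]

def iterative_hash(message: bytes) -> bytes:
    """Iterated hash: pad, split into fixed-size blocks, and chain
    each block through COMP starting from a fixed IV."""
    # Simplified padding: append 0x80, then zeros to a block boundary
    # (real schemes also encode the message length).
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % BLOCK_SIZE)
    state = b"\x00" * STATE_SIZE  # fixed IV
    for i in range(0, len(padded), BLOCK_SIZE):
        state = comp(state, padded[i:i + BLOCK_SIZE])
    return state  # the final state is the hash output
```

Note that the state passed between COMP invocations is exactly the size of the final output, which is the structural property Joux's multicollision argument exploits.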
Just for the record, the Frogbit algorithm is a compression function with a 1-bit
block size and variable-size state information.
(http://www.connotech.com/frogbit.htm)
The Helix cipher shares with the Frogbit algorithm the use of two key streams,
with the plaintext stream inserted into the computations between them. However,
the Helix authors do not recommend their construction for use as a hash function.
(http://www.schneier.com/paper-helix.pdf)
Perhaps ideas combined from these two approaches can help define other constructs for hash
functions. Obviously, if the state information is to end up at a standard size at the end of the
plaintext processing, the additional state information has to be "folded", which means
additional processing costs, or discarded.
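One hypothetical way to fold oversized state down to a standard output size is to XOR successive output-sized chunks of the state together. This is only a sketch of the idea mentioned above; the original post does not specify any particular folding method, and a real design would need a cryptographically stronger reduction.

```python
def fold_state(state: bytes, out_len: int) -> bytes:
    """Fold variable-size state down to out_len bytes by XOR-ing
    successive out_len-sized chunks together (hypothetical folding
    step; illustrative only, not cryptographically analyzed)."""
    result = bytearray(out_len)
    for i, b in enumerate(state):
        result[i % out_len] ^= b
    return bytes(result)
```

For example, folding a 5-byte state to 2 bytes XORs bytes 0, 2, 4 into the first output byte and bytes 1, 3 into the second.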
--
- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada H2M 2A1
Tel.: (514)385-5691
Fax: (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]