On Fri, Sep 03, 2010 at 09:45:20AM +0100, Ben Laurie wrote:
> That's the whole point - a hash function used on an arbitrary message
> produces one of its possible outputs. Feed that hash back in and it
> produces one of a subset of its possible outputs. Each time you do this,
> you lose a little entropy (I can't remember how much, but I do remember
> David Wagner explaining it to me when I discovered this for myself quite
> a few years ago).
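As a small illustration of the effect Ben describes (my own toy model, not from the thread): treat the hash as a random function on a tiny 16-bit domain by truncating SHA-256, and watch the image shrink each time the output is fed back in. A random function on N points covers only about 63% of them, and iterating shrinks the reachable set further.

```python
import hashlib

# Toy model (assumption: the first 2 bytes of SHA-256 behave like a
# random function on a 16-bit domain, standing in for a full hash).
def f(x: int) -> int:
    # 16-bit "hash": first two bytes of SHA-256 of the 2-byte input
    d = hashlib.sha256(x.to_bytes(2, "big")).digest()
    return int.from_bytes(d[:2], "big")

image = set(range(2 ** 16))  # start with the whole domain
for step in range(1, 6):
    image = {f(x) for x in image}
    print(f"after {step} iteration(s): {len(image)} of 65536 values "
          f"({len(image) / 65536:.1%})")
```

The first iteration lands near 63% of the space, and each further iteration loses a bit more, which is the gradual entropy loss being discussed.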

A recent result relevant to this is

"Practical consequences of the aberration of narrow-pipe hash designs
from ideal random functions", Klima and Gligoroski,

which shows that narrow-pipe designs have a huge null space for
messages that are exactly as big as the compression function input
size. For instance, when hashing inputs whose length is a multiple of
512 bits, SHA-256 will only produce about 63% of the possible 2^256
outputs. This could be quite relevant to a Merkle scheme, since
typically the input will be a set of hash outputs; if the internal
block size and the output length are not coprime, one would easily
encounter this limitation. It could probably be hacked around by
padding the inputs (e.g. the input to each Merkle hash is N subtree
hashes plus as many zero bytes as required to make the input length a
prime number of bytes), but that definitely sets off some bogosity
alarms in my mind.
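That 63% figure is what an ideal random function gives anyway: mapping N distinct inputs into an N-element output space covers about N(1 - 1/e) ≈ 63.2% of it. A quick sketch with a 16-bit truncation of SHA-256 standing in for the full hash (my own toy model, not from the paper):

```python
import hashlib
import math

# Assumption: a 16-bit truncation of SHA-256 behaves like a random
# function, so hashing all N inputs should cover about N*(1 - 1/e)
# of the N possible outputs.
N = 2 ** 16

def h(x: int) -> int:
    # 16-bit "hash" of a 4-byte encoding of x
    d = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(d[:2], "big")

seen = {h(x) for x in range(N)}
print(f"covered {len(seen)}/{N} = {len(seen) / N:.1%}")
print(f"1 - 1/e = {1 - math.exp(-1):.1%}")
```

The measured coverage lands within a fraction of a percent of 1 - 1/e, matching the paper's figure for SHA-256 on block-aligned inputs.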

One point I am uncertain about is whether wide-pipe hashes like Blue
Midnight Wish or SHA-384 suffer the same levels of entropy loss as
narrow-pipe designs (outside of the case described in the referenced
paper, which is specific to narrow pipes).


The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
