In several posts to the list, e.g. by Wagner and Denker, it is proposed to use a `strong crypto hash function` h(), e.g. SHA-1, to extract entropy from a physical source:
> + Source --> Digitizer --> good hash

Namely, if the sample from the digitizer (of the physical source) is x, the suggestion is to use h(x) as random data. I think this is a very common design in practice. But clearly this relies on some property of the hash function. For example, Denker says:

> a) if the hash function happens to have a property I call "no
>    wasted entropy" then... [this h(x) is `as good as random`]

Indeed, if we adopt the `random oracle` methodology, i.e. analyze the system assuming h() is a truly random function, then we easily see that the bits of h(x) are truly random (assuming the number of bits in h(x) is significantly less than the entropy in x). But the `random oracle` methodology is only a heuristic that helps us gain confidence in (or find vulnerabilities in) protocols; no specific hash function is really a random oracle...

So I ask: is there a definition of this `no wasted entropy` property, which hash functions can be assumed to have (and be tested for), and which ensures the desired extraction of randomness?

Regards, Amir Herzberg

See http://amir.herzberg.name/book.html for lectures and draft chapters from the book-in-progress `Introduction to Cryptography, Secure Communication and Commerce`; feedback appreciated!

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
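P.S. For concreteness, the Source --> Digitizer --> hash design discussed above can be sketched as follows. This is only an illustration, not anyone's actual driver code: SHA-256 stands in for h(), and os.urandom() stands in for the physical source plus digitizer (a real design would read the actual noise source). The output is deliberately much shorter than the assumed entropy of the sample, per the condition above.

```python
import hashlib
import os

def extract(sample: bytes, out_bits: int = 128) -> bytes:
    """Hash a digitizer sample x and return out_bits of output as
    (conjecturally) uniform random bits.  The post's open question is
    precisely which property of h() justifies this step; the code only
    enforces the stated precondition that out_bits be well below the
    sample's assumed min-entropy."""
    assert out_bits % 8 == 0
    # Rough sanity check: never ask for more bits than the sample
    # could even in principle contain.
    assert out_bits < 8 * len(sample)
    digest = hashlib.sha256(sample).digest()   # h(x)
    return digest[: out_bits // 8]             # truncate to out_bits

# Illustration only: os.urandom is a placeholder for the digitized
# physical source; here x is 64 bytes, assumed to carry well over
# 128 bits of entropy.
x = os.urandom(64)
r = extract(x, 128)   # 16 bytes of extracted randomness
```

Note the design choice the post highlights: all the security rests on the unnamed property of h(), since truncation alone obviously does not make the output uniform.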
