On 09/15/2013 03:49 AM, Kent Borg wrote:
> When Bruce Schneier last put his hand to designing an RNG he
> concluded that estimating entropy is doomed. I don't think he would
> object to some coarse order-of-magnitude confirmation that there is
> entropy coming in, but I think trying to meter entropy-in against
> entropy-out will either leave you starved or fooled.
That's just completely backwards. In the world I live in, people get fooled because they /didn't/ do the analysis, not because they did.

I very much doubt that Bruce concluded that accounting is "doomed". If he did, it would mark a dramatic step backwards from his work on the commendable and influential Yarrow PRNG:
  J. Kelsey, B. Schneier, and N. Ferguson (1999)
  http://www.schneier.com/paper-yarrow.pdf

This revolves around a /two-stage/ design. Entropy accumulates in the first stage and is then transferred in /batches/ to the second stage. There must be a substantial amount of entropy in each batch, or the entire batch is wasted, in the sense that it does not help the PRNG recover from compromise.

Let's be clear: transferring "randomness" to the second stage before the accumulator has accumulated enough entropy is demonstrably worse than nothing. It wastes entropy that otherwise would have eventually accumulated to a useful level. This design makes sense if *and only if* you have a reliable non-zero lower bound on the entropy coming into the first stage.

A PRNG is like almost everything else in cryptography: you can't build a good PRNG unless you know how to /attack/ a PRNG. Dribbling small amounts of entropy into the final-stage pool does *not* have acceptable resistance to attack. Naïve intuition suggests it might be OK, but it's not.

_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
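P.S. A minimal sketch of the two-stage, batched-reseed idea described above. This is illustrative only, not the actual Yarrow construction: the class name, the SHA-256 plumbing, and the 128-bit batch threshold are my assumptions, and the entropy estimates fed in are taken on faith as conservative lower bounds.

```python
# Sketch of a Yarrow-style two-stage PRNG (illustrative, not vetted).
# First stage: an accumulator pool with a conservative entropy counter.
# Second stage: a hash-based output generator, reseeded only in batches.
import hashlib

RESEED_THRESHOLD_BITS = 128  # assumed batch size; never reseed with less


class TwoStagePRNG:
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()  # second-stage state
        self.counter = 0                          # output block counter
        self.pool = hashlib.sha256()              # first-stage accumulator
        self.pool_entropy_bits = 0                # conservative lower bound

    def add_entropy(self, sample: bytes, est_bits: int) -> None:
        """Mix a sample into the first-stage pool. est_bits must be a
        conservative lower bound on the sample's entropy."""
        self.pool.update(sample)
        self.pool_entropy_bits += est_bits
        # Transfer to the second stage only in full batches, so an
        # attacker who knows the old key cannot brute-force a small,
        # dribbled-in reseed.
        if self.pool_entropy_bits >= RESEED_THRESHOLD_BITS:
            self.key = hashlib.sha256(self.key + self.pool.digest()).digest()
            self.pool = hashlib.sha256()
            self.pool_entropy_bits = 0

    def read(self, n: int) -> bytes:
        """Generate n output bytes from the second stage (counter mode)."""
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")).digest()
        return out[:n]
```

Note how a sample that estimates below the threshold changes only the pool, never the output key; the key moves only when a full batch lands at once.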