On Sep 15, 2013, at 6:49 AM, Kent Borg <kentb...@borg.org> wrote:

> John Kelsey wrote:
>> I think the big problem with (b) is in quantifying the entropy you get.
> Maybe don't.
> When Bruce Schneier last put his hand to designing an RNG he concluded that 
> estimating entropy is doomed. I don't think he would object to some coarse 
> order-of-magnitude confirmation that there is entropy coming in, but I think 
> trying to meter entropy-in against entropy-out will either leave you starved 
> or fooled.

If you are using a strong cryptographic PRNG, you only really need to know the 
amount of entropy you've collected in two situations:

a.  When you want to instantiate the PRNG and start generating keys from it.

b.  When you want to reseed the PRNG and know you will get some benefit from 
doing so.  

But those are pretty critical things, especially (a).  You need to know whether 
it is yet safe to generate your high-value keypair.  For that, you don't need 
super-precise entropy estimates, but you do need at least a good first-cut 
estimate--does this input string have 20 bits of entropy or 120 bits?  
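To make that concrete, here is a minimal sketch of the gating idea: a pool that mixes samples into a hash and refuses to release a seed until a deliberately conservative running entropy estimate crosses a threshold.  The class name, the per-sample credit, and the 128-bit threshold are all illustrative assumptions on my part, not anything from a real /dev/random implementation.

```python
import hashlib
import os

SEED_THRESHOLD_BITS = 128  # assumed security target, not from the post


class GatedPool:
    """Hypothetical entropy pool that gates seeding on a coarse estimate."""

    def __init__(self):
        self._hash = hashlib.sha256()
        self._credited_bits = 0.0

    def add_sample(self, data: bytes, credited_bits: float):
        # Mix the sample in unconditionally; credit only a deliberately
        # low entropy estimate for it (better starved than fooled).
        self._hash.update(data)
        self._credited_bits += credited_bits

    def seed(self) -> bytes:
        # Case (a): only hand out a seed once the first-cut estimate
        # says instantiation is safe.
        if self._credited_bits < SEED_THRESHOLD_BITS:
            raise RuntimeError("not enough estimated entropy yet")
        return self._hash.digest()


pool = GatedPool()
# Credit each 32-byte OS sample at only 8 bits -- a coarse, conservative guess.
for _ in range(16):
    pool.add_sample(os.urandom(32), credited_bits=8.0)
seed = pool.seed()  # released only after 128 credited bits
```

The point is not the particular numbers but the structure: estimation errors only delay seeding, they never let an underseeded PRNG start generating keys.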

My view is that all the song and dance in /dev/random, with keeping track of 
the entropy in the pool as it flows in and out, is not all that useful, but 
there's just no way around needing an entropy estimate to know whether your 
PRNG is in a secure state or not.  


> -kb