Your first two categories are talking about the distribution of entropy--we 
assume some unpredictability exists, and we want to quantify it in terms of 
bits of entropy per bit of output.  That's a useful distinction to make, and as 
you said, if you can get even a little entropy per bit and know how much you're 
getting, you can get something very close to ideal random bits out.
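
To make that concrete, here's a minimal sketch in Python (the entropy rate 
and the read_raw_bytes() helper are made-up illustrations, not anything from 
this discussion): if you can defensibly assess the source at, say, half a bit 
of min-entropy per raw byte, you just read enough raw bytes and condition 
them through a hash.

    import hashlib

    # Assumption for illustration: a conservative assessment of the raw source.
    ASSUMED_MIN_ENTROPY_PER_BYTE = 0.5   # bits of min-entropy per raw byte

    def conditioned_block(read_raw_bytes, out_bits=256):
        """Condition a weak source into out_bits of close-to-ideal output.

        read_raw_bytes(n) is a hypothetical helper that returns n raw bytes
        from the noise source.  We read enough raw input that its assessed
        min-entropy comfortably exceeds the output size (2x margin here),
        then hash it down with SHA-256.
        """
        needed = int(2 * out_bits / ASSUMED_MIN_ENTROPY_PER_BYTE)
        raw = read_raw_bytes(needed)
        return hashlib.sha256(raw).digest()[: out_bits // 8]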

Your second two categories are talking about different kinds of 
sources--completely deterministic, or things that can have randomness but don't 
always.  That leaves out sources that always have a particular amount of 
entropy (or at least are always expected to!).  

I'd say even the "squish" category can be useful in two ways:

a.  If you have sensible mechanisms for collecting entropy, they can't hurt and 
sometimes help.  For example, if you sample an external clock, most of the 
time the answer may be deterministic, but once in a while you may get some 
actual entropy, in the sense that the clock drift is sufficient that the 
sampled value could take either of two values, and an attacker can't know which.  

b.  If you sample enough squishes, you may accumulate a lot of entropy.  Some 
ring oscillator designs are built like this, hoping to occasionally sample one 
of the oscillators mid-transition.  The idea is that the rest of the 
oscillators' behavior might well be predictable to an attacker, but the value 
you read when you catch a signal transitioning between a 0 and a 1 is 
genuinely random, decided by thermal noise.  (See the sketch after this list.)  
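
Here's the kind of thing I mean, as a toy sketch in Python rather than real 
hardware (the sampling function, the transition probability, and the one-bit 
credit are all made-up stand-ins, not a real design): stir every sample into 
the pool, but only credit entropy for the occasional sample that's genuinely 
unpredictable.

    import hashlib
    import os
    import time

    pool = hashlib.sha256()
    credited_bits = 0

    def sample_squish():
        """Stand-in for sampling an external clock or a ring oscillator.

        Returns (value, caught_transition).  In real hardware,
        caught_transition would mean the line was read mid-transition, so
        the value you get back is decided by thermal noise rather than by
        anything an attacker can model.  Here it's just a made-up rare event.
        """
        value = time.perf_counter_ns() & 0xFF        # usually predictable
        caught_transition = os.urandom(1)[0] < 8     # rare, toy stand-in
        return value, caught_transition

    for _ in range(10000):
        value, surprise = sample_squish()
        pool.update(value.to_bytes(1, "big"))        # can't hurt, sometimes helps
        if surprise:
            credited_bits += 1                       # at most 1 bit per lucky sample

    print("conservatively credited", credited_bits, "bits of entropy")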

I think the big problem with (b) is in quantifying the entropy you get.  I also 
think that (b) describes a lot of what commonly gets collected by the OS and 
put into the entropy pool.  
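
For what it's worth, the simplest way to put a number on it is something like 
a most-common-value estimate: if the most likely sample value shows up with 
probability p_max, credit -log2(p_max) bits of min-entropy per sample.  A 
rough Python sketch (which quietly assumes the samples are close to i.i.d., 
and that's exactly the assumption that's hard to justify for squish-type 
sources):

    from collections import Counter
    import math

    def min_entropy_per_sample(samples):
        counts = Counter(samples)
        p_max = max(counts.values()) / len(samples)
        return -math.log2(p_max)

    # e.g. min_entropy_per_sample([t & 0xFF for t in raw_timer_reads])
    # where raw_timer_reads is a hypothetical list of raw samples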

--John