I do love all these considerations. Just wondering why on earth entropy doesn't get much attention in a world where people seem so worried about security and privacy.

Have you ever used any specific method to measure the quality of the numbers generated by the kernel when the randomness pool runs low, by means of the NIST Statistical Test Suite or anything like that?
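For anyone who wants to try it, here is a minimal sketch of the simplest check in that family, a monobit frequency test over bytes read from /dev/arandom; the device path, the sample size, and the "suspicious" threshold are just my own choices, not anything taken from the NIST suite itself.

/* Minimal sketch: a monobit frequency test over /dev/arandom output.
 * This is only the simplest NIST SP 800-22 style check, not the suite.
 * Compile with -lm. */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	unsigned char buf[4096];
	FILE *f = fopen("/dev/arandom", "r");
	long ones = 0, bits;
	size_t i, n;
	int b;

	if (f == NULL) {
		perror("fopen");
		return 1;
	}
	n = fread(buf, 1, sizeof(buf), f);
	fclose(f);
	if (n == 0)
		return 1;

	bits = (long)n * 8;
	for (i = 0; i < n; i++)
		for (b = 0; b < 8; b++)
			ones += (buf[i] >> b) & 1;

	/* Under the null hypothesis this is roughly a standard normal
	 * z-score; values well above 3 would be suspicious. */
	printf("bits=%ld ones=%ld z=%.3f\n", bits, ones,
	    fabs(2.0 * (double)ones - (double)bits) / sqrt((double)bits));
	return 0;
}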

Maybe it would be possible to maintain a 'randomness quality factor' variable in the kernel, updated over time, to estimate the randomness available at a given moment. Just thinking out loud! I'd take a look at that.
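Purely hypothetical, but to show what I mean: userland could then poll such a factor through sysctl(3). KERN_ARND_QUALITY below is an invented name; nothing like it exists in any kernel today.

/* Hypothetical sketch only: KERN_ARND_QUALITY is an invented MIB name
 * used here just to show how a 'randomness quality factor' might be
 * read from userland via sysctl(3). */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

#define KERN_ARND_QUALITY	0	/* invented placeholder value */

int
main(void)
{
	int mib[2] = { CTL_KERN, KERN_ARND_QUALITY };
	int quality;
	size_t len = sizeof(quality);

	if (sysctl(mib, 2, &quality, &len, NULL, 0) == -1) {
		perror("sysctl");	/* expected to fail: the MIB is made up */
		return 1;
	}
	printf("randomness quality factor: %d\n", quality);
	return 0;
}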

On 29/09/2010 19:16, Theo de Raadt wrote:
On Wed, Sep 29, 2010 at 12:49 PM, Kevin Chadwick <ma1l1i...@yahoo.co.uk> wrote:
And isn't srandom sometimes (very rarely!) appropriate? E.g. for
generating encryption keys?

If arandom is somehow not appropriate for generating keys, it should
be fixed.  I'd be interested to hear more.

For those who don't want to go read the code, the algorithm on the very back
end is roughly this:

     (a) collect entropy until there is a big enough buffer
     (b) fold it into the srandom buffer, eventually

That is just like the past.
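Purely as an illustration, a rough sketch of (a)/(b) in C; the names (add_entropy, fold_into_seed), the pool size, and the XOR fold are my own simplifications, not what the kernel actually does.

/* Rough sketch of (a)/(b) with invented names and sizes; the real
 * kernel code is more involved. */
#include <stddef.h>

#define POOLBYTES 64

static unsigned char pool[POOLBYTES];	/* (a) entropy collects here */
static size_t poolfill;
static unsigned char seed[POOLBYTES];	/* the "srandom buffer" */

/* (a) collect entropy until there is a big enough buffer */
void
add_entropy(const unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len && poolfill < POOLBYTES; i++)
		pool[poolfill++] ^= buf[i];
}

/* (b) eventually fold the collected entropy into the srandom buffer */
void
fold_into_seed(void)
{
	size_t i;

	if (poolfill < POOLBYTES)
		return;			/* not enough collected yet */
	for (i = 0; i < POOLBYTES; i++)
		seed[i] ^= pool[i];
	poolfill = 0;
}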

But the front end is different.  From the kernel side:

     (1) grab a srandom buffer and start an arc4 stream cipher on it
        (discarding the first bit, of course)
     (2) now the kernel starts taking data from this on every packet
        it sends, to modulate this, to modulate that, who knows.
     (3) lots of other subsystems get small chunks of random from the
        stream; deeply unpredictable when
     (4) on every interrupt, based on quality, the kernel injects something
        into (a)
     (5) re-seed the buffer as stated in (1) when needed
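To make (1)-(3) concrete, here is a stripped-down sketch of the idea: key an RC4 state from the seed buffer, throw away the start of the keystream before any consumer sees output, and hand out bytes on demand. The names, the discard amount, and the structure are illustrative only, not the kernel code.

/* Stripped-down sketch of (1)-(3): an RC4 stream keyed from the seed
 * buffer, with the start of the keystream discarded.  Names and the
 * discard amount are arbitrary choices here. */
#include <stddef.h>
#include <stdint.h>

static uint8_t S[256];
static uint8_t si, sj;

static void
rc4_key(const uint8_t *key, size_t keylen)
{
	int i, j = 0;
	uint8_t t;

	for (i = 0; i < 256; i++)
		S[i] = i;
	for (i = 0; i < 256; i++) {
		j = (j + S[i] + key[i % keylen]) & 0xff;
		t = S[i]; S[i] = S[j]; S[j] = t;
	}
	si = sj = 0;
}

static uint8_t
rc4_byte(void)
{
	uint8_t t;

	si = (si + 1) & 0xff;
	sj = (sj + S[si]) & 0xff;
	t = S[si]; S[si] = S[sj]; S[sj] = t;
	return S[(S[si] + S[sj]) & 0xff];
}

/* (1) key the stream from the seed buffer and discard the first part
 *     of the keystream; re-running this step is the reseed in (5) */
void
stream_seed(const uint8_t *seed, size_t len)
{
	int i;

	rc4_key(seed, len);
	for (i = 0; i < 1024; i++)	/* discard amount is arbitrary here */
		(void)rc4_byte();
}

/* (2)/(3) consumers pull bytes at unpredictable times */
uint8_t
stream_getbyte(void)
{
	return rc4_byte();
}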

Simultaneously, userland programs need random data:

     (i) libc does a sysctl to get a chunk from the rc4 buffer
     (ii) starts an arc4 buffer of its own, in that program
     (iii) feeds data to the program, and re-seeds the buffer when needed
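From a program's point of view all that machinery just looks like the arc4random(3) family; typical calls, purely as a usage example:

/* What a userland consumer actually sees: the arc4random(3) family. */
#include <stdlib.h>	/* arc4random, arc4random_buf, arc4random_uniform */
#include <stdio.h>

int
main(void)
{
	unsigned char key[32];

	arc4random_buf(key, sizeof(key));	/* e.g. raw key material */
	printf("32-bit value: %u\n", arc4random());
	printf("die roll: %u\n", arc4random_uniform(6) + 1);
	return 0;
}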

The arc4 stream ciphers get new entropy when they need it. But the really
neat architecture here is that a single stream cipher is *unpredictably*
having entropy taken out of it, by hundreds of consumers.  In regular
unix operating systems, there are only a few entropy consumers.  In OpenBSD
there are hundreds and hundreds.  The entire system is full of random number
readers, at every level.  That is why this works so well.

I notice arandom doesn't pause. Is arandom always better or only when
there's enough entropy?

It is more efficient.  There is almost always enough entropy for
arandom, and if there isn't, you would have a hard time detecting
that.

There is always enough.  The generator will keep moving, until it has fetched
too much, or too much time has gone by.  Then it reseeds; though I think
it fundamentally does not care if the srandom buffer it feeds from is full
or not.
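So the reseed decision boils down to a byte counter plus a clock, roughly like this (names and limits are invented):

/* Sketch of the reseed trigger: too many bytes drawn, or too much time
 * since the last reseed.  Thresholds and names are invented. */
#include <stddef.h>
#include <time.h>

static size_t bytes_since_reseed;
static time_t last_reseed;

int
need_reseed(void)
{
	return bytes_since_reseed > 1600000 ||
	    time(NULL) - last_reseed > 10 * 60;
}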
