On Wed, 22 Dec 2010 05:08:56 -0600
Marsh Ray <[email protected]> wrote:
> Let's say I could sample the output of the RNG in every process and from
> every network device in the system. As much as I wanted. How could I
> tell the difference between "one prng per purpose" and "data-slicing one
> prng with all consumers"?
There was a thread called "how to use /dev/srandom" in which Theo sent
this, which may be relevant:
=======================================================================
For those who don't want to go read the code, the algorithm at the very
back end is roughly this:
(a) collect entropy until there is a big enough buffer
(b) fold it into the srandom buffer, eventually
That is just like the past.
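A minimal sketch of steps (a)-(b), with made-up names (entropy_add, entropy_fold, POOL_SIZE) purely for illustration; the real kernel code is more involved:

```c
#include <stdint.h>
#include <stddef.h>

#define POOL_SIZE 64

static uint8_t pool[POOL_SIZE];    /* the "srandom buffer" */
static uint8_t staging[POOL_SIZE]; /* samples collected so far */
static size_t  staged;

/* (a) collect entropy until there is a big enough buffer;
 * returns 1 once the staging buffer is full */
int entropy_add(uint8_t sample)
{
	if (staged < POOL_SIZE)
		staging[staged++] = sample;
	return staged == POOL_SIZE;
}

/* (b) fold it into the srandom buffer, eventually (XOR here,
 * so new samples never erase what was already accumulated) */
void entropy_fold(void)
{
	size_t i;

	for (i = 0; i < POOL_SIZE; i++)
		pool[i] ^= staging[i];
	staged = 0;
}
```

The fold step matters: mixing rather than overwriting means a burst of low-quality samples cannot wipe out entropy already in the pool.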
But the front end is different. From the kernel side:
(1) grab an srandom buffer and start an arc4 stream cipher keyed
    from it (discarding the start of the keystream, of course)
(2) now the kernel starts taking data from this on every packet
it sends, to modulate this, to modulate that, who knows.
(3) lots of other subsystems get small chunks of random from the
stream; deeply unpredictable when
(4) on every interrupt, based on quality, the kernel injects
    something into (a)
(5) re-seed the buffer as stated in (1) when needed
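Step (1) can be sketched as a textbook RC4 ("arc4") keyed from the entropy buffer, with an explicit skip of the early keystream (whose first bytes are known to be statistically biased). Function names here are illustrative, not the kernel's:

```c
#include <stdint.h>
#include <stddef.h>

static uint8_t S[256]; /* RC4 permutation state */
static uint8_t si, sj;

/* key the stream from the entropy buffer (RC4 key schedule) */
void arc4_init(const uint8_t *key, size_t klen)
{
	int i;
	uint8_t j = 0, t;

	for (i = 0; i < 256; i++)
		S[i] = (uint8_t)i;
	for (i = 0; i < 256; i++) {
		j = (uint8_t)(j + S[i] + key[i % klen]);
		t = S[i]; S[i] = S[j]; S[j] = t;
	}
	si = sj = 0;
}

/* one byte of keystream; this is what every consumer calls */
uint8_t arc4_getbyte(void)
{
	uint8_t t;

	si = (uint8_t)(si + 1);
	sj = (uint8_t)(sj + S[si]);
	t = S[si]; S[si] = S[sj]; S[sj] = t;
	return S[(uint8_t)(S[si] + S[sj])];
}

/* discard the early, biased part of the keystream after keying */
void arc4_skip(size_t n)
{
	while (n--)
		(void)arc4_getbyte();
}
```

Every consumer in steps (2)-(3) just calls arc4_getbyte(); each call advances the shared state, which is why the draws interleave unpredictably.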
Simultaneously, userland programs need random data:
(i) libc does a sysctl to get a chunk from the kernel's arc4 stream
(ii) starts an arc4 stream of its own, in that program
(iii) feeds data to the program, and re-seeds its stream when needed
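The userland pattern (i)-(iii) amounts to: fetch a seed chunk from the kernel, key a private stream, and automatically re-key after an output budget. A self-contained sketch, where fetch_kernel_seed is a deterministic stub standing in for the real sysctl, and RESEED_AFTER is an illustrative number, not the real policy:

```c
#include <stdint.h>
#include <stddef.h>

#define SEEDLEN      32
#define RESEED_AFTER 1600000L /* illustrative; re-key policy is a choice */

static uint8_t S[256], si, sj;
static long budget; /* bytes left before the next re-key */

/* stand-in for the sysctl of step (i); NOT random, demo stub only */
static void fetch_kernel_seed(uint8_t *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		buf[i] = (uint8_t)(i * 97 + 13);
}

/* (ii) key a private RC4 stream from the fetched chunk */
static void rc4_key(const uint8_t *key, size_t klen)
{
	int i;
	uint8_t j = 0, t;

	for (i = 0; i < 256; i++)
		S[i] = (uint8_t)i;
	for (i = 0; i < 256; i++) {
		j = (uint8_t)(j + S[i] + key[i % klen]);
		t = S[i]; S[i] = S[j]; S[j] = t;
	}
	si = sj = 0;
}

void reseed(void)
{
	uint8_t seed[SEEDLEN];

	fetch_kernel_seed(seed, sizeof(seed)); /* (i) */
	rc4_key(seed, sizeof(seed));           /* (ii) */
	budget = RESEED_AFTER;
}

/* (iii) feed data to the program, re-keying when the budget runs out */
uint8_t my_arc4random_byte(void)
{
	uint8_t t;

	if (budget-- <= 0)
		reseed();
	si = (uint8_t)(si + 1);
	sj = (uint8_t)(sj + S[si]);
	t = S[si]; S[si] = S[sj]; S[sj] = t;
	return S[(uint8_t)(S[si] + S[sj])];
}
```

The budget counter is the whole trick of step (iii): callers never manage seeding themselves, they just draw bytes and the stream quietly re-keys from the kernel underneath them.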
The arc4 stream ciphers get new entropy when they need it. But the really
neat architecture here is that a single stream cipher is *unpredictably*
having entropy taken out of it, by hundreds of consumers. In regular
unix operating systems, there are only a few entropy consumers. In
OpenBSD there are hundreds and hundreds. The entire system is full
of random number readers, at every level. That is why this works
so well.
============================================================================