> Which is why I'm wondering what exactly this 'multi-consumer' design 
> feature is all about. Is it simply that more userland stuff is pinging 
> the kernel at unpredictable times resulting in more timestamps feeding 
> into the central entropy pool? It seems like you could accomplish that 
> with any syscall. Or is there some other effect being claimed?

Holy cow, you are dense.

I am going to throw out estimates here because (a) it has been a long
time since we tested, and (b) so much can vary machine to machine.

Without a hardware RNG device, a typical i386 desktop machine can
provide (based on interrupt sources) around 1800 bytes of base entropy
to the MD5 thrasher -- per minute.

Meanwhile, OpenBSD is consuming about 80 KB of arc4random output per
minute.  How do you convert 1800 bytes of input into 81920 bytes of
output?  The papers you keep pointing at don't solve that problem.
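For illustration only, here is a minimal sketch of the stretching step:
a keyed generator expanding a small seed into far more output.  This is
not OpenBSD's actual construction (the kernel keys a stream cipher);
SHAKE-256 merely stands in as the expander, and the 1800/81920 figures
come from the estimates above.

```python
import hashlib

def stretch(seed: bytes, nbytes: int) -> bytes:
    # Expand a small seed into nbytes of pseudorandom output with an
    # extendable-output function.  No matter how many bytes are drawn,
    # the output's unpredictability is capped by the entropy of the seed.
    return hashlib.shake_256(seed).digest(nbytes)

minute_of_entropy = b"\x00" * 1800            # ~1800 bytes of pool input
keystream = stretch(minute_of_entropy, 81920) # ~80 KB of output demanded
```

The point the sketch makes: the expansion itself is cheap and purely
deterministic, so the hard part is keeping the seed unpredictable and
the generator fast, not producing the volume.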

And how do you make it fast?  Because all those papers try to solve
the problem by making it SERIOUSLY SLOWER.

While all this is going on, each userland application is using random
data out of its own libc arc4random for many purposes, including per
malloc() and free() among many others, and is re-seeding its libc
generator from the kernel as required, putting even more pressure on
the kernel.
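The reseed pressure described above can be modeled as follows.  This is
a toy sketch, not libc's arc4random: the class name, the SHA-256
counter-mode expander, and the 1 MB reseed budget are all illustrative
assumptions; the real implementation uses a stream cipher with its own
reseed policy.

```python
import hashlib
import os

class UserlandRNG:
    """Toy model of a per-process generator: stretch a kernel-provided
    seed deterministically, and go back to the kernel for a fresh seed
    after a fixed output budget (hypothetical threshold)."""

    RESEED_AFTER = 1 << 20  # reseed after ~1 MB of output (illustrative)

    def __init__(self):
        self._reseed()

    def _reseed(self):
        # This call is where the pressure on the kernel pool lands:
        # every process does this independently, as needed.
        self._seed = os.urandom(32)
        self._counter = 0
        self._produced = 0

    def random_bytes(self, n: int) -> bytes:
        if self._produced + n > self.RESEED_AFTER:
            self._reseed()
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")
            ).digest()
            self._counter += 1
            out += block
        self._produced += n
        return out[:n]
```

Each process only bothers the kernel at reseed time; between reseeds it
satisfies malloc()/free()-style demand locally, which is the whole point
of keeping a generator in libc.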

You don't know what you are talking about, and you don't seem to have the
ability to wrap your mind around all the parts that are involved.

I am not reading your mails again.
