On Thu, May 2, 2019 at 8:09 AM Ingo Molnar <[email protected]> wrote:
>
> * Andy Lutomirski <[email protected]> wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG capability, i.e. to the RDRAND instruction,
> which excludes a significant group of x86 CPUs?

It's kind of the opposite: Elena benchmarked it, and RDRAND's
performance was truly awful here.

> Because calling tens of millions of system calls per second will deplete
> any non-CPU-RNG sources of entropy and will also starve all other users
> of random numbers, which might have a more legitimate need for
> randomness, such as the networking stack ...

There's no such thing as "starving" other users in this context.  The
current core RNG code uses a cryptographic RNG with no limit on the
number of bytes that can be extracted.  If you want the
entropy-accounted behavior, you can use /dev/random, which is separate.

> 8 gigabits/sec sounds like good throughput in principle, if there are
> no scalability pathologies with that.

The latency is horrible.

> It would also be nice to know whether RDRAND does buffering *internally*,

Not in a useful way :(

> Any non-CPU source of randomness for system calls and plans to add
> several extra function calls to every x86 system call is crazy talk I
> believe...

I think that, in practice, the only real downside to enabling this
thing will be the jitter in syscall times, although we could decide
that the benefit is too dubious for the whole thing to be worth it.
But it will definitely be optional.

