John Denker writes:
> That is:
> 1a') When there is entropy in the pool, it gobbles it all up before
> acting like a PRNG. Leverage factor=1. This causes other applications to
> stall if they need to read /dev/random.
This does not seem to be a big problem, and in fact is arguably the right
behavior.
What it means is, /dev/urandom provides the best quality random numbers
possible, given the entropy available. It provides true random numbers
if available, and automatically transitions to pseudo random numbers when
the entropy runs out, switching back to true randomness as more entropy
becomes available. This is exactly what 99% of crypto applications
would want!
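For what it's worth, this never-blocking fallback is easy to see from user space. A minimal sketch in Python, using os.urandom (which on Linux reads the same kernel pool as /dev/urandom):

```python
import os

# os.urandom() draws from the kernel pool (/dev/urandom on Linux).
# It never blocks: when fresh entropy runs out, the kernel's
# cryptographic generator keeps producing output, which is exactly
# the automatic true-random -> pseudo-random transition described above.
key = os.urandom(32)          # e.g. a 256-bit session key
assert len(key) == 32

# Even large repeated reads, far exceeding any plausible entropy
# count in the pool, return immediately.
bulk = b"".join(os.urandom(4096) for _ in range(16))
assert len(bulk) == 16 * 4096
```

A /dev/random reader, by contrast, would stall partway through the bulk read until more entropy arrived.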
The one disadvantage, as John points out, is that if there is another
application drawing from the same pool which MUST have true randomness,
it can be impaired by the actions of an application which could have
gotten by with pseudo randomness. But how often will this happen?
Almost never.
Actually, it is questionable whether any application absolutely needs
true randomness. The PRNG in use is cryptographically strong and should
be designed to resist all known attacks. It is extremely unlikely
that a modern crypto PRNG will ever be broken badly enough to lead to
a key compromise.
Even if true randomness is desired occasionally, such as for long-term
keys, these cases will be rare. The problem above will arise only if
you need to generate a long-term key (a rare event) on a machine that
is simultaneously running a program constantly drawing randomness from
/dev/urandom, probably some kind of networking server. Chances are you
wouldn't use the same machine for both purposes, for security and
efficiency reasons. We are talking about a very rare combination of
events.
> 5) So let's talk about solving problem (1a). For clarity, let's talk in
> terms of a new device /dev/vrandom. Consider the following possible
> design: We use code similar to the existing /dev/urandom, EXCEPT that it
> does not share its internal state with /dev/random or /dev/urandom. The
> new device initializes its state from /dev/random or some other TRNG. (We
> *really* want the initial state to be really random.) For a stripped-down
> host on which TRNG bits are scarce or unavailable, this initialization is
> done "at the factory". Thereafter it performs quantized reseeding often
> enough to fend off iterative guessing attacks but not so often as to
> deplete the TRNG.
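To make the quoted "quantized reseeding" idea concrete, here is a toy sketch (my own illustration, not John's code, and the class and parameter names are made up): a hash-based generator that folds fresh TRNG material into its state only after every `quantum` outputs, so an iterative-guessing attacker must guess a whole reseed's worth of entropy at once, while the TRNG is only tapped occasionally. SHA-256 stands in for whatever mixing function a real device would use.

```python
import hashlib, os

class QuantizedPRNG:
    """Toy generator: reseeds from a TRNG only every `quantum` reads,
    consuming TRNG bits in large, infrequent chunks (the 'quantized
    reseeding' of the quoted design) instead of dribbling them out."""

    def __init__(self, trng=os.urandom, quantum=64):
        self.trng = trng
        self.quantum = quantum
        self.count = 0
        # Initialize state from the TRNG: the quoted design *really*
        # wants the initial state to be really random.
        self.state = self.trng(32)

    def read(self, n=32):
        if self.count >= self.quantum:
            # Fold a full 256-bit reseed into the state all at once.
            self.state = hashlib.sha256(self.state + self.trng(32)).digest()
            self.count = 0
        out = b""
        while len(out) < n:
            self.state = hashlib.sha256(self.state + b"next").digest()
            out += hashlib.sha256(self.state + b"output").digest()
            self.count += 1
        return out[:n]
```

The `quantum` parameter is the knob the quote describes: small enough to bound how long a compromised state stays useful, large enough not to deplete the TRNG.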
Any proposal to reserve some (true) randomness for /dev/random means
that /dev/urandom will no longer be providing the best quality random
bits available. You are hurting the quality of the random numbers used
99% of the time in order to protect something that happens only 1% of
the time.
Consider another approach. Let applications which need true randomness
use an ioctl to temporarily turn off the transfer of randomness from
/dev/random to /dev/urandom. They can then pop up a dialog asking the
user to wiggle the mouse, assured that the randomness goes only into
/dev/random. Once they have used the true randomness, they issue the
ioctl again to turn the transfer to /dev/urandom back on.
This has several advantages. /dev/urandom, most of the time, gets all
the randomness that is available. This is exactly the right thing to
do almost all of the time. For the /dev/random user, it actually adds
assurance that the numbers are fresh and that some rogue app hasn't
sniffed the random pool a few minutes ago. Users will probably feel
better being asked to input randomness at keygen time when generating
long-term keys.
The ioctl can also solve another problem: a possible race condition
when two applications draw from /dev/random at once. Besides blocking
randomness from going to /dev/urandom, the ioctl could actually
reserve randomness for the process which issued it, serving as a sort
of lock on /dev/random. An app can issue the ioctl, put up a dialog to
have the user wiggle the mouse until enough bits are in the random
pool, then read the random data, confident that another app won't come
in and grab data from /dev/random in the interim.
(Granted this might be accomplished in other ways, but it would be
convenient to use the proposed ioctl for both purposes.)
This approach avoids the unnecessary politeness of John's proposed
/dev/vrandom. Most of the time there will be no other application
to give up randomness to, hence it is pointless not to use all the
randomness available.
But at this point it is not clear that this problem will arise frequently
enough to be worth worrying about much. It would be a nice cleanup
to have a way to reserve entropy for those tasks which really need it,
but it is unlikely that anyone is going to run into this problem in the
next year or so.