Paul Koning writes:

> The most straightforward way to do what's proposed seems to be like
> this:
>
> 1. Make two pools, one for /dev/random, one for /dev/urandom.  The
> former needs an entropy counter, the latter doesn't need it.
>
> 2. Create a third pool, which doesn't need to be big.  That's the
> entropy staging area.  It too has an entropy counter.
>
> 3. Have the add entropy function stir into that third pool, and credit 
> its entropy counter.
>
> 4. Whenever the entropy counter of the staging pool exceeds N bits (a
> good value for N is probably the hash length), draw N bits from it,
> and debit its entropy counter by N.
>
> If the entropy counter of the /dev/random pool is below K% of its
> upper bound (K = 75 has been suggested) stir these N bits into the
> /dev/random pool.  Otherwise, alternate between the two pools.  Credit 
> the pool's entropy counter by N.
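
For concreteness, here is a rough sketch of how steps 3 and 4 (and the
K = 75 rule) might be wired up.  Everything in it (the names, the sizes,
and the trivial XOR stand-in for the real mixing and extraction
functions) is made up for illustration; it is not the actual driver code.

/* Sketch of the staging flow in steps 3 and 4, plus the K = 75% rule.
 * Names, sizes, and the XOR "mixer" are placeholders only. */
#include <stddef.h>

#define N_BITS 160                    /* N = the hash length, per step 4 */

struct pool {
    unsigned char buf[512];
    int entropy;                      /* entropy counter, in bits */
    int limit;                        /* upper bound on the counter */
};

/* Stand-in for a real cryptographic mixing function. */
static void mix_in(struct pool *p, const unsigned char *d, size_t len)
{
    for (size_t i = 0; i < len; i++)
        p->buf[i % sizeof p->buf] ^= d[i];
}

static struct pool staging;           /* small entropy staging area */
static struct pool random_pool  = { .limit = 4096 };
static struct pool urandom_pool = { .limit = 4096 };

void add_entropy(const unsigned char *sample, size_t len, int est_bits)
{
    mix_in(&staging, sample, len);    /* step 3: stir into staging pool */
    staging.entropy += est_bits;      /* ... and credit its counter */

    while (staging.entropy >= N_BITS) {          /* step 4 */
        struct pool *dst;
        static int alternate;

        staging.entropy -= N_BITS;    /* debit the staging counter by N */

        /* Prefer /dev/random while below 75% full, else alternate. */
        if (random_pool.entropy < random_pool.limit * 3 / 4)
            dst = &random_pool;
        else
            dst = (alternate ^= 1) ? &random_pool : &urandom_pool;

        /* "Draw N bits" from the staging area and stir them into dst. */
        mix_in(dst, staging.buf, N_BITS / 8);
        dst->entropy += N_BITS;
        if (dst->entropy > dst->limit)
            dst->entropy = dst->limit;
    }
}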

Some suggested modifications:

The third pool, the entropy staging area, doesn't have to be big.
In fact, it doesn't have to be any bigger than the amount of entropy it
buffers, perhaps 100 bits or so.  This size need only be large enough
to prevent exhaustive search by an attacker.  80 or even 60 bits should
be enough in practice, but a multiple of 32 like 96 or 128 would be
more convenient for some algorithms.  It would probably want a
different mixing mechanism than the one used in the main random pool,
since it is so much smaller.  A SHA hash context could be used, as in
Yarrow, but that may be somewhat slow.  A 96-bit CRC would be another
good choice.
Cryptographic strength is not an issue here, just mixing.
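
As one purely illustrative example of such a lightweight mixer, a
96-bit state held as three 32-bit words could be stirred with rotates
and XORs.  This is only a sketch of the kind of non-cryptographic
mixing meant here, not a proposal for the actual function; a 96-bit
CRC or a hash context would serve equally well.

/* Toy 96-bit staging mixer: rotate-and-XOR over three 32-bit words.
 * Only meant to spread input bits around, not to be cryptographic. */
#include <stdint.h>
#include <stddef.h>

static uint32_t stage[3];             /* 96-bit staging state */

static uint32_t rol32(uint32_t x, int r)
{
    return (x << r) | (x >> (32 - r));
}

void stage_mix(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        stage[0] = rol32(stage[0], 7)  ^ buf[i];
        stage[1] = rol32(stage[1], 13) ^ stage[0];
        stage[2] = rol32(stage[2], 19) ^ stage[1];
    }
}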

Having the two pools for /dev/random and /dev/urandom sounds like the
right thing to do.  However, the proposal to favor /dev/random over
/dev/urandom ignores the fact that /dev/random is seldom used.

The description above calls for entropy to be given preferentially
to /dev/random until it is 75% full.  But this would starve /dev/urandom
during the crucial startup phase.  As was proposed earlier, it would
be better to get initial randomness into /dev/urandom first, so that it
is well seeded and can start producing good random numbers.  This
initial allocation should be about one staging-pool's worth, roughly
100 bits.  Once that is in place, entropy can go to both pools as
suggested above.
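
A minimal sketch of that startup rule, assuming each pool keeps an
entropy counter in bits; the names and the 96-bit threshold are
illustrative, not taken from the existing driver.

/* While /dev/urandom still lacks its initial seed, every batch from
 * the staging area should go to it before the normal split applies. */
struct pool { int entropy; /* entropy counter, in bits */ };

#define INITIAL_URANDOM_BITS 96       /* ~ one staging buffer's worth */

int urandom_needs_seed(const struct pool *urandom_pool)
{
    return urandom_pool->entropy < INITIAL_URANDOM_BITS;
}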

In operation, it is likely that the random pool will stay full and be
drawn upon only rarely, so there is little point in continually putting
more entropy into it.  The urandom pool will be much more heavily used.
It would make sense to have the algorithm for distributing entropy
between the pools be aware of this.

One possible mechanism would be to keep an entropy counter for both pools.
Put the first buffer of entropy into the urandom pool so that it gets
off to a good start.  Then divide incoming entropy between the pools
proportionally to how far they are from full.  If both pools are full,
divide it equally.  If one is full and the other is not, all incoming
entropy goes to the one that is not yet full.  If neither is full,
entropy is
divided proportionally, so that if one is 100 bits from full and the
other is 200 bits from full, the second is twice as likely to get the
input.
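
A sketch of that selection rule, assuming each pool tracks its entropy
counter and its upper bound in bits.  rand() merely stands in for
whatever cheap selection mechanism would really be used, and all names
are illustrative.

/* Decide which pool receives the next batch of entropy, weighting the
 * choice by each pool's deficit (its distance from full). */
#include <stdlib.h>

struct pool { int entropy, limit; };  /* counter and bound, in bits */

struct pool *pick_pool(struct pool *rnd, struct pool *urnd)
{
    int need_r = rnd->limit  - rnd->entropy;     /* bits short of full */
    int need_u = urnd->limit - urnd->entropy;

    if (need_r <= 0 && need_u <= 0)              /* both full: split evenly */
        return (rand() & 1) ? rnd : urnd;
    if (need_r <= 0)                             /* one full: feed the other */
        return urnd;
    if (need_u <= 0)
        return rnd;

    /* Neither full: pick with probability proportional to the deficit. */
    return (rand() % (need_r + need_u)) < need_r ? rnd : urnd;
}

In the 100-versus-200-bit example above, this picks the second pool
with probability 2/3.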

This will cause both pools to be refreshed constantly when the machine
is quiescent and not using randomness.  When it is active and using
/dev/urandom, that pool will get all the incoming entropy once /dev/random
is full.  This makes the most efficient use of incoming entropy: none of
it is wasted by stirring it into an already-full /dev/random pool, which
cannot hold any more and would effectively discard it.  Entropy is a
scarce and valuable resource in many configurations and should not be
thrown away.
