> > /dev/random should become two-stage, ...

I thought /dev/urandom was the problem: as new entropy comes
in, the cryptographically secure pseudo-RNG needs to receive it
in big chunks, so an attacker who can observe the output can't probe
it to guess each new bit of entropy as it arrives.

This, it seems, would require keeping two pools of entropy:
running a separate pool for /dev/urandom, and only dumping in more
entropy from /dev/random's pool when /dev/random accumulates more than
N bits' worth.  (N large enough to preclude exhaustive search.)
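The two-pool scheme above can be sketched in miniature.  This is a toy
model, not the kernel code: the pool sizes, the SHA-256 mixing, and the
threshold of 128 bits for N are all illustrative choices of mine, but the
key property matches the text — fresh entropy lands only in the input
pool, and the /dev/urandom pool is reseeded in one catastrophic dump
once at least N bits have accumulated.

```python
import hashlib
import os

RESEED_THRESHOLD_BITS = 128  # "N": large enough to preclude exhaustive search


class TwoPoolRNG:
    """Toy model of the two-pool design: /dev/urandom draws from its
    own pool, reseeded from the input pool only in >= N-bit chunks."""

    def __init__(self):
        self.input_pool = b""              # /dev/random's pool (simplified)
        self.input_credit = 0              # estimated fresh entropy, in bits
        self.urandom_key = os.urandom(32)  # CSPRNG state behind /dev/urandom
        self.counter = 0

    def add_entropy(self, sample: bytes, est_bits: int):
        # New entropy goes only into the input pool, so an attacker who
        # watches /dev/urandom never sees it trickle in bit by bit.
        self.input_pool = hashlib.sha256(self.input_pool + sample).digest()
        self.input_credit += est_bits
        if self.input_credit >= RESEED_THRESHOLD_BITS:
            # Catastrophic reseed: dump the whole chunk at once.
            self.urandom_key = hashlib.sha256(
                self.urandom_key + self.input_pool).digest()
            self.input_credit = 0

    def urandom(self, n: int) -> bytes:
        # Generate output from the urandom pool alone, in counter mode.
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.urandom_key + self.counter.to_bytes(8, "big")).digest()
        return out[:n]
```

Note that between reseeds the urandom key does not change at all, which
is exactly what stops the bit-at-a-time guessing attack.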

Making this change doesn't involve changing any cryptographic primitives.

(We should definitely not be using unproven AES candidates for
anything; we're trying to rely only on algorithms that have seen
extensive and unsuccessful attack.)

The other useful change we identified is to allow /dev/random to
accumulate a larger entropy pool than 512 bytes, and carry it across
reboots so it's available to establish tunnels quickly.  I think Ted
Ts'o has already done some of that work, though programs like Pluto
can't yet tell /dev/random how big an entropy buffer they want it to
keep.
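The carry-across-reboots half is usually done with a seed file: save
pool output at shutdown, feed it back in at boot.  A minimal sketch,
with a hypothetical seed path and size of my choosing, and os.urandom
standing in for reading /dev/urandom:

```python
import os

SEED_FILE = "/var/lib/random-seed"   # hypothetical path for illustration
SEED_BYTES = 4096                    # larger than today's 512-byte pool

def save_seed(path=SEED_FILE):
    # At shutdown: stash pool output so the next boot starts seeded.
    data = os.urandom(SEED_BYTES)    # stand-in for reading /dev/urandom
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, 0o600)            # the seed must stay secret

def load_seed(path=SEED_FILE):
    # At boot: read the saved seed back (a real init script would write
    # it into /dev/urandom here), then replace the file immediately so
    # a crash can't replay the same seed on two boots.
    with open(path, "rb") as f:
        data = f.read()
    save_seed(path)
    return data
```

Rewriting the seed file right after reading it is the important detail:
a seed that gets reused is worse than no seed at all.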

In the linux-ipsec release, Pluto and KLIPS also need to document
their need for random and pseudo-random values.  And we should cut
back their appetite anywhere we safely can.

        John
