Date: Sun, 01 Aug 1999 17:04:14 +0000
From: Sandy Harris <[EMAIL PROTECTED]>
> More analysis is needed, especially in the area of how
> to estimate input entropy.
True. I actually don't believe perfection is at all possible. There
are things which could probably do a better job, such as trying to run
gzip -9 over the entropy stream and then using the size of the
compressed stream (minus the dictionary) as the entropy estimate. This is
neither fast nor practical to do in the kernel, though.
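Purely to illustrate the idea, a rough user-space sketch using zlib (the
same deflate code behind gzip) at level 9 might look like the following;
the function name and the overhead constant are made up, and the result
is at best an upper bound on the entropy:

    import zlib

    def estimated_entropy_bits(sample):
        # Compress at level 9 and take the compressed size, minus a rough
        # allowance for the zlib header and trailer, as an upper bound on
        # the entropy of the sample.
        compressed = zlib.compress(sample, 9)
        overhead = 11          # rough header/trailer allowance, in bytes
        return max(len(compressed) - overhead, 0) * 8

    if __name__ == "__main__":
        import os
        print(estimated_entropy_bits(os.urandom(4096)))   # near 32768
        print(estimated_entropy_bits(b"A" * 4096))        # near 0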
> Yarrow's two-stage design, where the output from hashing the pool
> seeds a pseudo-random number generator based on a strong block
> cipher, offers significant advantages over the one-stage design in
> /dev/random which delivers hash results as output. In particular, it
> makes the hash harder to attack since its outputs are never directly
> seen, makes denial-of-service attacks based on depleting the
> generator nearly impossible, and "catastrophic reseeding" prevents
> iterative guessing attacks on generator internal state.
Yarrow is different. It uses a 160-bit pool, and as such is much more
dependent on the strength of the cryptographic function. Hence, the
two-stage design is much more important. It also doesn't use a block
cipher, BTW. It just uses an iterated hash function such as SHA, at
least the last time I looked at the Counterpane paper.
Linux's /dev/random uses a very different design, in that it uses a
large pool to store the entropy. As long as you have enough entropy
(i.e., you don't overdraw on the pool's entropy), /dev/random isn't
relying on the cryptographic properties as much as Yarrow does.
Consider that if you only withdraw 160 bits of randomness out of a 32k-bit
pool, even if you can completely reverse the SHA function, you can't
possibly determine more than about 0.5% (160/32768) of the pool.
As such, I don't really believe the second stage design part of Yarrow
is really necessary for /dev/random. Does it add something to the
security? Yes, but at the cost of relying more on the crypto hash, and
less on the entropy collection aspects of /dev/random, which are, as far
as I'm concerned, much more important anyway.
If Free S/WAN really wants the second stage design, I will observe that
the second stage can be done entirely in user space. Just use
/dev/random or /dev/urandom as the first stage, and then simply use an
iterated SHA (or AES candidate in MAC mode --- it really doesn't matter)
as your second stage, periodically doing the catastrophic reseed by
grabbing more data from /dev/random. This gives you all of the
benefits (speed of key generation, no worry about DOS attacks by
depleting entropy --- by mostly ignoring the problem) and drawbacks
(over-dependence on the crypto function) of Yarrow.
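To make that concrete, here is a rough sketch of what such a user-space
second stage might look like; the class name, the reseed interval, and
the choice of SHA-256 as the iterated hash are illustrative assumptions
only, not anything Free S/WAN actually ships:

    import hashlib

    RESEED_INTERVAL = 1 << 20   # catastrophically reseed after ~1 MB of output

    class TwoStageGenerator:
        def __init__(self):
            self._reseed()

        def _reseed(self):
            # Catastrophic reseed: throw away the entire internal state
            # and replace it with fresh output from the first stage.
            with open("/dev/random", "rb") as f:
                self._state = f.read(32)
            self._output_since_reseed = 0

        def read(self, n):
            out = bytearray()
            while len(out) < n:
                if self._output_since_reseed >= RESEED_INTERVAL:
                    self._reseed()
                # Iterate the hash: derive an output block and the next
                # state from the current state, so the state itself is
                # never seen directly in the output.
                out += hashlib.sha256(self._state + b"output").digest()
                self._state = hashlib.sha256(self._state + b"update").digest()
                self._output_since_reseed += 32
            return bytes(out[:n])

The point of the reseed step is that the whole internal state is replaced
at once from /dev/random, which is what makes it "catastrophic" with
respect to iterative guessing attacks.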
- Ted
P.S. PGP's random number generator is similar to Linux's, and is
similarly quite different from Yarrow. Probably the best thing to say
is that they are philosophically quite different. I don't really believe we have
enough analysis tools to say which one is "better".