Hi Tony

On Mon, Mar 09, 2015 at 11:29:30AM +0000, Tony Finch wrote:
> Robert Martin-Legene <[email protected]> wrote:
> >
> > Does anyone have experiences using haveged for PRNG? When generating
> > DNSSEC keys on a virtual server is takes a looong time to get
> > randomness.
>
> My view is that haveged might be snake-oil, but it is a useful way of
> fixing braindamage in the Linux implementation of /dev/random.
>
> An RNG should block until it has been securely seeded, and after that it
> should run freely. Linux /dev/urandom fails to block and /dev/random fails
> to run freely. Sigh. Haveged at least fixes the /dev/random bogus entropy
> estimation, but you should also check that your distro ensures the RNG is
> properly initialized e.g. using a random seed file.
There does not seem to be an authoritative reference for /dev/random.
After reading the above, I tried to find it mentioned somewhere within
POSIX, but it isn't there. According to the OpenBSD manpage, it first
appeared on Linux, and other operating systems cloned it from there.
/dev/random and /dev/urandom are both PRNGs. A good PRNG implementation
can generate a long pseudo-random sequence of values from a small source
of randomness. E.g., the new ChaCha-based PRNG in OpenBSD libc can
generate a pseudo-random sequence of 1,600,000 bytes from just 40 bytes
of entropy before requiring re-seeding. The earlier RC4 PRNG used 128
bytes of entropy for a similar sequence length.
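For illustration, this is all a consumer of that libc PRNG sees; the
ChaCha keying, the output limit and the re-seeding are internal to
arc4random(3). (Sketch assumes OpenBSD, where arc4random_buf() is
declared in <stdlib.h>.)

#include <stdio.h>
#include <stdlib.h>     /* arc4random_buf() on OpenBSD */

int
main(void)
{
        unsigned char key[32];  /* e.g. raw key material */

        /* Fill the buffer from the ChaCha-based PRNG; never blocks. */
        arc4random_buf(key, sizeof(key));

        for (size_t i = 0; i < sizeof(key); i++)
                printf("%02x", key[i]);
        printf("\n");
        return 0;
}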
The OpenBSD device (unlike the implementation in its libc) seems to use
an intermediate SHA-512 step before passing entropy to the ChaCha
PRNG. When the PRNG's entropy is exhausted, it replenishes itself from
the SHA-512 output mixed with the current time (and so never blocks),
but this may not be good enough for every kind of purpose.
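The general shape is easy to sketch. The following is only an
illustration of that extract-and-rekey step, not the actual OpenBSD
kernel code: it borrows OpenSSL's one-shot SHA512() for the hash, and
the pool-mixing and ChaCha output stages are left out entirely.

#include <string.h>
#include <time.h>
#include <openssl/sha.h>        /* SHA512(); link with -lcrypto */

#define POOL_SIZE 4096

static unsigned char pool[POOL_SIZE];   /* entropy pool, mixed elsewhere */
static unsigned char chacha_key[32];    /* 256-bit ChaCha key */
static unsigned char chacha_nonce[8];   /* 64-bit ChaCha nonce */

/* Re-key the stream PRNG from the pool; note that it never blocks. */
static void
rekey(void)
{
        unsigned char buf[POOL_SIZE + sizeof(time_t)];
        unsigned char digest[SHA512_DIGEST_LENGTH];     /* 64 bytes */
        time_t now = time(NULL);

        memcpy(buf, pool, POOL_SIZE);
        memcpy(buf + POOL_SIZE, &now, sizeof(now));
        SHA512(buf, sizeof(buf), digest);

        /* The 64-byte digest covers a 32-byte key plus an 8-byte nonce. */
        memcpy(chacha_key, digest, sizeof(chacha_key));
        memcpy(chacha_nonce, digest + sizeof(chacha_key),
            sizeof(chacha_nonce));
}

int
main(void)
{
        rekey();        /* chacha_key/chacha_nonce would now key the cipher */
        return 0;
}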
This doesn't mean that /dev/urandom is absolutely insecure, or that
/dev/random is truly random. There does not even seem to be a
specification for these two devices, so implementations do as they
please, and it's not possible to quantify "good enough for every kind
of purpose".
If a PRNG runs out of entropy (which is a very rare event), how is an
application to know (should it want to) that the sequence of random
bytes after this event is weaker in some relative sense? (On Linux the
kernel's entropy estimate can at least be read; a sketch follows the
quoted text below.) See RFC 4086 for some background:
> 6.3. Entropy Pool Techniques
>
> Many modern pseudo-random number sources, such as those described in
> Sections 7.1.2 and 7.1.3 utilize the technique of maintaining a
> "pool" of bits and providing operations for strongly mixing input
> with some randomness into the pool and extracting pseudo-random bits
> from the pool.
[snip]
>
> Bits to be fed into the pool can come from any of the various
> hardware, environmental, or user input sources discussed above. It
> is also common to save the state of the pool on system shutdown and
> to restore it on re-starting, when stable storage is available.
>
> Care must be taken that enough entropy has been added to the pool to
> support particular output uses desired.
and
> 7.1.2. The /dev/random Device
[snip]
>
> /dev/urandom works like /dev/random; however, it provides data even
> when the entropy estimate for the random pool drops to zero. This
> may be adequate for session keys or for other key generation tasks
> for which blocking to await more random bits is not acceptable. The
> risk of continuing to take data even when the pool's entropy estimate
> is small, however, is that past output may be computable from current output,
> provided that an attacker can reverse SHA-1. Given that SHA-1 is
> designed to be non-invertible, this is a reasonable risk.
(with implementation details of that era.)
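As noted above, Linux does at least expose its entropy estimate
(questionable as that estimate is), so an application that cares can
read the number. A Linux-specific sketch using the RNDGETENTCNT ioctl,
which reports the same figure as /proc/sys/kernel/random/entropy_avail:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/random.h>       /* RNDGETENTCNT */

int
main(void)
{
        int fd, entropy;

        fd = open("/dev/random", O_RDONLY);
        if (fd == -1) {
                perror("open");
                return 1;
        }
        /* Ask the kernel for its current entropy estimate, in bits. */
        if (ioctl(fd, RNDGETENTCNT, &entropy) == -1) {
                perror("ioctl");
                close(fd);
                return 1;
        }
        printf("kernel entropy estimate: %d bits\n", entropy);
        close(fd);
        return 0;
}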
Mukund