On 04/18/2011 09:26 PM, Sandy Harris wrote:
> In many situations, you have some sort of randomness pool
> and some cryptographic operations that require random
> numbers. One concern is whether there is enough entropy
> to support the usage.

> Is it useful to make the crypto throw something back
> into the pool? Of course this cannot replace real new
> entropy, but does it help?

_If_ one is doing everything fundamentally right in the CSRNG, I think it does not help.

> If you are doing IPsec with HMAC SHA-1, for example,

Then we should expect you to have sufficient entropy in the pool to make brute force impractical forever (say, 128 bits), and to be extracting your random stream from it through an effective one-way function.

You do not need to decrement the entropy estimate of the pool as you generate random numbers from it. If you believe the entropy of the pool decreases as you generate output from it, then either your starting entropy was much too small or you don't believe in the one-wayness of your extraction function.

In other words, /dev/random should be a symlink to /dev/urandom.
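
To make that concrete, here is a minimal sketch of extraction through a one-way function. This is my own illustration, not any particular kernel's or library's design; the PoolPRNG name and the choice of SHA-256 are assumptions:

    import hashlib
    import os

    class PoolPRNG:
        """Toy pool: seeded once with enough real entropy (~128 bits),
        then read through a one-way function. Reads never decrement
        any entropy estimate."""

        def __init__(self, seed: bytes):
            self.pool = hashlib.sha256(seed).digest()
            self.counter = 0

        def read(self, nbytes: int) -> bytes:
            out = b""
            while len(out) < nbytes:
                # Extract hash(pool || counter): seeing this output tells
                # an attacker nothing about the pool state short of
                # inverting SHA-256.
                out += hashlib.sha256(
                    self.pool + self.counter.to_bytes(8, "big")
                ).digest()
                self.counter += 1
            return out[:nbytes]

    prng = PoolPRNG(os.urandom(16))   # 128 bits of starting entropy
    stream = prng.read(1 << 20)       # a megabyte of output, same pool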

> SHA gives a 160-bit hash and it is truncated to 96 bits
> for the HMAC. Why not throw the other 64 bits into the
> pool?
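
For concreteness, here is what the proposal amounts to. The hmac_sha1_96 function and the pool.mix call are names I made up; per RFC 2404, HMAC-SHA1-96 transmits only the first 96 bits of the 160-bit tag:

    import hashlib
    import hmac

    def hmac_sha1_96(key: bytes, msg: bytes):
        tag = hmac.new(key, msg, hashlib.sha1).digest()  # 160 bits
        return tag[:12], tag[12:]  # 96-bit MAC on the wire, 64 bits left over

    mac, leftover = hmac_sha1_96(b"k" * 20, b"packet payload")
    # pool.mix(leftover)  # hypothetical call: stir the unused bits back in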

The costs are:

  - Code complexity. More chance something could go wrong.

    This was pointed out today:
    https://twitter.com/bleidl/status/60111957818212352
    It looks to me like a bad bug in the entropy-estimate
    decrementing code, one that may have caused the pool to go
    largely unused altogether.

  - Performance.

The benefit is:

  - Very hard to quantify, except that it comes in proportion to
    the degree to which other very important parts of the system
    are broken. How much entropy do you credit for these bits
    that you put back in?

> In other cases, you might construct something to
> derive data for the pool. You do not need a lot.
> Each keying operation uses a few hundred bits
> of randomness.

I don't think that these bits of randomness are "consumed" in a well-constructed system.

IIRC, Peter Gutmann was using the term "computational entropy" to refer to the entropy seemingly generated within the hash function. But I don't think he was willing to go all the way to conclude that the pool entropy was nondecreasing.

I know I'm probably disagreeing with an old textbook here, but these recommendations do need to be re-evaluated from time to time.

Some things have quietly changed over the last few years:

* We know more about hash functions. MD5 is seriously busted, but no one has yet calculated even a second preimage. SHA-1 has one foot in the grave too, but has no published collisions. The only published attacks on SHA-2 don't seem serious enough to mention. The SHA-3 contest is more an attempt to improve flexibility and performance than a need to improve security over SHA-2.

* We have learned to expect and rely on related-key and known-plaintext resistance from our block ciphers. We must also learn to rely on the fundamental properties of our hash functions. In constructions like HMAC, the attacker is presumed to know the plaintext and the resulting MAC, so an insufficiency in the one-wayness of the underlying hash function threatens the key directly. If your hash function is weak in that respect, you probably have bigger problems.
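
  As a tiny illustration of that threat model (the key and message
  values here are made up):

    import hashlib
    import hmac

    key = bytes(16)            # the secret the attacker is after
    msg = b"known plaintext"   # presumed visible to the attacker
    tag = hmac.new(key, msg, hashlib.sha1).digest()

    # The attacker's view is the pair (msg, tag). Recovering key from
    # that pair means defeating the one-wayness of SHA-1 inside HMAC;
    # hash-based pool extraction leans on exactly the same property.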

* Computers are attacked in different ways now. The threats to a CSRNG on a virtual server in a cloud provider's data center are much different from those facing a barely-networked PC running PGP years ago. Once a machine is pwned, we can no longer trust it until it has been completely reformatted, perhaps including the firmware. In the past we might have hoped that a self-reseeding PRNG could "recover" its security, but today most compromises put the attacker in a position to inject malware into the running kernel itself. There's no recovery from that.

* An attacker may be able to force the generation of output from the CSPRNG. What do you do when its entropy estimate drops to zero? Block? Now the bad guy can easily DoS your apps, and they will simply move to using some other non-blocking RNG.
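
  A sketch of the blocking behavior at issue (the BlockingPool class
  is illustrative, not any real kernel interface):

    import threading

    class BlockingPool:
        def __init__(self, estimated_bits: int):
            self.estimated_bits = estimated_bits
            self.cond = threading.Condition()

        def read(self, nbits: int) -> None:
            with self.cond:
                # Blocks whenever the estimate runs dry, so an attacker
                # who can trigger reads can stall every other consumer.
                while self.estimated_bits < nbits:
                    self.cond.wait()
                self.estimated_bits -= nbits  # the decrement argued against above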

Still, your cloud server provider could disclose the contents of your virtual memory somehow, so periodic stirring (with a one-way function, of course) and catastrophic reseeding could be useful. But the reseeding is only to help mitigate an uncontrolled state disclosure, not because the entropy in the pool has significantly depleted. Even if you did experience such a memory disclosure, you'd probably be more worried about the private and session keys that were leaked directly.
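
A sketch of what stirring and catastrophic reseeding might look like (the function names and the use of SHA-256 are my own assumptions):

    import hashlib
    import os

    def stir(pool: bytes) -> bytes:
        # One-way stirring: disclosure of the current state does not
        # reveal earlier states or the outputs derived from them.
        return hashlib.sha256(b"stir" + pool).digest()

    def catastrophic_reseed(pool: bytes, fresh: bytes) -> bytes:
        # Fold in a large batch of fresh entropy all at once, so an
        # attacker who knew the old state cannot keep up by guessing
        # small reseed inputs one at a time.
        return hashlib.sha256(pool + fresh).digest()

    pool = os.urandom(32)
    pool = stir(pool)
    pool = catastrophic_reseed(pool, os.urandom(32))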

I would love to hear about an example, but I don't think a well-constructed CSPRNG with even a 100-bit pool size has ever been compromised due to entropy-depletion effects.

Last I looked at OpenSSL, its CSPRNG would accumulate 70 or so bytes of real entropy in its pool and generate an arbitrary amount of output from that.

- Marsh
