On 02/17/2012 01:32 PM, Thierry Moreau wrote:

> Isn't /dev/urandom BY DEFINITION of limited true entropy?

It depends on the model you use.

In the model that makes sense to me, where the attacker has finite computational resources (i.e., insufficient to brute-force the search space of your pool) and all output is produced through a one-way function, the answer is no: the entropy of /dev/urandom, as perceived by the attacker, is no different from that of /dev/random.
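To make that model concrete, here is a minimal sketch (this is NOT the actual Linux kernel algorithm; the class name, ratchet construction, and 256-bit state size are my own illustrative assumptions) of a generator where every output block passes through a one-way function of a hidden pool, so an attacker who cannot brute-force the pool state learns nothing usable from the outputs:

```python
import hashlib

class SketchPRNG:
    """Toy model: outputs are one-way functions of a hidden pool."""

    def __init__(self, seed: bytes):
        # Hypothetical 256-bit internal state, seeded from "true" entropy.
        self.state = hashlib.sha256(seed).digest()

    def read(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            # Each output block is SHA-256(state || counter); the state
            # itself is never revealed, so recovering it from outputs
            # requires inverting the hash or brute-forcing the pool.
            out += hashlib.sha256(
                self.state + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        # Ratchet the state forward so that a later state compromise
        # does not expose previously generated outputs.
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        return out[:n]

prng = SketchPRNG(b"environmental noise goes here")
print(prng.read(16).hex())
```

In this toy model the distinction between /dev/random and /dev/urandom disappears for a computationally bounded attacker: both reduce to "guess the 256-bit state or break the hash".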

> True entropy collection may take time (and is inescapably based on
> environmental assumptions) while /dev/urandom is defined as
> non-blocking. No matter the theoretical properties of the
> (deterministic) PRNG component of /dev/urandom, they can not expand
> *true* entropy.

OK, but to what extent is this distinction between "true" and "pseudo" entropy equally theoretical when the system as a whole is considered?

These authors seem to be attempting to formalize this intuitive notion of "unpredictability entropy", which (somewhat surprisingly to me) they say has not been sufficiently considered as distinct from the traditional statistical and incompressibility definitions:
http://www.cs.bu.edu/~reyzin/papers/entropy.ps

> And this is so, no matter the amount of details you delegate to
> reputed security software developers.

It's still an interesting enough topic that honest people can disagree.

Personally, I'd like to see it get sorted out well enough that kernels could save the tens of KiB of nonpageable RAM they currently use for their entropy pools and instead size the pool at some small multiple of an attacker's brute-force capability. For example, we could estimate an upper bound on an attacker's resources at 2^80 and size the entropy pool at a generous 256 bits to match a modern hash function. Make that a few separate accumulation pools for catastrophic reseeding and the total is still well under 100 bytes.
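A rough sketch of the "few separate accumulation pools" idea, loosely in the style of Fortuna's catastrophic-reseeding schedule (the pool count, round-robin assignment, and reseed rule here are illustrative assumptions, not a vetted design):

```python
import hashlib

POOLS = 3  # assumed: a few pools; entropy-bearing state is
           # conceptually 3 x 32 B of pool digests + a 32 B key < 100 B

class TinyAccumulator:
    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(POOLS)]
        self.events = 0
        self.key = b"\x00" * 32  # generator key, replaced on reseed

    def add_event(self, data: bytes):
        # Round-robin incoming events across the pools.
        self.pools[self.events % POOLS].update(data)
        self.events += 1

    def reseed(self, count: int):
        # Pool i contributes only on every 2**i-th reseed, so even an
        # attacker who observes frequent reseeds faces a pool that has
        # accumulated entropy over a long interval ("catastrophic"
        # reseeding defeats iterative state-guessing).
        h = hashlib.sha256(self.key)
        for i in range(POOLS):
            if count % (2 ** i) == 0:
                h.update(self.pools[i].digest())
                self.pools[i] = hashlib.sha256()
        self.key = h.digest()
```

The point of the sketch is the size estimate in the paragraph above: the entropy-bearing state is a handful of hash-sized values, nowhere near tens of KiB.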

- Marsh
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography