On 8/17/2013 9:39 AM, Sandy Harris wrote:
> Papers like Yarrow, with respected authors, argue convincingly that systems with far smaller state can be secure.

> I've argued in private (and now here) that a large entropy pool is a natural response to entropy famine and uneven supply, just like a large grain depot guards against food shortages and uneven supply.

If you've got lots of good quality random data available, you don't need a large state. You can just stir lots of raw data into a small state and the small state will become fully entropic. The natural size for the state shrinks to the block size of the crypto function being used for entropy extraction. Once the value is formed and fully entropic, you spit it out and start again.
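
To make that concrete, here's a rough sketch of small-state extraction (using OpenSSL's SHA-256 for illustration; the per-byte entropy estimate and the 2x safety margin are my assumptions, not anything from a spec):

/* Stir raw samples into a 256-bit hash state until the estimated
 * input entropy comfortably exceeds the output size, then emit the
 * block and start again with a fresh state. */
#include <openssl/sha.h>
#include <stdint.h>
#include <stddef.h>

#define OUT_BITS 256
#define MARGIN   2          /* assumed safety margin: feed 2x OUT_BITS */

typedef struct {
    SHA256_CTX ctx;
    double bits_in;         /* estimated entropy accumulated so far */
} extractor_t;

void extractor_init(extractor_t *e)
{
    SHA256_Init(&e->ctx);
    e->bits_in = 0.0;
}

/* Returns 1 and fills out[32] when a fully entropic block is ready. */
int extractor_feed(extractor_t *e, const uint8_t *raw, size_t len,
                   double entropy_per_byte, uint8_t out[32])
{
    SHA256_Update(&e->ctx, raw, len);
    e->bits_in += entropy_per_byte * (double)len;
    if (e->bits_in >= MARGIN * OUT_BITS) {
        SHA256_Final(out, &e->ctx);
        extractor_init(e);  /* spit it out, start again */
        return 1;
    }
    return 0;
}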

This is one of the things that drove the design decisions in the RdRand DRNG. With 2.5 Gbps of 95% entropic data, there is no value in stirring the data into a huge pool (e.g., like Linux) so that you can live off that pool for a long time even when the user isn't wiggling the mouse or whatever. There will be more random data along in under 300ps, so prepare another 256 bits from a few thousand raw bits and reseed.
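
For a flavor of what that conditioning step looks like, here's a simplified AES-128 CBC-MAC conditioner (the real DRNG builds a 256-bit seed and doesn't work exactly like this; the caller-supplied key and block-aligned input are simplifications for illustration):

/* Condition a buffer of ~95% entropic raw bits down to one 128-bit
 * value: CBC-encrypt the whole buffer and keep only the final
 * ciphertext block, i.e. the CBC-MAC of the input. */
#include <openssl/evp.h>
#include <string.h>

int cbc_mac_condition(const unsigned char *raw, size_t len, /* multiple of 16 */
                      const unsigned char key[16],
                      unsigned char out[16])
{
    EVP_CIPHER_CTX *ctx;
    unsigned char iv[16] = {0};   /* CBC-MAC runs with a zero IV */
    unsigned char block[16];
    int outl, ok = 0;

    if (len < 16 || len % 16 != 0)
        return 0;
    if ((ctx = EVP_CIPHER_CTX_new()) == NULL)
        return 0;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) == 1) {
        for (size_t i = 0; i + 16 <= len; i += 16) {
            /* each full-block update yields one ciphertext block */
            if (EVP_EncryptUpdate(ctx, block, &outl, raw + i, 16) != 1)
                goto done;
        }
        memcpy(out, block, 16);   /* last block = conditioned output */
        ok = 1;
    }
done:
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}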

A consequence of Linux having a big pool is that the stirring algorithm is expensive, because it has to operate over many bits. So an LFSR is used, because it's cheaper than a hash or a MAC. An LFSR may be a good entropy extractor, but I don't know of any math that shows that to be the case. We do have that math for hashes, CBC-MACs and various GF arithmetic methods.
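
For comparison, the linear mixing in question is on the order of a Galois LFSR step, which is cheap precisely because it's just a shift and a conditional XOR per bit (the 32-bit width and feedback taps below are illustrative, not the kernel's actual pool polynomial):

/* Fold one raw byte into a 32-bit LFSR state.  Cheap, but linear,
 * which is why the extractor proofs we have for hashes and CBC-MACs
 * don't carry over. */
#include <stdint.h>

static uint32_t lfsr_mix(uint32_t state, uint8_t input)
{
    state ^= input;                 /* fold the raw byte in */
    for (int i = 0; i < 8; i++) {
        uint32_t lsb = state & 1u;
        state >>= 1;
        if (lsb)
            state ^= 0xB4BCD35Cu;   /* illustrative feedback taps */
    }
    return state;
}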

When I count my raw data in bits per second rather than gigabits per second, I am of course going to use it efficiently and mix up a large pot of state, so I can get maximum utility out of it. With the RdRand DRNG, the bus is the limiting factor, not the supply or the pool size.

DJ

