----- Original Message -----
From: <[EMAIL PROTECTED]>
To: <cryptography@metzdowd.com>
Sent: Friday, January 07, 2005 9:30 AM
Subject: Re: entropy depletion (was: SSL/TLS passive sniffing)

> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Enzo Michelangeli
> > Sent: Tuesday, January 04, 2005 7:50 PM
> >
> > This "entropy depletion" issue keeps coming up every now and
> > then, but I still don't understand how it is supposed to
> > happen. If the PRNG uses a really non-invertible algorithm
> > (or one invertible only with intractable complexity), its
> > output gives no insight whatsoever into its internal state.
>
> I see much misunderstanding of entropy depletion, and many misstatements
> because of it.
>
> It is true that you don't know what the internal state is, but the
> number of possible internal states tends to shrink with every update of
> the internal state. See "Random Mapping Statistics" by Philippe Flajolet
> and Andrew M. Odlyzko (Advances in Cryptology - EUROCRYPT '89,
> Houthalen, Belgium, pp. 329-354, 1990) for a thorough discussion.
[...]
> In the real world, our PRNG state update functions are complex enough
> that we don't know whether they are well behaved. Nobody knows how many
> cycles exist in a PRNG state update function using, for example, SHA-1.
> You run your PRNG long enough and you may actually hit a state that,
> when updated, maps onto itself. When this occurs, your PRNG will start
> producing the same bits over and over again. It would be worse if you
> hit a cycle of length 10,000 or so, because you might never realize it.
>
> I don't know of any work on how not-so-well-behaved PRNG state update
> functions lose entropy.

But that was precisely my initial position: that the insight into the
internal state that we gain from one bit of output (which I saw, by
definition, as the loss of entropy by the generator) is much smaller than
one full bit. However, I've been convinced by the argument brought by
John and others - thanks guys - that we should not mix the concept of
"entropy" with issues of computational hardness.
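The state-space contraction (and the eventual cycle) is easy to see empirically on a toy state space. The sketch below is only an illustration, not any real PRNG: it builds an arbitrary random update function on 2^16 states, counts how many states remain reachable after each update, and then measures one cycle length with Floyd's tortoise-and-hare. The state size and seed are arbitrary choices.

```python
import random

N = 1 << 16                    # 65,536 possible internal states (toy size)
rng = random.Random(1)         # fixed seed so the run is reproducible
f = [rng.randrange(N) for _ in range(N)]   # a "random" state update function

# Part 1: the set of reachable states shrinks with every update.
states = set(range(N))         # initially, every state is possible
sizes = []
for _ in range(8):
    states = {f[s] for s in states}        # apply one state update
    sizes.append(len(states))
print("reachable states after each update:", sizes)

# Part 2: iterating f from any start point must eventually enter a cycle;
# Floyd's tortoise-and-hare finds it without storing the whole trajectory.
def cycle_length(f, x0):
    tortoise, hare = f[x0], f[f[x0]]
    while tortoise != hare:
        tortoise, hare = f[tortoise], f[f[hare]]
    # hare is now somewhere on the cycle; walk around it once to measure it
    length, y = 1, f[hare]
    while y != hare:
        y, length = f[y], length + 1
    return length

cl = cycle_length(f, 0)
print("cycle length reached from state 0:", cl)
```

On a random mapping the image shrinks by roughly a factor of (1 - 1/e) on the first update, in line with the Flajolet-Odlyzko asymptotics, and the cycle reached from a given start point is typically only on the order of sqrt(N) states long. A well-designed real PRNG has a vastly larger state space, but the qualitative behaviour is the point here.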
That said, however, I wonder if we shouldn't focus more, for practical
purposes, on the replacement concept offered by John of "usable
randomness", with a formal definition allowing its calculation in
concrete cases (so that we may assess, in a more quantitative way, the
risk deriving from using a seeded PRNG rather than a pure RNG). The paper
you mention appears to go in that direction.

Enzo

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]