I wrote:
>> A long string produced by a good PRNG is conditionally
>> compressible in the sense that we know there exists a
>> shorter representation, but at the same time we believe it
>> to be conditionally incompressible in the sense that the
>> adversaries have no feasible way of finding a shorter
>> representation.
>>
>> In contrast, a long string produced by a HESG is
>> unconditionally, absolutely incompressible. There does not
>> exist a shorter representation. There cannot possibly exist
>> a shorter representation.
Jerrold Leichter wrote:
> You're using Kolmogorov/Chaitin complexity here, but
> unfortunately that's a bit hard to apply.
It works for me. It is in principle hard to apply exactly, but in practice it's easy to apply to a good-enough approximation.
In particular, for any given PRNG I can readily exhibit the compressed representation of its output, namely the PRNG itself along with the initial internal state.
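For example (a minimal sketch of my own; Python's Mersenne Twister stands in for "any given PRNG" -- it is not a cryptographically good PRNG, but the compressibility point is identical):

    import random

    # The "shorter representation" of a PRNG's output is just
    # the generator algorithm plus its seed / internal state.
    seed = 0xC0FFEE                      # a few bytes of state
    rng = random.Random(seed)
    stream = rng.randbytes(10_000_000)   # 10 MB of output

    # Anyone holding the generator and the seed regenerates the
    # stream exactly, so (generator, seed) is a compressed
    # representation of all 10 MB:
    assert stream == random.Random(seed).randbytes(10_000_000)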
> K/C complexity is robust when you talk about sets of
> possible strings, because playing around with the machine
> can only shorten a fixed number of them.
Yes, I stand corrected. I sloppily wrote "string" when I should have written "strings". Specifically, the set of strings collectively cannot be compressed at all. As for each string individually, it is not compressible either, except possibly for trivially rare cases and/or trivial amounts of compression.
> *Any* string has a shorter representation - if I get to
> specify the model *after* you choose the string.
Yes, for one particular string, or for a very small set of strings. You get to choose the model, but you only get to choose once. By the usual counting argument, at most a fraction of about 2^-k of all n-bit strings can be shortened by k or more bits, so you can only compress a trivially small subset of all the output strings, unless we're talking about a trivial degree of compression. In any case, you derive no cryptanalytic advantage from compression based on an ad-hoc model, so you might as well not bother, and just stick to some convenient standard model.
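An easy empirical sanity check (my own illustration, with zlib's DEFLATE playing the role of the "convenient standard model"):

    import os, zlib

    # A fixed-in-advance model cannot shrink strings drawn
    # uniformly at random.
    trials, n = 1000, 4096
    shrunk = 0
    for _ in range(trials):
        s = os.urandom(n)
        if len(zlib.compress(s, 9)) < n:
            shrunk += 1

    print(f"{shrunk}/{trials} random {n}-byte strings got shorter")
    # Expected: 0/1000 -- DEFLATE's framing overhead means
    # essentially no uniformly random string compresses at all.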
> Turning that into a notion useful for cryptography is a bit
> tougher - and in any case, if you want to get at PRNGs, you
> need to get at K/C complexity "with trapdoor information":
> the bit stream may *look* incompressible, but given the
> internal state, it is easily produced.
Isn't that what I said, as quoted above? I said it was conditionally compressible. It should go without saying that knowing the trapdoor information, i.e. the internal state, is a sufficient condition for feasible compressibility.
> More generally: We're talking about "stretching" entropy here.
Well, I can agree that that's what we _should_ be talking about. I like to speak of an SRNG (stretched random number generator) as having a nonzero lower bound on the entropy density of the output, as opposed to a traditional PRNG where the entropy density is asymptotically zero.
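A toy calculation makes the distinction concrete (numbers mine, purely illustrative):

    # A PRNG seeded once with 256 bits of entropy has entropy
    # density 256/N, which goes to zero as the output length N
    # grows.  A generator reseeded with 256 fresh bits per 2**20
    # output bits has density bounded below by 2**-12 bits of
    # entropy per output bit, no matter how long it runs.
    N = 10**9                 # total output bits
    print(256 / N)            # PRNG: 2.56e-07, and still falling
    print(256 / 2**20)        # SRNG lower bound: ~0.000244, constant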
> There are certainly theoretical results in that direction,
> the one usually mentioned being the BBS bit generator, which
> takes k bits of entropy and gives you p(k) (for some
> polynomial p) bits that are polynomially-indistinguishable
> from random bits. But you (a) need some significant work to
> set this up and prove it; (b) BBS generators are slow.
> A simpler approach that does work (in some sense) is: choose
> a random starting value R, and a number k. Compute SHA^i(R)
> for i from 0 to k. Emit these values *backwards*. Then given
> the first k-1 outputs, an attacker cannot determine the next
> one (under the standard hypotheses about SHA).
>
> Unfortunately, this is useless as, say, a key generator: if
> you send me the (k-1)st output for use as a key, I can't
> determine what your *next* key will be - but I can trivially
> read your preceding k-2 sessions.
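For concreteness, that backwards chain looks like this (my own sketch; the names are mine, with SHA-256 standing in for "SHA"):

    import hashlib

    def sha(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def backwards_chain(R: bytes, k: int) -> list[bytes]:
        chain = [R]
        for _ in range(k):
            chain.append(sha(chain[-1]))
        chain.reverse()          # emit SHA^k(R) first, R last
        return chain

    outputs = backwards_chain(b"random seed R", 1000)

    # Security runs in one direction only: output i+1 is a
    # preimage of output i, so it cannot be predicted from
    # outputs 0..i -- but anyone holding output i can recompute
    # ALL earlier outputs just by hashing forward:
    assert sha(outputs[1]) == outputs[0]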
When I wanted to stretch entropy, I implemented the Yarrow
approach, i.e. encrypted counter plus systematic reseeding:
http://www.av8n.com/turbid/paper/turbid.htm#sec-srandom
It's not slow, and appears incomparably stronger than the
SHA^i example. Indeed it does not have any serious weaknesses that I know of. If anybody knows of any flaws, please explain!
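In outline, the construction is something like this (a bare-bones sketch of the encrypted-counter-plus-reseeding idea, not the turbid code itself; it assumes the third-party pyca/cryptography package for AES, and the reseed interval is illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    RESEED_INTERVAL = 2**16      # blocks per key (illustrative)

    class CounterGenerator:
        def __init__(self, seed: bytes):
            self._rekey(seed)

        def _rekey(self, seed: bytes):
            self.key = seed      # 32-byte seed -> 256-bit AES key
            self.counter = 0

        def _block(self) -> bytes:
            # The core step: encrypt the counter under the key.
            enc = Cipher(algorithms.AES(self.key),
                         modes.ECB()).encryptor()
            block = enc.update(self.counter.to_bytes(16, "big"))
            self.counter += 1
            if self.counter >= RESEED_INTERVAL:
                # Systematic reseeding: fold in fresh entropy
                # (from the HESG in turbid; os.urandom here), so
                # a state compromise exposes at most one window.
                self._rekey(os.urandom(32))
            return block

        def read(self, n: int) -> bytes:
            out = bytearray()
            while len(out) < n:
                out += self._block()
            return bytes(out[:n])

    g = CounterGenerator(os.urandom(32))
    key_material = g.read(64)    # e.g. two 256-bit session keys

Unlike the SHA^i chain, a compromise of the current state here exposes at most the current reseed window, in either direction.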
> The idea that revealing just the hash of random bits
> "doesn't reduce the effective entropy" sounds great, but
> it's naive.
We agree it's naive. Indeed it's just wrong, as I've said N times over the last couple of days. And please let's not talk about "effective" entropy. I have no idea what "ineffective" entropy is supposed to be, and I can't imagine any sensible way to distinguish "effective" entropy from plain old entropy.