On Fri, Apr 28, 2017 at 09:42:45AM -0500, Jason Resch wrote:
> On Fri, Apr 28, 2017 at 1:51 AM, Russell Standish <[email protected]>
>
> HAVEGE periodically dumps entropy into /dev/random, when available. Do you
> know what the consumption-rate of your ALife simulation is in relation to
> the introduction of new entropy from HAVEGE?
>
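A rough way to get at the kernel-side numbers behind that question is to
poll /proc/sys/kernel/random/entropy_avail while a run is going. A minimal
sketch (it assumes a Linux /proc interface; the one-second poll and the
sixty-second window are arbitrary), noting that this only shows the net
balance of production minus consumption:

    #!/usr/bin/env python3
    # Watch the kernel's entropy-pool estimate and print the change per
    # second, as a crude view of net entropy production vs consumption.
    import time

    PATH = "/proc/sys/kernel/random/entropy_avail"

    def entropy_bits():
        with open(PATH) as f:
            return int(f.read())

    prev = entropy_bits()
    for _ in range(60):        # watch for a minute; adjust as needed
        time.sleep(1)
        cur = entropy_bits()
        print("pool: %4d bits, delta: %+5d bits/s" % (cur, cur - prev))
        prev = cur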
The consumption rate is much, much higher than the rate at which entropy is
generated, of course. HAVEGE generates pseudorandom numbers, reseeded by
real entropy when available. It's the entropy production rate that is
important for what I'm researching, of course, but it is not clear what
that is.

> > There was some increase in complexity over time in all cases, maybe a
> > little more with HAVEGE, but not statistically significant. This is to
> > be expected - if you have a computational process generating output,
> > then the complexity of the output can be expected to grow at most
> > logarithmically in time. This follows from the compiler theorem: for
> > some computable process M(t), t ∈ N, we have
> >
> > K(M(t)) ≤ K(M) + K(t)
> >
> > where K(M) is the constant complexity of the machine, and K(t) ≤ log₂ t
> >
> Is K Kolmogorov complexity? I tried to look up the "Compiler Theorem" but
> could not find anything that seemed right. Do you have a reference?
>

The classic resource is Li and Vitanyi's book (An Introduction to
Kolmogorov Complexity and Its Applications). Googling "compiler theorem"
turned up this item from Juergen Schmidhuber, a one-time participant of
this list: http://people.idsia.ch/~juergen/locoart/node2.html

> > (This was pointed out in a recent paper I reviewed on open ended
> > evolution.)
> >
> > If, however, a random oracle is available producing r bits of
> > information per timestep, then K(M(t)) is bounded linearly in time. If
> > one can demonstrate genuine linear growth in complexity, then it would
> > be a smoking gun. In the Tierra simulation, it was hard to track where
> > all the information was going - part of it goes into creating new
> > digital organisms, and part into the foodweb structure. I have more
> > recently been able to quantify how much information is in the foodweb.
> >
> This reminds me of Generative Adversarial Networks, a recent breakthrough
> in deep learning where two neural networks face off against each other to
> continue the drive of complexity and learning. For example:
> https://arxiv.org/abs/1612.03242
>

Yes - it's funny that it is considered a recent breakthrough, though. The
idea is implicit in the "Red Queen effect" literature from, well, my
prehistory anyway.

> It seems that if the ALife's purpose was to predict the output of the
> RNG, rather than aspects of the environment, then the complexity of the
> (successful program evolved to predict the output of the RNG) would be
> related to the complexity of the program generating the random bits.
> However, if the ALife has indirect access to an environment seeded by the
> RNG, and the difficulty of surviving in the environment is not strictly
> tied to better predicting the random behavior of the environment, then I
> think there is a disconnect between the complexity of the organisms and
> the complexity of the RNG. Is it even possible, in theory, for the ALife
> in Tierra to collect enough information about the environment to work out
> the state of the RNG? For the Mersenne Twister, for example, one needs to
> see 624 consecutive outputs to efficiently reverse-engineer the internal
> state of the RNG. For cryptographically secure or truly random sources,
> this task is perhaps impossible.
>

It doesn't need to work out the complete state of the MT (though, as the
sketch below shows, that is mechanical once 624 consecutive outputs have
been seen), nor even that an MT is being used. It just needs to pick up and
exploit some patterns in the environment, which evolutionary algorithms are
famous for doing. If you read any of Tom Ray's early papers, you will no
doubt have come across the anecdote where his digital creatures discovered
and exploited a bug in his Tierra code.
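For concreteness, here is roughly what that state recovery looks like
against Python's own Mersenne Twister: undoing the output tempering on 624
consecutive 32-bit outputs recovers the state array word for word, after
which the generator can be cloned. A quick sketch, assuming the observed
outputs start at a state-block boundary (true for a freshly seeded
generator); the helper names are just for illustration:

    import random

    MASK = 0xFFFFFFFF

    def undo_xor_rshift(y, shift):
        # invert y = x ^ (x >> shift)
        x = y
        for _ in range(32 // shift):
            x = y ^ (x >> shift)
        return x

    def undo_xor_lshift_and(y, shift, mask):
        # invert y = x ^ ((x << shift) & mask)
        x = y
        for _ in range(32 // shift):
            x = y ^ ((x << shift) & mask)
        return x & MASK

    def untemper(y):
        # undo MT19937's tempering steps, in reverse order
        y = undo_xor_rshift(y, 18)
        y = undo_xor_lshift_and(y, 15, 0xEFC60000)
        y = undo_xor_lshift_and(y, 7, 0x9D2C5680)
        y = undo_xor_rshift(y, 11)
        return y

    # "victim" generator: observe its first 624 outputs
    victim = random.Random(12345)
    observed = [victim.getrandbits(32) for _ in range(624)]

    # reconstruct the internal state and load it into a clone
    state = [untemper(y) for y in observed]
    clone = random.Random()
    clone.setstate((3, tuple(state) + (624,), None))

    # the clone now predicts the victim exactly
    assert all(clone.getrandbits(32) == victim.getrandbits(32)
               for _ in range(1000))
    print("cloned MT19937 state from 624 outputs; predictions match")

Of course, as noted above, nothing in Tierra needs to do anything like
this - detecting and exploiting weak statistical patterns is a far lower
bar than full state recovery.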
Regarding cryptographically secure generators: if Tierra ever did behave
differently with a cryptographically secure generator than with a true
random source, that would indicate the RNG was not cryptographically secure
after all. Since (as I understand it) these generators are only believed
secure on the strength of unproven hardness assumptions (in effect, P≠NP),
such a result would actually have profound implications for an important
open problem (and for cryptography in general). I am not hopeful that will
ever happen, though - what I found in practice is that ISAAC was so much
slower than MT and HAVEGE that I had trouble gathering enough data to
perform any analysis.

> > What makes it difficult is to get a good handle on how many bits of
> > genuine entropy HAVEGE produces over time, as it depends on how much
> > real world interactivity there is (eg mouse/keyboard interrupts, disk
> > seeks). It may well be the case that with my run, there is an initial
> > burst of entropy that decays over time.
> >
> You may be interested in some relatively recent hardware instructions
> Intel has added to their chips: RDRAND and RDSEED. These are connected to
> a hardware RNG based on thermal noise. RDSEED can produce many MB per
> second of full-entropy bits. RDRAND is faster, but is based on a
> pseudorandom number generator, and is only reseeded every hundred or so
> calls.
>

Interesting! Thanks for the tip. It looks like RDRAND is available on my
current dev CPU (a Broadwell chip), and that /dev/random already exploits
it as an additional source of entropy.

-- 
----------------------------------------------------------------------------
Dr Russell Standish                     Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow         [email protected]
Economics, Kingston University          http://www.hpcoders.com.au
----------------------------------------------------------------------------

