On Fri, Apr 28, 2017 at 1:51 AM, Russell Standish <[email protected]> wrote:
> On Thu, Apr 27, 2017 at 09:23:56PM -0500, Jason Resch wrote:
> > On Thu, Apr 27, 2017 at 9:13 PM, Brent Meeker <[email protected]> wrote:
> > > On 4/27/2017 4:39 PM, Jason Resch wrote:
> > >> If it is possible to make a cryptographically secure pseudorandom
> > >> number generator, then I think this means that a creative process
> > >> that runs in sub-exponential time should demonstrate creativity
> > >> whether it uses a cryptographically secure pseudorandom number
> > >> generator or a true random number generator. Otherwise, the failure
> > >> to demonstrate creativity in one case but not the other could be
> > >> used to differentiate cryptographically secure random number
> > >> generators from truly random sources in sub-exponential time.
> > >
> > > Why would you suppose it easier to distinguish creativity from
> > > non-creativity than random from pseudo-random?
> >
> > If I recall correctly, the hypothesis Russell was testing was "without
> > true randomness, creativity is not possible". How he measured or
> > defined creativity, I am not sure. I think he may have been running an
> > evolutionary simulation.
> >
> > Jason
>
> The work I did on this was reported at an ALife conference in 2004, so
> took place during 2003-4. The results were somewhat inconclusive. No
> signs of open-ended evolution were apparent in any of the 3 cases:
> pseudo-random (Mersenne Twister), cryptographically strong pseudo-random
> (based on the ISAAC generator) and entropy harvesting (the HAVEGE
> generator, which now feeds the Linux device /dev/random). Actually,
> IIRC, that conference paper did not include any of the ISAAC runs.

HAVEGE periodically dumps entropy into /dev/random, when available. Do
you know what the consumption rate of your ALife simulation is in
relation to the introduction of new entropy from HAVEGE?

> There was some increase in complexity over time in all cases, maybe a
> little more with HAVEGE, but not statistically significant. This is to
> be expected - if you have a computational process generating output,
> then the complexity of the output can be expected to grow at most
> logarithmically in time. This follows from the compiler theorem: for a
> computable process M(t), t ∈ N, we have
>
> K(M(t)) ≤ K(M) + K(t)
>
> where K(M) is the constant complexity of the machine, and K(t) ≤ log₂ t.

Is K Kolmogorov complexity? I tried to look up the "compiler theorem"
but could not find anything that seemed right. Do you have a reference?

> (This was pointed out in a recent paper I reviewed on open-ended
> evolution.)
>
> If, however, a random oracle is available producing r bits of
> information per timestep, then K(M(t)) is bounded linearly in time. If
> one can demonstrate genuine linear growth in complexity, then it would
> be a smoking gun. In the Tierra simulation, it was hard to track where
> all the information was going - part of it goes into creating new
> digital organisms, and part into the foodweb structure. I have more
> recently been able to quantify how much information is in the foodweb.

This reminds me of Generative Adversarial Networks, a recent
breakthrough in deep learning in which two neural networks face off
against each other, each driving the other to further complexity and
learning. For example: https://arxiv.org/abs/1612.03242

It seems that if the ALife's purpose were to predict the output of the
RNG, rather than aspects of the environment, then the complexity of a
program that successfully evolved to predict the RNG's output would be
related to the complexity of the program generating the random bits.
However, if the ALife has indirect access to an environment seeded by
the RNG, and the difficulty of surviving in the environment is not
strictly tied to better predicting the random behavior of the
environment, then I think there is a disconnect between the complexity
of the organisms and the complexity of the RNG.

Is it even possible in theory, for the ALife in Tierra, to collect
enough information about the environment to work out the state of the
RNG? For the Mersenne Twister, for example, one needs to see 624
consecutive 32-bit outputs to efficiently reverse-engineer the internal
state of the generator. For cryptographically secure or truly random
sources, this task is perhaps impossible.
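In case it is of interest, here is a toy C sketch of that state-recovery
trick for the standard 32-bit MT19937 (my own illustration, nothing to do
with Tierra's code): the tempering transform is invertible, so untempering
624 consecutive outputs recovers the full 624-word state, after which a
clone predicts the original generator exactly.

/* Toy sketch: cloning MT19937 from 624 observed outputs.
   Build with: gcc -O2 -std=c99 mt_clone.c */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define N 624
#define M 397
#define MATRIX_A   0x9908b0dfUL
#define UPPER_MASK 0x80000000UL
#define LOWER_MASK 0x7fffffffUL

typedef struct { uint32_t mt[N]; int mti; } mt_state;

/* Standard MT19937 seeding. */
static void mt_seed(mt_state *s, uint32_t seed) {
    s->mt[0] = seed;
    for (s->mti = 1; s->mti < N; s->mti++)
        s->mt[s->mti] =
            1812433253UL * (s->mt[s->mti - 1] ^ (s->mt[s->mti - 1] >> 30)) + s->mti;
}

/* Standard MT19937 output tempering. */
static uint32_t temper(uint32_t y) {
    y ^= y >> 11;
    y ^= (y << 7)  & 0x9d2c5680UL;
    y ^= (y << 15) & 0xefc60000UL;
    y ^= y >> 18;
    return y;
}

static uint32_t mt_next(mt_state *s) {
    if (s->mti >= N) {                 /* regenerate the whole block ("twist") */
        for (int i = 0; i < N; i++) {
            uint32_t y = (s->mt[i] & UPPER_MASK) | (s->mt[(i + 1) % N] & LOWER_MASK);
            s->mt[i] = s->mt[(i + M) % N] ^ (y >> 1) ^ ((y & 1u) ? MATRIX_A : 0);
        }
        s->mti = 0;
    }
    return temper(s->mt[s->mti++]);
}

/* Both kinds of tempering step can be undone by repeated re-application. */
static uint32_t undo_rshift_xor(uint32_t y, int sh) {
    uint32_t r = y;
    for (int i = 0; i < 32 / sh + 1; i++) r = y ^ (r >> sh);
    return r;
}
static uint32_t undo_lshift_and_xor(uint32_t y, int sh, uint32_t mask) {
    uint32_t r = y;
    for (int i = 0; i < 32 / sh + 1; i++) r = y ^ ((r << sh) & mask);
    return r;
}
static uint32_t untemper(uint32_t y) {
    y = undo_rshift_xor(y, 18);
    y = undo_lshift_and_xor(y, 15, 0xefc60000UL);
    y = undo_lshift_and_xor(y, 7,  0x9d2c5680UL);
    y = undo_rshift_xor(y, 11);
    return y;
}

int main(void) {
    mt_state victim, clone;
    mt_seed(&victim, (uint32_t)time(NULL));     /* seed unknown to the observer */

    /* Observe 624 consecutive outputs; untempering them recovers the state. */
    for (int i = 0; i < N; i++)
        clone.mt[i] = untemper(mt_next(&victim));
    clone.mti = N;                              /* both will twist on the next call */

    for (int i = 0; i < 5; i++)                 /* the clone now tracks the victim */
        printf("victim %10u  clone %10u\n",
               (unsigned)mt_next(&victim), (unsigned)mt_next(&clone));
    return 0;
}

Nothing comparable should be feasible against a cryptographically secure
generator or a true entropy source.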
> What makes it difficult is to get a good handle on how many bits of
> genuine entropy HAVEGE produces over time, as it depends on how much
> real-world interactivity there is (e.g. mouse/keyboard interrupts, disk
> seeks). It may well be the case that with my run, there is an initial
> burst of entropy that decays over time.

You may be interested in some relatively recent hardware instructions
Intel has added to their chips: RDRAND and RDSEED. These are connected
to a hardware RNG based on thermal noise. RDSEED can produce many MB per
second of full-entropy bits. RDRAND is faster, but is based on a
pseudorandom number generator and is only reseeded every hundred or so
calls. (I have put a minimal C sketch of the intrinsics below my
signature, in case it is useful.)

> One curious thing I did notice when doing the experiments was one day
> I connected a Unix pipe to the /dev/random on another machine, and on
> that other machine connected a pipe back to /dev/random of the initial
> machine. I then wrote the generated random numbers back to disk (which
> were NFS shares). I thought this sort of scheme might harness parallel
> computing to speed up entropy harvesting. What happened was that this
> crossover process generated a million-fold increase in the production
> of numbers and brought the entire cluster to a screaming halt. Since
> this was a national research cluster, I got severely rapped over the
> knuckles and promised never to do it again!

Nice story! :-)

> I have since tried doing this with some personal computer clusters,
> but without any significant speedup in random number generation. Maybe
> it requires a massive workload of parallel scientific computations to
> work :).
>
> Anyway, it is a question that I will return to in the next few years -
> I now have a completely new implementation of Tierra and the
> complexity analysis tools...

I look forward to the results!

Jason
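P.S. In case you want to play with the hardware generator I mentioned,
here is a minimal sketch using the GCC/Clang intrinsics for RDRAND and
RDSEED. It is only an illustration of the instruction interface (the
retry loops are a common precaution, and nothing here is specific to
Tierra). Build with: gcc -O2 -mrdrnd -mrdseed rdrand_demo.c

#include <immintrin.h>
#include <stdio.h>

/* RDSEED pulls from the conditioned entropy source and can fail transiently
   when the source is drained, so retry a bounded number of times. */
static int get_seed64(unsigned long long *out) {
    for (int tries = 0; tries < 100; tries++)
        if (_rdseed64_step(out))
            return 1;
    return 0;
}

/* RDRAND reads the DRBG that is reseeded from that same source; it is faster
   and rarely fails, but a short retry loop is still worthwhile. */
static int get_rand64(unsigned long long *out) {
    for (int tries = 0; tries < 10; tries++)
        if (_rdrand64_step(out))
            return 1;
    return 0;
}

int main(void) {
    unsigned long long seed, rnd;
    if (get_seed64(&seed))
        printf("RDSEED: %016llx\n", seed);
    else
        printf("RDSEED unavailable or temporarily drained\n");
    if (get_rand64(&rnd))
        printf("RDRAND: %016llx\n", rnd);
    else
        printf("RDRAND unavailable\n");
    return 0;
}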

