On Thursday, July 4, 2002, at 09:26  AM, Hal Finney wrote:

> Brent Meeker writes:
>> What's Wolfram's criterion for "randomness"?  How it looks?  It would
>> seem that he's using something different than Chaitin if he thinks a
>> simple system can produce random output.  Chaitin *defines* random as
>> an output that can't be reproduced by a system simpler than the output
>> itself.
>
> That's a good point.  It's curious that Chaitin used the primes as an
> example of mathematical randomness when they must have inherently low
> algorithmic complexity.  A handful of relatively short axioms define the
> integers and the operations of addition, multiplication, etc., and this
> is sufficient to produce the "random" spacing of the primes.

However, the "dual" of "descriptional complexity" is "logical depth": 
the running time of a program to generate some string or set from some 
initial conditions or axioms. An apparently complex system, one which 
apparently has no description shorter than itself, may arise from a 
short description run through a machine or process a number of times. 
Logical depth is the term coined by Charles Bennett, of course.

(Arguably, the universe is such a high logical depth system.)

I say "apparently complex" rather than "complex" to admit this very 
possibility: that an object with seemingly no shorter description of 
itself _than_ itself ("random" in Kolmogorov-Chaitin sense) actually has 
a much shorter description.

Examples of this abound. An apparently random string may be a very 
simple string run through an encryption process. An apparently random 
string may have a very short description once someone spots the pattern.
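
(A quick Python sketch of the encryption case, just to make it concrete. 
The toy XOR-with-a-hashed-keystream cipher and the particular byte 
strings are mine, purely for illustration:)

    import hashlib, zlib

    plain = b"0" * 4096                # trivially compressible: very low complexity
    key = b"short key"                 # a short secret

    # "encrypt" by XORing with a SHA-256-derived keystream (a toy stream cipher)
    stream = b""
    counter = 0
    while len(stream) < len(plain):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    cipher = bytes(p ^ s for p, s in zip(plain, stream))

    print(len(zlib.compress(plain)))   # tiny: the repetition is obvious
    print(len(zlib.compress(cipher)))  # roughly 4096: looks incompressible, "random"

Anyone holding the short key can regenerate the "random" string exactly; 
anyone without it sees only incompressible noise.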

A DNA string coding for some organism which has evolved over billions of 
generations has "packed" a lot into a fixed-length string. Each 
twiddling of the string through mutation and crossover, followed by each 
test of reproductive fitness, is effectively a computation. The "spring" 
of this string is "compressed" with more and more energy. The DNA string 
for an organism has a lot of "energy" or "cleverness" (viewed in 
different ways). The string length remains the same, more or less, and 
naive calculations of entropy (which are always simplistic) give 
essentially the same measure. But the logical depth is different, and 
that matters a lot.
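
(If you want to see the "each twiddle is a computation" point in 
miniature, here's a toy Python sketch. The bit-matching fitness function 
and the parameters are purely illustrative stand-ins, not a model of real 
biology:)

    import random

    LENGTH, POP, GENERATIONS = 64, 50, 500
    target = [random.randint(0, 1) for _ in range(LENGTH)]   # stand-in for a "fit" genome

    def fitness(g):                    # how many sites match the hypothetical optimum
        return sum(a == b for a, b in zip(g, target))

    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for gen in range(GENERATIONS):     # every generation packs in more computation
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(LENGTH)         # crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(LENGTH)] ^= 1   # mutation
            children.append(child)
        pop = parents + children

    print(max(fitness(g) for g in pop), "of", LENGTH)

The string length never changes; what accumulates, generation after 
generation, is the computation standing behind the final string.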

We can say "the set of primes is not random, but has high logical 
depth." It will take a certain amount of energy to generate/compute the 
primes up to some number.
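
(A few lines of Python show the asymmetry: the description stays tiny no 
matter how far out you push the primes, but the running time grows:)

    def primes_up_to(n):
        # Sieve of Eratosthenes: a tiny description of an arbitrarily long output
        sieve = [True] * (n + 1)
        sieve[:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                for m in range(p * p, n + 1, p):
                    sieve[m] = False
        return [i for i, flag in enumerate(sieve) if flag]

    print(primes_up_to(100))    # 2 3 5 7 11 13 ... with their "random-looking" gaps

The program stays the same handful of lines however far out you push the 
limit; what grows is the running time, i.e., the logical depth.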

I think Chaitin is making a point about the set of primes being a 
readily-visualizable example of something which has a seemingly random 
structure but with high logical depth. This is what a lot of cellular 
automata, e.g., LIFE, have been showing us.

Simple initial conditions + Long running times ---> Complex structures 
with no simpler descriptions than themselves

(unless we know the initial conditions and algorithms, in which case we 
can do the computation ourselves)
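
(A minimal Python sketch of exactly this, using Wolfram's elementary 
rule 30 started from a single live cell. The width, step count, and 
wraparound boundary are arbitrary choices of mine:)

    WIDTH, STEPS, RULE = 79, 40, 30        # rule 30: the classic "random-looking" CA
    row = [0] * WIDTH
    row[WIDTH // 2] = 1                    # simple initial condition: a single live cell

    for _ in range(STEPS):                 # the long running time does the rest
        print("".join("#" if c else " " for c in row))
        row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                         + 2 * row[i]
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

The program is a few lines long; the triangle of structure it prints has 
no obviously shorter description unless you already hold those few lines, 
which is the parenthetical point above.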

Aside: Trying to compute the initial conditions from a final 
configuration is difficult. This was dealt with in some of Wolfram's 
earlier papers, collected together in his World Scientific book 
"Cellular Automata."


--Tim May
"Stupidity is not a sin, the victim can't help being stupid.  But 
stupidity is the only universal crime;  the sentence is death, there is 
no appeal, and execution is carried out automatically and without pity." 
--Robert A. Heinlein
