Jesse Mazer writes:
> Hal Finney wrote:
> >However, I prefer a model in which what we consider equally likely is
> >not patterns of matter, but the laws of physics and initial conditions
> >which generate a given universe.  In this model, universes with simple
> >laws are far more likely than universes with complex ones.
> Why? If you consider each possible distinct Turing machine program to be 
> equally likely, then as I said before, for any finite complexity bound there 
> will be only a finite number of programs with less complexity than that, and 
> an infinite number with greater complexity, so if each program had equal 
> measure we should expect the laws of nature are always more complex than any 
> possible finite rule we can think of. If you believe in putting a measure on 
> "universes" in the first place (instead of a measure on first-person 
> experiences, which I prefer), then for your idea to work the measure would 
> need to be biased towards smaller program/rules, like the "universal prior" 
> or the "speed prior" that have been discussed on this list by Juergen 
> Schmidhuber and Russell Standish (I think you were around for these 
> discussions, but if not see 
> and 
> for more details)

No doubt I am reiterating our earlier discussion, but I can't easily find
it right now.  I claim that the universal measure is equivalent to the
measure I described, where all programs are equally likely.

Feed a UTM an infinite-length random bit string as its program tape.
It will execute only a prefix of that bit string.  Let L be the length
of that prefix.  The remainder of the bits are irrelevant, as the UTM
never gets to them.  Therefore all infinite-length bit strings which
start with that L-bit prefix represent the same (L-bit) program and will
produce precisely the same UTM behavior.

Therefore a UTM running a randomly chosen program tape will execute any
particular program of length L bits with probability 1/2^L.  Executing a
random bit string on a UTM automatically leads to the universal
distribution.  Simpler programs are inherently more likely, QED.
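The prefix argument above can be illustrated with a small sketch.  (The
"machine" here is just prefix matching against one hypothetical 3-bit
program; an actual UTM is assumed rather than implemented.)  Every random
tape whose first L bits match the program runs that program, so the
program's probability under a uniform random tape should come out near
1/2^L:

```python
import random

# Feed a machine a uniformly random bit tape.  If a program occupies
# exactly the first L bits, every tape sharing that L-bit prefix runs
# the same program, so the program's probability is 1/2^L.

random.seed(0)
L = 3
program = (1, 0, 1)           # a hypothetical 3-bit program
TRIALS = 100_000

hits = sum(
    1 for _ in range(TRIALS)
    if tuple(random.getrandbits(1) for _ in range(L)) == program
)

print(hits / TRIALS)          # close to 1 / 2**L == 0.125
```

With more trials the observed frequency converges on 2^-L, which is
exactly the weight the universal distribution assigns an L-bit program.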

> If the "everything that can exist does exist" idea is true, then every 
> possible universe is in a sense both an "outer universe" (an independent 
> Platonic object) and an "inner universe" (a simulation in some other 
> logically possible universe).

This is true.  In fact, this may mean that it is meaningless to ask
whether we are an inner or outer universe.  We are both.  However it
might make sense to ask what percentage of our measure is inner vs outer,
and as you point out to consider whether second-order simulations could
add significantly to the measure of a universe.

> If you want a measure on universes, it's 
> possible that universes which have lots of simulated copies running in 
> high-measure universes will themselves tend to have higher measure, perhaps 
> you could bootstrap the global measure this way...but this would require an 
> answer to the question I keep mentioning from the Chalmers paper, namely 
> deciding what it means for one simulation to "contain" another. Without an 
> answer to this, we can't really say that a computer running a simulation of 
> a universe with particular laws and initial conditions is contributing more 
> to the measure of that possible universe than the random motions of 
> molecules in a rock are contributing to its measure, since both can be seen 
> as isomorphic to the events of that universe with the right mapping.

We have had some discussion of the implementation problem on this list,
around June or July, 1999, with the thread title "implementations".

I would say the problem is even worse, in a way: not only can't we tell
when one universe simulates another, we also can't be certain (for the
same reason) whether a given program produces a given universe.  On its
face, this inability undercuts the entire Schmidhuberian proposal of
identifying universes with programs.

However I believe we have discussed on this list an elegant way to
solve both of these problems, so that we can in fact tell whether a
program creates a universe, and whether a second universe simulates the
first universe.  Basically you look at the Kolmogorov complexity of a
mapping between the computational system in question and some canonical
representation of the universe.  I don't have time to write more now
but I might be able to discuss this in more detail later.
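Kolmogorov complexity is uncomputable, so any concrete version of this
idea has to use a proxy.  Here is a rough sketch under that assumption,
using compressed length as the stand-in: the description of a systematic
mapping between a computation and a canonical representation should
compress well, while an arbitrary lookup-table mapping (the kind needed
to read a universe into a rock's molecular motions) should not.  The
states and mappings below are made-up illustrations, not anyone's actual
proposal:

```python
import random
import zlib

def proxy_complexity(description: bytes) -> int:
    """Compressed length as a rough stand-in for Kolmogorov complexity."""
    return len(zlib.compress(description, level=9))

# A systematic mapping: state i of the simulation -> state i of the
# canonical universe.  Its description is highly regular.
systematic = ";".join(f"{i}->{i}" for i in range(1000)).encode()

# An arbitrary mapping: state i -> an unrelated, effectively random
# value.  Its description is essentially an incompressible lookup table.
random.seed(1)
arbitrary = ";".join(
    f"{i}->{random.getrandbits(32)}" for i in range(1000)
).encode()

print(proxy_complexity(systematic), proxy_complexity(arbitrary))
```

On this proxy the systematic mapping compresses to far fewer bytes than
the arbitrary one, which is the intuition behind saying the rock does
not "really" contain the simulation: the mapping that would make it do
so carries all the complexity itself.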

Hal Finney
