Lee Corbin writes:
> But in general, what do observer-moments explain? Or what does the
> hypothesis concerning them explain?  I just don't get a good feel
> that there are any "higher level" phenomena which might be reduced
> to observer-moments (I am still very skeptical that all of physics
> or math or something could be reduced to them---but if that is 
> what is meant, I stand corrected). Rather, it always seems like
> a number of (other) people are trying to explain observer-moments
> as arising from the activity of a Universal Dovetailer, or a 
> Platonic ensemble of bit strings, or something.

I would say that observer-moments are what need explaining, rather than
things that do the explaining.  Or you could say that in a sense they
"explain" our experiences, although I think of them more as *being*
our experiences, moment by moment.  As we agreed:

> > An observer-moment is really all we have as our primary experience of
> > the world.  The world around us may be fake; we may be in the Matrix or
> > a brain in a vat.  Even our memories may be fake.  But the fact that we
> > are having particular experiences at a particular moment cannot be faked.
> Nothing could be truer.

That is the sense in which I say that observer-moments are primary;
they are the most fundamental experience we have of the world.
Everything else is only a theory which is built upon the raw existence
of observer-moments.

> > In terms of measure, Schmidhuber (and possibly Tegmark) provides a means
> > to estimate the measure of a universe.  Consider the fraction of all bit
> > strings that create that universe as its measure.
> I think that perhaps I know exactly what is meant; but I'm unwilling
> to take the chance. Let's say that we have a universe U, and now we
> want to find its measure (its share of the mega-multi-Everything
> resources).  So, as you write, we consider all the bit strings
> that create U.  Let's say for concreteness that only five bit strings
> "really exist" in some deep sense:
> 010101110100101010011101010110001010110101...
> 101101110100010101111111001011010110100101...
> 001010100111010100111010001001000010101111...
> 11011101000100100001010l110110000101010011...   
> 110010111010101110100010000101001010011111...
> and then it just so happens that only 2 out of these five actually
> make the universe U manifest. That is, in the innards of 2 of these,
> one finds all the structures that U contains. Am I following so far?

In the Schmidhuber picture, it's not that the strings contain U;
rather, the strings are programs which, when run on some UTM, produce
U as their output.  This corresponds to the concept you mention below,
Kolmogorov complexity.  KC is based on the length of the programs that
output an object (a string, a universe, or any other information-based
entity).  Measure as I am using it is 1/2^KC, where KC is the
Kolmogorov complexity of the object.
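To make the arithmetic concrete, here is a toy sketch (all the program
lengths are hypothetical, just for illustration): the fraction of infinite
random tapes that begin with a particular n-bit program is 1/2^n, so the
measure of U is the sum of 1/2^n over all programs that output U, and the
shortest program dominates that sum.

```python
def prefix_fraction(program_length):
    """Fraction of all infinite bit strings that begin with one
    particular prefix of the given length: 1/2^n."""
    return 2 ** -program_length

# Suppose (hypothetically) three distinct programs output universe U,
# with lengths 10, 12 and 20 bits.
program_lengths = [10, 12, 20]

# Total measure of U: sum of the prefix fractions of its programs.
measure_U = sum(prefix_fraction(n) for n in program_lengths)

# The shortest program dominates the sum, which is why the measure is
# "roughly" 1/2^KC with KC the length of the shortest program.
kc = min(program_lengths)
print(measure_U)   # slightly more than 2**-10
print(2 ** -kc)    # the 1/2^KC approximation
```

The point of the sketch is only that the 1/2^KC formula is an
approximation to a sum over all programs, not an exact identity.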

> > In practice this is roughly 1/2^n where n is the size of the
> > shortest program that outputs that universe.
> So each of these universes (each of the five, in my toy example)
> has a certain Kolmogorov complexity?  Each of the five can be
> output by some program?

Yes, I think this is equivalent to my conception, although when I spoke
of bit strings I was thinking of the inputs to the UTM while you are
talking about the outputs.  But the basic idea is the same.

> But is that program infinite or finite?
> Argument for finite: normally we want to speak of *short* programs
> and so that seems to indicate the program has a limited size.
> Argument for infinite: dramatically *few* bit strings that are
> infinite in length have just a finite amount of information.
> Our infinite level-one Tegmark universe, for example, probably
> is tiled by Hubble volumes in a non-repeating irregular way so
> that no program could output it.

Now I think we are both talking about the inputs to the UTM.  Should
we consider infinite length inputs?

I don't think it is necessary, for three reasons.  First, because of
the way TMs work, in practice a random tape will have only some
specific, finite number of input bits that ever get read.  The chance
of an infinite number of bits being used is zero.  Second, you could
construct tapes which used an infinite number of bits, but they would
be of measure zero and hence would make no detectable contribution to
the actual numeric predictions of the theory.  Third, there are
variants on UTMs which accept only self-delimiting input tapes, whose
lengths are, in effect, determined by the tapes themselves.  Greg
Chaitin's work focuses on the use of self-delimiting programs to
achieve a more precise picture of algorithmic complexity (which is
equivalent to KC).  The lengths of such programs are inherently
finite, and these UTMs are equivalent in power to all others.
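Here is a minimal sketch of what "self-delimiting" means.  This is my
own toy encoding (a unary length header), not Chaitin's actual
construction, but it shows the key property: the program itself tells
the machine where it ends, so its length is determined from the tape
alone and is necessarily finite.

```python
def encode(program_bits):
    """Make a program self-delimiting by prefixing it with its length
    in unary: n ones, then a zero, then the n program bits."""
    n = len(program_bits)
    return "1" * n + "0" + program_bits

def decode(tape):
    """Read one self-delimiting program off the front of an arbitrarily
    long tape; the bits after it are never examined."""
    n = tape.index("0")             # unary length header ends at first 0
    start = n + 1
    return tape[start:start + n]    # exactly n program bits, no more

# Whatever follows the program on the tape is ignored by the machine.
tape = encode("10110") + "0101010101"
print(decode(tape))  # prints "10110"
```

A prefix-free code like this is what lets the 1/2^n weights of all valid
programs sum to at most one, which is what makes "measure" well defined.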

Note that you could, I think, create an infinite universe even using a
finite tape.  I believe that our universe, even if infinite in Tegmark's
level-one sense, could be output by a finite program, at least in an
MWI model.  The amount of information in such a universe is roughly zero;
all of the order that we see around us is due to splitting of universes.
See Tegmark's paper, http://space.mit.edu/home/tegmark/nihilo.html .
