Some time back Lee Corbin posed the question of which was more
fundamental: observer-moments or universes?  I would say, with more
thought, that observer-moments are more fundamental in terms of explaining
the subjective appearance of what we see, and what we can expect.
An observer-moment is really all we have as our primary experience of
the world.  The world around us may be fake; we may be in the Matrix or
a brain in a vat.  Even our memories may be fake.  But the fact that we
are having particular experiences at a particular moment cannot be faked.

But the universe is fundamental, in my view, in terms of the ontology,
the physical reality of the world.  Universes create and contain observers
who experience observer-moments.  This is the Schmidhuber/Tegmark model.
(I think Bruno Marchal may invert this relationship.)

In terms of measure, Schmidhuber (and possibly Tegmark) provides a means
to estimate the measure of a universe.  Consider the fraction of all bit
strings that create that universe as its measure.  In practice this is
roughly 1/2^n where n is the size of the shortest program that outputs
that universe.  The Tegmark model may allow for similar reasoning,
applied to mathematical structures rather than computer programs.
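To make the counting argument concrete, here is a toy sketch (my own illustration, not Schmidhuber's formalism; the function name is mine): an n-bit program is a prefix of a fraction 1/2^n of all bit strings, so measure falls off exponentially with program length.

```python
# Toy sketch: the fraction of infinite bit strings that begin with a
# given n-bit program is 1/2^n, so shorter programs dominate the measure.
from fractions import Fraction

def program_measure(n_bits):
    """Measure of a universe whose shortest generating program is n_bits long."""
    return Fraction(1, 2 ** n_bits)

# A universe with a 10-bit program outweighs a 20-bit one by a factor of 1024.
assert program_measure(10) / program_measure(20) == 1024
```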

Now, how to get from universe measure to observer-moment (OM) measure?
This is what I want to write about.

First, the measure of an OM should be the sum of contributions from
each of the universes that instantiate that OM.  Generally there are
many possible universes that may create or contain a particular OM.
Some are variants of our own, in which things we have not yet
observed are different.  For example, a universe which is like ours except
for some minor change in a galaxy billions of light years away could
contain a copy of us experiencing the same OMs.  Even bigger changes
may not matter; for example if you flip a coin but haven't yet looked
at the result, this may not change your OM.  Then there are even more
drastic universes, like The Matrix where we are living in a simulation
created in some kind of future or alien world.

Perhaps the most extreme case is a "universe" which only creates that OM.
Think of it as a universe which only exists for a brief moment and
which only contains a brain, or a computer or some such system, which
contains the state associated with that OM.  This is the "brain in a
vat" model taken to the most extreme, where there isn't anything else,
and there isn't even a vat, there is just a brain.  We would hope,
if our multiverse models are going to amount to anything, that such
universes would only contribute a small measure to each of our OMs.
Otherwise the AUH (the all-universe hypothesis) can't explain what we see.

But all of these universes contribute to the measure of our OMs.
We are living in all of them.  The measure of the OM is the sum of the
contribution from each universe.
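As a sketch of this summation (the contribution values are purely hypothetical stand-ins):

```python
from fractions import Fraction

def om_measure(contributions):
    """Total measure of an observer-moment: the sum of the contributions
    from every universe that instantiates it."""
    return sum(contributions, Fraction(0))

# Hypothetical contributions: a familiar universe, a Matrix-style
# simulation, and a bare 'brain-only' universe.  If the model works,
# the familiar one dominates the sum.
contribs = [Fraction(1, 2 ** 500), Fraction(1, 2 ** 5000), Fraction(1, 2 ** 50000)]
total = om_measure(contribs)
assert contribs[0] / total > Fraction(99, 100)
```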

However, and here is the key point, the contribution to an OM from a
universe cannot just be taken as equal to the measure of that universe.
Otherwise we reach some paradoxical conclusions.  For one thing,
a universe may instantiate a particular OM more than once.  What do
we do in that case?  For another, intuitively it might seem that the
contribution of a universe to an OM should depend to some extent on how
much of the universe's resources are devoted to that OM.  An enormous
universe which managed to instantiate a particular OM in some little
corner might be said to contribute less of its measure to that OM than
if a smaller universe instantiates the same OM.

The most extreme case is a trivial universe (equivalently, a program,
in Schmidhuber terms) which simply counts.  It outputs 1, 2, 3, 4, ...
forever.  This is a small program and has large measure.  At some point
it will output a number corresponding to any given OM.  Should we count
the entire measure of this small program (one of the smallest programs
that can exist) to this OM?  If so, it will seem that for every OM we
should assume that we exist as part of such a counting program, which
is another variant on the brain-in-a-vat scenario.  This destroys the
AUH as a predictive model.
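The counting program itself is trivial to write down; the catch, as a small sketch shows, is that locating a given OM in its output requires knowing the OM's number in full (the bit pattern below is a tiny stand-in for a vastly larger OM encoding):

```python
from itertools import count, islice

def counting_universe():
    """The trivial 'universe' program that outputs 1, 2, 3, 4, ... forever."""
    yield from count(1)

# Any bit pattern, read as a number, eventually appears in the output --
# but finding it requires specifying the number itself, i.e. the whole OM.
om_as_number = 0b101101  # stand-in for a (vastly larger) OM encoding
outputs = list(islice(counting_universe(), om_as_number))
assert om_as_number in outputs
```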

Years ago Wei Dai on this list suggested a better approach.  He proposed
a formula for determining how much of a universe's measure contributes to
an OM that it instantiates.  It is very specific and also illustrates
some problems in the rather loose discussion so far.  For example,
what does it really mean to instantiate an OM?  How would we know if a
universe is really instantiating a particular OM?  Aren't there fuzzy
cases where a universe is only "sort of" instantiating one?  What about
the longstanding problem that you can look at the atomic vibrations in
a crystal, select a subset of them to pay attention to, and have that
pattern match the pattern of any given OM?  Does this mean that every
crystal instantiates every OM?  (Hans Moravec sometimes seems to say yes!)

To apply Wei's method, first we need to get serious about what is an OM.
We need a formal model and description of a particular OM.  Consider, for
example, someone's brain when he is having a particular experience.  He is
eating chocolate ice cream while listening to Beethoven's 5th symphony,
on his 30th birthday.  Imagine that we could scan his brain with advanced
technology and record his neural activity.  Imagine further that with the
aid of an advanced brain model we are able to prune out the unnecessary
information and distill this to the essence of the experience.  We come
up with a pattern that represents that observer moment.  Any system which
instantiates that pattern genuinely creates an experience of that observer
moment.  This pattern is something that can be specified, recorded and
written down in some form.  It probably involves a huge volume of data.

So, now that we have a handle on what a particular OM is, we can more
reasonably ask whether a universe instantiates it.  It comes down to
whether it produces and contains that particular pattern.  But this may
not be such an easy question.  It could be that the "raw" output format of
a universe program does not lend itself to seeing larger scale patterns.
For example, in our own universe, the raw output would probably be at
the level of the Planck scale, far, far smaller than an atomic nucleus.
At that level, even a single brain neuron would be the size of a galaxy.
And the time for enough neural firings to occur to make up a noticeable
conscious experience would be like the entire age of the universe.
It will take considerable interpretation of the raw output of our
universe's program to detect the faint traces of an observer moment.

And as noted above, an over-aggressive attempt to hunt out observer
moments will find false positives, random patterns which, if we are
selective enough, happen to match what we are looking for.

Wei proposed to solve both of these problems by introducing an
interpretation program.  It would take as its input the output of the
universe-creation program.  It would then output the observer moment in
whatever formal specification format we had decided on (the exact format
will not be significant).
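A toy sketch of the two-stage scheme (the marker format, function names, and string-based "universe" are mine, purely for illustration):

```python
def universe_program():
    """Stand-in universe program: emits a raw low-level state string."""
    return "noise-noise-[OM:chocolate+beethoven]-noise"

def interpretation_program(raw_state):
    """Stand-in interpreter: locates and extracts the OM specification
    from the universe's raw output (here, by a trivial marker search)."""
    start = raw_state.index("[OM:")
    end = raw_state.index("]", start)
    return raw_state[start + 4:end]

assert interpretation_program(universe_program()) == "chocolate+beethoven"
```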

So how would this program work, in the case of our universe?  It would
have encoded in it the location in space and time of the brain which
was experiencing the OM.  It would know the size of the brain and the
spatial distribution of its neurons.  And it would know the faint traces
and changes at the Planck scale that would correspond to neural firings
or pauses.  Based on this information, which is encoded into the program,
it would run and output the results.  And that output would then match
the formal encoding of the OM.

Now, Wei applies the same kind of reasoning that we do for the measure
of the Schmidhuber ensemble itself.  He proposes that the size of the
interpretation program should determine how much of the universe's measure
contributes to the OM.  If the interpretation program is relatively small,
that is evidence that the universe is making a strong contribution to
the OM.  But if the interpretation program is huge, then we would say
that little of the universe's measure should go into the OM.

In the most extreme case, the interpretation program could just encode the
OM within itself, ignore the universe state and output that data pattern.
In effect that is what would have to be done in order to find an OM
within a crystal as described above.  You'd have to have the whole OM
state in the program since the crystal doesn't actually have any real
relationship to the OM.  But that would be an enormous interpretation
program, which would deliver only a trivial measure.

For a universe like our own, the hope and expectation is that the
interpretation program will be relatively small.  Such a program takes
the entire universe as input and outputs a particular OM.  I did some
back of the envelope calculations and you will probably be amazed that
I estimate that such a program could be less than 1000 bits in size.
(This is assuming the universe is roughly as big as what is visible, and
neglecting the MWI.)  Compared to the information in an OM, which I can't
even guess but will surely be at least gigabytes, this is insignificant.
Therefore we do have strong grounds to say that the universe which
appears real is in fact making a major contribution to our OMs.
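Here is the flavor of that back-of-envelope calculation (the input numbers are my own rough, order-of-magnitude assumptions, not necessarily the ones I used originally):

```python
import math

# Rough order-of-magnitude inputs (illustrative assumptions).
universe_radius_m = 4.4e26          # comoving radius of the visible universe
universe_volume = (4 / 3) * math.pi * universe_radius_m ** 3
brain_volume = 1.4e-3               # ~1.4 litres
universe_age_s = 4.3e17
om_duration_s = 0.1                 # a noticeable conscious moment

space_bits = math.log2(universe_volume / brain_volume)   # a few hundred bits
time_bits = math.log2(universe_age_s / om_duration_s)    # a few dozen bits

# Localizing a brain-sized region at a moment in time takes a few
# hundred bits, comfortably under the ~1000-bit estimate.
assert space_bits + time_bits < 1000
```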

To be specific, Wei's idea was to count the measure of a universe's
contribution to an OM as 1/2^(n+m), where n is the size of the program
that creates the universe, and m is the size of the interpretation
program that reads the output of the first program, and outputs the OM
specification from that.  In effect, you can think of the two programs
together as a single program which outputs the formal spec of the OM,
and ask what are the shortest ways to do that.  In this way you can
actually calculate the measure of an OM directly without even looking at
the intermediate step of calculating a universe.  But I prefer thinking of
the two step method as it gives us a handle on such concepts as whether
we are living in the Matrix or as a brain in a vat.
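The formula itself is simple enough to sketch directly (the bit counts below are illustrative stand-ins, not real estimates):

```python
from fractions import Fraction

def om_contribution(n_universe_bits, m_interp_bits):
    """Wei Dai's proposal: a universe program of n bits, read through an
    interpretation program of m bits, contributes 1/2^(n+m) to the OM."""
    return Fraction(1, 2 ** (n_universe_bits + m_interp_bits))

# A modest universe program plus a small interpreter beats a tiny
# 'counting' universe whose interpreter must hard-code the entire OM.
real_world = om_contribution(n_universe_bits=10_000, m_interp_bits=1_000)
counting = om_contribution(n_universe_bits=100, m_interp_bits=1_000_000)
assert real_world > counting
```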

Overall I think this is a very attractive formulation.  It's quantitative,
and it gives the intuitively right answer for many cases.  The counting
program contributes effectively no measure, because the only way we can
find an OM is by encoding the whole thing in the interpretation program.
And as another example, if there are multiple OMs instantiated by a
particular universe, that will allow the interpretation program to be
smaller because less information is needed to localize an OM.  It also
implies that small universes will devote more of their measure to OMs
that they instantiate than large ones, which basically makes sense.

There are a few unintuitive consequences, though, such as that large
instantiations of OMs will have more measure than small ones, and likewise
slow ones will have more measure than fast ones.  This is because in each
case the interpretation program can be smaller if it is easier to find the
OM in the vastness of a universe, and the slower and bigger an OM is the
easier it is to find.  I am inclined to tentatively accept these results.
It does imply that the extreme future vision of some transhumanists,
to upload themselves to super-fast, super-small computers, may greatly
reduce their measure, which would mean that it would be like taking a
large chance of dying.

There is one big problem with the approach, though, which I have not yet
solved.  I wrote above that a very short program could localize a given OM
within our universe.  It only takes ~300 bits to locate a brain (i.e. a
brain-sized piece of space)!  However this neglects the MWI.  If we take
as our universe-model a world governed by the MWI, it is exponentially
larger than what we see as the visible universe.  Every decoherence-time,
the universe splits.  That's like picoseconds, or nanoseconds at best.
The number of splittings since the universe was created is vast, and
the size of the universe is like 2 (or more!) to that power.

Providing the information to localize a particular OM within the vastness
of a universe governed by the MWI appears to be truly intractable.
Granted, we don't necessarily have to narrow it down to an exact branch,
but unless there are tremendous amounts of de facto convergent evolution
after splits, it seems to me that the percentage of quantum space-time
occupied by a given OM is far smaller than the 1/2^1000 I would estimate
in a non-MWI universe.  It's more like 1/2^2^100.  At that rate the
interpretation program to find an OM would be much *bigger* than the one
that just hard-codes the OM itself.  In short, it would appear that an MWI
universe cannot contribute significant measure to an OM, under this model.
That's a serious problem.
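The scale of the problem can be seen with log-scale arithmetic alone (the decoherence time is a rough assumption; the actual numbers are unrepresentable):

```python
# Log-scale arithmetic only -- the branch counts themselves are unrepresentable.
universe_age_s = 4.3e17
decoherence_time_s = 1e-12          # 'picoseconds' per splitting (rough guess)
splittings = universe_age_s / decoherence_time_s   # ~10^29 branch events

# With ~2^splittings branches, naming one branch takes roughly as many
# bits as there were splittings: ~10^29 bits, dwarfing the mere
# gigabytes needed to hard-code the OM outright.
bits_to_name_branch = splittings
om_hardcode_bits = 8 * 10 ** 9      # ~1 GB OM specification
assert bits_to_name_branch > om_hardcode_bits
```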

So there are a couple of possible solutions to this problem.  One is to
reject the MWI on these grounds.  That's not too attractive; this line of
argument is awfully speculative for such a conclusion.  Also, creating a
program for a non-MWI universe requires a random number generator, which
is an ugly kludge and implies that quantum randomness is algorithmic
rather than true, a bizarre result.  A more hopeful possibility is that
there will turn out to be structure in the MWI phase space that will
allow us to localize OMs much more easily than the brute force method
I assumed above.  I have only the barest speculations about how that
might work, to which I need to give more thought.

But even with this problem, I think the overall formulation is the
best I have seen in terms of grappling with the reality of a multiverse
and addressing the issue of where we as observers fit into the greater
structure.  It provides a quantitative and approximable measure which
allows us to calculate, in principle, how much of our reality is as it
appears and how much is an illusion.  It answers questions like whether
copies contribute to measure.  And it provides some interesting and
surprising predictions about how various changes to the substrate
of intelligence (uploading to computers, etc.) may change measure.
In general I think Wei Dai's approach is the best foundation for
understanding the place of observers within the multiverse.

Hal Finney
