Bruno Marchal writes:
On 10 Dec 2005, at 13:24, Stathis Papaioannou wrote:
In addition to the above arguments, consider the problem from the point of
view of the subject. If multiple copies of a person are created and run in
parallel for a period, what difference does this make to his experience?
It seems to me that there is no test or experiment the person could do
which would allow him to determine if he is living in a period of high
measure or low measure.
Determine it with certainty? In that case I agree with you. But we can
still make "sure bets".
Take the iterated self-duplication (thought) experiment: you are "read",
"cut", and then "pasted" into two identical rooms, except that one has a 1
drawn on the wall where the other has a 0.
Then "each of you" does it again, and again.
After 64 duplications you stop. The vast majority of the 2^64 "yous" will
confirm that they bet on their normality. Normal experience here is
guaranteed by the incompressibility of most bit sequences (provable by a
simple combinatorial analysis).
This is equivalent to betting on the halving of the intensity of a beam of
x-polarized photons going through a y analyser, in Everett's QM.
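The combinatorial claim above can be checked directly. A minimal sketch (the ±8 window counting "roughly half 1s" as normal is my own illustrative choice, not from the post):

```python
# Check that most 64-bit outcome sequences in the iterated
# self-duplication experiment are "normal", i.e. contain roughly
# half 0s and half 1s.
from math import comb

n = 64
total = 2 ** n  # one sequence per "you" after 64 duplications

# Count sequences whose number of 1s lies within n/2 +/- 8,
# i.e. whose relative frequency of 1s is within 0.125 of 1/2.
near_half = sum(comb(n, k) for k in range(24, 41))

print(near_half / total)  # ~0.97: the overwhelming majority are "normal"
```

Tightening the window shrinks the fraction, but by the standard binomial concentration argument it stays close to 1 for any fixed relative tolerance as the number of duplications grows.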
What I meant above was that the presence of parallel copies per se cannot
directly change the quality of the first person experience of any of the
copies. It may be possible to infer the presence of other copies by indirect
means; for example, in a closed system a high measure period may be
characterised by faster oxygen consumption.
I'm not sure what you mean by "[the] vast majority of the 2^64 "yous" will
confirm that they bet on their normality", but I'm guessing that you are
referring to the idea that if you bet on being sampled from the high
measure group rather than the low measure group, you are more likely to be
right.
This method has its problems. Consider this thought experiment which I
proposed a few months ago:
You find yourself alone in a room with a light that alternates red/green
with a period of one minute. A letter in the room informs you that every
other minute, 10^100 copies of you are created and run in parallel for one
minute, then shut down. The transition between the two states (low measure/
high measure) corresponds to the change in the colour of the light, and
your task is to guess which colour corresponds to which state.
The problem is, whether the light is red or green, you could argue that you
are vastly more likely to be sampled from the 10^100 group. You might decide
to say that *both* red and green correspond to the larger group, because if
you say this, 10^100 copies in the multiverse will be correct and only one
copy will be wrong. But clearly, this tyranny of the majority strategy
brings you no closer to the truth. If you tossed a coin, at least you would
have a 1/2 chance of being right.
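The arithmetic behind the "tyranny of the majority" point can be made explicit. A sketch (the per-cycle accounting and exact-fraction representation are my own framing, not from the post):

```python
# Compare the "tyranny of the majority" strategy with a coin toss
# in the red/green light thought experiment.
from fractions import Fraction

HIGH = 10 ** 100  # copies running during a high-measure minute
LOW = 1           # copies running during a low-measure minute

# Strategy 1: every copy declares "I am in the high-measure state".
# Over one red/green cycle, HIGH copies are right and LOW are wrong,
# so the fraction of correct copies is nearly 1 -- yet no copy has
# learned which colour corresponds to which state.
majority_accuracy = Fraction(HIGH, HIGH + LOW)

# Strategy 2: each copy tosses a fair coin to map colours to states.
# Each copy has a genuine 1/2 chance of guessing the mapping correctly.
coin_accuracy = Fraction(1, 2)

print(float(majority_accuracy))  # 1.0 to double precision
print(float(coin_accuracy))      # 0.5
```

The contrast is that majority_accuracy measures how many copies were right about their own measure, while the stated task is to identify the colour-to-state mapping, for which only the coin toss yields a well-defined success probability.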
If an OM is the smallest discernible unit of conscious experience, it
therefore seems reasonable to treat multiple instantiations of the same OM
as one OM.
OK, but with comp I have argued that OMs are not primitive but are
"generated", in Platonia, by the Universal Dovetailer. A 3-OM is just a
UD-accessible state, and the 1-OMs inherit relative probabilities from the
computer-science-theoretical structuring of the 3-OMs.
It is the 1-person/3-person distinction which forces, I think, the
relativity or conditionality of the measure. There is no a priori means of
knowing whether we are, just now, in a Harry Potter (abnormally
informative) type of OM, but we can always bet that our next OMs will
belong to the set of their most normal continuators (probably the product
of long (deep) computations, with stability under dovetailing on the reals,
or noise).
Are OMs directly generated by the UD, or does the UD generate the
(apparently) physical universe, which leads to the evolution of conscious
beings, who then give rise to OMs?