--- On Wed, 2/11/09, Stathis Papaioannou <[email protected]> wrote:
> Well, this seems to be the real point of disagreement between you and the
> pro-QI people. If I am one of the extra versions and die overnight, but the
> original survives, then I have survived. This is why there can be a many to
> one relationship between earlier and later copies. If you don't agree with
> this then you should make explicit your theory of personal identity.
It is close to the point, but there is room for a misunderstanding so I have to
be careful. Here I am consolidating replies to some of the branched post
threads and will present some thought experiments.
On personal identity:
As I explained, there are several possible definitions of personal identity,
and the most useful ones are 1) All branched/fused people are the same person,
2) Causal chains determine identity, and 3) Observer-moments.
This can become confusing because it is not always clear which definition
someone is using, especially if quickly typing out a reply to a tangentially
related post. This can lead to a kind of Hydra-whacking effect: one point is
dealt with, only for another confusion to be created at the same time because
(for example) I did not spell out something that was not the main point at
issue in the post I was responding to.
That was the case recently when some people misconstrued my use of the causal
chain (in terms of "you might die, only the original will survive") as some
kind of crucial point. Causal differentiation applied to the question at hand,
so I used that definition. Anyone who has read my QI paper would know that
I accept that "teleportation is OK" and that measure is what matters, not so
much the original vs. copy issue. I will explain more below on these important
points.
If I had to pick one definition and stick with it, I would go with the least
misleading one, which is an observer-moment.
The important thing to realize is that _definitions don't matter_!
Predictions, decisions, appropriate emotions to a situation - these are
completely independent of definitions of personal identity. Personal identity
is a useful concept in practice but not a fundamental thing, and therefore can
have no fundamental relevance, unlike its misuse in QS thinking where it could
supposedly affect a measure distribution.
On probability:
Bruno Marchal wrote:
> You say: "no randomness involved" but you seem to accept probabilities. Do I
> just miss something here?
Yes, Bruno, you did, though my quickness contributed. In my QI paper I defined
"effective probability" and carefully spelled out the roles it can play. But
again, in posting on tangentially related topics, it is much easier to just say
"probability" and hope that people remember what I am really talking about.
Classically there are two kinds of probability: true randomness, and subjective
uncertainty due to ignorance. I do not believe that the former exists. When
I talk about probability it either involves some ignorance on the part of the
subject (as in the Reflection argument), or the use of "effective probability"
in theory confirmation.
I may get sloppy sometimes (and say probability) when talking about a situation
after an experiment that is yet to be performed, but in thinking about such
cases it is absolutely necessary to remember that in the MWI there is neither
randomness nor subjective ignorance, and that one must use Caring Measure.
On the "first person" slogan:
Any observation is made by the person observing it. In that sense, they are
all "first person".
Truths do not depend on point of view. We do not know the measure
distribution, but we can guess about it, and can study a model for it.
Assuming the model is accurate, it is the distribution of these "first person"
observations.
Calling it a "third person" view is a false charge; an accurate model is not a
view, it is simply the truth. Invoking "first person measure distributions" as
an alternative is an empty slogan.
The real key point at which the QS fallacy appears seems to be that some people
find it inconceivable that they will not have a future. Thus, they assume that
they will survive and only need to take into account effective probabilities
that are conditional on survival. This conditioning is undefined (it requires
a theory of personal identity to specify the condition) and is false by the
definition of the measure distribution.
This can be seen using either causal chains (if a person is defined as a causal
chain, then when the chain ends, so will he) or more generally just in terms of
decreasing measure of observer-moments with age. In the latter case increasing
age is no different from, for example, increasing brightness of your visual
field. There is a sequence of observer-moments in which what you see is more
and more bright, and after some point the measure distribution will decline as
a function of increasing brightness. You can define (or attempt to define) a
conditional effective probability for what else you would experience as a
function of increasing brightness, but I don't see people worrying about it in
the same way they do for increasing age. There is no logical reason to make
such a distinction.
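To make the parallel concrete, here is a small Python sketch. The numbers and
the indexing by "level" are my own illustration (nothing from the post), but
they show how conditioning on survival renormalizes away a declining measure
distribution:

```python
# Illustrative measure distribution over observer-moments, indexed by
# "brightness level" (it could equally well be age). Numbers are made up.
measure = {1: 8.0, 2: 8.0, 3: 4.0, 4: 2.0, 5: 1.0}  # declining tail

total = sum(measure.values())
# Unconditional effective probability of finding yourself at each level:
p = {k: m / total for k, m in measure.items()}

# The QS-style move: condition on "reaching at least level 4".
tail = {k: m for k, m in measure.items() if k >= 4}
p_cond = {k: m / sum(tail.values()) for k, m in tail.items()}

print(round(p[4], 3))       # 0.087 -- most observer-moments are earlier
print(round(p_cond[4], 3))  # 0.667 -- conditioning hides the decline
```

The conditional number looks reassuringly large only because the declining
tail has been renormalized away; that renormalization is exactly the move at
issue.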
Thought experiments:
1) The fair trade
This is the teleportation or Star Trek transporter thought experiment. A
person is disintegrated, while a physically identical copy is created elsewhere.
Even on Star Trek, not everyone was comfortable with doing this. The first
question is: Is the original person killed, or is he merely moved from one
place to another? The second question is, should he be worried?
The answer to the first question depends on the definition of personal
identity. If it is a causal chain, then if the transporter is reliable, the
causal chain will continue. However, if the copy was only created due to
extreme luck and its memory (though coincidentally identical to that of the
original) is not determined by that of the original, then the chain was ended
and a new one started.
The second question is more important.
Since we are considering the situation before the experiment, we have to use
Caring Measure here. The temptation is to skip such complications because
there is no splitting and no change in measure, but skipping it here can lead
to confusion in more complicated situations.
The utility function I'll use is oversimplified for most people, being purely
utilitarian (as opposed to conservatively history-respecting, which might
oppose 'teleportation'), but it will serve.
So if our utility function is U = M Q, where M is the guy's measure (which is
constant here) and Q is his quality of life factor (which we can assume to be
constant), we see that it does not depend on whether or not the teleportation
is done. (In practice, Q should be better afterwards, or there is no reason to
do it.) Therefore it is OK to do it. It is a fair trade.
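In Python terms the bookkeeping is trivial. This is just a sketch under the
assumptions above (U = M Q, with M and Q constant through a reliable
teleportation); the numbers are illustrative:

```python
# U = M * Q from the post; M (measure) and Q (quality of life) are
# assumed constant across a reliable teleportation.
def utility(M, Q):
    return M * Q

M, Q = 1.0, 1.0
u_stay = utility(M, Q)       # do not step into the transporter
u_teleport = utility(M, Q)   # disintegrated here, recreated there
print(u_stay == u_teleport)  # True -- a fair trade: nothing is lost
```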
2) The unfair trade
Now we come to the situation where there are 2 ‘copies’ of a person in the
evening, but one will be removed overnight, leaving just one from then on.
I’ll call this a culling.
I pointed out that in this situation, the person does not know which copy he
is, so subjectively he has a 50% chance of dying overnight. That is true,
using causal chains to define identity, but the objection was raised that
‘since one copy survives, the person survives’ based on the ‘teleportation’
idea that the existence elsewhere of a person with the same memory and
functioning is equivalent to the original person surviving.
So to be clear, we can combine a culling with teleportation as follows: both
copies are destroyed overnight, but elsewhere a new copy is created that is
identical to what the copies would have been like had they survived.
Is it still true that the person has a subjective 50% chance to die overnight?
If causal chains are the definition, then depending on the unreliability of the
teleporter and how it was done, the chance of dying might be more like 100%.
But as we have seen, definitions of personal identity are not important. What
matters is whether the person should be unhappy about this state of affairs; in
other words, whether his utility function is decreased by conducting the
culling.
Using U = M Q, it obviously is decreased, since M is halved and Q is unchanged.
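As a sketch with the same oversimplified U = M Q (illustrative numbers only),
the culling looks like this:

```python
# Same U = M * Q; the culling halves the measure M and leaves Q alone.
def utility(M, Q):
    return M * Q

Q = 1.0
u_evening = utility(2.0, Q)   # two copies in the evening: total measure 2
u_morning = utility(1.0, Q)   # after the culling: measure halved
print(u_morning / u_evening)  # 0.5 -- utility is clearly decreased
```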
So as far as I can see, the only point of contention that might remain is
whether this is a reasonable utility function. That is what the next thought
experiment will address.
3) The Riker brothers
Will T. Riker tried to teleport from his spaceship down to a planet, but due to
a freak storm, there was a malfunction. Luckily he was reconstructed back on
the ship, fully intact, and the ship left the area.
Unknown to those on the ship, a copy of him also materialized on the planet.
He survived, and years later, the two were reunited when Will’s new ship passed
by. Now known as Tom, the copy that was on the planet did not join Star Fleet
but went on to have many adventures of his own, often supporting rebel causes
that Will would not. Will and Tom over their lifetimes played important but
often conflicting roles in galactic events. They married different women and
had children of their own.
The two brothers became very different people, it seems obvious to say –
similar in appearance, but as different as two brothers typically are. It
should be obvious that killing one of them (say, Will) would not be morally OK,
just because Tom is still alive somewhere. Based on functionalism, Will is
just as conscious as any other person; his measure (amount of consciousness) is
not half that of a normal person.
When did they become two people, rather than one? Did it happen as soon as the
first bit of information they received was different? If that were so, then it
would be morally imperative to measure many bits of information to
differentiate your own MWI copies. But no one believes that.
No, what matters is that Riker's measure increased during the accident. As
soon as there were two copies of him – even before they opened their eyes and
saw different surroundings – they were two different (though initially similar)
people and each had his own value. Their later experiences just differentiated
them further.
4) The 'snatch'
Suppose an alien decided to ‘clone’ (copy) a human, much as Riker was copied.
The alien scans Earth and gets the information he needs, then goes back to his
home planet, many light years away. Then he produces the clone.
Does this in any way affect the original on Earth? Is half of his
consciousness suddenly snatched away? It seems obvious to me that it does not.
And if it does not, then measure is what matters, not ‘first person
probabilities’.
5) The groggy mornings
Bob wakes up to his alarm clock at 7:00 every morning. However, he is
always groggy at first, and some of his memory does not ‘come on line’ until
7:05.
During this time, Bob does not remember how old he is. Since there is genuine
subjective uncertainty on his part, he can use the Reflection Argument to guess
his current age. The effective probability of his being a certain age is
proportional to his measure during that year. Thus, we can talk about his
expectation value for his age, the age which he is 90% ‘likely’ to be younger
than, etc.
If he is mortal, then his measure decreases with time, so that his expected age
and so on fall into typical human parameters.
If he were immortal, then his expected age would diverge, and the ‘chance’ that
his age would be normal is 0%. Clearly that is not the case. Thus, having a
long time tail in the measure distribution is not immortality.
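A quick numeric sketch shows the contrast. The two measure distributions
below are my own toy examples (not data): one decays with age, the other
never declines.

```python
# Expected age computed from a measure distribution, as in Bob's
# groggy-morning guess: effective probability of each age is
# proportional to the measure at that age.
def expected_age(measure_at, horizon):
    ages = range(1, horizon + 1)
    weights = [measure_at(a) for a in ages]
    total = sum(weights)
    return sum(a * w for a, w in zip(ages, weights)) / total

mortal = lambda a: 0.99 ** a   # measure decays with age
immortal = lambda a: 1.0       # measure never declines

print(expected_age(mortal, 1000))     # ~100: finite, human-scale
print(expected_age(immortal, 1000))   # ~500: grows without bound as
print(expected_age(immortal, 10000))  # ~5000  the horizon increases
```

For the decaying measure the expectation converges no matter how far out the
horizon is pushed; for the flat one it diverges with the horizon, which is
the sense in which a genuinely immortal measure distribution would make a
normal expected age impossible.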
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---