Jacques Mallah writes:
>     The problem comes when some people consider death in this context.  I'll 
> try to explain the insane view on this, but since I am not myself insane I 
> will probably not do so to the satisfaction of those that are.

I have mixed feelings about this line of reasoning, but I can offer
some arguments in favor of it.

>     OK.  Now, suppose there are two exactly identical twins who lead exactly 
> identical lives up until a moment when suddenly, one of them is killed.  
> (This serves as a model of the case with people in parallel universes 
> acting as 'twins'.)
>     Obviously, if you care about these twins, the death was a bad thing to 
> happen.  There is now less of these people.  True, one twin survived, but 
> supposing he is still happy, there is still only half as much happiness in 
> the world due to the twins as there would be if one had not died.  I 
> formalize this by saying that the measure of such conscious observations has 
> been reduced by a factor of 2.
>     The insane view however holds that the mind of the "killed" twin somehow 
> leaps into the surviving twin at the moment he would have been killed.  
> Thus, except for the effect on other people who might have known the twins, 
> the apparent death is of no consequence.

It's not that the mind "leaps".  That would imply that minds have
location, wouldn't it?  And spatial limits?  But that notion doesn't
work well.

Mind is not something that is localized in the universe in the way
that physical objects are.  You can't pin down the location of a mind.
Where in our brains is mind located?  In the glial cells?  In the neurons?
The whole neuron, or just the synapse?  It doesn't make sense to imagine
that you can assign a numerical value to each point in the brain which
represents its degree of mind-ness.  Location is not a property of mind.
Hence we cannot speak of minds "leaping".

It makes more sense to think of mind as a relational phenomenon, like
"greater than" or "next to", but enormously more complicated.  In that
sense, if there are two identical brains, then they both exhibit the
same relational properties.  That means that the mind is the same in
both brains.  It's not that there are two minds each located in a brain,
but rather that all copies of that brain implement the mind.
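
To make the relational view concrete, here is a toy sketch in
Python (my own illustration; the miniature "connectome" is invented
for the example).  Two separate objects with the same relational
structure are distinct as tokens but identical as patterns:

    # Two physically separate "brains" sharing one relational pattern.
    pattern = [("n1", "excites", "n2"), ("n2", "inhibits", "n3")]
    brain_a = list(pattern)    # one implementation
    brain_b = list(pattern)    # an exactly identical copy

    print(brain_a is brain_b)  # False: two distinct objects
    print(brain_a == brain_b)  # True:  one shared pattern

On this view the mind corresponds to the shared pattern, so we have
one mind with two implementations rather than two minds.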

Further support for this model comes from considering things from
the point of view of the mind itself.  Let it consider the question:
which brain am I in at this time?  Which location in the universe do
I occupy?  There is no way for the mind to give a meaningful, unique
answer.  It does not occupy just one brain or just one location.
Any particular answer would be as right, and as wrong, as any other.

In this model, if the number of brains increases or decreases, the
mind will not notice; it will not feel a change.  In fact, it has no
way of telling.  No amount of introspection will reveal the number
of implementations of itself that exist in a universe or a
multiverse.  (Possibly it can tell that there are more than zero
implementations, but even that will be questioned by some.)
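
As a toy sketch of this last point (again my own gloss, not anything
from the original posts): a deterministic introspection routine in
Python returns the same report no matter which copy runs it, or how
many copies exist, so its output carries no information about the
location or the count of its implementations.

    def introspect(memories):
        # Sees only its own internal state, nothing about the substrate.
        return f"I remember {memories} and I feel like one person"

    memories = ("childhood", "yesterday", "this question")
    reports = [introspect(memories) for _ in range(3)]  # three "copies"
    assert len(set(reports)) == 1  # every copy gives an identical report

Running three copies or three thousand changes nothing about any
individual report, which is the sense in which the mind has no way
of telling.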

>     This they call the "quantum theory of immortality" (QTI) because, due to 
> quantum mechanics, there would always be some parallel universe in which 
> any given person would have copies that live past any given age, and they 
> figure the minds would always leap into those copies.  I will from now on 
> call it the "fallacious immortality nonsense" (FIN).  Those beliefs are 
> dangerous by the way, since they can encourage suicide or worse.

This is only dangerous if the belief is wrong, of course.  The
contrary belief could be said to be dangerous in its own way, if it
in turn were wrong.  (For example, it might lead to an urgent desire
to build copies.)

>    I have repeatedly pointed out the obvious consequence that if that were 
> true, then a typical observer would find himself to be much older than the 
> apparent lifetime of his species would allow; the fact that you do not find 
> yourself so old gives their hypothesis a probability of about 0 that it is 
> the truth.  However, they hold fast to their incomprehensible beliefs.

This is a different argument, and it has nothing to do with the idea
of "leaping", which is mostly what I want to take issue with.  All
this argument shows is that measure or probability decreases with
time.  The implications of this for how minds should regard changes
in their number of implementations are complex and IMO unresolved.
(For example, is adding a new implementation to be desired as much
as the destruction of one is to be avoided?  What about size: are
big implementations better than small ones?  What about speed: does
it matter if the implementations get out of phase, and how much does
that affect probability and measure?)
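
To illustrate what "measure decreases with time" does to the age
argument, here is a toy numerical model of my own, with an arbitrary
halving rate: even if implementations of an observer exist at every
age, a randomly sampled observer-moment is almost never an old one.

    # Toy model: measure of observer-moments at age t decays geometrically.
    def prob_older_than(age, decay=0.5, horizon=1000):
        weights = [decay ** t for t in range(horizon)]  # measure at age t
        return sum(weights[age:]) / sum(weights)

    print(prob_older_than(10))   # ~0.001
    print(prob_older_than(100))  # ~8e-31: finding yourself old is negligible

So not finding ourselves absurdly old constrains how measure is
distributed over ages; by itself it does not settle how a mind
should regard gaining or losing implementations.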

Hal Finney
