On Nov 23, 2008, at 4:18 AM, Bruno Marchal wrote:
> Let us consider your "lucky teleportation case", where someone uses a
> teleporter which fails badly. It annihilates the "original" person,
> but then, by an incredible stroke of luck, the person is afterwards
> reconstructed in his correct state. If you ask him "how do you know
> how to tie your shoes", and the person, after that bad but lucky
> "teleportation", answers "because I learned it in my youth", he is
> correct. He is correct for the same reason Alice's answers to her
> exams were correct, even if luckily so.

I think it's (subtly) incorrect to focus on Lucky Alice's *answers* to  
her exams. By definition, she wrote down the correct answers. But (I  
claim) she didn't compute those answers. A bunch of cosmic rays just  
made her look like she did. Fully-Functional Alice, on the other hand,  
actually did compute the answers.

Let's imagine that someone interrupts Fully-Functional Alice while  
she's taking the exam and asks her, "Do you think that your actions  
right now are being caused primarily by a very unlikely sequence of  
cosmic rays?", and she answers "No". She is answering correctly. By  
definition, Lucky Alice will answer the same way. But she will be  
answering incorrectly. That is the sense in which I'm saying that  
Lucky Kory is making a false statement when he says "I learned to tie  
my shoes in my youth".

In the case of Lucky Kory, I concur that, despite this difference, his  
subjective consciousness is identical to what Kory's would have been  
if the teleportation had been successful. But the reason I can view Lucky  
Kory as conscious at all is that once the lucky accident creates him  
out of whole cloth, his neurons are firing correctly, are causally  
connected to each other in the requisite ways, etc. I have a harder  
time understanding how Lucky Alice can be conscious, because at the  
time I'm supposed to be viewing her as conscious, she isn't meeting  
the causal / computational prerequisites that I thought were  
necessary for consciousness. And I can essentially turn Lucky Kory  
into Lucky Alice by imagining that he is "nothing but" a series of  
lucky teleportations. And then suddenly I don't see how he can be  
conscious, either.

> Suppose I send you a copy of my SANE paper over the internet, and
> that the internet demolishes it completely, but that by an incredible
> chance your buggy computer rebuilds it in its exact original form.
> This will not change the content of the paper, and the paper will be
> correct or false independently of the way it has traveled from me to
> you.

That's because the SANE paper doesn't happen to talk about its own  
causal history. Imagine that I take a pencil and a sheet of paper and  
write the following on it:

"The patterns of markings on this paper were caused by Kory Heath. Of  
course, that doesn't mean that the molecules in this piece of paper  
touched the hands of Kory Heath. Maybe the paper has been teleported  
since Kory wrote it, and reconstructed out of totally different  
molecules. But there is an unbroken causal chain from these markings  
back to something Kory once did."

If you teleport that paper normally, the statement on it remains true.  
If the teleportation fails, but a lucky accident creates an identical  
piece of paper, the statement on it is false. Maybe this has no  
bearing on consciousness or anything else, but I don't want to forget  
about the distinction until I'm sure it's not relevant.

> Of course, the movie still has some relationship with the original
> consciousness of Alice, and this will help us to save the MEC part of
> the physical supervenience thesis, giving rise to the notion of
> "computational supervenience"; but this form of supervenience no
> longer refers to anything *primarily* physical, and this will be
> enough to prevent the use of a concrete universe for blocking the UDA
> conclusion.

I see what you mean. But for me, these thought experiments are making  
me doubt that I even have a coherent notion of "computational  
supervenience".

-- Kory

