On 24 Nov 2008, at 18:08, Kory Heath wrote:

> I see what you mean. But for me, these thought experiments are making
> me doubt that I even have a coherent notion of "computational
> supervenience".

You are not supposed to have a coherent idea of "computational  
supervenience" yet. That notion belongs to the conclusion of the  
reasoning, and it will need an elaboration of what a computation is.  
This is not so hard with ... computer science.

To understand that MEC+MAT is contradictory, you have only to  
understand them well enough to get up to the point where the  
contradiction occurs. You give us many quite good arguments for saying  
that Lucky Alice, and even Lucky Kory, are not conscious. I mainly  
agree with those arguments.

So let me be clear: your arguments that, assuming MEC+MAT, Lucky Alice  
is not conscious are almost correct, and very convincing. And so, of  
course, Lucky Alice is not conscious.

Now, MGA 1 is an argument showing that MEC+MAT, due to the physical  
supervenience thesis and the non-prescience of the neurons, entails  
that Lucky Alice is conscious. The question is: do you see this too?

If you see this, we have:

MEC+MAT entails Lucky Alice is not conscious (by your correct argument)
MEC+MAT entails Lucky Alice is conscious (by MGA 1)

Thus MEC+MAT entails (Lucky Alice is conscious AND Lucky Alice is not  
conscious); that is, MEC+MAT entails "false", a contradiction.  
And that is the point.
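The logical step here is just the propositional fact that a premise entailing both a claim and its negation must itself be false. As an illustrative aside (not part of the original argument), this can be checked mechanically with a truth table, reading P as "MEC+MAT" and Q as "Lucky Alice is conscious":

```python
from itertools import product

def implies(a, b):
    # Material implication: "a implies b" is false only when a is true and b is false.
    return (not a) or b

# ((P -> Q) and (P -> not Q)) -> not P holds under every valuation,
# i.e. it is a tautology: a premise entailing Q and not-Q is false.
tautology = all(
    implies(implies(p, q) and implies(p, not q), not p)
    for p, q in product([False, True], repeat=2)
)
print(tautology)  # True
```

This is only the trivial propositional skeleton; all the philosophical work lies in establishing the two entailments themselves.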

If your argument were not merely convincing but definitive, then I  
would not need MGA 3 to show that it is ridiculous to endow the  
projection of a movie of a computation with consciousness (in real  
"space-time", as the physical supervenience thesis asks for).



> -- Kory

You received this message because you are subscribed to the Google Groups 
"Everything List" group.