On 22 Jun 2012, at 20:55, meekerdb wrote:

On 6/22/2012 11:42 AM, Stephen P. King wrote:

You don't need 1p-indeterminacy for this. The independence requires only that if a brain supports consciousness in a particular computation not using neuron 323, and if physical supervenience is true, then consciousness can be said to be supported by the same brain doing the same computation with neuron 323 eliminated. Do you agree with this?

That doesn't follow. You are treating consciousness as though it were a single thing to be either 'supported' or 'not supported'. Eliminating 323 would only show that those particular conscious thoughts did not depend on 323, not that 'consciousness' is independent of 323. Some other conscious thoughts may be impossible after eliminating 323.

You quote me here, not Stephen. Here we are at step 8, where, for the reductio ad absurdum, we assume both comp and physical supervenience.

It is obvious that, by eliminating the piece 323, we lose the counterfactual correctness, and that some conscious thought will be impossible if the machine is put in a different context. But that is not the case here: the question was asked for the same context. If consciousness disappears because 323 might be needed in some different context, this introduces enough "magic" to make us lose the comp idea of saying "yes" to the doctor for the reason of being Turing emulable. The role of "323" becomes magical, and a priori not Turing emulable. Comp is just false. If we keep comp, we must abandon the idea that consciousness is supported by a physically active device, and so a physical device can only be what numbers perceive statistically in arithmetic (following the other steps of the UDA).
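As an informal illustration only (my own toy sketch, not part of the original argument, with "neuron 323" modelled as a branch of a function): a component can be idle during a particular run, so removing it leaves that run bit-for-bit unchanged, even though behaviour on other inputs — the counterfactuals — would differ.

```python
# Toy sketch: "neuron 323" as a branch that is idle for this input.
# All names here are hypothetical, chosen just for the illustration.

def brain_with_323(stimulus):
    # Component 323 only fires for stimulus >= 10.
    if stimulus >= 10:
        return "output-from-323"
    return "output-without-323"

def brain_without_323(stimulus):
    # The same machine with component 323 removed.
    return "output-without-323"

# In the same context (stimulus = 5), the two runs are identical:
assert brain_with_323(5) == brain_without_323(5)

# Only the counterfactual behaviour differs, which is what is lost:
assert brain_with_323(12) != brain_without_323(12)
```

The point at issue is then whether the identical run with the idle component removed still supports the same conscious moment, given physical supervenience.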


