On 8/11/2011 12:25 AM, Stathis Papaioannou wrote:
On Thu, Aug 11, 2011 at 4:55 PM, Stephen P. King <stephe...@charter.net> wrote:

    Exactly how would we know that that component was unconscious? What is
the test?
There is no test; it is just assumed, for the purposes of the thought
experiment, that the component lacks the special sauce required for
consciousness. We could even say that the component works by magic, to
avoid discussions about technical difficulties, and the thought
experiment is unaffected. The conclusion is that such a device is
impossible, because it leads to conceptual difficulties.

    What special sauce? Why is it OK to assume that consciousness is
something special that can occur only in special circumstances? Why not
consider the possibility that it is just as primitive as mass, charge and
spin? Why do we need to work so hard to dismiss the direct evidence of our
1st person experience? Why not just accept that it is real, and then wonder
why materialist theories have no room whatsoever in them for it?
The specific question I'm asking is whether it is possible to separate
consciousness from behaviour. Is it possible to make a brain component
that from the engineering point of view functions perfectly when
installed but does not contribute the same consciousness to the brain?
You will note that there is no claim here about any theory of
consciousness: it could be intrinsic to matter, it could come from
tiny black holes inside cells, it could be generated on the fly by
God. Whatever it is, can it be separated from function?

Just to be clear, I'm interested in a slightly different question, which relative to Stathis's might be phrased as "function of what?" If we look at the whole person/robot we talk about behavior, which I think is enough to establish some kind of consciousness, but not necessarily to map each instance of a behavior to a specific conscious thought. People can be thinking different things while performing the same act. So unless we specify "same behavior" to mean "same input/output for all possible input sequences", there is room for same behavior and different consciousness. And this same kind of analysis applies to subsets of the brain as well as to the whole person.

So in Stathis's example of replacing half the brain with a super-AI module that has the same input/output relation with the body and with the other half of the brain, it is not at all clear to me that the person's consciousness is unchanged. Stathis relies on its being *reported* as unchanged, because the speech center is in the other half; but where is the "consciousness center"?

It may be that we're over-idealizing the isolation of the brain. If the super-AI half were perfectly isolated except for those input/output channels, which we are hypothesizing perfectly emulate the dumb brain, then Stathis's argument would show that whatever change in consciousness there might be inside the super-AI side would be undetectable. But in fact the super-AI side cannot be perfectly isolated to those channels: even aside from quantum entanglement, there are thermal perturbations and radioactivity. This means the super AI will produce different behavior, because it will respond differently under these perturbations, and this different behavior will evince its different consciousness.

So in saying 'yes' to the doctor you should either be ready to assume some difference in consciousness or suppose that the substitution level may encompass a significant part of the Milky Way down to the fundamental particle level.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.