On 9/5/2012 5:14 PM, Stathis Papaioannou wrote:
On Thu, Sep 6, 2012 at 1:04 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

The ability to test depends entirely on my familiarity with the human and
how good the technology is. Can I touch them, smell them? If so, then I
would be surprised if I could be fooled by an inorganic body. Has there ever
been one synthetic imitation of a natural biological product that can
withstand even moderate examination?

If you limit the channel of my interaction with the robot however, I stand
much less of a chance of being able to tell the difference. A video
conference with the robot only requires that they look convincing on camera.
We can't tell the difference between a live performance and a taped
performance unless there is some clue in the content. That is because we
aren't literally present so we are only dealing with a narrow channel of
sense experience to begin with.

In any case, what does being able to tell from the outside have to do with
whether or not the thing feels? If it is designed by experts to fool other
people into thinking that it is alive, then so what if it succeeds at
fooling everyone? Something can't fool itself into thinking that it is conscious.

A film is not a good example because you can't interact with it. The
point is that if it is possible to make a robot that fools everyone
then this is ipso facto a philosophical zombie. It doesn't feel but it
pretends to feel. A corollary of this is that a philosophical zombie
could display all the behaviour of a living being. So how can you be
sure that living beings other than you are not zombies? Also, what is
the evolutionary utility of consciousness if the same results could
have in principle been obtained without it?

I agree with all you say, except the implication of the last sentence: that evolution would never produce results with some inessential side effect.

First, evolution has to produce things by evolving, not starting from a clean sheet. In the case of consciousness I think it quite likely that this is what happened. Conscious thinking is similar to talking to yourself because evolution happened to take advantage of the auditory processing of language to internalize symbolic cogitation.

Second, even though the same result might be obtained in some other way, it might be less efficient in some sense to do so. We might conceivably make a human-acting robot that cogitated using a computer separate from the one used for processing language, and while I think it would be conscious, it would be conscious in a different way.

