Kory Heath wrote:
> On Nov 7, 2008, at 9:34 AM, Brent Meeker wrote:
>> I think I agree with Bruno that it is *logically* possible, e.g.
>> accidental zombies. It's just not nomologically possible.
> I'm not sure what counts as an "accidental zombie". Do you mean
> something like the following:
> I can write a very short computer program that accepts ascii
> characters as input, and then spews out a random series of characters
> as output, and then accepts more input, etc. It's logically possible
> for me to have a "conversation" with this program in which the program
> just happens (by accident) to pass the Turing Test with flying colors.
> Is this what you mean by an "accidental" zombie? If so, it's important
> to understand that this is not a zombie at all by Dennett's definition
> (unless I've really misunderstood Dennett). A zombie is something
> that's physically indistinguishable from a physical conscious entity
> and yet isn't conscious.
It's sort of what I meant; except I imagined a kind of robot that, like your
Turing test program, had its behavior driven by a random number generator but
happened to behave as if it were conscious. I'm not sure where you would draw
the line between the accidentally convincing conversation and the accidentally
behaving robot to say one was a philosophical zombie and the other wasn't.
Since the concept is purely hypothetical, it's a question of semantics.
> That program might be accidentally behaving
> as if it were conscious, but if you had the proper instruments to
> examine it physically, you would be able to conclude exactly that:
> it's a random number generator that's accidentally behaving as though
> it were conscious. Dennett would claim that a random number generator
> that passes a Turing Test is logically possible (but extraordinarily
> unlikely), and he'd happily claim that it's not conscious. He'd claim
> that zombies are something different, and that they're logically
> impossible. (He's also used words like "unimaginable" and "incoherent".)
OK. It's just that the usual definition is strictly in terms of behavior and
doesn't consider inner workings.
My own view is that someday we will understand a lot about the inner workings
of brains; enough that we can tell what someone is thinking by monitoring the
firing of neurons, and that we will be able to build robots that really do
exhibit conscious behavior (although see John McCarthy's website for why we
shouldn't do this). When we've reached this state of knowledge, questions about
qualia and what consciousness is will be seen to be the wrong questions. They
will be like asking where life is located in an animal.
You received this message because you are subscribed to the Google Groups
"Everything List" group.