Kory Heath wrote:
> On Nov 9, 2008, at 9:56 AM, Brent Meeker wrote:
>> It's sort of what I meant; except I imagined a kind of robot that,
>> like your Turing test program, had its behavior run by a random
>> number generator but just happened to behave as if it were conscious.
> Ok. That works just as well for me.
>> I'm not sure where you would draw the line between the accidentally
>> convincing conversation and the behaving robot to say one was a
>> philosophical zombie and the other wasn't.
> I wouldn't. I would say that neither of them is a philosophical zombie
> at all. And I'm pretty sure that would be Dennett's position.
>> Since the concept is just a hypothetical it's a question of semantics.
> I agree. But the semantics are important when it comes to
> communicating with other philosophers. My only point at the beginning
> of this thread was that Bruno would be getting himself into hot water
> with other philosophers by claiming that unimplemented computations
> describing conscious beings should count as zombies, because that's a
> misuse of the established term.
>> OK. It's just that the usual definition is strictly in terms of
>> behavior and doesn't consider inner workings.
> But the inner workings are part of the behavior, and I'm pretty sure
> that the usual definition of "philosophical zombie" includes these
> inner workings.
>> My own view is that someday we will understand a lot about the inner
>> workings of brains; enough that we can tell what someone is thinking
>> by monitoring the firing of neurons, and that we will be able to
>> build robots that really do exhibit conscious behavior (although see
>> John McCarthy's website for why we shouldn't do this). When we've
>> reached this state of knowledge, questions about qualia and what
>> consciousness is will be seen to be the wrong questions. They will be
>> like asking where life is located in an animal.
> As far as I understand it, this is exactly Dennett's position.
> Let's imagine we know enough about the inner workings of brains to
> examine a brain and tell what that person is thinking, feeling, etc.
> Imagine that we certainly know enough to examine a brain and confirm
> that it is *not* just a random-number generator that's accidentally
> seeming to be conscious. We can look at a brain and tell that it
> really is responding to the words that are being spoken to it, etc.
> Let's say that we actually do examine some particular brain, and
> confirm that it's meeting all of our physical criteria of
> consciousness. Do you think it's logically possible for that brain to
> *not* be conscious? If you don't believe that, then you, like Dennett
> (and me), don't believe in the logical possibility of zombies.
I'm with you and Dennett - except I'm reserved about the use of "logical
possibility". I don't think logic makes anything impossible except "A and ~A",
which is a failure of expression. So I tend to just say "impossible" or
sometimes "nomologically impossible".