On Thu, Aug 11, 2011 at 3:02 PM, meekerdb <[email protected]> wrote:

>> It's not the same outputs for the same inputs, since the mathematician
>> has far more elaborate mental states even if he just answers "21" and
>> "12". For example, he may be thinking about how boring the questions
>> are and about what he is going to have for lunch. So if part of the
>> mathematician's brain were replaced with a calculator, it isn't the
>> case that neither his behaviour would change nor would he notice that
>> anything had changed.
>>
>
> But his behavior is exactly the same.  You are evading the hypothesis by
> counting internal thoughts as "behavior".  As noted before, "behavior" is
> fuzzy.  You could try defining "same behavior" to mean same output for all
> possible inputs; but it's not clear that "all possible inputs" is a coherent
> concept.  From a more empirical standpoint you really mean something more
> vague and "same behavior" means "similar to past behavior such that his
> friends don't think he's had a personality change."  But that, I think,
> leaves a lot of room for differences of qualia.

The statement was "neither his behaviour would change nor would he
notice that anything had changed". If both of these criteria are
satisfied, then the qualia are preserved. If a brain component is
replaced with a functional equivalent (perhaps in a different
substrate), neither would the behaviour change nor would the subject
notice any change; therefore his consciousness would not change. Take
care of the engineering problem and the consciousness follows
automatically.

It may be difficult to define exactly and be sure of "same behaviour"
or "same output for all possible inputs", but this is a commonplace
difficulty for engineers, who may be called on to replace a component
in a machine with a different but hopefully functionally identical
component. If an op amp in a piece of electronic equipment has burned
out, you may look for another device with similar or better power
handling, bandwidth and so on. The replacement may, for example, be an
IC where the original was made of discrete parts. It may not function
exactly the same under all possible tests, but it should be close
enough for the conditions to which the equipment will be subjected.
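To make "close enough" concrete, here is a minimal sketch in Python of
the spec-matching idea (the parameter names and figures are purely
illustrative, not taken from any real datasheet): a replacement counts
as functionally equivalent if it meets or exceeds the original's
specifications on every parameter that matters for this particular
piece of equipment.

# Illustrative worst-case demands of the circuit; made-up figures.
REQUIRED_SPECS = {
    "bandwidth_mhz": 1.0,        # highest signal frequency it must pass
    "slew_rate_v_per_us": 0.5,   # fastest signal edge it must follow
    "supply_voltage_v": 15.0,    # supply rails it must tolerate
}

def is_functional_equivalent(candidate, required=REQUIRED_SPECS):
    """True if the candidate meets or exceeds every required spec."""
    return all(candidate.get(spec, 0.0) >= minimum
               for spec, minimum in required.items())

# An IC standing in for a discrete-part original: it need not match
# under all possible tests, only under the conditions the equipment
# will actually be subjected to.
ic_replacement = {
    "bandwidth_mhz": 3.0,
    "slew_rate_v_per_us": 13.0,
    "supply_voltage_v": 18.0,
}

print(is_functional_equivalent(ic_replacement))  # True

A real engineer would of course also check noise, phase margin, pinout
and so on; the point is only that "functionally identical" is always
relative to a finite set of conditions, never to all possible tests.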

>>> Similarly, the left hemisphere might implement some
>>> superintelligence which experiences much more, but is deciding to fool
>>> the
>>> right hemisphere into thinking all is well.
>>>
>>
>> Suppose your left hemisphere is replaced with a superintelligent AI
>> that easily models the behaviour of your biological brain and interacts
>> appropriately with your right hemisphere, but in addition has various
>> lofty thoughts of its own. The result would then be that you, Jason
>> Resch, would continue to behave normally and not notice any change in
>> your consciousness.
>
> Why would he not notice?  Who is "he"?  You seem to invoke the Cartesian
> theatre where "noticing" takes place and so the AI part isn't noticed
> because it doesn't go to the theatre.

This is rather like the fallacy in Searle's Chinese Room argument:
since the human operator doesn't understand Chinese, Searle claims,
the room can't understand Chinese. But there are two systems, the room
and the operator, and the fact that they interact does not require
that one understands anything the other understands, let alone that
they are one mind.


-- 
Stathis Papaioannou
