On 8/11/2011 8:47 PM, Stathis Papaioannou wrote:
On Fri, Aug 12, 2011 at 3:16 AM, meekerdb <meeke...@verizon.net> wrote:

But his behavior is exactly the same.  You are evading the hypothesis by
counting internal thoughts as "behavior".  As noted before, "behavior" is
fuzzy.  You could try defining "same behavior" to mean same output for
all possible inputs; but it's not clear that "all possible inputs" is a
coherent concept.  From a more empirical standpoint you really mean
something more vague, and "same behavior" means "similar enough to past
behavior that his friends don't think he's had a personality change."
But that, I think, leaves a lot of room for differences of qualia.
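As a toy illustration of that point (the sketch and its names are mine,
not anything from the thread): a black-box equivalence check can only
ever sample the input space, so it certifies "no difference observed so
far", never "same output for all possible inputs".

    import random

    def io_equivalent(f, g, inputs, trials=10_000):
        """Black-box check: do f and g agree on a random sample of inputs?

        If the input space is unbounded this can only ever certify 'same
        behavior so far' (the empirical, fuzzy sense of sameness), never
        the exhaustive one.
        """
        for _ in range(trials):
            x = random.choice(inputs)
            if f(x) != g(x):
                return False  # a witnessed behavioral difference
        return True  # no difference observed; none is thereby proven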

The statement was "neither his behaviour would change nor would he
notice that anything had changed".
But the second clause is begging the question.  To say "nor would he notice
that anything had changed" is the same as saying his consciousness is not
changed.  That goes beyond functionalism.
The claim is that (a) he would not notice any change, and (b)
therefore his consciousness would not change. (a) is not begging the
question but is a consequence of any physicalist theory of mind, i.e.
mental events occur because of physical events in the brain, so that
if the same physical events occur the same mental events will also
occur.

But ex hypothesi the physical events in the "brain" are no longer the same because some of them are in the chip or the AI hemisphere.

John Searle claims to be a physicalist, but he believes that if
part of your brain is replaced by a functionally identical computer
chip, your behaviour will remain the same while your consciousness
fades away. Incidentally, Searle accepts that there is no problem in
principle with making such a zombie chip. However, this is not
possible under a physicalist theory as defined above. If the computer
chip has the same I/O behaviour as the volume of tissue it replaces,
the brain that does the noticing

And what part would that be? The homunculus in the Cartesian theater. I don't think functionalism entails that there is some "noticing" neuron in the brain. If functionalism is correct, "noticing" must be distributed.

cannot tell that anything has
changed. Only if consciousness is disconnected from brain activity,
due for example to an immaterial soul, could the subject notice a
change even though his brain is responding normally. The conclusion is
that IF the replacement is functionally identical THEN the
consciousness is also preserved,

But the question is what "functionally identical" means. Can it mean only the same input/output, or must the replacement also be similar inside, at some lower level? If you specify the same input/output for all possible input sequences, including "environmental" ones, then I agree that your argument goes through. But failing that, it seems to me the consciousness that is within or due to the AI hemisphere can be different AND noticed in that hemisphere.

Your argument seems to assume that consciousness is localized and must be outside the AI part, but that would lead to philosophical zombies when you replaced the whole brain and there was no "outside".

which establishes
substrate-independence and functionalism. How in practice you would
determine that something is functionally identical is a separate
question.

If both these criteria are
satisfied then the qualia are preserved. If a brain component is
replaced with a functional equivalent (perhaps in a different
substrate), neither the behaviour would change nor would the subject
notice any change; therefore his consciousness would not change. Take
care of the engineering problem and the consciousness follows
automatically.

It may be difficult to define exactly and be sure of "same behaviour"
or "same output for all possible inputs", but it is a commonplace
difficulty for engineers, who may be called on to replace a
component in a machine with a different but, hopefully, functionally
identical component. If an op amp in a piece of electronic equipment
has burned out, you may look for another device with similar or better
power handling, bandwidth, etc. The replacement may, for example, be an
IC where the original was made of discrete parts. It may not function
exactly the same under all possible tests, but it should be close
enough for the conditions to which the equipment will be subjected.
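A minimal sketch of that acceptance test (the parameter names and
figures are illustrative placeholders, not from any real datasheet):
the replacement only has to meet or beat the original on the parameters
that the expected operating conditions make relevant.

    # Hypothetical drop-in op amp check. "Functionally identical" here
    # means at least as good on every parameter that matters for the
    # conditions the equipment will actually see, not identical under
    # all possible tests.
    ORIGINAL  = {"bandwidth_hz": 1e6, "output_ma": 20, "slew_v_per_us": 0.5}
    CANDIDATE = {"bandwidth_hz": 3e6, "output_ma": 25, "slew_v_per_us": 13.0}

    def acceptable_replacement(candidate, required):
        # Adequate over the expected operating envelope is the working
        # standard of "same behaviour" here.
        return all(candidate[k] >= required[k] for k in required)

    print(acceptable_replacement(CANDIDATE, ORIGINAL))  # True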

Exactly.  The engineer only cares about the input/output, so he can replace
a simple circuit with a computer programmed to emulate it.  But the computer
(assuming it's sufficiently fast) can multitask and do other things at the
same time, like calculate a trillion digits of pi.  If none of this is
monitored at any output, no one can tell by just looking at the computer's
I/O.  But they could tell if they looked at the internal circuit activity.
So what counts as "behavior" is ambiguous and depends on defining arbitrary
black boxes.
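To make the black-box point concrete (a toy sketch with invented names,
not anyone's actual example): the two functions below are
indistinguishable by input/output alone, yet their internal activity
differs, so whether they "behave the same" depends entirely on where
the black box is drawn.

    def circuit(x):
        # The original simple circuit: the output is all that an
        # outside observer ever sees.
        return 2 * x + 1

    hidden_work = []  # internal activity never routed to any output

    def emulating_computer(x):
        # Same input/output mapping, plus extra internal activity (a
        # stand-in for "calculating digits of pi") that no black-box
        # test at the I/O boundary can detect.
        hidden_work.append(sum(range(1000)))
        return 2 * x + 1

    # Behaviourally identical at the chosen black-box boundary:
    assert all(circuit(x) == emulating_computer(x) for x in range(100))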
The same is true of neurons. They could be doing complex calculations
for protein folding, could even be aware of it the way worker bees are
aware of their position in the hive, but the subject is not himself
aware of it. All he is aware of is the higher-level behaviour that
manifests through motor output

I don't think there is motor output to manifest my every thought. Some mental activity can be localized in the brain, or in a part of it. Of course I am not aware of most of what my brain does; I can't account for the processes by which thoughts pop into my head. As it is, my brain is relatively insensitive to magnetic fields. If I replaced a part of it with a chip that had the same neural input/output function but was sensitive to magnetic fields, then my thoughts would be the same except when I changed my orientation relative to the Earth's magnetic field. So my consciousness would be changed.
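As a schematic of that scenario (invented names, with signals reduced
to integers for brevity): the chip matches the tissue on every input
the equivalence was certified over, but it also responds to an
environmental channel the tissue ignores.

    def neuron(signal, b_field=0.0):
        # Biological tissue: the output depends on the neural signal
        # only; the magnetic field is ignored.
        return 2 * signal

    def chip(signal, b_field=0.0):
        # Same neural input/output function as the tissue, except on an
        # "environmental" input the tissue never responds to.
        return 2 * signal + (1 if b_field > 0.5 else 0)

    # Identical over the inputs the replacement was certified on...
    assert all(neuron(s) == chip(s) for s in range(100))
    # ...but divergent once the untested environmental channel varies.
    assert neuron(3, b_field=1.0) != chip(3, b_field=1.0)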

Similarly, the left hemisphere might implement some superintelligence
which experiences much more, but is deciding to fool the right
hemisphere into thinking all is well.


Suppose your left hemisphere is replaced with a superintelligent AI
that easily models the behaviour of your biological brain and interacts
appropriately with your right hemisphere, but in addition has various
lofty thoughts of its own. The result would then be that you, Jason
Resch, would continue to behave normally and not notice any change in
your consciousness.

Why would he not notice?  Who is "he"?  You seem to invoke the Cartesian
theater where "noticing" takes place, and so the AI part isn't noticed
because it doesn't go to the theater.

This is rather like the fallacy of the Chinese Room, where Searle
claims that since the human operator doesn't understand Chinese the
room can't understand Chinese. There are two systems, the room and the
operator, and just because they interact there is no requirement that
one understands anything the other understands, let alone that they
are one mind.

But you refer to Jason's "not noticing a difference" as evidence that
functionalism is true.  "Noticing" is a psychological, non-functional
concept, so it can't be invoked in support of functionalism.  In the
Chinese room there is no "not noticing a change" because there is no
noticing at all.
This is why the thought experiment involves *partial* replacement. The
behaviour we are considering is the low-level behaviour of the neurons
as well as the high-level behaviour of the subject. The original
biological tissue is constrained to behave normally if it receives the
normal inputs from the replacement, and if consciousness supervenes on
the physical, it is therefore constrained not to notice that anything
has changed.

That doesn't follow. Your argument assumes that consciousness supervenes on the biological part, but not on the replacement.

Brent

The conclusion is therefore that consciousness is not
separable from function; it cannot, for example, be
substrate-dependent.


