On 01 Oct 2013, at 15:31, Pierz wrote:
Maybe. It would be a lot more profound if we definitely *could*
reproduce the brain's behaviour. The devil is in the detail as they
say. But a challenge to Chalmers' position has occurred to me. It
seems to me that Bruno has convincingly argued that *if* comp holds,
then consciousness supervenes on the computation, not on the
physical matter. But functionalism suggests that what counts is the
output, not the manner in which it is arrived at. That is to say,
the brain or whatever neural subunit or computer is doing the
processing is a black box. You input something and then read the
output, but the intervening steps don't matter. Consider what this
might mean in terms of a brain.
That's not clear to me. The question is "output of what". If it is the
entire subject, this is more behaviorism than functionalism.
Putnam's functionalism makes clear that we have to take the output of
the neurons into account.
Comp is functionalism, but with the idea that we don't know the level
of substitution, so it might be that we have to take into account the
output of the gluons in our atoms (so comp makes clear that it only
asks for the existence of a level of substitution, and then shows that
no machine can know for sure its substitution level, making Putnam's
sort of functionalism a bit fuzzy).
Let's say a vastly advanced alien species comes to earth. It looks
at our puny little brains and decides to make one to fool us. This
constructed person/brain receives normal conversational input and
outputs conversation that it knows will perfectly mimic a human
being. But in fact the computer doing this processing is vastly
superior to the human brain. It's like a modern PC emulating a
TRS-80, except much more so. When it computes/thinks up a response,
it draws on a vast amount of knowledge, intelligence and creativity
and accesses qualia undreamed of by a human. Yet its response will
completely fool any normal human and will pass Turing tests till the
cows come home. What this thought experiment shows is that, while
half-qualia may be absurd, it most certainly is possible to
reproduce the outputs of a brain without replicating its qualia. It
might have completely different qualia, just as a very good actor's
emotions can't be distinguished from the real thing, even though his
or her internal experience is quite different. And if qualia can be
quite different even though the functional outputs are the same,
this does seem to leave functionalism in something of a quandary.
All we can say is that there must be some kind of qualia occurring,
a rather different result from what Chalmers is claiming. When we
extend this type of scenario to artificial neurons or partial brain
prostheses as in Chalmers' paper, we quickly run up against
perplexing problems. Imagine the advanced alien provides such a
prosthesis. It takes the same inputs and generates the same correct
outputs, but it processes those inputs within a much vaster, more
complex system. Does the brain utilizing this advanced prosthesis
experience a kind of expanded consciousness because of this, without
that difference being detectable? Or do the qualia remain somehow
confined to the prosthesis (whatever that means)? These crazy
quandaries suggest to me that basically, we don't know shit.
Hmm, I am not convinced. Chalmers' argument is that to get a
philosophical zombie, the fading argument shows, you would have to go
through half-qualia, which is absurd. His goal (here) is to show that
"no qualia" is absurd.
That the qualia can be different is known in the qualia literature,
and is a big open problem per se. But Chalmers argues only that "no
qualia" is absurd, indeed because it would needs some absurd notion of
intermediate half qualia.
Maybe I am missing a point. Stathis can clarify this further.
Eventually the qualia are determined by infinitely many number
relations, and a brain filters them. It does not create them, just as
no machine can create PI, only "re-compute" it, somehow. The analogy
breaks down here, as qualia are a purely first-person notion, which
explains why they are distributed over the whole universal dovetailing
(sigma_1 arithmetic).
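
(For readers not familiar with the term: a universal dovetailer is
simply a program that interleaves the execution of all programs, a
step at a time, so that every computation eventually gets arbitrarily
far. Here is a minimal Python sketch of that interleaving pattern;
the "programs" are stand-in generators chosen only for illustration,
not Bruno's actual arithmetical construction.)

    # Toy universal dovetailer: at stage k, start the k-th program and
    # advance every program started so far by one more step, so no
    # computation is ever starved.
    def program(n):
        """Hypothetical stand-in: the n-th 'program' counts upward from n."""
        k = n
        while True:
            yield k
            k += 1

    def dovetail(stages=5):
        running = []                        # programs started so far
        for stage in range(stages):
            running.append(program(stage))  # start one new program per stage
            for i, p in enumerate(running):
                value = next(p)             # advance each running program one step
                print("stage %d: program %d produced %d" % (stage, i, value))

    dovetail()
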
Bruno
http://iridia.ulb.ac.be/~marchal/