On 10/2/2013 5:15 PM, Stathis Papaioannou wrote:
On 1 October 2013 23:31, Pierz <[email protected]> wrote:
Maybe. It would be a lot more profound if we definitely *could* reproduce the
brain's behaviour. The devil is in the detail as they say. But a challenge to
Chalmers' position has occurred to me. It seems to me that Bruno has
convincingly argued that *if* comp holds, then consciousness supervenes on the
computation, not on the physical matter.
When I say "comp holds" I mean in the first instance that my physical
brain could be replaced with an appropriate computer and I would still
be me. But this assumption leads to the conclusion that the computer
is not actually needed, just the computation as platonic object.
But what if the replacement were just slightly different, or differed only in some rare
circumstances (like being in an MRI)? That seems very likely.
So if
it's true that my brain could be replaced with a physical computer
then my brain and the computer were not physical in a fundamental
sense in the first place!
But this depends on the MGA or Olympia argument, which I find suspect.
While this is circular-sounding, I don't
think it's actually contradictory. It is not a necessary premise
of Chalmers' argument (or indeed, for most scientific arguments) that
there be a fundamental physical reality.
As for reproducing the brain's behaviour, it comes down to whether
brain physics is computable. It probably *is* computable, since no
evidence of non-computable physics has been found, as far as I am aware.
Suppose it were not Turing computable, but were computable in some other sense (e.g.
hypercomputable). Aren't you just setting up a tautology in which whatever the brain
does, whatever the universe does, we'll call it X-computable? Already we have one good
model of the universe, Copenhagen QM, that says it's not Turing computable.
If it is not, then computationalism is false. But even if
computationalism is false, Chalmers' argument still shows that
*functionalism* is true. Computationalism is a subset of
functionalism.
But functionalism suggests that what counts is the output, not the manner in
which it is arrived at. That is to say, the brain or whatever neural subunit or
computer is doing the processing is a black box. You input something and then
read the output, but the intervening steps don't matter. Consider what this
might mean in terms of a brain. Let's say a vastly advanced alien species comes
to earth. It looks at our puny little brains and decides to make one to fool
us. This constructed person/brain receives normal conversational input and
outputs conversation that it knows will perfectly mimic a human being. But in
fact the computer doing this processing is vastly superior to the human brain.
It's like a modern PC emulating a TRS-80, except much more so. When it
computes/thinks up a response, it draws on a vast amount of knowledge,
intelligence and creativity and accesses qualia undreamed of by a human. Yet
its response will completely fool any normal human and will pass Turing tests
till the cows come home. What this thought experiment shows is that, while
half-qualia may be absurd, it most certainly is possible to reproduce the
outputs of a brain without replicating its qualia. It might have completely
different qualia, just as a very good actor's emotions can't be distinguished
from the real thing, even though his or her internal experience is quite
different. And if qualia can be quite different even though the functional
outputs are the same, this does seem to leave functionalism in something of a
quandary. All we can say is that there must be some kind of qualia occurring,
which is a rather different result from what Chalmers is claiming. When we
extend this type of scenario to artificial neurons or partial brain prostheses
as in Chalmers' paper, we quickly run up against perplexing problems. Imagine
the advanced alien provides these prostheses. Each takes the same inputs and
generates the same correct outputs, but processes those inputs within a much
vaster, more complex system. Does the brain utilizing this advanced prosthesis
experience a kind of expanded consciousness because of this, without that
difference being detectable? Or do the qualia remain somehow confined to the
prosthesis (whatever that means)? These crazy quandaries suggest to me that
basically, we don't know shit.
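The black-box point above can be put in code. Here is a minimal toy sketch in Python (the function names and canned replies are my own invention, purely illustrative): two responders with identical input/output behaviour whose internal processing differs enormously, so no test at the interface can distinguish them.

```python
# Toy illustration of functional equivalence at the interface (hypothetical
# names, not from the thread): same I/O, very different internals.

def simple_responder(prompt: str) -> str:
    # Direct lookup with no extra internal work.
    replies = {"hello": "hi there", "how are you?": "fine, thanks"}
    return replies.get(prompt, "I don't know")

def alien_responder(prompt: str) -> str:
    # Does vastly more internal computation (standing in for the alien's
    # richer processing), then emits exactly the same answer.
    internal_state = sum(hash((prompt, i)) for i in range(10_000))
    del internal_state  # invisible at the interface
    return simple_responder(prompt)

# Externally indistinguishable on every input tried:
for p in ["hello", "how are you?", "what is comp?"]:
    assert simple_responder(p) == alien_responder(p)
```

The sketch only shows that identical outputs do not fix the internal process; whether that internal difference entails different qualia is exactly what the thread is debating.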
Essentially, I think that if the alien computer reproduces human
behaviour then it will also reproduce human qualia. Start with a
prosthesis that replaces 1% of the brain. If it has different qualia
despite copying the original neurons' I/O behaviour then very quickly
the system will deteriorate: the brain's owner will notice that the
qualia are different and behave differently
I don't see how you can be sure of that. How will he compare his qualia of red now with
his qualia of red before? And why would small differences imply "the system will quickly
deteriorate"? Suppose he became color blind, which he could realize; color blind people
are still conscious.
, which is impossible if
the original assumption about copying the original neurons' I/O
behaviour is true.
See above point about approximation and circumstance.
The same is the case if the prosthesis replaces 99%
of the neurons - the 1% remaining neurons would notice that the qualia
were different and deviate from normal behaviour, and the same would
be the case if only one of the original neurons were present.
You seem to be agreeing with Craig that each neuron alone is conscious.
If you
assume it is possible that the prosthesis reproduces the I/O behaviour
but not the qualia you get a contradiction, and a contradiction is
worse than a crazy quandary.
I agree if you generalize that to "reproduces the I/O behaviour in all circumstances and
to a very high fidelity."
Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.