On Tue, May 12, 2015 at 11:42 PM, Russell Standish <li...@hpcoders.com.au>
wrote:

> On Tue, May 12, 2015 at 08:59:57PM -0500, Jason Resch wrote:
> > Chalmers' fading qualia argument <http://consc.net/papers/qualia.html>
> > shows that if replacing a biological neuron with a functionally
> > equivalent silicon neuron changed conscious perception, then it would
> > lead to an absurdity, either:
> > 1. qualia fade/change as silicon neurons gradually replace the
> > biological ones, leading to a case where the qualia end up completely
> > out of touch with the functional state of the brain.
> > or
> > 2. the replacement eventually leads to a sudden and complete loss of
> > all qualia, but this suggests a single neuron, or even a few molecules
> > of that neuron, when substituted, somehow completely determine the
> > presence of qualia.
>
> This syllogism is wrong. After all, when removing links from a
> network, each time following a different sequence of links to be
> removed, it will be a different link that causes the network to fall
> apart.
>
> So it does not suggest "a single neuron, or even a few molecules of
> that neuron, when substituted, somehow completely determine the
> presence of qualia".
>
> This was always why I found the fading qualia argument unconvincing -
> in spite of being a dyed-in-the-wool functionalist.
>
>
What is he/I missing? The non-functionalist will say that a robot brain is
a zombie, and a biological brain is fully conscious with qualia. Along the
way of replacing real neurons with artificial ones, you go from an
all-biological conscious brain to a non-conscious zombie. So if the end
result is a zombie, and the starting point is consciousness, then logically
(it seems to me) somewhere along the path of replacing a greater and
greater fraction of biological neurons with artificial ones, the
consciousness/qualia must either gradually change or disappear suddenly. I
don't see any way around that.
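
For concreteness, the point about networks is easy to simulate. The sketch
below (a toy complete graph of my own construction, not anything from
Russell's actual model) just confirms that a different link turns out to be
the "critical" one under different removal orders:

import random
from itertools import combinations

def is_connected(nodes, edges):
    # Breadth-first search connectivity check.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        for m in adj[frontier.pop()] - seen:
            seen.add(m)
            frontier.append(m)
    return len(seen) == len(nodes)

nodes = list(range(8))
base_edges = list(combinations(nodes, 2))  # start from a complete graph

for trial in range(3):
    order = base_edges[:]
    random.shuffle(order)         # a different removal sequence each trial
    remaining = set(base_edges)
    for edge in order:
        remaining.discard(edge)
        if not is_connected(nodes, list(remaining)):
            print("trial %d: removing %s broke the network" % (trial, edge))
            break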



>
> >
> > His argument is convincing, but what happens when we replace neurons
> > not with functionally identical ones, but with neurons that fire
> > according to an RNG? In all but one case, the random firings of the
> > neurons will result in completely different behaviors, but what about
> > that one (immensely rare) case where the random neuron firings (by
> > chance) equal the firing patterns of the substituted neurons?
> >
> > In this case, behavior as observed from the outside is identical.
> > Brain patterns and activity are similar, but according to
> > computationalism the consciousness is different, or perhaps a zombie
> > (if all neurons are replaced with randomly firing neurons). Presume
> > that the activity of neurons in the visual cortex is required for
> > visual qualia, and that all neurons in the visual cortex are replaced
> > with randomly firing neurons, which by chance mimic the behavior of
> > neurons when viewing an apple.
> >
> > Is this not an example of fading qualia, or qualia desynchronized from
> > the brain state? Would this person feel that they are blind, or lack
> > visual qualia, all the while not being able to express their
> > deficiency? I used to think, when Searle argued this exact same thing
> > would occur when biological neurons are substituted with functionally
> > identical artificial ones, that it was completely ridiculous, for
> > there would be no room in the functionally equivalent brain to support
> > thoughts such as "help! I can't see, I am blind!", since the
> > information content in the brain is identical when the neurons are
> > functionally identical.
> >
> > But then how does this reconcile with fading qualia as the result of
> > substituting randomly firing neurons? The computations are not the
> > same, so presumably the consciousness is not the same.
>
> That also does not follow from computational
> supervenience. Difference in computation does not entail a difference
> in qualia. It's the converse that is entailed.
>

But if you attribute the same consciousness to what is in effect a random
computation, then I would think computationalism ceases to be an effective
theory of consciousness, for then any and all conscious states could in
theory be mapped to what is a random computation. Imagine a black-box
computer function that takes two inputs (x, y) and returns an output. You
test it 100 times with varying inputs and each time it returns x*y. You
conclude the function is multiplying the inputs, but when you inspect the
code you find that the function was ignoring the inputs and returning
random.randint(-(y**x), y**x)  # a random integer between -(y to the power
x) and y to the power x. You were just (un)lucky enough to get the value
x*y for each of your 100 tests. Now, since the computation is different
from what you expected, had you built a larger computer program using this
function in place of mul(x, y), the computations performed by that larger
program would be completely different from what you supposed, but there
still might be some rare occasions where it outputs the expected result,
or where it (for a time) mirrors your intended behavior.
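
Here is a minimal sketch of that black box (the names black_box and
true_mul are mine, made up just for illustration):

import random

def true_mul(x, y):
    # The function the tester believes they are calling.
    return x * y

def black_box(x, y):
    # Ignores the causal role of its inputs: it just draws a random
    # integer from [-(y**x), y**x]. Only by extreme luck would a run of
    # draws happen to coincide with x*y every time.
    bound = y ** x
    return random.randint(-bound, bound)

trials = [(random.randint(1, 5), random.randint(1, 5)) for _ in range(100)]
matches = sum(black_box(x, y) == true_mul(x, y) for x, y in trials)
print("%d/100 outputs happened to equal x*y" % matches)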

In the case of neurons, if they are not summing the firings of other
connected neurons, but are still firing in the right pattern, then couldn't
you disconnect and separate all the neurons and still get the same firing
patterns? If there's no causal relation between the neurons, and if that
relation is not relevant, then these disconnected neurons scattered in
space would still have the same consciousness. But that seems unlikely to
me. What's the relation between these disconnected neurons, and why aren't
other disconnected neurons, elsewhere in time or space, also considered
part of this mind?
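
The contrast I have in mind can be sketched with a toy model (entirely my
own invention, not a serious neuron model): one "neuron" actually sums the
firings of its inputs, the other ignores its inputs and just replays a
recorded pattern, yet the two output spike trains are identical.

def causal_neuron(input_spikes, threshold=2):
    # Fire whenever the summed input spikes at a time step reach threshold.
    return [1 if sum(step) >= threshold else 0 for step in input_spikes]

def replay_neuron(_input_spikes, recorded_output):
    # Ignore the inputs entirely; just play back the recorded pattern.
    return list(recorded_output)

inputs = [(1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 1, 0)]  # 3 inputs, 4 steps
out_causal = causal_neuron(inputs)
out_replay = replay_neuron(inputs, out_causal)
print(out_causal == out_replay)  # True: same pattern, no causal dependence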


>
> > But also, the information content does not support
> > knowing/believing/expressing/thinking something is wrong. If anything,
> > the information content of this random brain is much less, but it
> > seems the result is something where the qualia are out of sync with
> > the global state of the brain. Can anyone else here shed some clarity
> > on what they think happens, and how to explain it in the rare case of
> > luckily working randomly firing neurons, when only a partial
> > substitution of the neurons in a brain is performed?
> >
>
> I think one's intuitions are an imperfect guide, particularly when the
> number of neurons involved is in any way a significant fraction of
> the brain.
>
> Computational supervenience => don't know
> Physical supervenience => yes (but only in a classical physical universe).
>

Searle said (something I very much disagree with):

"...as the silicon is progressively implanted into your dwindling brain,
you find that the area of your conscious experience is shrinking, but that
this shows no effect on your external behavior. You find, to your total
amazement, that you are indeed losing control of your external behavior.
You find, for example, that when the doctors test your vision, you hear
them say, "We are holding up a red object in front of you; please tell us
what you see." You want to cry out, "I can't see anything. I'm going
totally blind." But you hear your voice saying in a way that is completely
out of your control, "I see a red object in front of me." If we carry the
thought-experiment out to the limit, we get a much more depressing result
than last time. We imagine that your conscious experience slowly shrinks to
nothing, while your externally observable behavior remains the same."

But this seems to be what computationalism would imply for someone whose
neurons were systematically replaced with randomly (but luckily) firing
neurons. At what point do the inputs matter, and to whom do they matter? If
computations link my mind with yours (as my message, sent from my brain,
reaches yours through your e-mail client), then why does one mind and its
consciousness end and some other begin? Can you replace computations at the
level of the retina, at regions of brains, between hemispheres of brains,
between brains? At what point does something become merely bits of input
rather than a necessary and indispensable stage of some computation, whose
provenance and history are as important as its value?

Jason
