On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <[email protected]> wrote:



    On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:


    On Wed, 10 Jun 2020 at 09:15, Jason Resch <[email protected]> wrote:



        On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <[email protected]> wrote:



            On Wed, 10 Jun 2020 at 03:08, Jason Resch <[email protected]> wrote:

                For the present discussion/question, I want to ignore
                the testable implications of computationalism on
                physical law, and instead focus on the following idea:

                "How can we know if a robot is conscious?"

                Let's say there are two brains, one biological and
                one an exact computational emulation, meaning
                exact functional equivalence. Then let's say we can
                exactly control the sensory inputs and perfectly
                monitor the motor outputs of both brains.

                Given that computationalism implies functional
                equivalence, identical inputs yield identical
                internal behavior (nerve activations, etc.) and
                identical outputs: muscle movements, facial
                expressions, and speech.

                If we stimulate nerves in each subject's back to
                cause pain and ask both to describe it, both will
                speak identical sentences. Both will say it hurts
                when asked, and if asked to write a paragraph
                describing the pain, both will provide identical
                accounts.

                Does the definition of functional equivalence mean
                that any objective, third-person scientific analysis
                or test is doomed to find no distinction in
                behavior, and thus necessarily fails to disprove
                consciousness in the functionally equivalent robot
                mind?

                Is computationalism as far as science can go on a
                theory of mind before it reaches this testing roadblock?


            We can’t know if a particular entity is conscious, but we
            can know that if it is conscious, then a functional
            equivalent, as you describe, is also conscious. This is
            the subject of David Chalmers’ paper:

            http://consc.net/papers/qualia.html


        Chalmers' argument is that if the functionally equivalent
        brain is not conscious, then somewhere along the way we get
        either suddenly disappearing or fading qualia, both of which
        I agree are philosophically distasteful.

        But what if someone is fine with philosophical zombies and
        suddenly disappearing qualia? Is there any impossibility
        proof for such things?


    Philosophical zombies are less problematic than partial
    philosophical zombies. Partial philosophical zombies would render
    the idea of qualia absurd, because it would mean that we might be
    completely blind, for example, without realising it.

    Isn't this what blindsight exemplifies?


Blindsight entails behaving as if you have vision but not believing that you have vision.

And you don't believe you have vision because you're missing the qualia of seeing.

Anton syndrome entails believing you have vision but not behaving as if you have vision. Being a partial zombie would entail believing you have vision and behaving as if you have vision, but not actually having vision.

That would be a total zombie with respect to vision.  The person with blindsight is a partial zombie.  They have the function but not the qualia.

    As an absolute minimum, although we may not be able to test for
    or define qualia, we should know if we have them. Take this
    requirement away, and there is nothing left.

    Suddenly disappearing qualia are logically possible, but it is
    difficult to imagine how they could work. We would be normally
    conscious while our neurons were being replaced, but when one
    special glutamate receptor in a special neuron in the left
    parietal lobe was replaced, or when exactly 35.54876% of all
    neurons had been replaced, the internal lights would suddenly go
    out.

    I think this all-or-nothing view is misconceived.  It's not
    internal cognition that might vanish suddenly, it's some specific
    aspect of experience: there are people who, through brain injury,
    lose the ability to recognize faces...recognition is a quale.
    Of course people's frequency range of hearing fades (don't ask me
    how I know).  My mother, when she was 95, lost color vision in
    one eye, but not the other.  Some people, it seems, cannot do
    higher mathematics.  So how would you know if you lost the quale
    of empathy, for example?  Could it not just fade...i.e. become
    evoked less and less?


I don't believe suddenly disappearing qualia can happen, but either this (leading to full zombiehood) or fading qualia (leading to partial zombiehood) would be a consequence of replacing the brain, if behaviour could be replicated without replicating qualia.

No.  You're assuming the replacements either instantiate the qualia or they do nothing.  The third possibility is that they instantiate some different qualia, or conditional qualia.

Brent
