On Wed, 10 Jun 2020 at 09:15, Jason Resch <jasonre...@gmail.com> wrote:

>
>
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific
>>> objective third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>>
>>
>> We can’t know if a particular entity is conscious, but we can know that
>> if it is conscious, then a functional equivalent, as you describe, is also
>> conscious. This is the subject of David Chalmers’ paper:
>>
>> http://consc.net/papers/qualia.html
>>
>
> Chalmers' argument is that if a different brain is not conscious, then
> somewhere along the way we get either suddenly disappearing or fading
> qualia, which I agree are philosophically distasteful.
>
> But what if someone is fine with philosophical zombies and suddenly
> disappearing qualia? Is there any impossibility proof for such things?
>

Philosophical zombies are less problematic than partial philosophical
zombies. Partial philosophical zombies would render the idea of qualia
absurd, because they would mean that we might be completely blind, for
example, without realising it. As an absolute minimum, although we may not
be able to test for or define qualia, we should know if we have them. Take
this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible, but it is difficult to
imagine how they could work. We would be normally conscious while our neurons
were being replaced, but when one special glutamate receptor in a special
neuron in the left parietal lobe was replaced, or when exactly 35.54876% of
all neurons had been replaced, the internal lights would suddenly go out.

--
Stathis Papaioannou
