On Wed, 10 Jun 2020 at 12:49, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 09:15, Jason Resch <jasonre...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:
>>>>
>>>>> For the present discussion/question, I want to ignore the testable
>>>>> implications of computationalism on physical law, and instead focus on the
>>>>> following idea:
>>>>>
>>>>> "How can we know if a robot is conscious?"
>>>>>
>>>>> Let's say there are two brains, one biological and one an exact
>>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>>> say we can exactly control sensory input and perfectly monitor motor
>>>>> control outputs between the two brains.
>>>>>
>>>>> Given that computationalism implies functional equivalence, identical
>>>>> inputs yield identical internal behavior (nerve activations, etc.) and
>>>>> identical outputs, in terms of muscle movements, facial expressions, and
>>>>> speech.
>>>>>
>>>>> If we stimulate nerves in each one's back to cause pain and ask both
>>>>> to describe the pain, both will speak identical sentences. Both will
>>>>> say it hurts when asked, and if asked to write a paragraph describing
>>>>> the pain, both will provide identical accounts.
>>>>>
>>>>> Does the definition of functional equivalence mean that any objective,
>>>>> scientific third-person analysis or test is doomed to find no
>>>>> distinction in behavior, and thus necessarily fails to disprove
>>>>> consciousness in the functionally equivalent robot mind?
>>>>>
>>>>> Is computationalism as far as science can go on a theory of mind
>>>>> before it reaches this testing roadblock?
>>>>>
>>>>
>>>> We can’t know if a particular entity is conscious, but we can know that
>>>> if it is conscious, then a functional equivalent, as you describe, is also
>>>> conscious. This is the subject of David Chalmers’ paper:
>>>>
>>>> http://consc.net/papers/qualia.html
>>>>
>>>
>>> Chalmers' argument is that if a different brain is not conscious, then
>>> somewhere along the way we get either suddenly disappearing or fading
>>> qualia, which I agree are philosophically distasteful.
>>>
>>> But what if someone is fine with philosophical zombies and suddenly
>>> disappearing qualia? Is there any impossibility proof for such things?
>>>
>>
>> Philosophical zombies are less problematic than partial philosophical
>> zombies. Partial philosophical zombies would render the idea of qualia
>> absurd, because it would mean that we might be completely blind, for
>> example, without realising it.
>>
>>
>> Isn't this what blindsight exemplifies?
>>
>
> Blindsight entails behaving as if you have vision but not believing that
> you have vision.
>
>
> And you don't believe you have vision because you're missing the qualia of
> seeing.
>
> Anton syndrome entails believing you have vision but not behaving as if
> you have vision.
> Being a partial zombie would entail believing you have vision and behaving
> as if you have vision, but not actually having vision.
>
>
> That would be a total zombie with respect to vision.  The person with
> blindsight is a partial zombie.  They have the function but not the qualia.
>
> As an absolute minimum, although we may not be able to test for or define
>> qualia, we should know if we have them. Take this requirement away, and
>> there is nothing left.
>>
>> Suddenly disappearing qualia are logically possible, but it is difficult
>> to imagine how they could work. We would be normally conscious while our
>> neurons were being replaced, but when one special glutamate receptor in a
>> special neuron in the left parietal lobe was replaced, or when exactly
>> 35.54876% of all neurons had been replaced, the internal lights would
>> suddenly go out.
>>
>>
>> I think this all-or-nothing view is misconceived.  It's not internal
>> cognition that might vanish suddenly, it's some specific aspect of
>> experience: there are people who, through brain injury, lose the ability
>> to recognize faces...recognition is a quale.  Of course people's frequency
>> range of hearing fades (don't ask me how I know).  My mother, when she was
>> 95, lost color vision in one eye but not the other.  Some people, it
>> seems, cannot do higher mathematics.  So how would you know if you lost
>> the qualia of empathy, for example?  Could it not just fade, i.e. become
>> evoked less and less?
>>
>
> I don't believe suddenly disappearing qualia can happen, but either this -
> leading to full zombiehood - or fading qualia - leading to partial
> zombiehood - would be a consequence of replacing the brain if behaviour
> could be replicated without replicating qualia.
>
>
> No.  You're assuming the replacements either instantiate the qualia or
> they do nothing.  The third possibility is that they instantiate some
> different qualia, or conditional qualia.
>

The possibilities are that a functionally identical replacement either
leaves the qualia unchanged or changes them. If it changes the qualia, it
leads to a strange situation: the subject could have an arbitrarily large
change in qualia, but would behave the same and declare that everything is
the same. That would mean that the subject either does not notice the
change, or notices but is unable to control his body, which of its own
accord declares that everything is the same.


-- 
Stathis Papaioannou
