On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett <bhkell...@optusnet.com.au>
wrote:

> From: Stathis Papaioannou <stath...@gmail.com>
>
>
> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <bhkell...@optusnet.com.au>
> wrote:
>
>> From: Stathis Papaioannou <stath...@gmail.com>
>>
>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <bhkell...@optusnet.com.au>
>> wrote:
>>
>>>
>>> If the theory is that consciousness is replicated whenever the observable
>>> behaviour of the brain is replicated, then the clear corollary is that
>>> consciousness can be inferred from observable behaviour. Which implies that
>>> I can be as certain of the consciousness of other people as I am of my own.
>>> This seems to do some violence to the 1p/1pp/3p distinctions that
>>> computationalism relies on so much: only 1p is "certainly certain". But if
>>> I can reliably infer consciousness in others, then other things can be as
>>> certain as 1p experiences....
>>>
>>
>> You can’t reliably infer consciousness in others. What you can infer is
>> that whatever consciousness an entity has, it will be preserved if
>> functionally identical substitutions in its brain are made.
>>
>>
>> You have that backwards. You can infer consciousness in others by
>> observing their behaviour. The alternative would be solipsism. Now, while
>> you can't prove or disprove solipsism in a mathematical sense, you can
>> reject solipsism as a useless theory, since it tells you nothing about
>> anything. Science, by contrast, acts on the available evidence --
>> observations of behaviour in this case.
>>
>> But we have no evidence that consciousness would be preserved under
>> functionally identical substitutions in the brain. Consciousness may be a
>> global affair, so functional equivalence may not be achievable, or even
>> definable, within the context of a conscious brain. Can you map the
>> functionality of even a single neuron? You are assuming that you can, but
>> if that function is global, then you probably can't. There is a fair amount
>> of glibness in your assumption that consciousness will be preserved under
>> such substitutions.
>>
>>
>>
>> You can’t know if a mouse is conscious, but you can know that if mouse
>> neurones are replaced with functionally identical electronic neurones, its
>> behaviour will be the same and any consciousness it may have will also be
>> the same.
>>
>>
>> You cannot know this without actually doing the substitution and
>> observing the results.
>>
>
> So do you think that it is possible to replace the neurones with
> functionally identical neurones (same output for same input) and the
> mouse’s behaviour would *not* be the same?
>
>
> Individual neurons may not be the appropriate functional unit.
>
> It seems that you might be close to circularity -- neural functionality
> includes consciousness. So if I maintain neural functionality, I will
> maintain consciousness.
>

The only assumption is that the brain is somehow responsible for
consciousness. The argument I am making is that if any part of the brain is
replaced with a functionally identical non-biological part, engineered to
replicate its interactions with the surrounding tissue, consciousness will
also necessarily be replicated; for if it were not, an absurd situation
would result, whereby consciousness could radically change without the
subject noticing, or consciousness could decouple completely from
behaviour, or consciousness could flip on or off with the change of a
single subatomic particle.

--
Stathis Papaioannou
