> On 10 Jun 2020, at 05:25, 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> 
> 
> 
> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>> 
>> 
>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List 
>> <everything-list@googlegroups.com <mailto:everything-list@googlegroups.com>> 
>> wrote:
>> 
>> 
>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List 
>>> <everything-list@googlegroups.com 
>>> <mailto:everything-list@googlegroups.com>> wrote:
>>> 
>>> 
>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>> 
>>>> 
>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com 
>>>> <mailto:jasonre...@gmail.com>> wrote:
>>>> For the present discussion/question, I want to ignore the testable 
>>>> implications of computationalism on physical law, and instead focus on the 
>>>> following idea:
>>>> 
>>>> "How can we know if a robot is conscious?"
>>>> 
>>>> Let's say there are two brains, one biological and one an exact 
>>>> computational emulation, meaning exact functional equivalence. Then let's 
>>>> say we can exactly control sensory input and perfectly monitor motor 
>>>> control outputs between the two brains.
>>>> 
>>>> Given that computationalism implies functional equivalence, then identical 
>>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>>> 
>>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>>> both to describe the pain, both will speak identical sentences. Both will 
>>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>>> pain, will provide identical accounts.
>>>> 
>>>> Does the definition of functional equivalence mean that any scientific 
>>>> objective third-person analysis or test is doomed to fail to find any 
>>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>>> disprove consciousness in the functionally equivalent robot mind?
>>>> 
>>>> Is computationalism as far as science can go on a theory of mind before it 
>>>> reaches this testing roadblock?
>>>> 
>>>> We can’t know if a particular entity is conscious,
>>> 
>>> If the term means anything, you can know one particular entity is conscious.
>>> 
>>> Yes, I should have added that we can’t know that a particular entity other 
>>> than oneself is conscious.
>>>> but we can know that if it is conscious, then a functional equivalent, as 
>>>> you describe, is also conscious.
>>> 
>>> So any entity functionally equivalent to yourself, you must know is 
>>> conscious.  But "functionally equivalent" is vague, ambiguous, and 
>>> certainly needs qualifying by environment and other factors.  Is a dolphin 
>>> functionally equivalent to me? Not in swimming.
>>> 
>>> Functional equivalence here means that you replace a part with a new part 
>>> that behaves in the same way. So if you replaced the copper wires in a 
>>> computer with silver wires, the silver wires would be functionally 
>>> equivalent, and you would notice no change in using the computer. Copper 
>>> and silver have different physical properties such as conductivity, but the 
>>> replacement would be chosen so that this is not functionally relevant.
>> 
>> But that functional equivalence at a microscopic level is worthless in 
>> judging what entities are conscious.  The whole reason for bringing it up 
>> is that it provides a criterion for recognizing consciousness at the entity 
>> level.
>> 
>> The thought experiment involves removing a part of the brain that would 
>> normally result in an obvious deficit in qualia and replacing it with a 
>> non-biological component that replicates its interactions with the rest of 
>> the brain. Remove the visual cortex, and the subject becomes blind, 
>> staggering around walking into things, saying "I'm blind, I can't see 
>> anything, why have you done this to me?" But if you replace it with an 
>> implant that processes input and sends output to the remaining neural 
>> tissue, the subject will have normal input to his leg muscles and his vocal 
>> cords, so he will be able to navigate his way around a room and will say "I 
>> can see everything normally, I feel just the same as before". This follows 
>> necessarily from the assumptions. But does it also follow that the subject 
>> will have normal visual qualia? If not, something very strange would be 
>> happening: he would be blind, but would behave normally, including his 
>> behaviour in communicating that everything feels normal.
> 
> I understand the "Yes doctor" experiment.  But Jason was asking about being 
> able to recognize consciousness by the function of the entity, and I think 
> that is a different problem, one that needs to take into account the 
> possibility of different kinds and degrees of consciousness.  The YD question 
> makes it binary by equating consciousness with being exactly the same as 
> pre-doctor.  Applying that to Jason's question, you would conclude that you 
> cannot infer that other people are conscious because, while they are 
> functionally equivalent in a loose sense, they are not exactly the same as 
> you.  They don't give exactly the same answers to questions.  They may not 
> even be able to see or hear things you do.
> 
> I think what you refer to as "very strange" is possible given a little 
> fuzziness about being functionally identical. 

There is a lot of fuzziness indeed, and it comes in two very different kinds. 
One is that functional equivalence makes sense only relative to a choice of 
substitution level. That fuzziness is about which probability predicate 
represents us, which “[]p” defines us, or which machine supports us. The set 
of such machines is not a computable set (the set of codes of the machines 
computing a given function is never a computable set, and that plays a role in 
the Measure problem).
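A quick sketch of that non-computability, in Python: assume, for contradiction, 
a total decider same_function(p, q) for whether two program texts compute the 
same function (same_function and run are hypothetical names invented for this 
illustration; Rice's theorem says no such total decider exists). It would let 
us decide the halting problem:

def same_function(p: str, q: str) -> bool:
    # The assumed-for-contradiction oracle; it cannot actually be total.
    raise NotImplementedError("no total decider exists (Rice's theorem)")

def halts(program: str, inp: str) -> bool:
    # q1 runs `program` on `inp`, discards the result, then returns 0;
    # q2 returns 0 immediately.  (`run` is a hypothetical interpreter.)
    q1 = f"def f(x):\n    run({program!r}, {inp!r})\n    return 0"
    q2 = "def f(x):\n    return 0"
    # q1 computes the same function as q2 exactly when `program` halts on
    # `inp`, so a decider for equivalence would decide halting: contradiction.
    return same_function(q1, q2)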

Then you have the fuzziness due to the first person, third person, first person 
plural, etc. The “[]p & p” is not definable by the machine (by the “[]p”), so 
the first person “I” does not refer to anything third-person describable. But 
it is imposed by incompleteness: the machine can’t avoid it in introspection, 
it is indubitable, etc. Here “functional equivalence” would mean the complete 
invariance of the (relative) experience. Consciousness enters through the mode 
“[]p & p”, but is not exactly equivalent to it, and the qualia appear through 
the most extended mode: []p & <>t & p (but also the graded variants, like 
[][]p & <><><>t & p, which play a role in the origin of space as qualia). 
Qualia require *some* consistency, or reality, to anticipate on (<>t).

Let me give you the (8) modes in the least theological way possible:

Truth
Mind
Soul
---
Quanta
Qualia

These correspond to the universal machine’s self-referential modes (defined in 
arithmetic, or through arithmetical truth); a small model sketch follows the 
list:

p
[]p
[]p & p
---
[]p & <>t
[]p & <>t & p

(Cf Boolos 1979).
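Here is a minimal sketch, in Python, of how those modes come apart, assuming 
the usual Kripke semantics for the provability reading of [] over transitive, 
conversely well-founded frames (as in Boolos 1979); the two-world frame, the 
valuation and the code are my own illustration:

# Frame: world 0 sees the "dead end" world 1; p is true only at world 1.
R = {0: [1], 1: []}          # accessibility relation
V = {"p": {1}}               # valuation: worlds where the atom p holds

def holds(w, f):
    kind = f[0]
    if kind == "atom":       # ("atom", "p")
        return w in V[f[1]]
    if kind == "top":        # ("top",): the constant t, true everywhere
        return True
    if kind == "and":        # ("and", g, h)
        return holds(w, f[1]) and holds(w, f[2])
    if kind == "box":        # ("box", g): g holds at every accessible world
        return all(holds(v, f[1]) for v in R[w])
    if kind == "dia":        # ("dia", g): g holds at some accessible world
        return any(holds(v, f[1]) for v in R[w])

p, t = ("atom", "p"), ("top",)
modes = {
    "p":             p,
    "[]p":           ("box", p),
    "[]p & p":       ("and", ("box", p), p),
    "[]p & <>t":     ("and", ("box", p), ("dia", t)),
    "[]p & <>t & p": ("and", ("and", ("box", p), ("dia", t)), p),
}
for name, f in modes.items():
    print(f"{name:15s} world 0: {holds(0, f)!s:5s} world 1: {holds(1, f)}")

At the dead-end world 1, []p holds vacuously while <>t fails, so []p and 
[]p & p hold there but []p & <>t and []p & <>t & p do not; at world 0 the 
pattern reverses on p and <>t. Every pair of the five modes is separated at one 
of the two worlds, which illustrates the formal sense in which the five modes 
are genuinely distinct.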

But I can’t resist adding the neoplatonist vocabulary:

The One,
The Intellect,
The Soul
—
Intelligible Matter
Sensible Matter


There are 8 of them in all, as Mind, Quanta and Qualia are each split along 
what the machine can justify and what is true but which the machine cannot 
justify (5 modes, three of which split in two, gives 8). Ultimately, they are 
more like 4 + 4 * infinity, because of the graded variants mentioned above.

I over-simplify a bit, as the quanta seem to appear more in the “5” modes than 
in the 4, but that remains to be detailed. It might be that a theorem prover 
for quanta and qualia requires a quantum algorithm. But everything deduced from 
this has been verified by nature, which would not be the case without quantum 
mechanics.


Bruno






> Suppose his vision was replaced by some combination of sonar and radar.  He 
> could be as close to you as a color blind person in his answers.
> 
> 
> Brent
> 
