> On 11 Jun 2020, at 21:26, 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> 
> 
> 
> On 6/11/2020 9:03 AM, Bruno Marchal wrote:
>> 
>>> On 9 Jun 2020, at 19:08, Jason Resch <jasonre...@gmail.com 
>>> <mailto:jasonre...@gmail.com>> wrote:
>>> 
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical law, and instead focus on the 
>>> following idea:
>>> 
>>> "How can we know if a robot is conscious?”
>> 
>> That question is very different than “is functionalism/computationalism 
>> unfalsifiable?”.
>> 
>> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
>> functionalism, by defining computationalism as the assertion that there exists 
>> a level of description of my body/brain such that I survive (my consciousness 
>> remains relatively invariant) with a digital machine (supposedly physically 
>> implemented) replacing my body/brain.
>> 
>> 
>> 
>>> 
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence.
>> 
>> I guess you mean “for all possible inputs”.
>> 
>> 
>> 
>> 
>>> Then let's say we can exactly control sensory input and perfectly monitor 
>>> motor control outputs between the two brains.
>>> 
>>> Given that computationalism implies functional equivalence, then identical 
>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>> 
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>> 
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> With computationalism (and perhaps without), we cannot prove that anything 
>> is conscious (we can know our own consciousness, but still cannot justify 
>> it to ourselves in any public, third-person communicable way). 
>> 
>> 
>> 
>>> 
>>> Is computationalism as far as science can go on a theory of mind before it 
>>> reaches this testing roadblock?
>> 
>> Computationalism is indirectly testable. By verifying the physics implied by 
>> the theory of consciousness, we verify it indirectly.
>> 
>> As you know, I define consciousness by that indubitable truth that all 
>> universal machines, cognitively rich enough to know that they are universal, 
>> find by looking inward (in the Gödel-Kleene sense), and which is also non 
>> provable (non rationally justifiable) and even non definable without 
>> invoking *some* notion of truth. Then such consciousness appears to be a 
>> fixed point for the doubting procedure, like in Descartes, and it gets a key 
>> role: self-speeding-up relative to universal machine(s).
>> 
>> So, it seems so clear to me that nobody can prove that anything is conscious 
>> that I make it one of the main ways to characterise it.
> 
> Of course as a logician you tend to use "proof" to mean deductive proof...but 
> then you switch to a theological attitude toward the premises you've used and 
> treat them as given truths, instead of mere axioms. 

Here I was using “proof” in its common informal sense; it is more S4Grz1 than G 
(more []p & p than []p; note that the machine cannot formalise []p & p).
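For readers less familiar with the notation, the distinction can be sketched in standard provability-logic terms (a standard gloss, offered only as illustration):

```latex
% G (the Goedel-Loeb logic): \Box p reads "p is formally provable".
% Its characteristic axiom is Loeb's theorem.
% S4Grz (Grzegorczyk) axiomatises the "knower" obtained by reading
% knowledge as "provable AND true"; the machine cannot define this
% internally, since the "and true" conjunct invokes truth, which is
% not arithmetically definable (Tarski).
\[
  \text{G (L\"ob):}\quad \Box(\Box p \to p) \to \Box p
\]
\[
  \text{Grz:}\quad \Box\big(\Box(p \to \Box p) \to p\big) \to p
\]
\[
  \text{Knowledge:}\quad K p \;:\equiv\; \Box p \land p
\]
```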




> I appreciate your categorization of logics of self-reference. 


It is not really mine. All sound universal machines get it, sooner or later.



> But I  doubt that it has anything to do with human (or animal) consciousness. 
>  I don't think my dog is unconscious because he doesn't understand Goedelian 
> incompleteness. 

This is like saying that we don’t need superstring theory to appreciate a 
pizza. Your dog does not need to understand Gödel’s theorem to have its 
consciousness explained by machine theology.



> And I'm not conscious because I do.  I'm conscious because of the Darwinian 
> utility of being able to imagine myself in hypothetical situations.

If that is true, then consciousness is purely functional, which is contradicted 
by any personal data. As I have explained, consciousness accompanies such 
imagination, but that imagination filters consciousness. It cannot create it, 
just as two apples cannot create the number two. 




> 
>> 
>> Consciousness is already very similar to consistency, which is (for 
>> effective theories, and sound machines) equivalent to a belief in some 
>> reality. No machine can prove its own consistency, and no machine can prove 
>> that there is a reality satisfying its beliefs.
> 
> First, I can't prove it because such a proof would be relative to premises 
> which would simply be my beliefs. 

But you can still search for a simpler theory. It will be shared by more 
people. Doing metaphysics with the scientific method means that we limit the 
ontological commitment as much as possible.


> Second, I can prove it in the sense of jurisprudence...i.e. beyond reasonable 
> doubt.  Science doesn't care about "proofs", only about evidence.


The whole point is that there is no evidence for primary matter. No one doubts 
the existence of matter. The question is about the need to assume it (primary 
matter), or whether its appearance admits a simpler, and testable, theory, like 
the fact that all computations are executed in (the standard model of) arithmetic.
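The last clause can be made precise via Kleene's normal form theorem, a standard result in computability theory (sketched here only as illustration):

```latex
% Kleene normal form: there are a primitive recursive predicate T(e, x, y)
% ("y codes a halting computation of program e on input x") and a primitive
% recursive function U extracting the output, such that
\[
  \varphi_e(x) \simeq U\big(\mu y\, T(e, x, y)\big)
\]
% Hence "program e on input x outputs z" is the Sigma_1 sentence
\[
  \exists y\,\big( T(e, x, y) \land U(y) = z \big)
\]
% which is true in the standard model of arithmetic exactly when that
% computation exists.
```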

Bruno



> 
> Brent
> 
>> 
>> In all cases, it is never the machine per se which is conscious, but the 
>> first person associated with the machine. There is a core universal person 
>> common to each of “us” (with “us” in a very large sense of universal 
>> numbers/machines).
>> 
>> Consciousness is not much more than knowledge, and in particular indubitable 
>> knowledge.
>> 
>> Bruno
>> 
>> 
>> 
>>> 
>>> Jason
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to everything-list+unsubscr...@googlegroups.com 
>>> <mailto:everything-list+unsubscr...@googlegroups.com>.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhpWiuoSoOyeW2DS3%2BqEaahequxkDcGK-bF2qjgiuqrAg%40mail.gmail.com
>>>  
>>> <https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhpWiuoSoOyeW2DS3%2BqEaahequxkDcGK-bF2qjgiuqrAg%40mail.gmail.com?utm_medium=email&utm_source=footer>.
>> 
> 
> 

