On 2/17/2019 2:10 AM, Bruno Marchal wrote:
But the machine itself will not believe us, or understand this.
Why not? It can't prove what algorithm it is, but it can know that we
know... so why would it disbelieve us?
The machine becomes inconsistent if it assumes its own consistency (cf. Rogers's
sentence). The machine can assume a sort of consistency of its past beliefs,
as PA can add the axiom that PA is consistent (or that PA is inconsistent)
without losing its consistency; but in that case it becomes a new machine, with
a theology similar in shape but with a different content/meaning for the box. She
has changed her own code (as we do every second, instinctively).
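
A minimal formal restatement of the standard facts being invoked here (the
notation is mine, not Bruno's):

  % Con(T) abbreviates the arithmetized statement "T does not prove 0=1".
  \mathrm{Con}(T) :\equiv \neg\,\mathrm{Prov}_T(\ulcorner 0=1 \urcorner)

  % Goedel's second incompleteness theorem: a consistent, recursively
  % axiomatizable T extending PA cannot prove its own consistency...
  T \nvdash \mathrm{Con}(T)

  % ...equivalently, if T proves Con(T), then T is inconsistent.
  T \vdash \mathrm{Con}(T) \;\Rightarrow\; T \vdash \bot

  % Since PA is sound, Con(PA) is true, so PA + Con(PA) is consistent;
  % but it is a *different* theory (a "new machine"), and Goedel II
  % applies to it afresh:
  PA + \mathrm{Con}(PA) \nvdash \mathrm{Con}(PA + \mathrm{Con}(PA))

  % Likewise PA + \neg\mathrm{Con}(PA) stays consistent: if it proved
  % \bot, PA itself would prove Con(PA), contradicting Goedel II.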

I think this is misleading. When you say it becomes inconsistent if it assumes its consistency, you mean that if it uses its consistency as an axiom it can lead to proving "false". But in fact everyone assumes that their beliefs are consistent; they just don't take it as an axiom, and neither do they take it as an axiom that they are inconsistent. If I'm creating an AI, I see no reason to have it make any assumption or inference about its consistency in the sense of Goedel. It need only be consistent in the sense of avoiding ex falso quodlibet.
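
For reference, the weaker notion invoked in that last sentence (standard
definitions, not anything from the thread itself): ex falso quodlibet is the
rule that a contradiction proves everything, so avoiding it just means never
deriving a contradiction, with no Goedelian self-reference required.

  % Ex falso quodlibet (the principle of explosion):
  % from a contradiction, any sentence whatsoever follows.
  \bot \vdash \varphi \quad \text{for every sentence } \varphi

  % Consistency in this practical sense is simply
  T \nvdash \bot
  % which is distinct from T proving the arithmetized sentence Con(T).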

Brent
