On 05-Jul-07, at 22:14, Jesse Mazer wrote:

> His [Penrose's] whole Godelian argument is based on the idea that for
> any computational theorem-proving machine, by examining its construction
> we can use this "understanding" to find a mathematical statement which
> *we* know must be true, but which the machine can never output--that we
> understand something it doesn't. But I think my argument shows that if
> you were really to build a simulated mathematician or community of
> mathematicians in a computer, the Godel statement for this system would
> only be true *if* they never made a mistake in reasoning or chose to
> output a false statement to be perverse, and that therefore there is no
> way for us on the outside to have any more confidence about whether they
> will ever output this statement than they do (and thus neither of us can
> know whether the statement is actually a true or false theorem of
> arithmetic).

I think I agree with your line of argumentation, but your way of talking 
could be misleading, especially in the way people might interpret 
"arithmetic".
If we are in front of a machine that we know to be sound, then we can 
indeed know that the Godelian proposition associated with the machine is 
true. For example, nobody (serious) doubts that PA (Peano Arithmetic, 
the first-order formal arithmetic theory/machine) is sound. So we know 
that all its Godelian sentences are true, and PA cannot know that. But 
this just proves that I am not PA, and that I actually have stronger 
abilities than PA.
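The situation for a sound theory can be sketched in the usual notation (my shorthand, not in the original post: T is the theory/machine, Prov_T its provability predicate, and G_T its Godelian sentence, obtained by the diagonal lemma):

```latex
% The diagonal lemma gives a sentence G_T asserting its own unprovability:
T \vdash G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)
% If T is sound, T cannot prove G_T (it would then prove a falsehood),
% so G_T is unprovable, and hence true in the standard model:
T \nvdash G_T \quad\text{and}\quad \mathbb{N} \models G_T
```

So knowing the soundness of T is exactly what licenses us, from outside, to assert the truth of G_T.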
I could have taken ZF instead (ZF is the Zermelo-Fraenkel formal 
theory/machine of sets), although I must say that if I have entire 
confidence in PA, I have only 99.9998% confidence in ZF (and thus I can 
be only 99.9998% sure of the ZF Godelian sentences). About NF (Quine's 
New Foundations formal theory/machine) I have only 50% confidence.

Now all (sufficiently rich) theories/machines can prove their own 
Godel theorem. PA can prove that if PA is consistent then PA cannot 
prove its own consistency. A somewhat weak (compared to ZF) theory like 
PA can even prove the corresponding theorem for the richer ZF: PA can 
prove that if ZF is consistent then ZF cannot prove its own consistency. 
So, in general, a machine can find its own Godelian sentences, and can 
even infer their truth in some abductive way from very minimal 
inductive inference abilities, or from assumptions.
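Spelled out in provability notation (again my own shorthand: Con(T) abbreviates the arithmetized consistency statement of T):

```latex
% Second incompleteness theorem, formalized inside PA itself:
\mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}) \rightarrow
  \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner \mathrm{Con}(\mathrm{PA}) \urcorner)
% And PA proves the corresponding fact about the richer theory ZF:
\mathrm{PA} \vdash \mathrm{Con}(\mathrm{ZF}) \rightarrow
  \neg\,\mathrm{Prov}_{\mathrm{ZF}}(\ulcorner \mathrm{Con}(\mathrm{ZF}) \urcorner)
```

The point is that the *conditional* is provable by the weak machine; only the antecedent Con(ZF) lies beyond its proving abilities, which is why it can at best bet on it.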

No sound (or just consistent) machine can ever prove its own Godelian 
sentences; in particular no machine can prove its own consistency, but 
the machine can bet on them or "know" them serendipitously. This is 
comparable with consciousness. Indeed it is easy to manufacture thought 
experiments illustrating that no conscious being can prove it is 
conscious, except that "consciousness" is more truth-related, so that 
machines cannot even define their own consciousness (by Tarski's 
undefinability-of-truth theorem).
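Tarski's theorem, in the same sketchy notation: unlike provability, truth for a theory's own language is not even definable in that language. There is no arithmetic formula True(x) such that, for every arithmetic sentence phi,

```latex
% No such truth predicate can exist within arithmetic itself:
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
  \leftrightarrow \varphi
  \qquad \text{(impossible for a definable } \mathrm{True}\text{)}
% By contrast, the provability predicate IS definable:
\mathrm{Prov}_T(x) \text{ is an arithmetic formula for any r.e. theory } T
```

This asymmetry between definable provability and undefinable truth is what I mean when saying the machine cannot even define its own consciousness.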



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]