Russell Standish:

>You are right when it comes to the combination of two independent
>systems A and B. What the original poster's idea was a
>self-simulating, or self-aware system. In this case, consider the liar
>type paradox:
>   I cannot prove this statement
>Whilst I cannot prove this statement, I do know it is true, simply
>because if I could prove the statement it would be false.

Yes, but Godel statements are more complex than verbal statements like the 
one above: they actually encode the complete rules of the theorem-proving 
system into the statement. A better analogy might be if you were an upload 
living in a self-contained deterministic computer simulation, and the only 
messages you could send to the outside world were judgments about whether 
particular mathematical theorems were true. Once you make a judgment, you 
can't take it back, and for any judgment your program makes, there must be 
a halting program that can show you'll definitely make that judgment after 
some finite number of steps. Suppose you know the complete initial 
conditions X and dynamical rules Y of the simulation. Then suppose you're 
given a mathematical theorem Z which you can see qualifies as an "encoding" 
of the statement "the deterministic computer simulation with initial 
conditions X and dynamical rules Y will never output theorem Z as a true 
statement." You can see intuitively that it should be true if your 
reasoning remains correct, but you can't be sure that at some point--say, 
after the simulation has been running for a million years--you won't decide 
to output that statement in a fit of perversity. Nor can you actually come 
up with a rigorous *proof* that you'll never do that, since you can't find 
any shortcut to predicting the system's behavior aside from actually 
letting the simulation run and seeing what happens.
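(Incidentally, the trick by which a statement can contain a complete 
description of the very system that produces it is the same diagonalization 
trick behind quines. Here's a minimal Python sketch--my own illustration, 
not anything from the original construction--of a program that outputs a 
statement containing its own complete source code:)

```python
# A template with a slot ({!r}) for its own repr. Filling the slot with the
# template itself yields the program's complete source text -- the standard
# quine-by-format construction, analogous to how a Godel sentence embeds a
# full description of the formal system it talks about.
template = 'template = {!r}\nsource = template.format(template)\nprint("The program whose full source follows will never output this statement as a theorem:")\nprint(source)'
source = template.format(template)
print("The program whose full source follows will never output this statement as a theorem:")
print(source)
```

Running it prints a "statement" followed by an exact copy of the program's 
own source, so the output refers to the complete rules of the system that 
generated it, just as the theorem Z above refers to the complete simulation 
(X, Y).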

The same thing would be true even if you replaced an individual in a 
computer simulation with a giant simulated community of mathematicians who 
could only output a given theorem by unanimous vote, and where the size of 
the community was constantly growing, so the probability of errors should 
be ever-diminishing. Although they might hope never to make an error even 
if the simulation ran forever, they couldn't rigorously prove this unless 
they found some shortcut for predicting their own community's behavior 
better than just letting the program run and seeing what would happen (and 
if they did find such a shortcut, it would have strange implications for 
their own feeling of free will!).


You received this message because you are subscribed to the Google Groups 
"Everything List" group.