On Sun, 24 Oct 2004, J.Andrew Rogers wrote:
>> This does not follow. You can build arbitrarily complex machines with a
>> very tiny finite control function and plenty of tape. The complexity of
>> AI as an algorithm and design space is not in the same class as the
>> complexity of an instance of human-level AI, even though the latter is
>> just the former given some state space to play with.
>
> But that machine cannot fully understand/represent *itself*. The longer
> the tape of its own understanding gets, the more tape it needs to
> represent itself, and so on without end.
The Godel statement represents itself, completely, via diagonalization; the self-reference is achieved within a finite sentence, with no regress of ever-longer representations.
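(A concrete miniature of the diagonalization trick, not from the original thread: a quine. A finite template applied to its own quoted form reproduces the whole program, showing that complete self-representation does not require an ever-growing tape. The sketch below is in Python purely for illustration.)

```python
# Diagonalization in miniature: the two lines below form a quine.
# The template is applied to a quoted (repr'd) copy of itself, so the
# finite program prints its own source exactly -- full self-representation
# with no regress of ever-longer descriptions.
template = 'template = %r\nprint(template %% template)'
print(template % template)
```

Run ignoring the comment lines, the printed output is character-for-character identical to the two-line program itself: %r splices in the quoted template, and %% collapses to a single %.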
-- 
Eliezer S. Yudkowsky                              http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
