"Suppose you have a simple learner that can predict any computable sequence of 
symbols with some probability at least as good as random guessing. Then I can 
create a simple sequence that your predictor will get wrong 100% of the time. 
My program runs a copy of your program and outputs something different from 
your guess."
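The quoted construction can be sketched in a few lines. This is a minimal illustration under my own assumptions: a binary alphabet, and a "predictor" modeled as a callable that maps the history so far to its next guess (the names `adversary` and `repeat_last` are hypothetical, not from the quoted argument).

```python
def adversary(predictor, length):
    """Build a sequence that the given predictor mispredicts at every step."""
    history = []
    for _ in range(length):
        guess = predictor(tuple(history))  # run a copy of the predictor
        history.append(1 - guess)          # output the opposite symbol
    return history

# Example target: a predictor that guesses the previous symbol (0 at the start).
def repeat_last(history):
    return history[-1] if history else 0

seq = adversary(repeat_last, 5)
# seq is wrong for repeat_last at every position, by construction.
```

The point of the sketch is that the adversary is trivially simple: it does nothing but invert whatever the copied predictor says.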

A program like this is an example of narrow AI, and applying the argument as a 
proof that a universal learner is impossible is a misapplication. It does not 
cover all forms of knowledge, in particular the kind of knowledge that we work 
with all of the time. There is no basis for expecting the predictions made by 
such a program to be absolutely right all of the time. Equating a 'predictor' 
with infallible knowledge that is always right has no basis in the world as we 
know it from common sense. This is not a proof that a universal learner is 
impossible, because the foundation of knowledge is not the pursuit of perfect 
knowledge of the future.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M686945359db215b9199c28a4
Delivery options: https://agi.topicbox.com/groups/agi/subscription
