Hey "agi" (formerly Jim Bromer, just kidding buddy :)

I like this comment of yours:

"The misunderstanding that a 'predictor' is the same as absolute
> knowledge that is always right has no basis in the world that might be known
> from common sense."

This captures the tension between the mathematical,
algorithmic-information-theory school and the less mathematical
psychology/philosophy approaches. People tend to fall into one school
or the other.

Mike A

On 8/13/19, agi <[email protected]> wrote:
> "Suppose you have a simple learner that can predict any computable sequence
> of symbols with some probability at least as good as random guessing. Then I
> can create a simple sequence that your predictor will get wrong 100% of
> the time. My program runs a copy of your program and outputs something
> different from your guess."
> 
> This kind of program is an example of narrow AGI, and the application of the
> theory as a proof that a universal learner is impossible is irrelevant. It
> does not apply to all forms of knowledge, in particular, the kind of
> knowledge that we work with all of the time. There is no basis for the claim
> that a prediction made by a program like this could be absolutely right all
> of the time. The misunderstanding that a 'predictor' is the same as absolute
> knowledge that is always right has no basis in the world that might be known
> from common sense. This is not a proof that a universal learner is
> impossible because the foundation of knowledge is not the striving for
> perfect knowledge of the future.
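For what it's worth, the diagonalization argument quoted above is easy to sketch in code. Here's a toy Python illustration (the function names and the particular "simple learner" are my own choices for the demo, not anything from the original post): the adversary runs a copy of any deterministic bit predictor on the history so far and emits the opposite bit, so that predictor is wrong on every single symbol of the sequence.

```python
def majority_predictor(history):
    """A 'simple learner': guess the bit seen most often so far (0 on a tie)."""
    return 1 if sum(history) * 2 > len(history) else 0

def adversarial_sequence(predictor, length):
    """Build a sequence the given predictor gets wrong at every step."""
    history = []
    for _ in range(length):
        guess = predictor(history)   # run a copy of the predictor
        history.append(1 - guess)    # output something different from its guess
    return history

seq = adversarial_sequence(majority_predictor, 20)
wrong = sum(majority_predictor(seq[:i]) != seq[i] for i in range(len(seq)))
print(wrong, len(seq))  # prints: 20 20 -- wrong on all 20 symbols
```

Note this only defeats the one fixed predictor it was built against (and only a deterministic one at that), which is exactly agi's point: it is a narrow adversary, not evidence about all forms of knowledge.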

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M315ee1488b208c022e170412
Delivery options: https://agi.topicbox.com/groups/agi/subscription
