--- William Pearson <[EMAIL PROTECTED]> wrote:
> Matt Mahoney:
> "I propose prediction as a general test of understanding.  For example,
> do you understand the sequence 0101010101010101 ?  If I asked you to
> predict
> the next bit and you did so correctly, then I would say you understand
> it."
> 
> What would happen if I said, "I don't have time for silly games,
> please stop emailing me". Would you consider that I understood it?

If it were a Turing test, then probably yes.  But a Turing test is not the
best way to test for intelligence.

Ben Goertzel once said something like "pattern recognition + goals = AGI".
 I am generalizing pattern recognition to prediction and proposing that
the two components can be tested separately.

For example, a speech recognition system is evaluated by word error rate. 
But for development it is useful to separate the system into its two main
components, an acoustic model and a language model, and test them
separately.  A language model is just a probability distribution.  It does
not have a goal.  Nevertheless, the model's accuracy can be measured by
using it in a data compressor whose goal (implicit in the encoder) is to
minimize the size of the output without losing information.  The
compressed size correlates well with word error rate.  Such testing is
useful because if the system has a poor word error rate but the language
model is good, then the problem can be narrowed down to the acoustic
model.  Without this test, you wouldn't know which component to fix.

I propose compression as a universal goal for testing the predictor
component of AI.  More formally, if the system predicts the next symbol
with probability p, then that symbol has utility log(p).  Maximizing total
utility is then equivalent to minimizing compressed size, since an ideal
arithmetic coder spends -log2(p) bits on a symbol predicted with
probability p.
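As a toy sketch (not any particular compressor), the scoring can be
written in a few lines of Python, where a predictor is just a function
from the history seen so far to a probability distribution over the next
symbol:

```python
import math

def score(predictor, symbols):
    """Total log2 utility of a predictor over a sequence.

    Its negation is the ideal compressed size in bits that an
    arithmetic coder driven by the same predictor would achieve.
    """
    total = 0.0
    for i, sym in enumerate(symbols):
        # Probability the predictor assigned to sym before seeing it.
        p = predictor(symbols[:i])[sym]
        total += math.log2(p)
    return total

# A uniform coin-flip predictor over {'0', '1'}: it learns nothing,
# so its ideal compressed size equals the raw length of the input.
uniform = lambda history: {'0': 0.5, '1': 0.5}
bits = -score(uniform, "0101")
print(bits)  # 4.0 bits: random guessing compresses nothing
```

A better predictor assigns higher probability to the symbols that
actually occur, which raises the total utility and shrinks the
compressed size accordingly.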

AIXI provides a formal justification for this approach.  In AIXI, an agent
and an environment (both Turing machines) exchange symbols interactively. 
In addition, the environment signals a numeric reward to the agent during
each cycle.  The goal of the agent is to maximize the accumulated reward. 
Hutter proved that the optimal (but uncomputable) strategy of the agent is
to guess at each step that the environment is modeled by the shortest
Turing machine consistent with the interaction so far.

Note that this strategy is independent of the goal implied by the reward
signal.
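To make the idea concrete with the 0101... example above, here is a toy
sketch in Python where the "machines" are just repeating patterns, and
the shortest pattern consistent with the history predicts the next bit.
This is only a crude stand-in for the shortest consistent Turing machine,
which is uncomputable in general:

```python
def shortest_consistent_pattern(history, max_len=8):
    """Find the shortest repeating pattern that reproduces the history.

    A toy stand-in for 'the shortest Turing machine consistent with
    the interaction so far', restricted to periodic models.
    """
    for k in range(1, max_len + 1):
        pattern = history[:k]
        if all(history[i] == pattern[i % k] for i in range(len(history))):
            return pattern
    return None  # no periodic model of length <= max_len fits

def predict_next(history):
    """Predict the next bit using the shortest consistent pattern."""
    pattern = shortest_consistent_pattern(history)
    if pattern is None:
        return None
    return pattern[len(history) % len(pattern)]

print(predict_next("0101010101010101"))  # prints 0
```

Note that the model search never looks at a reward: the same
shortest-model principle applies whatever goal the reward signal
encodes, which is the point of the remark above.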


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/