--- Jim Bromer <[EMAIL PROTECTED]> wrote:

> Matt Mahoney said,
> "A formal explanation of a program P would be an equivalent program Q,
> such
> that P(x) = Q(x) for all x.  Although it is not possible to prove
> equivalence in general, it is sometimes possible to prove nonequivalence
> by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
> output given x."
> 
> But I have a few problems with this, although his one example was fine.
> First, there are explanations of ideas that cannot be expressed using
> the kind of formality he was talking about.  Second, there are ideas
> that are inadequate when expressed only using the methods of formality
> he mentioned.  Third, an explanation needs to be used relative to some
> other purpose.  For example, predicting how long it takes something to
> fall to the ground is a start, but a person who understands Newton's
> law of gravity will be able to apply it under other gravities as well.
> He may be able to relate it to real-world situations where precise
> measurements are not available.  And he might apply his knowledge of
> Newton's laws to see the dimensional similarities (of length, mass,
> force, and so on) between different kinds of physical formulas.

Remember that the goal is to test for "understanding" in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.
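To make the test concrete, here is a minimal sketch (the function name and
approach are my own, not from the post): "understanding" the sequence means
finding its shortest repeating period and using it to predict the next bit.

```python
def predict_next_bit(s: str) -> str:
    """Predict the next bit of s by finding the shortest repeating period."""
    for period in range(1, len(s) + 1):
        # A valid period means every bit matches the bit one period earlier.
        if all(s[i] == s[i % period] for i in range(len(s))):
            return s[len(s) % period]
    return s[-1]  # fallback: repeat the last bit

print(predict_next_bit("0101010101010101"))  # prints "0"
```

For 0101010101010101 the shortest period is "01", so the predicted
seventeenth bit is 0 -- exactly the answer that would count as
understanding the sequence.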

If I want to test your understanding of X, I can describe X, give you part
of the description, and test if you can predict the rest.  If I want to
test if you understand a picture, I can cover part of it and ask you to
predict what might be there.

Understanding = compression.  If you can take a string, find a shorter
description (a program) that generates it, and then use that program to
predict subsequent symbols correctly, then I would say you understand
the string (or its origin).
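A hedged toy illustration of that equation (my own sketch, not from the
post): since the shortest program is uncomputable, let an off-the-shelf
compressor (zlib) stand in for it, and predict the next symbol as the one
whose continuation compresses best.

```python
import zlib

def predict_by_compression(s: str, alphabet: str = "01") -> str:
    """Predict the next symbol as the candidate giving the best compression."""
    def cost(t: str) -> int:
        # Compressed length approximates description length.
        return len(zlib.compress(t.encode()))
    return min(alphabet, key=lambda c: cost(s + c))

print(predict_by_compression("01" * 500))
```

On a long alternating string, continuing the pattern compresses no worse
than breaking it, so the compressor's "understanding" shows up as a
correct prediction.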

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not a solution to AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/