----- Original Message ----

Matt Mahoney said:

Remember that the goal is to test for "understanding" in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101 ?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.

If I want to test your understanding of X, I can describe X, give you part
of the description, and test if you can predict the rest.  If I want to
test if you understand a picture, I can cover part of it and ask you to
predict what might be there.

Understanding = compression.  If you can take a string and find a shorter
description (a program) that generates the string, then use that program
to predict subsequent symbols correctly, then I would say you understand
the string (or its origin).

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not a solution to AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.

-- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
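Mahoney's compression-to-prediction step can be sketched concretely. This is an editorial illustration only, not his implementation: an ordinary compressor (zlib) stands in for the ideal, uncomputable one, and the predictor simply picks whichever continuation of the string compresses better.

```python
import zlib

def predict_next_bit(bits: str) -> str:
    """Predict the next bit of a 0/1 string by asking which continuation
    an off-the-shelf compressor encodes more compactly.  The continuation
    that compresses better is the better "explanation" of the data."""
    sizes = {b: len(zlib.compress((bits + b).encode())) for b in "01"}
    return min(sizes, key=sizes.get)

# The alternating sequence from the example, made long enough for the
# compressor to exploit the regularity:
print(predict_next_bit("01" * 500))  # -> '0' (continuing the pattern compresses better)
```

Note the hedge built into the toy: zlib only finds regularities DEFLATE can exploit, so this "understands" far less than a search over all short programs would.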


I have heard this argument in another discussion group, and it is too weak to
spend much time on.  I am not debating that "prediction" is an aspect of
intelligence.  And I agree that we need better ways to gauge 'understanding'
for computer programs.
 
But "Understanding = compression"?  That is really pretty far out there.  This
conclusion is based on an argument like: one would be able to predict
everything if one were able to understand everything (or at least everything
predictable).  That argument, however, is clearly a fantasy.  So we come up
with a weaker version: if someone were able to predict a number of events
accurately, this would be a sign that he must understand something about those
events.  This argument might work when talking about people, but it does not
quite work the way you seem to want it to.  You cannot just paste a term of
intelligence like 'prediction' onto a mechanical process and reason that the
mechanical process may then be seen as equivalent to a mental process.
Suppose someone wrote a computer program with 10,000 questions and the program
was able to 'predict' the correct answer for every single one of those
questions.  Does the machine understand the questions?  Of course not.  The
person who wrote the program understands the subject of those questions, but
you cannot conclude that the program understood the subject matter.
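The lookup-table point can be made concrete with a toy sketch (the stored questions here are invented for illustration):

```python
# A "predictor" that is nothing but a lookup table over stored questions.
# It scores perfectly on the questions it was given, yet it models nothing.
answers = {
    "What is 2 + 2?": "4",
    "What is the capital of France?": "Paris",
}

def predict(question: str) -> str:
    # Pure retrieval: the programmer's understanding is frozen into the
    # table; the program itself generalizes to nothing outside it.
    return answers.get(question, "no idea")

print(predict("What is 2 + 2?"))  # -> 4
print(predict("What is 3 + 3?"))  # -> no idea (fails off the table)
```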
 
Ok, so at this point the argument might be salvaged by saying that if the
program could correctly answer questions it had never seen before, that would
have to be considered understanding.  But that is not good enough either,
because even though a program cannot be tested on every single mathematical
calculation it could possibly make, there is no evidence that it actually
understands anything about its calculator functions in any useful way beyond
producing a result.  While I am willing to agree that computation may be one
measure of understanding in persons, calculators are usually very limited, and
therefore there must be a great deal more to understanding than producing the
correct result from a generalized procedure.
 
When an advanced intelligent program learns something new, it would be able to
apply that new knowledge in ways that were not produced stereotypically
through the generalizations or generative functions it was programmed with.
This last step is difficult to express perfectly, but if the reader doesn't
appreciate the significance of this paragraph so far, it won't matter anyway.
Technically, a program that was able to learn in the manner I am thinking of
would have to use some programmed generalizations (or generalization-like
relations) in that learning, so my first sentence in this paragraph was
actually an imperfect simplification.  But the point is that an advanced
program, like the kind I am thinking of, will also be able to form its own
generalizations (and generalization-like relations) about learned information
that was not fully predictable at the time the program was launched.  (I am
not using 'predictable' here in a contrary argument; I am using it to show
that a new concept could be learned in a non-stereotyped way relative to some
conceptual context.)  This means that understanding, even an understanding of
a process that is strictly formalized, may have effects on the understanding
of other subject matter, and the understanding of other subject matter may
have other effects on the new insight.
 
Insight may be applied to a range of instances.  However, the measure of
insightfulness must also be related to some more sophisticated method of
analysis, one that is effectively based on relative conceptual boundaries.
 
So if you want to use generalization and generators as a measure of computer
intelligence (as I feel is implied in your argument that an accurate result
from a compressed form is equivalent to prediction), you will need to make it
more sophisticated.

Jim Bromer



      

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com
