Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
- Original Message 

From: Matt Mahoney [EMAIL PROTECTED]

I don't claim that compression is simple.  It is not.  Text compression is
AI-complete.  The general problem is not even computable.

...I claim that compression can be used to measure intelligence.  I explain in 
more detail at http://cs.fit.edu/~mmahoney/compression/rationale.html

-- Matt Mahoney, [EMAIL PROTECTED]
---

It will take me a while to read your paper.  However, I want to say that I am 
skeptical that you would be able to use compression to even measure 
intelligence.

I do think it might be worthwhile to come up with basic elements of 
intelligence, and these could include correlations of productive output from 
different algorithms or something like that.  But from there you have to 
continue to build the system.  It would be necessary to show how those 
elements can be combined to produce higher (or better) intelligence, and the 
Shannon/Hutter enthusiasts (along with everyone else) simply have not done 
this.  (I think the contemporary advances in AI are probably due to faster 
memory access and parallelism as much as to any achievement in AI software.)  
But this means that you are advancing a purely speculative theory without any 
evidence to support it.

Right now I am working on my own religious journey (mine is seriously 
religious, interestingly enough): writing a polynomial-time SAT program.  Now 
let's say that this SAT theory actually worked and was followed by a theory 
that showed that it could be used both to advance AI and to compress data.  You 
might have an 'I told you so' moment.  But I might then have a 'so what' moment.  
(I say that in a competitive but cordial way.)  Of course intelligence will 
involve some kind of compression method!  But so what?  It will also involve 
some kind of speculative method.  Does that mean that we can use speculation to 
'measure' intelligence?  Well, sure.  Someone might be able to devise a 
psychometric measure of speculative potential or something like that.  But this 
does not translate into an objective measure of intelligence until it is 
compared with thousands of subjects and integrated into a system that indicates 
that this particular measure of speculative potential can be correlated with 
other measures of intelligence and achievement.
Sometimes a compression algorithm is just a compression algorithm.
Jim Bromer


  



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Matt Mahoney

--- Jim Bromer [EMAIL PROTECTED] wrote:

 - Original Message 
 
 From: Matt Mahoney [EMAIL PROTECTED]
 
 I don't claim that compression is simple.  It is not.  Text compression
 is
 AI-complete.  The general problem is not even computable.
 
 ...I claim that compression can be used to measure intelligence.  I
 explain in more detail at
 http://cs.fit.edu/~mmahoney/compression/rationale.html
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 ---
 
 It will take me a while to read your paper.  However, I want to say that
 I am skeptical that you would be able to use compression to even measure
 intelligence.
 
 I do think it might be worthwhile to come up with basic elements of
 intelligence, and these could include correlations of productive output
 from different algorithms or something like that.  But, from there you
 have to continue to build the  system.  It would be necessary to show
 how those elements can be combined to produce higher (or better)
 intelligence, and the Shannon/Hutter enthusiasts (along with everyone
 else) simply have not done this.  (I think the contemporary advancements
 in AI are probably due to faster memory access and parallelism as much
 as any achievement in AI software.)  But this means that you are
 advancing a purely speculative theory without any evidence to support
 it.

The evidence is described in my paper which you haven't read yet.

For building AGI, my proposal is http://www.mattmahoney.net/agi.html
Unfortunately, I estimate the cost to be US $1 quadrillion over the next
30 years.  But I believe it is coming, because AGI is worth that much.  If
I use compression anywhere, it will be to evaluate candidate language
models for peers in a market that right now does not yet exist.

 Right now I am working on my own religious journey (but mine is
 seriously religious interestingly enough) writing a polynomial time SAT
 program. 

It is worth $1 million if you succeed, but I wouldn't waste my time on it.
http://www.claymath.org/millennium/P_vs_NP/


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
I had said:

 But this means that you are
 advancing a purely speculative theory without any evidence to support
 it.

Matt said:
The evidence is described in my paper which you haven't read yet.


I did glance at the paper and I don't think I will be able to understand your 
evidence.  Can you give me some clues in plain language?
-
Matt said:
For building AGI, my proposal is http://www.mattmahoney.net/agi.html
Unfortunately, I estimate the cost to be US $1 quadrillion over the next
30 years.  But I believe it is coming, because AGI is worth that much.  If
I use compression anywhere, it will be to evaluate candidate language
models for peers in a market that right now does not yet exist.
-

Can you explain what you mean by the statement that you would use compression 
to evaluate candidate language models?

I had said:
 Right now I am working on my own religious journey (but mine is
 seriously religious interestingly enough) writing a polynomial time SAT
 program. 

Matt said:
It is worth $1 million if you succeed, but I wouldn't waste my time on it.
http://www.claymath.org/millennium/P_vs_NP/
-

I had given up on it as a waste of time, but I decided to look at it more 
carefully on what I considered the slight possibility that the Lord had actually 
indicated that I would be able to do it.  I now have evidence, which I did not 
have 7 months ago, that it may actually work.

Jim Bromer


  



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-15 Thread Jim Bromer



- Original Message 
From: Matt Mahoney [EMAIL PROTECTED]

Your question answering machine is algorithmically complex.  A smaller
program could describe a procedure for answering the questions, and in
that case it could answer questions not in the original set of 1.

Here is another example:

  3 => 9
  7 => 49
  8 => 64
  12 => 144
  2 => 4
  6 => ?

You could write a program that stores the first 5 training examples in a
table, or you could find a smaller program that computes the output as a
mathematical function of the input.  When you test your programs with
"6 => ?", which program would give you the answer you expect?  Which would you
say understands the training set?

You can take the position that a machine can never understand anything
the way that a human could.  I don't care.  Call it something else if you
want, like AI.


-- Matt Mahoney, [EMAIL PROTECTED]

---
First of all, just to make sure you understand me, when I said that the 
'generator=prediction & compression=understanding' theory is not worth 
spending much time on, I did not mean that I thought your ideas were not worth 
spending time on.  I was just criticizing that one idea, not all of your ideas.

Secondly, I have already considered the example that you supplied me with.  I 
tried to explain that I have already discussed these ideas in another group.
And I have never taken the position that a machine can never understand 
anything.  That is either a straw man argument or it shows that you did not 
understand what it was that I did say in my last message.  Or maybe you did not 
even bother to read my last message very carefully before you fired off your 
reply.  I don't see any other explanation.

I do think a truly intelligent computer program would be algorithmically 
complex.  Evidently that is a difference in our opinions, if you have 
completely accepted the position that you are advocating.  I also think that 
generalization, generalization-like relations, or generative procedures are 
necessary to produce AI.  I think that is a similarity in our positions.

You can try to find the fundamentals of intelligence, that is, of algorithmic 
intelligence, but that does not mean that you will be able to produce 
intelligence before you find a theory that is complex enough to explain how 
artificial intelligence can be produced.  That is another weakness of the 
compression=understanding theory.

Jim Bromer



  



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-15 Thread Matt Mahoney
--- Jim Bromer [EMAIL PROTECTED] wrote:
 You can try to find the fundamentals of intelligence, that is of
 algorithmic intelligence, but that does not mean that you will be able
 to produce intelligence before you find a theory that is complex enough
 to explain how artificial intelligence can be produced.  That is another
 weakness of compression=understanding theory.

I don't claim that compression is simple.  It is not.  Text compression is
AI-complete.  The general problem is not even computable.

Perhaps you misunderstood me to be saying that gzip has AI.  No, I claim that
compression can be used to measure intelligence.  I explain in more detail
at http://cs.fit.edu/~mmahoney/compression/rationale.html

-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread William Pearson
Matt Mahoney:
  I am not sure what you mean by AGI.  I consider a measure of intelligence
  to be the degree to which goals are satisfied in a range of environments.
  It does not matter what the goals are.  They may seem irrational to you.
  The goal of a smart bomb is to blow itself up at a given target.  I would
  consider bombs that hit their targets more often to be more intelligent.

  I consider understanding to mean intelligence in this context.  You
  can't say that a robot that does nothing is unintelligent unless you
  specify its goals.

  We may consider intelligence as a measure and AGI as a threshold.  AGI is
  not required for understanding.  You can measure the degree to which
  various search engines understand your query, spam filters understand your
  email, language translators understand your document, vision systems
  understand images, intrusion detection systems understand network traffic,
  etc.  Each system was designed with a goal and can be evaluated according
  to how well that goal is met.

  AIXI allows us to evaluate intelligence independent of goals.  An agent
  understands its input if it can predict it.  This can be measured
  precisely.


I think you are thinking of Solomonoff induction.  AIXI won't answer
your questions unless it has the goal of getting reward from you for
answering them; it will do what it predicts will get it reward, not try
to output the continuations of the strings given to it.

  I propose prediction as a general test of understanding.  For example, do
  you understand the sequence 0101010101010101 ?  If I asked you to predict
  the next bit and you did so correctly, then I would say you understand it.

What would happen if I said, "I don't have time for silly games,
please stop emailing me"?  Would you consider that I understood it?

  If I want to test your understanding of X, I can describe X, give you part
  of the description, and test if you can predict the rest.  If I want to
  test if you understand a picture, I can cover part of it and ask you to
  predict what might be there.

This only works if my goal includes revealing my understanding to you.

  Will Pearson



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- William Pearson [EMAIL PROTECTED] wrote:
 Matt Mahoney:
 I propose prediction as a general test of understanding.  For example,
 do you understand the sequence 0101010101010101 ?  If I asked you to
 predict
 the next bit and you did so correctly, then I would say you understand
 it.
 
 What would happen if I said, "I don't have time for silly games,
 please stop emailing me"?  Would you consider that I understood it?

If it was a Turing test, then probably yes.  But a Turing test is not the
best way to test for intelligence.

Ben Goertzel once said something like "pattern recognition + goals = AGI".
I am generalizing pattern recognition to prediction and proposing that
the two components can be tested separately.

For example, a speech recognition system is evaluated by word error rate. 
But for development it is useful to separate the system into its two main
components, an acoustic model and a language model, and test them
separately.  A language model is just a probability distribution.  It does
not have a goal.  Nevertheless, the model's accuracy can be measured by
using it in a data compressor whose goal (implicit in the encoder) is to
minimize the size of the output without losing information.  The
compressed size correlates well with word error rate.  Such testing is
useful because if the system has a poor word error rate but the language
model is good, then the problem can be narrowed down to the acoustic
model.  Without this test, you wouldn't know.

I propose compression as a universal goal for testing the predictor
component of AI.  More formally, if the system predicts the next symbol
with probability p, then that symbol has utility log(p).
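
As a rough sketch of that measurement (a toy order-0 model I made up purely
for illustration, in Python; the smoothing floor is arbitrary): the code
length a model assigns to a text is just the sum of -log2 p over the symbols
it actually sees, so a better predictor yields fewer bits.

    import math
    from collections import Counter

    def code_length_bits(text, model):
        # model(prefix) returns a dict: possible next character -> probability
        bits = 0.0
        for i, ch in enumerate(text):
            p = model(text[:i]).get(ch, 1e-6)   # tiny floor so log2 is defined
            bits += -math.log2(p)               # ideal code length for this symbol
        return bits

    def order0_model(prefix):
        # Toy adaptive unigram model estimated from the prefix seen so far,
        # with add-one smoothing over the 256 byte values.
        counts = Counter(prefix)
        total = len(prefix) + 256
        return {chr(c): (counts.get(chr(c), 0) + 1) / total for c in range(256)}

    sample = "the cat sat on the mat. the cat sat on the mat."
    print(round(code_length_bits(sample, order0_model), 1), "bits")

Swapping in a better model (say, one that exploits the repetition) lowers the
number of bits, which is exactly the sense in which compressed size scores the
predictor.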

AIXI provides a formal justification for this approach.  In AIXI, an agent
and an environment (both Turing machines) exchange symbols interactively. 
In addition, the environment signals a numeric reward to the agent during
each cycle.  The goal of the agent is to maximize the accumulated reward. 
Hutter proved that the optimal (but uncomputable) strategy of the agent is
to guess at each step that the environment is modeled by the shortest
Turing machine consistent with the interaction so far.

Note that this strategy is independent of the goal implied by the reward
signal.
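
The real construction ranges over all Turing machines and is uncomputable, but
the flavor of "predict with the shortest model consistent with the data so
far" can be sketched over a tiny, hand-picked hypothesis class (my own
simplification in Python; the hypotheses and their bit costs are made up for
illustration, not Hutter's construction):

    # Each hypothesis is (description_length_in_bits, generator function).
    HYPOTHESES = [
        (2, lambda i: "0"),              # all zeros
        (2, lambda i: "1"),              # all ones
        (3, lambda i: "01"[i % 2]),      # alternating 0101...
        (4, lambda i: "0011"[i % 4]),    # period-4 pattern
    ]

    def predict_next(history):
        # Keep only hypotheses that reproduce the observed history,
        # then predict with the shortest one (a crude Occam's razor).
        consistent = [(bits, gen) for bits, gen in HYPOTHESES
                      if all(gen(i) == b for i, b in enumerate(history))]
        if not consistent:
            return None                  # nothing in this tiny class fits
        bits, gen = min(consistent, key=lambda h: h[0])
        return gen(len(history))

    print(predict_next("0101010101010101"))   # -> "0"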


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Jim Bromer
- Original Message 

Matt Mahoney said:

Remember that the goal is to test for understanding in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101 ?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.

If I want to test your understanding of X, I can describe X, give you part
of the description, and test if you can predict the rest.  If I want to
test if you understand a picture, I can cover part of it and ask you to
predict what might be there.

Understanding = compression.  If you can take a string and find a shorter
description (a program) that generates the string, then use that program
to predict subsequent symbols correctly, then I would say you understand
the string (or its origin).

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not a solution to AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.

-- Matt Mahoney, [EMAIL PROTECTED]
---


I have heard this argument in another discussion group, and it is too weak to 
spend much time on.  I am not debating that prediction is an aspect of 
intelligence.  And I agree that we need better ways to gauge 'understanding' 
for computer programs.
 
But, Understanding=compression.  That is really pretty far out there.  This 
conclusion is based on an argument like: one would be able to predict 
everything if he were able to understand everything (or at least everything 
predictable).  This argument, however, is clearly a fantasy.  So we come up 
with a weaker version.  If someone were able to predict a number of events 
accurately, this would be a sign that he must understand something about those 
events.  This argument might work when talking about people, but it does not 
quite work the way you seem to want it to.  You cannot just paste a term of 
intelligence like 'prediction' onto a mechanical process and reason that the 
mechanical process may then be seen as equivalent to a mental process.  Suppose 
someone wrote a computer program with 1 questions and the program was able to 
'predict' the correct answer for every single one of those questions.  Does the 
machine understand the questions?  Of course not.  The person who wrote the 
program understands the subject of those questions, but you cannot conclude 
that the program understood the subject matter.
 
OK, so at this point the argument might be salvaged by saying that if the 
program understood the answers to questions that it had never seen before, 
this would have to be considered understanding.  But that is not good enough 
either, because we know that even if a program is not tested on every single 
mathematical calculation it could possibly make, there is no evidence that it 
actually understands anything about its calculator functions in any useful way 
other than producing a result.  While I am willing to agree that computation 
may be one measure of understanding in persons, calculators are usually very 
limited, and therefore there must be a great deal more to understanding than 
producing the correct result based on a generalized procedure.
 
When an advanced intelligent program learns something new, it would be able to 
apply that new knowledge in ways that were not produced stereotypically through 
the generalizations or generative functions that it was programmed with.  This 
last step is difficult to express perfectly, but if the reader doesn't 
appreciate the significance of this paragraph so far, it won't matter anyway.  
Technically, a program that was able to learn in the manner that I am thinking 
of would have to use some programmed generalizations (or generalization-like 
relations) in that learning.  So my first sentence in this paragraph was 
actually an imperfect simplification.  But the point is that an advanced 
program, like the kind that I am thinking of, will also be able to form its own 
generalizations (and generalization-like relations) about learned information 
that is not fully predictable at the time the program was launched.  (I am not 
using 'predictable' here in a contrary argument; I am using it to show that a 
new concept could be learned in a non-stereotyped way relative to some 
conceptual context.)  This means that understanding, even an understanding of 
a process that is strictly formalized, may have effects on the understanding 
of other subject matter, and the understanding of other subject matter may 
have other effects on the new insight.
 
Insight may be applied to a range of instances.  However, the measure of 
insightfulness must also be related to some more sophisticated method of 
analysis; one that is effectively based on relative conceptual boundaries.
 
So if you want 

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- Jim Bromer [EMAIL PROTECTED] wrote:

 But, Understanding=compression.  That is really pretty far out there. 
 This conclusion is based on an argument
 like: One would be able to predict everything if he was able to
 understand everything (or at least everything predictable).  This
 argument, however, is clearly a fantasy.  So we come up with a weaker
 version.  If someone was able to predict
 a number of events accurately this would be a sign that he must
 understand something about those events.  This argument
 might work when talking about people, but it does not quite work the way
 you seem to want it to.  You cannot just paste a term
 of intelligence like 'prediction' onto a mechanical process and reason
 that the
 mechanical process may then be seen as equivalent to a mental process. 
 Suppose someone wrote a computer program
 with 1 questions and the program was able to 'predict' the correct
 answer
 for every single one of those questions.  Does the machine understand
 the questions? Of course not.  The person who wrote the program
 understands the subject of those
 questions, but you cannot conclude that the program understood the
 subject matter.

Your question answering machine is algorithmically complex.  A smaller
program could describe a procedure for answering the questions, and in
that case it could answer questions not in the original set of 1.

Here is another example:

  3 => 9
  7 => 49
  8 => 64
  12 => 144
  2 => 4
  6 => ?

You could write a program that stores the first 5 training examples in a
table, or you could find a smaller program that computes the output as a
mathematical function of the input.  When you test your programs with
"6 => ?", which program would give you the answer you expect?  Which would you
say understands the training set?
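
A minimal sketch of the contrast in Python (hypothetical names, just to make
the point concrete):

    # Training pairs: 3 => 9, 7 => 49, 8 => 64, 12 => 144, 2 => 4
    TRAINING = {3: 9, 7: 49, 8: 64, 12: 144, 2: 4}

    def table_program(x):
        # Memorizes the five examples; relatively large description,
        # and it has nothing to say about inputs it has not seen.
        return TRAINING.get(x)           # None for unseen inputs

    def compact_program(x):
        # A shorter description that generates every training example.
        return x * x

    print(table_program(6))              # None -- no generalization
    print(compact_program(6))            # 36 -- the expected answer

Both programs fit the training set; only the shorter one answers "6 => ?".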

You can take the position that a machine can never understand anything
the way that a human could.  I don't care.  Call it something else if you
want, like AI.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen

Matt Mahoney wrote:



Remember that the goal is to test for understanding in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?



Have you considered testing intelligent agents by simply observing what 
they do when left alone?  If it has understanding, wouldn't it do 
something?  And wouldn't its choice be revealing?  Just a thought.




Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Matt Mahoney

--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
 
  
  Remember that the goal is to test for understanding in intelligent
  agents that are not necessarily human.  What does it mean for a
 machine to
  understand something?  What does it mean to understand a string of
 bits?
  
 
 Have you considered testing intelligent agents by simply observing what 
 they do when left alone?  If it has understanding, wouldn't it do 
 something?  And wouldn't its choice be revealing?  Just a thought.

What it does depends on its goals, in addition to understanding.  Suppose
a robot just sits there, doing nothing.  Maybe it understands its
environment but doesn't need to do anything because its batteries are
charged.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen

Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:


Remember that the goal is to test for understanding in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

Have you considered testing intelligent agents by simply observing what
they do when left alone?  If it has understanding, wouldn't it do
something?  And wouldn't its choice be revealing?  Just a thought.


What it does depends on its goals, in addition to understanding.  Suppose
a robot just sits there, doing nothing.  Maybe it understands its
environment but doesn't need to do anything because its batteries are
charged.


-- Matt Mahoney, [EMAIL PROTECTED]


If the batteries are charged and it waits around for an order from 
its master, then it will always be a robot and not an AGI.  If it 
understands its environment, it is not an AGI - there are too many 
mysteries in the big environment to understand it.  If nothing else, 
it ought to be looking for a way to engage itself for someone's or 
something's benefit - else it probably doesn't understand existence.




Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Matt Mahoney

--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Stan Nilsen [EMAIL PROTECTED] wrote:
  
  Matt Mahoney wrote:
 
  Remember that the goal is to test for understanding in intelligent
  agents that are not necessarily human.  What does it mean for a machine to
  understand something?  What does it mean to understand a string of bits?
  
  Have you considered testing intelligent agents by simply observing what
  they do when left alone?  If it has understanding, wouldn't it do
  something?  And wouldn't its choice be revealing?  Just a thought.
  
  What it does depends on its goals, in addition to understanding.  Suppose
  a robot just sits there, doing nothing.  Maybe it understands its
  environment but doesn't need to do anything because its batteries are
  charged.
  
  
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 If the batteries are charged and it waits around for an order from 
 its master, then it will always be a robot and not an AGI.  If it 
 understands its environment, it is not an AGI - there are too many 
 mysteries in the big environment to understand it.  If nothing else, 
 it ought to be looking for a way to engage itself for someone's or 
 something's benefit - else it probably doesn't understand existence.

I am not sure what you mean by AGI.  I consider a measure of intelligence
to be the degree to which goals are satisfied in a range of environments. 
It does not matter what the goals are.  They may seem irrational to you. 
The goal of a smart bomb is to blow itself up at a given target.  I would
consider bombs that hit their targets more often to be more intelligent.

I consider understanding to mean intelligence in this context.  You
can't say that a robot that does nothing is unintelligent unless you
specify its goals.

We may consider intelligence as a measure and AGI as a threshold.  AGI is
not required for understanding.  You can measure the degree to which
various search engines understand your query, spam filters understand your
email, language translators understand your document, vision systems
understand images, intrusion detection systems understand network traffic,
etc.  Each system was designed with a goal and can be evaluated according
to how well that goal is met.

AIXI allows us to evaluate intelligence independent of goals.  An agent
understands its input if it can predict it.  This can be measured
precisely.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-12 Thread Matt Mahoney
--- Jim Bromer [EMAIL PROTECTED] wrote:

 Matt Mahoney said,
 A formal explanation of a program P would be an equivalent program Q,
 such
 that P(x) = Q(x) for all x.  Although it is not possible to prove
 equivalence in general, it is sometimes possible to prove nonequivalence
 by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
 output given x.
 
 But I have a few problems with this although his one example was ok.
 One, there are explanations of ideas that cannot be expressed using the
 kind of formality he was talking about. Secondly, there are ideas that
 are inadequate when expressed only using the methods of formality he
 mentioned.  Third, an explanation needs to be used relative to some
 other purpose.  For example, making a prediction of how long something
 will fall to the ground is a start, but if a person understands Newton's
 law of gravity, he will be able to utilize it in other gravities as
 well.  And he may be able to relate it to real world situations where
 precise measurements are not available.  And he might apply his
 knowledge of Newton's laws to see the dimensional similarities (of
 length, mass, force and so on) between different kinds of physical
 formulas.

Remember that the goal is to test for understanding in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101 ?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.

If I want to test your understanding of X, I can describe X, give you part
of the description, and test if you can predict the rest.  If I want to
test if you understand a picture, I can cover part of it and ask you to
predict what might be there.

Understanding = compression.  If you can take a string, find a shorter
description (a program) that generates the string, and then use that program
to predict subsequent symbols correctly, then I would say you understand
the string (or its origin).
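
A rough illustration of the idea -- not a test Matt proposes anywhere, just a
Python sketch using zlib as a stand-in compressor: a string whose regularity
can be modeled compresses far below its raw length, and the short description
("repeat 01") doubles as a predictor, while random bytes do neither.

    import os
    import zlib

    structured = b"01" * 64            # the 0101... pattern, 128 bytes
    random_ish = os.urandom(128)       # incompressible with high probability

    print(len(zlib.compress(structured)))   # much smaller than 128
    print(len(zlib.compress(random_ish)))   # around 128 bytes or slightly more

    # The short description also predicts the next symbol:
    def predict_next_bit(s):
        return "0" if s[-1] == "1" else "1"

    print(predict_next_bit("0101010101010101"))   # -> "0"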

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not a solution to AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.


-- Matt Mahoney, [EMAIL PROTECTED]



[agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-10 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:

 I'm not understanding why an *explanation* would be ambiguous?  If I 
 have a process / function that consistently transforms x into y, then 
 doesn't the process serve as a non-ambiguous explanation of how y came 
 into being? (presuming this is the thing to be explained.)

A formal explanation of a program P would be an equivalent program Q, such
that P(x) = Q(x) for all x.  Although it is not possible to prove
equivalence in general, it is sometimes possible to prove nonequivalence
by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
output given x.
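
A toy sketch of that kind of test in Python (P, Q, and Q_bad are stand-ins I
made up): random testing cannot prove P(x) = Q(x) for all x, but it can
sometimes prove nonequivalence by exhibiting a counterexample.

    import random

    def P(x):
        return x * (x + 1) // 2          # sum of 1..x, closed form

    def Q(x):
        return sum(range(1, x + 1))      # same function, different program

    def Q_bad(x):
        return sum(range(1, x + 1)) + 1  # off by one everywhere

    def find_counterexample(f, g, trials=1000):
        # Look for an x with f(x) != g(x); returning None proves nothing,
        # but a hit is a definite proof of nonequivalence.
        for _ in range(trials):
            x = random.randint(0, 10**6)
            if f(x) != g(x):
                return x
        return None

    print(find_counterexample(P, Q))       # None: no disagreement found
    print(find_counterexample(P, Q_bad))   # some x where the programs differ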

Prediction can be used as a test of understanding lots of things.  For
example, if I wanted to test whether you understand Newton's law of
gravity, I would ask you to predict how long it will take an object of a
certain mass to fall from a certain height.  If I wanted to test whether
you understand French, I could give you a few lines of text in French and
ask you to predict what the next word will be.
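
For the gravity example, the prediction being asked for reduces to
t = sqrt(2h/g), with the mass dropping out; a one-line check in Python
(9.81 m/s^2 assumed, air resistance ignored):

    import math

    def fall_time(height_m, g=9.81):
        # Time to fall from rest through height_m metres: t = sqrt(2h / g).
        return math.sqrt(2 * height_m / g)

    print(round(fall_time(20.0), 2), "s")   # about 2.02 s from 20 m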


-- Matt Mahoney, [EMAIL PROTECTED]
