Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread William Pearson
Matt Mahoney:
  I am not sure what you mean by AGI.  I consider a measure of intelligence
  to be the degree to which goals are satisfied in a range of environments.
  It does not matter what the goals are.  They may seem irrational to you.
  The goal of a smart bomb is to blow itself up at a given target.  I would
  consider bombs that hit their targets more often to be more intelligent.

  I consider understanding to mean intelligence in this context.  You
  can't say that a robot that does nothing is unintelligent unless you
  specify its goals.

  We may consider intelligence as a measure and AGI as a threshold.  AGI is
  not required for understanding.  You can measure the degree to which
  various search engines understand your query, spam filters understand your
  email, language translators understand your document, vision systems
  understand images, intrusion detection systems understand network traffic,
  etc.  Each system was designed with a goal and can be evaluated according
  to how well that goal is met.

  AIXI allows us to evaluate intelligence independent of goals.  An agent
  understands its input if it can predict it.  This can be measured
  precisely.


I think you are thinking of Solomonoff induction. AIXI won't answer
your questions unless it has the goal of getting reward from you for
answering them. It will do whatever it predicts will get it reward,
not try to output the continuation of the strings given to it.

  I propose prediction as a general test of understanding.  For example, do
  you understand the sequence 0101010101010101 ?  If I asked you to predict
  the next bit and you did so correctly, then I would say you understand it.

What would happen if I said, "I don't have time for silly games,
please stop emailing me"? Would you consider that I understood it?

  If I want to test your understanding of X, I can describe X, give you part
  of the description, and test if you can predict the rest.  If I want to
  test if you understand a picture, I can cover part of it and ask you to
  predict what might be there.

This only works if my goal includes revealing my understanding to you.

  Will Pearson



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- William Pearson [EMAIL PROTECTED] wrote:
 Matt Mahoney:
 I propose prediction as a general test of understanding.  For example,
 do you understand the sequence 0101010101010101 ?  If I asked you to
 predict
 the next bit and you did so correctly, then I would say you understand
 it.
 
 What would happen if I said, "I don't have time for silly games,
 please stop emailing me"? Would you consider that I understood it?

If it was a Turing test, then probably yes.  But a Turing test is not the
best way to test for intelligence.

Ben Goertzel once said something like "pattern recognition + goals = AGI".
I am generalizing pattern recognition to prediction and proposing that
the two components can be tested separately.

For example, a speech recognition system is evaluated by word error rate. 
But for development it is useful to separate the system into its two main
components, an acoustic model and a language model, and test them
separately.  A language model is just a probability distribution.  It does
not have a goal.  Nevertheless, the model's accuracy can be measured by
using it in a data compressor whose goal (implicit in the encoder) is to
minimize the size of the output without losing information.  The
compressed size correlates well with word error rate.  Such testing is
useful because if the system has a poor word error rate but the language
model is good, then the problem can be narrowed down to the acoustic
model.  Without this test, you wouldn't know.

I propose compression as a universal goal for testing the predictor
component of AI.  More formally, if the system predicts the next symbol
with probability p, then that symbol has utility log(p).
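
As a rough illustration of this log(p) utility (a sketch for concreteness, not
part of the original proposal: the order-0 model, the function names, and the
binary alphabet are assumptions made here), the ideal compressed size of a
stream under a predictive model can be computed directly:

  import math

  ALPHABET = "01"   # assumed binary stream, as in the 0101... example

  def order0(history, symbol):
      # Adaptive order-0 model with add-one smoothing: p(symbol) from counts so far.
      return (history.count(symbol) + 1) / (len(history) + len(ALPHABET))

  def code_length_bits(model, symbols):
      # Ideal code length: sum of -log2 p(next symbol) over the stream.  This is
      # the size in bits an arithmetic coder driven by `model` would approach,
      # so a better predictor yields a smaller total.
      return sum(-math.log2(model(symbols[:i], s)) for i, s in enumerate(symbols))

  print(code_length_bits(order0, "0101010101010101"))   # about 17.7 bits: order-0 misses the alternation

An order-1 (bigram) model would assign the alternating pattern probabilities
near 1 and drive the total toward zero, which is the sense in which better
prediction means better compression.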

AIXI provides a formal justification for this approach.  In AIXI, an agent
and an environment (both Turing machines) exchange symbols interactively. 
In addition, the environment signals a numeric reward to the agent during
each cycle.  The goal of the agent is to maximize the accumulated reward. 
Hutter proved that the optimal (but uncomputable) strategy of the agent is
to guess at each step that the environment is modeled by the shortest
Turing machine consistent with the interaction so far.

Note that this strategy is independent of the goal implied by the reward
signal.
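
A toy illustration of the "shortest consistent model" idea (a sketch added
here for illustration, not Hutter's construction: real AIXI ranges over all
Turing machines and is uncomputable, so the hypothesis class below is shrunk
to repeating patterns and description length to pattern length):

  def predict_next(observed, max_period=8):
      # Crude stand-in for the uncomputable ideal: instead of searching all
      # Turing machines, the hypotheses are "repeat this pattern forever",
      # examined in order of description length (pattern length).  The shortest
      # hypothesis consistent with everything observed so far makes the prediction.
      for period in range(1, max_period + 1):
          pattern = observed[:period]
          repeats = pattern * (len(observed) // period + 2)
          if repeats[:len(observed)] == observed:
              return repeats[len(observed)]
      return None   # nothing in this tiny hypothesis class fits

  print(predict_next("0101010101010101"))   # '0': the two-symbol hypothesis "01" is the shortest fit

On the 0101... sequence discussed earlier, the two-character hypothesis wins
and supplies the next bit, which is the sense in which the shortest consistent
model is the one doing the "understanding".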


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Understanding a sick puppy

2008-05-14 Thread Mike Tintner
Steve,

This is more or less where I came into this group. You've picked a, if not the, 
classic AGI problem - the problem that distinguishes it from narrow AI. 
Problematic, no right answer, and every option could well be wrong. I tried to 
open a similar problem for discussion way back - how do you invest in the 
stock market right now? There are an infinity of such problems.

The problem with such problems is that you can't program for them. Why? 
Because 1) neither you nor your AGI, if you have one, knows the right answer. 
There ain't one. In fact, every option could be wrong. And mistakes can be 
expensive. And you may have got things fundamentally wrong (as per the ulcer 
problem). And 2) you and your AGI are learner-livers, so you may not only 
have got things fundamentally wrong at the domain level, but at the 
cross-domain, still deeper level of how to learn and how to solve problems 
generally. (And Bayes won't help you if your assumptions are fundamentally 
wrong.) You have to find out how to deal with these problems - and how to 
learn and solve problems generally - as you go along, and you never stop 
learning.

If you think you've got a way of programming - in effect, a right way to live 
- for problems about which one has, by definition, inadequate knowledge at 
every level, and about which one can usually *never* get adequate, definitive 
knowledge, pray tell - with reference to your particular problem...

This is the most central question in AGI, and my experience is that everyone 
avoids it like the plague.

P.S. A psychologist would point out that you may well have unconsciously 
intended v. sick puppy as a metaphor for AGI :} .

  Steve:
  I am right now up against an understanding issue that might be a worthy 
foil for the present discussions.

  The thing to be understood:

  My daughter is a pug dog breeder, and considering my health interests, she 
gave me a hopeless-case, failure-to-thrive puppy to try to save ~3 days ago. 
He was apparently within hours of death upon arrival. Theories abound as to 
what the underlying problem is, so it would appear that the best course to 
success would be one that considers as many possibilities as possible.

  Saleable puppies are worth ~US$1K each, whereas UNsaleable puppies have a 
large negative value because of the great difficulties in disposition thereof. 
Therefore, extensive testing for hypothyroidism, Addison's, etc. has been 
tentatively ruled out on the theory that a puppy with such a problem would be 
worth more dead than alive, so why bother testing or treating such a puppy?

  Present theories:
  1.  The vet thinks that evidence of hydrocephalus, failure of the bones on 
the top of the skull to fuse together, may indicate a brain disorder. He thinks 
that some combination of a splitting headache and mis-wiring of the metabolic 
control system resulting from this explains everything.
  2.  I see that the puppy's temperature is running low and he greatly likes to 
sit at the outlet of an electric heater, and he looks weeks younger than he 
actually is, so perhaps his development is retarded due to a metabolic disorder 
of some sort, and the failure of the bones in his skull to fuse is just another 
part of retarded development - in short, that the vet may have cause and effect 
reversed.
  3.  My lady decided to try treating the puppy as though it were the age that 
it appeared to be - small enough to still be nursing, so she started feeding it 
a goat's milk formula, and it seems to be doing much better.
  4.  My daughter thinks everything is genetic and keeps a mental scoreboard of 
the problems with the puppies coming from each bitch. When one has had too many 
problem puppies, she neuters the bitch and sells it.

  Knowledge and experience would seem to favor the vet's theory. Unfortunately, 
there is no success path leading from this theory, so why even bother to 
consider it, even if it may very well be correct?

  My metabolic theories may be a little better, because there are ways of 
surviving with hypothyroidism, Addison's, etc. However, success would still 
leave a negative-value result.

  My lady's implied theory of slow development would, if correct, lead to the 
best result - perhaps even a new sort of miniature pug that might be of 
astronomical value as a stud.

  My daughter's theory, though draconian in nature, does work at the heart of 
such problems. However, where problems have hidden familial or environmental 
origins, it has the problem that it can lead to some really bad decisions, as 
neutering a good breeder reduces a ~US$5K dog to ~US$500 in value and 
eliminates the source of future ~US$1K puppies.

  As you can see, the technical correctness of a theory ends up having secondary 
value compared with its potential result. I have also seen this in automobile 
repair, where the best theory is the one with the least expensive correction. 
That way, at least when you are wrong, the cost is minimized.

  Any thoughts?

  Steve Richfield



Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Jim Bromer
- Original Message 

Matt Mahoney said:

Remember that the goal is to test for understanding in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101 ?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.

If I want to test your understanding of X, I can describe X, give you part
of the description, and test if you can predict the rest.  If I want to
test if you understand a picture, I can cover part of it and ask you to
predict what might be there.

Understanding = compression.  If you can take a string and find a shorter
description (a program) that generates the string, then use that program
to predict subsequent symbols correctly, then I would say you understand
the string (or its origin).

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not a solution to AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.

-- Matt Mahoney, [EMAIL PROTECTED]
---


I have heard this argument in another discussion group and it is too weak to
spend much time on.  I am not debating that prediction is an aspect of
intelligence.  And I agree that we need better ways to gauge 'understanding'
for computer programs.

But, Understanding=compression.  That is really pretty far out there.  This
conclusion is based on an argument like: one would be able to predict
everything if he were able to understand everything (or at least everything
predictable).  This argument, however, is clearly a fantasy.  So we come up
with a weaker version: if someone were able to predict a number of events
accurately, this would be a sign that he must understand something about those
events.  This argument might work when talking about people, but it does not
quite work the way you seem to want it to.  You cannot just paste a term of
intelligence like 'prediction' onto a mechanical process and reason that the
mechanical process may then be seen as equivalent to a mental process.
Suppose someone wrote a computer program with 1 questions and the program was
able to 'predict' the correct answer for every single one of those questions.
Does the machine understand the questions?  Of course not.  The person who
wrote the program understands the subject of those questions, but you cannot
conclude that the program understood the subject matter.

Ok, so at this point the argument might be salvaged by saying that if the
program produced answers to questions that it had never seen before, this
would have to be considered understanding.  But that is not good enough
either, because we know that even if a program is not tested for every single
mathematical calculation it could possibly make, there is no evidence that it
actually understands anything about its calculator functions in any useful way
other than producing a result.  While I am willing to agree that computation
may be one measure of understanding in persons, calculators are usually very
limited, and therefore there must be a great deal more to understanding than
producing the correct result based on a generalized procedure.

When an advanced intelligent program learns something new, it would be able to
apply that new knowledge in ways that were not produced stereotypically
through the generalizations or generative functions that it was programmed
with.  This last step is difficult to express perfectly, but if the reader
doesn't appreciate the significance of this paragraph so far, it won't matter
anyway.  Technically, a program that was able to learn in the manner that I am
thinking of would have to use some programmed generalizations (or
generalization-like relations) in that learning.  So my first sentence in this
paragraph was actually an imperfect simplification.  But the point is that an
advanced program, like the kind that I am thinking of, will also be able to
form its own generalizations (and generalization-like relations) about learned
information that is not fully predictable at the time the program was
launched.  (I am not using 'predictable' here in a contrary argument; I am
using it to show that a new concept could be learned in a non-stereotyped way
relative to some conceptual context.)  This means that understanding, even an
understanding of a process that is strictly formalized, may have effects on
the understanding of other subject matter, and the understanding of other
subject matter may have other effects on the new insight.

Insight may be applied to a range of instances.  However, the measure of
insightfulness must also be related to some more sophisticated method of
analysis; one that is effectively based on relative conceptual boundaries.

So if you want

Re: [agi] Understanding a sick puppy

2008-05-14 Thread Steve Richfield
Mike,

On 5/14/08, Mike Tintner [EMAIL PROTECTED] wrote:

  This is more or less where I came into this group. You've picked a, if
 not the, classic AGI problem. The problem that distinguishes it from narrow
 AI. Problematic, no right answer. And every option could often be wrong. I
 tried to open a similar problem for discussion way back - how do you invest
 in the stockmarket right now? There are an infinity of such problems.


At least we are on the same page.


 The problem with such problems is that you can't program for them.


But ... THAT is exactly what my Dr. Eliza program was intended to address!!!

 Why?


YES - let's dive into the presumptions that I believe are leading AGI
astray.

 Because


 1) neither you nor your AGI if you have one, know the right answer.


Is the operative word here "the" or "right" or "answer"?

a) "the" is probably a misdirection, because there are probably
several "right" answers.

b) "right" has many shades of gray, e.g. cures are greatly preferred to
treatments, and some cures/treatments are better than others. Often/usually
there is more concern for the costs of being wrong than for the benefit of
being correct.

c) "answer" implies that the AGI is making the decision, rather than the
user. Ultimately, at least in this case, it is the caregiver who makes the
final decision where to invest their money and/or effort.

 There ain't one. In fact, every option could be wrong.


Note that each of the options describes a complex cause-and-effect chain,
but they have some common links, e.g. the sick puppy is clearly
metabolically impaired, though whatever link leads to this link is unclear.
Further, there are a very finite number of potential links leading to
metabolic impairment (dehydration, organ malfunction, brain malfunction,
premature weaning, etc.)

 And mistakes can be expensive.


Indeed, the primary initial effort is to minimize the cost of mistakes while
further information is being gathered. Here, we have kept the puppy alive
for 2 days longer than it was estimated to live, and it seems to be getting
better. Unfortunately, care has been SO careful regarding the many hazards
indicated by various theories that little additional information has been
gathered, other than the puppy probably does NOT have really serious brain
damage, because it gets up out of its bed to eliminate, and sticks really
close to one particular adult dog (his father).

 And you may have got things fundamentally wrong (as per the ulcer problem).


In this case, most theories MUST be wrong because they are mutually
exclusive.

 And


 2) you and your AGI are learner-livers, so you may not only have got
 things fundamentally wrong at the domain level, but at the cross-domain,
 still deeper level of how to learn and how to solve problems generally.


Hopefully, frequent updating of the problem statement being analyzed will
compensate for errors here.

  (And Bayes won't help you if your assumptions are fundamentally wrong).


I think that the key here is to DO SOMETHING. Changing the situation will
act as an experiment and result in gathering more information to be placed
into the problem statement. The key is to not go too far and kill the puppy
by continuing in any particular wrong direction. Obviously, the puppy would
have been dead before the sun set if he hadn't been fed SOMETHING. His
choice of goat's milk formula over the best available puppy food tells a
LOT.

 You have to find out how to deal with these problems - and how to learn and
 solve problems generally  - as you go along, and you never stop learning.


There are SO many subtle clues that suggest cause and effect chain links.
The BIG problem with puppies over people is that you can't simply ask them
direct questions. I have been indirectly asking questions by offering the
puppy varying things to eat and drink and observing his preferences,
offering warm and cool environments to choose between, etc.

In the case of people, really subtle clues guide this process, e.g. most
metabolic problems result in what the military calls IFF (Identification
Friend or Foe) malfunctions in the immune system, which then cause minor
symptoms like allergies, asthma, minor infections, etc. There may be a
really MAJOR presenting symptom like cancer or COPD (emphysema), but these
almost always go along with many minor symptoms which the patient may have
completely dismissed as a part of being quite normal. Once you know that
(for example) there is a metabolic (cellular environment) problem, the list
of usual culprits is relatively short and easy to check, and most of these
problems are easily fixed.

Note that the medical/legal system has made this approach ILLEGAL and will
take away the medical license of any physician who does this! I have seen a
couple of very good doctors go through this process. The problem is that
doctors, and most especially the doctors on the medical quality assurance
boards, have absolutely no applicable education or experience in 

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- Jim Bromer [EMAIL PROTECTED] wrote:

 But, Understanding=compression.  That is really pretty far out there.
 This conclusion is based on an argument like: One would be able to predict
 everything if he was able to understand everything (or at least everything
 predictable).  This argument, however, is clearly a fantasy.  So we come up
 with a weaker version.  If someone was able to predict a number of events
 accurately this would be a sign that he must understand something about
 those events.  This argument might work when talking about people, but it
 does not quite work the way you seem to want it to.  You cannot just paste
 a term of intelligence like 'prediction' onto a mechanical process and
 reason that the mechanical process may then be seen as equivalent to a
 mental process.  Suppose someone wrote a computer program with 1 questions
 and the program was able to 'predict' the correct answer for every single
 one of those questions.  Does the machine understand the questions?  Of
 course not.  The person who wrote the program understands the subject of
 those questions, but you cannot conclude that the program understood the
 subject matter.

Your question-answering machine is algorithmically complex.  A smaller
program could describe a procedure for answering the questions, and in
that case it could answer questions not in the original set.

Here is another example:

  3 = 9
  7 = 49
  8 = 64
  12 = 144
  2 = 4
  6 = ?

You could write a program that stores the first 5 training examples in a
table, or you could find a smaller program that computes the output as a
mathematical function of the input.  When you test your programs with
"6 = ?", which program would give you the answer you expect?  Which would you
say understands the training set?
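
To make the contrast concrete, a minimal sketch (the names "table" and "rule"
are illustrative; the squaring rule is simply the obvious reading of the five
training pairs):

  # Two candidate "programs" for the training pairs above (illustration only).
  table = {3: 9, 7: 49, 8: 64, 12: 144, 2: 4}    # stores the five examples verbatim

  def rule(x):
      # A shorter description: compute the output as a function of the input.
      return x * x

  print(table.get(6))   # None: the lookup table has nothing to say about unseen input
  print(rule(6))        # 36: the compressed description generalizes to the new case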

You can take the position that a machine can never understand anything
the way that a human could.  I don't care.  Call it something else if you
want, like AI.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Understanding a sick puppy

2008-05-14 Thread Mike Tintner
Steve,

Like most people here I'm interested in general intelligence. You seem to be 
talking mainly about specific-domain intelligence - medical diagnosis - not, 
say, a computer or agent that will encompass many domains.

My off-the-cuff thought here is that a central database, organised on some 
open-source basis with medical professionals continually contributing and 
updating, which would enable people to immediately get a run-down of the major 
possible causes (and indeed minor possible ones - anything that has been 
proposed) for any given illness or set of symptoms, would be a great thing - 
assuming some such thing doesn't already exist. That would leave the user to 
make his choices.

In the same way, it would be great to have a database that could immediately 
make long lists of suggestions for any given set of investment requirements. 
That too would clearly have to leave the user to choose.

I'm dubious about any program here making specific recommendations/diagnoses - 
because the medical field, like every other professional field, is rife with 
conflicting opinions about the great majority of areas/illnesses. There are 
just so many problematic areas. It's almost the equivalent of a program that 
would make political recommendations about how to run a country.

I welcome your rare interest in discussing the end-problems of AGI (as 
distinct from the engineering problems) in detail - but if it's to be AGI it 
has to be couched in general terms - you have to explain how your or any 
approach will apply across domains.  What are the common problem-solving 
concepts, say, that will enable a program or agent to think and learn about 
symptoms of breakdown/malfunction or whatever in, say, medicine/the human 
body, cars/mechanics, plumbing, electrical systems, computer hardware, 
nuclear power stations, sick plants, etc.?

