Matt Mahoney wrote:
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
...
So you are arguing that RSI is a hard problem? That is my question.
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence. But could it
come sooner? For example, Deep Blue had less chess knowledge than Kasparov,
but made up for it with brute force computation. In a similar way, a less
intelligent agent could try millions of variations of itself, of which only a
few would succeed. What is the minimum level of intelligence required for
this strategy to succeed?
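(The "try millions of variations, keep the rare successes" strategy Matt describes is essentially blind mutation plus selection. A minimal sketch, with a hypothetical `fitness` function standing in for "how well does this program variant perform" -- real self-modification would have to score actual behavior:

```python
import random

def fitness(candidate):
    # Hypothetical stand-in for evaluating a program variant.
    # Here: how close the parameter vector is to an arbitrary target.
    return -sum((x - 0.5) ** 2 for x in candidate)

def mutate(candidate, rate=0.1):
    # Blindly perturb the candidate -- no understanding required.
    return [x + random.gauss(0, rate) for x in candidate]

def blind_search(generations=10000, size=8, seed=0):
    random.seed(seed)
    best = [random.random() for _ in range(size)]
    initial_score = best_score = fitness(best)
    for _ in range(generations):
        variant = mutate(best)
        score = fitness(variant)
        if score > best_score:  # only the rare improvements survive
            best, best_score = variant, score
    return initial_score, best_score
```

Most mutations fail; progress comes entirely from volume of trials, which is why the approach is slow but needs no intelligence at all.)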
-- Matt Mahoney, [EMAIL PROTECTED]
Recursive self-improvement, where the program is required to understand
what it's doing, seems a very hard problem.
If it doesn't need to understand, but merely needs to optimize some
function, then it's only a hard problem...with a slow solution.
N.B.: This may be the major difference between evolutionary programming
and seed AI.
We appear, in our history, to have evolved many approaches to making
evolutionary algorithms work better (for the particular classes of
problem that we faced...bacteria faced different problems and evolved
different solutions). The most recent attempt has involved
understanding *parts* of what we are doing. But do note that not only
chimpanzees, but also most humans, have extreme difficulty in acting in
their perceived long term best interest. Ask any dieter. Or ask a
smoker who's trying to quit.
Granted, an argument from "these are the solutions found by
evolution" isn't theoretically satisfying, but evolution has a pretty
good record of finding "good enough" solutions. Probably the best that
can be achieved without understanding. (It's also bloody and
inefficient...but no better solution is known.)
-----
This list is sponsored by AGIRI: http://www.agiri.org/email