Ben,

We seem to be thinking along similar lines in most aspects here...


The way the human brain seems to work is:

* some of its architecture is oriented towards increasing the sum over n of
int(S,n,rS,rT), where rS is given by brain capacity (enhanced by tools like
writing?) and rT is different for different problems, but is a "humanly
meaningful time scale."

* some of its architecture is oriented towards increasing problem-solving
ability for n that are so large that int(S,n,rS,rT) is totally miserable
for realistic (rS, rT).
Right; for example, my brain doesn't solve visual processing problems
using completely general learning structures.
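
To pin down how I'm reading that quantity, here's a toy Python sketch of
"the sum over n of int(S,n,rS,rT)".  Every name in it is my own
placeholder, not part of your formalism:

def int_score(solve, problems_of_complexity, n, rS, rT):
    # Placeholder reading of int(S, n, rS, rT): the fraction of
    # complexity-n test problems that S solves within space bound rS
    # and time bound rT.
    problems = problems_of_complexity(n)
    solved = sum(1 for p in problems if solve(p, rS, rT) is not None)
    return solved / len(problems) if problems else 0.0

def total_intelligence(solve, problems_of_complexity, rS, rT, max_n=20):
    # The quantity in the first bullet above: the sum over n of
    # int(S, n, rS, rT).  In practice the sum is carried almost entirely
    # by the small-n terms, since the score collapses towards zero for
    # large n under any realistic (rS, rT).
    return sum(int_score(solve, problems_of_complexity, n, rS, rT)
               for n in range(1, max_n + 1))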


That is: any real-world-useful general intelligence is going to have a mix
of "truly general intelligence methods" focused on boosting int(S,n,rS,rT)
for small n, and "specialized intelligence methods" that are different from
narrow-AI methods in that they specifically leverage a combination of

* specialized heuristics
* the small-n general intelligence methods
Yes, though we will have to be careful because an AI is likely to be
a lot more dynamic than our brains.  I'm not even talking about fancy
stuff like having the AI recode itself, just the fact that an AI would
be able to solve some problems using very general intelligence and then
adapt that solution into a very efficient and quite specialized ability
with relative ease.  That ability could then be a stepping stone towards
other problems and the development of further, more complex specialized
abilities as required.
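
Roughly the kind of loop I have in mind, as a toy sketch only (Agent,
problem.kind and solution.as_routine() are hypothetical names of my own,
not a design proposal):

class Agent:
    def __init__(self, general_search):
        self.general_search = general_search  # slow, only viable for small n
        self.specialists = {}                 # problem kind -> fast routine

    def solve(self, problem):
        routine = self.specialists.get(problem.kind)
        if routine is not None:
            return routine(problem)               # cheap specialized path
        solution = self.general_search(problem)   # expensive general path
        if solution is not None:
            # "compile" what was just learned into a specialized ability,
            # which can then act as a stepping stone to harder problems
            self.specialists[problem.kind] = solution.as_routine()
        return solution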


Now, we're asking not just whether S can solve simple problems, but
whether S can solve problems that are simple against the background of
its environment and its own mind at a given point in time.

In this case, I think the same conclusions as above will hold, but more
weakly.  I.e. the general intelligence capability will hold robustly for
somewhat larger n.  But still there will be the dichotomy between small-n
general intelligence, and large-n specialized-building-on-general
intelligence.  Because only some problems relating to self and environment
are useful for achieving system goals, and the system S will be specialized
for solving *these* problems.
Which I think is more or less what I also said above.  Here the training
of the AI becomes very important, as this is what develops the abilities
of the system.  Or, put another way: it's often the case that knowing how
to solve a problem that is simpler than, or related to, the one you are
currently facing is very useful.  We don't just solve all problems from
scratch; we draw on our experiences with similar problems from the past.
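
Again, just a toy sketch of my own to make that concrete: a case memory
that hands back the solution of the most similar previously solved
problem as a starting point, rather than starting over (similarity() is
a hypothetical placeholder):

class ExperienceMemory:
    def __init__(self, similarity):
        self.cases = []               # (problem, solution) pairs seen so far
        self.similarity = similarity  # hypothetical similarity measure

    def remember(self, problem, solution):
        self.cases.append((problem, solution))

    def best_starting_point(self, new_problem):
        # Return the solution of the most similar past problem, to be
        # adapted, rather than solving the new problem from scratch.
        if not self.cases:
            return None
        best = max(self.cases,
                   key=lambda case: self.similarity(case[0], new_problem))
        return best[1]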


Now, moving on, I'll make the following claim:

** achieving "small-n general intelligence" is a basically simple math
problem **
As you always like to say: given infinite resources... ;)
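
In the same spirit, a toy sketch of why the claim looks true "given
infinite resources": pure enumeration solves any small-n problem but
dies combinatorially as n grows (the example at the bottom is
hypothetical, of course):

from itertools import product

def brute_force_solve(is_solution, alphabet, max_len):
    # Enumerate every candidate over the alphabet, shortest first, and
    # return the first one the test accepts.  Mathematically trivial;
    # practically hopeless once len(alphabet) ** max_len gets large.
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            if is_solution(candidate):
                return candidate
    return None

# e.g. brute_force_solve(lambda c: "".join(c) == "agi", "abcdefghi", 4)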


I think that the hard problem of AGI is actually the other part:

BUILDING A SYSTEM CAPABLE OF SUPPORTING SPECIALIZED INTELLIGENCES THAT
COMBINE NARROW-AI HEURISTICS WITH SMALL-N GENERAL INTELLIGENCE
Yes, this is part of the problem; the other thing you don't mention
is the difficulty of trying to solve small-n problems efficiently.


I agree, Shane, that algorithmic information theory is useful for the
"small-n general intelligence" part.  But it's just providing a complicated,
sometimes elegant mathematical formalism for what is actually one of the
*easier* parts of the practical AGI task.
I would say that there is something very important here that you haven't
mentioned: the value of having a precise, mathematically defined and
provably strong definition of what general intelligence actually is.

Clearly I accept that this is possible, and I think you do too.  However,
let's not forget that almost NOBODY else in the field of AI (or psychology,
or anywhere else) accepts this or has even heard of the idea before!
Even if they had heard of it, I'm sure there would be an enormous
amount of resistance to the idea.

So yeah, in terms of the "practical AGI task" it might not turn out
to be all that huge a deal, I'm really not sure.  But in terms of
actually getting a sizable number of people to agree on what the hell
the goal, at least in theory, actually is, it could be very significant.

In the words of Charles Kettering,

"A problem well stated is a problem half solved."

The work of Marcus Hutter is, I believe, the most significant piece
of work in this direction to date.

Cheers
Shane

