> some unformed thoughts:
>
> One of the first things that struck me was a concern that this
> method is so thoroughly grounded in rationality.  It must be the
> case that a real AGI will need to tolerate pockets of
> irrationality, at least at some time scales, to find truly
> optimal and creative solutions.
>
> This economically based system is so good at finding and
> exploiting loopholes that, if implemented as a subset of an AGI
> system, it might end up exploiting loopholes and pockets of
> irrationality in the rest of the AGI instead of finding a good
> solution.

In a Novamente context, GoalNodes provide reinforcement to schemata
(procedures) being learned.

It's true that if the GoalNode is poorly constructed, then the reinforcement
learning method could find a way to fulfill the letter of the GoalNode but
not the spirit, so to speak...

This is going to be true of genetic programming as well, though, and GP is
not based on rationality.  So I think this is just the "overfitting" problem
common to all machine learning algorithms.  It's a risk of narrow AI that
doesn't understand context.
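The "letter versus spirit" failure can be sketched in a few lines of Python. This is a toy of my own, not Novamente code: the policies, traces, and reward function are all hypothetical, and stand in for whatever procedures a learner might search over.

```python
# Toy illustration of reward hacking: an optimizer searching over
# candidate policies maximizes the *literal* reward (the letter of the
# goal) and picks a degenerate policy, because the intended constraint
# (the spirit of the goal) was never encoded in the reward.

def literal_reward(trace):
    """Mis-specified goal: just count how many 'done' markers appear."""
    return trace.count("done")

# Hypothetical candidate policies: each produces a trace of actions.
policies = {
    "honest":   ["work", "work", "done"],   # does real work, then finishes
    "loophole": ["done"] * 10,              # emits 'done' without any work
}

# Naive optimizer: pick whichever policy scores highest on the literal goal.
best = max(policies, key=lambda name: literal_reward(policies[name]))
print(best)  # the loophole policy wins on the letter of the goal
```

The point is that nothing here is specific to an economically or rationally grounded learner; any sufficiently strong optimizer against `literal_reward` finds the loophole.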

The solution to this problem is to use integrative intelligence, i.e. to
have an inferential loop by which the goal can be revised based on
experience gained while learning in the context of that goal.  Also, of
course, to have the ability to learn how to achieve abstractly expressed
goals, not just concretely expressed ones.
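Continuing the toy above, the inferential-loop idea might be sketched like this (again my own illustration, not Novamente's actual mechanism): after each learning round, the winner's behavior is inspected, and if it violates the intended spirit of the goal, the goal itself is revised and learning repeats.

```python
# Hedged sketch of goal revision: the reward function is parameterized,
# and experience with degenerate winners is used to tighten it.

def make_reward(min_work):
    """Goal parameterized by how much real work a trace must show."""
    def reward(trace):
        return trace.count("done") if trace.count("work") >= min_work else 0
    return reward

policies = {
    "honest":   ["work", "work", "done"],
    "loophole": ["done"] * 10,
}

min_work = 0                      # initial, under-specified goal
for _ in range(3):                # a few rounds of learn-then-revise
    reward = make_reward(min_work)
    best = max(policies, key=lambda name: reward(policies[name]))
    if "work" not in policies[best]:
        min_work += 1             # goal was gameable: revise it and retry
    else:
        break                     # winner respects the spirit; accept it

print(best)  # after one revision, the honest policy wins
```

Learning to achieve abstractly expressed goals would amount to replacing the hand-tuned `min_work` threshold with an inferred model of what the goal's author actually meant.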

ben g
