On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human-level intelligence, because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase, it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc.?
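[A toy sketch of the bootstrapping problem Richard describes.  Everything here is hypothetical illustration, not anyone's actual architecture: goals are represented as tuples of concept words, and the agent can only push a goal whose every word is already in its learned vocabulary.  A "child phase" agent therefore cannot even represent "acquire new knowledge", let alone unpack it into subgoals.]

```python
# Hypothetical illustration (not from any real system): a goal-stack
# agent whose goals are expressed in its own concept vocabulary.
class GoalStackAgent:
    def __init__(self, known_concepts):
        self.known_concepts = set(known_concepts)
        self.goal_stack = []

    def push_goal(self, goal):
        """A goal is a tuple of concept words, e.g. ('acquire', 'new', 'knowledge').

        Pushing fails if the goal mentions any concept the system has
        not yet learned -- the goal cannot even be represented, so it
        can never be unpacked into subgoals.
        """
        missing = [word for word in goal if word not in self.known_concepts]
        if missing:
            raise ValueError(f"cannot represent goal; unknown concepts: {missing}")
        self.goal_stack.append(goal)

# A 'child phase' agent that knows only a few primitive concepts:
child = GoalStackAgent(known_concepts={"grasp", "look", "move"})
child.push_goal(("grasp", "move"))          # representable: all words known
try:
    child.push_goal(("acquire", "new", "knowledge"))
except ValueError as err:
    print(err)                               # the goal cannot be formed at all
```

The circularity is that escaping the child phase requires exactly the learning goal the child phase cannot state.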

This is an excellent observation that I hadn't heard before - thanks, Richard!

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303