Ben Goertzel writes:
>This is a key aspect of Eliezer Yudkowsky's "Friendly Goal 
>Architecture"

Yeah; too bad there isn't really anyone else to cite on this one.  It 
will be interesting to see what other AGI pursuers have to say about 
the hierarchical goal system issue, once they write up their thoughts.

>The Novamente design does not lend itself naturally to a hierarchical 
>goal structure in which "all the AI's actions flow from a single 
>supergoal."

Doesn't it depend pretty heavily on how you look at it?  If the 
supergoal is abstract enough and generates a diversity of subgoals, 
then many people wouldn't call it a "supergoal" at all.  I guess it 
ultimately boils down to how the AI designer looks at it.

>GoalNodes are simply PredicateNodes that are specially labeled as 
>GoalNodes; the special labeling indicates to other MindAgents that 
>they are used to drive schema (procedure) learning.

Okay; got it.
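
Just to make sure I'm picturing it right, here's a rough sketch of how I
read that description.  The class names and interfaces below are made up
for illustration (and the real Novamente codebase is C++, not Python), so
take it as a cartoon of the idea rather than the actual design: a GoalNode
has no special machinery of its own, it's a PredicateNode carrying a label
that schema-learning MindAgents look for.

    # Hypothetical names, not actual Novamente code -- just the idea that
    # a GoalNode is an ordinary PredicateNode with an extra label, and the
    # label is what tells schema-learning MindAgents to act on it.

    class PredicateNode:
        def __init__(self, name, labels=None):
            self.name = name
            self.labels = set(labels or [])

        def is_goal(self):
            return "GoalNode" in self.labels


    class SchemaLearningMindAgent:
        """Illustrative MindAgent: only predicates flagged as goals
        drive schema (procedure) learning."""

        def run(self, nodes):
            for node in nodes:
                if node.is_goal():
                    self.learn_schema_for(node)

        def learn_schema_for(self, goal):
            print(f"learning schemata that tend to satisfy {goal.name}")


    # The same node type serves as an ordinary predicate or, with the
    # extra label, as something that drives procedure learning.
    nodes = [
        PredicateNode("RoomIsTidy"),
        PredicateNode("TeacherIsSmiling", labels=["GoalNode"]),
    ]
    SchemaLearningMindAgent().run(nodes)

If that's roughly right, then the goal machinery really is just a
labeling convention layered on the existing node types, which fits with
your point that the goal structure is emergent rather than built in.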

>> Letting the AI grow up with whichever goals look immediately
>> useful, ("regularly check and optimize chunk of code X", "win this
>> training game", etc.) and then trying to "weave in ethics" ...
>
>That was not my suggestion at all, though.  The ethical goals can be
>there from the beginning.  It's just that a purely hierarchical goal
>structure is highly unlikely to emerge as a "goal map", i.e. an
>attractor, of Novamente's self-organizing goal-creating dynamics.

Right, that statement was directed towards Philip Sutton's mail, but I 
appreciate your stepping in to clarify.  Of course, whether AIs with 
substantially prehuman (low) intelligence can have goals that deserve 
being called "ethical" or "unethical" is a matter of word choice and 
definitions.  

Michael Anissimov
