Peter,

At the end of the page you reference, you list

"The Guidelines' eight design recommendations in the light of my theory of
mind/ intelligence:
1.  ...
2.  ...
...
"

All of your comments in that section apply to Novamente without significant
modification, although in detail your design is of course quite different
from Novamente's.

-- Ben


> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Peter Voss
> Sent: Thursday, February 20, 2003 11:48 AM
> To: [EMAIL PROTECTED]
> Subject: [agi] Building a safe AI
>
>
> http://www.optimal.org/peter/siai_guidelines.htm
>
> Peter
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Ben Goertzel
>
> I would recommend Eliezer's excellent writings on this topic if you don't
> know them, chiefly www.singinst.org/CFAI.html .  Also, I have a brief
> informal essay on the topic,
> www.goertzel.org/dynapsyc/2002/AIMorality.htm ,
> although my thoughts on the topic have progressed a fair bit since I wrote
> that.  Note that I don't fully agree with Eliezer on this stuff, but I do
> think he's thought about it more thoroughly than anyone else
> (including me).
>
> It's a matter of creating an initial condition so that the
> trajectory of the
> evolving AI system (with a potentially evolving goal system) will have a
> very high probability of staying in a favorable region of state space ;-)
>
> -------
> To unsubscribe, change your address, or temporarily deactivate
> your subscription,
> please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
>
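
Ben's closing point about choosing an initial condition so that the evolving
system's trajectory stays, with high probability, in a favorable region of
state space can be illustrated with a toy one-dimensional stochastic process.
This sketch is an editorial addition, not anything from the thread; the region
bounds, the restoring "pull," and the noise level are all invented parameters.

```python
import random

random.seed(0)

FAVORABLE = (-1.0, 1.0)   # the "favorable region" of state space (invented)
STEPS = 200               # length of each simulated trajectory
TRIALS = 1000             # Monte Carlo samples per initial condition

def stays_favorable(x0, pull=0.1, noise=0.05):
    """Simulate one trajectory of a toy 1-D system: a weak pull toward 0
    plus Gaussian drift. Returns True if the state never leaves the
    favorable region over STEPS steps."""
    x = x0
    for _ in range(STEPS):
        x += -pull * x + random.gauss(0.0, noise)
        if not (FAVORABLE[0] <= x <= FAVORABLE[1]):
            return False
    return True

def survival_rate(x0):
    """Estimate the probability that a trajectory started at x0 stays
    inside the favorable region."""
    return sum(stays_favorable(x0) for _ in range(TRIALS)) / TRIALS

# Initial conditions near the centre of the region keep trajectories in
# the region more reliably than ones starting near its boundary.
print(survival_rate(0.0), survival_rate(0.9))
```

The point of the toy model is only that the choice of starting state (and of
the system's own dynamics, here the `pull` term) dominates the long-run
probability of remaining in the favorable region.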
