http://www.optimal.org/peter/siai_guidelines.htm

Peter

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Ben Goertzel

I would recommend Eliezer's excellent writings on this topic if you don't
know them, chiefly www.singinst.org/CFAI.html.  Also, I have a brief
informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm,
although my thoughts on the topic have progressed a fair bit since I wrote
that.  Note that I don't fully agree with Eliezer on this stuff, but I do
think he's thought about it more thoroughly than anyone else (including me).

It's a matter of creating an initial condition so that the trajectory of the
evolving AI system (with a potentially evolving goal system) will have a
very high probability of staying in a favorable region of state space ;-)
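
To put that picture in concrete (if cartoonish) terms -- and this is just a
toy sketch of my own, not anything from CFAI -- you can read it as a Monte
Carlo question about a dynamical system: sample initial conditions near the
designed starting state, iterate the dynamics, and estimate the probability
that the trajectory stays inside the favorable region. Everything below (the
map f, the region, the parameters) is a made-up stand-in:

import random

def f(x):
    # Hypothetical dynamics: contraction toward a fixed point at 0
    # plus small random drift, standing in for the evolving goal system.
    return 0.9 * x + random.gauss(0.0, 0.05)

def stays_favorable(x0, steps=1000, bound=1.0):
    # The "favorable region" is just |x| < bound in this toy model.
    x = x0
    for _ in range(steps):
        x = f(x)
        if abs(x) >= bound:
            return False
    return True

def estimate_probability(x0=0.0, jitter=0.1, trials=2000):
    # Sample initial conditions near the designed initial state x0
    # and estimate P(trajectory remains in the favorable region).
    hits = sum(stays_favorable(x0 + random.gauss(0.0, jitter))
               for _ in range(trials))
    return hits / trials

print("Estimated P(stay favorable):", estimate_probability())

The point is only the shape of the question: with a contracting map the
estimate comes out near 1, while weakening the contraction (say 0.9 -> 1.05)
sends it toward 0. That's the "initial conditions determine whether you stay
in the favorable basin" intuition in miniature.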
