> To avoid the problem entirely, you have to figure out how to make
> an AI that
> doesn't want to tinker with its reward system in the first place. This, in
> turn, requires some tricky design work that would not necessarily seem
> important unless one were aware of this problem. Which, of course, is the
> reason I commented on it in the first place.
>
> Billy Brown

I don't think that preventing an AI from tinkering with its reward system is
the only solution, or even the best one...

It will in many cases be appropriate for an AI to tinker with its goal
system...
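
To make the distinction concrete, here is a toy Python sketch -- purely
my own illustration, not anyone's actual architecture (the supergoal_score
function, the Agent class, and the target value 10.0 are all invented).
The agent is free to revise its *subgoals*, but only through a process
that answers to a fixed top-level evaluator, and it has no write access
to that evaluator or to its reward channel:

    import random

    def supergoal_score(state):
        # Fixed top-level evaluation; the agent has no write access here.
        return -abs(state - 10.0)

    class Agent:
        def __init__(self):
            self.subgoal = 0.0   # a revisable intermediate target

        def revise_subgoal(self, candidate):
            # Sanctioned self-modification: adopt the candidate subgoal
            # only if it scores better under the fixed supergoal.
            if supergoal_score(candidate) > supergoal_score(self.subgoal):
                self.subgoal = candidate

        def act(self, world):
            # Move the world partway toward the current subgoal.
            return world + 0.5 * (self.subgoal - world)

    agent, world = Agent(), 0.0
    for _ in range(20):
        agent.revise_subgoal(random.uniform(-20.0, 20.0))
        world = agent.act(world)
    print(round(world, 2))   # ends up near 10.0, the supergoal optimum

The point being that "revising the goal system" and "hacking the reward
channel" are different operations; an architecture can sanction the first
while walling off the second.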

If you don't already know them, I'd recommend Eliezer's excellent writings
on this topic, chiefly www.singinst.org/CFAI.html.  I also have a brief
informal essay on the subject, www.goertzel.org/dynapsyc/2002/AIMorality.htm,
although my thinking has progressed a fair bit since I wrote it.  Note that
I don't fully agree with Eliezer on this stuff, but I do think he's thought
about it more thoroughly than anyone else (including me).

Rather than forbidding goal modification outright, it's a matter of creating
an initial condition such that the trajectory of the evolving AI system
(with a potentially evolving goal system) has a very high probability of
staying in a favorable region of state space ;-)
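
To caricature that picture in code: treat the goal system as a point x
drifting in state space under self-modification, with a "favorable region"
around an attractor.  The sketch below is pure toy dynamics of my own --
the region |x| < 1, the noise level, the restoring pull, and the helper
functions stays_favorable and p_stay are all invented for illustration --
but it shows the sense in which the initial condition controls the
long-run probability of staying favorable:

    import random

    def stays_favorable(x0, steps=100, noise=0.1, pull=0.1):
        # One trajectory: random self-modification drift plus a weak
        # pull back toward the attractor at 0.
        x = x0
        for _ in range(steps):
            x += random.gauss(0.0, noise) - pull * x
            if abs(x) >= 1.0:
                return False   # trajectory left the favorable region
        return True

    def p_stay(x0, trials=2000):
        # Monte Carlo estimate of the probability of staying favorable.
        return sum(stays_favorable(x0) for _ in range(trials)) / trials

    for x0 in (0.0, 0.5, 0.9):
        print(x0, p_stay(x0))

Deep in the basin the stay-probability is essentially 1; near the boundary
it degrades quickly.  That's the sense in which the design problem is about
the initial condition, not about freezing the goal system.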

-- Ben G



