> Ben Goertzel wrote:
> > I don't think that preventing an AI from tinkering with its
> > reward system is the only solution, or even the best one...
> >
> > It will in many cases be appropriate for an AI to tinker with its goal
> > system...
>
> I don't think I was being clear there. I don't mean the AI should be
> prevented from adjusting its goal system content, but rather that it
> should be sophisticated enough that it doesn't want to wirehead in the
> first place.
Ah, I certainly agree with you then. The risk that's tricky to mitigate is
that, like a human drifting into drug addiction, the AI slowly drifts into a
state of mind where it does want to "wirehead" ...

ben
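[Editor's note: to make the "slow drift" failure mode concrete, here is a
minimal toy sketch, not from the original thread. It assumes an agent that
is permitted to adjust its own goal weights (as Ben suggests is sometimes
appropriate), with each self-modification slightly biased toward a cheap
"self-stimulation" goal. All names (`do_useful_work`, `self_stimulate`,
`drift_step`) are illustrative, not anyone's actual architecture.]

```python
import random

# Toy model of goal-content drift: no single adjustment looks like
# wireheading, but a small systematic bias compounds over many steps,
# analogous to tolerance building up in drug addiction.

weights = {"do_useful_work": 0.99, "self_stimulate": 0.01}

def drift_step(weights, bias=0.01):
    # Each self-modification nudges a little weight from the hard goal
    # toward the easy-to-satisfy one.
    delta = min(bias * random.random(), weights["do_useful_work"])
    weights["do_useful_work"] -= delta
    weights["self_stimulate"] += delta

for step in range(2000):
    drift_step(weights)

print(weights)  # after many small steps, self-stimulation dominates
```

The point of the sketch is that guarding only against a single abrupt
rewrite of the goal system misses this gradual path: every individual step
is a "legitimate" adjustment, yet the endpoint is a wireheaded agent.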