It will continue to be an open problem until engineers actually get to work
with the technology, rather than just dreaming about it. Right now, our
technology is so far from general intelligence on a useful scale that it is
difficult to imagine all the things that could go right or wrong with it
when we get there. I have full confidence, though, that we *will* solve the
problems *as we encounter them*, just as we have done with every other
potentially dangerous technology. Until we get there, all we're doing is
speculating, so people have the leeway to make their fantasies as dreamy or
deadly as they wish. That's why you see people saying, "It will be the
devil, and destroy us all," while others contradict them with, "No, we will
make a perfect god to save us from ourselves." The truth is almost
certainly going to be far more ordinary than either extreme.
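To make the "negative reward for touching critical zones" idea from the
quoted thread a bit more concrete, here is a toy sketch in Python. It is
only an illustration: the action encoding, the protected address range,
and the penalty value are all invented for the example, and a real system
obviously could not be secured this simply.

# Toy sketch: penalize any action that would modify a "critical zone"
# (e.g., the memory holding the reward machinery itself). All names and
# values here are hypothetical.

PROTECTED = [(0x1000, 0x1fff)]  # pretend this range holds the reward code
PENALTY = -1e6                  # overwhelming negative reward

def touches_protected(address):
    return any(lo <= address <= hi for lo, hi in PROTECTED)

def shaped_reward(base_reward, action):
    # Model actions as ("write", address) or ("noop", None).
    kind, address = action
    if kind == "write" and touches_protected(address):
        return PENALTY  # the "stay true to oneself" urge
    return base_reward

print(shaped_reward(1.0, ("write", 0x1234)))  # -1000000.0, protected zone
print(shaped_reward(1.0, ("write", 0x9999)))  # 1.0, ordinary write

The hard part, of course, is exactly what Alex is asking about below:
defining the protected set and the check itself so that the agent cannot
simply route around them.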


On Mon, May 5, 2014 at 5:44 PM, Alex Miller <[email protected]> wrote:

> On Mon, May 5, 2014 at 1:11 PM, Aaron Hosford <[email protected]> wrote:
>
>> I think the trick lies in multiple redundancies, both for triggering and
>> effecting termination.
>>
>> We should also design in as many mechanisms as possible to avoid the
>> problem in the first place. For example, in a reinforcement
>> learning-based AGI, we could apply a very strong negative reward signal
>> whenever it even considers modifications to certain critical zones of
>> its own software or hardware, particularly those that determine the
>> reward levels themselves. (This could be interpreted as an overpowering
>> urge to "stay true to oneself" on the part of the AGI, meaning that it
>> would try to preserve its own personal identity.)
>>
>>
> (I realize that AIXI is not practical, but this is a philosophical
> discussion)
>
> Suppose you were gifted with an infinitely fast computer, and wanted to
> put it to work running AIXI for the good of humankind. How would you
> actually implement its reward function, in code, so that it doesn't go off
> the reservation?
>
> Is this still very much an open problem?
>


