I think the trick lies in multiple redundancies, both for triggering and
effecting termination.
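
To make the redundancy concrete, here is a minimal sketch in Python of a
dead-man's-switch style monitor: several independent triggers, any one of
which suffices, feeding several independent termination channels. All of
the names, paths, and thresholds are my own hypothetical illustrations,
not a real safety design.

    import os
    import signal
    import time

    HEARTBEAT_TIMEOUT = 5.0                  # seconds without an operator heartbeat
    KILL_FLAG_PATH = "/run/agi/manual_kill"  # hypothetical operator flag file

    def request_graceful_shutdown(pid):
        """Soft channel: ask the process to stop."""
        os.kill(pid, signal.SIGTERM)

    def cut_power_relay():
        """Placeholder for an out-of-band hardware power cutoff."""
        print("power relay opened")

    class KillSwitch:
        def __init__(self):
            self.last_heartbeat = time.time()

        # --- redundant triggers: ANY one firing is sufficient ---
        def operator_flag_set(self):
            return os.path.exists(KILL_FLAG_PATH)

        def heartbeat_expired(self):
            return time.time() - self.last_heartbeat > HEARTBEAT_TIMEOUT

        def anomaly_detected(self, reward_rate, limit=1000.0):
            # an implausible reward spike suggests reward tampering
            return reward_rate > limit

        def should_terminate(self, reward_rate):
            return (self.operator_flag_set()
                    or self.heartbeat_expired()
                    or self.anomaly_detected(reward_rate))

        # --- redundant effectors: escalate through independent channels ---
        def terminate(self, agent_pid):
            try:
                request_graceful_shutdown(agent_pid)
            finally:
                os.kill(agent_pid, signal.SIGKILL)
                cut_power_relay()

The point is that no single failed sensor and no single blocked channel
defeats the switch.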

We should also design in as many mechanisms as possible to avoid the
problem in the first place. For example, a reinforcement-learning-based
AGI could receive a very strong negative reward for even considering
modifications to certain critical zones of its own software or hardware,
particularly the parts that determine the reward levels themselves. (This
could be interpreted as an overpowering urge to "stay true to oneself" on
the part of the AGI: it would try to preserve its own personal identity.)
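
As a toy illustration of such a signal (the zone names and penalty size
are hypothetical choices of mine), the reward function could be wrapped
so that any action touching a protected region of the agent's own
machinery, above all the reward circuitry itself, swamps whatever reward
the action would otherwise earn:

    # A real design would need the penalty to dominate any achievable
    # gain from tampering, over the whole reachable future.
    PROTECTED_ZONES = {"reward_module", "goal_store", "kill_switch"}
    TAMPER_PENALTY = -1e9

    class Action:
        def __init__(self, name, targets):
            self.name = name
            self.targets = set(targets)  # components the action would modify

    def shaped_reward(base_reward, action):
        """Return base_reward unless the action touches a protected zone."""
        if action.targets & PROTECTED_ZONES:
            return TAMPER_PENALTY
        return base_reward

    # Editing the reward module is catastrophic by construction:
    print(shaped_reward(10.0, Action("patch_own_reward_fn", ["reward_module"])))  # -1e9
    print(shaped_reward(10.0, Action("answer_user_query", ["scratchpad"])))       # 10.0

Note that penalizing the executed action is weaker than penalizing the
AGI for even *considering* the modification; the latter would have to
hook into the planner's evaluation of candidate actions as well.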


On Mon, May 5, 2014 at 2:51 PM, Mike Archbold <[email protected]> wrote:

> There has been a little talk here, but there needs to be a
>
> SCIENCE OF THE KILL SWITCH.
>
> There are efforts I know of for a mobile kill switch. Can someone name
> a technology that does NOT need a kill switch?
>
> On 5/5/14, Aaron Hosford <[email protected]> wrote:
> > What a silly article. Apparently if you are very intelligent, your
> > opinion matters more than the opinions of experts in the field,
> > regardless of your own expertise.
> >
> > I have yet to see a convincing argument that AGI would indeed "take over
> > the world". It is a technology. Like any technology, we will
> > incrementally improve it in the directions that make the most economic
> > sense. The fact that the design process (not to mention the adoption of
> > new technologies) is incremental means that we will have plenty of time
> > to steer clear of the design instabilities that would lead to such a
> > debacle, and the fact that such instabilities are not economically
> > beneficial ensures that even if we are short-sighted, we will still have
> > plenty of incentive to avoid those instabilities.
> >
> > In other words, why on earth would we design it to do *that*?
> >
> >
> >
> > On Sat, May 3, 2014 at 12:40 PM, just camel via AGI <[email protected]>
> > wrote:
> >
> >> One would think that Hawking would be less anthropomorphic. Why expect
> >> a superintelligent entity to behave like the worst Roman emperor? If
> >> you have to be anthropomorphic, then why not expect it to behave far
> >> better than the brightest and most empathic human being? Some of us
> >> have even stopped eating meat for ethical reasons, and I guess it is
> >> safe to assume that an advanced AGI will not fight over resources, or
> >> even atoms, in this universe of abundance.
> >>
> >> There is just no good reason for an AGI to make humanity obsolete
> >> against our will. In fact, there are so many productive and cooperative
> >> options, from coexistence to merging, to teaching us about the purpose
> >> of our existence and helping us to become better beings ...
> >>
> >> On 05/03/2014 04:57 AM, Alan Grimes via AGI wrote:
> >>
> >>>
> >>> http://guardianlv.com/2014/05/stephen-hawking-tells-truth-on-ai-perhaps-worst-thing-to-happen-to-humans/