Kevin Copple wrote:

Ben said,


When the system is smart enough, it will learn to outsmart the posited
Control Code, and the ethics-monitor AGI....


This isn't apparent at all, given that the Control Code could be pervasively embedded and keyed to things beyond the AGI's control.  The idea is to limit the AGI and control its progress as we wish.  I just don't see the risk that the AGI will suddenly become so intelligent that it is able to "jump out of the box" in a near-supernatural fashion, as some seem to fear.

Someone once said that a cave can trap and control a man, even though the cave is dumb rock.  We are considerably more intelligent than granite, so I would not hesitate to believe that we can control an AGI that we create.

Of course, the details of a sophisticated "kill switch" would depend on the architecture of the system, and would be beyond the scope of this casual conversation.  But to dismiss it out of hand as conceptually ineffectual is rather puzzling.


Hi Kevin. You may not realize yet that you want to turn off the kill switch, but you do.


http://www.sysopmind.com/essays/aibox.html

In general, the idea that lesser intelligences can communicate in any way with a smarter AGI and still manage to keep control over it is highly likely to be wrong. I certainly wouldn't want to bet on it.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

