Kevin C said:
> Of course, the details of a sophisticated "kill switch" would depend on
> the architecture of the system, and be beyond the scope of this casual
> conversation. But to dismiss it out of hand as conceptually ineffectual
> is rather puzzling.
The problem is unsolvable by definition. An AGI, at least the kind of AGI we are afraid of, will be far smarter than we are. Consequently, we cannot expect to prevent it from circumventing any precautions we implement, beyond those of intrinsic motivation. For an analogy, consider chimpanzees trying to keep a Navy SEAL operative in containment for an indefinite period of time (while still interacting with him).

-Brad
