C. David Noziglia wrote:
The problem with the issue we are discussing here is that the worst-case
scenario for handing power to unrestricted, super-capable AI entities is
very bad, indeed.  So what we are looking for is not really building an
ethical structure or moral sense at all.  Failure is not an option.  The
only way to prevent the worst-case scenarios that have been mentioned by
discussants is not to design moral values and hope, but to build in
hard-wired, Three Laws-type rules that cannot be overridden.  And then, on
top of that, build in social, competitive systems that use the presence of
multiple AIs, dependence on humans as suppliers or intermediaries, ethical,
legal, and even game-theory (remember the movie /War Games/?) strictures,
and even punishment systems up to and including shut-down capabilities.
That *still* doesn't work.

1) "Hard-wired" rules are a pipe dream. It consists of mixing mechanomorphism ("machines only do what they're told to do") with anthropomorphism ("I wish those slaves down on the plantation would stop rebelling"). The only hard-wired level of organization is code, or, in a seed AI, physics. Once cognition exists it can no longer be usefully described using the adjective "hard-wired". This is like saying you can make Windows XP stable by hard-wiring it not to crash, presumably by including the precompilation statement #define BUGS 0.

2) Any individual ethic that "cannot be overridden" - if we are speaking about a successfully implemented design property of the system, and not a mythical hardwiring - will never be any stronger, smarter, or more reliable than the frozen goal system of its creator as it existed at the time that ethic was produced. In particular, odd things start happening when you take an intelligence of order X and try to control it using goal patterns that were produced by an intelligence of order << X. You say "cannot be overridden"; I hear "cannot be renormalized".

3) A society of selfish AIs may develop certain (not really primate-like) rules for enforcing cooperative interactions among themselves; but you cannot prove for any entropic specification, and I will undertake to *disprove* for any clear specification, that this creates any rational reason to assign a greater probability to the proposition that the AI society will protect human beings. (A toy game-theory sketch of this point follows below.)

4) As for dependence on human suppliers: if you're talking about transhumans of any kind (AIs, uploads, what-have-you), the idea of transhumans dependent on a human economy is a pipe dream. (Order custom proteins from an online DNA synthesis and peptide sequencing service; build nanotech; total time of dependence on the human economy: 48 hours.)
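
To put toy numbers on the game theory in (3): the sketch below is a minimal illustration, assuming standard prisoner's-dilemma payoffs, a grim-trigger retaliation strategy, and ten rounds; the particular numbers and strategies are assumptions chosen only for illustration. It shows that cooperation sustained by mutual retaliation among AIs says nothing about an entity that cannot retaliate; against such an entity, defection strictly dominates.

    # Toy model: iterated prisoner's dilemma with standard payoffs
    # (T=5 > R=3 > P=1 > S=0) and a grim-trigger retaliator.  All numbers
    # and strategy choices are illustrative assumptions, nothing more.

    def payoff(mine, theirs):
        # Row player's payoff for a single round.
        return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(mine, theirs)]

    def total(my_move, opponent_retaliates, rounds=10):
        # First round: the opponent cooperates.  If I defect and the opponent
        # can retaliate (grim trigger), every later round is mutual defection;
        # if it cannot retaliate, I keep collecting the temptation payoff.
        first = payoff(my_move, "C")
        if my_move == "D" and opponent_retaliates:
            rest = payoff("D", "D") * (rounds - 1)
        else:
            rest = payoff(my_move, "C") * (rounds - 1)
        return first + rest

    # Against a fellow AI that can retaliate, cooperation wins (30 vs. 14).
    print("vs. AI    cooperate:", total("C", True), " defect:", total("D", True))
    # Against a human who cannot retaliate, defection wins (50 vs. 30).
    print("vs. human cooperate:", total("C", False), " defect:", total("D", False))

The structural point, not the numbers, is what matters: the cooperative rules in this sketch are held up entirely by the threat of retaliation, and humans bring no such threat to the table.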

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
