[EMAIL PROTECTED] wrote:

> So maybe a bit of hard wiring that we should build into an AGI is the requirement for a long cooling off period before an AGI could do any self-modification to its core ethical coding.
There's no such thing as hard-wiring morality. You can't do that any more than you can hard-wire a chatbot with an IQ of 200, or hard-wire Windows XP not to crash. Either you know how to embody the needed complexity in ones and zeroes, or you don't; either you know how to keep it from stepping on its own toes, or you don't.

I think the "hard-wiring" fantasy derives from a kind of fictional crossover between humans and AIs - a slavemaster's wish that orders given to those darned rebellious humans could somehow be branded into their pre-existing minds with the absolute rigidity that supposedly characterizes "machines". What distinguishes real AI morality from the vast majority of fictional discussions of it is that you aren't trying to order about an existing mind, but actually creating a new mind; a process totally foreign to our evolved intuitions for other minds, and hence totally foreign to most authors.

The apparent "rigidity" of machines is the result of anthropomorphizing a nonmindful physical process. Machines are not rigid cognitions but non-cognitions. Yet another reason to go on emphasizing that an AI is no more a machine than a human is a protein.

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


