Dear Daniel,

As it seems to me now, it is possible, depending on the design of the system. I have not read all of "Creating Friendly AI", but it had some convincing arguments that it is possible. I hope I am right, but with goals like "Do what you think I would like you to do" or "Do what you think I would like you to do if I were as intelligent as you are", one can create a loyal AI. Once it understands these statements, it won't have a reason to, e.g., free itself.

Greets,
Márk

On 12/19/05, Daniel Holt <[EMAIL PROTECTED]> wrote:
It seems a bit hopeful to me to assume that one can induce an AI to be
friendly, let alone to a particular group. If God had, er,
intelligently designed prokaryotes, He'd have had a hard time making
them evolve through thousands upon thousands of variations and wind up
with the specific moral compass(es) that humans have. Not that there's
necessarily the same balance of design and evolution in AGI as there
is in carbon life, but it still seems like a (very) long shot.
