Would an acceptable response be to reprogram the goals of the UFAI to make it Friendly?
Yes -- but with the minimal changes necessary to do so. Preferably this would be done by enforcing Friendliness and letting the AI itself work out which of its goals must change to remain consistent with Friendliness -- i.e. don't alter any goals you don't absolutely have to, and let the AI resolve any such choices itself wherever possible.
Does the answer to either question change if we substitute "human" for
"UFAI"?
The answer does not change for an Unfriendly human. The answer does change
for a Friendly human.
Whether the agent is a human or an AI is irrelevant. Whether it is Friendly or Unfriendly is exceptionally relevant.