On Wed, Aug 27, 2008 at 5:40 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> It doesn't matter what I do with the question. It only matters what an AGI 
> does with it.

The AGI doesn't do anything with the question; you do. You answer the
question by implementing Friendly AI: FAI is the answer to the
question.

> If you can't guarantee Friendliness, then self-modifying approaches to
> AGI should just be abandoned. Do we agree on that?

More or less, but keep in mind that "guarantee" doesn't need to be a
formal proof of absolute certainty. If you can't show that a design
implements Friendliness, you shouldn't implement it. But one can't say
anything about an entire open-ended class of designs such as
"self-modifying approaches".

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
