On Mon, Aug 25, 2008 at 6:23 PM, Valentina Poletti <[EMAIL PROTECTED]> wrote:
>
> On 8/25/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>>
>> Why would anyone suggest creating a disaster, as you pose the question?
>>
>
> Also agree. As far as you know, has anyone, including Eliezer, suggested any
> method or approach (as theoretical or complicated as it may be) to solve
> this problem? I'm asking because the Singularity Institute is confident about
> creating a self-improving AGI in the next few decades, and, assuming they
> have no intention of creating the above-mentioned disaster, I figure someone
> must have found some way to approach this problem.

I see no realistic alternative (as in, one with a high probability of
occurring in the actual future) to creating a Friendly AI. If we don't, we
are likely doomed one way or another, most thoroughly through
Unfriendly AI. As I mentioned, one way to see Friendly AI is as a
second-chance substrate: the first thing to do to ensure any kind of
safety from fatal, or just plain bad, mistakes in the future.
Of course, establishing a dynamic that knows what counts as a mistake and
when to recover, prevent, or guide is the tricky part.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

