This is wishful thinking. Wishful thinking is dangerous. How about, instead of
hoping that AGI won't destroy the world, you study the problem and come up with
a safe design?

 -- Matt Mahoney, [email protected]




________________________________
From: rob levy <[email protected]>
To: agi <[email protected]>
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi] Questions for an AGI

>> why should AGIs give a damn about us?
>
> I like to think that they will give a damn because humans have a unique way of
> experiencing reality, and there is no reason not to take advantage of that
> precious opportunity to create astonishment or bliss. If anything is important
> in the universe, it's ensuring positive experiences in all areas in which it is
> conscious, and I think it will realize that. And with the resources available in
> the solar system alone, I don't think we will be much of a burden.

I like that idea.  Another reason might be that we won't crack the problem of
autonomous general intelligence, but the singularity will proceed regardless as
a symbiotic relationship between life and AI.  That would be beneficial to us
as a form of intelligence expansion, and beneficial to the artificial entity as
a way of being alive and having an experience of the world.


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/