My theory is that AGI (artificial rationality) will improve the morality of its 
owners/users. Many, perhaps most, immoral acts are driven and supported by fear, 
ignorance, and poor thinking. AGI is an inherent antidote.

Obviously this is not the whole story, but I think it is a crucial element.

 

From: Steve Richfield [mailto:[email protected]] 
…

I would like to see SOME "clear vision" of an ultimately "good" AGI before even 
considering whether a particular route is necessary for getting there. Until 
such a vision can be held up to close scrutiny, discussions of route are 
EXTREMELY premature. I strongly suspect that ALL "good" AGI descriptions are 
just wishful thinking about mechanisms that, if allowed to follow their designs, 
would do very "bad" things (in the eyes of 99% of our population).

Note that my discussion above is more about the flaws in us than in AGIs, but 
it appears to be OpenAI's goal to preserve those flaws in AI, which will 
predictably lead to an even bigger social mess than we have now. Right?

 

Steve
========

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now