On 23/08/2012 08:33, Ben Goertzel wrote:

However, it's a fair point that SI is obsessing more about the potential
dangers of self-modifying Agent AI, and largely side-stepping (in its
public materials and discussions so far, anyway) the more obvious dangers
of Tool AI in the hands of power-hungry or malevolent humans.

Perhaps they think a tool in the hands of power-hungry humans would be OK -
if it didn't lead to the extermination of their preferred values - and are
working on the basis of the "maxipok" principle.

--
__________
 |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.



