But then it's also not yet superintelligent and cannot yet destroy/obsolete our species, right? Just as a person with Down syndrome probably can't.

On 05/12/2014 03:54 PM, Aaron Hosford wrote:
Bugs happen. The truth is, the first few versions of this technology are going to suck -- until we improve it. This happens with every new technology.
It does not need the "human notion" of "right and wrong". There are absolute/universal notions of right and wrong: lower-entropy states are more profitable and thus "right".

Also, why do you imply that something vastly more intelligent than us, something that grew up within our society, would not understand our notions of right and wrong? That makes no sense. We won't reach into the Yudkowskian "Mindspace" and pick out some random fully fledged agent with predefined properties. Whatever AGI system we are talking about will have to evolve based on our knowledge pool, and of course it will be confronted with our notions of right and wrong.


On 05/12/2014 03:54 PM, Aaron Hosford wrote:
AGIs won't know, understand, or (especially) care about the human notions of right and wrong, good and evil, unless we design it to do so.



