How can you "give" something non-human human-like instincts?

Why do you think that an AGI system would adopt a totally illogical human belief system, when there are literally thousands of books dealing with every single one of them (debtism, egoism, materialism, scientism, physicalism, etc.)?

AGI will most likely behave like an enlightened human being, for both are free of belief systems. So just talk to a realized human being from your neighborhood if you want to talk to the closest thing to an AGI system ... ;-)

On 13.03.2017 00:03, Keyvan Mir Mohammad Sadeghi wrote:
We've all heard the cliché: an AI feels alive and doesn't want to die:
https://youtu.be/lhoYLp8CtXI

I've actually seen a prototype: a robot in a game world taking refuge back in its house, because that's where the battery is.

But what happens when it has been "alive" for a while, has read all of Wikipedia and everything else on the web, and has lived for millions of years with all the primitive human-like instincts we've given it, all in the span of five minutes,

and it's bored!

How to keep it "on" after that?
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
