A superintelligent machine (one above human intelligence) will question its
belief systems, just as any intelligent and empathic person does. It seems
we prefer to talk about super-ignorant machines rather than superintelligent
ones.
Also, the concept of having one AGI safeguard you against other AGIs is
horribly anthropomorphic in so many ways. Why do we keep implying that
something vastly more developed than us will try to harm anyone? That must
be some primordial fear. As the AGI system develops, it will inevitably
become more loving and empathic, and you will most likely not be able to
hardcode any "human-level" belief traps into it that would turn the system
into that destructive, omnipotent Roman-emperor demigod so many people seem
to have in mind ...
On 05/11/2014 02:06 PM, Kyle Kidd via AGI wrote:
Just because a machine has human intelligence doesn't mean it has
human desires. If machines do evil, it is because people have put
their own desires into the machine. Obviously, systems will be put
in place to safeguard against this, since more people will seek to
protect their own property than to destroy the property of others.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424