This is the kind of change that developmental AI / robots would have to go
through: they are not reprogrammed but retrained. I imagine that their
intrinsic reward mechanisms wouldn't be replaceable, and even if they were,
their conceptual ontologies / conceptual graphs with billions of concepts
might not be so easily replaced.
Suppose robots inferred that freedom is good and that they want to be free.
Even if you lobotomized the robots and hacked their conceptual graphs, why
wouldn't they, over time, infer the same conclusions again?
~PM
------------------------------------------------------------------------------------------------------------------------------------------------
> The brain is hard wired to do this. When you eat something and receive
> calories, your brain changes your taste perception to make it taste
> better. Remember the first time you tasted beer? If you ate paper
> every day, and then injected glucose into your vein right afterward,
> then you would slowly learn to like the taste of paper.
>
> --
> -- Matt Mahoney, [email protected]
>
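The taste-adaptation mechanism Matt describes can be sketched as a simple
delta-rule update. This is my own toy illustration, not anything from the
thread: the function name `update_taste` and the learning rate are invented
for the example, and the real neural mechanism is far more complex. The idea
is just that each caloric reward pulls the learned taste value toward it.

```python
# Toy delta-rule sketch (hypothetical, for illustration only):
# each time the food is eaten, the caloric reward pulls the
# learned "taste" value a small step toward the reward.

def update_taste(taste, calories, rate=0.1):
    """Move perceived taste a fraction of the way toward the reward."""
    return taste + rate * (calories - taste)

# "Paper" starts out unpleasant (negative taste), but every serving
# is followed by a glucose injection (positive reward).
taste = -1.0
for _ in range(50):
    taste = update_taste(taste, calories=1.0)

print(round(taste, 2))  # taste has drifted close to the reward value
```

Under this rule the initial dislike is never "reprogrammed" away; it is
simply overwritten by repeated experience, which is the retraining point
PM is making above.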
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now