On Fri, Aug 24, 2012 at 5:31 PM, Aaron Hosford <[email protected]> wrote:

> I meant literally, "Make people happy," in those words, maybe with an,
> "Ask people what makes them happy," tacked on or hardcoded in.
There is no hardware architecture that I am aware of with a "make people happy" instruction. You would still need to give the AGI on the order of 10^17 bits of human knowledge before it knows how to do this. And who says you will be the one to give it these instructions? There are 7 billion other people, and they may have other ideas about what the AGI should do.

So what's the point? You aren't reducing the cost of AGI, and you are taking a safe design (where everyone has a tiny bit of control in a competitive market) and making it dangerous by giving it a simplistic (and therefore wrong) central goal for all of humanity.

-- Matt Mahoney, [email protected]

AGI Archives: https://www.listbox.com/member/archive/303/=now
