On Sun, Sep 21, 2014 at 5:31 AM, John Rose via AGI <[email protected]> wrote:
>> You don't need to design consciousness into AGI. You just need to design
>> human behavior. When people see that it behaves like a human, they will
>> just assume that it is conscious. What other test could they do?
>
> Agreed. You can make a p-zombie AGI. But would it really understand who
> we are if it doesn't have phenomenal consciousness? And might that not
> be dangerous to us?
Are you saying that a p-zombie AGI would behave differently than a conscious
AGI? Now we're getting somewhere. Exactly how would you test whether an AGI
really "understands" us?

-- 
-- Matt Mahoney, [email protected]
