You make the statement below as if it were a fact, and I don't believe it to be a fact at all.
If a "disembodied" AGI has models suggested by an embodied person, then that concept can have meaning in a real world setting without the AGI actually having a body at all. If a disembodied AGI has a hypothesis about the real world and doesn't have a direct way to test if it is true, then it could just ask a human to do so on it's behalf. Disabled persons are not stupid or useless just because some/most of their ability to interact with the world is impaired. If a climate model has algorithms and data from the real world, do you argue that the result will be nothing but semantic free gibberish? I know that some systems (specifically systems without models or a lot of human interaction) have had grounding problems but your statement below seems to be stating something that is far from proven fact. Your conclusions about "concept of self" and "unemboodied agent means ungrounded symbols" are also not shared by me and not explained or proven by you. Your saying something is doesn't necessarily make it true. -- David Clark ----- Original Message ----- From: "Terren Suydam" <[EMAIL PROTECTED]> To: <[email protected]> Sent: Friday, August 29, 2008 9:18 AM Subject: Re: [agi] How Would You Design a Play Machine? > To an unembodied agent, the concept of self is indistinguishable from any other "concept" it works with. I use concept in quotes because to the unembodied agent, it is not a concept at all, but merely a symbol with no semantic context attached. All such an agent can do is perform operations on ungrounded symbols - at best, the result of which can appear to be intelligent within some domain (e.g., a chess program). ------------------------------------------- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/ Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51 Powered by Listbox: http://www.listbox.com
