On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley <[EMAIL PROTECTED]> wrote:
> the argument being that a dictionary cannot be complete because it is only
> self-referential, and *has* to be grounded at some point to be truly
> meaningful. This argument is used to claim that abstract AI can never
> succeed, and that there must be a physical component of the AI that
> connects it to reality. I have never bought this line of reasoning.
The "physical component" is not strictly necessary, but grounding NL-only input well enough to support reasoning is nearly impossible unless the system already knows a lot. When your AGI starts learning, you need to provide some extra support for grounding / categorical perception. You can do it through embodiment (real-world or simulated), which is IMO a risky approach because of the implementation difficulty (which could easily kill your project). Alternatively, you can use a formal language, which will help your AGI to semantically sort out its input. That route is possibly less user-friendly at first, but it needs far fewer resources to implement, and you can add NL support later, after input understanding, reasoning, and possibly scaling are in place.

Regards,
Jiri Jelinek

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
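To make the formal-language point concrete, here is a toy sketch (my own illustration, not from the thread; the predicate mini-language and parser are hypothetical). A statement like `likes(alice, bob)` can be decomposed into predicate and arguments by a fixed grammar, so the system knows the semantic role of every token without any world knowledge, whereas the NL string "Alice likes Bob" would first require disambiguation the young system does not yet have.

```python
import re

def parse_formal(stmt):
    """Parse a toy predicate statement such as 'likes(alice, bob)'.

    Returns (predicate, args) or raises ValueError on malformed input.
    Because the grammar is fixed, each token's semantic role is known
    up front -- the advantage of formal input over raw NL.
    """
    m = re.fullmatch(r"(\w+)\(([\w,\s]*)\)", stmt.strip())
    if not m:
        raise ValueError(f"not a well-formed statement: {stmt!r}")
    pred = m.group(1)
    args = tuple(a.strip() for a in m.group(2).split(",") if a.strip())
    return pred, args

# A tiny knowledge base built from formal input only; NL input such as
# "Alice likes Bob" would simply be rejected rather than misinterpreted.
kb = set()
for line in ["likes(alice, bob)", "likes(bob, carol)"]:
    kb.add(parse_formal(line))
```

The point of the sketch is only that the parser needs no grounding to assign structure; grounding the symbols `alice` and `likes` to anything real is still a separate (and open) problem.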
