The symbols "crashing around" are indeed well-grounded, but your manipulation then *disconnects* them from that grounding.
A system needs to build its own mental structure from the bottom up -- not have it imposed from above by an entity with questionable congruence. (A toy sketch of what "bottom up" can mean follows the quoted message below.)

----- Original Message -----
From: Derek Zahn
To: [email protected]
Sent: Sunday, March 30, 2008 5:13 PM
Subject: RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

Mark Waser writes:

>> True enough, that is one answer: "by hand-crafting the symbols and
>> the mechanics for instantiating them from subsymbolic structures".
>> We of course hope for better than this but perhaps generalizing these
>> working systems is a practical approach.

> Um. That is what is known as the grounding problem. I'm sure that
> Richard Loosemore would be more than happy to send references explaining
> why this is not productive.

It's not the grounding problem. The symbols crashing around in these robotic systems are very well grounded. The problem is that these systems are narrow, not that they manipulate ungrounded symbols.
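For concreteness, here is a minimal sketch of one reading of "building structure from the bottom up": symbol categories that emerge from clustering raw sensor vectors, rather than labels hand-assigned by a designer. Everything in it is an illustrative assumption on my part -- the k-means routine, the toy data, and all names -- not something proposed anywhere in this thread.

import numpy as np

def kmeans(points, k, iters=50, seed=0):
    # Cluster subsymbolic feature vectors into k emergent categories.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every vector to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its members (skip empty clusters).
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

# Toy "sensor stream": two latent regimes the system is never told about.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.3, size=(100, 4)),
                  rng.normal(2.0, 0.3, size=(100, 4))])

centers, labels = kmeans(data, k=2)
# The cluster indices are symbols the system built for itself; each one
# denotes nothing beyond the sensor regularity that produced it.
print(centers.round(2))

The point of the toy: a cluster index carries no meaning except the sensor statistics that produced it, which is the sense in which such a symbol stays connected to its grounding rather than having an interpretation imposed from above.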
