X: Of course this is a variation on the "grounding problem" in AI. But do you think some sort of **absolute** grounding is relevant to effective interaction between individual agents (assuming you think such ultimate grounding could even perform a function within a limited system)? Or might it be that systems interact effectively to the extent that their dynamics are based on **relevant** models, without even proximate grounding in any functional sense?
Er... my body couldn't make any sense of this :). Could you be clearer, giving examples of the agents/systems involved and of what you mean by absolute/proximate grounding?
