Harry,

Count me in the camp that views grounding as the essential problem of 
traditional AI approaches, at least as it relates to AGI. An embodied AI [*], 
in which the only informational inputs come via so-called "sensory" 
modalities, is the only way I can see for an AI to arrive at its own internal 
sense of meaning. Without such a sense, symbols passed to the AI are simply 
arbitrary data to be manipulated. John Searle's Chinese Room argument (see 
Wikipedia) effectively shows why manipulation of ungrounded symbols is 
nothing but raw computation, with no understanding of the symbols in question.

Of course, traditional AI approaches have proceeded on the assumption that 
intelligence can be achieved without an internal sense of meaning -- i.e., 
that purely algorithmic approaches can lead to intelligent behavior. While 
this is certainly true in narrow domains (such as chess), I can't see how an 
AGI can be truly general if its meaning is externally defined.
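To make the circularity concrete, here's a toy sketch (purely illustrative -- 
not any real system): a "dictionary" in which every symbol is defined only in 
terms of other symbols in the same dictionary. Chasing definitions never 
bottoms out in the world; it just cycles.

```python
# Toy illustration: an ungrounded "dictionary" where every symbol is
# defined only by other symbols in the same dictionary (hypothetical data).
defs = {
    "dog": ["animal", "loyal"],
    "animal": ["living", "thing"],
    "loyal": ["faithful"],
    "faithful": ["loyal"],          # circular
    "living": ["thing", "animal"],  # circular
    "thing": ["thing"],             # self-referential
}

def expand(symbol, seen=None):
    """Chase definitions; report when a chain loops back on itself."""
    seen = seen or set()
    if symbol in seen:
        return f"<cycle at '{symbol}'>"
    seen = seen | {symbol}
    parts = defs.get(symbol, [])
    return "(" + " ".join(expand(p, seen) for p in parts) + ")"

# Every chain of definitions terminates in a cycle, never in the world:
print(expand("dog"))
```

Nothing in the loop ever touches anything outside the symbol system, which is 
exactly the situation the Chinese Room describes.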

Terren

[*]
Note that the embodiment does not have to be physical; it can be virtual, as 
in Novamente's use of Second Life.


--- On Mon, 8/4/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> As I've come out of the closet over the list tone issues, I guess I
> should post something AI-related as well -- at least that will make me
> net neutral between relevant and irrelevant postings. :-)
> 
> One of the classic current AI issues is grounding, the argument being
> that a dictionary cannot be complete because it is only
> self-referential, and *has* to be grounded at some point to be truly
> meaningful. This argument is used to claim that abstract AI can never
> succeed, and that there must be a physical component of the AI that
> connects it to reality.
> 
> I have never bought this line of reasoning. It seems to me that meaning
> is a layered thing, and that you can do perfectly good reasoning at one
> (or two or three) levels in the layering, without having to go "all the
> way down." And if that layering turns out to be circular (as it is in a
> dictionary in the pure sense), that in no way invalidates the reasoning
> done.
> 
> My own AI work makes no attempt at grounding, so I'm really hoping I'm
> right here.

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
