> On 12/9/02 7:13 PM, "Pei Wang" <[EMAIL PROTECTED]> wrote:
> > On this issue, we can distinguish 4 approaches:
> >
> > (1) let symbols get their meaning through "interpretation" (provided in another language) --- this is the approach used in traditional symbolic AI.
> >
> > (2) let symbols get their meaning by grounding on textual experience --- this is what I and Kevin suggested.
> >
> > (3) let symbols get their meaning by grounding on simplified perceptual experience --- this is what Ben and Shane suggested.
> >
> > (4) let symbols get their meaning by grounding on human-level perceptual experience --- this is what Brooks (the robotics researcher at MIT) and Harnad (who raised the "symbol grounding" issue in the first place) proposed.
>
> I can be put pretty much in the (2) camp. This is adequate for proving the basic capability of the system, and you can incrementally add (3+) later. I mostly view this as a pragmatic engineering issue, though; there is no need to unnecessarily complicate the test environment until you can prove the system is capable of handling the simplest environment. It is a much easier development trajectory unless you believe that (3) or (4) is an absolute minimum for the system to work at all (obviously I don't).
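For concreteness, a toy sketch of what approach (2) could mean operationally: a symbol's "meaning" is just the profile of textual contexts it has occurred in, and similarity of meaning falls out of similarity of contexts. This is only an illustration of the general distributional idea (the corpus, names, and window size below are made up), not Pei's or Kevin's actual design:

# Approach (2) as a toy: meanings built from textual co-occurrence,
# not from human-supplied definitions. Illustrative only.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

WINDOW = 2  # how many neighboring words count as "context"

# meaning[word] = counts of words seen near it in the corpus
meaning = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if j != i:
                meaning[w][words[j]] += 1

def similarity(a, b):
    """Cosine similarity between two words' context profiles."""
    va, vb = meaning[a], meaning[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(similarity("cat", "dog"))     # relatively high: similar textual contexts
print(similarity("cat", "cheese"))  # lower: different contexts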
Well, we feel that a simple 2D shape-recognition/creation environment is actually going to be *easier* for intuitively tuning system parameters and exploring system behavior than purely textual & formal-language interactions. But we are just starting this aspect of testing, and will tell y'all how it goes over the next N months...

-- Ben G
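A minimal sketch of the kind of 2D shape-recognition/creation environment described above, assuming a toy grid-world interface; the class, method names, and shape set are illustrative assumptions, not Novamente's actual test harness:

# Toy 2D environment: the same grid supports both "recognition"
# (perceiving a bitmap) and "creation" (drawing shapes onto it).
class ShapeWorld:
    """A tiny 2D grid the system can both perceive and draw on."""

    def __init__(self, size=8):
        self.size = size
        self.grid = [[0] * size for _ in range(size)]

    def draw_rect(self, x, y, w, h):
        """Creation side: place a filled rectangle on the grid."""
        for r in range(y, min(y + h, self.size)):
            for c in range(x, min(x + w, self.size)):
                self.grid[r][c] = 1

    def percept(self):
        """Recognition side: hand the system a flat bitmap of the scene."""
        return [cell for row in self.grid for cell in row]

    def render(self):
        return "\n".join("".join("#" if c else "." for c in row) for row in self.grid)


# Example round trip: the environment generates a scene, the system receives
# the bitmap, and would then have to name or reproduce the shape.
world = ShapeWorld()
world.draw_rect(2, 1, 4, 3)
print(world.render())
print(sum(world.percept()), "filled cells")  # 12 for a 4x3 rectangle

The appeal of a setup along these lines is that one environment exercises both the perceiving and the producing side of the system, which is what makes its behavior easy to eyeball while tuning parameters.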