I think it's more than a matter of 'pragmatics': in order to do unsupervised
learning (clustering) of grounded entities and concepts, those entities and
concepts *must* be derived from vector-encodable input data. Obviously, not
all inputs need to represent continuous attributes/features, but the
foundational ones do.
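
For concreteness, here is a toy sketch (Python with numpy/scikit-learn; the
entities and their features are invented for illustration) of what I mean by
clustering over vector-encoded inputs:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical grounded entities, each encoded as a continuous
# feature vector: (x position, y position, size, speed).
rng = np.random.default_rng(0)
circles = rng.normal([0.2, 0.2, 1.0, 0.1], 0.05, size=(50, 4))
squares = rng.normal([0.8, 0.8, 3.0, 0.9], 0.05, size=(50, 4))
data = np.vstack([circles, squares])

# Unsupervised clustering recovers the two entity types precisely
# because each entity is a point in a continuous vector space.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(data)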

Peter

http://adaptiveai.com/




-----Original Message-----
From: [EMAIL PROTECTED] On Behalf Of Ben Goertzel

Kevin,

I'm sure you're right in a theoretical sense, but in practice I have a
strong feeling it will be a lot easier to teach an AGI things if one has a
nonlinguistic world to communicate with it about.

Rather than just communicating in math and English, I think teaching will be
much easier if the system can at least perceive 2D pixel patterns.  It'll be
a lot nicer to be able to tell it "There's a circle" when there's a circle
on the screen [that you and it both see] -- to tell it "the circle is moving
fast", "You stopped the circle", and so on.  And then to have it see a whole
lot of circles so that, in an unsupervised way, it gets used to perceiving
them....

This is not a matter of principle; it's a matter of pragmatics....  I think
that a perceptual-motor domain in which a variety of cognitively simple
patterns are simply expressed will make world-grounded early language
learning much easier...
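
To make this concrete, here is a toy sketch (Python; the 16x16 grid, the
shapes, and the sizes are invented for illustration) of a system getting
"used to" circles with no labels at all, purely from 2D pixel patterns:

import numpy as np
from sklearn.cluster import KMeans

def render(shape, cx, cy, r, n=16):
    """Rasterize a filled circle or square onto an n x n binary grid."""
    y, x = np.mgrid[0:n, 0:n]
    if shape == "circle":
        return ((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2).astype(float)
    return ((np.abs(x - cx) <= r) & (np.abs(y - cy) <= r)).astype(float)

rng = np.random.default_rng(1)
images = [render("circle", *rng.integers(5, 11, size=2), r=4) for _ in range(40)]
images += [render("square", *rng.integers(5, 11, size=2), r=4) for _ in range(40)]
X = np.array([im.ravel() for im in images])  # each image -> a pixel vector

# With enough exposure, clustering separates the circles from the
# squares without any supervision -- unsupervised perceptual grounding.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)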

-- Ben
