On Feb 4, 2008 7:38 PM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> Well if you take something like the "talking heads" experiment
> (http://www.isrl.uiuc.edu/~amag/langev/cited2/steelsthetalkingheadsexperiment.html)
> and ask what it would take to scale this up to human-like language
> abilities inevitably you're always drawn back to the fact that the
> images used are of a trivial nature.

Perhaps. However, I think there's at least as much work required to
take a robot (with localisation + mapping if you like) and scale it up
to communicate with human-like language.

> There needs to be some kind of reliable pattern which you can
> correlate your linguistics with.  Uncertainties can be dealt with, but
> if the pattern is completely unreliable from one observation to the
> next you're lost.  Simulation doesn't really deal with the problem, or
> rather it deals with the problem by ignoring it.

This is a very good point. Reliable patterns are important and dealing
with uncertainty in your patterns is critical to real-world
situations. That said, it might be possible to build an AGI without a
decent ability to deal with uncertainty and then program that in
later. It's hard to tell. What do you guys think?
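
To make the "reliable pattern" point a bit more concrete, here's a toy
Python sketch. The three-word lexicon, the 20% noise rate and the
plain co-occurrence counting are all made up by me; this isn't the
actual talking heads algorithm, just the flavour of correlating words
with noisy perceptual categories:

import random
from collections import defaultdict

random.seed(0)

# Invented toy lexicon: what each word "really" refers to.
true_lexicon = {"wabu": "red", "gork": "blue", "miti": "green"}
categories = list(true_lexicon.values())
noise = 0.2   # chance that perception reports the wrong category

# word -> category -> co-occurrence count
counts = defaultdict(lambda: defaultdict(int))

for _ in range(500):
    word, category = random.choice(list(true_lexicon.items()))
    if random.random() < noise:
        # unreliable perception: a wrong category gets observed
        category = random.choice(categories)
    counts[word][category] += 1

# Map each word to the category it co-occurred with most often.
learned = {w: max(c, key=c.get) for w, c in counts.items()}
print(learned)
print(learned == true_lexicon)

With perception right 80% of the time the counts still recover the
lexicon; push the noise towards chance level and the learned mapping
becomes arbitrary, which I take to be Bob's point about patterns that
aren't reliable from one observation to the next.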

-J
