>> I agree about developmental language learning combined with automated
>> learning of grammar rules being the right approach to NLP.
I think that the fundamentals of grammar rules are hard-coded into humans
and that the specific non-determined details (i.e. language-specific
differences) are (relatively :-) quickly picked up by specifically evolved
parts of the brain (i.e. not a generalized learning mechanism). Why are you
insisting that using a generalized learning mechanism to learn grammar rules is
an effective approach? It looks to me like *learning* grammar could be skipped
entirely.
----- Original Message -----
From: Benjamin Goertzel
To: [email protected]
Sent: Saturday, April 28, 2007 9:02 AM
Subject: Re: [agi] rule-based NL system
I agree about developmental language learning combined with automated
learning of grammar rules being the right approach to NLP.
In fact, my first wife did her PhD work on this topic in 1994, at Waikato
University in Hamilton, New Zealand. She got frustrated and quit before
finishing her degree, but her program (which I helped with) inferred some nifty
grammatical rules from a bunch of really simple children's books, and then used
them as a seed for learning more complex grammatical rules from slightly more
complex children's books. This work was never published (like at least 80% of
my work, because writing things up for publication is boring and sometimes
takes more time than doing the work...).
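To make that bootstrapping idea concrete, here is a minimal sketch in Python (this
is not the original 1994 program; the tiny corpora and the hand-seeded word classes
are invented purely for illustration) of "induce patterns from very simple text,
then use them as a seed when reading slightly harder text":

# A minimal sketch of the "seed then bootstrap" idea: induce frequent
# word-class sequences from a very simple corpus, then reuse them as a
# starting grammar when scanning a slightly harder corpus.
from collections import Counter

WORD_CLASSES = {  # toy hand-seeded classes; a real system would induce these too
    "the": "DET", "a": "DET",
    "dog": "N", "cat": "N", "ball": "N", "boy": "N",
    "sees": "V", "has": "V", "chases": "V",
    "big": "ADJ", "red": "ADJ",
}

def tag(sentence):
    return [WORD_CLASSES.get(w, "UNK") for w in sentence.lower().split()]

def induce_patterns(corpus, min_count=2):
    """Count class-sequence patterns and keep the ones seen often enough."""
    counts = Counter(tuple(tag(s)) for s in corpus)
    return {p for p, c in counts.items() if c >= min_count}

simple_books = ["the dog sees the ball", "a cat sees the dog", "the boy has a ball"]
seed_grammar = induce_patterns(simple_books)   # {('DET', 'N', 'V', 'DET', 'N')}

harder_books = ["the big dog chases the red ball", "a boy sees the big cat"]
# Bootstrap step: a sentence that reduces to a known seed pattern once ADJs
# are skipped is treated as confirmed, and its extended pattern is added.
grown_grammar = set(seed_grammar)
for s in harder_books:
    tags = tag(s)
    if tuple(t for t in tags if t != "ADJ") in seed_grammar:
        grown_grammar.add(tuple(tags))

print(grown_grammar)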
However, a notable thing we found during that research was that nearly all
children's books, and children's spoken language (e.g. from the CHILDES corpus
of children's spoken language), make copious and constant reference to PICTURES
(in the book case) or to objects in the physical surroundings (in the spoken language
case).
In other words: I became convinced that in the developmental approach, if you
want to take the human child language learning metaphor at all seriously, you
need to go beyond pure language learning and take an experientially grounded
approach.
Of course, this doesn't rule out the potential viability of pursuing
developmental approaches that **don't** take the human child language learning
metaphor at all seriously ;-)
But it seems pretty clear that, in the human case, experiential grounding
plays a rather huge role in helping small children learn the rules of
language...
-- Ben G
On 4/28/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
I think YKY is right on this one. There was a Dave Barry column about going to
the movies with kids, in which a 40-foot image of a handgun appears on the
screen, at which point every mother in the theater turns to her kid and says,
"Oh look, he's got a GUN!"
Communication in natural language is extremely compressed. It's a code that
expresses the *difference* between the speaker's and the hearer's states of
knowledge, not a full readout of the meaning. (this is why misunderstanding
is so common, as witness the "intelligence" discussion here)
Even a theoretical Solomonoff/Hutter AI would flounder if given a completely
compressed bit-stream: it would be completely random, incompressible, and
unpredictable, like Chaitin's Omega number. Language is a lot closer to this
than is the sensory input stream of a kid.
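As a rough, hedged illustration of that point (it is not the Solomonoff/Hutter
formalism itself, and the toy text below is made up): already-compressed data gains
almost nothing from a second compression pass, i.e. it already looks random, whereas
raw English-like text still shrinks a great deal.

# Compare how much redundancy is left in raw text vs. in its compressed form.
import zlib

raw = (b"the dog sees the ball . the cat sees the dog . "
       b"the boy has a ball . " * 50)

once = zlib.compress(raw, 9)
twice = zlib.compress(once, 9)

print(f"raw text:         {len(raw)} bytes")
print(f"compressed once:  {len(once)} bytes ({len(once) / len(raw):.1%} of raw)")
print(f"compressed twice: {len(twice)} bytes (almost no further gain)")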
There's a quote widely attributed to a "William Martin" (anybody know who he
is?): "You can't learn anything unless you almost know it already." In
general, the hearer needs a world model almost the same as the speaker's.
Let's call this "Winograd's Theory of Understanding": that having a model
capable of simulating the domain of discourse is necessary and sufficient for
understanding discourse about it. (NB: (a) there are different levels of
completeness and accuracy for simulations and also for understanding; (b)
"symbol grounding" in the sense of associations to physical sensory/motor
signals is *not necessary*.)
I find SHRDLU and its intellectual descendants a convincing demonstration of
WTU. This implies that understanding an NL sentence consists not only in
parsing it into an internal representation and stashing it somewhere, but, if
it's something you didn't already know, modifying and augmenting the
mechanism of your world model to reflect the new knowledge in future
simulations. In other words, building a working mechanism and integrating it
into an existing vast, complex machine.
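As a toy sketch of that claim (all names and the crude "parsing" here are invented
for illustration): assimilating a sentence updates a little world model, and a later
question is answered by simulating that model rather than by looking up any stored
sentence.

# Minimal blocks-world flavor of the idea: new sentences modify the model,
# and questions are answered by running the model, not by retrieving text.
class BlocksWorld:
    def __init__(self):
        self.on = {}  # block -> whatever it sits on

    def assimilate(self, sentence):
        """Crude 'parse' for sentences like 'put a on b'; augments the model."""
        words = sentence.lower().split()
        if words[:1] == ["put"] and "on" in words:
            block, support = words[1], words[words.index("on") + 1]
            self.on[block] = support

    def is_above(self, block, target):
        """Answer by simulating the support chain."""
        while block in self.on:
            block = self.on[block]
            if block == target:
                return True
        return False

w = BlocksWorld()
w.assimilate("put a on b")
w.assimilate("put b on table")
print(w.is_above("a", "table"))  # True, though no single sentence said so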
Josh
On Saturday 28 April 2007 03:29, YKY (Yan King Yin) wrote:
> "Layered learning" is not just better, it's actually the only
> computationally feasible approach.
>
> We may talk to a baby like:
> "MILK?"
> "You want to play BALL?"
> "Oh you POO-POO again" etc.
> And these things are said simultaneously as some *physical* events (eg
> milk, ball, poo) are happening, which allows the baby to correctly *bind*
> the words to concepts, ie achieve grounding.
>
> Contrast this with something from Wall Street Journal:
> Headline: "Employees of a new plan to get Dell back on the road to growth,
> including streamlining management and looking at new methods of
> distribution beyond the computer company's direct-selling model."
> Can a baby really learn from THIS ^^^ ?
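To illustrate the kind of binding YKY describes (a sketch only; the utterances and
scenes below are invented), a simple cross-situational learner can bind a word to
whichever object co-occurs with it most consistently across episodes:

# Each utterance arrives together with the set of objects in the scene;
# counting co-occurrences is enough to ground high-frequency content words.
from collections import defaultdict, Counter

episodes = [
    ("milk ?",                  {"milk", "bottle"}),
    ("you want to play ball ?", {"ball", "floor"}),
    ("oh you poo poo again",    {"diaper"}),
    ("more milk ?",             {"milk", "cup"}),
    ("roll the ball",           {"ball", "hands"}),
]

cooccur = defaultdict(Counter)
for utterance, scene in episodes:
    for word in utterance.split():
        for obj in scene:
            cooccur[word][obj] += 1

# Bind each word of interest to its most frequent co-occurring object.
for word in ("milk", "ball"):
    obj, count = cooccur[word].most_common(1)[0]
    print(f"{word!r} -> {obj!r} (seen together {count}x)")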