--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Uh... I forgot to mention that explaining those data about child 
> language learning was the point of the work.  It's a well-known effect, 
> and this is one of the reasons why the connectionist models got everyone 
> excited:  psychological facts started to be explained by the performance 
> of the connectionist nets.

Yes, which is why I still believe this is the right approach (not that it will
be easy).

> The next problem that you will face, along this path, is to figure out 
> how you can get such nets to elegantly represent such things as more 
> than one token of a concept in one sentence:  you can't just activate 
> the "duck" node when you here that phrase from the Dire Straits song 
> Wild West End:  "I go down to Chinatown ...  Duck inside a doorway; 
> Duck to Eat".

That is a problem.  Humans use context to resolve ambiguity, and a neural net
ought to do the same on its own if we get it right.  One problem with some
connectionist models is that they try to assign a one-to-one mapping between
words and neurons.  The brain may have on the order of 10^8 neurons devoted to
language, enough to represent many copies of the different senses of a word
and to learn new ones.
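Here is a minimal sketch of what I mean (my own toy illustration, not a model
anyone in this thread has proposed): each sense of "duck" gets its own
distributed code, and the surrounding context selects among them.  The
vectors, the sense inventory, and the context words are all made up for the
example.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64

def rand_unit():
    # Random unit vector standing in for a learned distributed code.
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

# One vector per vocabulary word, shared across senses.
word_vec = {w: rand_unit()
            for w in ["inside", "doorway", "dodge", "eat", "roast", "pond"]}

# Hypothetical senses of "duck", each summarized by a prototype built from
# the contexts it typically occurs in.
senses = {
    "duck (verb: dodge)": ["inside", "doorway", "dodge"],
    "duck (noun: bird)":  ["eat", "roast", "pond"],
}
prototype = {s: np.mean([word_vec[w] for w in ws], axis=0)
             for s, ws in senses.items()}

def disambiguate(context):
    # Pick the sense whose prototype best matches the averaged context.
    ctx = np.mean([word_vec[w] for w in context if w in word_vec], axis=0)
    return max(prototype, key=lambda s: float(np.dot(ctx, prototype[s])))

print(disambiguate("duck inside a doorway".split()))  # -> verb sense
print(disambiguate("duck to eat".split()))            # -> noun sense

The point is only that nothing forces one neuron per word; with distributed
codes, many senses coexist and context picks the winner.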

> Then you'll need to represent sequential information in such a way that 
> you can do something with it.  Recurrent neural nets suck very badly if 
> you actually try to use them for anything, so don't get fooled by their 
> Siren Song.

Yes, but I think they are necessary.  Lexical choice, semantics, and grammar
all constrain each other.  Recurrent networks can oscillate or become chaotic;
even the human brain doesn't handle this perfectly, which is why we have
migraines and epilepsy.
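A quick numerical sketch of that failure mode (my own toy example, with
made-up sizes): iterate h <- tanh(g*W*h) for a random weight matrix W.  With
a small gain g the activity settles to a fixed point; with a large gain it
never settles, which is the oscillatory/chaotic regime I mean.

import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.standard_normal((N, N)) / np.sqrt(N)  # spectral radius near 1

for gain in (0.5, 2.0):
    h = 0.1 * rng.standard_normal(N)
    for _ in range(200):
        h_new = np.tanh(gain * (W @ h))
        step = np.linalg.norm(h_new - h)  # how much the state still moves
        h = h_new
    print(f"gain={gain}: final step size {step:.4f}")
# gain=0.5 -> step size near 0 (activity dies out); gain=2.0 -> stays large.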

> Then you will need to represent layered representations:  concepts 
> learned from conjunctions of other concepts rather than layer-1 
> percepts.  Then represent action, negation, operations, intentions, 
> variables...

These are high-level grammars, like learning how to convert word problems into
arithmetic or first-order logic.  I think anything learned at the level of
higher education is going to require a huge network (beyond what is practical
now), but I think the underlying learning principles are the same.
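To make "high-level grammar" concrete, here is a deliberately dumb sketch
(mine, not a proposal): the mapping such a network would have to learn, done
here with a single hand-written pattern instead of learning.  The template
and the example sentence are invented.

import re

def word_problem_to_arithmetic(text):
    # One hypothetical template: "<name> has A <things> and buys/gets B more."
    m = re.search(r"has (\d+) \w+ and (?:buys|gets) (\d+) more", text)
    if m:
        a, b = (int(x) for x in m.groups())
        return f"{a} + {b} = {a + b}"
    return None  # everything outside the template goes unparsed

print(word_problem_to_arithmetic("Ann has 3 apples and buys 2 more."))
# -> 3 + 2 = 5

The hard part, of course, is that a real system has to acquire thousands of
such mappings from examples rather than having them written in by hand.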

> It is just not productive to focus on the computational complexity issues 
> at this stage:  gotta get a lot of mechanisms tried out before we can 
> even begin to talk about such stuff (and, as I say, I don't believe we 
> will really care even then).

I think it is important to estimate these things.  The analogy is that it is
useful to know that certain problems are hard or impossible regardless of any
proposed solution, like the traveling salesman problem or recursive data
compression.  If we can estimate the complexity of language modeling in a
similar way, I see no reason not to.
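For example, here is the kind of back-of-envelope estimate I have in mind.
The two inputs are published figures, not mine: Shannon estimated the entropy
of English at around 1 bit per character, and Landauer estimated human
long-term memory at roughly 10^9 bits.  Everything else is arithmetic.

BITS_PER_CHAR = 1.0    # Shannon's estimate for English text
HUMAN_LTM_BITS = 1e9   # Landauer's estimate of learned long-term memory

chars = HUMAN_LTM_BITS / BITS_PER_CHAR
print(f"~{chars:.0e} characters (~1 GB of text) to convey what a human knows")

Crude, but it bounds the size of the training problem before we commit to any
particular mechanism.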


-- Matt Mahoney, [EMAIL PROTECTED]
