> As I think about it, one problem is that, depending on how it's
> parameterized, it's not going to build much of a world model.
> Say, for example, it uses trigrams. The average high-school grad
> knows something like 50,000 words, so there are something like
> 10^14 possible trigrams. It will never see enough data to build a
> model capturing much semantics, unless it builds an incredibly
> compact model, in which case: what is the underlying structure,
> and how (computationally) are you going to learn it?

Absolutely correct.  That's why I said "My belief is that if you had the proper 
structure-building learning algorithms, your operator grammar system would 
simply (re-)discover the basic parts of speech and would then successfully 
proceed from there," and why I slammed it for "reinventing the wheel" in terms 
of its unnecessary generalization of dependency.
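
For what it's worth, the sparsity arithmetic above checks out on a napkin. 
Here's a minimal Python sketch; the billion-word corpus size is my own 
assumption, not a figure from the message:

    # Back-of-envelope: distinct trigrams over a 50,000-word vocabulary
    # versus the trigram tokens even a very large corpus can supply.
    vocab = 50_000
    possible_trigrams = vocab ** 3            # 1.25e+14 distinct trigrams

    corpus_tokens = 10 ** 9                   # assumed: a billion-word corpus
    observed_positions = corpus_tokens - 2    # roughly one trigram per position

    coverage = observed_positions / possible_trigrams
    print(f"possible trigrams: {possible_trigrams:.2e}")   # 1.25e+14
    print(f"max fraction seen: {coverage:.2e}")            # ~8e-06

Even if every observed trigram were distinct, the corpus would touch less 
than a thousandth of a percent of the space, so any semantics has to come 
from a far more compact parameterization.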

> In unsupervised learning you can learn a lot; say, you can
> cluster the world into two clusters. But until you get
> supervision, you can't learn the final few bits to distinguish
> good from bad, or whatever.
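
For concreteness, the claim here is essentially the label-assignment 
problem: a clusterer can separate the data, but nothing in a purely 
unsupervised objective says which cluster means "good". A minimal sketch, 
with toy 1-D data of my own:

    # Toy 1-D data: two well-separated blobs.  Unsupervised clustering
    # finds the split, but the cluster ids it returns carry no semantics.
    data = [0.10, 0.15, 0.20, 0.90, 0.95, 1.00]

    # A trivial "2-means"-style step: split at the midpoint of the range.
    threshold = (min(data) + max(data)) / 2.0
    cluster = [int(x > threshold) for x in data]
    print(cluster)                       # [0, 0, 0, 1, 1, 1]

    # The "final few bits": one labeled point maps cluster ids to meanings.
    x_labeled, label = 0.15, "bad"       # hypothetical supervision
    bad_id = int(x_labeled > threshold)
    names = {bad_id: "bad", 1 - bad_id: "good"}
    print([names[c] for c in cluster])   # ['bad', 'bad', 'bad', 'good', ...]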

I'm afraid that I disagree completely with the latter sentence.

> Operator grammar might be very useful for
> getting a structure that could then be rapidly trained to produce
> meaning, but I don't think you can finish the job until you interact
> with sensation.

It seems as if you're now talking sensory fusion (which is a whole 'nother can 
o' worms).
    
        Mark
