Hi Rob,

On Mon, Feb 18, 2019 at 4:40 PM Rob Freeman <[email protected]>
wrote:

> Ben,
>
> That's what I thought. You're still working with Link Grammar.
>
> But since last year working on informing your links with stats from
> deep-NN type, learned, embedding vector based predictive models? You're
> trying to span the weakness of each formalism with the strengths of the
> other??
>

Yes but no. I've been trying to explain what, exactly, is good and what, exactly, is bad about NN vector-space models. There is a long tract written on this here:
https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf



>
> There's a lot to say about all of that.
>
> Your grammar will be learned, with only the resolution you bake in from
> the beginning.
>
No.


> Your embedding vectors will be learned,
>

The point of the long PDF is to explain why NN vectors are bad. It attempts
to first explain *why* neural nets work for language, and why vectors are
*almost* the right thing, and then it tries to explain why NN vectors don't
actually do everything you want.  I've noticed that, in the middle of all
these explanations, I lose my audience; I haven't figured out how to keep
them yet.


> and the dependency decisions they can inform on learned, and thus finite,
> too. Plus you need to keep two formalisms and marry them together... Large
> teams for all of that...
>

No. I've already got 75% of it coded up. It actually works; I've got long
diary entries and notes with detailed stats on it all.  Unfortunately, I
have not been able to carve out the time to finish the work; it's been
stalled since the fall of last year.

It would be wonderful if I could get someone else interested in this work.

--linas


>
> On the plus side, large teams have already been working on those formalisms
> for decades. An asymptote plots the final failure of learning-based
> methods, but after decades their development is already far down whatever
> asymptote, so you start right at the top of the tallest tree. No one will
> complain about your performance, because it is all anyone achieves.
>
> So, state-of-the-art. But complex, and doomed to asymptotic failure as
> ever more comprehensive learning ever more definitively fails to capture
> every Zipf long tail.
>
> Me, it's simple.
>
> You make the embedding vectors generative by substituting them into each
> other. Infinite patterns. But the patterns are all meaningful, because the
> substitution is meaningful (it's the very basis of all embedding vectors.)
> You get hierarchy, with all that implies about dependency, grammar, for
> free, as a natural consequence of the substitution process (it's
> non-associative.)
>
> And I now think the "substituting them into each other" step may be as
> simple as setting a network of observed sequences oscillating.
>
> As you say, 2013 is a long time ago. When I was pitching embedding vector
> models in 2013 (let alone 2000), they were not the mainstream. Now they are.
>
> If you ask me whether I feel vindicated, the answer is yes.
>
> But vindication is hollow. We still don't have what I was also pitching
> back then: vector recombination to generate new, meaningful patterns,
> rather than learning patterns.
>
> No large teams working on this yet, so it is still crude. In particular it
> probably requires parallel hardware.
>
> Anyway, if you don't want to try this pattern creation idea for language,
> I suggest you look at what Pissanetzky has done. That is more readily
> interpretable in terms of vision. For vision the generative aspect is not
> so obvious. I'm not sure Pissanetzky realizes his permutation "invariants"
> will need to be constantly generated too. But by using permutation as his
> base, the machinery is all there. Permutation is a generative process.
>
> -Rob
>
> On Mon, Feb 18, 2019 at 9:26 PM Ben Goertzel <[email protected]> wrote:
>
>> 2013 seems an insanely long time ago ;) ...  we started with these ideas
>>
>> https://arxiv.org/abs/1401.3372
>>
>> https://arxiv.org/abs/1703.04368
>>
>> but have gone some way since... last summer's partial update was
>>
>> https://www.youtube.com/watch?v=ABvopAfc3jY
>>
>>
>> http://agi-conf.org/2018/wp-content/uploads/2018/08/UnsupervisedLanguageLearningAGI2018.pdf
>>
>> But since last summer we have onboarded a new team that does deep-NN
>> language modeling and we are experimenting with using the output of
>> deep-NN predictive models to guide syntactic parsing and semantic
>> interpretation in OpenCog...
>>
>> -- Ben
>
>


-- 
cassette tapes - analog TV - film cameras - you
