Anton, the sequential and random parses are in D56 and D57. Or do you want specifically the ones for GS and SS? If so, please tell me where you want them, to avoid messing with your file structure.

Yes, the mix of distance and MI is what we have been doing when we use the distance weighting in MST parsing. But as I noted before, we should find a good tuning for each case, because the MIs vary by about two orders of magnitude.
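One way to make the tuning comparable across cases would be to rescale the raw MIs into a fixed range before blending. A minimal sketch in Python (the min-max scaling and the function name are my assumptions, not what the pipeline currently does):

def normalize_mi(mi_values):
    # Rescale raw MI values to [0, 1], so that a single blend weight means
    # roughly the same thing across corpora whose MIs differ ~100x in scale.
    lo, hi = min(mi_values), max(mi_values)
    if hi == lo:
        return [0.0 for _ in mi_values]
    return [(v - lo) / (hi - lo) for v in mi_values]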

a.

On 07/05/19 15:58, Anton Kolonin @ Gmail wrote:

Andres, can you upload the sequential parses that you have evaluated, and reference them in the comments to the cells?

Ben, I think the 0.67-0.72 corresponds to the naive impression that 2/3 to 3/4 of word-to-word connections in English are "sequential" and the rest are not. For Russian and Portuguese, it would be somewhat less, I guess.

What you suggest here ("use *both* the sequential parse *and* some fancier hierarchical parse as inputs to clustering and grammar learning, i.e. don't throw out the information of simple before-and-after co-occurrence, but augment it with information from the statistically inferred dependency parse tree") can, I guess, be implemented simply in the existing MST-Parser, given the changes that Andres and Claudia made a year ago.

That could be tried with the "distance_vs_MI" blending parameter in the MST-Parser code, which accounts for word-to-word distance: distance_vs_MI=1.0 would give "sequential parses", distance_vs_MI=0.0 would produce "pure MST parses", distance_vs_MI=0.7 would provide "English parses", and distance_vs_MI=0.5 would provide "Russian parses". Does that make sense, Andres?
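For concreteness, here is a minimal sketch of how such a blend could work, with MI assumed pre-normalized to [0, 1]; the function names, the 1/distance term, and the use of networkx are my assumptions, not the actual MST-Parser code:

import itertools
import networkx as nx

def blended_score(mi, distance, distance_vs_mi):
    # distance_vs_mi = 1.0 -> pure distance weighting ("sequential parses")
    # distance_vs_mi = 0.0 -> pure MI ("pure MST parses")
    distance_term = 1.0 / distance  # adjacent words score highest
    return distance_vs_mi * distance_term + (1.0 - distance_vs_mi) * mi

def mst_parse(words, mi_lookup, distance_vs_mi=0.5):
    # Score every word pair, then keep the maximum-scoring spanning tree.
    g = nx.Graph()
    for (i, w1), (j, w2) in itertools.combinations(enumerate(words), 2):
        g.add_edge(i, j, weight=blended_score(mi_lookup(w1, w2), j - i,
                                              distance_vs_mi))
    return sorted(nx.maximum_spanning_tree(g).edges())

At distance_vs_mi=1.0 the adjacent-word edges dominate and the tree collapses to the word chain, which is exactly the "sequential parse" case.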

Ben, do you want to let Andres try this - get parses with different distance_vs_MI values in the range 0.0-1.0 and see what happens?

This could be tried both ways, using either traditional MI or DNN-MI, BTW.

Cheers,

-Anton


On 06.05.2019 12:30, Ben Goertzel wrote:



On Sun, May 5, 2019 at 10:15 PM Anton Kolonin @ Gmail <akolo...@gmail.com> wrote:

    Hi Linas, I am re-reading your emails and updating our TODO
    issues from some of them.

    Not sure about this one:

    >Did Deniz Yuret falsify his thesis data? He got better than 80%
    accuracy; we should too.

    I don't recall Deniz Yuret comparing MST-parses to
    LG-English-grammar-parses.



Linas: Where does the > 80% figure come from?

This paper of Yuret's

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.5016&rep=rep1&type=pdf

cites 53% accuracy compared against "dependency parses derived from dependency-grammar-izing Penn Treebank parses on WSJ text". It was written after his PhD thesis. Is there more recent work by Yuret that gives massively better results? If so, I haven't seen it.

Spitkovsky's more recent work on unsupervised grammar induction seems to have gotten better statistics than this, but it used radically different methods.



    a) Seemingly "worse than LG-English" "sequential parses" provide a
    seemingly better "LG grammar" - that may be a mistake, so we will
    have to double-check this.


Anton -- Have you looked at the inferred grammar for this case, to see how much sense it makes conceptually?

Using sequential parses is basically just using co-occurrence rather than syntactic information.

I wonder what would happen if you used *both* the sequential parse *and* some fancier hierarchical parse as inputs to clustering and grammar learning? I.e., don't throw out the information of simple before-and-after co-occurrence, but augment it with information from the statistically inferred dependency parse tree...
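A minimal sketch of what I mean, feeding the grammar learner pair counts from both parse types; everything here (the counting scheme, the parse_fn signature returning word-index pairs) is hypothetical, not our actual pipeline:

from collections import Counter

def combined_link_counts(sentences, parse_fn):
    # Accumulate word-pair counts from BOTH the sequential parse
    # (adjacent-word links) and a fancier parse returned by parse_fn
    # as (i, j) word-index pairs.
    counts = Counter()
    for words in sentences:
        for a, b in zip(words, words[1:]):  # before-and-after co-occurrence
            counts[(a, b)] += 1
        for i, j in parse_fn(words):        # statistically inferred links
            counts[(words[i], words[j])] += 1
    return counts

The clustering step would then see both kinds of evidence in one count table, rather than having to choose between them.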




-- Ben
--
-Anton Kolonin
skype: akolonin
cell: +79139250058
akolo...@aigents.com
https://aigents.com
https://www.youtube.com/aigents
https://www.facebook.com/aigents
https://medium.com/@aigents
https://steemit.com/@aigents
https://golos.blog/@aigents
https://vk.com/aigents

