Dear Dr. Goertzel and contributors,


You could also enrich the distributional ideas supporting compositionality 
in another way. In your arXiv:1703.04368 you link a pregroup grammar parse 
of a sentence to a morphism in a symmetric monoidal category. In work by 
Coecke, Clark and others, a categorial grammar parse tree is associated 
with a morphism in the category of linear maps, which is monoidal with the 
good old linear-algebra tensor product. This morphism is a tensor network 
that corresponds naturally to the categorial grammar parse tree: ground 
types such as nouns correspond to vectors obtained by a distributional 
method such as word2vec, while compound types, such as those of verbs, 
correspond to higher-rank tensors. That is why they call it the DisCoCat 
(distributional, compositional, categorical) model. While theoretically 
elegant, I think it is still a work in progress computationally, from the 
point of view of rolling up one's sleeves and starting to code.
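To make the contraction concrete, here is a toy NumPy sketch (random vectors standing in for trained embeddings; all dimensions and names are made up for illustration): a transitive verb is a rank-3 tensor, and contracting it with subject and object vectors yields a vector in the sentence space.

```python
import numpy as np

# Toy DisCoCat-style sketch (illustrative data only, not trained embeddings):
# nouns live in a vector space N, a transitive verb is a rank-3 tensor in
# N (x) S (x) N, and the sentence meaning lands in the sentence space S.

rng = np.random.default_rng(0)
dim_n, dim_s = 4, 3                          # toy dimensions for N and S

alice = rng.random(dim_n)                    # noun: vector in N
code = rng.random(dim_n)                     # noun: vector in N
writes = rng.random((dim_n, dim_s, dim_n))   # transitive verb: tensor in N (x) S (x) N

# "alice writes code": contract the verb tensor with subject and object.
sentence = np.einsum('i,isj,j->s', alice, writes, code)

print(sentence.shape)   # a vector in the sentence space S
```

The pregroup reductions in the parse tree tell you exactly which indices to contract; here the subject contracts with the verb's first index and the object with its last.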


You can browse slides from some of Stephen Clark's talks on this here: 
https://sites.google.com/site/stephenclark609/talks
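
Regarding the parse-tree word pairs mentioned in your message below: the only change to the skip-gram setup is how training pairs are extracted. A rough sketch (hypothetical helper names, toy parse; not your actual pipeline) of tree-adjacent pairs versus the usual linear-window pairs:

```python
# Sketch: generate skip-gram training pairs from parse-tree adjacency instead
# of a linear context window. The parse is given as a list of (head, dependent)
# word-index edges, e.g. from a max-weight spanning tree. Names are illustrative.

def window_pairs(words, window=2):
    """Standard word2vec-style pairs: words within `window` positions."""
    pairs = []
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                pairs.append((w, words[j]))
    return pairs

def tree_pairs(words, edges):
    """Pairs of words that are adjacent in the parse tree (both directions)."""
    pairs = []
    for head, dep in edges:
        pairs.append((words[head], words[dep]))
        pairs.append((words[dep], words[head]))
    return pairs

words = ["she", "saw", "the", "tall", "tree"]
edges = [(1, 0), (1, 4), (4, 2), (4, 3)]   # toy spanning tree rooted at "saw"

print(tree_pairs(words, edges))
```

The rest of the skip-gram training (hidden layer, negative sampling) stays untouched; only the notion of "context" changes.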


Warm regards, Jesus Lopez.



On Sunday, 26 March 2017 18:44:10 UTC+2, Ben Goertzel wrote:
>
> Linas, 
>
> I thought a bit about how to use a modified version of the word2vec 
> idea in our language learning pipeline... 
>
> I'm thinking about the Skip-gram model of word2vec, as summarized 
> informally e.g. here 
>
> http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/ 
>
> Following up the suggestion you made in Addis in our chat with 
> Masresha, I'm thinking to replace the "adjacent word-pairs" used in 
> word2vec with "word-pairs that are adjacent in the parse tree" (where 
> e.g. the parse tree may be the max-weight spanning tree in our 
> language learning algorithm).... 
>
> This would still produce a vector just like word2vec does, via the 
> hidden layer of the NN ... but the vector would likely be more 
> meaningful than a typical word2vec vector... 
>
> What would the purpose of this be, in the context of our language 
> learning algorithm?  The purpose would be that clustering should work 
> better on the word2vec vectors than on the raw-er data regarding "word 
> co-occurrence in parse trees."   At least, that seems plausible, since 
> clustering on word2vec vectors generally works better than on 
> co-occurrence vectors 
>
> This would be something that Masresha or someone else in Addis could 
> work on, I think... 
>
> We can discuss at the office this week... 
>
> ben 
>
>
> -- 
> Ben Goertzel, PhD 
> http://goertzel.org 
>
> “Our first mothers and fathers … were endowed with intelligence; they 
> saw and instantly they could see far … they succeeded in knowing all 
> that there is in the world. When they looked, instantly they saw all 
> around them, and they contemplated in turn the arch of heaven and the 
> round face of the earth. … Great was their wisdom …. They were able to 
> know all.... 
>
> But the Creator and the Maker did not hear this with pleasure. … ‘Are 
> they not by nature simple creatures of our making? Must they also be 
> gods? … What if they do not reproduce and multiply?’ 
>
> Then the Heart of Heaven blew mist into their eyes, which clouded 
> their sight as when a mirror is breathed upon. Their eyes were covered 
> and they could see only what was close, only that was clear to them.” 
>
> — Popol Vuh (holy book of the ancient Mayas) 
>
