I hit "send" too soon, without finishing the thought:

On Fri, Nov 16, 2018 at 3:02 PM Linas Vepstas <[email protected]>
wrote:

> For example, this parse makes sense, and seems right:
>
>      +-------->WV------->+
>     +---->Wd-----+      |
>     |      +Ds**c+-Ss*s-+---Pa--+
>     |      |     |      |       |
> LEFT-WALL the  dog.n was.v-d black.a
>
> but there is another possibility, that kind-of makes sense (and perhaps
> language learning will find):
>
>     +---->Wd---->+
>     |            +-->adjcomp--->+
>     |      +Ds**c+      +<-cop<-+
>     |      |     |      |       |
> LEFT-WALL the  dog.n   was    black
>
> Here, adjcomp is "adjectival complement" and "cop" is the copula. Some
> dependency grammars draw this graph. Some call it "predicative adjectival
> modifier". Let's not quibble over the name. Note that I did not draw an
> arrow from subject to verb. I could, I suppose. Note that it is now
> IMPOSSIBLE to draw an arrow from root/left-wall to the verb, because it
> would require a link-crossing: it would have to cross over the adjcomp
> arrow.
>
> Thus, if you want to draw an arrow from root to head-verb, and also get a
> planar graph, you are not allowed to draw the adjcomp/predadj arrow.  That
> helps explain what LG does.
>
> It also helps make clear that the no-links-crossing constraint is
> imperfect. It seems reasonable, but clearly, there is a violation in the
> above rather trivial sentence!
>

OK, to finish this thought. Let us speculate what an MST parse of this
sentence might look like. It depends on the MI values for the word-pairs
MI(dog,was), MI(was,black) and MI(dog,black). I don't know what these are,
but clearly they will be different for a corpus of kids-lit than for a
corpus of math texts.
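For concreteness, here is a toy sketch of how those MI values would decide the tree. The counts are made up; the real numbers would come from the learned pair statistics. For three words, the maximum-spanning-tree parse just keeps the two highest-MI links:

```python
from itertools import combinations
from math import log2

# Invented counts, purely for illustration.
pair_count = {("dog", "was"): 50, ("was", "black"): 30, ("dog", "black"): 1}
word_count = {"dog": 500, "was": 5000, "black": 300}
total_pairs = 100_000
total_words = 1_000_000

def mi(a, b):
    """Pointwise mutual information of a word pair."""
    p_ab = pair_count[(a, b)] / total_pairs
    p_a = word_count[a] / total_words
    p_b = word_count[b] / total_words
    return log2(p_ab / (p_a * p_b))

# Rank the three possible links by MI; the spanning tree that maximizes
# total MI is just the two strongest links.
words = ["dog", "was", "black"]
edges = sorted(combinations(words, 2), key=lambda e: mi(*e), reverse=True)
tree = edges[:2]
```

With these particular counts, the tree comes out as dog-was plus was-black, i.e. the conventional parse; inflate the dog-black count and it flips to the adjcomp-style parse instead.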

Next question: what happens when words are sorted into categories?  What is
MI(dog, some color)? What is MI(some animal, some color)? What is
MI(physical object, some color)?
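One way to read "sorting words into categories" is to pool the pair counts over category members. A hypothetical sketch, with invented counts and hand-picked category memberships:

```python
from math import log2

# Toy categories and counts, purely illustrative.
animals = {"dog", "cat", "horse"}
colors = {"black", "white", "brown"}

pair_count = {("dog", "black"): 5, ("cat", "white"): 7, ("horse", "brown"): 4,
              ("dog", "white"): 2, ("cat", "black"): 3}
word_count = {"dog": 500, "cat": 400, "horse": 100,
              "black": 300, "white": 250, "brown": 80}
total_pairs = 100_000
total_words = 1_000_000

def category_mi(cat_a, cat_b):
    """MI between two word categories, pooling counts over their members."""
    n_ab = sum(c for (a, b), c in pair_count.items()
               if a in cat_a and b in cat_b)
    n_a = sum(word_count[w] for w in cat_a)
    n_b = sum(word_count[w] for w in cat_b)
    return log2((n_ab / total_pairs) / ((n_a / total_words) * (n_b / total_words)))

# MI(some animal, some color), computed from the pooled counts.
print(category_mi(animals, colors))
```

The pooling smooths over individual word pairs, so MI(some animal, some color) can be well-defined even when MI(dog, brown) was never observed directly.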

I don't have a good story here, except to say that copulas and predicative
adjectives present maybe the simplest-possible example of the difficulty of
moving from surface syntax (SSynt, what LG does) to deep syntax (DSynt,
what MTT does). Yet, this move is a critical one.

I'm currently thinking of it as a graph-rewrite rule that converts the SSynt
graph into a PLN graph:

EvaluationLink
    PredicateNode "has color"
    ListLink
        Concept "dog"
        Concept "black"

Or, perhaps as Nil might like to write:

LambdaLink
    VariableList
        Variable $PHY
        Variable $COL
    AndLink
        EvaluationLink
            PredicateNode "has color"
            ListLink
                Variable $PHY
                Variable $COL
        InheritanceLink
            Variable $PHY
            Concept "physical object"
        InheritanceLink
            Variable $COL
            Concept "color"

Of course, even the above representation is wrong, in several ways, but
nit-picking it at this stage is counter-productive.
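Just to make the rewrite-rule idea concrete, here is a toy sketch, assuming the surface parse is handed to us as (label, head, dependent) edges. The edge labels and the color-word test are my own illustrative assumptions, not anything LG or OpenCog actually provides:

```python
# Words we are willing to treat as colors, for this toy example only.
COLOR_WORDS = {"black", "white", "brown", "red"}

def rewrite(edges):
    """Turn adjcomp links onto color words into 'has color' triples.

    edges: list of (label, head, dependent) from the surface parse.
    """
    triples = []
    for label, head, dep in edges:
        if label == "adjcomp" and dep in COLOR_WORDS:
            triples.append(("has color", head, dep))
    return triples

# The adjcomp-style parse of "the dog was black", as labelled edges.
parse = [("Wd", "LEFT-WALL", "dog"),
         ("adjcomp", "dog", "black"),
         ("cop", "black", "was")]
print(rewrite(parse))   # prints [('has color', 'dog', 'black')]
```

Note that the rule simply discards the copula edge: that throwing-away of surface material is exactly the SSynt-to-DSynt move.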

The question is: given a learned grammar, with statistics, how do we get to
the DSynt or the OpenCog variant? Well, the now-quite-old Dekang Lin DIRT
paper, and the newer-but-still-old Poon & Domingos unsupervised learning
paper, show the way.

Onward ho!

Linas
-- 
cassette tapes - analog TV - film cameras - you
