One more somewhat amusing observation is that PLN would be expected to
CREATE Chomskyan deep syntactic structures for sentences in the course
of learning surface structure based on embodied experience...
Recall the notion of Chomskyan "deep structure." I suggest that
probabilistic reasoning gives a new slant on this...
Consider the example
"Who did Ben tickle?"
The Chomskyan theory would explain this as wh-movement from the "deep
structure" version
"Ben did tickle who?"
Now, arguably, this hypothetical syntactic deep structure is a better
parallel to the *semantic* deep structure. This certainly holds if we
use the OpenCog semantic representation, where we get
Who did Ben tickle?
==>
EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w
InheritanceLink
    VariableNode $w
    ConceptNode "person"
Ben did tickle Ruiting.
==>
EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        ConceptNode "Ruiting"

(let's call this L1 for future reference)
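To make the substitution relationship concrete, here is a minimal
Python sketch (the tuple encoding and helper function are my own
illustration, not the actual OpenCog/Atomese API), checking that L1 is
exactly the variable form with "Ruiting" substituted for $w (the
InheritanceLink typing $w as a person is omitted for brevity):

    # Toy encoding of the two semantic forms as nested tuples.
    L_sem = ("EvaluationLink",
             ("PredicateNode", "tickle"),
             ("ListLink", ("ConceptNode", "Ben"), ("VariableNode", "$w")))

    L1_sem = ("EvaluationLink",
              ("PredicateNode", "tickle"),
              ("ListLink", ("ConceptNode", "Ben"), ("ConceptNode", "Ruiting")))

    def substitute(expr, var, value):
        """Replace every occurrence of the variable node var by value."""
        if expr == var:
            return value
        if isinstance(expr, tuple):
            return tuple(substitute(e, var, value) for e in expr)
        return expr

    # L1 is the variable form with "Ruiting" substituted for $w:
    assert substitute(L_sem, ("VariableNode", "$w"),
                      ("ConceptNode", "Ruiting")) == L1_sem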
The relation between the semantic representations of "Ben did tickle
Ruiting" and "Ben did tickle who?" is one of substitution of the
representation of "Ruiting" for the representation of "who"...
Similarly, the relation between the syntactic representations of "Ben
did tickle Ruiting" and "Ben did tickle who?" is one of substitution
of the lexical representation of "Ruiting" for the lexical
representation of "who?"...
On the other hand, the relationship between the syntactic
representation of "Ben did tickle Ruiting" and that of "Who did Ben
tickle?" is not one of simple substitution...
If we represent substitution as an algebraic operation on both the
syntax-parse and semantic-representation sides, then there is clearly
a morphism between the symmetry (the invariance with respect to
substitution) on the semantic-structure side and that on the
deep-syntactic-structure side... But there's no such straightforward
morphism on the shallow-syntactic-structure side... (though the syntax
algebra and the logical-semantics algebra are morphic generally, there
is no morphism between the substitution algebras on the two sides...)
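To see the asymmetry concretely, here is a continuation of the toy
Python sketch above (the token encodings are again my own
illustration): positional substitution carries the deep syntactic form
onto the declarative, mirroring the semantic substitution, but no
positional substitution carries the declarative onto the wh-moved
surface form:

    # Syntax side, as token lists.
    deep_who    = ["Ben", "did", "tickle", "who"]      # deep structure
    deep_ruit   = ["Ben", "did", "tickle", "Ruiting"]  # declarative
    surface_who = ["Who", "did", "Ben", "tickle"]      # wh-moved surface form

    def subst_tokens(tokens, old, new):
        """Positional (simple) substitution on the syntax side."""
        return [new if t == old else t for t in tokens]

    # Deep side: substitution commutes with the syntax->semantics map,
    # paralleling substitute(L_sem, $w, Ruiting) == L1_sem above.
    assert subst_tokens(deep_who, "who", "Ruiting") == deep_ruit

    # Surface side: "Ruiting" and "Who" occupy different positions, so
    # no lexical swap maps one surface string onto the other.
    assert subst_tokens(deep_ruit, "Ruiting", "Who") != surface_who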
HOWEVER, and here is the interesting part... suppose a mind contains
all three of

Who did Ben tickle?

Ben did tickle Ruiting.

and

EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w

(let's call this L for future reference)
I submit that, in that case, PLN is going to *infer* the syntactic form
Ben did tickle who?
with uncertainties on the links...
That is, the deep syntactic structure is going to get produced via
uncertain inference...
So what?
Well, consider what happens during language learning. At a certain
point, the system's understanding of the meaning of
Who did Ben tickle?
is going to be incomplete and uncertain ... i.e. its probability
weighting on the link from the syntactic form to its logical semantic
interpretation will be relatively low...
At that stage, the construction of the deep syntactic form
Ben did tickle who?
will be part of the process of bolstering the probability of the
correct interpretation of
Who did Ben tickle?
Loosely speaking, the inference will have the form:

(Who did Ben tickle? ==> L)

is bolstered by

(Ben did tickle who? ==> L)
(Who did Ben tickle? ==> Ben did tickle who?)
|-
(Who did Ben tickle? ==> L)
where
(Ben did tickle who? ==> L)
comes by analogical inference from
(Ben did tickle Ruiting ==> L1)
and
(Who did Ben tickle? ==> Ben did tickle who?)
comes by
(Who did Ben tickle? ==> Ben did tickle Ruiting)
(Ben did tickle Ruiting ==> Ben did tickle who?)
|-
(Who did Ben tickle? ==> Ben did tickle who?)
and
(Ben did tickle Ruiting ==> Ben did tickle who?)
comes from analogical inference on the syntactic links, and
(Who did Ben tickle? ==> Ben did tickle Ruiting)
comes by analogical inference from
(Who did Ben tickle? ==> L)
(Ben did tickle Ruiting ==> L1)
Similarity L L1
|-
(Who did Ben tickle? ==> Ben did tickle Ruiting)
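Here is a toy numeric Python sketch of this chain's final deduction
and revision steps, using the independence-based PLN deduction
strength formula and simple evidence-count revision as I recall them
(so treat the formulas as approximations); all premise strengths and
counts below are invented for illustration, and real PLN truth values
also carry confidence, which is omitted here:

    def deduction(sAB, sBC, sB, sC):
        """Independence-based PLN deduction strength: from A==>B and
        B==>C (with term probabilities sB, sC), estimate A==>C."""
        return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

    def revise(s1, n1, s2, n2):
        """Evidence-count-weighted revision of two strength estimates."""
        return (n1 * s1 + n2 * s2) / (n1 + n2)

    # Invented premise strengths, with "Ben did tickle who?" as bridge:
    s_surface_to_deep = 0.8  # (Who did Ben tickle? ==> Ben did tickle who?)
    s_deep_to_L       = 0.9  # (Ben did tickle who? ==> L), by analogy w/ L1
    s_deep, s_L       = 0.1, 0.1  # priors on the deep form and on L

    s_indirect = deduction(s_surface_to_deep, s_deep_to_L, s_deep, s_L)
    # ~0.722: (Who did Ben tickle? ==> L), via the deep-structure bridge

    # Early in learning, the direct estimate of (Who did Ben tickle? ==> L)
    # is weak; revision with the indirect estimate bolsters it:
    s_direct, n_direct = 0.3, 2.0
    n_indirect         = 10.0
    s_revised = revise(s_direct, n_direct, s_indirect, n_indirect)
    print(round(s_revised, 3))   # ~0.652 with these invented numbers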
So philosophically the conclusion we come to is: The syntactic deep
structure will get invented in the mind during the process of language
learning, because it helps to learn the surface form, as it's a
bridging structure between the semantic structure and the surface
syntactic structure...
One thing this means is that, contra Chomsky, the presence of deep
structure in language does NOT imply that the deep structure has to be
innate ... the deep structure would naturally emerge in the mind as a
consequence of probabilistic inference ... and furthermore, languages
whose surface forms are relatively easily tweakable into deep
structures that parallel semantic structure are likely to be more
easily learned using probabilistic reasoning.... So one would expect
surface syntax to emerge via multiple constraints, including

-- ease of tweakability into deep structures that parallel semantic structure
-- ease of comprehension and production of surface structure

I believe Jackendoff made this latter point a few times...
-- Ben
On Sat, Sep 3, 2016 at 10:13 PM, Ben Goertzel <[email protected]> wrote:
>> MAPPING SYNTAX TO LOGIC
>>
>> "RelEx + RelEx2Logic” maps syntactic structures into logical
>> structures. It takes in structures that care about left vs. right,
>> and outputs symmetric structures that don’t care about left vs. right.
>> The output of this semantic mapping framework, given a sentence, can
>> be viewed as a set of type judgments, i.e. a set of assignations of
>> terms to types. (Categorially, assigning term t to type T
>> corresponds to an arrow “t \circ ! : Gamma ---> T” where ! is an arrow
>> pointing to the unit of the category and Gamma is the set of type
>> definitions of the typed lambda calculus in question, and \circ is
>> function composition).
>
> One philosophically nice observation here is: Frege's "principle of
> compositionality" here corresponds to the observation that there is a
> morphism from the asymmetric monoidal category corresponding to link
> grammar, into the symmetric locally cartesian closed category
> corresponding to lambda calculus w/ dependent types...
>
> This principle basically says that you can get the meaning of the
> whole by combining the meaning of the parts, in language...
>
> The case of "Every man who has a donkey, beats it" illustrates that in
> order to get compositionality for weird sentences like this, you
> basically want to have dependent types in your lambda calculus at the
> logic end of your mapping...
>
> -- Ben
--
Ben Goertzel, PhD
http://goertzel.org
Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...