Linas,

Alas, my window of opportunity for writing long emails on math-y
stuff has passed, so I'll reply to your email more thoroughly in a
couple of days...

However, let me just say that I am not so sure linear logic is what we
really want....  I understand that we want to take resource usage into
account in our reasoning generally... and that in link grammar we want
to account for the particular exclusive nature of the disjuncts ...
but I haven't yet convinced myself linear logic is necessarily the
right way to do this... I need to take a few hours and reflect on it
more and try to assuage my doubts on this (or not)

-- ben


On Tue, Aug 30, 2016 at 6:14 AM, Linas Vepstas <[email protected]> wrote:
> It will take me a while to digest this fully, but one error/confusion (and
> very important point) pops up immediately: link-grammar is NOT cartesian,
> and we most definitely do not want cartesian-ness in the system.  That would
> destroy everything interesting, everything that we want to have.  Here's the
> deal:
>
> When we parse in link-grammar, we create multiple parses.  Each parse can be
> considered to "live" in its own unique world or universe (its own Kripke
> frame).  These universes are typically incompatible with each other: they
> conflict. Only one parse is right, the others are wrong (typically --
> although sometimes there are ambiguous cases, where more than one parse
> may be right, or where one parse might be 'more right' than another).
>
> These multiple incompatible universes are symptomatic of a "linear type
> system".  Now, linear type theory finds applications in several places: it
> can describe parallel computation (each universe is a parallel thread) and
> also mutex locks and synchronization, and also vending machines: for one
> dollar you get a menu selection of items to pick from -- the ChoiceLink that
> drove Eddie nuts.
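The vending-machine discipline can be sketched in Rust, whose ownership system is essentially an affine type system: each value may be used at most once. All names here (`Dollar`, `Snack`, `vend`) are invented purely for illustration.

```rust
// A resource that is consumed when spent: Rust's move semantics forbid
// using the same Dollar twice, which is the affine "use at most once" rule.
struct Dollar;

enum Snack {
    Chips,
    Candy,
}

// Taking `Dollar` by value consumes it: after the call, the caller's
// binding is moved-out and any further use is a compile error.
fn vend(_coin: Dollar, choice: u8) -> Snack {
    if choice == 0 {
        Snack::Chips
    } else {
        Snack::Candy
    }
}

fn main() {
    let coin = Dollar;
    let snack = vend(coin, 0); // `coin` is moved (spent) here
    // vend(coin, 1);          // error[E0382]: use of moved value: `coin`
    match snack {
        Snack::Chips => println!("got chips"),
        Snack::Candy => println!("got candy"),
    }
}
```

The commented-out second `vend` call is exactly the reuse that a cartesian (duplicating) type system would permit and a linear/affine one forbids.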
>
> The linear type system is the type system of linear logic, which is the
> internal language of the closed monoidal categories, of which the cartesian
> closed categories are a special case.
>
> Let me return to multiple universes -- we also want this in PLN reasoning. A
> man is discovered standing over a dead body, a bloody sword in his hand --
> did he do the deed, or is he simply the first witness to stumble onto the
> scene?  What is the evidence pro and con?
> This scenario describes two parallel universes: one in which he is guilty,
> and one in which he is not. It is the job of the prosecutor, defense, judge
> and jury to figure out which universe he belongs to.  The mechanism is a
> presentation of evidence and reasoning and deduction and inference.
>
> Please be hyper-aware of this, and don't get confused: just because we do
> not know his guilt does not mean he is "half-guilty" -- just as an
> unflipped coin is not some blurry, vague superposition of heads and
> tails.
>
> Instead, as the evidence rolls in, we want to find that the probability of
> one universe is increasing, while the probability of the other one is
> decreasing.  It's just one guy -- he cannot be both guilty and innocent --
> one universe must eventually be the right one, and it can be the only one.
> (This is perhaps clearer with 3-way choices, or 4-way choices...)
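A toy numerical sketch of this: two mutually exclusive hypotheses sharing one unit of probability mass, updated by Bayes' rule as evidence arrives. The likelihood numbers are invented purely for illustration.

```rust
// Two exclusive universes: Guilty and Innocent. Evidence shifts probability
// mass from one to the other; the suspect is never "half-guilty" -- the two
// probabilities always sum to 1, and one universe wins in the limit.
fn update(prior_guilty: f64, lik_guilty: f64, lik_innocent: f64) -> f64 {
    // Bayes' rule, renormalized over the two exclusive hypotheses.
    let g = prior_guilty * lik_guilty;
    let i = (1.0 - prior_guilty) * lik_innocent;
    g / (g + i)
}

fn main() {
    let mut p = 0.5; // no evidence yet: both universes equally weighted
    // Each pair is (P(evidence | guilty), P(evidence | innocent)); made-up values.
    for (lg, li) in [(0.9, 0.4), (0.8, 0.5), (0.7, 0.2)] {
        p = update(p, lg, li);
        println!("P(guilty) = {:.3}, P(innocent) = {:.3}", p, 1.0 - p);
    }
}
```

With these numbers the guilty-universe probability climbs step by step while the innocent-universe probability falls by exactly the same amount.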
>
> Anyway, the logic of these parallel universes is linear logic, and the type
> theory is linear type theory, and the category is closed monoidal.
>
> (Actually, I suspect that we might want to use affine logic, which is per
> wikipedia "a substructural logic whose proof theory rejects the structural
> rule of contraction. It can also be characterized as linear logic with
> weakening.")
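For reference, the two structural rules at issue here are:

```latex
\frac{\Gamma,\, A,\, A \vdash B}{\Gamma,\, A \vdash B}
\quad (\text{contraction: rejected by both linear and affine logic})
\qquad
\frac{\Gamma \vdash B}{\Gamma,\, A \vdash B}
\quad (\text{weakening: kept by affine logic, rejected by linear logic})
```

Rejecting contraction is what forbids duplicating a resource; keeping weakening is what lets an affine system silently discard one.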
>
> Anyway, another key point: lambda calculus is the internal language of
> *cartesian* closed categories.  It is NOT compatible with linear logic or
> linear types.   This is why I said in a different email, that "this way lies
> madness".  Pursuit of lambda calc will leave us up a creek without a paddle,
> it will prevent us from being able to apply PLN to guilty/not-guilty court
> cases.
>
> ----
> BTW, vector spaces are NOT cartesian closed! They are the canonical
> example of a category where one can have the tensor-hom adjunction, i.e. can
> do currying, and NOT be cartesian!  Vector spaces *are* closed-monoidal.
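The currying mentioned here is the tensor-hom adjunction, which for vector spaces reads:

```latex
\mathrm{Hom}(A \otimes B,\; C) \;\cong\; \mathrm{Hom}(A,\; \mathrm{Hom}(B,\, C))
```

The monoidal product on the left is the tensor product, not the categorical product (for vector spaces the product is the direct sum), which is exactly why currying here does not make the category cartesian closed.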
>
> The fact that some people are able to map linguistics onto vector spaces
> (although with assorted difficulties/pathologies) re-affirms that
> closed-monoidal is the way to go.  The reason that linguistics maps poorly
> onto vector spaces is their symmetry: linguistics is NOT symmetric, while
> vector spaces are.  So what we are actually doing (or need to do) is
> develop the infrastructure for *cough cough* a non-symmetric vector
> space... which is kind-of-ish the point of the categorial grammars.
>
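As a standard illustration of the non-symmetric machinery: a pregroup grammar parses "John likes Mary" by typing John and Mary as $n$ and likes as $n^r s\, n^l$, then reducing with the contractions $n\, n^r \le 1$ and $n^l\, n \le 1$:

```latex
n \,(n^r\, s\, n^l)\, n
\;=\; (n\, n^r)\, s\, (n^l\, n)
\;\le\; 1 \cdot s \cdot 1
\;=\; s
```

The left and right adjoints $n^l$ and $n^r$ are distinct precisely because the product is not symmetric; word order matters.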
> Enough for now.
>
> --linas
>
>
> On Mon, Aug 29, 2016 at 4:41 PM, Ben Goertzel <[email protected]> wrote:
>>
>> Linas, Nil, etc. --
>>
>> This variation of type theory
>>
>> http://www.dcs.kcl.ac.uk/staff/lappin/papers/cdll_lilt15.pdf
>>
>> seems like it may be right for PLN and OpenCog ... basically,
>> dependent type theory with records (persistent memory) and
>> probabilities ...
>>
>> If we view PLN as having this sort of semantics, then RelEx+R2L is
>> viewed as enacting a morphism from:
>>
>> -- link grammar, which is apparently equivalent to pregroup grammar,
>> which is a nonsymmetric cartesian closed category
>>
>> to
>>
>> -- lambda calculus endowed with the probabilistic TTR type system,
>> which is a locally cartesian closed category
>>
>>
>> https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory#DependentTypeTheory
>>
>> For the value of dependent types in natural language semantics, see e.g.
>>
>>
>> http://www.slideshare.net/kaleidotheater/hakodate2015-julyslide?qid=85e8a7fc-f073-4ded-a2c8-9622e89fd07d&v=&b=&from_search=1
>>
>> (the examples regarding anaphora in the above are quite clear)
>>
>>
>> https://ncatlab.org/nlab/show/dependent+type+theoretic+methods+in+natural+language+semantics
>>
>> This paper
>>
>>
>> http://www.slideshare.net/DimitriosKartsaklis1/tensorbased-models-of-natural-language-semantics?qid=fd4cc5b3-a548-46a7-b929-da8246e6c530&v=&b=&from_search=2
>>
>> on the other hand, seems mathematically sound but conceptually wrong
>> in its linguistic interpretation.
>>
>> It constructs a nice morphism from pregroup grammars (closed cartesian
>> categories) to categories defined over vector spaces -- where the
>> vector spaces are taken to represent co-occurence vectors and such,
>> indicating word semantics...  The morphism is nice... however, the
>> idea that semantics consists of numerical vectors is silly ...
>> semantics is much richer than that
>>
>> If we view grammar as link-grammar/pregroup-grammar/asymmetric-CCC ...
>> we should view semantics as {probabilistic TTR / locally cartesian
>> closed category *plus* numerical-vectors/linear-algebra}
>>
>> I.e. semantics has a distributional aspect AND ALSO a more explicitly
>> logical aspect
>>
>> Trying to push all of semantics into distributional word vectors
>> leads to insane complexities like modeling determiners using
>> Frobenius algebras... which is IMO just not sensible ... it's trying
>> to achieve a certain sort of mathematical simplicity that does not
>> reflect the kind of simplicity seen in natural systems like natural
>> language...
>>
>> Instead I would say RelEx+R2L+ECAN (on language) +
>> word-frequency-analysis can be viewed as enacting a morphism from:
>>
>> -- link grammar, which is apparently equivalent to pregroup grammar,
>> which is a nonsymmetric cartesian closed category
>>
>> to the product of
>>
>> -- lambda calculus endowed with the probabilistic TTR type system,
>> which is a locally cartesian closed category
>>
>> -- the algebra of finite-dimensional vector spaces
>>
>> This approach accepts fundamental heterogeneity in semantic
>> representation...
>>
>> -- Ben
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> Super-benevolent super-intelligence is the thought the Global Brain is
>> currently struggling to form...
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "link-grammar" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at https://groups.google.com/group/link-grammar.
> For more options, visit https://groups.google.com/d/optout.



-- 
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...

