Linas,

Actually, even after more thought, I still don't (yet) see why linear
logic is needed here...

In PLN, each statement is associated with at least two numbers

(strength, count)

Let's consider for now the case where the strength is just a probability...

Then in the guilt/innocence case, if you have no evidence about the
guilt or innocence, you have count = 0 ... So you don't have to
represent ignorance as p = .6 ... you can represent it as

(p,n) = (*,0)

The count is the number of observations made to arrive at the strength figure...
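
The (strength, count) pair can be sketched in a few lines of code -- a
minimal illustration only, not OpenCog's actual TruthValue API:

```python
# Minimal sketch of a PLN-style truth value as a (strength, count) pair.
# Illustrative only -- not OpenCog's actual TruthValue API.

class TruthValue:
    def __init__(self, strength, count):
        self.strength = strength  # probability estimate in [0, 1]
        self.count = count        # number of observations behind it

    def is_ignorant(self):
        # count == 0 means the strength carries no information:
        # this is the (p, n) = (*, 0) case.
        return self.count == 0

# Total ignorance about guilt -- no need for an arbitrary p:
unknown_guilt = TruthValue(strength=0.5, count=0)
print(unknown_guilt.is_ignorant())  # True
```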

PLN count rules propagate counts from premises to conclusions, and if
everything is done right without double-counting of evidence, then the
amount of evidence (number of observations) supporting the conclusion
is less than or equal to the amount of evidence supporting the
premises...
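
A hedged sketch of that propagation rule (the discount factor here is
purely illustrative; the real PLN count rules do more careful
bookkeeping to avoid double-counting shared evidence):

```python
# Illustrative count-propagation rule: the evidence supporting a
# conclusion is bounded by the evidence supporting its premises.
# The discount factor is a stand-in for PLN's more careful bookkeeping.

def propagate_count(premise_counts, discount=0.9):
    if not premise_counts:
        return 0.0
    # Never more evidence than the weakest premise provides.
    return discount * min(premise_counts)

print(propagate_count([100, 40]))  # 36.0 -- at most 40
```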

This does not handle estimation of resource utilization in inference,
but it does handle the guilt/innocence example

As for the resource utilization issue, certainly one can count the
amount of space and time resources used in drawing a certain inference
... and one can weight an inference chain via the amount of resources
it uses... and one can prioritize less expensive inferences in doing
inference control.  This will result in inferences that are "simpler"
in the sense of resource utilization, and hence more plausible
according to some variant of Occam's Razor...
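
That control strategy can be sketched as a priority queue over candidate
inference chains; the names and the cost model below are made up for
illustration:

```python
# Toy inference control: try cheaper inference chains first, giving an
# Occam-style bias toward "simpler" proofs. Cost model is illustrative.

import heapq

def cheapest_first(chains):
    """chains: iterable of (resource_cost, chain_name) pairs;
    yields chain names in order of increasing cost."""
    heap = list(chains)
    heapq.heapify(heap)
    while heap:
        _cost, name = heapq.heappop(heap)
        yield name

order = list(cheapest_first([(12.0, "long_chain"),
                             (3.0, "short_chain"),
                             (7.5, "medium_chain")]))
print(order)  # ['short_chain', 'medium_chain', 'long_chain']
```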

But this is layering resource-awareness on top of the logic, and using
it in the control aspect, rather than sticking it into the logic as
linear and affine logic do...

The textbook linear logic example of

"I have $5" ==> I can buy a sandwich
"I have $5" ==> I can buy a salad
|- (oops?)
"I have $5" ==> I can buy a sandwich and I can buy a salad

doesn't impress me much; I mean, you should just say

If I have $5, I can exchange $5 for a sandwich
If I have $5, I can exchange $5 for a salad
After I exchange $X for something else, I don't have $X anymore

or whatever, and that expresses the structure of the situation more
nicely than putting the nature of exchange into the logical deduction
apparatus....  There is no need to complicate one's logic just to
salvage a crappy representational choice...
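
The point can be made concrete with a toy state-based model (all names
here are illustrative): the exchange itself removes the money from the
state, so ordinary logic over states suffices, with no linear-logic
machinery:

```python
# Toy state-based representation of the $5 example: the exchange itself
# consumes the money, so ordinary logic over states suffices.

def buy(wallet, item, price):
    """Exchange money for an item; afterwards the money is gone."""
    if wallet < price:
        raise ValueError("insufficient funds")
    return wallet - price, item

wallet = 5
wallet, lunch = buy(wallet, "sandwich", 5)
try:
    wallet, salad = buy(wallet, "salad", 5)
except ValueError:
    print("can't buy the salad too")  # the $5 is already gone
```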

In linear logic, it is no longer the case that, given A implies B and
given A, one can deduce both A and B ...

In PLN, if one has

A <sA, nA>
(ImplicationLink A B) <sAB, nAB>

one can deduce

B <sB,nB>

but there is some math to do to deduce sB and nB, and one can base
this math on various assumptions including independence assumptions,
assumptions about the shapes of concepts, etc.
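
As a rough illustration of that math -- using a naive
independence-style assumption (P(B|~A) approximated by a prior) and a
min-count rule, not the actual PLN formulas:

```python
# Rough sketch of deriving B <sB, nB> from A <sA, nA> and
# (ImplicationLink A B) <sAB, nAB>. Naive assumptions for illustration.

def modus_ponens(sA, nA, sAB, nAB, prior_B=0.5):
    # P(B) = P(B|A)P(A) + P(B|~A)P(~A), with P(B|~A) ~ prior_B
    sB = sAB * sA + prior_B * (1.0 - sA)
    # Conclusion evidence is bounded by the premises' evidence.
    nB = min(nA, nAB)
    return sB, nB

sB, nB = modus_ponens(sA=0.8, nA=50, sAB=0.9, nAB=30)
print(sB, nB)  # sB ~ 0.82, nB = 30
```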

In short, I think if we extend probabilistic TTR to be "TTR with <p,n>
truth values", then we can use lambda calculus with a type system
drawn from TTR and with each statement labeled with a <p,n> truth
value ... and then we can handle the finitude of evidence without
needing to complicate the base logic...

A coherent and sensible way to assess <p,n> truth values for
statements with quantified variables was given by me and Matt in 2008,
in

http://www.agiri.org/IndefiniteProbabilities.pdf

Don't let the third-order probabilities worry you ;)

...

In essence, it seems, the linear logic folks push a bunch of
complexity into the logic itself, whereas Matt and I pushed the
complexity into the truth values, and the Occam bias on proofs (into
which resource utilization should be factored)

-- Ben



On Tue, Aug 30, 2016 at 6:52 PM, Linas Vepstas <[email protected]> wrote:
> Hi Ben,
>
> Well, it might not have to be linear, it might be affine, I have not thought
> it through myself. What is clear is that cartesian is wrong.
>
> The reason I keep repeating the guilt/innocence example is that it's not just
> the "exclusive nature of disjuncts in link-grammar", but rather, that it is
> a generic real-world reasoning problem.
>
> I think I understand one of the points of confusion, though. In digital
> circuit verification (semiconductor chip industry), everyone agrees that the
> chips themselves behave according to classical boolean logic - it's all just
> ones and zeros.  However, in verification, you have to prove that a
> particular chip design is working correctly.  That proof process does NOT
> use classical logic to achieve its ends -- it does use linear logic!
> Specifically, the proof process goes through sequences of Kripke frames,
> where you verify that certain ever-larger parts of the chip are behaving
> correctly, and you use the frames to keep track of how the various
> combinatorial possibilities feed back into one another.  Visualize it as a
> kind of lattice: at first, you have a combinatoric explosion, a kind of tree
> or vine, but then later, the branches join back together again, into a
> smaller collection. Those that fail to join up are either incompletely
> modelled, or indicate a design error in the chip.
>
> There's another way of thinking of chip verification: one might say, in any
> given universe/Kripke frame, that a given transistor is in one of three
> states: on, off, or "don't know", with the "don't know" state corresponding
> to the "we haven't simulated/verified that one yet".   The collection of
> possible universes shrinks, as you eliminate the "don't know" states during
> the proof process.  This kind of three-valued reasoning is closely related
> to intuitionistic logic, which in turn has assorted close relationships to
> linear logic.
>
> These same ideas should generalize to PLN:  although PLN is itself a
> probabilistic logic, and I do not advocate changing that, the actual
> chaining process, the proof process of arriving at conclusions in PLN,
> cannot and must not be.
>
> I hope the above pins down the source of confusion, when we talk about these
> things.  The logic happening at the proof level, the ludics level, is very
> different from the structures representing real-world knowledge.
>
> --linas
>
> On Tue, Aug 30, 2016 at 9:28 AM, Ben Goertzel <[email protected]> wrote:
>>
>> Linas,
>>
>> Alas my window of opportunities for writing long emails on math-y
>> stuff has passed, so I'll reply to your email more thoroughly in a
>> couple days...
>>
>> However, let me just say that I am not so sure linear logic is what we
>> really want....  I understand that we want to take resource usage into
>> account in our reasoning generally... and that in link grammar we want
>> to account for the particular exclusive nature of the disjuncts ...
>> but I haven't yet convinced myself linear logic is necessarily the
>> right way to do this... I need to take a few hours and reflect on it
>> more and try to assuage my doubts on this (or not)
>>
>> -- ben
>>
>>
>> On Tue, Aug 30, 2016 at 6:14 AM, Linas Vepstas <[email protected]>
>> wrote:
>> > It will take me a while to digest this fully, but one error/confusion
>> > (and
>> > very important point) pops up immediately: link-grammar is NOT
>> > cartesian,
>> > and we most definitely do not want cartesian-ness in the system.  That
>> > would
>> > destroy everything interesting, everything that we want to have.  Here's
>> > the
>> > deal:
>> >
>> > When we parse in link-grammar, we create multiple parses.  Each parse
>> > can be
> considered to "live" in its own unique world or universe (its own Kripke
> frame). These universes are typically incompatible with each other: they
>> > conflict. Only one parse is right, the others are wrong (typically --
>> > although sometimes there are some ambiguous cases, where more than one
>> > parse
>> > may be right, or where one parse might be 'more right' than another).
>> >
>> > These multiple incompatible universes are symptomatic of a "linear type
>> > system".  Now, linear type theory finds applications in several places:
>> > it
>> > can describe parallel computation (each universe is a parallel thread)
>> > and
>> > also mutex locks and synchronization, and also vending machines: for one
>> > dollar you get a menu selection of items to pick from -- the ChoiceLink
>> > that
>> > drove Eddie nuts.
>> >
>> > The linear type system is the type system of Linear logic, which is the
>> > internal language of the closed monoidal categories, of which the closed
>> > cartesian categories are a special case.
>> >
>> > Let me return to multiple universes -- we also want this in PLN
>> > reasoning. A
>> > man is discovered standing over a dead body, a bloody sword in his hand
>> > --
>> > did he do the deed, or is he simply the first witness to stumble onto
>> > the
>> > scene?  What is the evidence pro and con?
>> > This scenario describes two parallel universes: one in which he is
>> > guilty,
>> > and one in which he is not. It is the job of the prosecutor, defense,
>> > judge
>> > and jury to figure out which universe he belongs to.  The mechanism is a
>> > presentation of evidence and reasoning and deduction and inference.
>> >
>> > Please be hyper-aware of this, and don't get confused: just because we
>> > do
> not know his guilt does not mean he is "half-guilty" -- just like an
> unflipped coin is not some blurry, vague superimposition of heads and
>> > tails.
>> >
>> > Instead, as the evidence rolls in, we want to find that the probability
>> > of
>> > one universe is increasing, while the probability of the other one is
> decreasing.  It's just one guy -- he cannot be both guilty and innocent
>> > --
> one universe must eventually be the right one, and it can be the only
>> > one.
>> > (this is perhaps more clear in 3-way choices, or 4-way choices...)
>> >
>> > Anyway, the logic of these parallel universes is linear logic, and the
>> > type
>> > theory is linear type theory, and the category is closed monoidal.
>> >
>> > (Actually, I suspect that we might want to use affine logic, which is
>> > per
>> > wikipedia "a substructural logic whose proof theory rejects the
>> > structural
>> > rule of contraction. It can also be characterized as linear logic with
>> > weakening.")
>> >
>> > Anyway, another key point: lambda calculus is the internal language of
>> > *cartesian* closed categories.  It is NOT compatible with linear logic
>> > or
>> > linear types.   This is why I said in a different email, that "this way
>> > lies
>> > madness".  Pursuit of lambda calc will leave us up a creek without a
>> > paddle,
>> > it will prevent us from being able to apply PLN to guilty/not-guilty
>> > court
>> > cases.
>> >
>> > ----
>> > BTW, vector spaces are NOT cartesian closed! They are the prime #1 most
>> > common example of where one can have tensor-hom adjunction, i.e. can do
>> > currying, and NOT be cartesian!  Vector spaces *are* closed-monoidal.
>> >
>> > The fact that some people are able to map linguistics onto vector spaces
>> > (although with assorted difficulties/pathologies) re-affirms that
>> > closed-monoidal is the way to go.  The reason that linguistics maps
>> > poorly
>> > onto vector spaces is due to their symmetry -- the linguistics is NOT
>> > symmetric, the vector spaces are.    So what we are actually doing (or
>> > need
>> > to do) is develop the infrastructure for *cough cough* a non-symmetric
>> > vector space.. which is kind-of-ish what the point of the categorial
>> > grammars is.
>> >
>> > Enough for now.
>> >
>> > --linas
>> >
>> >
>> > On Mon, Aug 29, 2016 at 4:41 PM, Ben Goertzel <[email protected]> wrote:
>> >>
>> >> Linas, Nil, etc. --
>> >>
>> >> This variation of type theory
>> >>
>> >> http://www.dcs.kcl.ac.uk/staff/lappin/papers/cdll_lilt15.pdf
>> >>
>> >> seems like it may be right for PLN and OpenCog ... basically,
>> >> dependent type theory with records (persistent memory) and
>> >> probabilities ...
>> >>
>> >> If we view PLN as having this sort of semantics, then RelEx+R2L is
>> >> viewed as enacting a morphism from:
>> >>
>> >> -- link grammar, which is apparently equivalent to pregroup grammar,
>> >> which is a nonsymmetric cartesian closed category
>> >>
>> >> to
>> >>
>> >> -- lambda calculus endowed with the probabilistic TTR type system,
>> >> which is a locally cartesian closed category
>> >>
>> >>
>> >>
>> >> https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory#DependentTypeTheory
>> >>
>> >> For the value of dependent types in natural language semantics, see
>> >> e.g.
>> >>
>> >>
>> >>
>> >> http://www.slideshare.net/kaleidotheater/hakodate2015-julyslide?qid=85e8a7fc-f073-4ded-a2c8-9622e89fd07d&v=&b=&from_search=1
>> >>
>> >> (the examples regarding anaphora in the above are quite clear)
>> >>
>> >>
>> >>
>> >> https://ncatlab.org/nlab/show/dependent+type+theoretic+methods+in+natural+language+semantics
>> >>
>> >> This paper
>> >>
>> >>
>> >>
>> >> http://www.slideshare.net/DimitriosKartsaklis1/tensorbased-models-of-natural-language-semantics?qid=fd4cc5b3-a548-46a7-b929-da8246e6c530&v=&b=&from_search=2
>> >>
>> >> on the other hand, seems mathematically sound but conceptually wrong
>> >> in its linguistic interpretation.
>> >>
>> >> It constructs a nice morphism from pregroup grammars (closed cartesian
>> >> categories) to categories defined over vector spaces -- where the
>> >> vector spaces are taken to represent co-occurence vectors and such,
>> >> indicating word semantics....  The morphism is nice... however, the
>> >> idea that semantics consists of numerical vectors is silly ...
>> >> semantics is much  richer than that
>> >>
>> >> If we view grammar as link-grammar/pregroup-grammar/asymmetric-CCC ...
> >> we should view semantics as {probabilistic TTR / locally cartesian
> >> closed category *plus* numerical-vectors/linear-algebra}
>> >>
>> >> I.e. semantics has a distributional aspect AND ALSO a more explicitly
>> >> logical aspect
>> >>
>> >> Trying to push all of semantics into distributional word vectors,
>> >> leads them into insane complexities like modeling determiners using
> >> Frobenius algebras... which is IMO just not sensible ... it's trying
>> >> to achieve a certain sort of mathematical simplicity that does not
>> >> reflect the kind of simplicity seen in natural systems like natural
>> >> language...
>> >>
>> >> Instead I would say RelEx+R2L+ECAN (on language) +
>> >> word-frequency-analysis can be viewed as enacting a morphism from:
>> >>
>> >> -- link grammar, which is apparently equivalent to pregroup grammar,
>> >> which is a nonsymmetric cartesian closed category
>> >>
>> >> to the product of
>> >>
>> >> -- lambda calculus endowed with the probabilistic TTR type system,
>> >> which is a locally cartesian closed category
>> >>
>> >> -- the algebra of finite-dimensional vector spaces
>> >>
>> >> This approach accepts fundamental heterogeneity in semantic
>> >> representation...
>> >>
>> >> -- Ben
>> >>
>> >> --
>> >> Ben Goertzel, PhD
>> >> http://goertzel.org
>> >>
>> >> Super-benevolent super-intelligence is the thought the Global Brain is
>> >> currently struggling to form...
>> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "link-grammar" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to [email protected].
>> > To post to this group, send email to [email protected].
>> > Visit this group at https://groups.google.com/group/link-grammar.
>> > For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> Super-benevolent super-intelligence is the thought the Global Brain is
>> currently struggling to form...
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "opencog" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at https://groups.google.com/group/opencog.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/opencog/CACYTDBeRwpZiNS%3DShs2YfrRbKkC426v3z51oz6-Fc8u7SoJ8Mw%40mail.gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...

