And so here is the blog post -- it's a lightly reformatted version of this
email, with lots of links to Wikipedia and a few papers.

http://blog.opencog.org/2016/08/31/many-worlds-reasoning-about-reasoning/

I really really hope that this clarifies something that is often seen as
mysterious.

--linas

On Wed, Aug 31, 2016 at 4:16 PM, Linas Vepstas <[email protected]>
wrote:

> Hi Ben,
>
> What's TTR?
>
> We can talk about link-grammar, but I want to talk about something a
> little bit different: not PLN, but the *implementation* of PLN.   This
> conversation requires resolving the "category error" email I sent, just
> before this.
>
> Thus, I take PLN as a given, including the formulas that PLN uses, and
> every possible example of *using* PLN that you could throw my way.  I have
> no quibble about any of those examples, or with the formulas, or with
> anything like that. I have no objections to the design of the PLN rules of
> inference.
>
> What I want to talk about is how the PLN rules of inference are
> implemented in the block of C++ code in github.   I also want to assume
> that the implementation is complete, and completely bug-free (even though
> it's not, but let's assume it is.)
>
> Now, PLN consists of maybe a half-dozen or a dozen rules of inference.
> They have names like "modus ponens" but we could call them just "rule MP"
> ... or just "rule A", "rule B", etc...
>
> Suppose I start with some atomspace contents, and apply the PLN rule A. As
> a result of this application, we have a "possible world 1".  If, instead,
> we started with the same original atomspace contents as before, but applied
> rule B, then we would get "possible world 2".  It might also be the case
> that PLN rule A can be applied to some different atoms from the atomspace,
> in which case, we get "possible world 3".
>
> Each possible world consists of the triple (some subset of the atomspace,
> some PLN inference rule, the result of applying the PLN rule to the input).
>
>
> Please note that some of these possible worlds are invalid or empty: it
> might not be possible to apply the chosen PLN rule to the chosen subset of
> the atomspace.  I guess we should call these "impossible worlds".  You can
> say that their probability is exactly zero.
>
> Observe that the triple above is an arrow:  the tail of the arrow is "some
> subset of the atomspace", the head of the arrow is "the result of applying
> PLN rule X", and the shaft of the arrow is given a name: it's "rule X".
>
> (in fancy-pants, peacock language, the arrows are morphisms, and the
> slinging-together, here, forms Kripke frames. But let's avoid the fancy
> language, since it just confuses things right now.)
>
> Anyway -- this process clearly results in a very shallow
> tree, with the original atomspace as the root, and each branch of the tree
> corresponding to a possible world.  Note that each possible world is a new
> and different atomspace: The rules of the game here are that we are NOT
> allowed to dump the results of the PLN inference back into the original
> atomspace.  Instead, we MUST fork the atomspace.  Thus, if we have N
> possible worlds, then we have N distinct atomspaces. (not counting the
> original, starting atomspace)
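> To make the forking concrete, here is a minimal sketch (all names are
> hypothetical; atomspaces are plain sets and rules are plain functions here,
> nothing like the real C++ implementation):

```python
from itertools import combinations

def possible_worlds(atomspace, rules, max_subset=2):
    """Fork the atomspace: each (subset, rule, result) triple is a new world."""
    worlds = []
    for size in range(1, max_subset + 1):
        for subset in combinations(sorted(atomspace), size):
            for name, rule in rules.items():
                result = rule(subset)
                if result is None:
                    continue          # rule not applicable: an "impossible world"
                forked = set(atomspace) | result   # fork; never mutate the original
                worlds.append((frozenset(subset), name, frozenset(forked)))
    return worlds

# One toy rule, "MP": from "A" and "IMP:A:B" derive "B".
def modus_ponens(subset):
    facts = set(subset)
    for fact in facts:
        if fact.startswith("IMP:"):
            _, ante, cons = fact.split(":")
            if ante in facts:
                return {cons}
    return None

worlds = possible_worlds({"A", "IMP:A:B"}, {"MP": modus_ponens})
# exactly one possible world here: ({"A", "IMP:A:B"}, "MP", {"A", "IMP:A:B", "B"})
```

> Each element of `worlds` is the (tail, shaft, head) arrow described above,
> and each forked set is its own distinct atomspace.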
>
> Now, for each possible world, we can apply the above procedure again.
> Naively, this is a combinatoric explosion. For the most part, each
> different possible world will be different from the others. They will share
> a lot of atoms in common, but some will be different.
>
> Note, also, that *some* of these worlds will NOT be different, but will
> converge, or be "confluent", arriving at the same atomspace contents along
> different routes.  So, although, naively, we have a highly branching tree,
> it should be clear that sometimes, some of the branches come back together
> again.
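> A sketch of that confluence check, under the same toy encoding (worlds are
> just (route, contents) pairs; hypothetical names):

```python
def confluent_classes(worlds):
    """Group derivation routes that arrive at identical atomspace contents."""
    classes = {}
    for route, contents in worlds:
        classes.setdefault(frozenset(contents), []).append(route)
    return classes

# Two routes reach the same contents: those branches have rejoined.
worlds = [
    (("rule A", "rule B"), {"x", "y", "z"}),
    (("rule B", "rule A"), {"x", "y", "z"}),
    (("rule C",),          {"x", "w"}),
]
classes = confluent_classes(worlds)
# len(classes) == 2, and the {"x","y","z"} class holds two routes
```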
>
> I already pointed out that some of the worlds are "impossible" i.e. have a
> probability of zero. These can be discarded.  But wait, there's more.
> Suppose that one of the possible worlds contains the statement "John
> Kennedy is alive" (with a very very high confidence) , while another one
> contains the statement "John Kennedy is dead" (with a very very high
> confidence).  What I wish to claim is that, no matter what future PLN
> inferences might be made, these two worlds will never become confluent.
>
> There is also a different effect: during inferencing, one might find
> oneself in a situation where the atoms being added to the atomspace, at
> each inference step, have lower and lower probability. At some point, this
> suggests that one should just plain quit -- that particular branch is just
> not going anywhere. It's a dead end.
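> Both pruning criteria fit in a few lines (a sketch; the cutoff value is an
> arbitrary assumption, not anything from the actual code):

```python
def prune(branches, floor=0.01):
    """Drop impossible worlds (p == 0) and dead-end branches whose running
    probability has sunk below the cutoff."""
    return [(name, p) for name, p in branches if p > 0.0 and p >= floor]

branches = [("world-1", 0.4), ("world-2", 0.0), ("world-3", 0.002)]
kept = prune(branches)    # only world-1 survives
```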
>
> OK, that's it, I think, for the overview.   Now for some commentary.
>
> First (let's get it out of the way now): the above describes *exactly* how
> link-grammar works.  For "atomspace" substitute "linkage" and for "PLN rule
> of inference" substitute "disjunction".  That's it. End of story (QED).
>
> Notice that each distinct linkage in link-grammar is a distinct
> possible-world. The result of parsing is to create a list of possible
> worlds (linkages, aka "parses").  Now, link-grammar has a "cost system"
> that assigns different probabilities (different costs) to each possible
> world: this is "parse ranking": some parses (linkages) are more likely than
> others.
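> A sketch of parse ranking, under the assumption that a cost behaves like a
> negative log-probability (that reading is my gloss here, not a claim about
> the actual link-grammar cost system):

```python
import math

def rank_parses(parses):
    """Rank linkages by total cost, reading cost as -log(probability):
    lower cost means a more likely parse."""
    ranked = sorted(parses, key=lambda p: p[1])
    weights = [math.exp(-cost) for _, cost in ranked]
    total = sum(weights)
    return [(name, w / total) for (name, _), w in zip(ranked, weights)]

ranked = rank_parses([("parse-1", 0.0), ("parse-2", 1.0), ("parse-3", 3.0)])
# parse-1 comes first, and the normalized probabilities sum to 1
```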
>
> Note that each different parse is, in a sense, "not compatible" with every
> other parse.  Two different parses may share common elements, but other
> parts will differ.
>
> Claim: the link-grammar is a closed monoidal category, where the words are
> the objects, and the disjuncts are the morphisms. I don't have the time or
> space to articulate this claim, so you'll have to take it on faith, or
> think it through, or compare it to other papers on categorial grammar e.g.
> the Bob Coecke paper op. cit.  It is useful to think of link-grammar
> disjuncts as jigsaw-puzzle pieces, and the act of parsing as the act of
> assembling a jigsaw puzzle.  (See the original LG paper for a picture of the
> jigsaw pieces.  The Coecke paper also draws them. So does the Baez "rosetta
> stone" paper, though not as firmly.)
>
> Theorem: the act of applying PLN, as described above, is a closed monoidal
> category.
> Proof:  A "PLN rule of inference" is, abstractly, exactly the same thing
> as a link-grammar disjunct. The contents of the atomspace is exactly the
> same thing as a (partially or fully) parsed sentence.  QED.
>
> There is nothing more to this proof than that.  I mean, it can be fleshed
> out in much greater detail, but that's the gist of it.
>
> Observe two very important things:  (1) during the proof, I never once had
> to talk about modus ponens, or any of the other PLN inference rules.  (2)
> during the proof, I never had to invoke the specific mathematical formulas
> that compute the TV's -- that compute the strength and confidence.   Both
> of these aspects of PLN are completely and utterly irrelevant to the
> proof.  The only thing that mattered is that PLN takes, as input, some
> atoms, and applies some transformation, and generates atoms. That's it.
>
> The above theorem is *why* I keep talking about possible worlds and
> kripke-blah-blah and intuitionistic logic, and linear logic. It's got
> NOTHING TO DO WITH THE ACTUAL PLN RULES!!! the only thing that matters is
> that there are rules, that get applied in some way.  The generic properties
> of linear logic and etc. are the generic properties of rule systems and
> kripke frames. Examples of such rule systems include link-grammar, PLN,
> NARS, classical logic, and many more.  The details of the specific rule
> system do NOT alter the fundamental process of rule application aka
> "parsing" aka "reasoning" aka "natural deduction" aka "sequent calculus".
>  Confusing the details of PLN with the act of parsing is a category error:
> the logic that describes parsing is not PLN, and PLN does not describe
> parsing: it's a category error to intermix the two.
>
> Phew.
>
> What remains to be done:  I believe that what I describe above, the
> "many-worlds hypothesis" of reasoning, can be used to create a system that
> is far more efficient than the current PLN backward/forward chainer.  It's
> not easy, though: the link-parser algorithm struggles with the combinatoric
> explosion, and has some deep, tricky techniques to beat it down.  ECAN was
> invented to deal with the explosion in PLN.  There are other ways.
>
> By the way: the act of merging the results of a PLN inference back into
> the original atomspace corresponds, in a very literal sense, to a "wave
> function collapse". As long as you keep around multiple atomspaces, each
> containing partial results, you have "many worlds", but every time you
> discard or merge some of these atomspaces back into one, it's a "collapse".
> That includes some of the TV merge rules that plague the system.
>
> Next, I plan to convert this email into a blog post.
>
> --linas
>
>
> On Wed, Aug 31, 2016 at 1:05 AM, Ben Goertzel <[email protected]> wrote:
>
>> Regarding link parses and possible worlds...
>>
>> In the TTR paper they point out that "possible worlds" is somehow
>> conceptually misleading terminology, and it may often be better to
>> think about "possible situations" (in a deep sense each possible
>> situation is some distribution over possible worlds, but it may rarely
>> be necessary to go that far)
>>
>> In that sense, we can perhaps view the type of a link parse as a
>> dependent type, that depends upon the situation ... (?)
>>
>> This is basically the same as viewing the link-parser itself as a
>> function that takes (sentence, dictionary) pairs into functions that
>> map situations into sets of link-parse-links  [but the latter is a
>> more boring and obvious way of saying it ;p]
>>
>> But again, I don't (yet) see why linear logic would be required
>> here... it seems to me something like TTR with <p,n> truth values is
>> good enough, and we can handle resource management on the "Occam's
>> razor" level
>>
>> As you already know (but others may not have thought about), weighting
>> possible link parses via their probabilities based on a background
>> corpus is itself a form of "resource usage based Occam's razor
>> weighting".   Because the links and link-combinations with higher
>> probability based on the corpus, are the ones that the OpenCog system
>> doing the parsing has more reason to retain in the Atomspace --- thus
>> for higher-weighted links or link-combinations, the "marginal memory
>> usage" required to keep those links/link-combinations in memory is
>> less.  So we can view the probability weighting of a potential parse
>> as proportional to the memory-utilization-cost of that parse, in the
>> context of a system with a long-term memory full of other parses from
>> some corpus (or some body of embodied linguistic experience,
>> whatever...).....
>>
>> Currently it seems to me that the probabilistic weighting of parses
>> (corresponding to possible situations) is already handling
>> resource-management implicitly and we don't need linear logic to do
>> that here...
>>
>> Of course these things are pretty subtle when you really think about
>> them, and I may be missing something...
>>
>> ben
>>
>>
>> On Wed, Aug 31, 2016 at 1:10 AM, Ben Goertzel <[email protected]> wrote:
>> >  Linas,
>> >
>> > Actually, even after more thought, I still don't (yet) see why linear
>> > logic is needed here...
>> >
>> > In PLN, each statement is associated with at least two numbers
>> >
>> > (strength, count)
>> >
>> > Let's consider for now the case where the strength is just a probability...
>> >
>> > Then in the guilt/innocence case, if you have no evidence about the
>> > guilt or innocence, you have count =0 ....  So you don't have to
>> > represent ignorance as p=.6 ... you can represent it as
>> >
>> > (p,n) = (*,0)
>> >
>> > The count is the number of observations made to arrive at the strength
>> > figure...
>> >
>> > PLN count rules propagate counts from premises to conclusions, and if
>> > everything is done right without double-counting of evidence, then the
>> > amount of evidence (number of observations) supporting the conclusion
>> > is less than or equal to the amount of evidence supporting the
>> > premises...
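>> > The no-double-counting bound can be caricatured in one line (a toy
>> > simplification of mine; the real PLN count formulas are more refined):

```python
def deduce_count(premise_counts):
    """Evidence for a conclusion cannot exceed the evidence for its weakest
    premise, assuming no double-counting of evidence."""
    return min(premise_counts) if premise_counts else 0

IGNORANCE = (None, 0)    # (p, n) = (*, 0): strength undefined, zero evidence

n_conclusion = deduce_count([120, 35])   # bounded by the weaker premise
```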
>> >
>> > This does not handle estimation of resource utilization in inference,
>> > but it does handle the guilt/innocence example
>> >
>> > As for the resource utilization issue, certainly one can count the
>> > amount of space and time resources used in drawing a certain inference
>> > ... and one can weight an inference chain via the amount of resources
>> > it uses... and one can prioritize less expensive inferences in doing
>> > inference control.  This will result in inferences that are "simpler"
>> > in the sense of resource utilization, and hence more plausible
>> > according to some variant of Occam's Razor...
>> >
>> > But this is layering resource-awareness on top of the logic, and using
>> > it in the control aspect, rather than sticking it into the logic as
>> > linear and affine logic do...
>> >
>> > The textbook linear logic example of
>> >
>> > "I have $5" ==> I can buy a sandwich
>> > "I have $5" ==> I can buy a salad
>> > |- (oops?)
>> > "I have $5" ==> I can buy a sandwich and I can buy a salad
>> >
>> > doesn't impress me much; I mean, you should just say
>> >
>> > If I have $5, I can exchange $5 for a sandwich
>> > If I have $5, I can exchange $5 for a salad
>> > After I exchange $X for something else, I don't have $X anymore
>> >
>> > or whatever, and that expresses the structure of the situation more
>> > nicely than putting the nature of exchange into the logical deduction
>> > apparatus....  There is no need to complicate one's logic just to
>> > salvage a crappy representational choice...
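>> > That point -- explicit state handles the sandwich/salad case without
>> > touching the logic -- can be sketched directly (hypothetical names and
>> > prices):

```python
from collections import Counter

PRICES = {"sandwich": 5, "salad": 5}

def buy(wallet, item):
    """Exchange dollars for an item; the dollars are gone afterwards.
    Returns the new state, or None if the buyer can't afford it."""
    if wallet["dollars"] < PRICES[item]:
        return None
    after = Counter(wallet)
    after["dollars"] -= PRICES[item]
    after[item] += 1
    return after

state = buy(Counter(dollars=5), "sandwich")   # succeeds; the $5 is consumed
both = buy(state, "salad")                    # fails: no $5 left for the salad
```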
>> >
>> > In linear logic: It is no longer the case that given A implies B and
>> > given A, one can deduce both A and B ...
>> >
>> > In PLN, if one has
>> >
>> > A <sA, nA>
>> > (ImplicationLink A B) <sAB, nAB>
>> >
>> > one can deduce
>> >
>> > B <sB,nB>
>> >
>> > but there is some math to do to deduce sB and nB, and one can base
>> > this math on various assumptions including independence assumptions,
>> > assumptions about the shapes of concepts, etc.
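>> > A toy version of that deduction step (the prior_B term -- B's rate when
>> > A fails -- is an explicit assumption of this sketch, not the actual PLN
>> > math):

```python
def deduce(sA, nA, sAB, nAB, prior_B=0.5):
    """Toy deduction of B from A and (A => B).  sB mixes the implication's
    strength with an assumed background rate for B when A is false; nB is
    the evidence bound from the premises."""
    sB = sA * sAB + (1.0 - sA) * prior_B
    nB = min(nA, nAB)    # conclusion evidence bounded by the premises
    return sB, nB

sB, nB = deduce(sA=0.9, nA=50, sAB=0.8, nAB=30)
# sB is about 0.77, nB == 30
```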
>> >
>> > In short I think if we extend probabilistic TTR to be "TTR with <p,n>
>> > truth values", then we can use lambda calculus with a type system
>> > drawn from TTR and with each statement labeled with a <p,n> truth
>> > value ... and then we can handle the finitude of evidence without
>> > needing to complicate the base logic...
>> >
>> > A coherent and sensible way to assess <p,n> truth values for
>> > statements with quantified variables was given by me and Matt in 2008,
>> > in
>> >
>> > http://www.agiri.org/IndefiniteProbabilities.pdf
>> >
>> > Don't let the third-order probabilities worry you ;)
>> >
>> > ...
>> >
>> > In essence, it seems, the linear logic folks push a bunch of
>> > complexity into the logic itself, whereas Matt and I pushed the
>> > complexity into the truth values, and the Occam bias on proofs (into
>> > which resource utilization should be factored)
>> >
>> > -- Ben
>> > .
>> >
>> >
>> >
>> > On Tue, Aug 30, 2016 at 6:52 PM, Linas Vepstas <[email protected]>
>> wrote:
>> >> Hi Ben,
>> >>
>> >> Well, it might not have to be linear, it might be affine; I have not
>> >> thought it through myself. What is clear is that cartesian is wrong.
>> >>
>> >> The reason I keep repeating the guilt/innocence example is that it's not
>> >> just the "exclusive nature of disjuncts in link-grammar", but rather that
>> >> it is a generic real-world reasoning problem.
>> >>
>> >> I think I understand one of the points of confusion, though. In digital
>> >> circuit verification (the semiconductor chip industry), everyone agrees
>> >> that the chips themselves behave according to classical boolean logic --
>> >> it's all just ones and zeros.  However, in verification, you have to
>> >> prove that a particular chip design is working correctly.  That proof
>> >> process does NOT use classical logic to achieve its ends -- it does use
>> >> linear logic!  Specifically, the proof process goes through sequences of
>> >> Kripke frames, where you verify that certain ever-larger parts of the
>> >> chip are behaving correctly, and you use the frames to keep track of how
>> >> the various combinatorial possibilities feed back into one another.
>> >> Visualize it as a kind of lattice: at first, you have a combinatoric
>> >> explosion, a kind of tree or vine, but then later, the branches join
>> >> back together again, into a smaller collection. Those that fail to join
>> >> up are either incompletely modelled, or indicate a design error in the
>> >> chip.
>> >>
>> >> There's another way of thinking of chip verification: one might say, in
>> >> any given universe/Kripke frame, that a given transistor is in one of
>> >> three states: on, off, or "don't know", with the "don't know" state
>> >> corresponding to the "we haven't simulated/verified that one yet".  The
>> >> collection of possible universes shrinks as you eliminate the "don't
>> >> know" states during the proof process.  This kind of tri-valued logic is
>> >> called "intuitionistic logic" and has assorted close relationships to
>> >> linear logic.
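>> >> The on/off/don't-know transistor states behave like Kleene's strong
>> >> three-valued logic; a minimal sketch:

```python
# Strong Kleene three-valued logic: True, False, or None ("don't know").
def k_and(a, b):
    if a is False or b is False:
        return False     # one known 0 pins the output, even if the other is unknown
    if a is None or b is None:
        return None      # otherwise any unknown input makes the output unknown
    return True

def k_or(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

unknown = k_and(True, None)    # still None: verification hasn't pinned this yet
pinned = k_and(False, None)    # False: the known input alone decides the gate
```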
>> >>
>> >> These same ideas should generalize to PLN: although PLN is itself a
>> >> probabilistic logic, and I do not advocate changing that, the actual
>> >> chaining process -- the proof process of arriving at conclusions in PLN
>> >> -- cannot be, and must not be.
>> >>
>> >> I hope the above pins down the source of confusion when we talk about
>> >> these things.  The logic happening at the proof level, the ludics level,
>> >> is very different from the structures representing real-world knowledge.
>> >>
>> >> --linas
>> >>
>> >> On Tue, Aug 30, 2016 at 9:28 AM, Ben Goertzel <[email protected]>
>> wrote:
>> >>>
>> >>> Linas,
>> >>>
>> >>> Alas my window of opportunities for writing long emails on math-y
>> >>> stuff has passed, so I'll reply to your email more thoroughly in a
>> >>> couple days...
>> >>>
>> >>> However, let me just say that I am not so sure linear logic is what we
>> >>> really want....  I understand that we want to take resource usage into
>> >>> account in our reasoning generally... and that in link grammar we want
>> >>> to account for the particular exclusive nature of the disjuncts ...
>> >>> but I haven't yet convinced myself linear logic is necessarily the
>> >>> right way to do this... I need to take a few hours and reflect on it
>> >>> more and try to assuage my doubts on this (or not)
>> >>>
>> >>> -- ben
>> >>>
>> >>>
>> >>> On Tue, Aug 30, 2016 at 6:14 AM, Linas Vepstas <
>> [email protected]>
>> >>> wrote:
>> >>> > It will take me a while to digest this fully, but one error/confusion
>> >>> > (and very important point) pops up immediately: link-grammar is NOT
>> >>> > cartesian, and we most definitely do not want cartesian-ness in the
>> >>> > system.  That would destroy everything interesting, everything that
>> >>> > we want to have.  Here's the deal:
>> >>> >
>> >>> > When we parse in link-grammar, we create multiple parses.  Each parse
>> >>> > can be considered to "live" in its own unique world or universe (its
>> >>> > own Kripke frame).  These universes are typically incompatible with
>> >>> > each other: they conflict.  Only one parse is right, the others are
>> >>> > wrong (typically -- although sometimes there are ambiguous cases,
>> >>> > where more than one parse may be right, or where one parse might be
>> >>> > 'more right' than another).
>> >>> >
>> >>> > These multiple incompatible universes are symptomatic of a "linear
>> >>> > type system".  Now, linear type theory finds applications in several
>> >>> > places: it can describe parallel computation (each universe is a
>> >>> > parallel thread), and also mutex locks and synchronization, and also
>> >>> > vending machines: for one dollar you get a menu selection of items to
>> >>> > pick from -- the ChoiceLink that drove Eddie nuts.
>> >>> >
>> >>> > The linear type system is the type system of Linear logic, which is
>> >>> > the internal language of the closed monoidal categories, of which the
>> >>> > closed cartesian categories are a special case.
>> >>> >
>> >>> > Let me return to multiple universes -- we also want this in PLN
>> >>> > reasoning. A man is discovered standing over a dead body, a bloody
>> >>> > sword in his hand -- did he do the deed, or is he simply the first
>> >>> > witness to stumble onto the scene?  What is the evidence pro and con?
>> >>> > This scenario describes two parallel universes: one in which he is
>> >>> > guilty, and one in which he is not. It is the job of the prosecutor,
>> >>> > defense, judge and jury to figure out which universe he belongs to.
>> >>> > The mechanism is a presentation of evidence and reasoning and
>> >>> > deduction and inference.
>> >>> >
>> >>> > Please be hyper-aware of this, and don't get confused: just because
>> >>> > we do not know his guilt does not mean he is "half-guilty" -- just
>> >>> > like an unflipped coin is not some blurry, vague superposition of
>> >>> > heads and tails.
>> >>> >
>> >>> > Instead, as the evidence rolls in, we want to find that the
>> >>> > probability of one universe is increasing, while the probability of
>> >>> > the other one is decreasing.  It's just one guy -- he cannot be both
>> >>> > guilty and innocent -- one universe must eventually be the right one,
>> >>> > and it can be the only one.  (This is perhaps more clear in 3-way
>> >>> > choices, or 4-way choices...)
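>> >>> > One way to sketch "evidence rolls in, one universe's probability
>> >>> > rises" is a plain Bayes update over the competing universes (the
>> >>> > likelihood numbers below are invented for illustration):

```python
def update(prior, likelihoods, evidence):
    """One Bayes step over competing universes: scale each universe's
    probability by how well it explains the evidence, then renormalize."""
    post = {w: prior[w] * likelihoods[w][evidence] for w in prior}
    total = sum(post.values())
    return {w: p / total for w, p in post.items()}

prior = {"guilty": 0.5, "innocent": 0.5}
likelihoods = {                              # invented numbers, for illustration
    "guilty":   {"fingerprints_on_sword": 0.9},
    "innocent": {"fingerprints_on_sword": 0.3},
}
post = update(prior, likelihoods, "fingerprints_on_sword")
# the "guilty" universe rises to 0.75; "innocent" falls to 0.25
```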
>> >>> >
>> >>> > Anyway, the logic of these parallel universes is linear logic, and
>> >>> > the type theory is linear type theory, and the category is closed
>> >>> > monoidal.
>> >>> >
>> >>> > (Actually, I suspect that we might want to use affine logic, which
>> >>> > is, per Wikipedia, "a substructural logic whose proof theory rejects
>> >>> > the structural rule of contraction. It can also be characterized as
>> >>> > linear logic with weakening.")
>> >>> >
>> >>> > Anyway, another key point: lambda calculus is the internal language
>> >>> > of *cartesian* closed categories.  It is NOT compatible with linear
>> >>> > logic or linear types.  This is why I said, in a different email,
>> >>> > that "this way lies madness".  Pursuit of lambda calc will leave us
>> >>> > up a creek without a paddle; it will prevent us from being able to
>> >>> > apply PLN to guilty/not-guilty court cases.
>> >>> >
>> >>> > ----
>> >>> > BTW, vector spaces are NOT cartesian closed! They are the #1 most
>> >>> > common example of where one can have the tensor-hom adjunction, i.e.
>> >>> > can do currying, and NOT be cartesian!  Vector spaces *are*
>> >>> > closed-monoidal.
>> >>> >
>> >>> > The fact that some people are able to map linguistics onto vector
>> >>> > spaces (although with assorted difficulties/pathologies) re-affirms
>> >>> > that closed-monoidal is the way to go.  The reason that linguistics
>> >>> > maps poorly onto vector spaces is due to their symmetry -- the
>> >>> > linguistics is NOT symmetric, the vector spaces are.  So what we are
>> >>> > actually doing (or need to do) is develop the infrastructure for
>> >>> > *cough cough* a non-symmetric vector space ... which is kind-of-ish
>> >>> > what the point of the categorial grammars is.
>> >>> >
>> >>> > Enough for now.
>> >>> >
>> >>> > --linas
>> >>> >
>> >>> >
>> >>> > On Mon, Aug 29, 2016 at 4:41 PM, Ben Goertzel <[email protected]>
>> wrote:
>> >>> >>
>> >>> >> Linas, Nil, etc. --
>> >>> >>
>> >>> >> This variation of type theory
>> >>> >>
>> >>> >> http://www.dcs.kcl.ac.uk/staff/lappin/papers/cdll_lilt15.pdf
>> >>> >>
>> >>> >> seems like it may be right for PLN and OpenCog ... basically,
>> >>> >> dependent type theory with records (persistent memory) and
>> >>> >> probabilities ...
>> >>> >>
>> >>> >> If we view PLN as having this sort of semantics, then RelEx+R2L is
>> >>> >> viewed as enacting a morphism from:
>> >>> >>
>> >>> >> -- link grammar, which is apparently equivalent to pregroup
>> >>> >> grammar, which is a nonsymmetric cartesian closed category
>> >>> >>
>> >>> >> to
>> >>> >>
>> >>> >> -- lambda calculus endowed with the probabilistic TTR type system,
>> >>> >> which is a locally cartesian closed category
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory#DependentTypeTheory
>> >>> >>
>> >>> >> For the value of dependent types in natural language semantics, see
>> >>> >> e.g.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> http://www.slideshare.net/kaleidotheater/hakodate2015-julyslide?qid=85e8a7fc-f073-4ded-a2c8-9622e89fd07d&v=&b=&from_search=1
>> >>> >>
>> >>> >> (the examples regarding anaphora in the above are quite clear)
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> https://ncatlab.org/nlab/show/dependent+type+theoretic+methods+in+natural+language+semantics
>> >>> >>
>> >>> >> This paper
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> http://www.slideshare.net/DimitriosKartsaklis1/tensorbased-models-of-natural-language-semantics?qid=fd4cc5b3-a548-46a7-b929-da8246e6c530&v=&b=&from_search=2
>> >>> >>
>> >>> >> on the other hand, seems mathematically sound but conceptually
>> >>> >> wrong in its linguistic interpretation.
>> >>> >>
>> >>> >> It constructs a nice morphism from pregroup grammars (closed
>> >>> >> cartesian categories) to categories defined over vector spaces --
>> >>> >> where the vector spaces are taken to represent co-occurrence vectors
>> >>> >> and such, indicating word semantics...  The morphism is nice...
>> >>> >> however, the idea that semantics consists of numerical vectors is
>> >>> >> silly ... semantics is much richer than that
>> >>> >>
>> >>> >> If we view grammar as link-grammar/pregroup-grammar/asymmetric-CCC
>> >>> >> ... we should view semantics as {probabilistic TTR / locally compact
>> >>> >> closed CCC *plus* numerical-vectors/linear-algebra}
>> >>> >>
>> >>> >> I.e. semantics has a distributional aspect AND ALSO a more
>> explicitly
>> >>> >> logical aspect
>> >>> >>
>> >>> >> Trying to push all of semantics into distributional word vectors
>> >>> >> leads them into insane complexities, like modeling determiners using
>> >>> >> Frobenius algebras... which is IMO just not sensible ... it's trying
>> >>> >> to achieve a certain sort of mathematical simplicity that does not
>> >>> >> reflect the kind of simplicity seen in natural systems like natural
>> >>> >> language...
>> >>> >>
>> >>> >> Instead I would say RelEx+R2L+ECAN (on language) +
>> >>> >> word-frequency-analysis can be viewed as enacting a morphism from:
>> >>> >>
>> >>> >> -- link grammar, which is apparently equivalent to pregroup
>> >>> >> grammar, which is a nonsymmetric cartesian closed category
>> >>> >>
>> >>> >> to the product of
>> >>> >>
>> >>> >> -- lambda calculus endowed with the probabilistic TTR type system,
>> >>> >> which is a locally cartesian closed category
>> >>> >>
>> >>> >> -- the algebra of finite-dimensional vector spaces
>> >>> >>
>> >>> >> This approach accepts fundamental heterogeneity in semantic
>> >>> >> representation...
>> >>> >>
>> >>> >> -- Ben
>> >>> >>
>> >>> >> --
>> >>> >> Ben Goertzel, PhD
>> >>> >> http://goertzel.org
>> >>> >>
>> >>> >> Super-benevolent super-intelligence is the thought the Global
>> Brain is
>> >>> >> currently struggling to form...
>> >>> >
>> >>> >
>> >>> > --
>> >>> > You received this message because you are subscribed to the Google
>> >>> > Groups
>> >>> > "link-grammar" group.
>> >>> > To unsubscribe from this group and stop receiving emails from it,
>> send
>> >>> > an
>> >>> > email to [email protected].
>> >>> > To post to this group, send email to [email protected].
>> >>> > Visit this group at https://groups.google.com/group/link-grammar.
>> >>> > For more options, visit https://groups.google.com/d/optout.
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> You received this message because you are subscribed to the Google
>> Groups
>> >>> "opencog" group.
>> >>> To unsubscribe from this group and stop receiving emails from it,
>> send an
>> >>> email to [email protected].
>> >>> To post to this group, send email to [email protected].
>> >>> Visit this group at https://groups.google.com/group/opencog.
>> >>> To view this discussion on the web visit
>> >>> https://groups.google.com/d/msgid/opencog/CACYTDBeRwpZiNS%3DShs2YfrRbKkC426v3z51oz6-Fc8u7SoJ8Mw%40mail.gmail.com.
>> >>> For more options, visit https://groups.google.com/d/optout.
>> >>
>> >>
>> >
>> >
>> >
>>
>>
>>
>>
>>
>
>

