Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-05 Thread 'Nil Geisweiller' via opencog

Linas,

On 09/03/2016 08:54 AM, Linas Vepstas wrote:

I claim that inference is like parsing, and that algorithms suitable for
parsing can be transported and used for inference. I also claim that
these algorithms will all provide superior performance to
backward/forward chaining.

Until we can start to talk about inference as if it was a kind of
parsing, then I think we'll remain stuck, for a while.


It is inconvenient that I do not know LG well (I know the basics of
parsing regular and context-free grammars, but that's all).


I do, however, have some experience with automated theorem proving. My
take is that no matter what abstraction you come up with, it will
always suffer combinatorial explosion (as soon as the logic is
expressive enough). That is what I mean by linear or intuitionistic
logic being a hack: there is just no other way I can think of to tackle
that explosion than by using meta-learning, so that it at least works
here on Earth.


You say "of all of the algorithms that are known for performing 
reasoning, forward/backward chaining are the worst and the slowest and 
the lowest-performance of all", but that is not how FC and BC should be 
thought of. First of all BC is just FC with a target driven inference 
control. Second, FC is neither bad nor good, it all depends on the 
control, right?
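
To make that concrete, here is a toy sketch -- plain Python, nothing
OpenCog-specific, with an invented rule format -- of BC as nothing more
than FC plus a target-driven control function:

def forward_chain(facts, rules, select=lambda candidates: candidates):
    # Saturate: keep firing rules whose premises all hold.  The
    # `select` function is the inference control -- it picks which
    # applicable conclusions to add, and is the ONLY thing BC changes.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        applicable = [c for ps, c in rules
                      if all(p in facts for p in ps) and c not in facts]
        for conclusion in select(applicable):
            facts.add(conclusion)
            changed = True
    return facts

def backward_chain(target, facts, rules):
    # Same engine; the control only fires rules that can contribute
    # to the target, found by a naive backward relevance pass.
    relevant, frontier = set(), {target}
    while frontier:
        goal = frontier.pop()
        for ps, c in rules:
            if c == goal and c not in relevant:
                relevant.add(c)
                frontier.update(ps)
    control = lambda cands: [c for c in cands if c in relevant]
    return target in forward_chain(facts, rules, control)

rules = [({"A"}, "B"), ({"B"}, "C"), ({"A"}, "D")]
print(forward_chain({"A"}, rules))        # {'A', 'B', 'C', 'D'}
print(backward_chain("C", {"A"}, rules))  # True, and 'D' never derived

The only difference between the two chainers is the `select` control;
the engine itself is shared. That is the sense in which the control,
not the chaining direction, carries all the weight.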


That said, I really like your multi-atomspace abstraction, looking at
confluence, etc. This is the way to go. I just fail to see how this
abstraction can help us simplify or optimize inference control, but I'm
certainly open to the idea.


Nil



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-05 Thread 'Nil Geisweiller' via opencog

Linas,

On 09/03/2016 04:59 AM, Linas Vepstas wrote:

However, I feel an area where something similar to linear logic, etc.,
might be very worthwhile thinking about is in estimating how much
evidence inference traces have in common, so as to have the revision
rule work correctly. This is kind of the only way I manage to relate
these barely-understandable-word-soup-sounding-to-me abstract proposals
to PLN. I would really love to look deeply into that once it becomes
more prioritized, though.


OK, so in the blog post, at what point did things get too abstract, and
too hard to follow?


The blog post is clear; I believe I understood it well, and I agree
with it. The only confusing part was when you mentioned the closed
monoidal category, etc. I tried to quickly understand it, but it seems
it would suck me into layers of hyperlinks before I could get it. BTW,
I would be happy to spend a week reading a book on category theory; I'm
just not sure it's the best use of my time right now. But maybe it is,
before re-implementing the BC -- I'm not sure.


Nil









Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-05 Thread 'Nil Geisweiller' via opencog

On 09/03/2016 08:24 AM, Linas Vepstas wrote:

The other approach, that Nil was advocating with his distributional-TV
proposals, is to jam these two into one, and say that _advmod(see,
with) is half-true, and _prepadj(man, with) is half-true -- and then
somehow hope that PLN is able to eventually sort it out. We currently
don't take this approach, because it would break R2L -- the R2L rules
would probably misbehave, because they don't know how to propagate
half-truths.


Oh, if you're talking about my Generalized Distributional TV proposal,
it was not about this; it was just about fitting the existing TV types
into one.


Although, since GDTVs may actually represent conditional distributions,
they could serve as composite TVs as well.
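
For concreteness, here is a toy sketch of what I mean -- standalone
Python, not the actual GDTV code; the class name and the crude binning
are invented for illustration:

from collections import defaultdict

class ToyGDTV:
    # A histogram over strength values, keyed by an (optional)
    # condition -- all invented for illustration.
    def __init__(self):
        self.table = defaultdict(lambda: defaultdict(int))

    def observe(self, strength, condition="default"):
        self.table[condition][round(strength, 1)] += 1

    def conditional(self, condition):
        # The distribution of strengths *given* the condition.
        hist = self.table[condition]
        total = sum(hist.values())
        return {s: n / total for s, n in hist.items()}

    def marginal(self):
        # Collapsing the conditions: this is the sense in which a
        # GDTV-like object can also act as a composite TV.
        hist = defaultdict(int)
        for cond_hist in self.table.values():
            for s, n in cond_hist.items():
                hist[s] += n
        total = sum(hist.values())
        return {s: n / total for s, n in hist.items()}

tv = ToyGDTV()
tv.observe(1.0, condition="context-1")
tv.observe(0.0, condition="context-2")
print(tv.conditional("context-1"))  # {1.0: 1.0}
print(tv.marginal())                # {1.0: 0.5, 0.0: 0.5}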


Nil





Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-05 Thread 'Nil Geisweiller' via opencog



On 09/03/2016 07:19 AM, Ben Goertzel wrote:

About ContextLink / CompositeTruthValue -- an interesting relevant
question is whether we want/need to use it in the PLN backward chainer
which Nil is now re-implementing... Quite possibly we do...


It's clear that both the forward and backward chainers need to be able
to handle contextual reasoning, rather than constantly
un/contextualizing links; that is, one should be able to launch
reasoning queries in certain contexts. This is not supported at the
moment, but I feel we can afford incremental progress in that respect.


Nil







Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-05 Thread 'Nil Geisweiller' via opencog

Hi Ben,

On 09/03/2016 06:44 AM, Ben Goertzel wrote:

The replacement methodology is to use EmbeddedTruthValueLink and
ContextAnchorNode, as in the example

Evaluation
   PredicateNode "thinks"
   ConceptNode "Bob"
   ContextAnchorNode "123"

EmbeddedTruthValueLink <0>
   ContextAnchorNode "123"
   Inheritance Ben sane

which uses more memory but does not complicate the core code so much...


I'm not sure again (as a few months ago) why we wouldn't want to use a
ContextLink instead. As the OpenCog wiki is inaccessible, I can't find
the definition of EmbeddedTruthValueLink, though I believe I understand
what it is.


Nil



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-03 Thread Ben Goertzel
One more somewhat amusing observation is that PLN would be expected to
CREATE Chomskyan deep syntactic structures for sentences in the course
of learning surface structure based on embodied experience...

Recall the notion of Chomskyan "deep structure."   I suggest that
probabilistic reasoning gives a new slant on this...

Consider the example

"Who did Ben tickle?"

The Chomskyan theory would explain this as wh-movement from the "deep
structure" version

"Ben did tickle who?"

Now, arguably this hypothetical syntactic deep structure version is a
better parallel to the *semantic* deep structure.   This certainly
follows if we take the OpenCog semantic deep structure, in which we
get

Who did Ben tickle?
==>
EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w

InheritanceLink
    VariableNode $w
    ConceptNode "person"

Ben did tickle Ruiting.

EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        ConceptNode "Ruiting"

(let's call this L1 for future reference)

The relation between the semantic representations of "Ben did tickle
Ruiting" and "Ben did tickle who?" is one of substitution of the
representation of "Ruiting" for the representation of "who"...

Similarly, the relation between the syntactic representation of "Ben
did tickle Ruiting" and "Ben did tickle who?" is one of substitution
of the lexical representation of "Ruiting" for the lexical
representation of "who"...

On the other hand, the relationship between the syntactic
representation of "Ben did tickle Ruiting" and that of "Who did Ben
tickle?" is not one of simple substitution...

If we represent substitution as an algebraic operation on both the
syntax-parse and semantic-representation side, then there is clearly a
morphism between the symmetry (the invariance wrt substitution) on the
semantic-structure side and the deep-syntactic-structure side...  But
there's no such straightforward morphism on the
shallow-syntactic-structure side... (though the syntax algebra and the
logical-semantics algebra are generally morphic, there is no morphism
between the substitution algebras on the two sides...)
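
Here is a toy illustration of that asymmetry -- standalone Python, with
invented term and word-list representations, not OpenCog structures:

def subst_sem(term, var, value):
    # Substitute in a nested-tuple semantic term.
    if term == var:
        return value
    if isinstance(term, tuple):
        return tuple(subst_sem(t, var, value) for t in term)
    return term

def subst_words(words, old, new):
    return [new if w == old else w for w in words]

L  = ("tickle", "Ben", "$w")         # semantics shared by both sentences
L1 = subst_sem(L, "$w", "Ruiting")   # ('tickle', 'Ben', 'Ruiting')

deep    = ["Ben", "did", "tickle", "who"]
surface = ["who", "did", "Ben", "tickle"]

# Deep form: word-level substitution mirrors the semantic substitution.
print(subst_words(deep, "who", "Ruiting"))     # Ben did tickle Ruiting
# Surface form: plain substitution fails -- the wh-movement must be
# undone first, and that is exactly the non-trivial step.
print(subst_words(surface, "who", "Ruiting"))  # Ruiting did Ben tickle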

HOWEVER, and here is the interesting part... suppose a mind has all three of

Who did Ben tickle?

Ben did tickle Ruiting.

and

EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w

(let's call this L for future reference)

in it?

I submit that, in that case, PLN is going to *infer* the syntactic form

Ben did tickle who?

with uncertainties on the links...

That is, the deep syntactic structure is going to get produced via
uncertain inference...

So what?

Well, consider what happens during language learning.  At a certain
point, the system's understanding of the meaning of

Who did Ben tickle?

is going to be incomplete and uncertain ... i.e. its probability
weighting on the link from the syntactic form to its logical semantic
interpretation will be relatively low...

At that stage, the construction of the deep syntactic form

Ben did tickle who?

will be part of the process of bolstering the probability of the
correct interpretation of

Who did Ben tickle?

Loosely speaking, the inference will have the form

Who did Ben tickle?
==>
L

is bolstered by

(Ben did tickle who? ==> L)
(Who did Ben tickle? ==> Ben did tickle who?)
|-
(Who did Ben tickle ==> L)

where

(Ben did tickle who? ==> L)

comes by analogical inference from

(Ben did tickle Ruiting ==> L1)

and

(Who did Ben tickle? ==> Ben did tickle who?)

comes by

(Who did Ben tickle ==> Ben did tickle Ruiting)
(Ben did tickle Ruiting ==> Ben did tickle who)
|-
(Who did Ben tickle? ==> Ben did tickle who?)

and

(Ben did tickle Ruiting ==> Ben did tickle who)

comes from analogical inference on the syntactic links, and

(Who did Ben tickle ==> Ben did tickle Ruiting)

comes by analogical inference from

(Who did Ben tickle ==> L)
(Ben did tickle Ruiting ==> L1)
Similarity L L1
|-
(Who did Ben tickle ==> Ben did tickle Ruiting)

So philosophically the conclusion we come to is: The syntactic deep
structure will get invented in the mind during the process of language
learning, because it helps to learn the surface form, as it's a
bridging structure between the semantic structure and the surface
syntactic structure...

One thing this means is that, contra Chomsky, the presence of the deep
structure in language does NOT imply that the deep structure has to be
innate ... the deep structure would naturally emerge in the mind as a
consequence of probabilistic inference ... and furthermore, languages
whose surface form is relatively easily tweakable into deep structures
that parallel semantic structure are likely to be more easily learned
using probabilistic reasoning ... So one would expect surface syntax to
emerge via multiple constraints, including

-- ease of tweakability into deep structures that parallel semantic structure

-- ease of 

Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-03 Thread Ben Goertzel
> MAPPING SYNTAX TO LOGIC
>
>  "RelEx + RelEx2Logic” maps syntactic structures into logical
> structures.   It takes in structures that care about left vs. right,
> and outputs symmetric structures that don’t care about left vs. right.
>   The output of this semantic mapping framework, given a sentence, can
> be viewed as a set of type judgments, i.e. a set of assignations of
> terms to types.(Categorially, assigning term t to type T
> corresponds to an arrow “t \circ ! : Gamma ---> T” where ! is an arrow
> pointing to the unit of the category and Gamma is the set of type
> definitions of the typed lambda calculus in question, and \circ is
> function composition) .

One philosophically nice observation: Frege's "principle of
compositionality" here corresponds to the observation that there is a
morphism from the asymmetric monoidal category corresponding to link
grammar, into the symmetric locally cartesian closed category
corresponding to lambda calculus w/ dependent types...

This principle basically says that, in language, you can get the
meaning of the whole by combining the meanings of the parts...
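
A minimal sketch of the principle -- standalone Python; the lexicon and
the tuple representation are invented for illustration:

lexicon = {
    "Ben":     "Ben",
    "Ruiting": "Ruiting",
    # A transitive verb: takes the object, then the subject.
    "tickle":  lambda obj: lambda subj: ("tickle", subj, obj),
}

def compose(verb, subj, obj):
    # The parse fixes the order of application: (verb obj) subj.
    return lexicon[verb](lexicon[obj])(lexicon[subj])

print(compose("tickle", "Ben", "Ruiting"))
# ('tickle', 'Ben', 'Ruiting') -- meaning of the whole, assembled
# purely from the meanings of the parts plus the parse structure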

The case of "Every man who has a donkey, beats it" illustrates that in
order to get compositionality for weird sentences like this, you
basically want to have dependent types in your lambda calculus at the
logic end of your mapping...

-- Ben



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-02 Thread Linas Vepstas
GOD DAMN IT BEN

Stop writing these ninny emails, and start thinking about what the hell
is going on. I've explained this six ways from Sunday, and I get the
impression that you are just skimming everything I write, not bothering
to read it, much less think about it.

I know you are really really smart, and I know you can understand this
stuff (cause it's really not that hard), but you are simply not making
the effort to do so. You are probably overwhelmed with other work --
OK, great -- so maybe we can follow up on this later on. But reading
your responses is just plain unproductive, and just doesn't lead
anywhere. It's not interesting, it's not constructive, and it doesn't
solve any of the current problems in front of us.

--linas



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-02 Thread Linas Vepstas
Yes. I am starting to get very annoyed. Whenever I talk about
CompositeTruthValue, which I did earlier, I get the big brushoff. Now, when
I finally was able to sneak it back into the conversation, I once again get
the big brushoff.

I am starting to get really angry about this. I am spending wayyy too much
time writing these emails, and all I get is blank stares and the occasional
snide remark back.  This is just not that complicated, but as long as you
do not bother to apply your considerable brainpower to all of this, the
conversation is utterly completely stalled.

I'm pretty angry right now.

--linas




Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-02 Thread Ben Goertzel
On Sat, Sep 3, 2016 at 9:59 AM, Linas Vepstas  wrote:
> Hi Nil,
>
>>
>>>
>>> These same ideas should generalize to PLN:  although PLN is itself a
>>> probabilistic logic, and I do not advocate changing that, the actual
>>> chaining process, the proof process of arriving at conclusions in PLN,
>>> cannot be, must not be.
>>>
>>> I hope the above pins down the source of confusion, when we talk about
>>> these things.  The logic happening at the proof level, the ludics level,
>>> is very different from the structures representing real-world knowledge.
>>
>>
>> Oh, it's a lot clearer then! But in the case of PLN inference control we
>> want to use meta-learning anyway, not "hacks" (sorry if I upset certain)
>> like linear logic or intuitionistic logic.
>
>
> Well, hey, that is like saying that 2+2=4 is a hack --
>
> The ideas that I am trying to describe are significantly older than PLN, and
> PLN is not some magical potion that somehow is not bound by the rules of
> reality, that can in some supernatural way violate the laws of mathematics.

Hmm, no, but forms of logic with a Possibly operator are kinda crude
-- they basically lump all non-crisp truth values into a single
category, which is not really the most useful thing to do in most
cases...

Intuitionistic logic is indeed much older than probabilistic logic; but
my feeling is that it has largely been superseded by probabilistic
logic in terms of practical utility and relevance...

It's a fair theoretical point, though, that a lot of the nice theory
associated with intuitionistic logic could be generalized and ported
to probabilistic logic -- and much of this mathematical/philosophical
work has not been done...

As for linear logic, I'm still less clear on the relevance.   It is
clear to me that integrating resource-awareness into the inference
process is important, but unclear to me that linear logic or affine
logic are good ways to do this in a probabilistic context.   It may be
that deep integration of probabilistic truth values provides better
and different ways to incorporate resource-awareness...
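
To make the resource-awareness point concrete, here is a toy contrast
-- standalone Python with an invented rule format, and not a claim
about how PLN should do it -- between classical reuse of premises and
linear-style consumption:

from collections import Counter

def fire_classical(facts, premises, conclusion):
    # Classical reading: premises are reusable facts.
    if all(facts[p] > 0 for p in premises):
        facts[conclusion] += 1
    return facts

def fire_linear(facts, premises, conclusion):
    # Linear reading: premises are resources, consumed when used.
    if all(facts[p] > 0 for p in premises):
        for p in premises:
            facts[p] -= 1
        facts[conclusion] += 1
    return facts

facts = Counter({"coin": 1})
fire_linear(facts, ["coin"], "coffee")
fire_linear(facts, ["coin"], "coffee")  # blocked: the coin is spent
print(facts)  # coffee: 1, coin: 0 -- one coin buys one coffee

facts = Counter({"coin": 1})
fire_classical(facts, ["coin"], "coffee")
fire_classical(facts, ["coin"], "coffee")
print(facts)  # coffee: 2, coin: 1 -- the classical coin is inexhaustible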

As for "reasoning about reasoning", it's unclear to me that this
requires special treatment in terms of practicalities of inference
software   Depending on one's semantic formalism, it may or may
not require special treatment in terms of the formal semantics of
reasoning  It seems to me that part of the elegance of dependent
types is that one can suck meta-reasoning cleanly into the same
formalism as reasoning.   This can also be done using type-free
domains (Dana Scott's old work, etc.)   But then there are other
formalisms where meta-reasoning and base-level reasoning are
formalized quite differently...

-- Ben




Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-02 Thread Ben Goertzel
Linas,

On Sat, Sep 3, 2016 at 10:50 AM, Linas Vepstas  wrote:
> Today, by default, with the way the chainers are designed, the various
> different atomspaces are *always* merged back together again (into one
> single, global atomspace), and you are inventing things like "distributional
> TV" to control how that merge is done.
>
> I am trying to point out that there is another possibility: one could, if
> desired, maintain many distinct atomspaces, and only sometimes merge them.
> So, for just a moment, just pretend you actually did want to do that.  How
> could it actually be done?  Because doing it in the "naive" way is not
> practical.  Well, there are several ways of doing this more efficiently.
> One way is to create a new TV, which stores the pairs (atomspace-id,
> simple-TV)  Then, if you wanted to merge two of these "abstract" atomspaces
> into one, you could just *erase* the atomspace-id.  Just as easy as that --
> erase some info. You could even take two different (atomspace-id, simple-TV)
> pairs and mash them into one distributional TV.

I note that we used to have something essentially equivalent to this,
for basically this same reason.

It was called CompositeTruthValue, and was a truth value object that
contained multiple truth values, indexed by a certain ID. The ID was a
version-ID, not an atomspace-ID, but same difference...

A dude named Linas Vepstas got rid of this mechanism, because he
(probably correctly) felt it was a poor software design ;)

The replacement methodology is to use EmbeddedTruthValueLink and
ContextAnchorNode, as in the example

Evaluation
  PredicateNode "thinks"
  ConceptNode "Bob"
  ContextAnchorNode "123"

EmbeddedTruthValueLink <0>
  ContextAnchorNode "123"
  Inheritance Ben sane

which uses more memory but does not complicate the core code so much...

-- Ben




-- 
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-02 Thread Linas Vepstas
Hi Nil,


>> Observe that the triple above is an arrow: the tail of the arrow is
>> "some subset of the atomspace", the head of the arrow is "the result of
>> applying PLN rule X", and the shaft of the arrow is given a name: it's
>> "rule X".
>>
>
> Aha, I finally understand what you meant all these years!
>
>> I already pointed out that some of the worlds are "impossible" i.e. have
>> a probability of zero. These can be discarded.  But wait, there's more.
>> Suppose that one of the possible worlds contains the statement "John
>> Kennedy is alive" (with a very very high confidence) , while another one
>> contains the statement "John Kennedy is dead" (with a very very high
>> confidence). What I wish to claim is that, no matter what future PLN
>> inferences might be made, these two worlds will never become confluent.
>>
>
> I don't think that's true. I believe they should at least be somewhat
> confluent, I hope at least, if not then PLN inference control is
> pathological. Sure you can't have John Kennedy being half-alive and
> half-dead but that is not what a probability distribution means.


OK, the reason I focused on having separate, distinct copies of the
atomspace at each step is that you (or some algo) get to decide, at
each point, whether you want to merge two atomspaces back together
again into one, or not.

Today, by default, with the way the chainers are designed, the various
different atomspaces are *always* merged back together again (into one
single, global atomspace), and you are inventing things like
"distributional TV" to control how that merge is done.

I am trying to point out that there is another possibility: one could, if
desired, maintain many distinct atomspaces, and only sometimes merge them.
  So, for just a moment, just pretend you actually did want to do that.
How could it actually be done?  Because doing it in the "naive" way is not
practical.  Well, there are several ways of doing this more efficiently.
One way is to create a new TV, which stores the pairs (atomspace-id,
simple-TV)  Then, if you wanted to merge two of these "abstract" atomspaces
into one, you could just *erase* the atomspace-id.  Just as easy as that --
erase some info. You could even take two different (atomspace-id,
simple-TV)  pairs and mash them into one distributional TV.

The nice thing about keeping such pairs is that the atomspace-id
encodes the PLN inference chain. If you want to know *how* you arrived
at some simple-TV, you just look at the atomspace-id, and then you know
how you got there -- the inference chain is recorded, folded into the
id. To create a distributional-TV, you simply throw away the records of
the different inference chains, and combine the simple-TVs into the
distributional TV.
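
Here is a toy sketch of that mechanism -- standalone Python; the class,
the ids, and the merge rule are all invented for illustration; this is
not actual AtomSpace code:

class WorldTaggedTV:
    # Per atom: one simple TV per possible world (atomspace-id).
    def __init__(self):
        self.by_world = {}  # atomspace-id -> (strength, confidence)

    def set_tv(self, world_id, strength, confidence):
        self.by_world[world_id] = (strength, confidence)

    def merge(self, ids):
        # "Erase the atomspace-id": collapse several worlds into one
        # bag of simple TVs -- a crude distributional TV.  Popping the
        # ids throws away the per-chain records.
        return [self.by_world.pop(i) for i in ids]

tv = WorldTaggedTV()
tv.set_tv("root/ruleA", 0.9, 0.8)  # the id records the inference chain
tv.set_tv("root/ruleB", 0.2, 0.5)
print(tv.merge(["root/ruleA", "root/ruleB"]))
# [(0.9, 0.8), (0.2, 0.5)] -- the chain records are erased, and the
# two simple TVs become one distribution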

I hope this is clear. The above indicates how something like this
could work -- but we can't talk about whether it's a good idea, or how
it might be useful, till we get past that.


> I can't comment on link-grammar since I don't understand it.


Well, it's a lot like PLN -- it is a set of inference rules (called
"disjuncts") that get applied, and each of these inference rules has a
probability associated with it (actually, a log-probability -- the
"cost"). However, instead of always merging the result of each
inference step back into a single global atomspace (called a
"linkage"), one keeps track of multiple linkages (multiple distinct
atomspaces). One keeps going and going, until it is impossible to apply
any further inference rules. At this point, parsing is done. When
parsing is done, one has a few, or dozens, or hundreds of these
"linkages" (aka "atomspaces").

A parse is then the complete contents of the "atomspace", aka
"linkage". At the end of the parse, the "words" (aka OC Nodes; we
actually use WordNodes after conversion) are connected with "links"
(aka OC EvaluationLinks).
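
Here is a toy sketch of that parsing-as-branching-inference picture --
standalone Python, all names invented; rules consume their premises the
way a link-grammar connector gets used up:

def expand(world, rules):
    # Every applicable rule forks a child world; a rule consumes its
    # premises, so the same disjunct cannot fire twice.
    kids = []
    for premises, conclusion in rules:
        if premises <= world:
            kids.append((world - premises) | {conclusion})
    return kids

def all_linkages(start, rules):
    frontier, finished = [start], set()
    while frontier:
        world = frontier.pop()
        kids = expand(world, rules)
        if kids:
            frontier.extend(kids)
        else:
            finished.add(world)  # no rule applies: parsing is done
    return finished

rules = [(frozenset({"A"}), "B"), (frozenset({"A"}), "C")]
print(all_linkages(frozenset({"A"}), rules))
# {frozenset({'B'}), frozenset({'C'})} -- two distinct linkages,
# i.e. two parallel worlds that were never merged into one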

Let me be clear: when I say "it's a lot like PLN", I am NOT hand-waving
or being metaphorical, nor am I trying to be abstract or obtuse. I am
trying to state something very real, very concrete, very central. It
might not be easy to understand; you might have to tilt your head
sideways to get it, but it really is there.

Anyway, moving on -- now, you could, if you wished, mash all of the
"linkages" (atomspaces) back together again into just one -- you could
put a distributional TV on each "link" (EvaluationLink), and mash
everything into one. You could do even more violence, and mash such a
distributional TV down to a simple TV. It might even be a good idea to
do this! No one has actually done so.

Historically, linguists really dislike the
single-global-atomspace-with-probabilistic-TVs idea, and have always
gone for the many-parallel-universes-with-crisp-TVs model of parsing.
This dates back to before Chomsky, before Tesnière, and is rooted in
19th- or 18th-century or even earlier concepts of grammar in, for
example, Latin -- scholastic thinking, maybe even back to the 12th
century. The core concepts are already present, 

Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-01 Thread Linas Vepstas
Hi Ben,

On Thu, Sep 1, 2016 at 12:09 PM, Ben Goertzel  wrote:

>
> About Kripke frames etc. --- as I recall that was a model of the
> semantics of modal logic with a Possibly operator as well as a
> Necessarily operator... But in PLN we have a richer notion of
> possibility than in a standard modal logic,


Hey, I'm guessing that you're tired from travel, as here you repeat the
same confusion from before. There is a difference between "reasoning"
(which is what PLN does) and "reasoning about reasoning" (which is what I
am talking about).

What I am talking about applies to any rule-based system whatsoever;
it's not specific to PLN. As long as you keep going back to PLN, you
will have trouble figuring out what I'm saying. This is why I keep
trying to create non-PLN examples. But every time I create a non-PLN
example, you zip back to PLN, and that misses the point of it all.

-- linas


>
>
> On Thu, Sep 1, 2016 at 6:36 AM, Linas Vepstas 
> wrote:
> > And so here is the blog post -- it's a lightly reformatted version of this
> > email, with lots of links to wikipedia and a few papers.
> >
> > http://blog.opencog.org/2016/08/31/many-worlds-reasoning-about-reasoning/
> >
> > I really really hope that this clarifies something that is often seen as
> > mysterious.
> >
> > --linas
>
>



Re: [opencog-dev] Re: [Link Grammar] Re: probabilistic type theory with records ... variants of categorial grammar & semantics, etc.

2016-09-01 Thread Ben Goertzel
Thanks Linas...

Of course you are right that link grammar / pregroup grammar is
modelable as an asymmetric closed monoidal category which is not
cartesian... I was just freakin' overtired when I typed that... too
much flying around and too little sleep...

However, dependent type systems do often map into locally cartesian
closed categories; that part was not a slip...

At least in very many cases, it seems to me we can view the RelEx/R2L
transformations as taking an asymmetric closed monoidal category into a
locally cartesian closed category... Cashing this out in terms of
examples will be important, though; otherwise it's too abstract to be
useful. I started doing that on a flight but ran out of time; now I'm
back in HK and overwhelmed with meetings...

About Kripke frames etc. --- as I recall, that was a model of the
semantics of modal logic with a Possibly operator as well as a
Necessarily operator... But in PLN we have a richer notion of
possibility than in a standard modal logic, in the form of the
(strength, confidence) truth values. I guess that if you spelled out
the formal semantics of logic with such truth values in the right way,
you would get some sort of extension of Kripke semantics. Kripke
semantics is based on unweighted graphs, so I guess for the logic with
these truth values you'd get something similar with weighted graphs...
This would be interesting to spell out in detail; I wish I had the
time...

Typos, dumb mistakes and hasty errors aside, I think I'm reasonably
comfortable with Kripke frames and pregroup grammars and
intuitionistic logic stuff...

However, the point I don't yet see is why we need linear logic ... and
you don't really touch on that in your blog post... if you could
elaborate on that, it would be interesting to me...

-- Ben


On Thu, Sep 1, 2016 at 6:36 AM, Linas Vepstas  wrote:
> And so here is the blog post -- it's a lightly reformatted version of this
> email, with lots of links to wikipedia and a few papers.
>
> http://blog.opencog.org/2016/08/31/many-worlds-reasoning-about-reasoning/
>
> I really really hope that this clarifies something that is often seen as
> mysterious.
>
> --linas
>
> On Wed, Aug 31, 2016 at 4:16 PM, Linas Vepstas 
> wrote:
>>
>> Hi Ben,
>>
>> What's TTR?
>>
>> We can talk about link-grammar, but I want to talk about something a
>> little bit different: not PLN, but the *implementation* of PLN.   This
>> conversation requires resolving the "category error" email I sent, just
>> before this.
>>
>> Thus, I take PLN as a given, including the formulas that PLN uses, and
>> every possible example of *using* PLN that you could throw my way.  I have
>> no quibble about any of those examples, or with the formulas, or with
>> anything like that. I have no objections to the design of the PLN rules of
>> inference.
>>
>> What I want to talk about is how the PLN rules of inference are
>> implemented in the block of C++ code in github.   I also want to assume that
>> the implementation is complete, and completely bug-free (even though it's
>> not, but let's assume it is.)
>>
>> Now, PLN consists of maybe a half-dozen or a dozen rules of inference.
>> They have names like "modus ponens" but we could call them just "rule MP"
>> ... or just "rule A", "rule B", etc...
>>
>> Suppose I start with some atomspace contents, and apply the PLN rule A. As
>> a result of this application, we have a "possible world 1".  If, instead, we
>> started with the same original atomspace contents as before, but applied
>> rule B, then we would get "possible world 2".  It might also be the case
>> that PLN rule A can be applied to some different atoms from the atomspace,
>> in which case, we get "possible world 3".
>>
>> Each possible world consists of the triple (some subset of the atomspace,
>> some PLN inference rule, the result of applying the PLN rule to the input).
>>
>> Please note that some of these possible worlds are invalid or empty: it
>> might not be possible to apply the chosen PLN rule to the chosen subset of
>> the atomspace.  I guess we should call these "impossible worlds".  You can
>> say that their probability is exactly zero.
>>
>> Observe that the triple above is an arrow:  the tail of the arrow is "some
>> subset of the atomspace", the head of the arrow is "the result of applying
>> PLN rule X", and the shaft of the arrow is given a name: its "rule X".
>>
>> (in fancy-pants, peacock language, the arrows are morphisms, and the
>> slinging together, here, are Kripke frames. But let's avoid the fancy
>> language since it's confusing things a lot, just right now.)
>>
>> Anyway -- considering this process, this clearly results in a very shallow
>> tree, with the original atomspace as the root, and each branch of the tree
>> corresponding to a possible world. Note that each possible world is a new and
>> different atomspace: the rules of the game here are that we are NOT allowed
>> to dump the