Linas,
On 09/03/2016 08:54 AM, Linas Vepstas wrote:
I claim that inference is like parsing, and that algorithms suitable for
parsing can be transported and used for inference. I also claim that
these algorithms will all provide superior performance to
backward/forward chaining.
Until we can
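The parsing-inference analogy above can be made concrete with a sketch (purely illustrative Python; the fact/rule representation is assumed, not OpenCog code): a chart parser memoizes partial results and works off an agenda, and the same control loop can drive forward inference without the redundant recomputation of naive chaining.

```python
from collections import deque

def chart_infer(facts, rules):
    """Forward-close a fact set with an agenda + chart (memo table),
    the same control structure a chart parser uses for partial parses."""
    chart = set(facts)       # memoized conclusions, like a parse chart
    agenda = deque(facts)    # items whose consequences are unexplored
    while agenda:
        item = agenda.popleft()
        for rule in rules:
            for derived in rule(item, chart):
                if derived not in chart:   # never re-derive, as in a chart
                    chart.add(derived)
                    agenda.append(derived)
    return chart

# Toy rule: transitivity over "isa" edges represented as pairs.
def isa_transitive(item, chart):
    a, b = item
    return [(a, c) for (x, c) in chart if x == b] + \
           [(x, b) for (x, y) in chart if y == a]

closure = chart_infer({("cat", "mammal"), ("mammal", "animal")},
                      [isa_transitive])
# closure now also contains ("cat", "animal")
```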
Linas,
On 09/03/2016 04:59 AM, Linas Vepstas wrote:
However, one area where something similar to linear logic,
etc., might be well worth thinking about is in estimating how much
evidence inference traces have in common, so that the revision
rule works correctly. This
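To illustrate why shared evidence matters for revision (a minimal sketch using PLN-style simple truth values; the overlap discount here is an illustrative assumption, not the actual revision rule):

```python
def revise(s1, n1, s2, n2, shared=0):
    """Revise two simple truth values (strength s, evidence count n).
    'shared' is the evidence count the two inference traces have in
    common; counting it twice would overweight the conclusion."""
    n2_indep = n2 - shared          # evidence only the second trace adds
    n = n1 + n2_indep
    s = (s1 * n1 + s2 * n2_indep) / n
    return s, n

# Independent traces: both bodies of evidence count in full -> (0.7, 20).
independent = revise(0.9, 10, 0.5, 10)
# Fully shared traces: the second premise adds no new evidence, so
# naively revising toward 0.7 would be an error -> (0.9, 10).
overlapping = revise(0.9, 10, 0.5, 10, shared=10)
```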
On 09/03/2016 08:24 AM, Linas Vepstas wrote:
The other approach, that Nil was advocating with his distributional-TV
proposals, is to jam these two into one, and say that _advmod(see,
with) is half-true and _prepadj(man, with) is half-true -- and
then somehow hope that PLN is able to
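What "half-true" buys or loses can be shown with a toy distributional representation (hypothetical; not Nil's actual distributional-TV code): keeping each parse's probability mass separate preserves the fact that the two readings exclude each other, which a single collapsed 0.5 strength per relation throws away.

```python
# Two readings of "I saw the man with the telescope", equally weighted.
parses = [
    {"_advmod(see, with)"},    # reading 1: the seeing used the telescope
    {"_prepadj(man, with)"},   # reading 2: the man has the telescope
]
weights = [0.5, 0.5]

def strength(rel):
    """Marginal probability that a relation holds, over weighted parses."""
    return sum(w for p, w in zip(parses, weights) if rel in p)

def joint(r1, r2):
    """Probability that both relations hold in the same parse."""
    return sum(w for p, w in zip(parses, weights) if r1 in p and r2 in p)

# Each relation is indeed "half-true"...
assert strength("_advmod(see, with)") == 0.5
# ...but collapsed 0.5-strengths would suggest 0.25 for the conjunction;
# the distribution records that the readings are mutually exclusive.
assert joint("_advmod(see, with)", "_prepadj(man, with)") == 0
```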
On 09/03/2016 07:19 AM, Ben Goertzel wrote:
About ContextLink / CompositeTruthValue -- an interesting relevant
question is whether we want/need to use it in the PLN backward chainer
which Nil is now re-implementing. Quite possibly we do...
It's clear both the forward and backward chainer
Hi Ben,
On 09/03/2016 06:44 AM, Ben Goertzel wrote:
The replacement methodology is to use EmbeddedTruthValueLink and
ContextAnchorNode, as in the example:
Evaluation
    PredicateNode "thinks"
    ConceptNode "Bob"
    ContextAnchorNode "123"
    EmbeddedTruthValueLink <0>
One more somewhat amusing observation is that PLN would be expected to
CREATE Chomskyan deep syntactic structures for sentences in the course
of learning surface structure based on embodied experience...
Recall the notion of Chomskyan "deep structure." I suggest that
probabilistic reasoning
> MAPPING SYNTAX TO LOGIC
>
> "RelEx + RelEx2Logic" maps syntactic structures into logical
> structures. It takes in structures that care about left vs. right,
> and outputs symmetric structures that don’t care about left vs. right.
> The output of this semantic mapping framework, given a
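The order-sensitivity point can be sketched (hypothetical helper, not the actual RelEx2Logic code): the dependency triple is positional, while the logical form carries named roles and so no longer depends on left/right order.

```python
def dep_to_logic(dep):
    """Parse a RelEx-style triple like '_subj(eat, Bob)' and map it to
    a role-labeled structure: a frozenset of (role, filler) pairs, in
    which argument roles, not surface position, carry the meaning."""
    name, args = dep.rstrip(")").split("(")
    head, dependent = (a.strip() for a in args.split(","))
    return frozenset({("relation", name),
                      ("head", head),
                      ("dependent", dependent)})

logic = dep_to_logic("_subj(eat, Bob)")
assert ("head", "eat") in logic and ("dependent", "Bob") in logic
```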
GOD DAMN IT BEN
Stop writing these ninny emails, and start thinking about what the hell is
going on. I've explained this six ways from Sunday, and I get the
impression that you are just skimming everything I write, and not bothering
to read it, much less think about it.
I know you are really
Yes. I am starting to get very annoyed. Whenever I talk about
CompositeTruthValue, which I did earlier, I get the big brushoff. Now, when
I finally was able to sneak it back into the conversation, I once again get
the big brushoff.
I am starting to get really angry about this. I am spending wayyy
On Sat, Sep 3, 2016 at 9:59 AM, Linas Vepstas wrote:
> Hi Nil,
>
>>
>>>
>>> These same ideas should generalize to PLN: although PLN is itself a
>>> probabilistic logic, and I do not advocate changing that, the actual
>>> chaining process, the proof process of arriving at
Linas,
On Sat, Sep 3, 2016 at 10:50 AM, Linas Vepstas wrote:
> Today, by default, with the way the chainers are designed, the various
> different atomspaces are *always* merged back together again (into one
> single, global atomspace), and you are inventing things like
Hi Nil,
>> Observe that the triple above is an arrow: the tail of the arrow is
>> "some subset of the atomspace", the head of the arrow is "the result of
>> applying PLN rule X", and the shaft of the arrow is given a name: its
>> "rule X".
>>
>
> Aha, I finally understand what you meant all
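That arrow reading can be written down directly (the names below are illustrative, not AtomSpace types):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceStep:
    """One rule application read as an arrow: a named rule (the shaft)
    from a set of premises (the tail) to a conclusion (the head)."""
    rule: str
    premises: frozenset
    conclusion: str

def composable(a, b):
    """Arrows compose head-to-tail: a's conclusion feeds b's premises."""
    return a.conclusion in b.premises

step1 = InferenceStep("deduction",
                      frozenset({"Inh(cat, mammal)", "Inh(mammal, animal)"}),
                      "Inh(cat, animal)")
step2 = InferenceStep("deduction",
                      frozenset({"Inh(cat, animal)", "Inh(animal, thing)"}),
                      "Inh(cat, thing)")
assert composable(step1, step2)   # a two-step inference trace
```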
Hi Ben,
On Thu, Sep 1, 2016 at 12:09 PM, Ben Goertzel wrote:
>
> About Kripke frames etc. --- as I recall that was a model of the
> semantics of modal logic with a Possibly operator as well as a
> Necessarily operator But in PLN we have a richer notion of
> possibility
Thanks Linas...
Of course you are right that link grammar/pregroup grammar is
modelable as an asymmetric closed monoidal category which is not
cartesian... I was just freakin' overtired when I typed that... too
much flying around and too little sleep..
However, dependent type systems do often
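For readers following the pregroup aside: the asymmetry is easy to see in a toy reducer (an illustrative sketch, not a real pregroup implementation), since a type x has distinct left (x^l) and right (x^r) adjoints, and only adjacent adjoint pairs cancel.

```python
def pregroup_reduce(types):
    """Repeatedly cancel adjacent adjoint pairs x^l . x and x . x^r."""
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            a, b = types[i], types[i + 1]
            if a == b + "^l" or b == a + "^r":
                del types[i:i + 2]
                changed = True
                break
    return types

# "Bob sleeps": noun n, intransitive verb n^r s  ->  reduces to s.
assert pregroup_reduce(["n", "n^r", "s"]) == ["s"]
# "Bob sees Mary": n, (n^r s n^l), n  ->  also reduces to s.
assert pregroup_reduce(["n", "n^r", "s", "n^l", "n"]) == ["s"]
# Reordering breaks the reduction: composition is not symmetric/cartesian.
assert pregroup_reduce(["n^r", "s", "n"]) == ["n^r", "s", "n"]
```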