Here is another way of saying it.

On Fri, Sep 2, 2016 at 11:26 PM, Ben Goertzel <[email protected]> wrote:

>
>   Some of these things we're discussing are not going to be
> practically relevant in OpenCog for a while, but some of them might be
> important for Nil's near-future work on backward chaining and
> inference control...
>

Almost everything that I am talking about is aimed directly and explicitly
at the concept of forward and backward chaining.  I claim even more: of
all the known algorithms for performing reasoning, forward/backward
chaining are the slowest and lowest-performing.  They are the most
primitive possible tools for the job -- CPU hogs that get stuck in the mud
of combinatorial explosion.
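To make the combinatorial explosion concrete, here is a toy sketch of my
own (invented for illustration; it has nothing to do with the actual
OpenCog rule engine): a naive forward chainer that saturates a fact set
under a single transitivity rule, rescanning all pairs of facts on every
pass.

```python
# Toy naive forward chainer: repeatedly apply the single rule
#   edge(a, b) & edge(b, c) => edge(a, c)
# to the whole fact set until no new facts appear.  Every pass rescans
# all pairs of facts, and every derived fact enlarges the next pass --
# the combinatorial explosion in miniature.

def forward_chain(edges):
    facts = set(edges)
    steps = 0
    while True:
        new = set()
        for (a, b) in facts:
            for (c, d) in facts:
                steps += 1                    # one rule-match attempt
                if b == c and (a, d) not in facts:
                    new.add((a, d))
        if not new:
            return facts, steps
        facts |= new

# A simple chain 0 -> 1 -> ... -> 9 (9 given edges, 10 nodes):
chain = [(i, i + 1) for i in range(9)]
facts, steps = forward_chain(chain)
print(len(facts))   # 45: every pair (i, j) with i < j
print(steps)        # thousands of match attempts to derive 36 new facts
```

Even on this ten-node toy, the match-attempt count dwarfs the number of
facts actually derived; the blowup only gets worse as the rule set and
fact base grow.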

>
> What I'm thinking is to posit a specific example of a real-world
> situation and corresponding reasoning problem and then write down how
> it would be formulated using
>
> -- classical logic
> -- intuitionistic logic
> -- PLN
>

Nothing that I care to talk about depends in the slightest on this
choice.  Whatever I care to say about one applies equally well to the
other two.  The differences between them mostly do not matter for the
conversation that I wish to have.  One could add some
green-cheese-from-the-moon logic to the list, and it just plain would not
matter.

The discussion I wish to have is about reasoning itself: the manner in
which one applies rules to data.  So far, you have mentioned only two ways
of doing this: forward and backward chaining.  I claim that there are many
more possibilities, and that they are far superior to these two.
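For concreteness, here is the other of the two named strategies as a toy
sketch of my own (again, invented for illustration, not OpenCog code): a
backward chainer over a set of edge facts and one transitivity rule, which
starts from a goal and recurses on the rule's premises rather than
saturating forward from the facts.

```python
# Toy backward chainer for the rule
#   edge(a, b) & edge(b, c) => edge(a, c)
# To prove edge(a, c): either find it among the given facts, or pick a
# given edge(a, b) and recursively prove edge(b, c).  The `seen` set
# guards against revisiting a goal within one query.

def backward_prove(goal, given, seen=None):
    if seen is None:
        seen = set()
    if goal in given:
        return True
    if goal in seen:
        return False
    seen.add(goal)
    a, c = goal
    # premise split: edge(a, b) given, edge(b, c) to be proven
    return any(backward_prove((b, c), given, seen)
               for (x, b) in given if x == a)

given = {(i, i + 1) for i in range(9)}   # chain 0 -> 1 -> ... -> 9
print(backward_prove((0, 9), given))     # True: a path 0 -> ... -> 9 exists
print(backward_prove((3, 1), given))     # False: edges only run upward
```

The recursion is goal-directed, but each query explores its own subgoal
tree from scratch; nothing derived along the way is shared between
queries.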

>
> and then identify a corresponding "reasoning about reasoning" problem
> and write down how it would be formulated in these various ways... and
> see how the semantics can be formalized or otherwise expressed in each
> case...
>

The rules about reasoning are formulated in the same way, independently of
the actual logic which you wish to use.

Well, this is actually a kind of white lie.  If you know that your reasoner
is going to manipulate expressions written in classical predicate logic,
then you can cheat in various ways.  By "cheat" I mean "optimize the
performance of your reasoning algorithm".  But I would rather avoid getting
tangled up in the cheats/optimizations, at least for a little while, and
discuss reasoning in general, completely independent of the logical system
on which the reasoning is performed.

>
> Regarding inference control, I could then use said example as an
> illustration of my prior suggestion regarding
> probabilistic-programming-based inference control... and perhaps you
> could use it to explain how you think linear or affine logic can be
> useful for inference control?
>

I think we need to take multiple steps backwards first: long before we
talk about inference control, we need to agree on what we mean when we say
"inference".  Right now, we don't share a common concept of what this is.

The blog post attempts to provide a provisional definition of inference.

I claim that inference is like parsing, and that algorithms suitable for
parsing can be transported and used for inference.  I also claim that
these algorithms will all provide performance superior to backward/forward
chaining.

Until we can start to talk about inference as if it were a kind of
parsing, I think we'll remain stuck for a while.
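To make the parsing analogy concrete, here is a toy sketch of my own (the
rule and data are invented for illustration; this is not the algorithm
from the blog post): a chart-parser-style agenda computation of a
transitive closure.  Each derived item enters the chart once and is
combined with its neighbors once -- the way a chart parser builds each
constituent exactly once -- instead of rescanning the whole fact set on
every pass the way naive forward chaining does.

```python
# Chart-style inference for the rule
#   edge(a, b) & edge(b, c) => edge(a, c)
# using an agenda of newly derived items plus an index (the "chart")
# from left endpoint to known facts.

from collections import defaultdict

def chart_chain(edges):
    by_left = defaultdict(set)    # chart index: a -> {b : edge(a, b)}
    agenda = list(edges)
    facts = set()
    steps = 0
    while agenda:
        (a, b) = agenda.pop()
        if (a, b) in facts:       # duplicates are discarded, not re-fired
            continue
        facts.add((a, b))
        by_left[a].add(b)
        # combine the new item with items already in the chart
        for c in list(by_left[b]):              # edge(a,b) + edge(b,c)
            steps += 1
            agenda.append((a, c))
        for x, outs in list(by_left.items()):   # edge(x,a) + edge(a,b)
            if a in outs:
                steps += 1
                agenda.append((x, b))
    return facts, steps

chain = [(i, i + 1) for i in range(9)]          # 0 -> 1 -> ... -> 9
facts, steps = chart_chain(chain)
print(len(facts))   # 45: the full transitive closure
print(steps)        # each rule instance fires once, not once per pass
```

The point of the sketch is the bookkeeping, not the rule: because results
are indexed and shared, the rule-firing count stays proportional to the
number of derivable combinations, rather than growing with the number of
saturation passes times the square of the fact set.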

--linas


>
> I could come up with an example or two myself but I'm afraid I might
> come up with one that doesn't fully illustrate the points you're
> trying to make...
>
> Going through this stuff in detail in the context of some specific
> example might help un-confuse others besides you, me and Nil who are
> listening into this thread as well...
>
> This is not urgent but could be interesting...
>
> ben
>
>
> On Sat, Sep 3, 2016 at 12:17 PM, Linas Vepstas <[email protected]>
> wrote:
> > GOD DAMN IT BEN
> >
> > Stop writing these ninny emails, and start thinking about what the hell
> > is going on.  I've explained this six ways from Sunday, and I get the
> > impression that you are just skimming everything I write, not bothering
> > to read it, much less think about it.
> >
> > I know you are really really smart, and I know you can understand this
> > stuff ('cause it's really not that hard), but you are simply not making
> > the effort to do so.  You are probably overwhelmed with other work -- OK
> > -- great -- so maybe we can follow up on this later on.  But reading
> > your responses is just plain unproductive, and doesn't lead anywhere.
> > It's not interesting, it's not constructive, and it doesn't solve any
> > of the current problems in front of us.
> >
> > --linas
> >
> >
> > On Fri, Sep 2, 2016 at 10:50 PM, Ben Goertzel <[email protected]> wrote:
> >>
> >> On Sat, Sep 3, 2016 at 9:59 AM, Linas Vepstas <[email protected]>
> >> wrote:
> >> > Hi Nil,
> >> >
> >> >>
> >> >>>
> >> >>> These same ideas should generalize to PLN:  although PLN is itself a
> >> >>> probabilistic logic, and I do not advocate changing that, the actual
> >> >>> chaining process, the proof process of arriving at conclusions
> >> >>> in PLN, cannot be, must not be.
> >> >>>
> >> >>> I hope the above pins down the source of confusion, when we talk
> >> >>> about these things.  The logic happening at the proof level, the
> >> >>> ludics level, is very different from the structures representing
> >> >>> real-world knowledge.
> >> >>
> >> >>
> >> >> Oh, it's a lot clearer then! But in the case of PLN inference control
> >> >> we
> >> >> want to use meta-learning anyway, not "hacks" (sorry if I upset
> >> >> certain)
> >> >> like linear logic or intuitionistic logic.
> >> >
> >> >
> >> > Well, hey, that is like saying that 2+2=4 is a hack --
> >> >
> >> > The ideas that I am trying to describe are significantly older than
> >> > PLN, and PLN is not some magical potion that somehow is not bound by
> >> > the rules of reality, that can in some supernatural way violate the
> >> > laws of mathematics.
> >>
> >> Hmm, no, but forms of logic with a Possibly operator are kinda crude
> >> -- they basically lump all non-crisp truth values into a single
> >> category, which is not really the most useful thing to do in most
> >> cases...
> >>
> >> Intuitionistic is indeed much older than probabilistic logic; but my
> >> feeling is it is largely superseded by probabilistic logic in terms of
> >> practical utility and relevance...
> >>
> >> It's a fair theoretical point, though, that a lot of the nice theory
> >> associated with intuitionistic logic could be generalized and ported
> >> to probabilistic logic -- and much of this mathematical/philosophical
> >> work has not been done...
> >>
> >> As for linear logic, I'm still less clear on the relevance.   It is
> >> clear to me that integrating resource-awareness into the inference
> >> process is important, but unclear to me that linear logic or affine
> >> logic are good ways to do this in a probabilistic context.   It may be
> >> that deep integration of probabilistic truth values provides better
> >> and different ways to incorporate resource-awareness...
> >>
> >> As for "reasoning about reasoning", it's unclear to me that this
> >> requires special treatment in terms of practicalities of inference
> >> software....   Depending on one's semantic formalism, it may or may
> >> not require special treatment in terms of the formal semantics of
> >> reasoning....  It seems to me that part of the elegance of dependent
> >> types is that one can suck meta-reasoning cleanly into the same
> >> formalism as reasoning.   This can also be done using type-free
> >> domains (Dana Scott's old work, etc.)....   But then there are other
> >> formalisms where meta-reasoning and base-level reasoning are
> >> formalized quite differently...
> >>
> >> -- Ben
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> >> Groups "link-grammar" group.
> >> To unsubscribe from this group and stop receiving emails from it,
> >> send an email to [email protected].
> >> To post to this group, send email to [email protected].
> >> Visit this group at https://groups.google.com/group/link-grammar.
> >> For more options, visit https://groups.google.com/d/optout.
> >
> >
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> Super-benevolent super-intelligence is the thought the Global Brain is
> currently struggling to form...
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CAHrUA364vWqe_5R%2BQRA%3Dk9rvdkqdkhhZN-vDLgpC7e1ev_24pA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
