On Mon, May 14, 2018 at 4:54 PM, Alexey Potapov <[email protected]> wrote:

> Hi Linas,
> this is quite an interesting discussion, and I believe we should involve
> Ben and others in it.
>

Ben has been involved in this discussion for a decade; I think he knows
the general outline, even as we argue violently about the details.  Anyway,
this is why I say "we should discuss on the mailing list", a dictum that I
have been violating.


> Actually, I have some concerns regarding too strong focus on the formal
> logic and automatic theorem proving in OpenCog since I'm not completely
> convinced of its fundamental role in AGI... ;)
>

Yes, I agree.  I am very much trying to pursue a probabilistic approach.
The question is then "probabilities on what?" and I have an answer that I
like, but I have trouble sharing that answer in a way that is
understandable.  Let me try to sketch that, in this context.

You know probabilistic programming very very much better than I, so please
correct my mis-steps.  In probabilistic programming, one essentially has
lambda-expressions, and is assigning probabilities to them.   For example:
what is the probability of taking this branch (this if-statement) or not?
The probability of running this loop N times? The probability of calling
this routine? The probability of making this list/array longer or shorter?
This might be done with either custom-built probabilistic programming
languages, or with modified versions of C or Lisp.  At any rate, these can
be abstractly understood as being lambda-expressions, with probabilities
assigned to them.
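To make that concrete, here is a toy sketch in plain Python (my own illustration, not any actual probabilistic-programming language): every control-flow decision is a random draw, so the program as a whole denotes a distribution over execution traces, and the branch probability can be recovered by sampling.

```python
import random

random.seed(0)

def toy_program():
    """A toy 'probabilistic program': every control-flow choice is a
    random draw, so the program denotes a distribution over traces."""
    trace = []
    if random.random() < 0.3:          # P(take this branch) = 0.3
        trace.append("branch-A")
    else:
        trace.append("branch-B")
    n = 0
    while random.random() < 0.5:       # loop length ~ Geometric(0.5)
        n += 1
    trace.append(("loops", n))
    return trace

# Estimate the probability of taking branch A by simple Monte Carlo.
samples = [toy_program() for _ in range(10000)]
p_a = sum(t[0] == "branch-A" for t in samples) / len(samples)
print(p_a)   # close to 0.3
```

A real probabilistic-programming system additionally supports conditioning and inference over such traces; the sketch only shows the "probabilities attached to lambda-expressions" part.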

However, as I exhort below, lambdas are closely tied to cartesian products,
and to classical logic, and we have plenty of evidence that suggests that
this is inappropriate for natural, human intelligence. So what is the right
thing? To keep a long story short, for me, it is this concept of "disjoined
connector sets" (aka "jigsaw-puzzle pieces"), and I attach probabilities to
those. More generally, more vaguely, the structures are "patterns", this is
what the pattern miner looks for. There are various theoretical arguments
that show that these are sufficient for general purposes.  For example,
theorem-proving is like assembling jigsaw-puzzle pieces, where the
puzzle-pieces are rules of inference; this is why theorem-proving is like
parsing.  However, general knowledge representation is also like this:
solving Sherlock Holmes mysteries has long been said to be like "solving a
jigsaw puzzle": well, except now, we have the mathematical formalism to
make this statement precise. I think much of what neural nets and deep
learning do also fits into this general framework; I want to write a paper
on this, but have not had the time yet.
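A crude illustration of the jigsaw idea, with an invented three-word lexicon (this is a sketch of the concept, not actual link-grammar code): each word carries connectors pointing left or right, and a parse succeeds exactly when every connector mates with a matching one, the way tabs snap into slots.

```python
# Each word carries a "disjunct": required connectors, pointing left (-)
# or right (+).  A sentence parses when every connector mates with a
# same-label connector of opposite direction -- jigsaw pieces snapping
# together.  (Toy lexicon; real link-grammar disjuncts are richer.)
LEXICON = {
    "the": [["D+"]],         # determiner: links right to a noun
    "cat": [["D-", "S+"]],   # noun: needs a determiner on the left, a verb on the right
    "ran": [["S-"]],         # verb: needs a subject on the left
}

def parses(words):
    """Greedy left-to-right connector matching, adjacent words only."""
    # take the first disjunct of each word (toy: no ambiguity handling)
    stacks = [list(LEXICON[w][0]) for w in words]
    for i in range(len(words) - 1):
        left, right = stacks[i], stacks[i + 1]
        # a '+' connector on the left word mates with the same-label
        # '-' connector on the right word
        for c in list(left):
            if c.endswith("+") and c[:-1] + "-" in right:
                left.remove(c)
                right.remove(c[:-1] + "-")
    return all(not s for s in stacks)  # all connectors consumed?

print(parses(["the", "cat", "ran"]))   # True
print(parses(["cat", "the", "ran"]))   # False
```

Replace the word-connectors with premise/conclusion connectors on inference rules, and the same matching process is proof search: that is the sense in which theorem-proving is parsing.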

So, I am not really into formal logic, per se; rather, I am trying to amass
evidence why probabilistic patterns are the correct approach, while
probabilistic lambdas are too stringent, too constrained, too tight to be
useful.  One has to assign probabilities to the patterns, the arrangements,
and NOT to the individual components of the pattern. (Also, I keep saying
"probability" when I actually mean "mutual information" or maybe
"surprisingness", as the case may be. Just as lambda is not quite right, I
get the feeling that probability is not quite right, and that talking about
mutual information and surprisingness is more correct than talking about
probability.  But this intuition remains difficult to explain.)

-- Linas


> Best regards,
> Alexey.
>
> 2018-05-15 0:13 GMT+03:00 Linas Vepstas <[email protected]>:
>
>> Alexey, Konstantin,
>>
>> On Mon, May 14, 2018 at 2:41 PM, Константин Тимофеев <
>> [email protected]> wrote:
>>
>>> Nil, Linas, thanks for the explanations. It seems like I have to learn
>>> some new things before I could understand clearly what is the issue behind
>>> this discussion. Hopefully it will not take too much time.
>>>
>>
>> If it doesn't take much time, then you are a super-human genius working
>> in the wrong field.  To understand the issue, you'd have to read a book or
>> two on proof theory, along with a large rainbow of related topics. It's
>> taken me ten years to understand what the problem is.  I can't magically
>> transfer this knowledge, but I can tell you what keywords to search for and
>> read on.   Besides proof theory and model theory, anything that describes
>> linear logic should make clear what the issue with cartesian products is
>> (in proof theory, it corresponds to the rules of weakening and contraction,
>> which are assumed by the lambda abstraction but are forbidden by tensor
>> products.)  To understand why products in linguistics are tensor products
>> and not cartesian products, look at the wikipedia article on "pregroup
>> grammar".  The earliest reference that I know of that discusses this
>> clearly is Solomon Marcus's "Algebraic Linguistics" (1967), which your
>> compatriots in Novosibirsk sent me.  The categorial grammars there are
>> discussed on pages 90-120, which you can match up to what the wikipedia
>> pregroup grammar article states.
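(As a side illustration of the pregroup idea: each word gets a string of basic types with left and right adjoints, and adjacent pairs x·x.r and x.l·x cancel, exactly like jigsaw tabs and slots. A toy checker, with an invented three-word lexicon, not taken from any of the references above:

```python
# Pregroup reduction sketch: a word's type is a list of basic types
# with left/right adjoints (written 'n.l', 'n.r'); adjacent pairs
# x · x.r  and  x.l · x  cancel.  A word string is a sentence when
# the whole sequence reduces to the single type 's'.
TYPES = {
    "John": ["n"],
    "Mary": ["n"],
    "sees": ["n.r", "s", "n.l"],   # transitive verb
}

def reduces_to_s(words):
    seq = [t for w in words for t in TYPES[w]]
    changed = True
    while changed:
        changed = False
        for i in range(len(seq) - 1):
            a, b = seq[i], seq[i + 1]
            if b == a + ".r" or a == b + ".l":   # x·x.r or x.l·x cancels
                del seq[i:i + 2]
                changed = True
                break
    return seq == ["s"]

print(reduces_to_s(["John", "sees", "Mary"]))   # True
print(reduces_to_s(["sees", "John", "Mary"]))   # False
```

The adjoints behave like tensor-product duals, not cartesian projections: each one is consumed exactly once, which is the linear-logic flavor being discussed.)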
>>
>> Last time I said that "theorem proving is like parsing", I said it to
>> Ben's son Zar, who is studying theorem proving, and his response was "yeah,
>> so what, everyone knows that".  If you already know that, good.  My point
>> is that since we know that theorem proving is incompatible with lambda, we
>> should stop using lambda so much. To me, this still remains a deep,
>> important insight; I'm trying to use it to guide the design of atomese.  I
>> can't imagine that you will understand in only a few months.  A few years,
>> maybe. I'm just saying, now is the time to start reading about it.
>>
>> -- Linas
>>
>>>
>>>
>>> On Mon, May 14, 2018 at 8:45 PM, Linas Vepstas <[email protected]>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 14, 2018 at 12:54 AM, Nil Geisweiller <
>>>> [email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> On 05/11/2018 11:36 PM, Linas Vepstas wrote:
>>>>>
>>>>>> light sprinkling of lambda calculus on top.  Nil keeps putting in too
>>>>>> much lambda calculus, I can't seem to get him to stop.  It needs more
>>>>>> prolog, I keep telling him.
>>>>>>
>>>>>
>>>>> But
>>>>
>>>>
>>>> I meant that to be funny. Like saying "it needs more pepper, no, it
>>>> needs more salt".
>>>>
>>>>
>>>>> the left hand side term of a prolog fact shares the same goodies I'm
>>>>> after when I use a LambdaLink. It exposes a list of variables, in a
>>>>> determined order, abstracting the body away.
>>>>>
>>>>
>>>> But Alexey and Konstantin have not heard this argument before, so let
>>>> me sketch it briefly.
>>>>
>>>> * Yes, lambdas provide abstraction. They're useful. But ...
>>>> * lambda calc is old; it was originally developed with a view to
>>>> Hilbert's program of providing a foundational basis for classical logic.
>>>> * prolog is new, developed for knowledge representation and theorem
>>>> proving, that is, non-classical logic.
>>>> * lambda calc is the internal language of cartesian closed categories:
>>>> viz., anything with lists, pairs, cartesian products. Lambda calc corresponds
>>>> to classical logic.
>>>>
>>>> * by contrast, theorem proving is known to NOT use classical logic; the
>>>> logic of theorem proving is intuitionist logic, and more generally, the
>>>> logic of Kripke frames. Insofar as the rule engine and PLN are a
>>>> kind of theorem prover, they should be based on the principles given in
>>>> books on proof theory, and NOT on classical logic!  People who work on
>>>> proof theory have made a lot of advancements since the 1930's. We should
>>>> know of, understand, and employ those concepts. Proof theory is a big deal.
>>>> We should not ignore it.
>>>>
>>>> * There are other things that classical logic fails to describe,
>>>> besides proof theory: natural language and biochemistry. Both of these
>>>> appear to be described by intuitionistic logic, or, more properly, by
>>>> fragments of linear logic. The difference is that linear logic throws away
>>>> the notion of pairs, lists, cartesian products and lambdas: lambdas cannot
>>>> be used to describe language or biochemistry.  (In physics, the
>>>> failure/incorrectness of lambda is "well-known", and is called the
>>>> "no-cloning theorem". The fundamental reason for this is that tensor
>>>> products violate the assumption of the existence of pairs, lists, cartesian
>>>> products. Tensor products are incompatible with cartesian products.
>>>> Insofar as the lambda abstracts the cartesian product, it too must be
>>>> discarded from the theory, as being incompatible. Any theory that has
>>>> tensor products cannot have lambdas in it. They can't co-exist without
>>>> contradiction.)
>>>>
>>>> * Insofar as we work with language, learning and theorem proving, where
>>>> there are "well-known" theorems that prove that lambda calc is
>>>> incompatible, we should perhaps stop focusing on it so much. Lambda calc
>>>> has its place -- lambda is great for abstraction, but not everything can
>>>> be abstracted in that way.  Most notably (for me), the link-types of
>>>> link-grammar cannot be abstracted by lambdas, because the disjuncts of
>>>> link-grammar (alternately, the VP, NP of phrase-structure grammars) look
>>>> like tensor products, and not like cartesian products.
>>>>
>>>> For these reasons, I think we are over-using lambdas. They offer a
>>>> deeply-incorrect, basically-broken view of the world we are interested in:
>>>> proof theory, linguistics, knowledge, biochemistry.
>>>>
>>>>
>>>>>
>>>>> I know that FreeLink automatically digs up the variables, but it has
>>>>> other problems
>>>>>
>>>>> 1. The variable order depends on their appearances in the body. Even
>>>>> worse, this may depend on the variable names (see
>>>>> https://github.com/opencog/atomspace/issues/1677).
>>>>>
>>>>
>>>> Yes, that is the definition of a "free variable".  Do you want to
>>>> propose some kind of semi-free variable?
>>>>
>>>>>
>>>>> 2. There's no way to tell FreeLink to escape some variable (unless a
>>>>> QuoteLink is used).
>>>>>
>>>>
>>>> Yeah, quote links are awful. I think a very very high priority is to
>>>> figure out how to do the things we need to do, without having to use
>>>> quotes.  I'm not sure, but the need for quotes seems to be a side-effect of
>>>> using lambdas or something like that.  I don't understand.  In all the work
>>>> I do, with link-grammar, with language learning, quotes are never needed;
>>>> the concept of lambda abstraction is replaced by connectors and disjuncts;
>>>> and these never need to be quoted. Actually, I can't even imagine how one
>>>> could "quote" a connector or disjunct, it doesn't make sense,
>>>> conceptually.  I would have to think hard about that.
>>>>
>>>> (Well, there is also no concept of "variables", either. The role of
>>>> variables is taken up by connectors, which are kind-of-like typed free
>>>> variables. Sort of. Again, this requires deep thought.)
>>>>
>>>> Upshot: maybe if we used lambdas less, then we would not need quotes.
>>>> Maybe they are partners in crime.
>>>>
>>>>
>>>>> 3. Variables cannot be typed.
>>>>>
>>>>
>>>> If they were typed, they would not be free, right?  Maybe there is a
>>>> way to have typed free variables, I'm not sure; it's certainly not obvious;
>>>> but I can also see how it might be possible.  You'd have to design it
>>>> carefully, to avoid contradictions.
>>>>
>>>>
>>>>>
>>>>> For these 3 reasons I generally stay away from FreeLink, even though
>>>>> I'm sure it has good uses, if one is careful.
>>>>>
>>>>
>>>> !? I don't recall ever suggesting that we should use FreeLink for
>>>> anything other than what it is currently used for. It provides some basic
>>>> utilities for locating variables in an expression, but that's it.  I don't
>>>> understand why you even brought it up... ??
>>>>
>>>> --linas
>>>>
>>>> --
>>>> cassette tapes - analog TV - film cameras - you
>>>>
>>>
>>>
>>
>>
>> --
>> cassette tapes - analog TV - film cameras - you
>>
>
>


-- 
cassette tapes - analog TV - film cameras - you
