On 1/30/21, Matt Mahoney <[email protected]> wrote:
>> My proposed model has some important properties:
>> 1.  uses deep learning to learn logic formulas
>
> Do you have any experiments confirming this? It wasn't clear from your
> paper how to achieve this.
>
> The lack of an efficient learning algorithm has been the single most
> important obstacle to symbolic AI, in spite of decades of research. I
> realize that humans are able to manually encode knowledge into structured
> formats like first-order logic and its extensions like CycL and
> probabilistic logic.
> But this ability can't be central to learning in humans because it comes
> after the knowledge is learned. You don't have to know the difference
> between a noun and a verb to form grammatically correct sentences.

Yes, I have long identified learning as the bottleneck problem in AGI,
and that is why I adopted deep learning to learn logic rules.
(This is not trivial: it requires throwing away the traditional symbolic
logic engine, and the learned logic rules would sit inside the "black box"
of a large neural network.)

At the 2018 AGI conference in China, Ben Goertzel seemed critical of this
approach, but he is actually the one who suggested the idea to me around 2014,
back when "deep learning" wasn't the fad it is now 😆

Then I worked out a theory and claimed that it solved AGI
(the paper titled "Logicalization of BERT").
The idea is to encode the syntax of logic formulas in vector space
and then simply tell a deep neural network to "learn logic formulas".
The whole approach is just old-fashioned logic AI wrapped in deep learning.
It may work, and it's easy to implement, so I'm still planning to implement it.
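As a minimal sketch of what "encoding the syntax of logic formulas in
vector space" could look like (all names here are my own hypothetical
choices, not the actual "Logicalization of BERT" construction): tokenize
a formula and embed its tokens, pooling them into one vector that a
downstream network could then be trained on.

```python
import numpy as np

# Hypothetical toy vocabulary of logic-syntax tokens.
VOCAB = {tok: i for i, tok in enumerate(
    ["∀", "∃", "x", "y", "P", "Q", "(", ")", "¬", "∧", "→"])}
DIM = 8
rng = np.random.default_rng(0)
EMB = rng.normal(size=(len(VOCAB), DIM))  # learnable token embeddings

def embed_formula(tokens):
    """Map a formula's token sequence to one vector (mean pooling).
    A real model (e.g. a Transformer) would use position-aware encoding."""
    vecs = EMB[[VOCAB[t] for t in tokens]]
    return vecs.mean(axis=0)

v1 = embed_formula(["∀", "x", "P", "(", "x", ")"])
v2 = embed_formula(["∀", "y", "P", "(", "y", ")"])
# Note: alpha-equivalent formulas (x vs. y) get nearby vectors only if
# training makes it so; the raw syntactic embedding knows nothing about
# variable renaming.
print(v1.shape)
```

The point of the sketch is that once formulas live in vector space,
"learn logic formulas" reduces to ordinary gradient-based training on
those vectors.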

A problem with this approach is that it learns from sensory experience,
so learning may be very slow, much slower than simply asserting a logic
formula such as ∀x P(x).  I'm also thinking of a hybrid approach....

On the other hand, I have also tried to embed logic formulas *semantically*
rather than syntactically.  That led to an unsuccessful attempt.

In mathematical logic, the semantic entities grow like a tree, so it
is easy to embed them in a fractal space.  But neural networks cannot
deal with fractal structures (universal approximation fails there), so
this seems to be a dead end.
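The tree-in-a-fractal idea can be made concrete with a toy construction
(my own illustration, not something from the post): read a node's
root-to-node path of child indices as digits in base b, so each subtree
occupies a Cantor-like sub-interval of [0, 1).

```python
# Sketch of a Cantor-like tree embedding: each node, addressed by its
# root-to-node path of child indices, maps to a point in [0, 1), and
# every subtree occupies its own sub-interval.

BRANCH = 3  # assumed maximum branching factor b

def embed_path(path, base=BRANCH):
    """Map a root-to-node path such as (0, 2) to a point in [0, 1)."""
    x = 0.0
    for depth, child in enumerate(path, start=1):
        x += child / base ** depth  # child index becomes a base-b digit
    return x

# Sibling subtrees land in disjoint intervals of width base**-depth:
a = embed_path((0, 2))  # inside [0, 1/3), the subtree under child 0
b = embed_path((1,))    # inside [1/3, 2/3), the subtree under child 1
assert a < 1 / 3 <= b < 2 / 3
```

Points that are numerically close can still encode structurally very
different trees, which hints at why smooth neural approximators struggle
with such targets.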

There may be some hope in Hilbert space, but I can't even work out
an initial idea, much less analyse the distances between elements
to see what's going on....

There is one other, related problem.  Imagine that logic entities are
represented by points in some semantic space, and predicates by spatial
regions.  Then "John is male" would be the point representing John
enclosed in the region representing "males".  This seems to work for
simple statements, but then every formula P(a) would immediately have a
truth value that can be read off from the position of the point a
relative to the region defined by P.  This is absurd, because most
statements need to be reasoned about and proved.  So these points and
regions would have to be "indeterminate".
A lot of mathematical logic relies on the analogy with point-set topology,
but here the points and sets would need to be constantly "in flux"....
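One way to make the point/region picture concrete, including the
"indeterminate" truth values, is a toy three-valued membership test
(entirely my own sketch, with hypothetical names and numbers): a
predicate is a ball, and points near its boundary get no truth value
until reasoning settles them.

```python
import numpy as np

# Hypothetical entity embeddings in a 2-D "semantic space".
entities = {"John": np.array([1.0, 0.2]), "Mary": np.array([-1.0, 0.1])}

class Predicate:
    """A predicate as a ball (center, radius) with an indeterminate band."""

    def __init__(self, center, radius, margin=0.2):
        self.center = np.asarray(center)
        self.radius = radius
        self.margin = margin

    def truth(self, point):
        """Return True / False / None (None = indeterminate, needs proof)."""
        d = np.linalg.norm(point - self.center)
        if d < self.radius - self.margin:
            return True   # well inside the region
        if d > self.radius + self.margin:
            return False  # well outside the region
        return None       # boundary zone: must be settled by reasoning

male = Predicate(center=[1.0, 0.0], radius=0.6)
print(male.truth(entities["John"]))   # clearly inside
print(male.truth(entities["Mary"]))   # clearly outside
```

The None band is a crude stand-in for the "in flux" points and sets:
membership near the boundary is not read off geometrically but left to
an inference step.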

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T54594b98b5b98f83-Mc5325bfb65a1f99ce0803bf1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
