On Tue, Oct 1, 2019 at 7:48 AM Brett N Martensen <[email protected]>
wrote:

> Matt is right - Logic needs to be grounded on experiences.
>
>
> http://matt.colorado.edu/teaching/highcog/readings/b8.pdf


That's a good paper; I'll read it in detail later.

I made a mistake earlier.  When the brain thinks about "John loves Mary",
its representation is not just the juxtaposition of the three concepts
"John", "love", and "Mary". Rather, the brain constructs a model composed
of a whole set of *ramifications* of "John loves Mary".  For example:
John would be assumed to be a typical man with typical male
characteristics, and John's love for Mary would be assumed to involve the
typical emotions of romantic love, etc.  All these little pieces of
(assumed, or abduced) knowledge constitute the mental model.
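As a toy illustration of this idea (the facts and the "defaults" table below are entirely made up for the sketch, not a claim about how the brain actually represents anything), model construction could be pictured as expanding a proposition into the set of default facts it abduces:

```python
# Toy sketch: expand a proposition into a mental model, i.e. the
# proposition plus its abduced ramifications.  The DEFAULTS table is a
# hypothetical stand-in for background knowledge.
DEFAULTS = {
    ("John", "loves", "Mary"): [
        ("John", "is_a", "man"),
        ("John", "feels", "romantic_emotion_toward_Mary"),
    ],
    ("John", "is_a", "man"): [
        ("John", "has", "typical_male_traits"),
    ],
}

def construct_model(proposition):
    """Build a mental model: the proposition plus all its ramifications."""
    model = set()
    frontier = [proposition]
    while frontier:
        fact = frontier.pop()
        if fact in model:
            continue
        model.add(fact)
        # Pull in the default ramifications of this fact, if any.
        frontier.extend(DEFAULTS.get(fact, []))
    return model

model = construct_model(("John", "loves", "Mary"))
```

Here the model ends up containing not just "John loves Mary" but also the abduced pieces ("typical man", "romantic emotions"), which is the point of the paragraph above.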

When the brain hears that "Mary doesn't love John", it adds some further
facts to the constructed model.

From the data of this model, it would be *inferred* that "John is probably
unhappy / heart-broken".  It is this inference mechanism that is very
mysterious to us.

It seems reasonable to assume that mental models are constructed from
neural "features", i.e., activation patterns.  But we don't know how the
brain jumps from one mental model to a slightly different one containing
new conclusions.

It would be very fruitful to compare this mechanism with symbolic logic
rules.  It may lead to a better way to build AGIs, different from the
logic-based approach.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-Mf9252f5b8f70066a3928d5f3
Delivery options: https://agi.topicbox.com/groups/agi/subscription
