Well, not to pointlessly prolong the discussion, but ...

Theory-of-mind is not that different from theory-of-world.  Although I
believed that it was Pumpkin that jumped out of the car window, I now
have to revise my beliefs based on new evidence.

So, I start with a statement about objective reality (a statement
concerning small dogs that fly out of car windows) and transform it into
a "controller" -- a verb-action -- that updates my personal model of the
external world.  See where I am going with this?

My model of the external world is some set of EvaluationLinks asserting
"facts" that I "know to be true". Except that they are actually just my
(personal) beliefs about the external world.
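
To make that concrete, here is a minimal Atomese sketch of one such
"fact".  The predicate and concept names are invented for illustration;
only EvaluationLink and (stv strength confidence) are the stock API.

(use-modules (opencog))

; One hypothetical "fact" in my personal world-model: an EvaluationLink
; carrying a SimpleTruthValue.  Strength 0.9, confidence 0.7 -- I am
; fairly sure, based on what I (think I) saw.
(define flying-dog-belief
  (EvaluationLink (stv 0.9 0.7)
    (PredicateNode "jumped-out-of-car-window")
    (ListLink (ConceptNode "Pumpkin"))))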

When I hear a new sentence -- "It was Dali" -- I have to revise my
belief network.  I parse the sentence, eventually turn it into a
"controller" or "action", and use that "action" to update my belief
network about flying dogs.
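
As a sketch, and assuming the belief atom from above, the "controller"
might be nothing more than a scheme procedure that rewrites truth
values.  cog-set-tv! is the stock AtomSpace call; everything else here
is made up for the example.

; Hypothetical update-controller: demote the Pumpkin belief, assert the
; Dali belief.  Writing the same EvaluationLink again fetches the unique
; existing atom, so cog-set-tv! revises it in place.
(define (revise-flying-dog-belief)
  (cog-set-tv!
    (EvaluationLink
      (PredicateNode "jumped-out-of-car-window")
      (ListLink (ConceptNode "Pumpkin")))
    (stv 0.1 0.9))
  (EvaluationLink (stv 0.9 0.9)
    (PredicateNode "jumped-out-of-car-window")
    (ListLink (ConceptNode "Dali"))))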

Did that update-action have to go through the action-selection stage?
You might think that such actions are always on, that they always
happen.  But perhaps I am very sleepy, or tired, or cranky/angry, and my
action-selector decides NOT to update my belief network about flying
dogs.
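
In sketch form, the gate is just a predicate sitting in front of the
controller.  A real system would route this through OpenPsi; the sleepy?
stub below is pure invention.

; Hypothetical action-selection gate.  The stub always answers "not
; sleepy"; a real implementation would query affective state.
(define (sleepy?) #f)

(define (maybe-revise)
  (if (sleepy?)
      '()                          ; the selector vetoes the update
      (revise-flying-dog-belief)))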

It's even more complex: when that controller runs and updates my beliefs
about flying dogs, I may then also play the smile animation, or the
roll-your-eyes animation, or the
sit-up-in-bed-in-the-middle-of-the-night-and-smack-forehead animation.
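
Sticking with the toy example, those side-effects just pile into the
same controller -- the animation predicate is, again, made up:

; Hypothetical controller with an expressive side-effect: revise the
; belief, then request an animation.
(define (revise-and-react)
  (maybe-revise)
  (EvaluationLink
    (PredicateNode "play-animation")
    (ListLink (ConceptNode "smack-forehead"))))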

See?  This is why I keep wanting to talk about "world models" and
"controllers" -- it's all a form of belief revision.  Right now, I don't
much care whether the formulas for updating the truth values are
mathematically correct according to probabilistic modal logic.  What I
really, really *do* care about is that we pick the correct
representation from the three styles listed on the wiki page, and that
we design controllers that can correctly update these belief structures.
If the representation is bad and the controllers are flawed, then we
cannot construct any beliefs from our sensory inputs.
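
Just to show where such a formula would plug in: here is a naive
confidence-weighted average.  This is NOT the PLN revision formula, and
deliberately so -- it is a placeholder.

; Placeholder merge rule a controller might use to combine an old belief
; with new evidence.  cog-tv-mean and cog-tv-confidence are the stock
; accessors; the formula itself is a naive stand-in.
(define (naive-merge old-tv new-tv)
  (let* ((c (+ (cog-tv-confidence old-tv) (cog-tv-confidence new-tv)))
         (s (/ (+ (* (cog-tv-mean old-tv) (cog-tv-confidence old-tv))
                  (* (cog-tv-mean new-tv) (cog-tv-confidence new-tv)))
               (max c 1e-9))))
    (stv s (min 1.0 c))))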

The only way out of this mess that I see is to keep trying to build some
KR system that converts English sentences into action-psi-rules, updates
the KR structures, performs actions, and answers questions -- and then
see what happens.  Then step back, look at the architecture, see whether
it's fucked up or not, and see if we can fix it.
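
For what it's worth, the glue between "parsed sentence" and "run the
controller" can be sketched as a single rule.  The heard-sentence
predicate and handle-sentence procedure are invented names; BindLink,
GroundedSchemaNode and cog-execute! are the stock machinery.

(use-modules (opencog exec))

; Hypothetical dispatch: ignore the actual sentence in this sketch and
; just run the gated controller from earlier.
(define (handle-sentence sent)
  (maybe-revise))

; Hypothetical rule: whenever the parser asserts that some sentence was
; heard, hand it to the controller.
(define update-rule
  (BindLink
    (VariableNode "$sent")
    (EvaluationLink
      (PredicateNode "heard-sentence")
      (ListLink (VariableNode "$sent")))
    (ExecutionOutputLink
      (GroundedSchemaNode "scm: handle-sentence")
      (ListLink (VariableNode "$sent")))))

; (cog-execute! update-rule)  ; runs the rule against the AtomSpace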

What we cannot do is build little pieces that are disconnected from the
chatbot: it's great that we now have some modal-logic-with-correct-formulas
code, but it's disconnected from the "reality" of a working, demo-able
chatbot.

--linas



On Wed, Mar 15, 2017 at 12:02 PM, Ben Goertzel <[email protected]> wrote:

> > Well, but the very first example on the wiki page is "I tell you that
> > small dogs can fly" which is not the same as "I believe that small
> > dogs can fly"...
> >
> > This promptly goes down a rabbit-hole of a theory of mind:  "I believe
> > that Ben thinks that small dogs can fly"  or more likely: "I believe
> > that Ben was joking when he said that small dogs can fly".
>
> Well it may be a rabbit hole.  But what Sumit did was figure out some
> sensible truth value formulas for the particular case of "belief" ...
>
> https://github.com/sumitsourabh/opencog/blob/patch-1/opencog/reasoning/pln/rules/epistemic-reasoning/theory/gsoc_theory.tex
>
> > And then there is premonition, because that wiki page was written before
> > Pumpkin jumped out the window of the moving car and broke her leg...
> >
>
> I *believe* that was Dali not Pumpkin ;p ;) ...
>
