Hi Vitaly,

On Wed, May 22, 2019 at 11:19 AM Vitaly Bogdanov <vsb...@gmail.com> wrote:

> Hi Linas,
>
> It was merged half a year ago, a year ago.
>>
>>> In the current atomspace/opencog code I can find only OctoMapValue, which
>>> inherits FloatValue (and keeps a vector of doubles in it) and has an update()
>>> method which updates the value using OctoMapNode. So the value should be
>>> updated before using it.
>>>
>>
>> Yes, but I think you misunderstand how it works. It is updated only when
>> the value is asked for. If no one asks for the value, it is never updated.
>> This design allows the current value to be managed externally, in some
>> other component, and it is then brought into the atomspace "on demand",
>> only when some user of the atomspace tries to look at the value.   Think of
>> it as a "smart value" -- when the user asks the value "what number are you
>> holding?", the value isn't holding anything; instead, it goes to the
>> external server, gets the latest, and returns that.
>>
>> In this case, OctoMapValue never changes until you ask for its current
>> value; then it goes to the space-server, gets the current value, and
>> returns that.
>>
>
> Ok, this way it should work. But the thing which confuses me is that the only
> place where update() is called is within the OctoMapValue.to_string() function.
>

That's because it inherits from FloatValue ... The primary access is
provided by the `value()` method -- see FloatValue.h line 59
```
     const std::vector<double>& value() const { update(); return _value; }
```
which works with
```
      virtual void update();
```
The goal here was to allow the compiler to inline the call to `value()`,
whereas the `virtual void update()` is unconstrained -- it doesn't have to
return anything, and it can do anything at all.
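
To make that concrete, here is a minimal, self-contained sketch of the "smart
value" pattern. It deliberately does not use the real AtomSpace headers;
`FloatValueSketch` and `FetchingFloatValue` are illustrative stand-ins for
FloatValue and a subclass such as OctoMapValue, and the numbers returned are
made up.
```
// Minimal sketch of the "smart value" pattern; illustrative names only,
// not the real AtomSpace classes.
#include <cstdio>
#include <vector>

class FloatValueSketch
{
protected:
    // mutable, so a const accessor can refresh the cached numbers
    mutable std::vector<double> _value;
    virtual void update() const {}     // base class: nothing to refresh
public:
    virtual ~FloatValueSketch() = default;
    // Every read goes through update(), so data is fetched on demand only.
    const std::vector<double>& value() const { update(); return _value; }
};

class FetchingFloatValue : public FloatValueSketch
{
    // Pretend this asks an external server (OctoMap, TensorFlow, ...)
    void update() const override { _value = { 0.7, 0.9 }; }
};

int main()
{
    FetchingFloatValue v;
    // Nothing is fetched until somebody actually looks:
    std::printf("first element: %f\n", v.value()[0]);
    return 0;
}
```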


>>> For example, if a generic predicate uses a value to verify whether an object
>>> is "red", then what it expects from the value is a pair of doubles (mean,
>>> confidence). If our value returns such a pair of doubles, it forgets the whole
>>> computation graph which led to this result. It means that we are not able
>>> to propagate the error back through the calculation graph and improve the
>>> next result.
>>>
>>
>> ? I don't understand. What do you need to propagate back?
>>
>
> Our goal is to make a system which consists of neural networks and atomspace
> links, and which can be trained by error backpropagation. When the system
> answers a question and the answer is not correct, the error is backpropagated
> through the computation graph and updates the truth values of the atomspace
> links and the neural-network weights.
>

OK. Yes, that seems reasonable.  Note that all TruthValues inherit from
FloatValue. None of them override the virtual `update()` method (they don't
need to); but a special one could: e.g. you could create a class
TensorFlowTruthValue and override the `update()` method to compute
something on the fly, or fetch the latest values, or whatever.
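
No such class exists today, so the following is only a sketch of the shape it
could take, written stand-alone rather than against the real TruthValue
headers; `query_model()` and the (strength, confidence) numbers are
assumptions for illustration.
```
// Hypothetical TensorFlowTruthValue -- not a real AtomSpace class.
// A truth value is just a FloatValue holding (strength, confidence);
// overriding update() lets those two numbers be pulled from a trained
// model only at the moment somebody inspects the truth value.
#include <cstdio>
#include <utility>
#include <vector>

// Stand-in for whatever bridge reaches the trained network.
static std::pair<double, double> query_model()
{
    return { 0.83, 0.65 };   // (strength, confidence) reported by the net
}

class TensorFlowTruthValueSketch
{
    mutable std::vector<double> _value { 0.0, 0.0 };
    void update() const
    {
        auto sc = query_model();
        _value = { sc.first, sc.second };
    }
public:
    const std::vector<double>& value() const { update(); return _value; }
    double get_mean() const       { return value()[0]; }
    double get_confidence() const { return value()[1]; }
};

int main()
{
    TensorFlowTruthValueSketch tv;
    std::printf("strength=%f confidence=%f\n", tv.get_mean(), tv.get_confidence());
    return 0;
}
```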

Mostly, you don't want to update atomspace truth values 100 times a second,
because no one is going to be looking at them very often; such updates are
wasted.  You also want to avoid doing CPU-intensive work in the main atomspace
thread; fork a new thread if you have CPU-intensive things to do, and watch
out for lock contention, etc.  The Value system, with its `virtual update()`
method, was designed to allow CPU-intensive things to live outside of the
atomspace.
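
One way to honour both points is to throttle the refresh and guard the cached
numbers with a lock, so a read is cheap unless the cache has genuinely gone
stale. A stand-alone sketch, with the class name, the fetch function and the
100 ms interval all being illustrative assumptions:
```
// Sketch: throttle a heavyweight update() and make the cache thread-safe,
// so readers in the main atomspace thread never get stuck doing real work.
// Illustrative code only; nothing here is part of the current code base.
#include <chrono>
#include <mutex>
#include <vector>

class ThrottledValueSketch
{
    mutable std::mutex _mtx;
    mutable std::vector<double> _value { 0.0 };
    mutable std::chrono::steady_clock::time_point _last {};

    // Pretend this is the expensive part: ROS, OctoMap, TensorFlow, ...
    static std::vector<double> expensive_fetch() { return { 42.0 }; }

    // Caller must hold _mtx.  Refreshes at most once per 100 ms.
    void refresh_if_stale() const
    {
        auto now = std::chrono::steady_clock::now();
        if (now - _last < std::chrono::milliseconds(100)) return;
        _value = expensive_fetch();
        _last = now;
    }
public:
    // Returns a copy, so no caller ever holds the lock after this returns.
    std::vector<double> value() const
    {
        std::lock_guard<std::mutex> lk(_mtx);
        refresh_if_stale();
        return _value;
    }
};
```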

> Oh, and finally, of course, to hook up the predicates (generic, or
> external) to language (ghost). For this last step, we need the ghost
> maintainers -- Amen, Man Hin, others -- to look at, think about, and
> comment on what predicates they could actually use and support easily and
> quickly.   It's easy to make a list of prepositional and comparative
> phrases -- there are about 100 of them (above, below, behind, next-to,
> bigger-than, taller, ... etc.) and we should start with some reasonable
> subset of these, hook them into the chatbot, and get things like
> question-answering going.  As far as I know, no one has begun any of this
> work yet, right?
>

> Not sure this question is addressed to me, but in our work we didn't modify
> ghost. We used external "language to atomspace query" conversion logic based
> on relex output to demonstrate the approach.

For a demo, I guess that's OK. So, Ghost is a chatbot system designed for
Hanson Robotics, and its primary goal is to integrate sensory and motor
subsystems with language.  That is why I keep saying "preposition" over and
over again.  I really really really want to have this:

```
EvaluationLink
     PredicateNode "is-next-to"
     ListLink
          ConceptNode   "Ben"
          ConceptNode   "David"
```

so that whenever those two are on stage with Sophia, and someone asks
"Sophia, do you see Ben standing next to David?", she can honestly say yes,
because the EvaluationLink caused the `FloatValue::update()` method to be
called, and to be updated with the latest from TensorFlow or from ROS+OctoMap
or wherever.

Now, you can certainly hack around with relex, but that's hard and ugly.
Of course, ghost is just a chatbot; it's not a sophisticated language
processing system.  But it works, and until we get a replacement, I would
prefer it if efforts were aimed that way.

Of course, if you want to invent a brand-new language-processing system,
well, yes, that would be nice, but that is a different discussion.

-- Linas

-- 
cassette tapes - analog TV - film cameras - you
