Hi Alexey,

> Yes, it should be quite possible algorithmically. And that's exactly why we
> discuss this - because we want to use PM algorithms on Values. However, to
> implement this, some architectural and organizational decisions should be
> made (should we generalize existing values to tensors or introduce a
> separate type of values; should we overload TimesLink, etc. to work both
> with NumberNodes and Values, or introduce new types of Links, or introduce
> special links that "atomize" values, etc.; should this be done in
> a separate repo with keeping core PM algorithms unchanged, or should the
> core PM be modified, and by whom, etc.). We have a few people who can work
> on this, but we need to know the preferable way.

I think it's better, if possible, to figure out a way to suitably
modify the core PM rather than
using a separate repo ...

However, I guess the PM tweaks would need to be done by someone on your team,
as Linas and Nil are probably too busy and we don't have a lot of others who
can rapidly perform such changes...

I would personally be in favor of overloading stuff like TimesLink so that it
applies to both NumberNodes and Values, because it seems to me that the
Atom/Value distinction is more of an efficiency-driven implementation
distinction than a fundamental mathematical/conceptual one...

Nil and Linas should be consulted on this stuff, but at this point you
are also in the exalted
"inner circle" with foundational input on these OpenCog-architecture issues...

>> Now coordinate values of bounding boxes ... If we are talking about
>> something like the bounding box of Ben's face during a conversation,
>> which changes frequently, this would be appropriately stored in the
>> Atomspace using a StateLink,
>>
>> https://wiki.opencog.org/w/StateLink
>>
>
> We considered StateLink as a way to feed OpenCog with observations within
> the reinforcement learning direction. But the current question remains the
> same: should we use NumberNodes or Values?..

See my comments below on that... maybe we want some special TensorValues...

> Also, DNNs are trained on (mini-)batches. It is not too natural from an
> autonomous agent perspective, but efficient.

Yes, I see.  Again, maybe some new TensorValue construct will be needed; we
just need to understand clearly what the requirements are in terms of any
special indexing etc.

> Difference between Atoms and Values is relevant, but this relevance will be
> much better seen when we go from just Atoms vs Values to the inference
> processes over them  (declarative logic represents computations inversely;
> and back inversion to direct computations performed by processors is done by
> the inference engine; that's why logic poorly deals with number crunching,
> i.e. Values manipulation, while it is good for reasoning over Atoms), which
> I have not yet discussed on a technical level. However, I mentioned this
> problem in my long message on example of PM application to VQA. Maybe we
> should not discuss all these question simultaneously, but I can try to
> elaborate on this if you wish.

The difference between Atoms and Values is just an implementation-efficiency
tactic...

Values as currently implemented have some properties of Atoms but not others...

Possibly different implementation-efficiency tactics may be of value
in a "tensorial
Atomese" context...

If needed we could also introduce some sort of entity that is between a Value
and an Atom in some sense -- i.e. we could introduce some sort of TensorValue
entity that

1) perhaps knows what links to it (like an Atom but unlike a Value)

2) has an internal tensor that is mutable

There is nothing prohibiting one from building something like this into
Atomspace, though obviously some care would be required to avoid breaking
various existing mechanisms...

>> One question is: Is probabilistic logic an appropriate method for the
>> core of an AGI system, given that this AGI system must proceed largely
>> on observation-based semantics ...
>>
>> I think the answer is YES
>
>
> I think it is necessary but not sufficient

Sure, clearly I agree w that which is why OpenCog has all this other
shit in it too ;) ... and then the interesting questions come up, like
which other methods do we need and how do they need to interoperate...

>
> Exactly. Probabilistic logic is a way to make inference over probabilistic
> programs much more efficient. I have specific examples for this in mind.

It will be good to hear the examples when you have time...

>> Overall, my feeling is that probabilistic programming will be better
>> for procedural knowledge, and probabilistic
>> logic will be better for declarative knowledge
>
>
> Hmm... not precisely. In the context of probabilistic inference, purely
> procedural knowledge is the result of specialization of a general inference
> procedure w.r.t. specific generative model, that is, discriminative models
> are purely procedural. With the use of generative models, you can infer (and
> should infer with the use of search like in probabilistic logic) truth
> values for any conditional expression, but these models don't say how
> exactly to calculate these values, so they don't represent procedural
> knowledge in this sense, and have some features of declarative knowledge. I
> couldn't call generative models declarative knowledge either. So, I'm
> slightly confused about how to classify them...

Yes, the language we have for describing these things introduces confusions...

For instance, I like to think about evolutionary programming (e.g.
MOSES) as a tool for learning procedural knowledge, but OTOH our main
use of this tool right now is for learning classification rules.  Now
a program embodying a classification rule is, in a sense, a "procedure
for performing the classification" ... but then in this sense, every
logical inference is also a cognitive procedure ;p

So you're right, we don't have the right language for describing which
problems are best addressed by a programming-language-ish approach and
which by a logic-ish approach....   (and noting that there is a
sorta-fast conversion btw the two approaches... even so in practice
the conversion is not sooo fast as to obviate the value of looking at
the two approaches sorta-separately, at the moment..)

This seems a solvable language/conceptual problem, but I don't have
time to think about it hard right now either...

ben

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CACYTDBdt9OgeC3q_9J20CeeAF-iY_aQpKqm17tW9zNc1%3DDbKjg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
