Ben,


2018-05-20 8:04 GMT+03:00 Ben Goertzel <[email protected]>:

> Alexey,
>
> ***
> Our knowledge is built from data. Deduction systems (probabilistic or
> not) lack this connection, while functional PPLs are well-suited for
> this.
> ***
>
> I don't understand why you think this way...
>
> The semantics of probabilistic logic systems can be naturally framed
> in a fully observation-based way, which is what the original PLN book
> is about...
>

For me, observational data is sensory data. It doesn't contain concepts,
predicates, etc. As far as I understand, PLN is designed to deal with more
high-level data, e.g. textual. If we have an observation that a particular
crow is black, then for generalizing from such an observation, logical
languages/PLN may suit no worse than, or even better than, (functional) PPLs.
But there are no purely black crows. "Black crow" is just an abstraction,
which itself must somehow be generalized from raw data.
How can we calculate P(crow, black | image)? By Bayes' rule, it is
proportional to P(crow, black) * P(image | crow, black).
You can derive P(crow, black) (or rather P(crow, black | some other
knowledge, e.g. cawing)) using PLN (or, in fact, PPLs too, though perhaps
less elegantly/efficiently). But how will you calculate
P(image | crow, black)? This likelihood is easily described within
functional generative models, but it is very cumbersome within logical
languages.
Well... I'm sure you know all this. So maybe this is just a question of a
difference in our attentional focus.
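To make the likelihood side concrete, here is a minimal sketch of computing
P(image | crow, black) with a toy forward model. Everything here (the
function names, the Bernoulli pixel model, the 3x3 "image") is a hypothetical
illustration, not PLN or any particular PPL's API:

```python
# Toy forward model for P(image | crow, black).
# All names and parameter values are hypothetical illustrations.

def forward_model(is_crow, is_black):
    """Probability that any given pixel is bright, under the latent attributes."""
    # A black crow in view makes bright pixels unlikely; otherwise pixels tend bright.
    return 0.1 if (is_crow and is_black) else 0.8

def likelihood(image, is_crow, is_black):
    """P(image | crow, black) under an i.i.d. Bernoulli pixel model."""
    p_bright = forward_model(is_crow, is_black)
    p = 1.0
    for pixel in image:  # pixel is 1 (bright) or 0 (dark)
        p *= p_bright if pixel == 1 else (1.0 - p_bright)
    return p

# A mostly dark image is more probable under (crow, black) than under neither.
dark_image = [0, 0, 1, 0, 0, 0, 0, 1, 0]
print(likelihood(dark_image, True, True) > likelihood(dark_image, False, False))
```

The point is that the whole likelihood is a few lines of ordinary number
crunching, whereas encoding the same computation as logical inference over
pixel predicates would be far more laborious.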



>
> It's true that a logic system, as part of its formulation, makes some
> commitments about the initial logic rules, which are not initially
> derived from the data but rather supplied by the system designer
>
> OTOH a probabilistic programming system, as part of its formulation,
> makes some commitments about the initial programming language
> primitives, which are not initially derived from the data but rather
> supplied by the system designer
>

Exactly. So what we are talking about is the difference between the
primitives available in the two cases. This might seem a merely practical
rather than fundamental difference. However, this practical difference is so
large that it is almost fundamental. Logic deals with truth values, not
numbers. One can introduce the Peano axioms and basic grounded predicates
stating something like "it is true that the pixel at coordinates (x, y) has
color (r, g, b)", and infer the truth value of "we see a crow and it is
black" given this image, but it is much easier to just crunch numbers. And if
you introduce imperative number crunching into your logical system, you lose
the ability to reason logically about those particular numbers.
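A toy contrast may help: the same image represented as grounded per-pixel
facts versus a plain numeric array. Both encodings and all names below are
hypothetical illustrations:

```python
# The same 3x3 all-black image in two encodings.
# Both representations and names are hypothetical illustrations.

# Logic-style encoding: one ground fact per pixel, "pixel (x, y) has color rgb".
facts = {("pixel", x, y, (0, 0, 0)) for x in range(3) for y in range(3)}

# Counting dark pixels by scanning the fact base (clumsy at scale):
dark_facts = sum(1 for f in facts if f[3] == (0, 0, 0))

# Numeric encoding: the same image as a flat list of RGB triples.
image = [(0, 0, 0)] * 9
dark_pixels = sum(1 for rgb in image if rgb == (0, 0, 0))

print(dark_facts, dark_pixels)  # both count 9 dark pixels
```

Both encodings carry the same information, but only the numeric one composes
naturally with arithmetic, linear algebra, and the rest of imperative number
crunching.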
Non-logical PPLs don't deal with probabilities directly. They generate
values of random variables. These values can be numbers or arbitrary data
structures. Such PPLs naturally inherit the number-crunching power of
imperative languages. That's why I say they are better suited to learning
from (raw) data. Of course, the flip side is that implementing reasoning
with them is as cumbersome as data processing with logic.
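This "generate values, don't manipulate probabilities" style can be sketched
in a few lines of plain Python via likelihood weighting: sample latents from
a prior, weight each sample by how well it explains the data, and normalize.
The prior, pixel model, and numbers are all hypothetical toy choices, not any
real PPL's semantics:

```python
import random

random.seed(0)

# Minimal PPL-style inference sketch: approximate P(crow, black | image)
# by likelihood weighting. All distributions here are hypothetical toys.

def prior():
    """Sample latent attributes (is_crow, is_black) from a toy prior."""
    is_crow = random.random() < 0.3
    is_black = random.random() < (0.9 if is_crow else 0.2)
    return is_crow, is_black

def pixel_likelihood(image, is_crow, is_black):
    """P(image | latents) under an i.i.d. Bernoulli pixel model."""
    p_bright = 0.1 if (is_crow and is_black) else 0.8
    p = 1.0
    for px in image:  # px is 1 (bright) or 0 (dark)
        p *= p_bright if px == 1 else (1.0 - p_bright)
    return p

def posterior_crow_black(image, n=20000):
    """Approximate P(crow, black | image) by importance weighting."""
    num = den = 0.0
    for _ in range(n):
        c, b = prior()
        w = pixel_likelihood(image, c, b)
        den += w
        if c and b:
            num += w
    return num / den

dark_image = [0, 0, 1, 0, 0, 0, 0, 1, 0]
print(posterior_crow_black(dark_image))  # close to 1: dark pixels favor a black crow
```

Note that the program never touches a posterior probability symbolically; the
probability emerges from counting weighted samples, which is exactly the
inversion of the logical approach.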

Well, actually my worries are very technical, and I will describe them in a
new thread (hopefully) soon.

-- Alexey

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CABpRrhzfw%3Ds3Y0%3DrXsvfOnUyviu%2B4m%3Dxqyx9FW3Z9QmY8KTiJg%40mail.gmail.com.