> But how will you calculate P(image|crow,black)?

Well, as you know, if you really want to, something like "the RGB value
of the pixel at coordinate (444,555) is within a distance .01 of
(.3,.7,.8)" can be represented as a logical atom ... so there is no
problem using logic to reason about perceptual data in a very raw way,
if you want to.
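For concreteness, here is a minimal Python sketch of such a pixel-level atom; the `pixel_near` predicate and the tuple encoding of the atom are made up for illustration, not actual Atomese:

```python
import math

# Illustrative only: a ground predicate over raw pixel data, plus a
# tuple standing in for a logical atom. Neither is a real Atomese API.

def pixel_near(image, coord, target_rgb, tol=0.01):
    """Truth value of: 'the RGB at `coord` is within `tol` of `target_rgb`'."""
    r, g, b = image[coord]
    dist = math.sqrt((r - target_rgb[0]) ** 2 +
                     (g - target_rgb[1]) ** 2 +
                     (b - target_rgb[2]) ** 2)
    return dist < tol

image = {(444, 555): (0.3, 0.7, 0.8)}  # toy one-pixel "image"
atom = ("Evaluation", "pixel_near", (444, 555), (0.3, 0.7, 0.8))
print(pixel_near(image, (444, 555), (0.3, 0.7, 0.8)))  # True
```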

OTOH I don't really want to do it that way... instead, as you know, I
want to model visual data using deep NNs of the right sort, and then
feed info about the structured latent variables of these NNs and their
interrelationships into the logical reasoning engine....   This is
because it seems like NNs, rather than explicit logic or probabilistic
programming, are more efficient at processing large-scale raw video
data...
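A rough sketch of that pipeline, with a toy stand-in for the encoder network (the `encode` map, weights, and predicate names are all invented for illustration; a real system would use a trained deep NN):

```python
import math
import random

# Hypothetical pipeline: an encoder maps an image to latent variables,
# and strongly-activated latents become symbolic facts for the reasoner.
# `encode` below is a toy stand-in for a real deep-NN encoder.

rng = random.Random(0)
W = [[rng.gauss(0, 1) for _ in range(12)] for _ in range(4)]  # stand-in weights

def encode(image_vec):
    """Toy 'encoder': a linear map plus a sigmoid, in place of a deep NN."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, image_vec))))
            for row in W]

def latents_to_facts(z, names, threshold=0.5):
    """Turn strongly-activated latent units into (predicate, strength) facts."""
    return [(name, val) for name, val in zip(names, z) if val > threshold]

image_vec = [rng.gauss(0, 1) for _ in range(12)]  # stand-in raw input
z = encode(image_vec)
facts = latents_to_facts(z, ["crow-like", "black", "winged", "cawing"])
print(facts)  # facts the logical reasoning engine could consume
```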

It occurs to me that -- while I don't have time for it this week, while
traveling around doing more business-y stuff -- it might be valuable
to just bite the theoretical bullet, and work out in more detail the
mapping between PLN probabilistic logic in particular (including its
indefinite-probability truth values, intensional inference, and so
forth) and probabilistic programming...

In principle this is just some specific fiddling in the direction of
Curry-Howard correspondence, but still, working it out in particular
might well teach us something.   This is, I suppose, something that
you, me, Nil and Matt Ikle' could contribute to....   It's a pretty
interesting topic to me, but it might help us make progress beyond
throwing around generalities and expressions of differences in
individual taste?
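To give a flavor of the kind of mapping I mean: an indefinite-probability truth value can be approximated by sampling, the way a PPL would. The sketch below encodes the truth value as a Beta posterior and reads off a credible interval; that Beta/credible-interval encoding is an illustrative assumption, not the exact PLN formulation:

```python
import random

# Sketch of a PLN <-> PPL bridge: treat a truth value learned from n
# observations as a Beta posterior, and recover an indefinite interval
# [L, U] at credibility b by Monte-Carlo sampling (PPL-style inference).

def indefinite_tv(n_pos, n_total, b=0.9, n_samples=20000, seed=1):
    """Monte-Carlo [L, U] interval for P(event), given n_pos successes."""
    rng = random.Random(seed)
    # Posterior is Beta(n_pos + 1, n_total - n_pos + 1) under a uniform prior.
    samples = sorted(rng.betavariate(n_pos + 1, n_total - n_pos + 1)
                     for _ in range(n_samples))
    lo = samples[int(((1 - b) / 2) * n_samples)]
    hi = samples[int((1 - (1 - b) / 2) * n_samples) - 1]
    return lo, hi

# "90 of 100 observed crows were black" -> an interval around 0.9
L, U = indefinite_tv(90, 100)
print(round(L, 2), round(U, 2))
```

Fewer observations yield a wider interval, which is exactly the "indefiniteness" the truth value is meant to carry.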

-- Ben



On Sun, May 20, 2018 at 10:26 AM, Alexey Potapov <[email protected]> wrote:
> Ben,
>
>
>
> 2018-05-20 8:04 GMT+03:00 Ben Goertzel <[email protected]>:
>>
>> Alexey,
>>
>> ***
>> Our knowledge is built from data. Deduction systems (probabilistic or
>> not) lack this connection, while functional PPLs are well-suited for
>> this.
>> ***
>>
>> I don't understand why you think this way...
>>
>> The semantics of probabilistic logic systems can be naturally framed
>> in a fully observation-based way, which is what the original PLN book
>> is about...
>
>
> For me, observational data is sensory data. It doesn't contain concepts,
> predicates, etc. As far as I understand, PLN is designed to deal with
> higher-level data, e.g. textual. If we have an observation that a particular
> crow is black, then this is an observation for which generalization in logical
> languages/PLN can suit no worse, or even better, than (functional) PPLs. But
> there are no purely black crows. It's just an abstraction, which itself
> should somehow be generalized from raw data.
> How can we calculate P(crow,black|image)? It's proportional to
> P(crow,black)P(image|crow,black).
> You can derive P(crow,black) (or rather P(crow,black|some other knowledge,
> e.g. cawing)) using PLN (or PPLs too in fact, though maybe less
> elegantly/efficiently). But how will you calculate P(image|crow,black)? This
> probability is easily describable within functional generative models, but
> it's very cumbersome within logical languages.
> Well... I'm sure you know all this stuff. So, maybe this is just the
> question about the difference in our attentional focus.
>
>
>>
>>
>> It's true that a logic system, as part of its formulation, makes some
>> commitments about the initial logic rules, which are not initially
>> derived from the data but rather supplied by the system designer
>>
>> OTOH a probabilistic programming system, as part of its formulation,
>> makes some commitments about the initial programming language
>> primitives, which are not initially derived from the data but rather
>> supplied by the system designer
>
>
> Exactly. So, what we are talking about is the difference in the available
> primitives in the two cases. This might seem like a merely practical, not
> fundamental, difference. However, this practical difference is so large that
> it is almost fundamental. Logic deals with truth values, not numbers. One
> can introduce Peano axioms, basic grounded predicates saying something like
> "it's true that the pixel with coordinates (x, y) has (r,g,b) color", and
> infer the truth value that we see a crow and it is black given this image,
> but it is much easier to just crunch numbers. But if you introduce
> imperative number crunching into your logical system, you lose the ability
> to logically reason about these particular numbers.
> Non-logical PPLs don't deal with probabilities directly. They generate
> values of random variables. These values can be numbers or arbitrary data
> structures. These PPLs naturally inherit the number-crunching power of
> imperative languages. That's why I say they are better suited for learning
> from (raw) data. Of course, the flip side is that implementing reasoning
> with them is as cumbersome as data processing with logic.
>
> Well, actually my worries are very technical, and I will describe them in a
> new thread (hopefully) soon.
>
> -- Alexey
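For reference, the generative direction described in the quoted message -- scoring P(image|crow,black) inside a sampled model -- looks roughly like this in PPL style. The priors and the one-number "image darkness" summary are made-up toy values, and the inference scheme shown (likelihood weighting) is one of several a real PPL might use:

```python
import math
import random

# Toy generative model: sample (crow, black) from a prior, and score the
# observed image summary under a Gaussian likelihood -- the inference a
# functional PPL automates. All numbers are illustrative, not real data.

def gauss_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior_crow_black(observed_darkness, n=50000, seed=2):
    """Estimate P(crow & black | image summary) by likelihood weighting."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        crow = rng.random() < 0.2                       # prior P(crow)
        black = rng.random() < (0.9 if crow else 0.3)   # P(black | crow)
        mu = 0.8 if black else 0.3                      # darkness given color
        w = gauss_pdf(observed_darkness, mu, 0.1)       # P(image | crow, black)
        den += w
        if crow and black:
            num += w
    return num / den

print(posterior_crow_black(0.8))  # a dark image raises P(crow & black)
```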



-- 
Ben Goertzel, PhD
http://goertzel.org

"Only those who will risk going too far can possibly find out how far
they can go." - T.S. Eliot
