On 05/19/2018 09:00 PM, Alexey Potapov wrote:
> Our knowledge is built from data. Deduction systems (probabilistic or
> not) lack this connection, while functional PPLs are well-suited for this.
A deduction system can be understood very broadly, and may encompass
inferences based on PPL models as well.
PLN definitely draws, at least in principle, the relationship between
deduction and data.
ATM in practice it's a bit lacking though. For instance, the link between
the TV of

Implication <TV>
  P
  Q

and the instances of P and Q it was obtained from is forgotten after the
inference.
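To make the forgetting concrete, here is a minimal sketch (in Python, not actual OpenCog/PLN code; the function name and the simple TV semantics, where the implication strength approximates P(Q|P) over observed instances, are assumptions for illustration):

```python
# Hypothetical sketch: deriving the TV of (Implication P Q) from a
# dataset D of (p, q) boolean instance pairs. Strength ~ P(Q|P),
# count = number of supporting P-instances. Once this pair is
# returned, the link back to D is lost -- which is the problem
# described above.

def implication_tv(instances):
    """instances: list of (p, q) boolean pairs, one per observed case."""
    n_p = sum(1 for p, _ in instances if p)          # cases where P holds
    n_pq = sum(1 for p, q in instances if p and q)   # cases where P and Q hold
    strength = n_pq / n_p if n_p else 0.0
    return strength, n_p

D = [(True, True), (True, False), (True, True), (False, True)]
print(implication_tv(D))  # strength 2/3 from 3 P-instances
```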
This should be corrected. Meaning the inference rule

D ;; <- instances of P and Q
|-
Implication <TV>
  P
  Q

should instead be something like

LinkBetweenDataAndImplication <TV>
  D ;; <- instances of P and Q
  Implication
    P
    Q
d ;; <- new instance pair of P and Q
|-
LinkBetweenDataAndImplication <TV_update>
  Cons
    d
    D
  Implication
    P
    Q
It would also provide an incremental way to calculate the TV, as opposed
to batch-processing the whole dataset every time.
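The incremental calculation above can be sketched by keeping only the sufficient statistics rather than re-scanning D (again a Python illustration, not OpenCog API; the class name and update rule are assumptions, matching the simple P(Q|P) strength semantics):

```python
# Sketch of the incremental TV update: instead of recomputing the TV
# from the whole dataset D on every inference, keep the sufficient
# statistics (n_p, n_pq) and fold in each new instance pair d, as in
# the Cons d D step of the rule above.

class ImplicationEvidence:
    def __init__(self):
        self.n_p = 0    # cases where P holds
        self.n_pq = 0   # cases where both P and Q hold

    def update(self, d):
        """Fold a new (p, q) instance pair into the evidence; return the TV."""
        p, q = d
        if p:
            self.n_p += 1
            if q:
                self.n_pq += 1
        return self.tv()

    def tv(self):
        strength = self.n_pq / self.n_p if self.n_p else 0.0
        return strength, self.n_p

ev = ImplicationEvidence()
for d in [(True, True), (True, False), (True, True)]:
    tv = ev.update(d)
print(tv)  # same strength 2/3, count 3 as the batch computation
```

Each update costs O(1) regardless of how large D has grown, which is what makes recording the data-to-implication link affordable.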
It's kinda scary, computationally speaking, but it seems that to do well,
most inference traces need to be recorded, not just their conclusions.
Yet another meta-learning black hole...
Nil
--
You received this message because you are subscribed to the Google Groups
"opencog" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/opencog/a9852fa0-fc0d-bedc-7279-750d13ca8599%40gmail.com.