Jade, Abram, Linas, Cosmo, etc. etc.,

I've dug deeper into MLN recently, and now have some new thoughts
about how one would make PLN do MLN-like stuff....   I continue to
think this would be an interesting direction for testing PLN on
certain sorts of problems...

In supervised classification, one constantly compares different
algorithms against each other on standard problems.  This gets
tiresome and often becomes counterproductive.  But up to a certain
point it's instructive.  PLN has lacked this sort of comparative
analysis; but running it side by side with MLN on the same datasets
would probably lead to some interesting insights...  Although this
would certainly take a nontrivial amount of work.

To avoid terminological confusion, I'll refer to "MLN rules" here as
"focus-patterns", i.e. they are the data patterns that the inference
process is focusing on.  (What MLN calls rules are predicates mined
from, or suspected to hold in, the data; these are not highly
analogous to inference rules in PLN.)

1)
MLN rule-learning (focus-pattern learning) would be done via Fishgram
or MOSES based pattern mining.   Basically, an MLN rule is a
frequently occurring combination of relationships, with variables in
place of some of the arguments of the relationships.   It's just like
what Fishgram learns.
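To make this concrete, here is a minimal toy sketch of that kind of pattern mining, in Python: it counts conjunctions of two ground relationships that share a constant, abstracting the constants to variables. (The toy "Atomspace" of smokers and friends is the usual MLN example; a real Fishgram/MOSES miner would of course handle larger conjunctions and do this far more cleverly.)

```python
from collections import Counter
from itertools import combinations

# Toy "Atomspace": ground relationships as (predicate, *args) tuples.
ground = [
    ("friends", "anna", "bob"), ("friends", "bob", "carl"),
    ("smokes", "anna"), ("smokes", "bob"), ("smokes", "carl"),
    ("cancer", "bob"),
]

def mine_pairs(relations):
    """Count conjunctions of two relations sharing a constant,
    abstracting every constant to a variable ($X0, $X1, ...)."""
    patterns = Counter()
    for r1, r2 in combinations(relations, 2):
        if not set(r1[1:]) & set(r2[1:]):
            continue  # no shared constant, so no linking variable
        varmap = {}
        def var(c):
            if c not in varmap:
                varmap[c] = f"$X{len(varmap)}"
            return varmap[c]
        pat = tuple((rel[0],) + tuple(var(a) for a in rel[1:])
                    for rel in (r1, r2))
        patterns[pat] += 1
    return patterns

support = mine_pairs(ground)
```

The resulting counter maps each abstracted focus-pattern to its support count, e.g. friends($X0,$X1) & smokes($X0) occurs twice in this toy data.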

2)
MLN weight learning basically serves the purpose of figuring out the
dependencies among the focus-patterns, over the data (the "evidence"
in MLN terms, which in PLN is just the logical links in the
Atomspace).   If there are no dependencies, then weights come straight
from the support of the focus-patterns, which is already known from
the focus-pattern-learning phase.

Since the focus-patterns, in PLN, are just predicates (e.g. AndLinks
or OrLinks, possibly nested), one can do an analogue of MLN weight
learning via backward chaining on the focus-patterns.  However, this
must be done well, to actually account for dependencies among the
focus-patterns.  In the new PLN design this will involve many cycles
of

-- backward chain on the focus patterns

-- forward chain on the Atoms that got high STI via being useful in
the backward chaining on the focus patterns

These cycles will have to do a good job of accounting for dependencies
among the focus patterns.

In essence, this *should* be accomplished automagically by the new PLN
design, via having a constant stream of STI directed to the focus
patterns, so that the focus patterns have constantly high STI and are
frequently chosen for backward chaining.
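The control loop above can be sketched as follows. Everything here is a hypothetical stand-in, not the actual OpenCog API: the Atom class, the stub chainers, and the STI bookkeeping are toys meant only to show the shape of the alternation.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    name: str
    sti: float = 0.0     # short-term importance (toy version)
    tv: float = 0.5      # probabilistic truth value strength

def backward_chain(atomspace, target):
    """Stub chainer: pretend inference touches atoms related to the
    target (here, crudely, sharing its first letter) and firms up the
    target's truth value a bit per atom used."""
    used = [a for a in atomspace
            if a is not target and a.name[0] == target.name[0]]
    target.tv = min(1.0, target.tv + 0.05 * len(used))
    return used

def forward_chain(atomspace, source):
    """Stub chainer: spread a little strength outward from the source."""
    for a in atomspace:
        if a is not source:
            a.tv = min(1.0, a.tv + 0.01 * source.tv)

def mln_like_cycles(atomspace, focus_patterns, n_cycles=5,
                    sti_boost=1.0, threshold=2.0):
    for _ in range(n_cycles):
        # Constant STI stream keeps the focus patterns "hot".
        for pat in focus_patterns:
            pat.sti += sti_boost
            # Atoms useful in backward chaining get STI too.
            for atom in backward_chain(atomspace, pat):
                atom.sti += sti_boost
        # Forward-chain from whatever became important in the process.
        for atom in [a for a in atomspace if a.sti > threshold]:
            forward_chain(atomspace, atom)

atoms = [Atom("smokes-pattern"), Atom("smokes(anna)"),
         Atom("friends(anna,bob)")]
mln_like_cycles(atoms, focus_patterns=[atoms[0]])
```

After running, the focus pattern and the atoms it used have accumulated STI while unrelated atoms have not, which is the intended "automagic" focusing effect.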

3)
Once the focus-patterns have confident truth values that effectively
incorporate their interdependencies across the available data, then to
find the "most likely world" consistent with the data and the
focus-patterns, one does a bunch of forward chaining inference,
ensuring that the implications of the focus-patterns spread throughout
the Atomspace.
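A toy version of this propagation, for one hypothetical focus-pattern friends(X,Y) & smokes(X) => smokes(Y), using a crude probabilistic modus ponens (conclusion strength = rule strength times premise strength) iterated to a fixpoint. Names and numbers are illustrative only:

```python
friends = [("anna", "bob"), ("bob", "carl")]
smokes = {"anna": 0.9}     # prior evidence strength
rule_strength = 0.7        # tv of the focus-pattern

def propagate(friends, smokes, rule_strength, max_iters=10):
    """Apply the pattern until no strength improves (a fixpoint)."""
    smokes = dict(smokes)
    for _ in range(max_iters):
        changed = False
        for x, y in friends:
            if x in smokes:
                p = rule_strength * smokes[x]
                if p > smokes.get(y, 0.0) + 1e-12:
                    smokes[y] = p
                    changed = True
        if not changed:
            break
    return smokes

world = propagate(friends, smokes, rule_strength)
```

The implications spread transitively: bob picks up strength 0.63 from anna, and carl picks up 0.441 from bob, giving one "most likely world" consistent with the pattern and the evidence.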

4)
Finally, at this point, querying PLN is analogous to querying MLN

...

A few more notes..

-- As the above should make clear, "being MLN-like", for PLN, basically
consists of a specialized control mechanism that mines some patterns
and then focuses on the implications these patterns, collectively, have
for the rest of the Atomspace.

-- Unlike in MLN, where the rule weights are something weirder than
probabilities, in PLN the focus patterns are given probabilistic truth
values just like everything else....  This seems much more elegant.
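To illustrate the contrast: in the degenerate case of an MLN containing a single ground formula, weight and probability are related by a log-odds/sigmoid pair, but once formulas interact there is no such direct reading of a weight as a probability, whereas a PLN strength is a probability by construction. A two-line sketch of that degenerate-case correspondence:

```python
import math

def prob_to_weight(p):
    return math.log(p / (1.0 - p))       # log-odds

def weight_to_prob(w):
    return 1.0 / (1.0 + math.exp(-w))    # sigmoid, its inverse
```

So an MLN weight of 0 corresponds to probability 0.5 only in this isolated-formula case; in general the weights have no standalone probabilistic interpretation.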

-- PLN seems innately more scalable than MLN on an algorithmic level.
E.g. Tuffy is a pretty good MLN implementation, but the runtime for
weight learning utterly blows up when you put in hundreds of thousands
of relationships as evidence ... and Gibbs sampling is also
intrinsically slow.   Felix works around this via integrating MLN with
specialized problem solvers like logistic regression in an interesting
way, but this becomes pretty problem-specific.

-- Ben


-- 
Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane". -- Capt.
James T. Kirk

