Given the nature of this topic, I start a new thread for it.

Ben proposed the following example to reveal a difference between NARS
and PLN, with the hope to show why PLN is better. Now let me use the
same example to show the opposite conclusion. ;-)

In the first part, I'll just translate Ben's example into Narsese,
then derive his conclusion, with more details.
[ For the grammar of Narsese, see
http://code.google.com/p/open-nars/wiki/InputOutputFormat ]

Assuming 4 input judgments, with the same default confidence value (0.9):

(1) {Ben} --> AGI-author <1.0;0.9>
(2) {dude-101} --> AGI-author <1.0;0.9>
(3) {Ben} --> odd-people <1.0;0.9>
(4) {dude-102} --> odd-people <1.0;0.9>

From (1) and (2), by abduction, NARS derives (5)
(5) {dude-101} --> {Ben} <1.0;0.45>

Since (3) and (4) provide the same amount of evidence, they derive the same conclusion
(6) {dude-102} --> {Ben} <1.0;0.45>
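
As a sanity check, the two abduction steps above can be reproduced with
the NAL abduction truth function (a minimal Python sketch; the formulas
are the standard NAL ones, with evidential horizon k = 1; the function
name is mine):

```python
K = 1  # evidential horizon

def abduction(f1, c1, f2, c2):
    """NAL abduction: from P --> M <f1;c1> and S --> M <f2;c2>,
    derive S --> P.  Positive evidence w+ = f1*c1*f2*c2,
    total evidence w = f1*c1*c2; then f = w+/w, c = w/(w+K)."""
    w_plus = f1 * c1 * f2 * c2
    w = f1 * c1 * c2
    f = w_plus / w if w > 0 else 0.5
    c = w / (w + K)
    return f, c

# (1) + (2)  =>  (5) {dude-101} --> {Ben}
f, c = abduction(1.0, 0.9, 1.0, 0.9)
print(f"<{f:.2f};{c:.2f}>")  # -> <1.00;0.45>
```

Plugging in the default values gives w = 0.81 and c = 0.81/1.81, which
rounds to the 0.45 shown in (5) and (6).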

Ben argues that since there are many more odd people than AGI authors,
(5) should have a "higher" truth-value, in a certain sense, which is
the case in PLN, by using Bayes rule.

So far, I agree with Ben, but would add that in NARS, the information
"there are many more odd people than AGI authors" has not been taken
into consideration yet.

That information can be added in several different forms. For example,
after NARS learns some math, from the information that there are only
about 100 AGI authors but 1000000 odd people (a conservative
estimation, I guess), plus the fact that Ben is in both categories,
and the principle of indifference, the system should have the
following knowledge:
(7) AGI-author --> {Ben} <0.01;0.9>
(8) odd-people --> {Ben} <0.000001;0.9>
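
Under the principle of indifference, the frequencies in (7) and (8)
are just the reciprocals of the class sizes (a trivial sketch, using
the counts assumed above):

```python
# Reverse-link frequencies from class sizes, assuming each member of a
# class is equally likely to be Ben (principle of indifference).
agi_authors = 100
odd_people = 1_000_000

f7 = 1 / agi_authors  # frequency of "AGI-author --> {Ben}": 0.01
f8 = 1 / odd_people   # frequency of "odd-people --> {Ben}": 0.000001
print(f7, f8)  # -> 0.01 1e-06
```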

Now from (2) and (7), by deduction, NARS gets
(9) {dude-101} --> {Ben} <0.01;0.81>

and from (4) and (8), also by deduction, the conclusion is
(10) {dude-102} --> {Ben} <0.000001;0.81>

[Here I'm taking a shortcut. In the current implementation, the
deduction rule only directly produces strong positive conclusions,
while strong negative conclusions are produced with the help of the
negation operator, which is something I skipped in this discussion.
So, in the actual case, the confidence will be lower than 0.81 (the
product of the confidence values of the premises), but not by too
much.]
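
For the record, the two deductions above can be checked with the basic
NAL deduction truth function (again a minimal sketch, ignoring the
negation-related correction mentioned in the bracket; the function
name is mine):

```python
def deduction(f1, c1, f2, c2):
    """NAL deduction: from M --> P <f1;c1> and S --> M <f2;c2>,
    derive S --> P with f = f1*f2 and c = c1*c2*(f1 + f2 - f1*f2)."""
    f = f1 * f2
    c = c1 * c2 * (f1 + f2 - f1 * f2)
    return f, c

# (7) + (2)  =>  (9)  {dude-101} --> {Ben}
f9, c9 = deduction(0.01, 0.9, 1.0, 0.9)
print(f"(9)  <{f9};{c9:.2f}>")   # -> (9)  <0.01;0.81>

# (8) + (4)  =>  (10) {dude-102} --> {Ben}
f10, c10 = deduction(0.000001, 0.9, 1.0, 0.9)
print(f"(10) <{f10};{c10:.2f}>")  # -> (10) <1e-06;0.81>
```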

The same result can be obtained in other ways. Even if NARS doesn't
know math, if the system has met AGI authors many times, and in only
one percent of those encounters the person happened to be Ben, the
system will also learn something like (7). The same holds for (8).

When the system gets both (5)-(6) and (9)-(10), the latter pair is
chosen as the final conclusions, given their higher confidence values.
[The two pairs won't be merged, because they come from overlapping
evidence --- (2) and (4) are used in both cases.]
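
The choice step can be sketched like this (an illustration only; the
function name is mine, and the actual choice rule in NARS also weighs
other factors):

```python
def choose(j1, j2):
    """Between two competing judgments on the same statement that share
    evidence (so revision is not allowed), pick the one with the higher
    confidence.  Each judgment is a (frequency, confidence) pair."""
    return j1 if j1[1] >= j2[1] else j2

# (5) vs (9) for "{dude-101} --> {Ben}": the deductive conclusion wins.
print(choose((1.0, 0.45), (0.01, 0.81)))  # -> (0.01, 0.81)
```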

Now NARS gives exactly the conclusion Ben asked for.

So, what is going on here? The information referred to as "node
probability" in PLN sometimes (though maybe not always) builds
"reverse" links in NARS, and consequently turns an abduction (or
induction) into deduction, whose conclusion will "override" the
abductive/inductive conclusion, because deductive conclusions usually
have higher confidence values. This is not really news, because
abduction and induction are implemented in PLN as
deduction-on-reversed-link.

What does this mean? To me, it once again shows what I've been saying
all along: NARS doesn't always give better results than PLN or other
probability-based approaches, but it does assume less knowledge and
fewer resources. In this example, from knowledge (1)-(4) alone, NARS
derives (5)-(6), while probability-based approaches, including PLN,
cannot derive anything until knowledge is obtained (or assumptions are
made) about the involved "node probabilities". For NARS, when this
information becomes available, it may be taken into consideration to
change the system's conclusions, though it is not demanded in all
cases.

This example also shows why NARS and PLN are similar on deduction, but
very different on abduction and induction. In my opinion, what are
called "abduction" and "induction" in PLN are special forms of
deduction, which produce solid conclusions, but also demand more
evidence to start with. Actually, probability theory is about
(multi-valued) deduction only. It doesn't build tentative conclusions
first and then use additional evidence to revise or override them,
which is how non-deductive inference works.

NARS can deliberately use probability theory by coding P(E) = 0.8 as a
Narsese judgment like "(*, E, 0.8) --> probability-of <1.0;0.99>",
though this is not built in: it must be learned by the system, just
like by us. Its "native logic" is similar to probability theory here
and there, but is based on very different assumptions.

Pei


On Sun, Sep 21, 2008 at 10:46 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> As an example inference, consider
>
> Ben is an author of a book on AGI <tv1>
> This dude is an author of a book on AGI <tv2>
> |-
> This dude is Ben <tv3>
>
> versus
>
> Ben is odd <tv1>
> This dude is odd <tv2>
> |-
> This dude is Ben <tv4>
>
> (Here each of the English statements is a shorthand for a logical
> relationship that in the AI systems in question is expressed in a formal
> structure; and the notations like <tv1> indicate uncertain truth values
> attached to logical relationships,  In both NARS and PLN, uncertain truth
> values have multiple components, including a "strength" value that denotes a
> frequency, and other values denoting confidence measures.  However, the
> semantics of the strength values in NARS and PLN are not identical.)
>
> Doing these two inferences in NARS you will get
>
> tv3.strength = tv4.strength
>
> whereas in PLN you will not, you will get
>
> tv3.strength >> tv4.strength
>
> The difference between the two inference results in the PLN case results
> from the fact that
>
> P(author of book on AGI) << P(odd)
>
> and the fact that PLN uses Bayes rule as part of its approach to these
> inferences.
>
> So, the question is, in your probabilistic variant of NARS, do you get
>
> tv3.strength = tv4.strength
>
> in this case, and if so, why?
>
> thx
> ben

