Hi Pei,
> Assuming 4 input judgments, with the same default confidence value (0.9):
>
> (1) {Ben} --> AGI-author <1.0;0.9>
> (2) {dude-101} --> AGI-author <1.0;0.9>
> (3) {Ben} --> odd-people <1.0;0.9>
> (4) {dude-102} --> odd-people <1.0;0.9>
>
> From (1) and (2), by abduction, NARS derives (5)
> (5) {dude-101} --> {Ben} <1.0;0.45>
> Since (3) and (4) give the same kind of evidence, NARS derives the analogous conclusion
> (6) {dude-102} --> {Ben} <1.0;0.45>
>
One interesting observation is that these truth values approximate
relatively uninformative points on the probability distributions that
PLN would attach to these relationships.
That is, <1.0;0.45>, if interpreted as a probabilistic truth value,
would indicate a fairly wide interval of probabilities containing 1.0.
This is not necessarily wrong, but it is not maximally interesting ...
there might be a narrower interval centered somewhere besides 1.0.
(The confidence 0.45, in a PLN-like interpretation, is inversely
related to the width of the probability interval.)
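For concreteness, here is a quick sketch of the abduction truth function as I understand it from the NARS literature (the exact formula, and the evidential horizon k = 1, are my reading, so treat them as assumptions); it reproduces the <1.0;0.45> above:

```python
K = 1.0  # evidential horizon (assumed default value)

def nars_abduction(f1, c1, f2, c2, k=K):
    # NARS abduction (my reading of the published truth functions):
    # from P --> M <f1,c1> and S --> M <f2,c2>, derive S --> P.
    w = f1 * c1 * c2   # total evidence for the conclusion
    f = f2             # conclusion frequency
    c = w / (w + k)    # conclusion confidence
    return f, c

# (1) {Ben} --> AGI-author <1.0;0.9>
# (2) {dude-101} --> AGI-author <1.0;0.9>
# (5) {dude-101} --> {Ben}
f, c = nars_abduction(1.0, 0.9, 1.0, 0.9)
print(f, round(c, 2))   # 1.0 0.45
```

Note that the confidence saturates at w/(w+1) regardless of how the evidence is distributed, which is part of why these conclusion truth values strike me as uninformative.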
>
>
> That information can be added in several different forms. For example,
> after NARS learns some math, from the information that there are only
> about 100 AGI authors but 1000000 odd people (a conservative
> estimation, I guess), plus the fact that Ben is in both categories, and the principle
> of indifference, the system should have the following knowledge:
> (7) AGI-author --> {Ben} <0.01;0.9>
> (8) odd-people --> {Ben} <0.000001;0.9>
>
> Now from (2) and (7), by deduction, NARS gets
> (9) {dude-101} --> {Ben} <0.01;0.81>
>
> and from (4) and (8), also by deduction, the conclusion is
> (10) {dude-102} --> {Ben} <0.000001;0.81>
This is all correct, but the problem I have is that something which should
IMO be very simple and instinctive is being done in an overly
complicated way.... Knowledge of math should not be needed to
do an inference this simple...
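For reference, Pei's deduction numbers follow mechanically from the NARS deduction truth function (again, the formula below is my reading of the NARS literature, so treat it as an assumption):

```python
def nars_deduction(f1, c1, f2, c2):
    # NARS deduction (as I understand it): from S --> M <f1,c1>
    # and M --> P <f2,c2>, derive S --> P.
    f = f1 * f2
    c = c1 * c2 * (f1 + f2 - f1 * f2)
    return f, c

# (2) {dude-101} --> AGI-author <1.0;0.9>
# (7) AGI-author --> {Ben}      <0.01;0.9>
# (9) {dude-101} --> {Ben}
f, c = nars_deduction(1.0, 0.9, 0.01, 0.9)
print(f, round(c, 2))   # 0.01 0.81
```

Plugging in (4) and (8) the same way gives the <0.000001;0.81> of conclusion (10).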
>
> The same result can be obtained in other ways. Even if NARS doesn't
> know math, if the system has met AGI authors many times, and in only
> one percent of those cases the person happened to be Ben, the system will
> also learn something like (7). The same for (8).
But also, observations of Ben should not be needed to do this inference...
>
> What does this mean? To me, it once again shows what I've been saying
> all along: NARS doesn't always give better results than PLN or
> other probability-based approaches, but it does assume less knowledge
> and fewer resources. In this example, from knowledge (1)-(4) alone, NARS
> derives (5)-(6), but probability-based approaches, including PLN, cannot
> derive anything until knowledge is obtained (or assumptions are made)
> about the involved "node probabilities". For NARS, when this information
> becomes available, it may be taken into consideration to change the
> system's conclusions, though it is not required in all cases.
It is simple enough, in PLN, to assume that all terms have equal
probability ... in the absence of knowledge to the contrary.
Algebraically, the NARS deduction truth value formula closely
approximates the special case of the PLN deduction truth value formula
obtained by assuming that all terms in the deduction premises have
equal probability.
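To illustrate, here is a sketch using the independence-based PLN deduction strength formula; the equal node probability p below is an assumed placeholder, and as p shrinks the PLN strength approaches the NARS deduction frequency f1*f2:

```python
def pln_deduction_strength(s_ab, s_bc, s_b, s_c):
    # Independence-based PLN deduction strength formula:
    # P(C|A) from P(B|A), P(C|B) and the node probabilities P(B), P(C).
    return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)

def nars_deduction_frequency(f1, f2):
    # NARS deduction conclusion frequency (my reading of the formula).
    return f1 * f2

# With all node probabilities equal to an assumed small p, the PLN
# strength converges toward the NARS frequency as p decreases:
for p in (0.1, 0.01, 0.001):
    print(p, pln_deduction_strength(0.8, 0.5, p, p))

print(nars_deduction_frequency(0.8, 0.5))   # 0.4
```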
>
>
> This example also shows why NARS and PLN are similar on deduction, but
> very different in abduction and induction.
Yes. One of my biggest practical complaints with NARS is that the
induction and abduction truth value formulas don't make that much sense
to me. I understand their mathematical/conceptual derivation using
boundary conditions, but to me they seem to produce generally
uninteresting conclusion truth values, corresponding roughly to
"suboptimally informative points on the conclusion truth value's
probability distribution" ...
> In my opinion, what are called
> "abduction" and "induction" in PLN are special forms of deduction,
> which produce solid conclusions, but also demand more evidence to start
> with. Actually, probability theory is about (multi-valued) deduction
> only. It doesn't build tentative conclusions first and then use
> additional evidence to revise or override them, which is how
> non-deductive inference works.
>
Different theorists use the words induction and abduction in different ways,
of course...
Regarding the "proposal of tentative conclusions": I'm not sure exactly what
you mean by this ... but, I note that in OpenCogPrime
we use other methods for hypothesis generation, then use probability theory
for estimating the truth values of these hypotheses...
PLN is able to make judgments, in every case, using *exactly* the same
amount of evidence that NARS uses. It does not require additional
evidence. Sometimes it may make simplistic "default assumptions" to
work around the relative paucity of evidence, but in those cases it
still reaches conclusions, just as NARS does...
-- Ben G
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/