On Mon, Sep 22, 2008 at 10:09 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> One interesting observation is that these truth values approximate
> relatively
> uninformative points on the probability distributions that PLN would attach
> to these relationships.
>
> That is, <1.0;0.45> , if interpreted as a probabilistic truth value, would
> indicate a fairly wide interval of probabilities containing 1.0
>
> Which is not necessarily wrong, but is not maximally interesting ... there
> might
> be a narrower interval centered somewhere besides 1.0
>
> (the confidence 0.45, in a PLN-like interpretation, is inverse to
> probability interval width)

From the beginning (see
http://www.cogsci.indiana.edu/pub/wang.inheritance_nal.ps) the
truth-value of NARS has had an interval interpretation, where the width
of the interval (called "ignorance") is the complement of confidence.
Therefore, deductive conclusions usually have narrower intervals than
abductive/inductive conclusions.
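To make the interval reading concrete, here is a small sketch (Python is just illustrative; the mapping [f*c, f*c + (1-c)] follows the NAL definition, with the width 1-c being the ignorance):

```python
def truth_to_interval(f, c):
    """Map a NARS truth value <f; c> to its frequency interval.

    Lower bound: f * c
    Upper bound: f * c + (1 - c)
    The interval width ("ignorance") is 1 - c, so higher confidence
    means a narrower interval.
    """
    lower = f * c
    upper = f * c + (1.0 - c)
    return lower, upper

# Ben's example <1.0; 0.45>: the interval spans [0.45, 1.0],
# a fairly wide interval whose upper end is 1.0.
lower, upper = truth_to_interval(1.0, 0.45)
```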

> This is all correct, but the problem I have is that something which should
> IMO be very simple and instinctive is being done in an overly
> complicated way....  Knowledge of math should not be needed to
> do an inference this simple...

It is simple to me --- at least I don't need "node probability". ;-)

>> The same result can be obtained in other ways. Even if NARS doesn't
>> know math, if the system has met AGI authors many times, and in only
>> one percent of those encounters the person happened to be Ben, the
>> system will also learn something like (7). The same goes for (8).
>
> But also, observations of Ben should not be needed to do this inference...

If Ben isn't in "AGI authors", this concept won't be involved in this inference.
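The experience-based reading in the quoted paragraph can be sketched as follows (Python and k = 1 are illustrative; f = w+/w and c = w/(w+k) are the standard NAL definitions):

```python
def truth_from_evidence(w_plus, w, k=1.0):
    """NARS truth value from evidence counts.

    w_plus -- amount of positive evidence
    w      -- total amount of evidence
    k      -- evidential horizon (a system constant; 1 here)
    """
    f = w_plus / w       # frequency: proportion of positive evidence
    c = w / (w + k)      # confidence: grows toward 1 with more evidence
    return f, c

# Meeting 100 AGI authors, of whom 1 happened to be Ben:
# frequency 0.01, confidence about 0.99
f, c = truth_from_evidence(1, 100)
```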

>> What does this mean? To me, it once again shows what I've been saying
>> all along: NARS doesn't always give better results than PLN or
>> other probability-based approaches, but it does assume less knowledge
>> and fewer resources. In this example, from knowledge (1)-(4) alone,
>> NARS derives (5)-(6), but a probability-based approach, including PLN,
>> cannot derive anything until knowledge is obtained (or assumptions are
>> made) about the "node probabilities" involved. For NARS, when this
>> information becomes available, it may be taken into consideration to
>> change the system's conclusions, though it is not required in all cases.
>
> It is simple enough, in PLN, to assume that all terms have equal
> probability ... in the absence of knowledge to the contrary.

Though it is usually intuitive and natural, it is still an assumption
to be made, and it does cause problems in certain situations.
Furthermore, what counts as "knowledge to the contrary" that leads the
system to undo the assumption? And how is the assumption undone?
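To make the dependence concrete, here is a sketch (plain Bayes inversion, not PLN's actual formulas) of why an abduction-style step needs term probabilities: inverting P(C|A) into P(A|C) requires P(A) and P(C), and those are exactly the "node probabilities" that must be known or assumed:

```python
def invert(p_c_given_a, p_a, p_c):
    """Bayes inversion: P(A|C) = P(C|A) * P(A) / P(C).

    The term probabilities p_a and p_c are the "node probabilities"
    that must be supplied or assumed before the inversion can be done.
    """
    return p_c_given_a * p_a / p_c

# Under the equal-probability default (p_a == p_c), the inversion
# simply echoes the premise back unchanged:
result = invert(0.8, 0.5, 0.5)
```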

>> This example also shows why NARS and PLN are similar on deduction, but
>> very different in abduction and induction.
>
> Yes.  One of my biggest practical complaints with NARS is that the induction
> and abduction truth value formulas don't make that much sense to me.

I guess since you are trained as a mathematician, your "sense" has
been formalized by probability theory to some extent. ;-)

> I understand
> their mathematical/conceptual derivation using boundary conditions, but to
> me
> they seem to produce generally uninteresting conclusion truth values,
> corresponding
> roughly to "suboptimally informative points on the conclusion truth value's
> probability
> distribution" ...

... just like metaphors, which cannot be used to prove anything.

A single non-deductive conclusion is almost never useful by itself; the
value of such conclusions comes from their accumulation in the long run.

> in OpenCogPrime
> we use other methods for hypothesis generation, then use probability theory
> for estimating the truth values of these hypotheses...

Many people have argued that "hypothesis generation" and "hypothesis
evaluation" should be separated. I strongly believe that is wrong,
though I don't have the time to argue the point now.

> PLN is able to make judgments, in every case, using *exactly* the same
> amount of evidence that NARS is.

Without assumptions on "node probability"? In your example, what is
the conclusion from PLN if it is only given (1)-(4) ?

Pei

