The {A} statements are consistent with NARS, but the existing NARS inference
rules don't use these statements...
A related train of thought has occurred to me...
In PLN we explicitly have both intensional and extensional inheritance links
(though with semantics nonidentical to that used in NARS, and fundamentally
probabilistic in nature) ... so the "probabilistic quasi-NARS" logic you're
describing could potentially be used as a sort of "NARS on top of PLN" ...
I'm not sure how useful such a thing is, but it might be interesting...
ben
On Mon, Sep 22, 2008 at 12:18 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Sure, but it is a consistent extension; {A}-statements have a strongly
> NARS-like semantics, so we know they won't just mess everything up.
>
> On Mon, Sep 22, 2008 at 11:31 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Of course ... but then you are not doing NARS inference anymore...
> >
> >> On Mon, Sep 22, 2008 at 8:25 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
> >>
> >> It would be possible to get what you want in this setting, by allowing
> >> some probabilistic manipulations not done in NARS. The node
> >> probability you want in this case could be simulated by talking about
> >> the probability distribution of sentences of the form "X is the author
> >> of a book". We can give this a low prior probability. Since the system
> >> manipulates likelihoods, it won't notice; but if we manipulated
> >> probabilities directly, it would.
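[The distinction above can be sketched as a toy calculation. This is purely illustrative, not NARS code; the function names and the numbers are assumptions. The point is that a likelihood ratio is unchanged by the prior on the hypothesis, while a posterior probability is not:]

```python
# Toy illustration (not NARS code; names and numbers are assumptions):
# a likelihood ratio ignores the prior on the hypothesis,
# while a posterior probability folds it in.

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    # How much the evidence favors H over not-H, prior-free.
    return p_e_given_h / p_e_given_not_h

def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: the prior now matters.
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Same evidence, two different priors for "X is the author of a book":
lr = likelihood_ratio(0.9, 0.3)            # identical in both cases
post_rare   = posterior(0.001, 0.9, 0.3)   # low prior -> low posterior
post_common = posterior(0.5,   0.9, 0.3)   # high prior -> high posterior
```

So a system that only tracks the likelihood-ratio quantity never sees the low prior, while one that tracks posteriors does.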
> >>
> >> Perhaps a more satisfying answer would be to introduce a new operator
> >> into the system, {A}, that simulates the node probability; or more
> >> specifically, it represents the average truth-value distribution of
> >> statements that have A on one side or the other. So, it has a 'par'
> >> value just like inheritance statements do. If there were evidence for a
> >> low par, there would be an effect in the direction you want. (It might
> >> be way too small, though?)
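[One way to read the {A} proposal: {A}'s frequency could be computed as the average strength over statements with A on either side. A hypothetical sketch; the tuple representation and names below are assumptions, not Abram's notation:]

```python
# Hypothetical sketch of the proposed {A} operator: its frequency is
# taken here as the average 'strength' over all statements that have
# A as subject or predicate. The (subject, predicate, frequency)
# representation is an assumption for illustration only.

def brace_frequency(statements, a):
    freqs = [f for (subj, pred, f) in statements if a in (subj, pred)]
    return sum(freqs) / len(freqs) if freqs else None

kb = [
    ("Ben",  "author-of-AGI-book", 0.9),
    ("dude", "author-of-AGI-book", 0.8),
    ("Ben",  "odd",                0.6),
]
# brace_frequency(kb, "Ben") averages 0.9 and 0.6
```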
> >>
> >> --Abram
> >>
> >> On Sun, Sep 21, 2008 at 10:46 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >> >
> >> >
> >> > On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> >> >>
> >> >> The calculation in which I sum up a bunch of pairs is equivalent to
> >> >> doing NARS induction + abduction with a final big revision at the end
> >> >> to combine all the accumulated evidence. But, like I said, I need to
> >> >> provide a more explicit justification of that calculation...
> >> >
> >> > As an example inference, consider
> >> >
> >> > Ben is an author of a book on AGI <tv1>
> >> > This dude is an author of a book on AGI <tv2>
> >> > |-
> >> > This dude is Ben <tv3>
> >> >
> >> > versus
> >> >
> >> > Ben is odd <tv1>
> >> > This dude is odd <tv2>
> >> > |-
> >> > This dude is Ben <tv4>
> >> >
> >> > (Here each of the English statements is a shorthand for a logical
> >> > relationship that in the AI systems in question is expressed in a
> >> > formal structure; and the notations like <tv1> indicate uncertain
> >> > truth values attached to logical relationships. In both NARS and PLN,
> >> > uncertain truth values have multiple components, including a
> >> > "strength" value that denotes a frequency, and other values denoting
> >> > confidence measures. However, the semantics of the strength values in
> >> > NARS and PLN are not identical.)
> >> >
> >> > Doing these two inferences in NARS you will get
> >> >
> >> > tv3.strength = tv4.strength
> >> >
> >> > whereas in PLN you will not; you will get
> >> >
> >> > tv3.strength >> tv4.strength
> >> >
> >> > The difference between the two inference results arises, in the PLN
> >> > case, from the fact that
> >> >
> >> > P(author of book on AGI) << P(odd)
> >> >
> >> > and the fact that PLN uses Bayes rule as part of its approach to these
> >> > inferences.
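[The effect of the node probability here can be seen in a toy Bayes calculation. This is only an illustration of the point, not the actual PLN abduction formula, and every number below is made up:]

```python
# Toy Bayes illustration (not the PLN formula; numbers invented).
# H = "this dude is Ben"; E = "both individuals have property X".
# If H holds, they share X whenever Ben has it:  P(E|H)  = p_x.
# If not, assume two independent draws:          P(E|~H) = p_x**2.

def p_same_given_shared(prior, p_x):
    like_h, like_not = p_x, p_x ** 2
    num = like_h * prior
    return num / (num + like_not * (1 - prior))

prior    = 0.001     # prior that two arbitrary people are the same
p_author = 0.0001    # P(author of a book on AGI), assumed rare
p_odd    = 0.5       # P(odd), assumed common

strong = p_same_given_shared(prior, p_author)  # pulled far above the prior
weak   = p_same_given_shared(prior, p_odd)     # barely above the prior
```

The rarer the shared property, the more the posterior favors identity, which is why a Bayes-using system gives tv3.strength >> tv4.strength.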
> >> >
> >> > So, the question is, in your probabilistic variant of NARS, do you get
> >> >
> >> > tv3.strength = tv4.strength
> >> >
> >> > in this case, and if so, why?
> >> >
> >> > thx
> >> > ben
> >>
> >>
> >
> >
> >
>
>
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com