On Sat, Sep 20, 2008 at 9:09 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> (1) In probability theory, an event E has a constant probability
>> P(E) (which can be unknown). Given the assumption of insufficient
>> knowledge and resources, in NARS P(A-->B) would change over time,
>> as more and more evidence is taken into account. This process
>> cannot be treated as conditioning because, among other things, the
>> system can neither explicitly list all evidence as a condition,
>> nor update the probability of every statement in the system for
>> each piece of new evidence (so as to treat all background
>> knowledge as a default condition). Consequently, at any moment
>> P(A-->B) and P(B-->C) may be based on different, though
>> unspecified, data, so it is invalid to use them together in a rule
>> to calculate the "probability" of A-->C --- probability theory
>> does not allow cross-distribution probability calculation.
>
> This is not a problem the way I set things up. The likelihood of a
> statement is welcome to change over time, as the evidence changes.
If each of them changes independently, you don't have a single
probability distribution anymore, but a bunch of them. In the above
case, you don't really have P(A-->B) and P(B-->C), but P_307(A-->B)
and P_409(B-->C). How can you use two probability values together if
they come from different distributions?

>> (2) For the same reason, in NARS a statement might get different
>> "probability" values attached when derived from different
>> evidence. Probability theory does not have a general rule to
>> handle inconsistency within a probability distribution.
>
> The same statement holds for PLN, right?

Yes. Ben proposed a solution, which I won't comment on until I see
all the details in the PLN book.

>> The first half is fine, but the second isn't. As the previous
>> example shows, in NARS a high Confidence does imply that the
>> Frequency value is a good summary of the evidence, but a low
>> Confidence does not imply that the Frequency is bad, just that it
>> is not very stable.
>
> But I'm not talking about confidence when I say "higher". I'm
> talking about the system of levels I defined, for which it is
> perfectly OK.

Yes, but the whole purpose of adding another value is to handle
inconsistency and belief revision. Higher-order probability is
mathematically sound, but won't do this work. Think about a concrete
example: if from one source the system gets P(A-->B) = 0.9 and
P(P(A-->B) = 0.9) = 0.5, while from another source P(A-->B) = 0.2
and P(P(A-->B) = 0.2) = 0.7, then what will be the conclusion when
the two sources are considered together?

Pei

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now
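For contrast, the two-source case does have a definite answer under NARS's revision rule, which pools the evidence behind two independent judgments. The sketch below is a minimal illustration, assuming the standard NARS mapping between (frequency, confidence) and evidence counts with evidential horizon k = 1; reading the numbers from the higher-order-probability example as (frequency, confidence) pairs is an illustrative assumption, not part of either system's actual semantics.

```python
K = 1.0  # evidential horizon (k = 1 is the customary NARS choice)

def to_evidence(f, c):
    """Map (frequency, confidence) back to (positive, total) evidence,
    using w = k*c/(1-c) and w+ = f*w."""
    w = K * c / (1.0 - c)   # total amount of evidence
    return f * w, w          # positive evidence, total evidence

def revise(f1, c1, f2, c2):
    """NARS revision: add up the evidence behind two independent
    judgments, then convert back to (frequency, confidence)."""
    p1, w1 = to_evidence(f1, c1)
    p2, w2 = to_evidence(f2, c2)
    w_pos, w = p1 + p2, w1 + w2
    return w_pos / w, w / (w + K)

# Reading the example's numbers as NARS truth values (an assumption):
# source 1 says <A-->B> with f=0.9, c=0.5; source 2 says f=0.2, c=0.7.
f, c = revise(0.9, 0.5, 0.2, 0.7)
print(round(f, 3), round(c, 3))  # 0.41 0.769
```

The combined frequency lands between the two inputs, weighted by their evidence, and the combined confidence exceeds either input's, since the pooled evidence is larger. Higher-order probability, by itself, provides no analogous combination rule.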