> According to my "experience-grounded semantics", in NARS the truth value (the
> frequency-confidence pair) measures the compatibility between a statement
> and available (past) experience, without assuming anything about the "real
> world" or the future experience of the system.
>
> I know you also accept a version of experience-grounded semantics, but it is
> closer to model-theoretic semantics. In this approach, it still makes sense
> to talk about the "real value" of a truth value, and to take the current
> value in the system as an approximation of it. In such a system, you can
> talk about possible worlds with different probability distributions, and use
> the current knowledge to choose among them.
>
> Probability theory is not compatible with the first semantics above, but
> with the second one.
>
> Pei

Right -- PTL semantics is experience-grounded, but in the derivation of some
of the truth value functions associated with the inference rules (deduction
and revision), we make an implicit assumption that "reality" is drawn from
some probability distribution over "possible worlds."  Among the "heuristic
assumptions" we use to make this work well in practice are some assumptions
about the nature of this distribution over possible worlds (i.e., we don't
assume a uniform distribution; we assume a bias toward possible worlds that
are structured in a certain sense).  This kind of bias is a more abstract
form of Hume's assumption of a "human nature" providing a bias that stops
the infinite regress of the induction problem.

However, I disagree that NARS doesn't assume anything about the real world
or the future experience of the system.  In NARS, you weight each frequency
estimate based on its confidence, c = n/(n+k), where n is the number of
observations on which the frequency estimate is based and k is a constant
system parameter (the "evidential horizon").  This embodies the assumption
that something which has been observed more times in the past is more likely
to occur in the future.  This assumption is precisely a bias on
the space of possible worlds.  It is an assumption that possible worlds in
which the future resembles the past, are more likely than possible worlds in
which the future is totally unrelated to the past.  I think this is a very
reasonable assumption to make, and that this assumption is part of the
reason why NARS works (to the extent that it does ;).  However, I think you
must admit that this DOES constitute an inductive assumption, very similar
to the assumption that possible worlds with temporal regularity are more
likely than possible worlds without.
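
To make that concrete, here is a minimal sketch of the confidence measure in
Python (the value k = 1 used below is an assumed default; in NARS, k is a
tunable system parameter):

```python
# Minimal sketch of the NARS-style confidence measure discussed above.
# Assumption: k is the "evidential horizon" constant; k = 1 is used here
# only for illustration -- the actual value is a system parameter.

def confidence(n: float, k: float = 1.0) -> float:
    """Confidence c = n / (n + k), where n is the amount of evidence."""
    return n / (n + k)

if __name__ == "__main__":
    # More past observations -> higher confidence, i.e. more weight is
    # given to the frequency estimate when predicting future experience.
    for n in (1, 4, 9, 99):
        print(f"n = {n:3d}  c = {confidence(n):.2f}")
```

The point is simply that weighting by c amounts to trusting well-observed
regularities more, which is exactly the "future resembles the past" bias.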

Also, I think that the reason the NARS deduction truth value formula works
reasonably well is that it resembles somewhat the rule one obtains if one
assumes a biased sum over possible worlds.  In your derivation of this
formula you impose a condition at the endpoints, and then you choose a
relatively simple rule that meets these endpoint conditions.  However, there
are many possible rules that meet these endpoint conditions.  The reason you
chose the one you did is that it makes intuitive sense.  However, the reason
it makes intuitive sense is that it fairly closely matches what you'd get
from probability theory under a plausible assumption about the probability
distribution over possible worlds.
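
For concreteness, here is a rough sketch of that comparison in Python.  The
NARS functions below are the commonly cited deduction truth-value formulas
(f = f1*f2, c = f1*f2*c1*c2), not quoted from this thread, and the numbers
and the value chosen for P(C|~B) are assumptions made purely for illustration:

```python
# Rough illustration (not either author's exact derivation) of why the
# NARS deduction formula tracks a probabilistic rule.

def nars_deduction(f1, c1, f2, c2):
    """Commonly cited NARS deduction truth-value function for
    A->B <f1, c1>, B->C <f2, c2>  |-  A->C <f, c>."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

def prob_deduction(p_b_given_a, p_c_given_b, p_c_given_not_b):
    """Total-probability decomposition, assuming B screens A off from C:
    P(C|A) = P(C|B) P(B|A) + P(C|~B) (1 - P(B|A))."""
    return p_c_given_b * p_b_given_a + p_c_given_not_b * (1 - p_b_given_a)

if __name__ == "__main__":
    f1, f2 = 0.9, 0.8
    # If one assumes P(C|~B) is small (a bias toward "structured" worlds),
    # the probabilistic result lands close to the NARS frequency f1*f2.
    print("NARS frequency:", nars_deduction(f1, 0.9, f2, 0.9)[0])   # 0.72
    print("Probabilistic :", prob_deduction(f1, f2, 0.05))          # 0.725
```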

Similarly, the NARS revision rule is very close to what you get if you
assume an unbiased sum over all possible worlds (in the form of a
probabilistic independence assumption between the relations being revised).
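
Again for illustration only, here is a sketch of the revision rule in the
form usually given for NARS (pooling of independent evidence, with the same
k as above); the exact formulas and the example numbers are assumptions of
the sketch, not taken from this thread:

```python
# Minimal sketch of NARS-style revision as pooling of independent evidence.
# k is the same evidential-horizon constant as in c = n/(n+k).

def revise(f1, c1, f2, c2, k=1.0):
    """Combine two truth values for the same statement, assuming their
    evidence bases are independent (no double counting)."""
    w1 = k * c1 / (1.0 - c1)   # recover evidence amount from confidence
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2                # independent evidence simply adds
    f = (w1 * f1 + w2 * f2) / w
    c = w / (w + k)
    return f, c

if __name__ == "__main__":
    # Two independent sources raise confidence above either one alone.
    print(revise(0.9, 0.5, 0.8, 0.5))   # -> (0.85, 0.666...)
```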

In short, I think that in NARS you secretly smuggle in probability theory, by:

-- using a confidence estimator based on an assumption about the probability
distribution over possible worlds
-- using "heuristically derived" deduction and revision rules that just
happen to moderately closely coincide with what one obtains by reasoning in
terms of probability distributions over possible worlds

On the other hand, the NARS induction and abduction rules do NOT closely
correspond to anything obtainable by reasoning about probabilities and
possible worlds.  However, I think these are the weakest part of NARS; and
in playing with NARS in practice, these are the rules that, when iterated,
seem to frequently lead to intuitively implausible conclusions.

-- Ben G


