Sorry, I don't have time to type a detailed reply, but for your
second point, see the example in
http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
paragraph:

If these two types of uncertainty [randomness and fuzziness] are
different, why bother to treat them in a uniform way? The basic reason
is: in many practical problems, they are involved with each other.
Smets stressed the importance of this issue and provided examples in
which randomness and fuzziness are encountered in the same sentence
([20]). The same is true for inferences. Let's take medical diagnosis
as an example. When a doctor wants to determine whether a patient A is
suffering from disease D, (at least) two types of information need to
be taken into account: (1) whether A has D's symptoms, and (2) whether
D is a common illness. Here (1) is evaluated by comparing A's symptoms
with D's typical symptoms, so the result is usually fuzzy, while (2) is
determined by previous statistics. After the total certainty of "A is
suffering from D" is evaluated, it should be combined with the
certainty of "T is a proper treatment for D" (which is usually a
statistical statement, too) to get the doctor's "degree of belief" in
"T should be applied to A". In such a situation (which is the usual
case, rather than an exception), even if randomness and fuzziness can
be distinguished in the premises, they are mixed in the intermediate
and final conclusions.
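The combination described in the quoted paragraph can be sketched as follows. This is only an illustration, not anything from the paper: the numbers are made up, and the product rules used to chain the certainties are one conventional choice, not the paper's actual calculus.

```python
# Hypothetical numbers; product is used here as an illustrative
# combination rule, not the rule the paper proposes.

# (1) fuzzy: how well A's symptoms match D's typical symptoms
symptom_match = 0.8       # degree from a fuzzy comparison

# (2) statistical: how common D is, from previous statistics
prevalence = 0.3          # probability

# intermediate certainty of "A is suffering from D":
# fuzziness and randomness are already mixed at this step
cert_A_has_D = symptom_match * prevalence

# statistical certainty of "T is a proper treatment for D"
treatment_works = 0.9

# doctor's degree of belief in "T should be applied to A"
belief_apply_T = cert_A_has_D * treatment_works
print(round(belief_apply_T, 3))  # ~0.216, a mixed measure
```

Whatever rule replaces the products, the point stands: the intermediate and final values carry both kinds of uncertainty at once.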

Pei

On Mon, Sep 8, 2008 at 3:55 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> A somewhat revised version of my paper is at:
> http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
> (sorry, it is now a book chapter, and the bookmarks were lost during extraction)
>
> On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
>>>
>>>   I intend to use NARS confidence in a way compatible with
>>> probability...
>
>> I'm pretty sure it won't, as I argued in several publications, such as
>> http://nars.wang.googlepages.com/wang.confidence.pdf and the book.
>
> I understood your argument about defining the confidence c, and agree
> with it.  But I don't see why c cannot be used together with f (read
> as a *traditional* probability).
>
>> In summary, I don't think it is a good idea to mix B, P, and Z. As Ben
>> said, the key is semantics, that is, what is measured by your truth
>> values. I prefer a unified treatment to a hybrid, because the former
>> is semantically consistent, while the latter isn't.
>
> My logic actually does *not* mix B, P, and Z.  They are kept
> orthogonal, and so the semantics can be very simple.  Your approach
> mixes fuzziness with probability, which can result in ambiguity in
> some everyday examples: e.g., "John tries to find a 0.9-pretty girl"
> (degree) vs. "Mary is 0.9 likely to be pretty" (probability).  The
> difference is real but subtle; I agree that you can mix them, but you
> must always acknowledge that the measure is mixed.
>
> Maybe you've misunderstood what I'm trying to do, since my theory
> should not be semantically confusing...
>
> YKY
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
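For reference, the f and c discussed in the quoted exchange can be sketched from the definitions in wang.confidence.pdf: frequency is the proportion of positive evidence, and confidence grows with total evidence but never reaches 1 (which is why c behaves differently from a probability). The evidence counts below are made up, and k = 1 is the conventional default.

```python
# Sketch of NARS frequency/confidence from evidence counts.
# k is the "evidential horizon" constant (default 1).
def nars_truth(w_plus, w_minus, k=1):
    """Return (frequency, confidence) from positive/negative evidence."""
    w = w_plus + w_minus      # total evidence
    f = w_plus / w            # frequency: proportion of positive evidence
    c = w / (w + k)           # confidence: approaches, never reaches, 1
    return f, c

f, c = nars_truth(3, 1)       # made-up counts: 3 positive, 1 negative
print(f, c)                   # 0.75 0.8
```

Note that c measures the amount of evidence behind f, not the chance that the statement is true, which is the crux of the argument above about not reading c as a probability.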

