[EMAIL PROTECTED] wrote:
> David Poole writes:
> 
> 
>>In this example, the clarity principle doesn't depend on any definition 
>> of "near", as the adherents to fuzzy would like to claim. All we need is 
>> to make a test for whether the ship is near the land. I would ask the 
>> captain of the ship. "The captain of the ship would, if asked, concur 
>> that the ship is near the land" is a perfectly clear clarity principle: 
>> we could bet on what the captain would say (we might need to have some 
>> protocol if the captain refused to talk to us on the grounds that we are 
>> just troublesome academics). We could even derive a probability 
>> distribution of what the captain would say conditioned on the distance...
> 
> 
> On what basis is the captain supposed to answer your question? How does she 
> use the information about the birds? Are probabilistic means foreclosed to her?
> 
> 
>                                                     Paul Snow

The captain is just supposed to answer "near" questions. We don't ask 
her theoretical questions, just "Are we now near land?"  She can use 
whatever information she likes; her answer can depend on the time of 
day or the weather (both of which I would expect to be relevant to the 
answer), or she can just pick random answers. That she says yes to this 
question is a well-defined proposition that we can bet on or assign 
probabilities to. We can gather evidence, have prior probabilities, or 
just argue theoretically about what her answer would be. We can have a 
probability of her saying yes, given the distance (and given other 
relevant features). We don't need any fuzzy "near" concept.
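To make this concrete, here is a small sketch of my own (not part of 
the original exchange) of what such a model might look like: a 
hypothetical logistic model for P(captain says "near" | distance, sea 
state), whose coefficients would in practice be fitted to records of 
her past answers rather than made up as here.

    import math

    def p_says_near(distance_km, wave_height_m, a=2.0, b=-0.8, c=0.5):
        # Hypothetical logistic model of the proposition "the captain,
        # if asked now, would say we are near land".  The coefficients
        # a, b, c are invented; in practice they would be estimated
        # from records of the captain's past answers.
        score = a + b * distance_km + c * wave_height_m
        return 1.0 / (1.0 + math.exp(-score))

    # The same distance can look "nearer" in rough seas:
    print(p_says_near(3.0, 0.5))   # calm water,  about 0.46
    print(p_says_near(3.0, 4.0))   # rough water, about 0.83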

If I were to build a system based on the closeness of ships to shore, 
I'd much rather trust (and model) the opinions of experts than have me 
or other computer scientists or logicians make up an arbitrary 
definition of what "near" may mean. "Near" probably has quite a useful 
meaning in the nautical world.

In the birds example, "near land" now has a precise meaning. My 
guess is that she does not use the information about the birds in her 
judgement of closeness, although, as in the example, it could be used as 
evidence about whether or not she would say we were close.
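To illustrate that last point with numbers I have simply invented for 
the purpose: seeing land birds can shift our probability that she 
would say "near" by ordinary conditioning, without the birds entering 
her own judgement at all.

    # Made-up numbers, just to show the birds acting as evidence about
    # what the captain would say, via Bayes' rule.
    p_near       = 0.2   # prior probability she would say "near"
    p_birds_near = 0.9   # chance of seeing land birds if she would say "near"
    p_birds_far  = 0.3   # chance of seeing land birds otherwise

    p_birds = p_birds_near * p_near + p_birds_far * (1 - p_near)
    p_near_given_birds = p_birds_near * p_near / p_birds
    print(p_near_given_birds)   # about 0.43, up from the prior of 0.2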

This may not seem so compelling for concepts such as "near", where it 
seems obvious that nearness is a function of distance (although my 
guess is that it is also a function of the weather: what may not be 
"near" in calm waters may be very near in stormy waters), but we just 
don't know the function.  However, consider the concept of "beauty". 
There are no obvious properties that make a scene or part of a scene 
beautiful.  But we can still have probability distributions over 
whether someone (or even a random person) would say that a scene is 
beautiful. We can learn what makes things beautiful (i.e., what 
properties would predict that someone would say it is beautiful).  The 
common saying that "beauty is in the eye of the beholder" emphasises 
the subjective nature of beauty.
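As a toy sketch of that learning step (the properties and judgements 
below are invented purely for illustration), one could estimate 
P(judge says "beautiful" | property) by simple counting over labelled 
judgements:

    # Invented judgements: (has_water, has_mountains, said_beautiful).
    judgements = [
        (True,  True,  True),
        (True,  False, True),
        (False, True,  True),
        (False, False, False),
        (True,  False, False),
        (False, False, False),
    ]

    def p_beautiful_given(prop_index, value):
        # Fraction of matching judgements where the judge said "beautiful".
        matching = [j for j in judgements if j[prop_index] == value]
        return sum(1 for j in matching if j[2]) / len(matching)

    print(p_beautiful_given(0, True))   # P(said beautiful | has water)     = 2/3
    print(p_beautiful_given(1, True))   # P(said beautiful | has mountains) = 1.0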

I must admit that I have never understood the motivation for fuzzy 
logic. I can't see why concepts such as "near" or "beautiful" can't be 
modelled with standard probability, as above.  Perhaps I can be 
enlightened.

David
