Lotfi -

>        The standard axiomatic structure of standard probability theory
>does not address two basic issues which show their ungainly faces in
>many real-world applications of probability theory. They are (a)
>imprecision of probabilities;

Imprecision of probabilities can be handled with second-order 
probabilities and/or interval probabilities.
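
For instance, here is a toy illustration of my own (not anything from 
your paper -- the Beta(2, 8) choice and the interval endpoints are 
numbers I made up) showing that both representations are easy to write 
down with standard tools:

from scipy import stats

# (1) Second-order probability: treat the unknown probability p of some
#     event as a random variable with its own distribution.
second_order_p = stats.beta(2, 8)   # "p is probably small, but uncertain"
print("mean of p:", second_order_p.mean())
print("95% credible interval for p:", second_order_p.interval(0.95))

# (2) Interval probability: commit only to lower and upper bounds on p.
p_lower, p_upper = 0.1, 0.3
print("p is known only to lie in [%.1f, %.1f]" % (p_lower, p_upper))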

>and (b) imprecision of events.

If by this you mean representing events that don't satisfy the 
clarity test, you are right.  Probabilities are defined on sets -- or 
in your terminology, "crisp sets." (I prefer to use standard 
mathematical terminology unless there is a strong reason to do 
otherwise.  Therefore, I use "fuzzy set" to refer to the 
generalization of standard set theory that you describe in your 
email.)
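
For anyone following this exchange who wants the distinction spelled 
out, here is a toy illustration of my own (the 180 cm cutoff and the 
160-190 cm ramp are arbitrary choices): a crisp set has a {0,1} 
indicator function, while a fuzzy set assigns each element a degree of 
membership in [0, 1].

def tall_crisp(height_cm):
    # Classical (crisp) set: "tall" means height >= 180 cm; membership is 0 or 1.
    return 1 if height_cm >= 180 else 0

def tall_fuzzy(height_cm):
    # Fuzzy set: membership rises linearly from 0 at 160 cm to 1 at 190 cm.
    return min(1.0, max(0.0, (height_cm - 160) / 30.0))

print(tall_crisp(175), tall_fuzzy(175))    # prints: 0 0.5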

>To address
>these issues and to add to probability theory the capability to deal
>with perception-based information, e.g., "usually Robert returns from
>work at about 6 pm; what is the probability that Robert is home at 6:30
>pm?"

I don't understand what you mean by calling this "perception-based 
information."   This is a statement in natural language, at the 
cognitive level and NOT at the perceptual level.  I suspect we as a 
community will ultimately conclude it is a dead-end path to try to 
build a truly intelligent reasoner by reasoning directly with 
linguistic statements such as this, using theories that are not based 
on a deeper understanding of the complex process by which the brain 
generates such linguistic statements.  When we have a deeper 
understanding of how we actually do cognitive level processing and 
how it interacts with sub-symbolic processing, I doubt very much that 
t-norms and t-conorms will turn out to be what it's about.  They 
strike me as epicycles.

I could be very wrong about that.  I urge anyone who disagrees with 
me to plunge in and work on t-norms and t-conorms.  Science moves 
forward because passionate adherents of the various theories try with 
everything they've got to make their theories work.  For what it's 
worth, though, while I have the very highest respect for your 
work, I'm placing my bets elsewhere.

In my mind, "perception-based information" would mean something like 
nerve impulses resulting from an optical or auditory waveform 
impacting on the retina or the eardrum.  If we are talking 
automation, then a radar waveform or a bunch of pixel intensities 
would qualify as "perception-based information."  It takes a huge 
amount of processing to turn this kind of information into a 
high-level linguistic summary, fuzzy or otherwise.  We don't yet 
understand very well how the brain does this, and our computational 
models still reflect this lack of understanding -- although things 
are moving very rapidly.

I am very keenly interested in figuring out how a brain or computer 
can get from a bunch of pixels to a linguistic statement such as 
"that is a roughly circular shaped blob."  In my view, the big payoff 
will come not from applying t-norms and t-conorms to reason about 
statements already processed into linguistic form.  The big payoff 
will come from going from the sensory data to the linguistic 
constructs.  Once we understand how to do that, then how to reason 
with the linguistic constructs will become obvious.  It will fall 
right out of the theory.

In my view, the most promising path to dealing with imprecision is 
the following research program (which is going on at an active pace 
as we speak):
   - develop theories of how animals process sensory information;
   - develop theories of how these sensory processing mechanisms 
contribute to the generation of linguistic summary statements such as 
the above;
   - abstract away the details to arrive at the essential principles 
of how humans and other animals do this;
   - build computational theories that apply these essential principles.

In my view, it is quite likely that probability and decision theory 
will prove adequate to the job.  However, in my view it is also quite 
likely that classical physics and classical computing will prove 
inadequate to the job.  We will need a new theory of computing in 
which standard recursive function theory / Turing computability is 
replaced by a quantum computation model that has quantum non-locality 
and quantum randomness built into the computational apparatus from 
the ground up.  You can find some (as yet unpublished -- I haven't 
yet written anything that I feel is ready for archival publication) 
musings on this topic on my web site (url below).  I am currently 
working on a paper amplifying these ideas and developing a quantum 
computation version of the physical symbol system hypothesis.  Stay 
tuned.

>It is necessary to generalize probability theory

Your preferred research program is to attack these problems by 
generalizing probability theory via t-norms and t-conorms. That is a 
perfectly valid approach to try, as I said above, although it is not 
my preferred approach.  However, it's not fair to say this is 
NECESSARY unless you have definitively falsified the alternative 
approaches.  You demonstrate necessity only if you can show that 
there is no way to handle these kinds of problems using ordinary 
probability and decision theory.  I believe you have come nowhere 
near doing this.
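
To make that last point concrete: the "Robert" question in your message 
can at least be formulated in ordinary probability theory by putting a 
distribution on his return time.  A rough sketch (the normal model, and 
the 6:00 pm mean with a 20-minute standard deviation, are numbers I 
just made up for illustration):

from scipy import stats

# Return time, in minutes after 5:00 pm: "usually about 6 pm".
return_time = stats.norm(loc=60, scale=20)

# P(Robert is home at 6:30 pm) = P(return time <= 90 minutes),
# ignoring the possibility that he goes out again after arriving.
p_home_at_630 = return_time.cdf(90)
print("P(home at 6:30 pm) = %.2f" % p_home_at_630)   # about 0.93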

>in three stages. A
>preliminary account of such generalization is described in a forthcoming
>paper of mine in the Journal of Statistical Planning and Inference,
>"Toward a Perception-based Theory of Probabilistic Reasoning with
>Imprecise Probabilities."

Thanks for the reference.  I recommend it to anyone having a serious 
interest in these issues.  It is important to look at all the 
alternatives before making up one's mind.

Respectfully,

Kathy Laskey

