I am a law professor interested in various aspects of probability (by which 
I mean to include fuzzy set theory, the standard probability calculus, 
and just about everything else related) as they may apply to 
legal decision making.  I have lurked on this list for some time, and 
learned much (for which I am grateful), but now I have a question (if I 
may).  Before asking it, I should also say that I have no normative 
commitments to any of the positions represented in the various debates that 
occur on the list.  For me, the various approaches to probability represent 
tools to be employed when helpful and appropriate to the task at hand, 
which brings me to my question.  I have the sense that there is a double 
standard at play.  If conventional probability theory (in particular, as I 
follow the discussions, expected utility/decision theory) does not explain, 
in essence, everything, it is accused of explaining very 
little or nothing, whereas if some alternative explains something, that 
alone is taken as demonstrating its superiority (and perhaps vice 
versa).  If I am wrong in this, it would be helpful to me to have my 
error uncovered.  If I am right, then isn't the salient question 
the proper domain of differing approaches?

Lest this be taken as a disguised defense of conventional probability 
theory, by the way, perhaps I should note that I have spent a fair amount 
of effort demonstrating that various aspects of conventional probability 
theory basically do not map onto trial processes.  But, I also don't see 
fuzzy set theory as particularly useful either, at least not in any kind of 
general algorithmic way.  The combination of those two points, in fact, is 
what generated my question.  Aren't the combating perspectives in this 
debate best understood as providing tools of various kinds to be used as 
appropriate to the task at hand?

Any help in dispelling my ignorance would be greatly appreciated.

At 08:43 PM 8/15/2003 -0700, Lotfi A. Zadeh wrote:
>   Insightful comments regarding the principle of maximization of
>expected utility--and the paradoxes of Allais and Ellsberg which call
>into question its validity--have clarified, but not resolved, some of
>the basic issues which touch upon the foundations of decision analysis
>and economic theory.
>
>A broad question which remains on the table is: Is it possible to
>construct a precise, rigorous, axiomatic and prescriptive (normative)
>theory, call it PDT (Prescriptive Decision Theory)--a theory in the
>spirit of von Neumann, Morgenstern, and Wald--the superior intellects
>who laid its foundations in the middle of the last century? I had the
>privilege of knowing Morgenstern and Wald, but not von Neumann.
>
>It is much easier to argue that the answer is "No" rather than "Yes."
>Here are my humble arguments in support of "No." For convenience, my
>arguments are centered on what I see as some of the principal obstacles
>to construction of PDT.
>
>The problem of risk aversion. It is quite obvious that risk aversion
>plays a key role in human decision-making. Consequently, it must be an
>integral part of PDT.
>The problem is that risk aversion is a human disposition, like
>honesty, kindness, selfishness, and stinginess. Dispositions are
>context-dependent and hard to characterize. For example, when I am
>rushing my child to a hospital, I am much less risk averse as a driver
>than when I am driving to a beach and have three traffic tickets for
>speeding. In my view, it is not possible to formulate a realistic
>measure of risk aversion within the conceptual structure of bivalent
>logic and bivalent-logic-based probability theory. What is needed for
>this purpose is a high-level definition language such as PNL
>(Precisiated Natural Language). (See my paper "A New Direction in
>AI--Toward a Computational Theory of Perceptions," Spring 2001 issue of
>the AI Magazine.) What this means is that to deal realistically with
>risk aversion we have to exit from the transparent and well-defined
>structure of probability theory and enter the murky waters of human
>psychology.
>
>The problem of the unexpected. In probability theory, we assume (a) that
>we can enumerate the possible outcomes of an experiment, and (b) that
>their respective probabilities add up to unity. But in reality there is
>almost always the possibility of occurrence of an unexpected outcome,
>with a small but unknown probability. How can the issue of unexpected
>outcomes be addressed in PDT?
>
>The problem of pseudonumbers. What I call a pseudonumber is an entity
>which has a number as its label but, in reality, is not a number. Common
>examples are: Check-out time is 1 pm; Speed limit is 100 km/hour;
>Accuracy of the poll is 3 percent; etc. In general, the meaning of a
>pseudonumber is far from simple. How would you define, realistically and
>precisely, what is meant by "Check-out time is 1 pm?"
>
>Why is the concept of a pseudonumber of high relevance to decision
>analysis? Because when the outcome of a decision is represented as a
>number, in most cases the number is a pseudonumber. For example, if I am
>told that by choosing option A I may win $1,000, what goes through my
>mind is: What will be the consequences of winning $1,000? In this
>perspective, $1,000 is a pseudonumber, just as 1 pm is in "Check-out
>time is 1 pm." The standard artifice of assuming that utility is a
>nonlinear function of value is much too simple to address the problem.
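The "standard artifice" referred to here can be made concrete; a minimal sketch, assuming an illustrative logarithmic utility and stake sizes that are not in the original:

```python
import math

# Modeling risk aversion by making utility a concave (nonlinear)
# function of money -- the standard artifice the text mentions.
# The log utility and the dollar amounts below are illustrative assumptions.

def utility(wealth):
    """Concave (log) utility: each extra dollar is worth a bit less."""
    return math.log(wealth)

# Gamble: 50% chance of winning $1,000, 50% chance of nothing,
# starting from $10,000 of wealth.
wealth = 10_000.0
expected_utility = 0.5 * utility(wealth + 1_000) + 0.5 * utility(wealth)

# Certainty equivalent: the sure gain with the same utility as the gamble.
certainty_equivalent = math.exp(expected_utility) - wealth

print(round(certainty_equivalent, 2))  # slightly below the $500 expected value
```

The gap between the certainty equivalent and the $500 expected value is all this device can express about risk aversion, which is why the text calls it much too simple.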
>
>What I am suggesting is that many of the numbers that are used in
>decision analysis and econometrics are, in reality, pseudonumbers. The
>expressive power of bivalent logic and bivalent-logic-based probability
>theory is not sufficient to define the meaning of pseudonumbers. As in
>the case of risk aversion, what is needed for this purpose is PNL.
>
>The problem of axiomatics. When the point of departure in a theory is a
>collection of axioms, the theory appears to be built on a solid,
>precisely formalized, foundation. There is, however, a serious problem
>with axiomatic approaches which relates to the nature of bivalent logic.
>More specifically, a typical axiom contains equalities and universally
>quantified conditions. But suppose that an equality is satisfied not
>exactly but to within epsilon. As epsilon increases, a point is reached
>at which the equality ceases to be satisfied. But what is this point? In
>most realistic settings, the point is context-dependent. The implication
>is that to fit reality an axiomatic structure must allow partiality of
>truth. Existing axiomatic structures do not have this capability.
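Partiality of truth for a near-equality can be given a simple graded form; the triangular membership function and the tolerance below are illustrative assumptions, not part of the original:

```python
# A graded truth value for "x equals y": 1.0 at exact equality,
# falling off linearly to 0.0 once |x - y| reaches the tolerance.
# The triangular shape and tolerance = 0.5 are illustrative choices.

def truth_of_equality(x, y, tolerance=0.5):
    """Degree in [0, 1] to which 'x equals y' holds."""
    return max(0.0, 1.0 - abs(x - y) / tolerance)

print(round(truth_of_equality(3.0, 3.0), 2))  # 1.0  (exactly satisfied)
print(round(truth_of_equality(3.0, 3.2), 2))  # 0.6  (satisfied to within epsilon)
print(round(truth_of_equality(3.0, 4.0), 2))  # 0.0  (clearly violated)
```

On this view there is no single point at which the equality "ceases" to hold; truth simply degrades with epsilon, which is what a bivalent axiom cannot express.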
>
>The problem of perceptions. In most realistic settings,
>decision-relevant information is a mixture of measurements and
>perceptions. For example, when I have to decide on whether or not to buy
>a house, the measurement-based information consists of the price of the
>house, taxes, area, etc., while the perception-based information is its
>appearance, quality of construction, quality of schools, safety, and so on.
>
>Existing methods of decision analysis are intended to deal with problems
>in which decision-relevant information is measurement-based. Based as
>they are on bivalent logic and bivalent-logic-based probability theory,
>existing methods do not have the capability to operate on
>perception-based information.
>
>An example is what may be called "The balls-in-box" problem--a problem
>which has some links to Ellsberg's paradox. The measurement-based
>version of the problem is: A box contains 20 black and white balls.
>Over 70% are black. There are three times as many black balls as white
>balls. What is the probability that a ball drawn at random is white? The
>perception-based version is: A box contains about 20 black and white
>balls. Most are black. There are several times as many black balls as
>white balls. What is the probability that a ball drawn at random is white?
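The measurement-based version admits an exact answer; a minimal sketch in Python, using the counts implied by the stated constraints:

```python
from fractions import Fraction

# Measurement-based balls-in-box: 20 balls in all, with three times as
# many black balls as white, which forces 15 black and 5 white.
total = 20
white = Fraction(total, 1 + 3)   # white + 3*white = total
black = total - white

p_white = white / Fraction(total)
print(p_white)                                       # 1/4
print(black / Fraction(total) > Fraction(70, 100))   # True: over 70% black
```

The perception-based version, by contrast, pins down no exact counts: "about 20," "most," and "several times as many" leave the answer imprecise, which is precisely the information standard PT cannot operate on.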
>
>Standard probability theory, call it PT, does not deal with problems of
>this type. The inability of PT to deal with perception-based information
>is a fundamental limitation. However, the capability to deal with
>perception-based information may be added to PT, through a three-stage
>generalization, leading to a perception-based probability theory, PTp. (An
>exposition of PTp may be found in my paper "Toward a Perception-based
>Theory of Probabilistic Reasoning with Imprecise Probabilities," Journal
>of Statistical Planning and Inference, Vol. 105, 233-264, 2002. Downloadable
>from http://www-bisc.cs.berkeley.edu/BISCProgram/CTPZadeh.pdf.)
>
>To sum up my arguments, if construction of a decision theory in the
>spirit of von Neumann-Morgenstern-Wald is not a realizable goal, then
>what is achievable?
>
>In my view, what is achievable is a theory which is partly descriptive
>and partly prescriptive. A prerequisite to constructing such a theory is
>abandonment of bivalence. The resulting decision theory, call it DTp,
>will have the capability to operate on both measurement-based and
>perception-based information.
>
>Lotfi A. Zadeh
>Professor in the Graduate School, Computer Science Division
>Department of Electrical Engineering and Computer Sciences
>University of California
>Berkeley, CA 94720-1776
>Director, Berkeley Initiative in Soft Computing (BISC)



Ronald J. Allen
Wigmore Professor of Law
Northwestern University
Phone:  312-503-8372
Fax:    312-503-2035
