Christian --
You wrote --
"With it the theory can always be saved, especially since
the utility of an outcome is a subjective thing and thus cannot be
measured objectively. We are free to fiddle around with the utility
function until it gives the decision we want."
This is hardly the case. The utility function in the von Neumann-Morgenstern
(VNM) theory is invariant only up to positive affine transformations, i.e.,
a change of origin and a positive change of scale. Anything else will affect
the decision maker's (DM's) choice. In the VNM development, a mechanism for
eliciting the utility
is well defined, at least for a finite set of choices. Order the
outcomes from best to worst when there is no uncertainty about the
outcome. Assign U(worst) = 0 and U(best) = 1. Then elicit from the DM
U(X) = the probability p making the DM indifferent between getting X
for sure and the gamble of getting the best with probability p and the
worst with probability 1-p.
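The elicitation procedure can be sketched in a few lines. The square-root
utility below is a hypothetical stand-in for a particular DM's preferences
(nothing in the messages specifies one); the point is that her stated
indifference probabilities directly become her utilities:

```python
import math

def elicit_utility(outcomes, indifference_prob):
    """Standard-gamble elicitation as described above: anchor U(worst) = 0
    and U(best) = 1, then set U(X) to the probability p at which the DM is
    indifferent between X for sure and the best/worst gamble."""
    ranked = sorted(outcomes)
    worst, best = ranked[0], ranked[-1]
    return {x: 0.0 if x == worst else 1.0 if x == best
               else indifference_prob(x, best, worst)
            for x in outcomes}

# Hypothetical DM whose preferences happen to follow U(x) = sqrt(x):
# her indifference probability for X is exactly her normalized utility.
dm = lambda x, best, worst: ((math.sqrt(x) - math.sqrt(worst))
                             / (math.sqrt(best) - math.sqrt(worst)))
utilities = elicit_utility([0, 30, 45], dm)
```

For this DM the elicited U(30) comes out near 0.816, i.e., she is risk
averse: the sure 30 is worth more to her than its share of the 0-to-45 range.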
In the Allais paradox, using Gordon's examples, set U(0) = 0 and U(45)
= 1.0. Now try to find a value of U(30) that will allow the DM to
choose L1 = {30 w/prob 1} over L2 = {45 w/prob .8 and 0 w/prob .2} and
also allow the DM to choose K2 = {45 w/prob .2 and 0 w/prob .8} over
K1 = {30 w/prob .25 and 0 w/prob .75}. Such a value must satisfy
U(30) > .8 and U(30) < .8.
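The contradiction can be verified mechanically with a brute-force scan over
candidate values of U(30):

```python
# With U(0) = 0 and U(45) = 1, the four expected utilities reduce to
# linear expressions in u = U(30):
#   EU(L1) = u           EU(L2) = 0.8
#   EU(K1) = 0.25 * u    EU(K2) = 0.2
# Choosing L1 over L2 requires u > 0.8; choosing K2 over K1 requires
# 0.25 * u < 0.2, i.e. u < 0.8.
candidates = [u / 1000.0 for u in range(1001)]
feasible = [u for u in candidates if u > 0.8 and 0.25 * u < 0.2]
# feasible is empty: no single value of U(30) rationalizes both choices.
```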
Clearly, you are not free to "fiddle around with the utility function
until it gives the decision we want." Only the probabilities are
subjective in the VNM theory. Once the probabilities are set, the origin
and scale of the utility function are fixed (arbitrarily), and the axioms
are satisfied, the rest must follow. There is a very good reason for
this: the utility assessment captures attitudes towards risk. That is
why your attempt to define risk aversion in terms of variance in the
utility value makes no sense at all.
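A small illustration of how the utility function itself carries the risk
attitude (the square-root utility is a standard textbook choice, not
anything taken from the messages above):

```python
import math

# A concave utility encodes risk aversion: the certainty equivalent of a
# gamble falls below its expected value, with no separate variance term.
U = math.sqrt
gamble = [(0.5, 0.0), (0.5, 100.0)]  # fair coin: win 0 or 100

expected_value = sum(p * x for p, x in gamble)       # 50.0
expected_utility = sum(p * U(x) for p, x in gamble)  # 0.5*0 + 0.5*10 = 5.0
certainty_equivalent = expected_utility ** 2         # invert U(x) = sqrt(x)
# This DM would accept 25 for sure in place of a gamble worth 50 on average.
```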
You also wrote
"Either I can reject the theory as inappropriate or insufficient (this
is what Prof. Zadeh is doing) or I can try to convince myself that I
am wrong in believing that people are acting rationally in the
situation in which it fails (this mailing list has seen several posts
from people who said that they had done this)."
I believe there is a middle ground here: one that recognizes that VNM
expected utility theory is not universal and remains aware of the
situations in which it is not appropriate.
Bob.
-----Original Message-----
From: Christian Borgelt [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2003 9:47 PM
To: [EMAIL PROTECTED]
Subject: Re: [UAI] Allais' paradox
Dear Gordon
Let me start by clarifying the descriptive/prescriptive issue, because
it is at the core of the discussion. If we only have the premise that a
decision theory fails as a descriptive theory, it does not follow that
this theory is not an acceptable prescriptive theory, you are perfectly
right about that. However, if I add as a second premise that I see as
perfectly rational what people are doing in a situation in which the
theory fails, I have to reject it also as a prescriptive theory, because
then it prescribes (at least in this specific situation) something I see
as irrational.
More generally, the problem is this: If a decision theory prescribes
some decision, which I see as inappropriate, I have two options: Either
I can reject the theory as inappropriate or insufficient (this is what
Prof. Zadeh is doing) or I can try to convince myself that I am wrong
in believing that people are acting rationally in the situation in which
it fails (this mailing list has seen several posts from people who said
that they had done this). As I see things this discussion is about which
of the two options is the better one. (Maybe we can apply expected
utility theory to come up with an answer ;-)
As with most such questions, in the end it may be a matter of belief
how we decide. If I believe that people are often acting irrationally,
I may be inclined to go for the second alternative. If I believe more
strongly in the rationality of the human mind, I may prefer the first.
In finding an answer, maybe it could help to consider the following:
It is well known that many people misestimate the number of people in
the famous birthday problem. (How many people are needed, so that the
probability that two of them have the same birthday is higher than
50%?) Here it is easy to convince oneself that the original estimate
given by many people is actually wrong. However, what is really
convincing in this case is not that one does the computations (using
probability theory), but that one can explain how the misestimate
comes about: When estimating, people usually interpolate or extrapolate
linearly, and since it is easy to assess the probability for 2 people
and for 366 people, many people guess a number in the vicinity of
180. However, the actual function is highly non-linear. What I would
like in the cases of Allais' and Ellsberg's paradox is something
similar: Why do most people reject the answer of expected utility
theory on first sight? What is it that they are doing wrong? Is there
something like the linear interpolation in the birthday problem that
does not fit the situation?
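The birthday computation itself takes only a few lines, and it shows how
sharply the true curve departs from the linear guess:

```python
from math import prod

def p_shared_birthday(n):
    # P(at least two of n people share a birthday), 365 equally likely days
    return 1.0 - prod((365 - k) / 365 for k in range(n))

# The probability crosses 50% already at n = 23, far below the ~180 that
# linear interpolation between n = 2 and n = 366 would suggest.
threshold = next(n for n in range(2, 367) if p_shared_birthday(n) > 0.5)
```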
Of course, this can also be turned around: If I reject the theory as
inappropriate, I should try to find the point where it either neglects
something or makes some implausible assumption or something like this.
This is what I am trying to do. I still think that expected utility
theory does not handle the variance of outcomes appropriately.
Thank you very much for being so detailed in answering my question.
However, I have some problems with this answer (some of which result
from me not being specific enough in asking my question, I am sorry).
I have to admit that I was surprised when you introduced the exponential
utility function. Of course one can get a different relative position
of the mean values if one scales the domain. That was not my point.
How I meant my question is this: I assumed that the payoffs were not
just numerical descriptions of the outcomes (like amounts of money),
but already contained an assessment of the value of the outcomes for
the subject that has to make the decision. That is, I assumed that
they already represented the "utility" of the outcome. In this case,
if I am not mistaken, the expected values of the normal distributions
coincide with the expected utilities, right? Which means that expected
utility theory would prescribe to go for option A, regardless of the
values of c_1 and c_2. This is precisely the point at which I cannot
accept expected utility theory, because I believe (as I said above,
in the end it may be a matter of belief) that there can be rational
grounds in such situations to choose option B, depending, of course,
on the values of c_1 and c_2. And this is what I mean by saying that
expected utility theory does not take into account the variances of
the outcomes, because it seems to me that the rational grounds are
closely connected to these variances.
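If the payoffs are indeed already utilities, the point can be made concrete
with a small Monte Carlo check. The means and standard deviations below are
hypothetical stand-ins, since the original options A and B (and c_1, c_2)
are not quoted in this message:

```python
import random
random.seed(0)

# When payoffs are utilities, the expected utility of a normal lottery is
# just its mean, so the option with the higher mean wins regardless of the
# spread of its outcomes.
mu_a, sigma_a = 10.0, 5.0   # hypothetical option A: higher mean, wide spread
mu_b, sigma_b = 9.0, 0.1    # hypothetical option B: lower mean, near-certain
n = 100_000
eu_a = sum(random.gauss(mu_a, sigma_a) for _ in range(n)) / n
eu_b = sum(random.gauss(mu_b, sigma_b) for _ in range(n)) / n
# eu_a > eu_b no matter how large sigma_a is made: the variance never enters.
```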
By the way: I did not claim that it would be sufficient for a decision
theory to handle variances in this sense in order to be able to deal
with the paradoxes. I just hold the opinion that it is necessary.
Maybe something else is needed in addition.
Finally, I would like to consider the following: It is clear that I
can reconcile the theory and my rejection of a prescribed decision
by saying that I assigned a wrong utility function. I have to admit
that I entirely dislike this approach. With it the theory can always
be saved, especially since the utility of an outcome is a subjective
thing and thus cannot be measured objectively. We are free to fiddle
around with the utility function until it gives the decision we want.
However, this is not how it should be (and I am certain that you agree
with me on this). The utility function is an input to the theory and
cannot be changed. And once the utility function is fixed, we either
have to accept the prescription the theory makes, or we have to reject
the theory. With which we are back at the beginning.
Best regards,
Chris