Gordon Hazen writes:
 > >...What I would
 > >like in the cases of Allais' and Ellsberg's paradox is something
 > >similar: Why do most people reject the answer of expected utility
 > >theory on first sight? What is it that they are doing wrong? Is there
 > >something like the linear interpolation in the birthday problem that
 > >does not fit the situation?
 > 
 > I think so.  What people are saying in the Allais paradox is that they like 
 > L1 better than L2, but if faced now with 25% chance of having to choose 
 > between L1 and L2, they would prefer now that they would pick L2 if given 
 > the chance. Again, see my 8/5/2003 post for details.

Dear Gordon

I read it when you posted it originally and I did not find it
convincing. I reread it, and I still don't find it convincing.
The 25% chance changes the variances ;-) and thus the decision
situation differs from the original one. Only if expected utility
is the only relevant quantity are the two situations analogous.
However, this is my whole point: it is not the only relevant thing.
I see no problem in the fact that people behave differently in the
two situations.
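To illustrate what I mean (with payoffs I made up, not the actual Allais
lotteries): mixing a lottery with a 75% chance of getting nothing scales
its mean by 0.25, but it does not scale its standard deviation the same
way, so a variance-sensitive rule can legitimately rank the original and
the mixed situations differently. A minimal sketch:

```python
# Sketch with invented payoffs (not the Allais numbers): a mean-variance
# rule score(X) = E[X] - k*std(X) prefers L2 outright, but prefers the
# mixed version of L1 -- because mixing with a 75% chance of 0 changes
# the variances disproportionately.
import math

def mean_std(lottery):
    """lottery: list of (probability, payoff) pairs."""
    m = sum(p * x for p, x in lottery)
    var = sum(p * (x - m) ** 2 for p, x in lottery)
    return m, math.sqrt(var)

def mix(lottery, p):
    """With probability p play the lottery, otherwise receive 0."""
    return [(p * q, x) for q, x in lottery] + [(1 - p, 0)]

def score(lottery, k=0.1):
    m, s = mean_std(lottery)
    return m - k * s

L1 = [(1.0, 100)]              # a sure 100
L2 = [(0.3, 400), (0.7, 0)]    # mean 120, but risky

print(score(L2) > score(L1))                        # True: L2 wins outright
print(score(mix(L1, 0.25)) > score(mix(L2, 0.25)))  # True: the ranking flips
```

The value k = 0.1 is my own choice; the point is only that such a k
exists, i.e., that the mixing operation is not neutral for a rule that
cares about variance.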

 > Prescriptively, Allais paradox behavior is very hard to justify.  It allows 
 > the following very odd behavior by a decision rule:
 > 
 > 1) The decision rule prefers the policy "Choose surgery if the diagnostic 
 > test indicates high risk" to the policy "Choose conservative treatment if 
 > the diagnostic test indicates high risk".
 > 
 > 2) The diagnostic test is conducted and indicates high risk.  The decision 
 > rule now indicates that "Conservative treatment" is preferred to "Surgery".
 > 
 > Expected utility is explicitly designed to avoid this weirdness.

It is definitely a very good property of expected utility theory that
it does so. However, I doubt that it is the only possible theory
capable of avoiding such weird results.

I also have to admit that I do not quite see how Allais' paradox allows
the behavior you describe. Let me take a guess at how you mean it:
The probability of the diagnostic test indicating high risk corresponds
to the 25% chance in the lottery version? The outcomes are whether the
illness is cured (to some degree, because we need different utilities
for the outcomes in the two options)? So what we are comparing is this:

Given that the diagnostic test indicates high risk, we have

A)  Conservative treatment leads to, say, 60% recovery in all cases.
B)  Surgery leads to, say, 80% recovery with a probability of 0.8
    and to death with a probability of 0.2.

Now compare this with the situation before the diagnostic test, which
will indicate high risk with probability 0.25. Well, here I am a bit at
a loss as to how to construct the alternatives. To get the structure of
Allais' paradox, we need something like this:

A') Conservative treatment leads to 60% recovery with a probability
    of 0.25 and to death with a probability of 0.75.
B') Surgery leads to 80% recovery with a probability of 0.2 and to
    death with a probability of 0.8.
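For concreteness (taking the recovery percentages as utilities and
death as 0, which is purely my own reading of the example), expected
utility ranks both pairs the same way; this is the consistency the
quoted argument appeals to:

```python
# Quick check, using the recovery percentages as utilities and
# death = 0 -- an assumption of mine, not something from the thread.
def eu(lottery):
    """Expected utility; lottery is a list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

A  = [(1.0, 60)]                 # conservative treatment, high risk known
B  = [(0.8, 80), (0.2, 0)]       # surgery, high risk known
A2 = [(0.25, 60), (0.75, 0)]     # A', the policy viewed before the test
B2 = [(0.2, 80), (0.8, 0)]       # B', the policy viewed before the test

print(eu(A), eu(B))    # 60.0 64.0 -> expected utility prefers B
print(eu(A2), eu(B2))  # 15.0 16.0 -> and, consistently, B'
```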

I do not quite see how to get this. The diagnostic test would have to
have some healing property, ruling out the 0.75 chance of death for
conservative treatment. Is it that any treatment at all depends on the
outcome of the test (the physician refuses treatment if the test is not
made or does not indicate high risk - yes, I know that this is crazy),
and the subject is bound to die without treatment? However, in this
case A' and B' are not quite the options the subject can choose from.
It would be a two step process then: First a decision to take the test,
with some chance of recovery if it is taken and indicates high risk,
and then the decision on the treatment. Even if you say: Imagine that
you have to decide on the treatment before you decide on whether you
take the test, we don't get A' and B', because the decision on the
treatment is irrelevant if I am not taking the test. So in order to
decide on the treatment, I consider the alternatives A and B, and then
there is, of course, no problem. (Maybe the critical point is that
the policies are expressed as "if the diagnostic test indicates high
risk...", so that the actual problem could be the old one of the
material implication being at odds with normal human interpretation
of if-then-statements.)

Anyway, if I take the alternatives as I described them above, I see no
problem in preferring A over B, and B' over A'. This may depend on the
recovery percentages I chose (which I intend to represent the utilities
of the outcomes) and some other things, but what is weird about these
preferences? I assume that you had something else in mind; I am sorry
if I distorted what you wrote.
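To make the preference pattern concrete (the weighting factor k = 0.15
is my own choice, purely for illustration): a simple mean-variance rule
over the same utilities produces exactly A over B and B' over A',
without any inconsistency from its own point of view:

```python
# Sketch: score(X) = E[u(X)] - k * std(u(X)) with k = 0.15 (my choice)
# yields A over B, yet B' over A', for the treatment lotteries above.
import math

def mean_std(lottery):
    """lottery: list of (probability, utility) pairs."""
    m = sum(p * u for p, u in lottery)
    var = sum(p * (u - m) ** 2 for p, u in lottery)
    return m, math.sqrt(var)

def score(lottery, k=0.15):
    m, s = mean_std(lottery)
    return m - k * s

A  = [(1.0, 60)]
B  = [(0.8, 80), (0.2, 0)]
A2 = [(0.25, 60), (0.75, 0)]   # A'
B2 = [(0.2, 80), (0.8, 0)]     # B'

print(score(A) > score(B))     # True: A preferred to B
print(score(B2) > score(A2))   # True: B' preferred to A'
```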

 > The problem is that 
 > people cannot see that they are doing this because they cannot multiply 
 > probabilities in their heads.

This is, indeed, a problem, but I don't think it is the main problem
here. I would rather say that people take into account some other
property of the setup, which is neglected by expected utility theory.

 > So what you are saying is that if u(x) is the utility of sure payoff x 
 > (however defined), and X is a gamble over payoffs, then you want the 
 > utility of X (however defined) to depend not just on the mean of the random 
 > variable U = u(X), but also perhaps on the variance of U, or other features 
 > of the distribution of U.

I don't like this description, because in my view it contains an
equivocation of the word "utility". See below.

 > But there is a problem here: What is "the utility" u(x) of a sure payoff 
 > x?  The function u(x) should satisfy u(x) > u(y) whenever x is preferred to 
 > y.

Indeed, this is a minimal, necessary requirement.

 > But any increasing transformation v(x) = f(u(x)) will also satisfy 
 > v(x) > v(y) whenever x is preferred to y.

Yes, v will satisfy *this* requirement, but maybe not some other
requirements, which u satisfies.

 > So v(x) could equally well be "the utility" of x.

No, this does not follow! It follows only if you assume that all a
utility function is meant to express is a preference ordering, nothing
else. However, such a view is untenable, as I see things. We do
computations with the values of the utility function; for example,
we compute expected values. For this to be meaningful, the values
of the utility function cannot be merely ordinal, but have to be
metric/quantitative. In addition, considering what utility is meant
to express, I would say that at least ratios of utilities matter,
and therefore we cannot apply just any increasing transformation.
Actually, I see little besides multiplication by a positive constant
factor that is admissible, if we want to preserve the semantics
intended by the subject who specified the values. (I leave aside
here the issue of whether it is reasonable in the first place to
specify an assessment of utility on a linear, numerical scale.)
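A small sketch of why the admissible transformations are limited
(numbers are mine, for illustration): an increasing transform
v = f(u) preserves every comparison between sure outcomes, yet it can
reverse the expected-value ranking of a gamble against a sure payoff:

```python
# An increasing transformation keeps the ordering of sure outcomes but
# can flip the ranking of gambles under expected value -- so "the
# utility" is not arbitrary once we compute expectations with it.
import math

def u(x):
    """Take the payoff itself as its utility (my choice, for the sketch)."""
    return x

def v(x):
    """An increasing transformation of u."""
    return math.sqrt(x)

gamble = [(0.5, 0), (0.5, 100)]   # fifty-fifty between 0 and 100
sure = 49

eu_gamble = sum(p * u(x) for p, x in gamble)   # 50.0
ev_gamble = sum(p * v(x) for p, x in gamble)   # 5.0

print(eu_gamble > u(sure))   # True:  under u, the gamble wins
print(ev_gamble > v(sure))   # False: under v = sqrt(u), the sure 49 wins
```

Only positive affine transformations u -> a*u + b (a > 0) leave all
expected-utility rankings intact; if ratios of utilities are to be
meaningful as well, as argued above, even the shift b must go.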

 > Do you also want the utility of X to depend on more 
 > than just the mean of the random variable V = v(X)?

This question is difficult to answer, because in my view the word
"utility" is used equivocally here: it denotes two different things.
First, there is the utility of a specific outcome, where no chance
element is involved. This is what I think the utility function
specifies. It is a subjective assessment made by the person who has
to make the decision: this person imagines him-/herself in the
situation and then says how much he/she values it (relative to other
situations). Then there is the "utility" of the option, where the
option leads to different possible outcomes, so that it can be
represented by a random variable over outcomes (or over the utilities
of these outcomes). This, for me, has a different character. It is
not -- as expected utility theory says -- just the expected value of
the random variable (only in that case would I say it is the same
"utility" as above). For me, and this is my point, not only the
expected value but also the variance of the random variable matters
when I assess the option (and maybe some other things -- the shape
of the distribution could be relevant, too). However, expected
utility theory says that the variance does not matter, and neither
do other properties of the distribution. I cannot understand why.
For me it is obvious that it matters.

 > Do you want this for every increasing transformation v of u?

No. As I said above, I think that there are only very few
transformations that preserve all relevant (i.e. semantically
relevant) properties of a utility function.

 > It is far from clear this is even possible.

Right, but I don't want that.

 > So I think your requirement is basically incoherent unless you further 
 > clarify what it is you want.

I hope what I wrote above makes clearer what my problem is.

 > Here is a further difficulty: In expected utility theory, the utility of a 
 > gamble X is E[u(X)], equal to E[U], where U is the random variable 
 > u(X).  So the utility of X depends only on the mean of U, not its 
 > variance.  But suppose we postulate that "the utility" v(x) of a sure 
 > payoff x is given by v(x) = exp(u(x)).  There is nothing wrong with this, 
 > as v(x) is simply an increasing transformation of u(x).

Well, for me there is something wrong with this, see above.

 > Then E[u(X)], 
 > which is still the utility of X, is given by E[ln(v(X))] = E[ln(V)], where 
 > V is the random variable v(X).  And voila!  the utility of X, being equal 
 > to E[ln(V)], will depend on more than just the mean of V, just as you 
 > wished.  (For example, if U happens to be normally distributed, then V will 
 > have a lognormal distribution, and E[ln(V)] will depend on both the mean 
 > and variance of V.)

I am tempted to turn this around and say: the fact that the utility
now depends on the variance shows that you applied an inadmissible
transformation. :-)
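The arithmetic itself is not in dispute; a quick simulation (with
parameters I picked myself) confirms that E[ln V] recovers only the
mean of U, while E[V] depends on the variance as well:

```python
# With U normal, V = exp(U) is lognormal. E[ln V] equals the mean of U;
# E[V] = exp(mu + sigma^2 / 2) depends on the variance too.
import math
import random

random.seed(0)
mu, sigma = 1.0, 0.5
samples = [random.gauss(mu, sigma) for _ in range(200_000)]
V = [math.exp(x) for x in samples]

mean_lnV = sum(math.log(v) for v in V) / len(V)
mean_V = sum(V) / len(V)

print(round(mean_lnV, 2))   # ~1.0, the mean of U, independent of sigma
print(round(mean_V, 2))     # ~exp(1.125) = 3.08, shifted by the variance
```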

 > So all you need to do to make expected utility acceptable according to your 
 > criterion is say that "the utility" of a sure payoff x is exp(u(x)), where 
 > u(x) is what everyone else calls the utility of x.  Of course this is 
 > silly, but the reason it is silly is that your requirement is incoherent.

Again, I don't think it is possible to apply this transformation
without changing the semantics of the utility function. Of course,
a different utility function can lead to a different decision, but
that means changing the decision problem. It is like replacing the
subject who has to make the decision with someone else who has
different values.

Best regards,
Chris
