And now that rarity from me, an original post....

Approval Voting is a special case of Range, with rating values restricted to 0 and 1. When Brams proposed Approval, it was as a method free of vulnerability to "tactical" or "strategic" voting, i.e., voting with reversed preference in order to produce a better outcome. And, indeed, both Range and Approval are immune to that, i.e., there is no advantage to be gained by it, ever (at least not in terms of outcome).

The proponents of other methods attacked this by redefining -- without ever being explicit about it -- the meaning of strategic voting. Because the concept was developed to apply to methods using a preference list, whether explicit on the ballot or presumed to exist in the mind of the voter, a strategic vote was one which reversed preference, simple. But with Approval and Range, it is possible to vote equal preference. Is that insincere if the voter has a preference? The critics of Range and Approval have claimed so, and thus they can claim that Range and Approval are "vulnerable to strategic voting."

Arrow, in explaining why he did not study cardinal rating methods (like Range and Approval), methods that allow equal ranking, wrote that they offended him because there is no single sincere vote. I.e., a whole set of votes could be considered sincere. If the voter prefers A>B>C, the voter could vote for A or for A and B (and, for that matter, for A and B and C), and still be sincere.
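To make that set concrete: under the usual definition, a sincere Approval ballot never approves a candidate while disapproving someone ranked above that candidate, so the sincere ballots for a strict ranking are exactly its "top segments." A toy sketch (Python; my own illustration, not anything from Arrow or Brams):

def sincere_approval_ballots(ranking):
    """All Approval ballots consistent with a strict preference ranking:
    each one approves some top segment ("prefix") of the ranking."""
    return [set(ranking[:k]) for k in range(len(ranking) + 1)]

# Voter with strict preference A > B > C, as above.
for ballot in sincere_approval_ballots(["A", "B", "C"]):
    print(ballot or "{} (approve nobody)")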

(A side-note: unless a preferential ballot allows ranking all candidates, it does allow equal ranking *at the bottom,* indeed it requires it. But we've tended to focus on the winner only.)

The critics, I've seen, will consider a vote for A only, with Approval, when the voter supposedly "approves" both A and B, to be "strategic." It certainly is strategic in the sense of "smart," under some conditions. However, this is where preference strength comes in, and a strange twist of the definitions takes place. We must assume that if the voter votes only for A, the voter does, indeed, prefer A. So with a preferential method, as with Plurality, the vote for A alone is sincere, and a vote for B alone would be insincere. In other words, Approval voting is "vulnerable" to a voter voting what would be considered a sincere vote in a method that does not allow equal ranking. This is having the critical cake and eating it too.

So what are "sincere" Range and Approval votes? Should voters in Range vote "sincerely"? Or should they vote "strategically," which means that their vote differs depending on their perception of the election probabilities? The voter votes only for A in the example above, even though the voter supposedly "approves" of B as well, because the voter perceives the important choice as being between A and B, with C being unimportant. If the voter sees C as possibly winning, with significant probability, the voter is much more likely to vote for both A and B.
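One common way to model this -- my gloss, not anything formally defined by the critics or by me -- is the threshold rule: approve every candidate you rate above the probability-weighted expected utility of the winner. With made-up utilities and perceived win probabilities, it reproduces exactly the behavior just described:

def strategic_approval(utilities, win_probs):
    """Threshold strategy: approve every candidate rated above the
    expected utility of the election outcome, computed from the voter's
    perceived win probabilities.  (A common textbook model; all the
    numbers below are illustrative assumptions.)"""
    expected = sum(utilities[c] * win_probs[c] for c in utilities)
    return {c for c in utilities if utilities[c] > expected}

utilities = {"A": 1.0, "B": 0.7, "C": 0.0}

# Perceived two-way race between A and B; C seen as irrelevant.
print(strategic_approval(utilities, {"A": 0.5, "B": 0.5, "C": 0.0}))    # -> approves A only

# C now looks like a serious threat; the same voter approves A and B.
print(strategic_approval(utilities, {"A": 0.35, "B": 0.35, "C": 0.30})) # -> approves A and B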

The root of the critical problem is that votes have been considered expressions of preference alone, and the goal has been to find a voting method that works the same in the presence of voter knowledge of the election probabilities as it would in a zero-knowledge election. The problem is that this is a strange and artificial creation, when we look at it carefully. It doesn't exist in the real world, and there are many obstacles in the way of it, including Arrow's theorem and how people will always behave. When we ask people what they want, they will *always* modify the answers according to how they perceive the probabilities of each possibility.

Which would you prefer, $10 or $100? Seems simple, eh? And I've argued that any good ballot design will allow you to express that preference. However, suppose there are three alternatives: $0, $10, or $100. We can easily rank these, but suppose that these are personal utilities for the three alternatives, that they are not identical for all the voters, and that some voters will prefer the outcomes in a different order. And if we insist on our favorite, 100, the probabilities, in our judgement, are that we'll get 0; while the 100 outcome is obviously preferable to us, we consider it unlikely. So how do we vote in an Approval election? I've set it up to be obvious. We vote for 100 and for 10. Now, how do we vote in Range? The supposed sincere vote, based on true personal utilities, which we've made obvious, would be, in Range 100, to vote the dollar values. Yet that would be almost as foolish as to vote for 100 only.
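To put rough numbers on that: one way to model the probability-aware Range 100 vote (again, my own illustration, not a prescription) is to rescale the dollar utilities so that the perceived-realistic worst and best outcomes span the whole ballot range, clipping anything outside it:

def range_vote(utilities, likely_low, likely_high, steps=100):
    """Rescale raw utilities so the perceived realistic worst and best
    outcomes sit at 0 and the top rating; clip and round everything else.
    Purely an illustrative model of probability-aware Range voting."""
    span = likely_high - likely_low
    return {option: round(min(max((u - likely_low) / span * steps, 0), steps))
            for option, u in utilities.items()}

dollars = {"$0": 0, "$10": 10, "$100": 100}

# The "sincere" dollar-value vote: raw utilities on the 0-100 scale.
print(range_vote(dollars, likely_low=0, likely_high=100))
# {'$0': 0, '$10': 10, '$100': 100}

# The strategic vote when the realistic contest is $0 vs $10.
print(range_vote(dollars, likely_low=0, likely_high=10))
# {'$0': 0, '$10': 100, '$100': 100}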

I came across the following piece, at http://cepa.newschool.edu/het/essays/uncert/vnmaxioms.htm


In the von Neumann-Morgenstern hypothesis, probabilities are assumed to be "objective" or exogenously given by "Nature" and thus cannot be influenced by the agent. However, the problem of an agent under uncertainty is to choose among lotteries, and thus find the "best" lottery in Δ(X). One of von Neumann and Morgenstern's major contributions to economics more generally was to show that if an agent has preferences defined over lotteries, then there is a utility function U: Δ(X) → R that assigns a utility to every lottery p ∈ Δ(X) that represents these preferences.

Of course, if lotteries are merely distributions, it might not seem to make sense that a person would "prefer" a particular distribution to another on its own. If we follow Bernoulli's construction, we get a sense that what people really get utility from is the outcome or consequence, x ∈ X. We do not eat "probabilities", after all, we eat apples! Yet what von Neumann and Morgenstern suggest is precisely the opposite: people have utility from lotteries and not apples! In other words, people's preferences are formed over lotteries and from these preferences over lotteries, combined with objective probabilities, we can deduce what the underlying preferences on outcomes might be. Thus, in von Neumann-Morgenstern's theory, unlike Bernoulli's, preferences over lotteries logically precede preferences over outcomes.

How can this bizarre argument be justified? It turns out to be rather simple actually, if we think about it carefully. Consider a situation with two outcomes, either $10 or $0. Obviously, people prefer $10 to $0. Now, consider two lotteries: in lottery A, you receive $10 with 90% probability and $0 with 10% probability; in lottery B, you receive $10 with 40% probability and $0 with 60% probability. Obviously, the first lottery A is better than lottery B, thus we say that over the set of outcomes X = ($10, 0), the distribution p = (90%, 10%) is preferred to distribution q = (40%, 60%). What if the two lotteries are not over exactly the same outcomes? Well, we make them so by assigning probability 0 to those outcomes which are not listed in that lottery. For instance, in Figure 1, lotteries p and q have different outcomes. However, letting the full set of outcomes be (0, 1, 2, 3), then the distribution implied by lottery p is (0.5, 0.3, 0.2, 0) whereas the distribution implied by lottery q is (0, 0, 0.6, 0.4). Thus our preference between lotteries with different outcomes can be restated in terms of preferences between probability distributions over the same set of outcomes by adjusting the set of outcomes accordingly.

But is this not arguing precisely what Bernoulli was saying, namely, that the "real" preferences are over outcomes and not lotteries? Yes and no. Yes, in the sense that the only reason we prefer a lottery over another is due to the implied underlying outcomes. No, in the sense that preferences are not defined over these outcomes but only defined over lotteries. In other words, von Neumann and Morgenstern's great insight was to avoid defining preferences over outcomes and capturing everything in terms of preferences over lotteries. The essence of von Neumann and Morgenstern's expected utility hypothesis, then, was to confine themselves to preferences over distributions and then from that, deduce the implied preferences over the underlying outcomes.
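To make the quoted lottery arithmetic concrete, here is a minimal sketch (Python; the code and the identity utility function are mine, the probabilities are the ones in the quoted example):

def expected_utility(lottery, utility):
    """von Neumann-Morgenstern expected utility of a lottery: the
    probability-weighted sum of the utilities of its outcomes."""
    return sum(p * utility(x) for x, p in lottery.items())

u = lambda x: x  # assume utility is just the dollar/outcome value (illustrative)

lottery_A = {10: 0.9, 0: 0.1}           # $10 w.p. 90%, $0 w.p. 10%
lottery_B = {10: 0.4, 0: 0.6}           # $10 w.p. 40%, $0 w.p. 60%
p = {0: 0.5, 1: 0.3, 2: 0.2, 3: 0.0}    # the quote's lottery p over (0, 1, 2, 3)
q = {0: 0.0, 1: 0.0, 2: 0.6, 3: 0.4}    # the quote's lottery q over (0, 1, 2, 3)

print(expected_utility(lottery_A, u), expected_utility(lottery_B, u))  # approx. 9.0 and 4.0
print(expected_utility(p, u), expected_utility(q, u))                  # approx. 0.7 and 2.4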

"Preferences" in Range Voting is preferences over lotteries, not preferences over outcomes, as such. I, of course, support voting methods which allow the expression of both, hybrid methods, and which resolve the occasional conflict between a sum-of-votes approach and a pairwise winner approach, using not the original ballot, but a new one, i.e., a runoff that turns the choice involved back to the voters. Some supporters of Range are disturbed by this, because, supposedly, the Range Votes, summed, elect the social utility winner, which, they argue is the best winner for society. However, they've neglected the overall process in favor of resolving it in a single ballot. If a single ballot *must* be used, no matter what the cost, the Range outcome is indeed the closest we can get to ideal, I suspect. But we are not limited to that, and we can go back to the voters -- a different set of voters, usually! -- and ask them. The exact details of that additional election I'll leave for another paper; parliamentary procedure would suggest that it be an entirely new election, informed by the results of the first one, plus additional campaigning, but practicality may suggest something different. Regardless, it's apparent to me that two ballots is better than one, when one doesn't come up with a clear majority choice or better.

Economics. It seems to be a field that political scientists consider disreputable. But it is, of course, a field where substantial theoretical expertise has been applied to the problem of making decisions. A voting system is, obviously, such a problem. Warren Smith found this paper and pointed it out to us:

http://ideas.repec.org/a/ecm/emetrp/v67y1999i3p471-498.html

This is a paper by Dhillon and Mertens. An abstract:

In a framework of preferences over lotteries, the authors show that an axiom system consisting of weakened versions of Arrow's axioms has a unique solution, 'relative utilitarianism.' This consists of first normalizing individual von Neumann-Morgenstern utilities between zero and one and then summing them. The weakening consists chiefly in removing from IIA the requirement that social preferences be insensitive to variations in the intensity of preferences. The authors also show the resulting axiom system to be in a strong sense independent.

Relative Utilitarianism is an analytical method which takes Range votes as input; as Warren Smith has stated he prefers, the votes are (I think) rational numbers in the range 0-1, with no restriction on resolution. Practical Range Voting, by contrast, uses some specified resolution; I define Range N as Range with N+1 choices, so Range 1 is Approval (with two choices, 0 and 1), and we can express Range votes as 0-N; i.e., Range 100 may be voted as 0-100, but is really 0-1 in steps of 1/100 vote.
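A minimal sketch of relative utilitarianism as the abstract describes it -- normalize each voter's utilities so they run from 0 to 1, then sum -- with made-up ballots; ties and fully indifferent voters are handled crudely:

def relative_utilitarian_winner(profiles):
    """Relative utilitarianism per the Dhillon-Mertens abstract:
    normalize each voter's utilities so min -> 0 and max -> 1, then sum.
    (Illustrative sketch only.)"""
    totals = {}
    for utilities in profiles:
        lo, hi = min(utilities.values()), max(utilities.values())
        for cand, u in utilities.items():
            norm = 0.5 if hi == lo else (u - lo) / (hi - lo)
            totals[cand] = totals.get(cand, 0.0) + norm
    return max(totals, key=totals.get), totals

voters = [
    {"A": 100, "B": 60, "C": 0},    # made-up personal utilities
    {"A": 0,   "B": 90, "C": 100},
    {"A": 80,  "B": 100, "C": 0},
]
print(relative_utilitarian_winner(voters))  # -> B wins on the summed normalized utilities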

In pure relative utilitarianism, then, unless the voter is indifferent to a choice, the vote in that choice will always show preference, but the magnitude of the preference will vary according to perceived probabilities. In practical Range Voting, the von Neumann-Morgenstern utilities get rounded off, so the ballot may show equal preference when the reality is that there is an underlying preference, one whose combination of absolute magnitude and relative probability brings it within the resolution of the Range method.
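A quick illustration of that rounding (the numbers are mine): a small preference that survives in Range 100 can disappear in Range 10:

def to_range_n(rating, n):
    """Express a 0-1 rating as one of the N+1 steps of Range N."""
    return round(rating * n)

# A weak preference for A over B (small utility gap, low perceived relevance)...
a, b = 0.94, 0.91
print(to_range_n(a, 100), to_range_n(b, 100))  # 94 91 -> preference still visible in Range 100
print(to_range_n(a, 10),  to_range_n(b, 10))   # 9 9   -> rounded to equal preference in Range 10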

Now, voters don't sit down with a calculator, but what was claimed in the first paper above is that this is, in fact, how we make decisions. It's much simpler than one might think; the normal procedure for determining these utilities is quite simple in *most* real elections: Pick two frontrunners (which depends on probabilities only, not personal preferences). Then use preferences to rate the preferred frontrunner at maximum and the other at minimum. If one has preferences of significance outside this set and this range (i.e., a candidate preferred over the better frontrunner, or one ranked below the worse frontrunner), then one might consider, if the method has sufficient resolution, pulling the better frontrunner down a notch or the worse one up a notch, to preserve preference expression. Alternatively, perhaps the method allows expression of preference independently of rating.
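The procedure above comes down to something like this (a rough sketch of my own, with the optional "notch" adjustment included; the candidate names and numbers are made up):

def frontrunner_range_vote(utilities, frontrunners, n=100, keep_notch=False):
    """Rate the preferred frontrunner at the top of the scale and the
    other at the bottom, scale everyone else between them, and optionally
    give up one notch at each end so candidates outside the frontrunner
    span still show a preference.  Illustrative only; the utilities, the
    choice of two frontrunners, and the notch rule are all assumptions."""
    best, worst = sorted(frontrunners, key=lambda c: utilities[c], reverse=True)
    top, bottom = n, 0
    if keep_notch:
        if any(utilities[c] > utilities[best] for c in utilities):
            top -= 1       # reserve the very top step for a preferred non-frontrunner
        if any(utilities[c] < utilities[worst] for c in utilities):
            bottom += 1    # reserve the very bottom step for a worse non-frontrunner
    span = utilities[best] - utilities[worst]
    return {c: round(min(max(bottom + (u - utilities[worst]) / span * (top - bottom), 0), n))
            for c, u in utilities.items()}

utils = {"Favorite": 1.0, "FrontA": 0.8, "FrontB": 0.2, "Worst": 0.0}
print(frontrunner_range_vote(utils, ["FrontA", "FrontB"]))
# {'Favorite': 100, 'FrontA': 100, 'FrontB': 0, 'Worst': 0}
print(frontrunner_range_vote(utils, ["FrontA", "FrontB"], keep_notch=True))
# {'Favorite': 100, 'FrontA': 99, 'FrontB': 1, 'Worst': 0}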

Only when there are three candidates considered possible winners does it get more complicated. But the point of all this is that voters will always consider election probabilities, and thus pure Independence of Irrelevant Alternatives is a real stumbling block if insisted upon. The voting power I assign to the pairwise preference between $10 and $100 must depend not only on my pure intensity of preference, but also on my perception of the probabilities. Voters in Range are choosing lotteries with specified prizes, with values and probabilities as estimated by the voter, and setting their votes accordingly.

And they sincerely choose them, i.e., they attempt to maximize their personal expected return, and this is *exactly* what we want them to do. It's not "greedy" or "selfish," it's "intelligent." Now, the shift in votes due to the probability perceptions can be mistaken. A dark horse candidate may not receive the full vote strength that the candidate would receive in a zero-knowledge election, with all candidates being considered equally likely. Thus, we'd need runoffs to fix problems, which might be detected through preference analysis. But the method is theoretically ideal: as Dhillon and Mertens show, it is the unique solution to Arrow's axioms with a minimal tweak, one that is utterly necessary; the requirement of absolute Independence of Irrelevant Alternatives was, quite simply, a mistaken intuition. Relative Utilitarianism doesn't require -- in its pure form, it actually does not allow -- the suppression of preferences, but the *magnitude* of the expressed preference varies with the alternatives, and that technically violates IIA as it was understood.

This brings us to a problem with Range Voting. If voters expect that their task, with Range, is to give "sincere ratings," regardless of the effect on results, Range will suffer badly from failures of IIA and, indeed, as claimed by critics, voters who pay no attention to irrelevant candidates in determining how they vote for the two frontrunners will have an advantage, through "bullet voting" or through "exaggerating."

Range Voting is still *voting,* but merely with fractional votes allowed, not required, when N is greater than 1. Approval is still voting. It is not about "approving" the candidates, except in the sense that by voting for a candidate, one is approving the election of that candidate, *compared to the likely alternatives*. It is not a sentiment, it's an action, adding weight to an outcome, choosing to effectively participate in it. Add weight to an irrelevant alternative and it doesn't matter, by definition.

In almost all elections, there are two frontrunners, and this is why Plurality usually works, and only breaks down through the related spoiler or center squeeze effects, because of the restriction against voting for more than one. All advanced voting methods -- with one exception, not applied anywhere that I'm aware of, Asset Voting -- allow voting for more than one, but through various procedures. (Even Asset would normally allow voting for more than one; the original form was proposed for an STV ballot with optional preferential voting, to deal with the very common problem of exhausted ballots.)

Comments invited.

----
Election-Methods mailing list - see http://electorama.com/em for list info
