Various factors that affect real elections have been neglected in the simulations done to compare the performance of voting systems. The analysis done so far is quite valuable and represents the best data we have on voting system performance, but the neglect of real voting patterns and factors has, I suspect, produced warped comparisons of systems.

The technique of simulating underlying absolute preferences has too quickly moved into an assumption that preferences can be normalized and that all members of the simulated population will actually vote. In fact, real voter behavior can be predicted to vary with preference strength.
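
To make that concrete, here is a minimal sketch, in Python, of the kind of simulation step I have in mind. The uniform utility draws, the saturating turnout function, and its scale parameter are all my own illustrative assumptions, not anything taken from the published simulations.

    import random

    def absolute_utilities(n_voters, candidates, seed=0):
        """Draw un-normalized (absolute) utilities for each voter and candidate.
        Illustrative only: a serious simulation would use its own utility model
        (issue space, clusters, and so on) rather than uniform draws."""
        rng = random.Random(seed)
        return [{c: rng.random() for c in candidates} for _ in range(n_voters)]

    def turnout_probability(utils, scale=5.0):
        """Assumed turnout model: the wider the voter's spread between best and
        worst candidate (preference strength), the likelier the voter is to vote.
        The saturating form and the scale parameter are invented for illustration."""
        spread = max(utils.values()) - min(utils.values())
        return scale * spread / (1.0 + scale * spread)

    def simulate_electorate(n_voters, candidates, seed=0):
        """Return only the voters who actually turn out, instead of assuming that
        every simulated voter casts a (fully ranked) ballot."""
        rng = random.Random(seed + 1)
        voters = absolute_utilities(n_voters, candidates, seed)
        return [u for u in voters if rng.random() < turnout_probability(u)]

    if __name__ == "__main__":
        electorate = simulate_electorate(1000, ["A", "B", "C"])
        print(len(electorate), "of 1000 simulated voters turned out")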

As an example, if I'm correct, analysis of Bucklin made the assumption that all voters would rank all candidates, which is actually preposterous. Further, with Top Two Runoff, an assumption has been made that all of the original voters will then vote in the runoff, so the simulation, of course, simulates a Contingent Vote, which accomplishes the same thing with a single ballot, unless, of course, voters truncate, and truncation hasn't been simulated, to my knowledge.

In fact, voters with low preference will not turn out to vote in a special election runoff. If the system is implemented with the primary as a special election (as in Cary, NC) and the runoff in the general election, then we'll see the effect of low preference strength on turnout in the primary instead.

It is a common assumption that low turnout in an election is a Bad Thing. However, I've seen little analysis that does anything more than make partisan assumptions; allegedly, low turnout favors Republican candidates. If so, then the source of the problem would be large numbers of voters who might otherwise favor a Democrat, but who have, in fact, low absolute preference strength, and Bayesian regret analysis of the whole population would likely reveal that the Republican would be the social utility winner.
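
A toy example, with numbers invented purely for illustration, shows how both halves of that claim can hold at once: the Republican wins the low-turnout election, and the Republican is also the higher total-utility candidate over the whole population, because the voters who stay home barely care.

    # Bloc format: (number of voters, utility of R, utility of D, turns out?)
    # Absolute utilities on a 0-10 scale; all numbers are invented.
    blocs = [
        (40, 9, 1, True),   # strong R supporters: big gap, they vote
        (35, 1, 9, True),   # strong D supporters: big gap, they vote
        (25, 5, 6, False),  # weak D leaners: tiny gap, they stay home
    ]

    # Result among those who actually vote (simple two-way count).
    votes_R = sum(n for n, r, d, voted in blocs if voted and r > d)
    votes_D = sum(n for n, r, d, voted in blocs if voted and d > r)

    # Social-utility tally over the *whole* population, voters and non-voters.
    utility_R = sum(n * r for n, r, d, voted in blocs)
    utility_D = sum(n * d for n, r, d, voted in blocs)

    print("votes:", votes_R, "R to", votes_D, "D")              # 40 R to 35 D
    print("total utility:", utility_R, "R vs", utility_D, "D")  # 520 R vs 505 D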

Turnout differential shifts any voting system toward social utility optimization and away from majority, plurality, or Condorcet criteria. Neglecting this has led voting systems theorists to overlook the power of runoff systems, which have been assumed to be mere technical methods.

In fact, runoffs have not only this value, but also the values noted by Robert's Rules of Order for repeated ballot in general. (Robert's Rules does not support "runoffs," i.e., elections with rigidly determined candidate eliminations; it requires repeated elections, in toto, if there is a failure to find a majority of all those voting in the election accepting the winner.)

Those values are: the ability of the voters to make decisions, including compromises, based on the results of the first ballot, and, as well, to make more informed decisions based on better knowledge of the candidates. We know that real nonpartisan runoff elections do produce "comeback" elections in about one-third of cases. This does not happen with Instant Runoff Voting, so the preference profiles voters express in the primary are not simply replicated in the runoff; at least, that is the most likely explanation. Truncation is another possible explanation.

Some study of Bucklin has been based on an assumption that Later-No-Harm is important to voters. It's clearly important to *some* voters, specifically the most partisan. A partisan Democrat is not likely to add a second-rank vote for Bush in a Bush v. Gore election. But an independent voter *might*. And more to the point, if the voter buys the Tweedle-dum and Tweedle-dee argument of a candidate like Nader, they might well not cast any second-rank vote at all, even if they have some preference for one candidate over another.

We know, however, that in nonpartisan elections, ranking additional candidates was reasonably common in actual Bucklin elections. However, there is a basic voting phenomenon that has been inadequately considered, even though it was first noted, to my knowledge, by Charles Dodgson (Lewis Carroll) in about 1883. Many or most voters have a good idea only of their first preference and, in an STV system, may be likely to truncate below that; bullet voting, or at least some level of truncation, makes sense as a sincere vote for such a voter.

Out of some idea that "majority" is important, but without understanding the *purpose* of seeking a majority, some places and some theorists have advocated mandatory full ranking, which is combined, in Australia, with mandatory voting. It's illegal not to vote there, and if a cast ballot does not fully rank the candidates (except in places which have Optional Preferential Voting), the ballot is "informal" and is discarded.

Majority failure is an essential feature of single-ballot systems, all of them; it will occur with considerable frequency unless the "majority" is in some way coerced, or the options are limited to two, which also frustrates democratic purposes.

Hence repeated ballot is ideal, and only deprecated for reasons of expense and efficiency. The question then becomes, once we realize this, *how much damage is done in the name of efficiency?*

If the damage is trivial, it's not really a problem. But if the damage is major, and it can be, then avoiding runoff elections in the name of saving money and "trouble" is penny-wise and pound-foolish.

Rather, the question would become how to *avoid* runoffs when sufficient data can be collected from voters to make the runoff redundant and unnecessary. And there has been far too little study of this problem. Robert's Rules suggests preferential voting as a way to reduce the need for repeated ballot, and certainly considers it an improvement over accepting a plurality result, but apparently does not realize that the sequential elimination method it describes is singularly inefficient at the goal of actually finding a majority of votes. Its authors are in no way deluded, however, into thinking that the "last round majority" of IRV is a real majority; it obviously is not, and Robert's Rules requires that the election be repeated if a real majority of votes is not found.
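
To illustrate that last point, here is a small sequential-elimination (IRV) count in Python, written to report the last-round tally alongside the number of ballots actually cast; the ballot profile is invented, and the tie-breaking rule is arbitrary.

    from collections import Counter

    def irv_with_exhaustion(ballots):
        """Sequential-elimination count over possibly truncated rankings.
        Returns the last-round leader, that leader's last-round tally, and the
        number of ballots cast, so the "last round majority" can be compared
        with a real majority of all ballots. Ties break alphabetically."""
        ballots = [list(b) for b in ballots]
        remaining = {c for b in ballots for c in b}
        while True:
            tallies = Counter(b[0] for b in ballots if b)  # exhausted ballots drop out
            if not tallies:
                return None, 0, len(ballots)
            leader, top = max(sorted(tallies.items()), key=lambda kv: kv[1])
            if 2 * top > sum(tallies.values()) or len(remaining) <= 2:
                return leader, top, len(ballots)
            loser = min(sorted(tallies.items()), key=lambda kv: kv[1])[0]
            remaining.discard(loser)
            ballots = [[c for c in b if c != loser] for b in ballots]

    # Invented profile with heavy truncation: 100 ballots cast.
    ballots = [["A"]] * 40 + [["B", "C"]] * 35 + [["C"]] * 25
    print(irv_with_exhaustion(ballots))  # ('A', 40, 100): a "last round majority"
                                         # of only 40 out of 100 ballots cast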

Bucklin is quite a bit more efficient, because it counts all the votes. It's been argued that counting all the votes, as Bucklin will have done in any situation that is at or approaching majority failure, will cause voters concerned about Later-No-Harm to truncate, and that's true, but only to a degree. It depends on the preference strength of the voters for their favorite over all alternatives. Thus the additional preferences that voters express in Bucklin are, in fact, sincere additional approvals, assuming reasonably educated voters. When Bucklin finds a majority, it is a true majority of voters accepting that result.
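
Here is a minimal sketch of that counting rule in Python, with each ballot written as one set of candidates per rank so that truncation and equal ranking can both be expressed; the ballots are invented. A majority here means a majority of all ballots cast, not of some shrinking "continuing" total.

    def bucklin_count(ballots, n_ranks=3):
        """Classic Bucklin tally: add each rank's votes in successive rounds and
        stop as soon as some candidate is approved on a majority of the ballots
        cast. Returns (winner, totals), or (None, totals) on majority failure."""
        majority = len(ballots) // 2 + 1
        totals = {}
        for rank in range(n_ranks):
            for ballot in ballots:
                for candidate in (ballot[rank] if rank < len(ballot) else set()):
                    totals[candidate] = totals.get(candidate, 0) + 1
            leaders = [c for c, n in totals.items() if n >= majority]
            if leaders:
                return max(leaders, key=lambda c: totals[c]), totals
        return None, totals  # majority failure: under the scheme above, hold a runoff

    # Five invented ballots, with truncation and one equal-ranked third choice.
    ballots = [
        [{"A"}, {"B"}, {"C"}],
        [{"A"}, set(), set()],        # a bullet vote
        [{"B"}, {"C"}, {"A", "D"}],
        [{"C"}, {"B"}, set()],
        [{"D"}, {"B"}, set()],
    ]
    print(bucklin_count(ballots))  # B wins at the second rank with 4 of 5 ballots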

Further, selective truncation based on preference strength, quite likely, shifts real Bucklin results toward Range results, since low preference strength votes are suppressed.

From these arguments, I suspect that in real social utility performance, Bucklin with a majority required, used in a primary, with plurality Bucklin reserved for runoffs (where it can be two-rank Bucklin), is very close to ideal; but this also depends on details of Bucklin that have sometimes been missed.

As an example of an important detail, some real and notable Bucklin implementations allowed multiple voting in the third rank. Thus the method was a closer implementation of Approval voting than has been realized. One really could vote this kind of Bucklin as an antiplurality, "anybody but Joe" method. An obvious improvement on the old Bucklin, then, would be to allow equal ranking in all ranks. It's not terribly important, overall, that equal ranking be allowed in the first two ranks, but it would allow better and more accurate expression by voters, as well as providing some safety net for the unusual situations where strategic voting in Bucklin could suggest preference reversal. Instead of reversal, what can be done is to vote equal rank, which is less harmful and more sincere. ("Equal" means "to be equally supported under the voter's understanding of election conditions." It does not mean that there is no preference, but that the preference is small compared to other issues, most particularly a strong dislike of some frontrunner. I actually think this condition would be so rare that I'm not stressed about the idea that equal ranking might continue to be disallowed in the first two ranks, but it's important, if Bucklin is to handle large candidate sets, that it be allowed in third rank.)
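
In the ballots-as-sets notation of the sketch above, and with names made up for illustration, such an "anybody but Joe" vote under that kind of Bucklin is simply a ballot whose third rank approves every remaining candidate except Joe:

    # Hypothetical names; multiple votes are allowed in the third rank.
    candidates = {"Alice", "Bob", "Carol", "Dave", "Joe"}
    anybody_but_joe = [{"Alice"}, {"Bob"}, candidates - {"Alice", "Bob", "Joe"}]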

Bucklin with equal ranking thus becomes quite a close approximation of Range voting, and could even be implemented using a Range ballot, as long as an approval cutoff is specified. In combination with a runoff requirement when there is failure to obtain majority approval, most realistic election pathologies can be avoided. Bucklin is alleged to violate the Condorcet Criterion, but, in fact, I consider it likely that it only does so under conditions where the Condorcet Criterion fails to find an optimal winner but merely pretends to, based on an assumption that all preferences are equal.
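
As a sketch of how a Range ballot could feed a Bucklin count, here is one way to collapse scores into ranks once an approval cutoff is given. The banding rule below (equal-width bands from the voter's highest approved score down) is my own assumption about how such a ballot might reasonably be read, not an established specification.

    def range_ballot_to_bucklin(scores, cutoff, n_ranks=3):
        """Collapse a Range (score) ballot into Bucklin ranks. Candidates scored
        below the cutoff are left unranked; the rest are grouped into n_ranks
        bands from highest score down, with equal ranking inside a band."""
        approved = {c: s for c, s in scores.items() if s >= cutoff}
        ranks = [set() for _ in range(n_ranks)]
        if not approved:
            return ranks
        hi, lo = max(approved.values()), min(approved.values())
        span = max(hi - lo, 1e-9)  # avoid division by zero when all scores are equal
        for c, s in approved.items():
            band = min(int((hi - s) / span * n_ranks), n_ranks - 1)
            ranks[band].add(c)
        return ranks

    # A 0-10 Range ballot with the approval cutoff at 5.
    print(range_ballot_to_bucklin({"A": 10, "B": 7, "C": 5, "D": 2}, cutoff=5))
    # [{'A'}, {'B'}, {'C'}]; D falls below the cutoff and is not ranked.

A ballot read this way still supports the majority-approval test, and the runoff on majority failure, described above.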
