On 06/30/2013 10:19 PM, Benjamin Grant wrote:
> I’ve been coming at understanding better the options and choices, merits
> and flaws of various approaches to holding votes – mostly with the kind
> (and sometimes not-so-kind) help of the people on this list.
>
> However, a (I assume) basic thought occurred to me, which may be so
> obvious no one need ever discuss it, but I want to double check my
> thinking on some of this.
>
> The rest of this post will NOT be concerned with any one particular
> voting method or criteria.  Instead I will be comparing different
> scenarios of voter preference with thoughts about who “should” win. If I
> am not making sense quite yet, come along and hopefully it will make
> more sense in practice. If not, you can ask me questions or delete the post.
>
> Let’s assume that we have a magical gift – a super power, if you will.
> We can know exactly what each voter thinks about each candidate.  Now,
> because this comes from magic, it cannot unfortunately be used as a part
> of the election process, but it will be useful for our examination of
> attitudes of the voters.
Welcome to utilitarianism!

But note that there's a subtle distinction. Either this magical gift gives you the *opinions* of the voters, according to some scale, or it gives you a number that tells you how much voter X would come to like the choice being Y, were Y to be elected. These may be different - if for no other reason than that many politicians lie - and I'm going to assume you mean the first one.

Some claim that with experience, the two will converge. The voters will see through the lies, and if there's any bias in the voting method, it will be included in the judgement. But that takes time and may or may not happen.

> I am going to posit a series of two candidate comparisons, and ask who
> “should” win. The point here is to ignore the methods for a bit, and
> just see what our gut says, given the absolutely magically accurate
> information we have about the voter’s preferences.

> [...]

> This brings me to my main thought.  As we compare candidates using
> perfect knowledge of how the voters favor them (or disfavor them), at
> what point does our guts, intuition, and instinct for fairness cause us
> to flip our support from one candidate to another? Are our transition
> points naturally similar, or fundamentally different, based on perhaps a
> different valuing of compromise or partisanship?

If you assume that the voter automatically takes second order effects into account, then there's no problem. You can just use a method like Range and say that the voter will report expected value rather than his opinion of the candidate. Such an approach would be more like assuming that the magical method provides how much voter X would come to like the choice being Y.

However, if the effects are based on the distribution of candidate scores - for instance, if the voter perceives it'll be better to elect X when X's score is closer to the scores of the other candidates - then the method will have to find an equilibrium, and that requires running it repeatedly. To show this, let's take an example where the voter starts off really liking X, but also likes fairness. At the outset, the voter doesn't know where Y will end up, so he rates X according to his opinion of X alone. If there's only one round (and no polling), that's the best the voter can do; but if there are more, he can take the results from the previous round (or poll) into account and adjust his scores accordingly.
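A rough way to picture that round-by-round adjustment, with an entirely hypothetical model of how a fairness-minded voter might blend raw opinion with the previous round's totals (the function and the weights are my invention, not part of any actual method):

```python
def adjust_scores(opinions, prev_totals, fairness=0.5):
    """Blend a voter's raw 0-100 opinions with a fairness correction
    that nudges up candidates who trailed in the previous round."""
    top = max(prev_totals.values())
    adjusted = {}
    for cand, opinion in opinions.items():
        # How far behind was this candidate last round (0 = led, 1 = scored nothing)?
        gap = (top - prev_totals[cand]) / top if top else 0.0
        adjusted[cand] = (1 - fairness) * opinion + fairness * 100 * gap
    return adjusted

# Round 1: no poll yet, so the voter reports opinion alone.
opinions = {"X": 90, "Y": 40}
# Round 2: X led the previous round, so the fairness-minded voter
# softens his X score a bit and raises Y's.
round2 = adjust_scores(opinions, {"X": 900, "Y": 400}, fairness=0.25)
```

Run this for enough rounds and the scores settle down (or cycle); the point is only that the ballot in round n depends on the totals from round n-1, which is exactly why a one-shot method can't capture it.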

On the other hand, if the voters don't report expected value, but rather their opinion according to some well-defined scale, then the method has to either clean up for them or make them aware that they should factor more into their scores.

When you look at it like this, what you're seeing is really the same question that I've mentioned before: whether processing should be done at the front end (in the method itself) or the back end (in the minds of the voters). And that question can be answered in many ways, all of which are self-consistent.

The extreme back-end answer would be to just have a very large discussion and then a majority vote. The majority vote decides very little; the discussion is where everything happens, but it is not a voting method, it is just a way for the "voters" to communicate among themselves. But that answer is not scalable. In contrast to such an extreme, the idea of voters reporting expected value is not so "back-end-centric" at all. There are certainly methods that sit between the two as well: an exhaustive runoff, say, under the assumption that the voters will adjust their strategies to reach an equilibrium, so that the (honest) Condorcet winner is never eliminated.

I *think* that those who advocate rated voting (particularly Approval) lean towards prioritizing the back end. Not as much as the extreme I've given, but Approval pretty much needs some of the processing to happen in the minds of the voters when there's a three-way contest.
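To make that in-the-mind processing concrete, here is one standard Approval heuristic (my choice of illustration, not something from the post): approve every candidate you rate at least as highly as the average of the perceived frontrunners. The same opinions then produce different ballots depending on how the voter reads the three-way race:

```python
def approval_ballot(utilities, frontrunners):
    # Approve each candidate rated at least as highly as the
    # average utility of the perceived frontrunners.
    threshold = sum(utilities[c] for c in frontrunners) / len(frontrunners)
    return {c for c, u in utilities.items() if u >= threshold}

utilities = {"A": 10, "B": 6, "C": 0}
approval_ballot(utilities, frontrunners=("B", "C"))  # {"A", "B"}
approval_ballot(utilities, frontrunners=("A", "B"))  # {"A"}
```

The threshold calculation is the "processing": the method itself only ever sees the final approve/disapprove marks.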

MAV and median methods in general lean further towards the front end than Range and Approval do. For instance, MAV automatically satisfies the Majority criterion instead of requiring a majority to actually exaggerate their votes. This simplification has a cost: the method is no longer as good as Range when voters are honest and optimal (that is, when they take dynamics into account to produce an expected-value estimate).
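A toy comparison of the two aggregates (the ballots are made up for illustration): a median aggregate, as in MAV and other median methods, against a mean aggregate, as in Range, on a profile where a 60% majority top-rates A while an intense 40% minority maxes out B:

```python
from statistics import mean, median

# Six majority voters top-rate A; four intense minority voters max out B.
ballots = [{"A": 10, "B": 5}] * 6 + [{"A": 0, "B": 10}] * 4

def winner(ballots, aggregate):
    totals = {c: aggregate([b[c] for b in ballots]) for c in ballots[0]}
    return max(totals, key=totals.get)

winner(ballots, median)  # "A": the majority's top choice prevails automatically
winner(ballots, mean)    # "B": minority intensity can outweigh the majority
```

Under the median the majority gets its way without anyone exaggerating; under the mean the majority would have to rate B at 0 to accomplish the same thing.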

But to return to your subject of designing fairness into a method. It would be possible to do so, but as you put it, different people may have different ideas of what fair is. I also think a problem is that you need a consistent reference in order to do it, and it's hard to establish consistent reference points under rated voting.

Let's explain that a little further. A consistent reference point says what a score of 100 (or 0, for that matter) *means*. If you don't have a consistent reference point, then it's very hard to make the decision of whether A should win by raw support or B should win by fairness, because you don't know if the 0-100 scale goes from "Stalin" to "savior of the world" or from "bumbling but harmless" to "the most we've come to expect". I would imagine fairness to be much more important when the scale goes from Stalin to savior than when it goes from bumbling to okay-good.

The back-end focus of some rated voting methods just unasks the question: the logic goes that the voters will factor in everything they need to factor in and then be done, so the method shouldn't interfere. That's about as well as you can do without an established reference, and (I think) this is related to Arrow's disregard of rated methods in general.

Then there are methods like MAV (MJ). These do try to establish a consistent reference point. But the reference point is still quite vague - it's just that the vagueness doesn't matter to the method. I can explain that in detail if you wish :-)

And finally, you have ranked methods. These are generally very front-end based (or at least I prefer to look at them in this way). But a ranking doesn't contain any strength of preference information, so it's hard to do any sort of fairness adjusting there. It is *possible*, but only in very general terms. For instance, Borda can elect a compromise candidate even when some other candidate has a majority of first preference votes. But it can't make the fine-grained sort of call you seem to be seeking.
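A quick sketch of that Borda behaviour, on a made-up four-candidate profile where A holds a majority of first preferences but the broadly second-ranked C wins:

```python
# 51 voters rank A > C > D > B; 49 rank B > C > D > A.
profile = [("A", "C", "D", "B")] * 51 + [("B", "C", "D", "A")] * 49

def borda_winner(profile):
    n = len(profile[0])  # points: n-1 for first place down to 0 for last
    scores = {}
    for ballot in profile:
        for rank, cand in enumerate(ballot):
            scores[cand] = scores.get(cand, 0) + (n - 1 - rank)
    return max(scores, key=scores.get)

borda_winner(profile)  # "C" (200 points) beats the majority favourite A (153)
```

Note how coarse the call is: Borda only knows that C is ranked second by everyone, not *how much* anyone likes C, so there's no dial for trading fairness against raw support.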

So to sum all of that up: Range-style ratings are very detailed (they contain very specific information), but their meaning is left up in the air. Rankings are just the opposite. In the former case, it's difficult to see how the method would know where to trade fairness for raw support. In the latter, it's difficult to see how the method would know the distribution of support precisely enough to trade one for the other in the first place.

(But, just to contradict myself at the very end: you could consider some criteria to be ways of establishing fairness. The majority criterion could be said to be about fairness to a majority. The Condorcet criterion could be said to be about fairness to a candidate that would beat every other one-on-one. But that's just one way of interpreting those criteria. The Condorcet criterion could just as easily be considered an efficiency criterion, based on the jury theorem or on statistical grounds that probably also explain why Condorcet does as well as it does in Olson's simulations.)
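For concreteness, here is a minimal pairwise check of that "beats every other one-on-one" reading, on a hypothetical profile (nothing here is from the original discussion):

```python
def condorcet_winner(profile):
    """Return the candidate who beats every other head-to-head, if any."""
    candidates = set(profile[0])
    n = len(profile)
    for x in candidates:
        # x wins a pairing if a strict majority rank x above the opponent.
        if all(sum(b.index(x) < b.index(y) for b in profile) * 2 > n
               for y in candidates - {x}):
            return x
    return None  # a cycle: no Condorcet winner exists

profile = [("A", "B", "C")] * 4 + [("B", "A", "C")] * 3 + [("C", "A", "B")] * 2
condorcet_winner(profile)  # "A": beats B 6-3 and C 7-2
```

On a cyclic profile (A>B>C, B>C>A, C>A>B, one voter each) the function returns None, which is exactly where the "fairness to the pairwise champion" reading runs out of candidates to be fair to.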

And in closing, I'd say that it is late here (again!) so I may be wrong somewhere in here. Feel free to ask if I passed by something too quickly, or if something just doesn't seem right.

----
Election-Methods mailing list - see http://electorama.com/em for list info
