Re: [EM] [CES #9004] Before Voting Methods and Criteria: Outcome Design Goals (long)

2013-07-01 Thread Jameson Quinn
Benjamin:

You are right to point out that we should have some discussion of basic
principles to underlie our discussion of specific systems. Here are my own
views:

1. There is no single easy philosophical answer to these questions. There
will always be those who, like Clay, would rather grab the quick
self-consistent certainty of interpersonally-summable utilities; and it is
true that this point of view offers many advantages (for instance,
immediate immunity to probabilistic or Arrovian money-pump arguments); but
it also has serious philosophical critiques. There is a continuum of
possible self-consistent answers to the breadth-versus-depth question that
runs from maximin to summed-utility to maximax, and if you allow certain
kinds of status-quo bias, the possibilities are even broader.

2. However, in practical terms, we are not likely to get a better metric
for voting systems than total utility. A maximin metric would
philosophically give veto power to a single voter; an intermediate metric
would probably be equivalent to summed-utility if you rescale utility by
some monotonic function; and any metric involving status-quo bias extrinsic
to the voters themselves is a horrible compass for system design. So while
I don't share Clay's easy certainty about the impeccable solidity of the
philosophical foundations here, I do agree with him that this is the best
single measure of outcome quality.
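
To make that continuum concrete, here is a toy Python sketch (the numbers
are mine, purely illustrative) of how the three metrics can disagree about
the very same utility profiles:

    # Toy comparison of three social-welfare metrics from the
    # breadth-versus-depth continuum. utilities[c] lists each voter's
    # utility (0-1) for candidate c.
    utilities = {
        "A": [0.9, 0.9, 0.1],   # loved by two voters, nearly hated by one
        "B": [0.6, 0.6, 0.6],   # merely acceptable to everyone
    }

    for name, us in utilities.items():
        print(name,
              "maximin:", min(us),          # the worst-off voter decides
              "sum:", round(sum(us), 2),    # total (summed) utility
              "maximax:", max(us))          # the best-off voter decides

    # A wins on summed utility (1.9 vs 1.8) and on maximax;
    # B wins on maximin, since it leaves no voter down at 0.1.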

3. That doesn't make things easy, though. For instance, for a given set of
ballots, the Score result is usually (though not always!) the best way to
use the information given to try to maximize the underlying utilities. But
since voters will almost certainly vote differently under different
systems, that does not mean that Score is the best way to ensure the best
result from a pre-balloting point of view. Or another instance: system A
could give better results, but system B might be more likely to be
implemented and thus offer more expected value over status quo C.
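
For concreteness, "the Score result for a given set of ballots" just means
the max-sum candidate. A toy sketch (my numbers):

    # With ballots held fixed, the Score winner is the candidate with
    # the highest summed score. Whether voters would cast these same
    # ballots under a different system is the separate, harder question.
    ballots = [
        {"A": 5, "B": 3, "C": 0},
        {"A": 0, "B": 4, "C": 5},
        {"A": 1, "B": 5, "C": 2},
    ]

    totals = {}
    for b in ballots:
        for cand, score in b.items():
            totals[cand] = totals.get(cand, 0) + score

    print(totals, "->", max(totals, key=totals.get))
    # {'A': 6, 'B': 12, 'C': 7} -> B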

Jameson

2013/6/30 Benjamin Grant b...@4efix.com

 I’ve been working at better understanding the options and choices, merits
 and flaws, of various approaches to holding votes – mostly with the kind
 (and sometimes not-so-kind) help of the people on this list.

 However, a (I assume) basic thought occurred to me, which may be so
 obvious no one need ever discuss it, but I want to double check my thinking
 on some of this.

 The rest of this post will NOT be concerned with any one particular voting
 method or criterion.  Instead I will be comparing different scenarios of
 voter preference with thoughts about who “should” win. If I am not making
 sense quite yet, come along and hopefully it will make more sense in
 practice. If not, you can ask me questions or delete the post.

 Let’s assume that we have a magical gift – a super power, if you will.  We
 can know exactly what each voter thinks about each candidate.  Now, because
 this comes from magic, it cannot unfortunately be used as a part of the
 election process, but it will be useful for our examination of attitudes of
 the voters.

 So as we turn our power on a random voter, we can see (on a scale of 0 to
 100) how they feel about each candidate.  A 0, in this case, indicates that
 the voter is absolutely against the candidate winning the election, and
 will vote however he must to stop that from happening, whereas a 100
 indicates the reverse: that the voter is absolutely for this candidate’s
 victory, and will give it everything he can at the ballot box. 50 indicates
 a sort of “meh” reaction – doesn’t hate them, doesn’t love them – or
 possibly the voter has some aspects of the candidate he really likes, but
 some other aspects that he is less than thrilled with.

 So, using this power, we can know absolutely on a scale of 0 to 100 what
 each voter thinks of each candidate. Using that knowledge, we ought to be
 able to say who “should” win – which I will return to in just a moment.

 First, each candidate’s support by the voters can be noted on a graph,
 with the X axis denoting the scale of 0-100 Favorability, and the Y axis
 denoting the percentage of voters who hold that exact opinion.

 So, for example, on a graph like this, you might find that 12% rate a
 certain candidate at 0F – they *hate* this guy.  Another 14% may rate this
 candidate at 100F – these are his loyal base. Most people fall somewhere in
 between.

 To keep things simple, I’m going to talk about candidates as if their
 voters clump at certain points, instead of spreading more fuzzily. I think
 the core questions become no less valid and no less worth thought.

 I am going to posit a series of two candidate comparisons, and ask who
 “should” win. The point here is to ignore the methods for a bit, and just
 see what our gut says, given the absolutely magically accurate information
 we have about the voter’s 

Re: [EM] [CES #9004] Before Voting Methods and Criteria: Outcome Design Goals (long)

2013-07-01 Thread Abd ul-Rahman Lomax

At 11:03 AM 7/1/2013, Jameson Quinn wrote:

Benjamin:

You are right to point out that we should have some discussion of 
basic principles to underlie our discussion of specific systems. Here 
are my own views:


1. There is no single easy philosophical answer to these questions. 
There will always be those who, like Clay, would rather grab the 
quick self-consistent certainty of interpersonally-summable 
utilities; and it is true that this point of view offers many 
advantages (for instance, immediate immunity to probabilistic or 
Arrovian money-pump arguments); but it also has serious 
philosophical critiques. There is a continuum of possible 
self-consistent answers to the breadth-versus-depth question that 
runs from maximin to summed-utility to maximax, and if you allow 
certain kinds of status-quo bias, the possibilities are even broader.


My take on this is that the hypothesis of interpersonally-summable 
utilities is useful but not the truth. What the Bayesian Regret 
studies do is to show how a voting system performs *if* there are 
summable utilities. With proper design, those simulated utilities can 
be quite reasonable.


A voting system that performs poorly with *known utilities* is not 
likely to perform well with unknown ones. So BR studies are the best 
measure we have, so far, for assessing voting system performance.
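
To illustrate what such a study computes, here is a bare-bones sketch (the
uniform random utilities and the honest-voter models are my simplifications;
real BR work, such as Warren Smith's, uses far richer utility generators and
strategy mixes):

    import random

    # Bayesian Regret: draw "true" utilities, let a method pick a
    # winner from them, and measure the shortfall against the
    # utility-maximizing candidate, averaged over many trials.
    def bayesian_regret(method, n_voters=99, n_cands=5, trials=2000):
        total = 0.0
        for _ in range(trials):
            utils = [[random.random() for _ in range(n_cands)]
                     for _ in range(n_voters)]
            winner = method(utils)
            sums = [sum(u[c] for u in utils) for c in range(n_cands)]
            total += max(sums) - sums[winner]  # regret this trial
        return total / trials

    def honest_range(utils):
        # Each voter normalizes: favorite -> 1, least favorite -> 0.
        n_cands = len(utils[0])
        sums = [0.0] * n_cands
        for u in utils:
            lo, hi = min(u), max(u)
            span = (hi - lo) or 1.0
            for c in range(n_cands):
                sums[c] += (u[c] - lo) / span
        return sums.index(max(sums))

    def plurality(utils):
        # Each voter votes for their honest favorite.
        counts = [0] * len(utils[0])
        for u in utils:
            counts[u.index(max(u))] += 1
        return counts.index(max(counts))

    print("Range BR:    ", bayesian_regret(honest_range))
    print("Plurality BR:", bayesian_regret(plurality))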


Given this, Range voting would seem to be an ideal voting system, 
because, on its face, it sums those utilities. In fact, there is a 
translation process between internal utilities and actual votes that 
introduces distortion into the system. The first step is normalization, 
where the realistic options are mapped onto the full vote scale. If 
there is simple normalization only, then voters would essentially 
disempower themselves if they have a very strong preference, a *must 
have* or *must defeat.* So there is what I've called magnification, 
where voters stretch or compress the voted preference 
strengths (the differences between two voted values) in order to 
match assessments of relative value as adjusted for election probabilities.
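
A rough sketch of those two translation steps (the function names, and the
crude threshold rule standing in for probability weighting, are mine):

    # Step 1, normalization: map the realistic options onto the full
    # vote scale. Step 2, "magnification": stretch or compress voted
    # preference strengths in light of perceived election
    # probabilities; the all-or-nothing threshold below is the
    # extreme case of that stretching.
    def normalize(utils):
        lo, hi = min(utils.values()), max(utils.values())
        span = (hi - lo) or 1.0
        return {c: (u - lo) / span for c, u in utils.items()}

    def magnify(votes, frontrunners):
        # Push each rating to an extreme, relative to the expected
        # value of the perceived frontrunners (a crude approximation).
        t = sum(votes[c] for c in frontrunners) / len(frontrunners)
        return {c: 1.0 if v > t else 0.0 for c, v in votes.items()}

    internal = {"A": 70, "B": 55, "C": 10}      # raw feelings, 0-100
    ballot = normalize(internal)                # A=1.0, B=0.75, C=0.0
    print(magnify(ballot, frontrunners=["A", "B"]))
    # {'A': 1.0, 'B': 0.0, 'C': 0.0} -- B's 0.75 gets compressed to 0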


All this means that the actual range vote sum is not necessarily the 
actual social utility maximizer. In particular, if the voters don't 
have good data on election probabilities, and, more than that, have 
*incorrect information*, they may vote foolishly. This problem is 
handled if the system uses repeated elections, because, after the 
first election, they should have much better information about what 
is probable, if the voting system does allow the disclosure of that 
information.


2. However, in practical terms, we are not likely to get a better 
metric for voting systems than total utility. A maximin metric would 
philosophically give veto power to a single voter; an intermediate 
metric would probably be equivalent to summed-utility if you rescale 
utility by some monotonic function; and any metric involving status-quo 
bias extrinsic to the voters themselves is a horrible compass 
for system design. So while I don't share Clay's easy certainty 
about the impeccable solidity of the philosophical foundations here, 
I do agree with him that this is the best single measure of outcome quality.


And I agree on that as well. This leaves two issues:

1. The practicality of implementation.
2. Majority consent.

The latter has often been seriously neglected. IRV was sold on a 
claim that it would find majorities. Essentially, FairVote lied, or 
allowed people, in some cases, to be deceived by naive expectations. 
We don't know that FairVote *actually corrupted the committee that 
wrote the voter information booklet* that led voters to approve 
Ranked Choice Voting in San Francisco, but we can be quite sure 
that FairVote took no steps to correct it, and the claims that were 
just plain wrong have been repeated by them in many places, though 
they gradually became more careful. They now state this majority claim 
in such a way that naive voters won't understand the difference, but if 
you call them on it, what they say is defensible. It's just 
misleading, not directly a lie.


The biggest problem in the way of implementing Range is the lack of any 
test for majority consent. Range (Score) is a *plurality method.* 
With all the anti-Plurality hype, that's totally overlooked. All that 
has happened with Range is that fractional votes are allowed.
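
A toy illustration of the missing test (my numbers):

    # The Range winner need not be preferred by any majority, and
    # nothing in the count ever asks.
    ballots = [
        {"A": 6, "B": 5},    # four voters mildly prefer A
        {"A": 6, "B": 5},
        {"A": 6, "B": 5},
        {"A": 6, "B": 5},
        {"A": 0, "B": 10},   # three voters strongly prefer B
        {"A": 0, "B": 10},
        {"A": 0, "B": 10},
    ]

    totals = {c: sum(b[c] for b in ballots) for c in ("A", "B")}
    prefer_A = sum(1 for b in ballots if b["A"] > b["B"])

    print(totals)   # {'A': 24, 'B': 50}: B wins on score
    print(prefer_A, "of", len(ballots), "voters prefer A")   # 4 of 7

    # Whether B *should* win here is exactly the utility question;
    # the point is that Range never asks the majority for its consent.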


While, in theory, it could take an endless series of repeated elections 
to find a majority, the probability of that is vanishingly low. 
People *do* make the necessary adjustments, unless the majority *does 
not want to complete the process*, in which case that is the majority 
decision. Who is to say that this is wrong?


However, my sense is that a two-round system with intelligent choice 
of nominations for the second round can find a *true majority* almost 
all the time. And when it fails, the result would be close enough that 
the value of continuing the process would be less than its cost.
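
One possible reading of that, as a sketch (the nomination rule is my guess
at what "intelligent choice" could mean, and it reuses the round-one ballots
where a real second round would collect fresh ones, cast with better
information):

    # Round 1: Range. Nominate the score leader plus the leader's
    # strongest pairwise rival. Round 2: an explicit majority test
    # between the two nominees.
    def two_round(ballots):
        cands = list(ballots[0])
        totals = {c: sum(b[c] for b in ballots) for c in cands}
        leader = max(totals, key=totals.get)

        def margin(c):   # pairwise strength of c against the leader
            return sum(1 if b[c] > b[leader] else -1
                       for b in ballots if b[c] != b[leader])

        rival = max((c for c in cands if c != leader), key=margin)
        prefer_rival = sum(1 for b in ballots if b[rival] > b[leader])
        return rival if prefer_rival > len(ballots) / 2 else leader

    # With the seven ballots from the previous sketch, this elects A:
    # the explicit majority test overrides the score leader B.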


3. That