Re: [EM] Will to Compromise

2008-10-31 Thread Jobst Heitzig
Dear Greg,

you wrote:
 Nondeterminism is a delightful way of skirting the
 Gibbard-Satterthwaite theorem. All parties can be coaxed into exposing
 their true opinions by resorting or the threat of resorting to chance.

Actually, if I remember correctly, the probabilistic extension of that theorem 
says that Random Ballot is essentially the only completely strategy-free method 
(given some minor axioms such as neutrality and anonymity), so it's not really 
skirting the theorem but just taking it seriously. 
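
For readers who haven't seen it spelled out: Random Ballot simply draws one 
ballot uniformly at random and elects the candidate ranked first on it. A 
minimal sketch in Python (ballots as lists, most-preferred first; the faction 
sizes are invented for illustration):

import random

def random_ballot_winner(ballots):
    # Draw one ballot uniformly at random and elect its top choice.
    return random.choice(ballots)[0]

ballots = [["A", "C", "B"]] * 60 + [["B", "C", "A"]] * 40
print(random_ballot_winner(ballots))  # "A" with probability 0.6, "B" with 0.4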

However, it seems some minor possibilities for strategizing are acceptable when 
they allow us to make the method more efficient. FAWRB tries to be a compromise 
in this respect.
 
 I don't dispute that. The nondeterminsitc methods I have seen appear
 to be designed to tease out a compromise because a majority cannot
 throw its weight around.

Right, that's the main point.

 The ability of nondeterministic methods to generate compromises is
 formidable, but since we speak of utility, I would like to point
 something out.
 
 1) Using Bayesian utility, randomness is worse than FPTP.

Two answers: (i) Please cite evidence for this claim. (ii) Bayesian utility is 
not a good measure of social utility, in my opinion. We have had lengthy 
discussions about this a number of times on this list already, so I won't 
repeat them. Instead, I will produce evidence from simulations this weekend 
showing that, no matter what measure of social utility is used, Random Ballot 
does not perform much worse than optimal.
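
To make the claim testable, here is a toy impartial-culture sketch in Python 
(my illustration only, not the simulations promised above; the model and 
parameters are invented for the example):

import numpy as np

rng = np.random.default_rng(0)
n_voters, n_cands, n_trials = 99, 5, 2000
ratios = []
for _ in range(n_trials):
    u = rng.random((n_voters, n_cands))   # u[v, c] = voter v's utility for candidate c
    social = u.sum(axis=0)                # total (social) utility of each candidate
    favorites = u.argmax(axis=1)          # each voter's first choice
    shares = np.bincount(favorites, minlength=n_cands) / n_voters
    rb_expected = shares @ social         # expected social utility under Random Ballot
    ratios.append(rb_expected / social.max())

print("Random Ballot / optimum:", round(sum(ratios) / n_trials, 3))

Under this crude model the ratio comes out close to 1; other preference models 
will of course give other numbers.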
 
 2) False compromises are damaging

What do you mean by false? If a proposed compromise fails to be preferred by 
most voters to the Random Ballot lottery, it will not get much winning 
probability. If, on the other hand, it is preferred, then it is not a false but 
a good compromise. The simulations I will report on this weekend show that 
usually we can expect good compromises to exist which have quite large social 
utility.
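
To be concrete about what I mean by a "good compromise" here, this is the test 
(my own formalization for illustration, with made-up numbers): a candidate is a 
good compromise when most voters get more utility from it than their expected 
utility under the Random Ballot lottery.

import numpy as np

def fraction_preferring_to_lottery(u):
    # u[v, c] = utility of candidate c to voter v.
    favorites = u.argmax(axis=1)
    shares = np.bincount(favorites, minlength=u.shape[1]) / u.shape[0]
    rb_value = u @ shares                 # each voter's expected utility under Random Ballot
    return (u > rb_value[:, None]).mean(axis=0)

u = np.array([[1.0, 0.7, 0.0],            # hypothetical 3-voter, 3-candidate profile
              [0.0, 0.8, 1.0],
              [0.2, 0.9, 0.3]])
print(fraction_preferring_to_lottery(u))  # the middle candidate beats the lottery for all three voters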

 The reduced power of a majority means that any choice with a
 greater-than-random-ballot average utility is a good compromise.
 Notice how lousy the Bayesian utility of random ballot is and you
 begin to see my point.

See above. In simulations with well-known preference models, Random Ballot 
results are not lousy at all.

 Also note that the method for determining the compromise is
 majoritarian (to the extent that approval is) so the intermediate
 compromise procedure is a red herring that produces some nasty
 side-effects. The compromise is determined to be the most-supported
 at-least-above-average candidate. How does this avoid the original
 criticism of majoritarian methods?

You are right that the majority still has some special influence on the 
*nomination* of the compromise. But the important difference from majoritarian 
methods is that the majority cannot give any option more winning probability 
than its own share without the minority cooperating. So, yes, the majority can 
present the minority with a compromise the minority values only slightly more 
than Random Ballot. This is not perfect yet, but it guarantees the minority a 
better-than-average result, whereas a majoritarian method doesn't guarantee a 
minority anything!
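
To illustrate the arithmetic with a deliberately simplified model (my own 
abstraction of the idea, not FAWRB's exact rules): each voter controls a share 
of the winning probability, and a nominated compromise only receives whatever 
shares are voluntarily transferred to it.

def win_probabilities(factions, compromise, approvers):
    # factions: {name: (size, favorite)}; approvers: factions transferring their share.
    total = sum(size for size, _ in factions.values())
    probs = {compromise: 0.0}
    for name, (size, favorite) in factions.items():
        share = size / total
        if name in approvers:
            probs[compromise] += share    # share transferred to the compromise
        else:
            probs[favorite] = probs.get(favorite, 0.0) + share  # kept as Random Ballot share
    return probs

factions = {"majority": (55, "A"), "minority": (45, "B")}
print(win_probabilities(factions, "C", {"majority"}))              # C capped at 0.55
print(win_probabilities(factions, "C", {"majority", "minority"}))  # C can reach 1.0

So even a unanimous majority cannot push the compromise above its own 55% 
share; only the minority's cooperation can do that.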

Yours, Jobst



Re: [EM] Will to Compromise

2008-10-31 Thread Jobst Heitzig
Dear Kristofer,

you wrote:
 With more candidates, a minority might find that it needs to approve of 
 a compromise with just slightly better expected value than random 
 ballot, if the majority says that it's not going to pick a compromise 
 closer to the minority than that just-slightly-better candidate.
 
 That is, it would give an incentive to compromise early, under the 
 threat that to do otherwise might make the method fall back to random 
 ballot, and the compromise is better than random ballot even if it's not 
 all that much better.

True. But for the minority, Random Ballot is usually already much better than 
the majority preference (for example, a 45% minority sees its favorite win 45% 
of the time under Random Ballot, but never under strict majority rule), so that 
would be OK, right?

Yours, Jobst



Re: [EM] Will to Compromise

2008-10-26 Thread Raph Frank
On Mon, Oct 27, 2008 at 1:05 AM, Greg Nisbet [EMAIL PROTECTED] wrote:
 The ability of nondeterministic methods to generate compromises is
 formidable, but since we speak of utility, I would like to point
 something out.

 1) Using Bayesian utility, randomness is worse than FPTP.

 This is a pretty powerful indictment, depending on how often the method
 has to resort to random ballot.

Hmm, I am not sure how true that is.  The randomness in those
simulations is picking a random candidate.

Random ballot should be superior to random candidate.

Perhaps Warren can comment on which he actually used.
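
To make the distinction concrete, here is a quick 1-dimensional spatial sketch 
(a toy model of my own, not Warren's actual simulation code): random candidate 
picks uniformly among the candidates, while random ballot picks a voter 
uniformly and elects that voter's favorite, so fringe candidates that are 
nobody's first choice get essentially no weight.

import numpy as np

rng = np.random.default_rng(1)
voters = rng.normal(size=1000)                   # voter positions on a line
cands = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])    # candidate positions, fringe ones at +/-2
u = -np.abs(voters[:, None] - cands[None, :])    # utility = negative distance
social = u.mean(axis=0)                          # mean voter utility of each candidate
rand_candidate = social.mean()                   # expected social utility, uniform over candidates
shares = np.bincount(u.argmax(axis=1), minlength=len(cands)) / len(voters)
rand_ballot = shares @ social                    # expected social utility, Random Ballot
print(round(rand_candidate, 3), round(rand_ballot, 3))   # random ballot comes out clearly better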

 2) False compromises are damaging

 The reduced power of a majority means that any choice with a
 greater-than-random-ballot average utility is a good compromise.
 Notice how lousy the Bayesian utility of random ballot is and you
 begin to see my point.

 The fallback method produces crappy candidates.
 People are encouraged to compromise for crappy candidates.

 Also note that the method for determining the compromise is
 majoritarian (to the extent that approval is) so the intermediate
 compromise procedure is a red herring that produces some nasty
 side-effects.

It isn't entirely.  The randomness creates an incentive to approve
compromise candidates, which means that it isn't like pure approval.
A 55% bloc that refuses to compromise, and thus wins the approval
stage, will likely end up causing a compromise failure.  That is
completely different from an approval election, where a 55% bloc can
guarantee a win.
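
A worked example of that difference (numbers invented; the fallback rule is my 
simplified reading of the method discussed here, not a verbatim statement of 
it):

bloc, minority = 0.55, 0.45
u_bloc = {"A": 1.0, "C": 0.7, "B": 0.0}                  # hypothetical utilities of the 55% bloc

plain_approval = u_bloc["A"]                             # plain Approval: bloc bullet-votes, A wins for sure
fallback = bloc * u_bloc["A"] + minority * u_bloc["B"]   # compromise fails -> Random Ballot lottery
compromise = u_bloc["C"]                                 # both sides approve C, C wins for sure

print(plain_approval, fallback, compromise)              # 1.0, 0.55, 0.7

Under plain Approval the bloc gets 1.0 by bullet-voting; here, refusing to 
compromise only gets it the 0.55 lottery, so agreeing on C (worth 0.7 to the 
bloc) is the better deal.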

I think that finding an acceptable compromise is an important point.
The specific method is separate from the concept that you can allow
voters, in effect, to trade their winning probability.

Strategy needs to be tested.  The example that was used was a
3-candidate race; finding a compromise is harder when there are more
candidates.
