On Nov 26, 2009, at 12:43 AM, Kristofer Munsterhjelm wrote:

> A final note is this: while the above may make it seem like I think criterion compliance is pointless (after all, the failure could be hidden in an obscure election scenario that will never happen), that's not quite true. If a method passes Condorcet, you at once know it will always pick the CW if one exists; you don't have to sit down and reason whether the failure is acceptable by whatever metric you are using. Thus, if we can pass criteria without having to pay too much, we should; a method passing a criterion is a guarantee for that method that it won't ever misbehave in the way drawn up by the criterion, and such absolute guarantees are nice things to have. If we can't have them, *then* we can start talking about whether or not it matters, but in the ideal world, we would have a method that passes the criteria that are worth passing.

I agree with this.
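
To make that guarantee concrete, here is a small toy sketch of my own (Python, assuming complete strict rankings): any Condorcet-compliant method must elect the candidate this function returns whenever it returns one.

def condorcet_winner(ballots, candidates):
    # Each ballot is a list of candidates from most to least preferred.
    def beats(a, b):
        a_over_b = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        return a_over_b > len(ballots) - a_over_b    # strict pairwise majority
    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c                                 # c beats everyone pairwise
    return None                                      # no Condorcet winner exists

# 2 of 3 voters rank A over B and all three rank A over C, so A is the CW.
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(condorcet_winner(ballots, ["A", "B", "C"]))    # -> A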

If some method breaks some criterion, it would also be good to understand how often that is expected to happen and how serious the consequences are expected to be. (Even if violations of this criterion by other methods would be serious, the violations by this method might be minor. The violations might be so rare and the consequences so mild that the problems are marginal or below the noise level.)

If there are multiple criteria that cannot be met simultaneously, it is possible that the best method violates all of these criteria (instead of meeting as many of them as possible). It is possible that compatibility with some criteria forces the method to violate some other criterion more badly. And this may lead e.g. to greater vulnerability to tactical voting, since the weakest link of the chain is now weaker than it would be if all/many of the criteria were violated, but only lightly.

In summary, it would be good to have some standard ways and common practices for giving some general estimates of 1) when / how often some criterion is violated and 2) how serious the consequences are.
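
One plain way to produce such estimates is simulation. The rough sketch below (my own hypothetical example, reusing condorcet_winner() from the sketch above) estimates how often first-preference plurality misses an existing Condorcet winner under an impartial-culture ballot model; any such number is of course only as meaningful as the ballot model behind it.

import random

def random_ballots(candidates, n_voters):
    # Impartial culture: every strict ranking is equally likely.
    return [random.sample(candidates, len(candidates)) for _ in range(n_voters)]

def plurality_winner(ballots, candidates):
    counts = {c: 0 for c in candidates}
    for ballot in ballots:
        counts[ballot[0]] += 1
    return max(counts, key=counts.get)               # ties broken arbitrarily

def condorcet_violation_rate(n_trials=2000, n_voters=25, candidates=("A", "B", "C")):
    cands = list(candidates)
    failures = cases_with_cw = 0
    for _ in range(n_trials):
        ballots = random_ballots(cands, n_voters)
        cw = condorcet_winner(ballots, cands)        # helper from the earlier sketch
        if cw is None:
            continue                                 # no CW, the criterion says nothing
        cases_with_cw += 1
        if plurality_winner(ballots, cands) != cw:
            failures += 1
    return failures / cases_with_cw if cases_with_cw else 0.0

print(condorcet_violation_rate())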

We would all like e.g. "later preferences to never cause harm" and "election methods to be strategy free", but we just can't have that. Typically we need to sacrifice some properties in favour of others, or simply accept that the laws of nature don't allow us to have everything we would like to have. Some criteria are also double-edged: you may have problems both if you meet them and if you don't. Nice names of criteria that point in one direction only, or that use words with a positive or negative tone, may sometimes add to the confusion, as if such a criterion always had to be met.

In addition I must note (the trivial fact) that the number of criteria that some method meets does not mean that the method is good (although there is some correlation). There may be e.g. criteria that point out benefits that are marginal and that are, in real life, outweighed by other, bigger vulnerabilities. I think the true vulnerability of a method can often be described more reliably e.g. by giving realistic examples (real life / in the intended environment) of how the method may fail. Such examples are of course often based on the basic information about theoretical vulnerabilities and on whether or not the method meets some criteria, but the real test is whether those vulnerabilities will/can have an impact in the actual election.

Another summary: maybe I was just saying that in many typical environments Condorcet methods are indeed "quite LNH" :-).
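
To put a rough number behind that feeling, the same kind of simulation can be aimed at later-no-harm directly. The toy sketch below (again my own illustration, reusing random_ballots() and the import from the previous sketch) lets one test voter first bullet-vote for her favourite and then add a second preference, and counts how often the added preference costs the favourite the win under minimax(wv). Truncated candidates are treated as ranked below all listed ones and ties are broken by candidate order, so the rate is only indicative.

def pairwise_matrix(ballots, candidates):
    # d[a][b] = number of ballots ranking a above b; unranked candidates
    # count as ranked below every listed candidate.
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and a in pos and (b not in pos or pos[a] < pos[b]):
                    d[a][b] += 1
    return d

def minimax_winner(ballots, candidates):
    # Minimax(winning votes): elect the candidate whose worst pairwise
    # defeat is smallest (ties broken by candidate order).
    d = pairwise_matrix(ballots, candidates)
    def worst_defeat(c):
        return max((d[b][c] for b in candidates
                    if b != c and d[b][c] > d[c][b]), default=0)
    return min(candidates, key=worst_defeat)

def lnh_failure_rate(n_trials=2000, n_voters=25, candidates=("A", "B", "C")):
    cands = list(candidates)
    failures = relevant = 0
    for _ in range(n_trials):
        others = random_ballots(cands, n_voters - 1)   # helper from the previous sketch
        full = random.sample(cands, len(cands))        # the test voter's sincere ranking
        if minimax_winner(others + [full[:1]], cands) != full[0]:
            continue                                   # favourite was not winning anyway
        relevant += 1
        if minimax_winner(others + [full[:2]], cands) != full[0]:
            failures += 1                              # the later preference hurt the favourite
    return failures / relevant if relevant else 0.0

print(lnh_failure_rate())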

Juho





----
Election-Methods mailing list - see http://electorama.com/em for list info
