Juho wrote:
Here's one practical and simple approach to guaranteeing
computational feasibility of some otherwise complex election methods.
The original method might be based, e.g., on evaluating all possible
sets of n candidates and then electing the best set. Comparing all
the sets is usually not computationally feasible, but the function
that compares two of these sets typically is.

The solution is to apply some general optimization method (Monte
Carlo, gradient following, etc.) to find the best set one can.
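
To make that concrete, here is a minimal sketch in Python of how such a search could look. It assumes only that we have some feasible comparison function better(a, b) between two candidate sets; the function name, the restart count, and the single-swap neighbourhood are my own illustration, not part of the proposal:

    import random

    def local_search_best_set(candidates, n, better, restarts=50, rng=None):
        """Hill-climb toward a good n-candidate set.

        better(a, b) is the (assumed feasible) pairwise comparison:
        it returns True when set a beats set b.  This is only an
        illustrative sketch, not any particular published method.
        """
        rng = rng or random.Random()
        best = None
        for _ in range(restarts):
            current = set(rng.sample(list(candidates), n))
            improved = True
            while improved:
                improved = False
                # Try every single-candidate swap; take the first improvement.
                for out in list(current):
                    for cand in candidates:
                        if cand in current:
                            continue
                        neighbor = (current - {out}) | {cand}
                        if better(neighbor, current):
                            current = neighbor
                            improved = True
                            break
                    if improved:
                        break
            if best is None or better(current, best):
                best = current
        return best

Random restarts plus a swap neighbourhood is only one possible choice here; Monte Carlo sampling or simulated annealing would fit the same skeleton.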

As in optimization, if you want to find a local optimum (or the problem
is set up so that hillclimbing finds the global optimum), then that can
work. But in some situations, the local optimum may not be good enough.
For instance, if you want to use Kemeny to determine a social ordering,
then approximating the solution would probably make the new method
(approximation-Kemeny) fail reinforcement.
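
For concreteness, the exact Kemeny objective looks roughly like this (a rough sketch of my own; the factorial brute force at the end is exactly the infeasible part that a heuristic search would replace, possibly stopping at a merely local optimum):

    from itertools import combinations, permutations

    def kemeny_score(ranking, ballots):
        """Count the voter pairwise preferences that agree with ranking.

        ranking and every ballot are sequences of candidates, best first.
        Exact Kemeny elects the ranking that maximizes this score.
        """
        pos = {c: i for i, c in enumerate(ranking)}
        score = 0
        for ballot in ballots:
            for a, b in combinations(ballot, 2):  # the voter prefers a to b
                if pos[a] < pos[b]:
                    score += 1
        return score

    def exact_kemeny(candidates, ballots):
        """Brute force over all rankings -- feasible only for a handful of candidates."""
        return max(permutations(candidates),
                   key=lambda r: kemeny_score(r, ballots))

Exact Kemeny satisfies reinforcement because a ranking's score over a combined electorate is just the sum of its scores over the parts; a heuristic that stops at different local optima for the parts and for the whole loses that guarantee.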

On the other hand, approximating may make strategy more difficult. I
think Rob LeGrand wrote something about how approximations to minimax
Approval obscured the strategy that would otherwise work.
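
For reference, minimax Approval elects the committee minimizing the maximum Hamming distance between the committee and any approval ballot; a rough sketch of that objective (my own illustration, not LeGrand's code):

    from itertools import combinations

    def hamming(committee, ballot, candidates):
        """Hamming distance between a committee and an approval ballot (both sets)."""
        return sum((c in committee) != (c in ballot) for c in candidates)

    def minimax_score(committee, ballots, candidates):
        """The worst-off voter's distance to the committee; smaller is better."""
        return max(hamming(committee, b, candidates) for b in ballots)

    def exact_minimax_approval(candidates, ballots, k):
        """Brute force over all size-k committees -- again the infeasible part."""
        return min((set(c) for c in combinations(candidates, k)),
                   key=lambda com: minimax_score(com, ballots, candidates))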

To gain even better trust that this set is the best one, one could
publish the best set found and then wait for a week, allowing other
interested parties to search for even better sets. Maybe different
parties or candidates would try to find alternatives where they would
do better. If nothing better is found, then the first set found is
declared elected.
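
The adjudication step in that waiting period is cheap precisely because comparing two concrete sets is feasible; something like the following would suffice (reusing the hypothetical better(a, b) comparison from the earlier sketch):

    def adjudicate_challenge(published, challenger, better):
        """Keep the published set unless the challenger's set actually beats it.

        Any observer can rerun this check, so accepting or rejecting a
        challenge does not require trusting the election authority's search.
        """
        return challenger if better(challenger, published) else published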

This strategy could be used for any kind of data where one wants to optimize, where getting an approximation isn't too bad, and where it's very hard to fix a near-optimal solution (that is, rig it in anyone's favor). One example where that holds (I think) is redistricting, which has been mentioned before.

Another, if more abstract, would be a democratic economic organization that first (through some democratic method) determines what should be produced, then runs an optimization (linear programming) to figure out what intermediate products have to be made to reach that goal. The optimization would be hard, but a similar trick should make it incentive-compatible for any faction within the organization to submit a better LP solution while making it hard to strategize. If rewards are needed, the difference between the best solution and the next best could go to those who proposed the best solution, although that may break down if a faction gets a sufficiently large computer and goes directly to the optimum.
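
As a toy sketch of that economic-organization example (all numbers are invented, and the accept_proposal check is my own illustration), assuming scipy is available:

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 2.0, 1.5])        # cost of each intermediate product
    A = np.array([[1.0, 0.0, 2.0],       # how intermediates turn into final outputs
                  [0.0, 1.0, 1.0]])
    demand = np.array([10.0, 8.0])       # the democratically chosen production targets

    # The authority's (or anyone's) own attempt at a good plan; result.x would be
    # published as the provisional plan, result.fun is its cost.
    result = linprog(c, A_ub=-A, b_ub=-demand, bounds=[(0, None)] * 3)

    def accept_proposal(x_proposed, incumbent_cost, tol=1e-9):
        """Accept a faction's plan only if it is feasible and strictly cheaper.

        Checking a proposal is plain matrix arithmetic, so it stays cheap
        even when finding the true optimum is hard.
        """
        x = np.asarray(x_proposed, dtype=float)
        feasible = np.all(A @ x >= demand - tol) and np.all(x >= -tol)
        return bool(feasible and c @ x < incumbent_cost - tol)

Here the challenge period plays the same role as in the committee case: a proposal only has to beat the incumbent plan, and verifying that is easy even though certifying global optimality would not be.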

In this approach one typically finds a set that is, with high
probability, either the best or at least close to the best in value.
If better ones cannot be found, this set could as well be considered
to be the best ("best known" at least). Other random factors of the
election method may well cause more randomness, so from a theoretical
point of view this approach should in most cases be good enough.

This approach should make many complex methods computationally
feasible. The involved randomness may be a bit unpleasant, but from a
technical point of view the performance is probably quite good.

In multiwinner election situations (like CPO-STV), the randomness might make the losers complain that they lost because the assembly that the optimization algorithm stumbled on didn't include them, not because the optimal assembly didn't include them. The "everyone may propose a solution" approach would to some extent limit these complaints - those who won could say "well, if you're on the optimal assembly, why didn't you calculate it and submit it as proof?" - but not eliminate it altogether.
----
Election-Methods mailing list - see http://electorama.com/em for list info
