On Sep 2, 2008, at 0:58, Kristofer Munsterhjelm wrote:

> Juho wrote:
>> Here's one practical and simple approach to guaranteeing the
>> computational feasibility of some otherwise complex election
>> methods. The original method might be based, e.g., on evaluating
>> all possible sets of n candidates and then electing the best set.
>> Comparing all the sets is usually not computationally feasible. But
>> the function that compares two of these sets is typically
>> computationally feasible.
>> The solution is to apply some general optimization methods (Monte
>> Carlo, following the gradient, etc.) to find the best set one can.
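
For concreteness, here is a minimal sketch of such a local search in
Python. The compare() function stands in for the method's pairwise set
comparison; every name here is hypothetical, and the toy rule in the
demo is invented only to make the code run.

  import random

  def hill_climb(candidates, n, ballots, compare, steps=10000):
      """Local search for a good n-candidate set.
      compare(a, b, ballots) -> True iff set a is better than set b
      under the election method (hypothetical interface)."""
      current = set(random.sample(candidates, n))
      for _ in range(steps):
          # try swapping one member of the set for one outsider
          inside = random.choice(sorted(current))
          outside = random.choice([c for c in candidates if c not in current])
          neighbour = (current - {inside}) | {outside}
          if compare(neighbour, current, ballots):
              current = neighbour
      return current

  if __name__ == "__main__":
      # toy demo: ballots are approval sets, and a committee counts as
      # "better" if more ballots approve of at least one member
      def coverage(committee, ballots):
          return sum(1 for b in ballots if b & committee)
      def compare(a, b, ballots):
          return coverage(a, ballots) > coverage(b, ballots)
      ballots = [{"A", "B"}, {"B", "C"}, {"C"}, {"A", "D"}]
      print(sorted(hill_climb(list("ABCDE"), 2, ballots, compare)))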

> As in optimization, if you want to find a local optimum (or the
> problem is set up so that hill climbing finds the global optimum),
> then that can work. But in some situations, the local optimum may not
> be good enough. For instance, if you want to use Kemeny to determine
> a social ordering, then approximating the solution would probably
> make the new method (approximation-Kemeny) fail reinforcement.
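
To make the Kemeny case concrete: the exact method has to score each
of the n! orderings by how many pairwise preferences on the ballots
agree with it, and that enumeration is exactly the infeasible part a
local search would replace. A rough sketch of the scoring (my
illustration, not a definitive implementation):

  from itertools import combinations, permutations

  def kemeny_score(ordering, ballots):
      """Count voter pairwise preferences that agree with `ordering`.
      ballots: rankings listed from most to least preferred."""
      score = 0
      for ballot in ballots:
          pos = {c: i for i, c in enumerate(ballot)}
          # combinations() yields (a, b) with a ranked above b in `ordering`
          for a, b in combinations(ordering, 2):
              if pos[a] < pos[b]:
                  score += 1
      return score

  def exact_kemeny(candidates, ballots):
      # brute force over all n! orderings -- feasible only for tiny n
      return max(permutations(candidates),
                 key=lambda o: kemeny_score(o, ballots))

As I understand it, exact Kemeny satisfies reinforcement because the
score over a merged electorate is just the sum of the two parts'
scores, while two searches that stop at different local optima give
the summed score no such guarantee.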

My intention was to refer to the long tradition of generic minimization/optimization methods. (That's why I picked gradients and Monte Carlo as examples.) They should provide sufficient tools to cover most cases with a reasonable probability of finding the best or close-to-best solutions. I don't know for certain, but I'd guess most complex election algorithms are not very badly behaved from an optimization point of view.
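
As a sketch of the Monte Carlo flavour: the same one-swap
neighbourhood can be wrapped in simulated annealing, which sometimes
escapes local optima that plain hill climbing gets stuck in. This
variant assumes a numeric score() rather than just a comparison;
again, all names are hypothetical.

  import math
  import random

  def anneal(candidates, n, ballots, score, steps=20000, t0=1.0):
      """Simulated annealing over n-candidate sets. score(set, ballots)
      is a hypothetical numeric evaluation, higher is better."""
      current = set(random.sample(candidates, n))
      best = current
      for step in range(steps):
          t = t0 * (1.0 - step / steps) + 1e-9   # linear cooling
          inside = random.choice(sorted(current))
          outside = random.choice([c for c in candidates if c not in current])
          neighbour = (current - {inside}) | {outside}
          delta = score(neighbour, ballots) - score(current, ballots)
          # always accept improvements; accept worsenings with
          # probability exp(delta / t) (the Metropolis rule)
          if delta >= 0 or random.random() < math.exp(delta / t):
              current = neighbour
          if score(current, ballots) > score(best, ballots):
              best = current
      return best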

> On the other hand, approximating may make strategy more difficult. I
> think Rob LeGrand wrote something about how approximations to minimax
> Approval obscured the strategy that would otherwise work.

>> To gain even better trust that this set is the best one, one could
>> publish the best set found and then wait for a week, allowing other
>> interested parties to search for even better sets. Maybe different
>> parties or candidates would try to find alternatives where they
>> would do better. If nothing is found, then the first set found is
>> declared elected.
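
The nice property of this challenge week is that verifying is cheap
even when searching is hard: the published comparison function settles
any submission deterministically. A sketch of the bookkeeping, reusing
the hypothetical compare() from above:

  def run_challenge_period(initial, proposals, ballots, compare):
      """Keep the best known set while challengers submit alternatives.
      compare must be the published, deterministic comparison rule."""
      best = initial
      for who, proposed in proposals:    # e.g. [("party X", {...}), ...]
          if compare(proposed, best, ballots):
              print(f"{who} found a better set: {sorted(proposed)}")
              best = proposed
      return best                        # declared elected after the week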

> This strategy could be used for any kind of data where one wants to
> optimize, where getting an approximation isn't too bad, and where
> it's very hard to fix a near-optimal solution (that is, rig it in
> anyone's favor). One example where that holds (I think) is
> redistricting, which has been mentioned before.
>
> Another, if more abstract, would be the case of a democratic economic
> organization where one first (through some democratic method)
> determines what should be produced, then runs optimization (linear
> programming) to figure out what intermediate products have to be made
> to reach that goal. The optimization would be hard, but a similar
> trick should make it incentive-compatible for any faction within the
> organization to submit a better LP solution while making it hard to
> strategize. If rewards are needed, the difference between the best
> solution and the next best could go to those who proposed the best
> solution, although that may break down if a faction gets a
> sufficiently large computer to go directly to the optimum.
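
The LP example is easy to make concrete, since checking a submitted
plan is just a feasibility test plus one objective comparison. A toy
instance using scipy's linprog (all numbers invented for
illustration):

  import numpy as np
  from scipy.optimize import linprog

  # Toy planning LP: choose intermediate production x >= 0 meeting
  # the final-goods targets at minimum total cost.
  cost = np.array([2.0, 3.0, 1.5])       # cost per unit of intermediate
  need = np.array([[1.0, 0.0, 2.0],      # intermediates per final good
                   [0.0, 1.0, 1.0]])
  targets = np.array([10.0, 8.0])        # democratically chosen outputs

  # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the
  # requirement "need @ x >= targets" is written with negated signs.
  res = linprog(cost, A_ub=-need, b_ub=-targets, bounds=[(0, None)] * 3)
  print(res.x, res.fun)

  def valid_and_better(x, incumbent):
      # cheap check of a challenger's plan: feasible and cheaper
      return bool(np.all(need @ x >= targets - 1e-9)
                  and cost @ x < cost @ incumbent)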

Yes. I was pretty much testing the claim that all computationally complex election methods (and districting etc.) would actually be computationally feasible, and still good enough in the sense that any remaining deviation from the optimal result would be small enough to justify using the method in such an incomplete way.

>> In this approach one typically finds a set that is, with high
>> probability, either the best or at least close to the best in value.
>> If better ones cannot be found, this set could just as well be
>> considered to be the best ("best known" at least). Other random
>> factors of the election method may well cause more randomness, so
>> from a theoretical point of view this approach should in most cases
>> be good enough. This approach should make many complex methods
>> computationally feasible. The involved randomness may be a bit
>> unpleasant, but from a technical point of view the performance is
>> probably quite good.
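
One way to back up the "with high probability close to best" claim in
practice is to rerun the search from many random starts and look at
the spread of the resulting scores. A sketch, reusing the hypothetical
hill_climb() and score() from the sketches above:

  from collections import Counter

  def restart_experiment(candidates, n, ballots, compare, score, runs=100):
      """Repeat hill_climb() from independent random starts and tally
      the scores of the committees it finds."""
      results = Counter()
      for _ in range(runs):
          committee = hill_climb(candidates, n, ballots, compare)
          results[score(committee, ballots)] += 1
      return results.most_common()

A tight distribution would support declaring the best found set
directly; a wide one would suggest the challenge period really
matters.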

> In multiwinner election situations (like CPO-STV), the randomness
> might make the losers complain that they lost because the assembly
> that the optimization algorithm stumbled on didn't include them, not
> because the optimal assembly didn't include them. The "everyone may
> propose a solution" approach would to some extent limit these
> complaints - those who won could say "well, if you're on the optimal
> assembly, why didn't you calculate it and submit it as proof?" - but
> not eliminate them altogether.

Yes. There also remains a chance that someone will later find a better solution than the one that was declared the winner. I think people can understand and accept the involved randomness quite well. In any case, the end result is close to optimal (I assume that finding a radically better solution would be very improbable).

This is a bit like when a candidate loses by only a few votes. There will be many voters who think that they should have voted (or should have voted otherwise), but people generally accept that it could have gone the other way too, and that there is always some element of luck involved. Better luck next time.

Some people could also try to make propaganda based on this. But I doubt it would work well (well, one never knows, looking at some real-life examples of irrational behaviour, and at the claims of voting experts that are often discussed on this mailing list :-). I also believe that there would be many interested hackers trying to win the glory of beating the government's calculations. So plenty of processing power might well be spent before the best found solution is declared the winner.

Juho

----
Election-Methods mailing list - see http://electorama.com/em for list info
