Re: [Election-Methods] Determining representativeness of multiwinner methods

2008-06-25 Thread Kristofer Munsterhjelm

Howard wrote:

Question to Kristofer

Do you see the issues that you start off with as orthogonal? That is, do 
you see this only working in a world where the issues polled are 
independent?


The simulation I wrote assumes this, since it picks the proportion in 
favor on each issue independently. The simulation idea itself could work 
with non-orthogonal issues, where one programs the individual issue 
profile generator to select true on an issue with a probability that's 
correlated (or inversely correlated) with the probability of selecting 
true on some other issue.
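As a sketch of such a generator (an illustrative model only, not the simulator's actual code; the function name and the copy-the-previous-issue correlation scheme are my own assumptions):

```python
import random

def issue_profile(n_issues, correlation):
    """Generate one binary issue profile where each issue is correlated
    with the previous one: with probability `correlation` it copies the
    previous issue's value, otherwise it is drawn fresh at 50/50."""
    profile = [random.random() < 0.5]
    for _ in range(n_issues - 1):
        if random.random() < correlation:
            profile.append(profile[-1])            # copy previous issue
        else:
            profile.append(random.random() < 0.5)  # independent draw
    return profile

# correlation = 0 reproduces the independent (orthogonal) case.
```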


The concept works with non-orthogonal issues. The implementation doesn't.

also, how it would be decided what issues are polled? even in a 
simulation this is important.


I take a best-case approach here: every voter knows the issue profiles 
of every candidate. That's not how it would happen in reality, but it 
can only make the simulated scores better than in reality, not worse.


Ultimately, in a large election there is a wide variety of issues, and 
it is impossible for any one candidate or voter to be aware of all of 
them, much less have an opinion on them all.


Perhaps there could be a switch where, if turned on, the simulation only 
compares subsets of issue profiles. A noise parameter would determine 
how large a subset is compared. One would have to make assumptions as to 
the correlation of subsets, though - do voters compare on the same 
subsets (what's being advertised, for instance), or do they compare on 
different subsets (their special interests)? In hoping that the results 
can be generalized, picking random subsets may suffice.
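A minimal sketch of that switch, assuming the simplest model where each voter compares a candidate on a freshly drawn random subset of issues (the helper name and signature are hypothetical):

```python
import random

def visible_agreement(voter, candidate, subset_size):
    """Count issue agreement only on a random subset of issues,
    modeling voters who only see or care about some of the issues.
    `voter` and `candidate` are binary issue profiles of equal length."""
    issues = random.sample(range(len(voter)), subset_size)
    return sum(voter[i] == candidate[i] for i in issues)
```

Drawing the subset per comparison corresponds to the "different subsets" assumption; a shared subset per round would model "what's being advertised."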


I think the generalization you propose below to a range of values is 
probably worthwhile.
It might then also be able to address not only proportionality on the 
views of the legislature, but also proportionality on the thrust of the 
legislature. That is, it is all well and good to say you are for some 
position, but if the legislature never proposes a new law or regulation 
around this position, it is of little use to the people.


How would you treat the case where the assembly doesn't care about a 
certain issue, but the voters do? A Range-style "only count those with 
an opinion" rule would produce an undefined proportion in favor of the 
issue at that point (because of a division by zero). I'm not sure what 
the best approach would be in that case.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [Election-Methods] Determining representativeness of multiwinner methods

2008-06-25 Thread Kristofer Munsterhjelm

Steve Eppley wrote:

Hi,

I prefer a definition of representativeness that differs from 
Kristofer's.  To me, the more similar the *decisions* of a legislature 
are to the decisions the people themselves would make collectively in a 
well-functioning direct democracy, the more representative is the 
legislature.
Given my definition, a non-proportional legislature comprised solely of 
centrist majoritarian compromise candidates may be very representative, 
since the people themselves would reach centrist compromises on the 
issues in a well-functioning direct democracy.  It might be more 
representative than a proportional legislature, since the proportional 
legislator could match her constituents' favorite position on every 
issue yet fail to match the way they would compromise.


By considering issue representativeness, I was trying to reduce the 
problem of deliberation within a representative assembly to that of a 
direct democracy. Whatever problems the assembly might have, the people 
would also have, if a direct democracy on the scale in question were 
feasible in the first place: problems like tipping-point coalitions 
having undue power (as the Banzhaf and SS indices try to measure) would 
exist in both cases.


However, that, as you say, depends on issues being the only thing that 
matters. Now, the dynamics among the candidates could differ from those 
of the people, but I don't see how those dynamics could be simulated. In 
order to measure the proportionality of decisions alone, there would 
have to be some sort of decision generator that takes the dynamics into 
account.


Also, the centrist majoritarian candidates you mention would have to be 
very good at staying neutral and incorruptible, and at not all belonging 
to the same majority. The feedback is much more direct in a proportional 
assembly: if one of the representatives starts to diverge, their support 
wanes, and voters can discriminate between dropping support of one part 
of the assembly and of another. If the assembly consisted of centrists, 
a veering centrist could benefit more than he loses just by moving 
closer to a certain majority, since a majoritarian method would reward 
him for doing so.


Why should anyone care more about the legislature's proportionality than 
about their decisions?


If the issues are good predictors of decisions, one would care about 
issues for that reason alone.




Re: [Election-Methods] Determining representativeness of multiwinner methods

2008-06-25 Thread Kristofer Munsterhjelm

Terry Bouricius wrote:
That brings me to an interesting issue, which may be off-topic for this 
list...sortition...the selection of a legislative body by means of 
modern sampling methods that assure a fully representative body. There is 
an interesting history of the tension between sortition on one hand and 
election on the other (Athenian democracy used both), where sortition was 
seen as the more democratic method, with election being the lesser 
(because candidates with more money or fame had such an advantage over 
average citizens). It is the old question of whether representative 
democracy should be seen as self-governance, or consent of the 
governed.


Possibly taking this thread even further off topic, I could mention a 
hybrid I once thought of. If there's a legislature of 360 members (to 
use a highly composite number), use random sampling to construct 36 
groups - juries or citizens' assemblies - each of which elects ten from 
its own number to the main assembly.


Assuming the jury voters know what they're doing, the final 
representatives would have greater skills than a randomly selected 
assembly, yet they would not be as prone to corruption and aristocratic 
effects as a directly elected assembly (since nobody can tell who'll 
make up the first-round juries, and thus no shadowy group could run ads 
on behalf of any of those candidates).


One disadvantage to this method is that minorities of less than a tenth 
of the population won't be represented (since each jury only elects ten 
members). Another is that it may be considered undemocratic, since only 
(36 * the size of each jury) citizens have any say in the final outcome.


The size of the assemblies, and how many they elect, could be tuned as 
desired to reflect a particular position on the sortition-election spectrum.
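The arithmetic of the tuning can be made explicit; a minimal sketch, assuming (my assumption, not the post's) a Droop-quota-style proportional rule within each jury:

```python
# Parameters from the post: 360 seats, 36 randomly sampled juries,
# each electing an equal number of its own members.
seats, juries = 360, 36
per_jury = seats // juries  # each jury elects 10 members

# With a proportional rule inside each jury, a group needs roughly a
# Droop quota, 1/(per_jury + 1) of a jury, to be guaranteed a seat --
# close to the "less than a tenth won't be represented" figure above.
threshold = 1 / (per_jury + 1)
print(per_jury, round(threshold, 3))
```

Raising the number of juries (and so lowering per_jury) moves the design toward sortition; lowering it moves toward election.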




Re: [Election-Methods] Determining representativeness of multiwinner methods

2008-06-27 Thread Kristofer Munsterhjelm
That could be one big poster where the candidates are listed on the 
right hand side and the left hand side is used for representing the tree 
structure (and the names of the parties and the subgroups).


That could work, at least in cases where there's only one district and 
the party limits the depth of the tree so it doesn't get too cluttered. 
I don't think it would be of much use in small-district elections, since 
either all subgroups would have to field candidates (making the regional 
lists very long), or only some subgroups would have candidates you could 
vote for.


To be absolutely safe, each party in party list PR would have to have at 
least as many candidates in the running for a region as there are seats 
in the region. If you want absolute representation not just between 
parties but within the party, each subgroup would have to do the same, 
which would add greatly to the count.


It could be solvable by something like MMP where you have one 
constituency (FPTP or STV) vote and one subgroup vote, where the 
subgroup totals propagate up the party -- or for an asset-flavored 
method, the subgroups negotiate with weighted votes as to how the list 
votes are to be divided up.


Asset-flavored methods would have the same Fiji-type problems you 
referred to, however, if the candidates throw their weight behind 
something you don't support. One might say that feedback would make the 
voter trust the candidate less the next time around and thus keep them 
in line, but that argument could be made for Fiji, too, and observation 
shows that feedback isn't strong enough.


The substituted ranks (candidate-individual automatic how-to-vote 
cards) would nest outwards, from the small wings to the increasingly 
larger ones within the party itself, then on to other parties in 
preference. In a sense, they are lists of their own, and so the 
problem isn't completely avoided.


Are there some specific cases where the tree like inheritance order is 
clearly not sufficient?


Not really - it was more of an add-on to increase the information given, 
with the idea that if the candidates transfer beyond their own party, 
then a socialist green could favor other socialist and green parties 
over conservative ones, for instance. Since I already considered the 
tree structure complex, I thought the additional complexity wouldn't be 
an issue.


At least the rank substitution method gives a simple way of implementing 
such a nested party-list method. If candidates declare subgroups, and 
subgroups supergroups, then the ranked votes are generated so that a 
list gives the ordering within subgroups, and the ranked vote is 
[candidates in own subgroup] > [candidates in other subgroups] > 
[candidates in other supergroups], and so on.
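A sketch of that ballot-generation rule, assuming each candidate is described by their path in the group tree (the function name and data layout are hypothetical):

```python
def nested_ranking(own_path, candidates):
    """Order candidates by how much of the group tree they share with
    the voter's chosen branch: own subgroup first, then other subgroups
    of the same supergroup, then other supergroups, and so on.
    `candidates` maps name -> path from the root, e.g. ["Left", "Greens"]."""
    def shared(path):
        n = 0
        for a, b in zip(own_path, path):
            if a != b:
                break
            n += 1
        return n
    # Deeper shared prefix = closer in the tree = ranked earlier.
    return sorted(candidates, key=lambda c: -shared(candidates[c]))
```

Ties within the same subgroup would then be broken by the declared list order.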


That brings us to the original question, whether it'd be possible to 
simulate this method. If tree-based party list (as we may call it) is 
transformed to STV, then any question of proportionality of tree-based 
party list would be reduced to a subset of the questions of 
proportionality of STV alone, and so it wouldn't be necessary to test 
tree-based party list separately - at least not unless the structure 
mitigates some problem with ordinary STV representation.




[Election-Methods] Matrix voting and cloneproof MMP questions

2008-07-05 Thread Kristofer Munsterhjelm
I thought I could ask a few questions while otherwise being busy making 
my next simulator version :-) So here goes..


First, when a group elects a smaller group (as a parliament might do 
with a government, although real parliaments don't do it this way), 
should the method used to elect the smaller group be proportional?


I think one could make a majoritarian version with cardinal 
ratings/Range. It'd work this way: for n positions, each voter submits n 
rated ballots. Then, with k candidates, make a k*n matrix, where 
position (a,b) is the sum of the ratings the voters assigned candidate a 
on the ballot for position b.


We've now reduced the problem to picking (candidate, position) entries 
so that the sum is maximized. The constraints on the problem are: only 
one value can be selected from each row (can't have the same candidate 
for two positions), and only one value can be selected from each column 
(can't have two candidates for the same position). That is the 
assignment problem, which is solvable in polynomial time.
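This constrained sum-maximization is the classic assignment problem. A brute-force sketch for tiny cases (the ratings are made up; for realistic sizes one would use a polynomial-time method such as the Hungarian algorithm):

```python
from itertools import permutations

def best_assignment(score):
    """Maximize the total score, choosing one candidate per position
    with each candidate used at most once. score[c][p] is the summed
    rating for candidate c in the ballots for position p.
    Brute force over permutations -- exponential, but fine for tiny k."""
    k, n = len(score), len(score[0])  # k candidates, n positions (k >= n)
    best, best_sum = None, float("-inf")
    for cands in permutations(range(k), n):
        total = sum(score[c][p] for p, c in enumerate(cands))
        if total > best_sum:
            best, best_sum = cands, total
    return best, best_sum

# Hypothetical summed ratings: 3 candidates, 2 positions.
scores = [[9, 1],
          [8, 8],
          [2, 7]]
# best_assignment(scores) picks candidate 0 for position 0 and
# candidate 1 for position 1, for a total of 17.
```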


That's for majoritarian matrix votes with cardinal ratings (or Range - 
could also be median or whatever as long as the scores are commensurable).


(On a related note, has anyone tried to use Range with LeGrand's 
Equilibrium Average instead of plain average?)


Perhaps the same pick-the-best-sum reasoning could be extended to a 
Condorcetian matrix vote, using the Kemeny score for the Condorcet 
matrix for the position in question instead of rating sums/averages. But 
as far as I remember, Kemeny scores relate to social orderings, not just 
candidate choices, so maybe one could use the Dodgson score instead -- 
but that may not be comparable in cases where different candidates are 
Condorcet winners in different elections, since those would all have 
Dodgson scores of 0 (no swapping required).


In any case, the reduction above won't work if matrix voting methods 
ought to be proportional. I'm not sure whether it should be majoritarian 
or proportional, and one could argue for either - majoritarianism in 
that that's how real-world parliamentary governments are formed 
(negotiations notwithstanding), and proportionality because some group 
may be very good at distinguishing suitable foreign ministers while some 
other, slightly larger group, might not do very well at that task but be 
good at distinguishing suitable ministers of the interior.



Second, I've been reading about the decoy list problem in mixed member 
proportionality. The strategy exists because the method can't do 
anything when a party doesn't have any list votes to compensate for 
constituency disproportionality. Thus, cloning (or should it be called 
splitting?) a party into two parties, one for the constituency 
candidates, and one for the list, pays off. But is it possible to make a 
sort of MMP where that strategy doesn't work?


That MMP method would have to use some kind of reweighting for those 
voters who got their way with regards to the constituency members, I 
think, because if the method just tries to find correlated parties, the 
party could theoretically execute the strategy by running all the 
constituency candidates as independents.
What kind of reweighting would that be? One idea would be to have a rule 
that says those with say x in the constituency vote get 1-x in the list 
vote. Then vary x until the point of party proportionality is found. No 
matter what party someone who makes a difference with regards to the 
constituency candidate chooses, his vote loses power proportionally, and 
thus decoy lists wouldn't work.


No concrete methods here, but maybe someone else will add to them... or 
find flaws in my reasoning and correct them :-)




Re: [Election-Methods] Matrix voting and cloneproof MMP questions

2008-07-08 Thread Kristofer Munsterhjelm

James Gilmour wrote:

Kristofer Munsterhjelm  Sent: Sunday, July 06, 2008 12:10 AM
Second, I've been reading about the decoy list problem in mixed member 
proportionality. The strategy exists because the method can't do 
anything when a party doesn't have any list votes to compensate for 
constituency disproportionality. Thus, cloning (or should it be called 
splitting?) a party into two parties, one for the constituency 
candidates, and one for the list, pays off. But is it possible to make a 
sort of MMP where that strategy doesn't work?


I don't know about making it not work, but the 'overhang' provisions in 
some versions of MMP would, at least partly, address this problem. The 
version of MMP used for elections to the Scottish Parliament (no 
overhang correction) is wide open to this abuse, and we already have two 
registered political parties that could make very effective use of it IF 
they so wanted. The Labour Party and the Co-operative Party jointly 
nominate candidates in some constituencies. The Co-operative Party does 
not nominate any constituency candidates nor does it contest the 
regional votes.


I don't doubt that the problem exists. After all, the term "decoy list" 
(lista civetta) comes from the Italian abuse of the system. Do you know 
of any countries that do have overhang provisions to ameliorate the 
problem?


 Basically, MMP is a rotten voting system, with or without the

'overhang' correction, and it should be replaced by a better system of 
proportional representation.


Even though I think multiwinner methods should be party-neutral, I can 
see the appeal of MMP: parties are guaranteed to get their share of the 
seats, even if the constituency vote is disproportional. Thus they can't 
say that they were robbed of seats because of the quirks of the system. 
While in reality such complaints would be infrequent (because those who 
have power in a very disproportional system are those where the 
disproportionality swung their way), why have disproportionality when it 
can be avoided?


If we generalize this, the list part of MMP is a patch to the 
disproportionality of the constituency method, to take advantage of 
explicitly-known properties (like party allegiance). That suggests that 
we use a proportional multiwinner method (like STV) for larger 
constituencies, and then award list seats (of a much smaller share than 
half the parliament) to patch up whatever disproportionality still 
exists - even if the multiwinner method is perfect, rounding errors 
regarding district size would introduce some disproportionality.


At that point, the generalized MMP with STV sounds a lot like Schulze's 
suggestion for Berlin.





Re: [Election-Methods] Matrix voting and cloneproof MMP questions

2008-07-08 Thread Kristofer Munsterhjelm

Rob LeGrand wrote:

Kristofer Munsterhjelm wrote:

(On a related note, has anyone tried to use Range with LeGrand's
Equilibrium Average instead of plain average?)


I don't recommend using Equilibrium Average (which I usually call AAR
DSV, for Average-Approval-Rating DSV) to elect winner(s) from a finite
number of candidates.  AAR DSV is nonmanipulable when selecting a single
outcome from a one-dimensional range, just as median (if implemented
carefully) is, but it is manipulable when used as a scoring function in
a way similar to how Balinski and Laraki proposed using median:

http://rangevoting.org/MedianVrange.html


You use movie site data for your AAR-DSV examples. Does AAR-DSV 
manipulability mean that a movie site that uses it would face difficulty 
telling users which movie is the most popular or highest rated? The 
manipulation incentives wouldn't affect them as strongly (since very few 
users rate all of the movies), but they would in principle remain, 
unless I'm missing something...




Re: [Election-Methods] A Better Version of IRV?

2008-07-12 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

Again, why NOT Condorcet?

Its ballot is ranking, essentially the same as IRV, except the 
directions had better be more intelligent:

 Rank as many as you choose - ranking all is acceptable IF you choose.
 Rank as few as you choose - bullet voting is acceptable if that 
completes a voter's desired expression.

 Equal ranking permitted.

Condorcet usually awards the same winner as IRV.  Major differences:
 Condorcet looks at ALL that the voters rank, while IRV ignores parts.
 Condorcet recognizes near ties, and tries to respond accordingly.

Could be a debate about the near ties - would it be better to resolve 
such with a runoff?  Runoffs take time and are expensive.  Are they 
enough better than what Condorcet can do with the original vote counts?


On technical merit alone, why not Condorcet indeed? But the thread was 
about momentum. In the situation where IRV can't be stopped, what is the 
best way to nudge IRV towards something more desirable while still 
keeping it IRV-ish enough that it'll retain the momentum of pure IRV?


One modification that's been mentioned before is bottom-two runoff: of 
the two last-placed candidates, eliminate the one that fewer voters 
prefer to the other. That would ensure a Condorcet winner always wins, 
but to core IRV supporters, that's a weakness, because the Condorcet 
winner could be a weak centrist. The ameliorated procedure would also 
fail LNHarm.


If the people on which the momentum is based would support any sort of 
elimination procedure, then I think Borda-elimination would be better; 
so what one really has to ask is, if IRV is unstoppable, then how far 
from pure IRV can you go and still have it be IRV? IRV with candidate 
withdrawal? IRV with candidate completion? BTR-IRV? Schwartz,IRV? Any 
sort of elimination system? Any sort of ranked ballot system?



One argument against Condorcet, which one may call half-technical, is 
complexity. It's technical because it regards the method itself and not 
whether Condorcet Winners are good winners (or similar), and 
nontechnical because what's complex to a computer may not be complex to 
a person and vice versa.


As far as complexity with regards to Condorcet goes, the good Condorcet 
methods are complex. Schulze may be easy to program (once you know the 
beatpath algorithm), but explaining beatpaths to the average voter is 
going to be hard. Copeland is easy but not very good and ties a lot.


One thing I've observed is that IRV focuses on how the process is done, 
while Condorcet methods focus on properties (the winner is the candidate 
who wins all one-on-one contests). I'd say explaining properties would 
be more easily understood than explaining the process, but apparently 
this isn't a great limitation for IRV, given its momentum so far.


Perhaps Ranked Pairs would have a chance? It's one of the better 
Condorcet methods (cloneproof, etc.), and if people accept the pairwise 
comparison idea, it should follow quite easily. Say something like: you 
can't please everyone all the time, so please the most, which is to say 
that one locks preferences in the order of greatest victories first. 
Then anyone complaining because his group's (cyclic) preference was not 
locked could be rebutted by a larger group saying that if it had been, 
more people (namely, that larger group) would have been overridden. Here 
you have both method (locking) and properties (group complaint 
immunity), as well.


It'd be interesting to investigate which simple or intuitive methods are 
the best. I don't know what would constitute simple to voters; perhaps 
"of those candidates that [some statement], choose the one that [some 
statement]", or "[somehow reduce the set of candidates] until [criterion 
is met], then that one is the winner", for various sentence parts inside 
the brackets. Those are all method-based explanations; maybe 
property-based ones would be better. If the voter trusts that the method 
does what the property says, and the property is desirable, then that 
could be the case.




Re: [Election-Methods] A Better Version of IRV?

2008-07-13 Thread Kristofer Munsterhjelm
I don't see how IRV's failure to elect the Condorcet candidate is 
necessarily linked to its non-monotonicity.
There are monotonic (meets mono-raise) methods that fail Condorcet, and 
some Condorcet methods that fail mono-raise.


(For information: I think Bucklin would be an example of the former, and 
one of the Borda-elimination methods would be an example of the latter.)


I think Smith (or Schwartz),IRV is quite a good Condorcet method. It 
completely fixes IRV's failure of Condorcet, while being more 
complicated (to explain, and at least sometimes to count) than plain 
IRV, and a Mutual Dominant Third candidate can't be successfully buried.
But it fails Later-no-Harm and Later-no-Help, is vulnerable to Burying 
strategy, fails mono-add-top, and keeps IRV's failure of mono-raise and 
(related) vulnerability to Pushover strategy.

At the risk of taking this thread away from its original topic, I wonder 
what you think of Smith,X or Schwartz,X where X is one of the methods 
Woodall says he prefers to IRV - namely QLTD, DAC, or DSC.


(Since QLTD is not an elimination method, it would go like this: first 
generate a social ordering. Then check if the candidates ranked first to 
last have a Condorcet winner among themselves. If not, check those 
ranked first to (last less one), and so on. As soon as there is a CW 
within the subset examined, he wins. Schwartz,QLTD would be the same, 
but with "has a Schwartz set of just one member" instead of "has a CW".)
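That prefix-shrinking completion could be sketched like this (a sketch under my reading of the procedure; the beats() predicate stands in for the pairwise matrix, and the function name is my own):

```python
def prefix_condorcet_winner(ordering, beats):
    """Given a social ordering (best first) and a pairwise predicate
    beats(a, b), return the Condorcet winner of the longest prefix of
    the ordering that has one, shrinking from the end. The length-1
    prefix trivially has a CW, so this always terminates with a winner."""
    for end in range(len(ordering), 0, -1):
        prefix = ordering[:end]
        for cand in prefix:
            if all(beats(cand, other) for other in prefix if other != cand):
                return cand
    return None  # unreachable for a non-empty ordering
```

With a full-set cycle a > b > c > a and social ordering a, b, c, the full prefix has no CW, so the check falls back to {a, b}, where a wins.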


DAC and DSC only satisfy one of LNHelp/LNHarm, but they're monotonic in 
return. According to Woodall, you can't have all of LNHelp, LNHarm, and 
monotonicity, so in that respect, it's as good as you're going to get. I 
don't know if those set methods are vulnerable to burying, though, or if 
they preserve Mutual Dominant Third.


Then again, satisfying one of the LNHs may not matter since combining it 
with Condorcet in that manner makes the combination fail both LNHs. 
Combining a method that satisfies LNHarm with CDTT gives something that 
still satisfies LNHarm, but the result fails Plurality, and that's not 
good either... and the Condorcet method of Simmons (what we might call 
First preference Copeland) resists burial very well, but it isn't 
cloneproof.




Re: [Election-Methods] Local representation

2008-07-17 Thread Kristofer Munsterhjelm

Juho wrote:
I think already the basic open list provides a quite strong link between 
candidates and voters. Voters will decide which candidates will be 
elected, not the party (this is an important detail). (Extensions are 
needed to provide proportionality between different subgroups of the 
party.)


I'd classify the various party systems like this:

Closed list: Forced party-based voting.
Open list: Opt-out party-based voting.
Ranked ballot PR: Manual opt-in party-based voting.

In other words, although a properly constructed open list may be 
equivalent to ranked ballot PR (it would be pretty easy, if pointless, 
to make a party list that translates the votes into ranked ballots), 
which way the default goes makes a difference.


I would put STV + above-the-line somewhere in there, but because the 
only implementation of above-the-line voting is Australia's, and since 
the forced "rank all below the line if you're going to vote below the 
line" constraint means that it'll be prohibitively expensive, in terms 
of effort, to vote below the line, it's probably quite similar to closed 
list PR.


The "vote for some, then your party completes the rank" method would go 
in between open list and STV (ranked ballot) somewhere.


(I don't think there's a point in having closed list if you can have 
open list. Others may disagree, though; they could argue that coherent 
policy is what matters, and that individual candidates would become 
demagogues and swing to short-sighted public opinion instead of forming 
such coherent policy. But insofar as democracy has a problem in that 
people are shortsighted, that should be handled separately, such as by 
long term limits or rotating assemblies.)




[Election-Methods] Second run of multiwinner proportionality test

2008-07-18 Thread Kristofer Munsterhjelm

Hello all,

I've rewritten my program that tests the proportionality of PR methods 
by assigning binary issue profiles to voters and candidates and 
comparing the council's proportion of candidates in favor of each issue 
with the proportions of the people.


There were some bugs in my previous version. For one, I incorrectly 
implemented IRV so that it got a higher score than should be the case. 
The new version now puts the method that is to IRV as SNTV is to 
Plurality among the best methods.


The full results (of those better than the average randomly chosen 
assembly) are:


0.176552  QPQ(div 0.1, multiround)
0.176552  QPQ(div 0.1, sequential)
0.191093  QPQ(div Sainte-L, sequential)
0.209409  QPQ(div Sainte-L, multiround)
0.230898  STV
0.248373  Maj[Eliminate-Plurality]
0.259064  QPQ(div D'Hondt, sequential)
0.26736   Meek STV
0.280724  QPQ(div D'Hondt, multiround)
0.314229  Maj[Plurality]
0.318016  Maj[AVGEliminate-Plurality]
0.358127  Maj[Eliminate-Heisman Trophy]
0.362992  ReweightA[Heisman Trophy]
0.391753  Maj[AVGEliminate-Heisman Trophy]
0.391753  Maj[Heisman Trophy]
0.393261  -- Random candidates --

That's 23 rounds, RMSE, normalized for each round so that 0 is the best 
and 1 the worst of ten thousand random assemblies. The other end (the 
most majoritarian) has:


0.70004   Maj[Borda]
0.705089  Maj[AVGEliminate-Borda]
0.709886  Maj[Cardinal-20(norm)]
0.718258  Maj[ER-QLTD]
0.722618  Maj[Cardinal-20]
0.731794  Maj[ER-Bucklin]
0.750181  Maj[Eliminate-VoteForAgainst]
0.758258  Maj[Schulze(wv)]
0.761436  Maj[AVGEliminate-Antiplurality]
0.8394    Maj[Eliminate-Antiplurality]

Some notes on the terminology: Eliminate-X is loser elimination. 
AVGEliminate-X is like Carey's Q method, only generalized: it eliminates 
all of those with worse-than-average scores. Maj[X] is the simple 
porting of single-winner X to a multiwinner system, where one just picks 
the n (for a council of n) highest ranked in the social ordering. 
Heisman Trophy is the positional system 2, 1, 0, 0, ..., 0. 
VoteForAgainst is 1, 0, ..., -1. Antiplurality is 1, 1, 1, ..., 0. 
ReweightA[X] is like RRV, only with positional scores (of positional 
method X) instead of range scores.



The really strange thing here is that my method seems to have a 
substantial small-state (small-party) bias. For instance, QPQ with a 
divisor of 0.1 is scored much better than QPQ with a divisor of 0.5 
(Webster/Sainte-Lague) or with 1 (D'Hondt).


I don't know why that happens, as it's not obvious from the idea 
(generate hidden binary issue profiles, generate each voter's ranking of 
the candidates based on the Hamming distance to each candidate, run the 
ballots through the election method, compare the proportions of TRUEs in 
the issue profiles of the assembly to the proportions among the people). 
Could it be something related to the assumption that people vote 
sincerely? Or is the variety of positions so large that, in order to get 
a lower score, it's better to elect someone who supports your opinion 
than someone who would deprive you of that opinion while smoothing out 
all the other opinions a little bit? But if so, then using RMSE to 
measure party proportionality in multiparty states would be flawed, and 
someone would have written about it.
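The pipeline in that parenthesis can be sketched as follows (a simplified sketch, not the simulator itself; the election method step is abstracted away, and all names are my own):

```python
import random

def hamming(p, q):
    """Number of issues on which two binary profiles disagree."""
    return sum(a != b for a, b in zip(p, q))

def run_round(n_voters, n_cands, n_issues):
    """Generate random binary issue profiles and have each voter rank
    the candidates by Hamming distance, closest first."""
    profile = lambda: [random.random() < 0.5 for _ in range(n_issues)]
    voters = [profile() for _ in range(n_voters)]
    cands = [profile() for _ in range(n_cands)]
    ballots = [sorted(range(n_cands), key=lambda c: hamming(v, cands[c]))
               for v in voters]
    return voters, cands, ballots

def rmse(assembly, voters, n_issues):
    """Root-mean-square error between the assembly's and the voters'
    proportions in favor of each issue."""
    err = 0.0
    for i in range(n_issues):
        pa = sum(m[i] for m in assembly) / len(assembly)
        pv = sum(v[i] for v in voters) / len(voters)
        err += (pa - pv) ** 2
    return (err / n_issues) ** 0.5
```

The ballots would be fed to each multiwinner method under test, and the elected members' profiles scored with rmse() against the voters'.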


Other odd results: Ordinary STV scores better than Meek STV (Meek is 
usually considered better), and single-round QPQ scores better than 
multi-round QPQ (where the latter is usually considered better). Some 
methods that fail Droop proportionality score better than ones that pass 
it: namely, IRV (Eliminate-Plurality) scores better than Meek STV.


Perhaps the better PR rules do, well, better most of the time, but there 
are some instances where they do much worse. To detect that reliably, 
I'll have to add Pareto-domination tests or median (instead of/in 
addition to average) scores.


It may also be that 23 rounds is far from enough, but I've run some of 
these longer (up to 500 rounds) and the general position isn't that far 
off. Sometimes, IRV even gets ahead of STV.


(Here's an example with only QPQ being tested, with 76 rounds:

 0.223125  QPQ(div D'Hondt, sequential)
 0.231456  QPQ(div D'Hondt, multiround)
 0.153442  QPQ(div Sainte-L, sequential)
 0.16047   QPQ(div Sainte-L, multiround)
 0.146262  QPQ(div 0.1, sequential)
 0.146262  QPQ(div 0.1, multiround)

 The small-party bias still seems to hold. And here's a 423-round test 
with Meek and ordinary STV, and IRV:

 0.203673  Maj[Eliminate-Plurality]
 0.208361  STV
 0.220629  Meek STV.)



Re: [Election-Methods] A Better Version of IRV?

2008-07-18 Thread Kristofer Munsterhjelm

Chris Benham wrote:


At one stage Woodall was looking for the method(s) that meet as many of 
his monotonicity properties as possible while keeping Majority 
(equivalent to Majority for Solid Coalitions). That is what led him to 
Quota-Limited Trickle Down (QLTD) and then Descending Acquiescing 
Coalitions (DAC).

But I wouldn't conclude from this that for public political elections he 
currently prefers those methods (or DSC) to IRV.


I was going by his statement of DAC being "the first system I'm really
happy with", or something to that effect. It's true that he could have
changed his mind, and given your example below, he probably did.


They don't meet Mutual Dominant Third.
 
49: A
48: B
03: C>B
 
The MDT winner is B, but DSC elects A.
 
03: D
14: A
34: A>B
36: C>B
13: C

The MDT winner is C, but DAC elects B.
 
This latter example (from Michael Harman, aka Auros) I think put
Woodall off DAC. B is an absurd winner. Without the 3 ballots
that ignore all the competitive candidates, the majority favourite is C.


I agree that it is quite absurd.
In 2003, you referred to a method called Descending Half-Solid
Coalitions, which, despite failing both LNHarm and LNHelp, might be
preferable to them. What is DHSC, and does it salvage DAC/DSC?


But of course Smith implies MDT.


You said that Schwartz,IRV protects MDT candidates from being buried.
Does that hold for all Schwartz,X if X passes MDT? It would seem to do
so, since burial most often involves a cycle, and without a cycle the
Schwartz (and Smith) set is just a single candidate.

In that respect, Smith, or Schwartz,[something that passes MDT] is not
redundant, even if Smith itself implies MDT.


DSC and DAC aren't just monotonic (meet mono-raise), they meet
Participation (which of course is lost when combined with Smith/Schwartz
because Participation and Condorcet are incompatible).
 
I think all methods that meet Condorcet are vulnerable to Burial. By 
themselves, DSC is certainly vulnerable to burial (and has a 0-info 
random-fill incentive) and DAC has a strong truncation incentive.
Your question about QLTD has been asked before:
http://lists.electorama.com/htdig.cgi/election-methods-electorama.com/2005-March/015367.html
 
http://lists.electorama.com/htdig.cgi/election-methods-electorama.com/2005-March/015369.html


I see. If QLTD isn't cloneproof (and it isn't), then the result won't be
either, hence we could just as well go with first preference Copeland 
(unless that has a flaw I'm not seeing).




Re: [Election-Methods] A Better Version of IRV?

2008-07-23 Thread Kristofer Munsterhjelm



If QLTD isn't cloneproof (and it isn't), then the result won't be
either, hence we could just as well go with first preference Copeland
(unless that has a flaw I'm not seeing).

What is supposed to be the attraction of "first preference Copeland"?
And how do you define it exactly?


The attraction of FPC (which is what I call Simmons' supposed "cloneproof 
extension of Copeland", which wasn't cloneproof after all) is that 
it's extremely hard to do burial with it.


The definition of first preference Copeland is:

The candidate for which those who beat him pairwise gather fewest 
first-place votes, in sum, is the winner.


Simmons invented the method; I just use that name instead of "Simmons' 
cloneproof method", as it isn't cloneproof. It's an extension of 
Copeland since the only information it takes from the pairwise matrix is 
binary; in this case, whether some candidate Y beats X, and in 
Copeland's case whether some candidate Y is beaten by X; thus "first 
preference Copeland".
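To make the definition concrete, here is a small sketch of FPC on full ranked ballots. The function name and the (count, ranking) ballot format are my own illustration, not Simmons' notation:

```python
# Sketch of "first preference Copeland" as defined above: a candidate's
# penalty is the summed first-place votes of everyone who beats them
# pairwise, and the lowest penalty wins.

from collections import Counter

def fpc_winner(ballots):
    """ballots: list of (count, ranking string), best candidate first."""
    candidates = sorted(set(ballots[0][1]))
    first = Counter()                              # first-place votes
    pairwise = {a: Counter() for a in candidates}  # pairwise[a][b]: a above b
    for count, ranking in ballots:
        first[ranking[0]] += count
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                pairwise[a][b] += count
    penalty = {}
    for x in candidates:
        beaters = [y for y in candidates
                   if y != x and pairwise[y][x] > pairwise[x][y]]
        penalty[x] = sum(first[y] for y in beaters)
    return min(candidates, key=lambda x: penalty[x])

# A cyclic electorate (B beats A, C beats B, A beats C): each candidate's
# penalty is the first-preference count of the single rival beating them.
print(fpc_winner([(8, "BAC"), (5, "CBA"), (4, "ACB")]))  # prints C
```

Note that a Condorcet winner is beaten by nobody and so has penalty zero, which is why FPC is Condorcet-consistent.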


I could also just call it Simmons, I suppose, since the other Simmons 
methods I know of have defined names of their own and so wouldn't be 
confused with it.


First preference Copeland would be vulnerable to the situation where 
multiple candidates have equal first place rival scores. One way to 
solve this would be to use Schwartz, or Schwartz//. Another would be to 
use a positional system that counts second, third, etc, place votes also 
but only very weakly, like Nauru-Borda (or something going 1/10^p, p = 
0..n); yet another would be to use Bucklin (if there are any ties, count 
first and second place votes of rivals, etc), and even another would be 
to have an approval cutoff and use Approval instead of first preferences 
(unless everybody bullet votes).
I haven't tested the positional or Bucklin variants here so I don't know 
if those solutions would be any good. I'm not sure if it's possible to 
make a situation where two candidates are in the Schwartz set yet none 
of their rivals rank first or the rivals' ranked-first sum is equal for 
all. Perhaps that's possible if you make dummy candidates that 
collectively hog all the first place votes, but who each are ranked 
below the various other candidates enough times that they don't beat any 
of them? Something like

 Q1 > A > B > C > Q2 > Q3
 Q2 > B > C > A > Q3 > Q1
 Q3 > B > A > C > Q1 > Q2
...
In the general case, such scaffolding will work (block winners) with 
Schwartz,. It won't work with Schwartz// unless you can somehow get 
all the Qs inside the Schwartz set yet still have them cover each 
candidate, first-preference wise, equally. But (reading the "cloneproof 
Copeland" thread even as I'm writing this), Schwartz//FPC would not be 
summable.




Re: [Election-Methods] delegate cascade

2008-07-23 Thread Kristofer Munsterhjelm

Juho wrote:

On Jul 22, 2008, at 14:26 , Michael Allan wrote:


I'm grateful I was directed to this list.  You're clearly experts.  I
wish I could reply more completely right away (I should know better
than to start 2 separate threads).  I'll just reply to Juho's
questions today, and tomorrow I'll look at Abd's work.  (You've been
thinking about this longer than I have, Abd, and I need to catch up.)


1) All voters are candidates and it is possible that all voters consider
themselves to be the best candidate. Therefore the method may start from
all candidates having one vote each (their own vote). Maybe only 
after some

candidates have numerous votes and the voter himself has only one vote
still, then the voter gives up voting for himself and gives his vote to
some of the frontrunners. How do you expect the method to behave from 
this

point of view?


The basic rule of vote flow is: a vote stops *before* it encounters a
voter for a second time, and it remains held where it is.  A vote is
always considered to have encountered its original caster
beforehand.  So it is not possible to vote for oneself.  It is
permitted, but the vote stops before it is even cast - there is no
effect.


Ok, not allowing voters to vote for themselves may to some extent solve 
the problem. (Some voters may however decide to abstain for a while.)


This is a bit offtopic (again), but another idea that might be less 
prone to strategy in the case of cyclical proxy candidacy occurred to 
me: use eigenvector or Markov-based methods to distribute the deferred 
power smoothly over the candidates in the cycle.


At this point, the method looks similar to the original PageRank used to 
vote on web pages, where various web pages vote for the importance of 
each other - and such voting chains may be cyclical.
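The eigenvector idea can be sketched with a few lines of power iteration. This is my own illustration of the concept, not a worked-out proposal; the function name, the `delegates` map, and the damping constant are all assumptions:

```python
# A PageRank-style Markov chain that spreads delegated voting power
# smoothly over a cyclic proxy graph. delegates[v] names the proxy that
# voter v votes for; a damping factor keeps the chain well-behaved.

def delegation_power(delegates, damping=0.85, iterations=200):
    voters = list(delegates)
    n = len(voters)
    power = {v: 1.0 / n for v in voters}
    for _ in range(iterations):
        # Each voter keeps an undamped residue and passes the rest on.
        new = {v: (1 - damping) / n for v in voters}
        for v in voters:
            new[delegates[v]] += damping * power[v]
        power = new
    return power

# Three proxies delegating in a cycle, plus one voter backing A: the
# power spreads over the cycle instead of being lost inside it.
power = delegation_power({"A": "B", "B": "C", "C": "A", "D": "A"})
```

Total power is conserved (it always sums to 1), so the cycle members end up sharing nearly all of it while D keeps only the undamped residue.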




Re: [Election-Methods] RELEASE: Instant Runoff Voting

2008-07-28 Thread Kristofer Munsterhjelm

(Oops, seems I sent this only to James Gilmour. Let's try again.)

James Gilmour wrote:

 it would have to look at the entire ballot.

 That is a consequence of your interpretation of how the voting system
 is supposed to work and what the voting system is supposed to
 be doing.  But that's not what IRV is about.  As I said in the
 previous message, the origins of IRV are in the Exhaustive Ballot,
 and in the Exhaustive Ballot there is no possibility of looking at
 the entire ballot.  IRV is not about satisfying a set of criteria
 derived from social choice philosophy.

In taking the people out of the loop in all rounds but the first, the 
reduction of Exhaustive Ballot to IRV turns IRV into yet another ranked 
ballot method. Thus it wouldn't matter if IRV originates in Exhaustive 
Ballot or not, because it has to stand as a ranked ballot method among 
other ranked ballot methods, using criteria and tests that can be 
applied to all of them.


 If you want something that only a social choice approach can deliver,
 then clearly IRV is not for you.  But that does not make Kathy
 Dopp's original statement a valid criticism of IRV.

Wouldn't it be, from a social choice point of view if no other?

 Or more concrete: if you want the sort of compromise that Condorcet
 gives (and you don't think that's a weak centrist), then you can't
 have LNHarm. I don't think you can have LNHelp either, but I'm not
 sure about that.

 I agree, but one could I think reasonably argue in the specific case
 of Condorcet that it does comply with LNHarm (at least, in Condorcet
 where there were no cycles or ties).  Your higher preferences are
 always placed above your lower preferences in the Condorcet
 head-to-head comparisons.  So YOUR lower preference can never harm
 YOUR higher preference.  But that is certainly not true for many other
 social choice voting systems that use the preference information in a
 quite different way.

That's true; it's the cycles that cause the problem. Still, Woodall's 
proof shows that it's possible to make a ballot set with no CW in such a 
way that no matter who wins, it's possible to append a later preference 
to some of the ballots so that another candidate becomes the CW.
The problem is in the transition between cycle and non-cycle, so insofar 
as Condorcet winners usually occur, the Condorcet method passes LNHarm; 
but since cycles can occur, that means Condorcet is incompatible with 
LNHarm.


If we look at it from what you call the social choice point of view, 
then what has happened that makes Condorcet fail LNHarm is that it's 
used a later preference to find the Condorcet winner that it didn't know 
of, had it only used earlier preferences.


 Many on this list may think that, but it is my experience of more
 than 45 years as a practical reformer explaining voting systems to
 real electors, that 'later no harm' does matter greatly to ordinary
 electors.  If they think the voting system will not comply with
 'later no harm', their immediate reaction is to say "I'm not going
 to mark a second or any further preference because that will hurt my
 first choice candidate - the one I most want to see elected."  And
 of course, if you once depart from 'later no harm' you open the way
 to all sorts of strategic voting that just cannot work in a 'later
 no harm' IRV (or STV) public election with large numbers of voters.

 If the method fails LNHarm about as often as it fails LNHelp,  then
 that argument should fail, because bullet voting may harm your other
 choices as much (or more, no way to know in general) as consistently
 voting all of them will. Ceteris paribus, it's better to have a
 method that passes both of the LNHs than neither (since you get
 strategy in the latter case), but the hit you take might not be as
 serious as it seems at first.

 Your argument in respect of bullet voting in IRV is based on a
 misinterpretation of what that voter has said to the Returning
 Officer.  Because IRV conforms to LNHarm, a bullet vote, or any
 truncation, is a voter saying "After this point, I opt out and leave
 any choice among the other candidates to the other voters."  Such a
 voter has no other choices.  So there is no question of harming them
 or helping them.

That wasn't an argument against bullet voting in IRV. I know that IRV 
satisfies both LNHarm and LNHelp (it's also nonmonotonic, which is a 
consequence of its satisfying both along with Mutual Majority; but 
that's not relevant to the case here).


What I'm saying, regarding voting systems that fail LNH, is that you can 
divide strategies into those that every voter would use just to maximize 
the power of the ballot, and those that require information to pull off. 
If a voting system satisfies neither of the LNHs, and the rate of 
failure is balanced (doesn't consistently harm earlier candidates nor 
consistently help earlier candidates), then ordinary voters won't 
truncate (resp. randomly fill) because they don't know whether doing so 
would 

Re: [Election-Methods] [english 94%] PR favoring racial minorities

2008-07-31 Thread Kristofer Munsterhjelm

Jobst Heitzig wrote:

Hello all,

although I did not follow all of the discussion so far, the following 
question strikes me:


Why the hell do you care about proportional representation of minorities 
when the representative body itself does not decide with a method that 
ensures a proportional distribution of power?


It is of no help for a minority to be represented proportionally when 
still a mere 51% majority can make all decisions!


If the assembly is elected using a majoritarian method, then that 51% 
majority is a majority of a majority. Thus, even with the constraint of 
power inequity, a majority of a representative body is better than a 
majority of a majority.


So, if you really care about the rights of minorities, the consequence 
would be to also promote some non-majoritarian, truly democratic 
decision method for the representative body itself. Examples of such 
methods have been discussed here.


That's right, but it would also have to be somehow moderated, so that 
the result isn't just that the governing side puts some laws into effect 
(or elects a government), and then, having temporarily lost power, what 
was the opposition and is now the governing side uses all of *their* 
power to cancel it (or elect another government), making the collective 
decision pattern oscillate wildly.


One possible way to handle this would be to increase the majority 
required to pass anything from 50%+1 to, say, 55% or 60%, towards 
consensus. That'll have a bias towards the status quo, but not towards 
any given political majority in the assembly, and it won't have problems 
with hunting.


Another non-compensation option is to weight the coalitions so that 
they get near-equal power by Banzhaf calculations. But in party-neutral 
systems, who the coalitions are is not obvious, and it may be that 
there's no solution even in declared-party systems; for instance, 
there's no way that I know of to adjust relative assembly seat 
proportions so that coalitions have Banzhaf (or Shapley-Shubik) power of 
40%, 31%, 29%. The power indices won't be relevant if some coalition 
members vote against the grain, either.
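The 40%/31%/29% case is easy to check by brute force. A hedged sketch (party names and seat counts are illustrative, matching the figures above):

```python
# Brute-force Banzhaf power for weighted parties: a party is "critical"
# in a winning coalition if removing it makes the coalition lose; the
# (normalized) Banzhaf index is each party's share of all critical swings.

from itertools import combinations

def banzhaf(weights, quota):
    parties = list(weights)
    swings = {p: 0 for p in parties}
    for r in range(1, len(parties) + 1):
        for coalition in combinations(parties, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:
                for p in coalition:
                    if total - weights[p] < quota:  # p is critical
                        swings[p] += 1
    total_swings = sum(swings.values())
    return {p: swings[p] / total_swings for p in parties}

# With seats 40/31/29 and a bare-majority quota, any two parties can pass
# a motion, so all three get identical power (1/3 each) despite unequal
# seats - no seat adjustment produces 40%/31%/29% power here.
print(banzhaf({"X": 40, "Y": 31, "Z": 29}, quota=51))
```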




Re: [Election-Methods] New improved fla for vote counts to be reported for auditing IRV elections

2008-08-02 Thread Kristofer Munsterhjelm

Kathy Dopp wrote:

Well, any election method can be parallelized (in quote marks) with a
superpolynomial amount of information when there are as many choices as
candidates.


I am not certain what you mean. Precisely, any Ranked Choice ballot
has a number of possible permutations of all the candidates given by
the fla above; or a number of unique candidate rankings given by the
fla above.


Since n! < n^n, it also reduces to a polynomial amount of
information for a fixed number of choices or rankings (n for one, n*(n-1) =
O(n^2) for two, and so on).

Thus, if the activists claim that IRV escapes the problem if you do it in
that way,


Not sure what you mean by "that way".


My reference here was, perhaps a bit obliquely, to the summability 
criterion. The summability criterion says that a method is summable if 
it produces an internal count that can be aggregated with other such 
internal counts, so that running the method on the aggregated count 
gives the same result as running it on all the ballots that made up the 
counts, and the size of the internal count is polynomial with respect to 
the number of candidates running.


In other words, if a method is summable (absent edge cases like the 
internal count being n^1000), then it can be counted in districts - 
like Plurality, Borda, Condorcet, and the others. IRV isn't summable, 
and that is a problem.
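For instance, with a Condorcet method the internal count is the n-by-n pairwise matrix, and district matrices simply add entrywise. A minimal sketch (the ballot format and function names are my own illustration):

```python
# Summability sketch for Condorcet: each district reports an n*n pairwise
# matrix m, where m[a][b] counts voters ranking candidate a above b.

def pairwise_matrix(ballots, candidates):
    idx = {c: i for i, c in enumerate(candidates)}
    n = len(candidates)
    m = [[0] * n for _ in range(n)]
    for ranking in ballots:
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                m[idx[a]][idx[b]] += 1   # a ranked above b
    return m

def add_matrices(m1, m2):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

cands = "ABC"
district1 = ["ABC", "BCA"]
district2 = ["CAB"]
combined = add_matrices(pairwise_matrix(district1, cands),
                        pairwise_matrix(district2, cands))
# Aggregating district counts gives the same matrix as a central count:
assert combined == pairwise_matrix(district1 + district2, cands)
```

The matrix has n^2 entries, a polynomial in the number of candidates, which is exactly what IRV's round-by-round eliminations cannot provide.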


When you referred to activists, I assumed you meant IRV activists, and 
that they would use the equation you showed to say that San Francisco's 
IRV is summable. It technically is, because the count of the various 
preference combinations is the internal count, and with only 2 ranks 
allowed (or 3, or any constant less than the number of candidates), the 
internal count (of how many voters had this preference and that 
preference) is polynomial with respect to the number of candidates.



you can say that that holds for absolutely every kind of ranked
ballot method that exists - at least for the neutral ones, and election
systems really should be neutral.


However, as you know, it is *not* true that the counting method is
complex or non-additive for each precinct with *any* counting method
for Ranked Choice ballots. The IRV method is, for instance, far more
complex than the approval or Borda methods where it is easy to audit
the accuracy of the machine counts *without* having to publicly report
all the tallies for all permutations of candidate orderings in order
to do valid partial post-election audits.


My point here is that if IRV proponents claim that IRV with truncated 
preferences is summable, then you can say that that's true of all ranked 
vote systems because they all take the same input (namely, ranked 
ballots). Therefore, the criterion isn't worth much and, as you point 
out, you should use other things to judge whether IRV is good or not.



Rated ballots with a granularity permitting k possible ratings for a single
candidate would have complexity k for a single choice, k^2 for two, ..., k^n
for a full preference ballot.


Yes. The ballots might be as complex but the counting method is not as
complex and doing partial post-election auditing does *not* require
keeping track of all the permutations of possible unique ballot
choices that voters can make.


It's certainly possible to make non-summable methods for rated ballots 
that don't force you to truncate - just imagine something like a rated 
IRV where, instead of the first preferences, the highest remaining 
rating of each voter is counted towards each candidate, and then the one 
with the lowest count is eliminated.


Still, I understand what you're saying: Range certainly isn't very 
complex, and Approval is even simpler (count all the votes).


On an aside, if I were to pick a Range-like method, I'd prefer a DSV 
version of Range to handle the compression incentive, but then it gets 
more complex (and perhaps even loses summability).



At some point, rated or ranked, it becomes easier to simply send every
voter's preference. An interesting consequence of that would be that it'd be
possible to fingerprint one's own vote to vote-buyers if there are a


Yes. That certainly might be possible because in most precincts, if
there were enough candidates, the number of voters would be far less
than the number of possible unique ballot ranking choices.

That is a disadvantage of any ranked choice voting ballot method - the
fact that post-election auditing on the individual ballot level is
probably not a good idea.  But auditing at the ballot level can be
problematic anyway due to ballot privacy concerns, even with a single
choice plurality ballot.


One possible solution would be to do this: Have a device make a rank 
tree. Each node in the tree contains a candidate name and a number 
specifying how many voted the preferences you can get by traversing from 
this level up to the root. For instance, if the ballots were:


1: A > B > C
100: A > C > B
10: B > A > C

Then the tree is
 A : 101
  A > B : 1
   A > B > C : 1
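A sketch of building that tree as a nested prefix trie (my own illustration of the structure described above; the dict layout is an assumption):

```python
# Each node counts how many ballots share the ranking prefix from the
# root down to that node, matching the "A : 101", "A > B : 1" entries.

def build_rank_tree(ballots):
    """ballots: list of (count, ranking string), best candidate first."""
    tree = {}
    for count, ranking in ballots:
        node = tree
        for candidate in ranking:
            entry = node.setdefault(candidate, {"count": 0, "children": {}})
            entry["count"] += count
            node = entry["children"]
    return tree

tree = build_rank_tree([(1, "ABC"), (100, "ACB"), (10, "BAC")])
print(tree["A"]["count"])                   # 101, as in "A : 101"
print(tree["A"]["children"]["B"]["count"])  # 1, as in "A > B : 1"
```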
 

Re: [Election-Methods] voting research

2008-08-03 Thread Kristofer Munsterhjelm

Warren Smith wrote:

--see this:
http://RangeVoting.org/ConitzerSmanipEasy.pdf


Oops, disregard the point I made about not being familiar with IRV 
manipulation. I cited the paper myself!




Re: [EM] [Election-Methods] about IRV median voting (answers to Dopp, Roullon)

2008-08-07 Thread Kristofer Munsterhjelm

Warren Smith wrote:

1. Dopp wanted simple nonmonotone IRV elections examples.
See
http://rangevoting.org/Monotone.html

and here is another:

#voters  Their Vote
8        B>A>C
5        C>B>A
4        A>C>B
If two of the B>A>C voters change their vote to A>B>C, that causes
their true-favorite B to win under IRV.
(If they vote honestly, ranking B top as is, then their most-hated
candidate, C, wins.)


Those are simple enough, but do you have any that satisfy Dopp's 
particular specifications? That is, A wins, but if k (for small k, 
preferably 2) voters join and vote A top, then someone else 
(preferably the one they ranked last) wins.


I think that that'll require more than three candidates. My reasoning is 
that, in order for an A-first vote to change the winner away from A, it 
must have a chaotic influence on the next round. But in three-candidate 
IRV, there are only two rounds, and since A is put first, the first 
round can't change from A to non-A. Then the second round must be A and 
someone else - call that someone else B. But if it's the case that, in 
aggregate, B > A and A > C (which is what you'd use to cause 
nonmonotonicity), then the addition of the two votes couldn't have 
changed the other candidate from C (originally) to B (now), since the 
first round only looked at the first preference votes, and the 
newcomers' ballots ranked A first.
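The example and the two-vote shift can be checked mechanically. A sketch (ties broken alphabetically for simplicity; the ballot format is my own):

```python
# Plain IRV on (count, ranking) ballots: repeatedly eliminate the
# candidate with the fewest first preferences among those remaining.

from collections import Counter

def irv_winner(ballots):
    remaining = set(ballots[0][1])
    while len(remaining) > 1:
        tally = Counter({c: 0 for c in remaining})
        for count, ranking in ballots:
            top = next(c for c in ranking if c in remaining)
            tally[top] += count
        # Eliminate the lowest tally; break ties alphabetically.
        remaining.remove(min(sorted(remaining), key=lambda c: tally[c]))
    return remaining.pop()

# Warren's example: honest votes elect C...
print(irv_winner([(8, "BAC"), (5, "CBA"), (4, "ACB")]))              # C
# ...but two B>A>C voters switching to A>B>C make B win.
print(irv_winner([(6, "BAC"), (2, "ABC"), (5, "CBA"), (4, "ACB")]))  # B
```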




Re: [EM] Can someone point me at an example of the nonmonotonicity of IRV?

2008-08-10 Thread Kristofer Munsterhjelm

Kathy Dopp wrote:

From: rob brown [EMAIL PROTECTED]
Subject: Re: [EM] Can someone point me at an example of the
   nonmonotonicity of IRV?



Are you aware that in going to a doctor to treat an injury, you can get in a
car accident and get injured some more?  Why would anyone go to a doctor if
doing so can actually make your health WORSE?


OK. So you are saying we must use voting methods where voting for our
FIRST-Choice candidate as our LAST Choice helps our first choice
candidate win, and when I go to the polls I have no idea if that is
true or false because I might get into an accident when I drive to
the doctor when I'm sick?

I must have fallen down the rabbit hole when I joined this list.




I think what he means is that although the paradox is severe when it 
does happen (similarly to driving off the road), it happens very rarely, 
and in general, IRV gives a result that's better than, say, Plurality 
applied to ranked ballots.


If it happens too often, though, one could get real paradoxes such as 
one that Ossipoff gave: a candidate being shown to be corrupt (so that 
many rank him lower) leads to that candidate's victory.


There's also the "it smells fishy" reaction that nonmonotonicity - of 
any kind or frequency - evokes. I think that's stronger for 
nonmonotonicity than for things like strategy vulnerability because it's 
an error that appears in the method itself, rather than in the 
move-countermove game brought on by strategy, and thus one thinks "if it 
errs in that way, what more fundamental errors may be in there that I 
don't know of?". But that enters the realm of feelings and opinion.


A less feelings-based way of showing the oddities of IRV would be to 
point at Yee pictures: http://rangevoting.org/IEVS/Pictures.html
The disconnected regions in IRV pictures are a consequence of 
nonmonotonicity - moving towards a candidate leads to another winning. 
Note that a method may be nonmonotonic in general and still be monotonic 
in the subset that 2D Yee-pictures cover. Also, that doesn't resolve the 
problem of figuring out how severe a monotonicity failure is, but just 
how frequently they occur in voting space.



Just because there is a non-zero chance of harm resulting from your choice
does not mean that you should be paralyzed from making a decision.


I am *not* paralyzed. I have DECIDED that IRV is an insane
voting method that would cause much more havoc with voting systems.


Out of curiosity, what voting system would you recommend? I'm not saying 
don't say anything if you don't have an alternative, I'm just curious.



Nope. Never said it was and I have no problem with voting methods that
do such things, but you may have neglected to notice that with IRV,
ranking my first-choice LAST could help my first-choice MORE than
ranking my first-choice FIRST. In IRV, putting my candidate FIRST can
help my LAST place candidate win and putting my candidate LAST can
help my FIRST place candidate win.

Please identify all the other voting methods for me which have that
property (that ranking or rating a candidate LAST can help that
candidate MORE than ranking or rating a candidate FIRST) in addition
to IRV so that I can oppose them as well because I am not familiar
with any of these other methods that share that property with IRV.


I think that all methods that work by calculating the ranking according 
to a positional function, then eliminating one or more candidates, then 
repeating until a winner is found will suffer from nonmonotonicity. I 
don't know if there's a proof for this somewhere, though.


A positional function is one that gives a points for first place, b 
points for second, c for third and so on, and whoever has the highest 
score wins, or in the case of elimination, whoever has the lowest score 
is eliminated.


Less abstractly, these methods are nonmonotonic if I'm right: Coombs 
(whoever gets most last-place votes is eliminated until someone has a 
majority), IRV and Carey's Q method (eliminate loser or those with below 
average plurality scores, respectively), and Baldwin and Nanson (the 
same, but with Borda).


It may be that this can be formally proven or extended to other 
elimination methods. I seem to remember a post on this list saying that 
Schulze-elimination is just Schulze, but I can't find it. If I remember 
correctly, then that means that not all elimination methods are 
nonmonotonic.




Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-15 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

  Or do we want the voter to be able to cancel the ballot and let
  the poll workers know that he needs a paper ballot instead that
  he can mark himself?
 
  I'm fine with the latter.  Actually that seems like a reasonable
  thing to do.

I agree, but that is not happening on all of todays' voting systems.
Election officials seem to be hopelessly slow to grasp the problem or
the solution.


This is a possible place for some new, clear, thinking - the Election 
officials likely couldn't fix this by themselves.


We are in trouble - failures have been proved too often, and there is 
good reason to believe most failures do not even get proved.


Even if the voting machine would be perfect - have no flaws at all - 
having a backup paper balloting option would be a good idea, I think. To 
the extent that democracy is not only about who won, but also about the 
losers (and their voters) being confident that they lost in a fair 
manner, any voter who doesn't trust the machine can request a paper 
ballot instead; and candidates that distrust the machinery can tell 
their voters to use the paper ballot backup.


If the machine works correctly, and candidates and voters know that, the 
load on the backup system will be minimal. However, if the machines are 
untrusted or haven't earned the reputation for being fair, the backup 
will at least limit fraud somewhat.


A possible problem with the solution may occur if many more voters use 
backup ballots than was predicted, and the infrastructure (parties' 
counters, and so on) can't keep up with the load. This weakness is a 
consequence of the load being dynamic (depending on voters' trust in 
the machines), and in the worst case, the backup might be neglected 
completely.


Even working correctly, Plurality voting is not adequate.  While there 
are many competing methods, Condorcet is discussed here:

 Ballot is ranked, as is IRV's.
 Plurality voting is permitted, and can satisfy most voters most of 
the time - letting them satisfy their desires with no extra pain while 
getting their votes fully credited.

 Approval voting is likewise accepted, satisfying a few extra voters.
 Fully ranked voting is Condorcet's promise, giving full response to 
those desiring such when desired.


From a purely technical point of view, I agree. I think the good (at 
least cloneproof) Condorcet methods to focus on here would be either 
Ranked Pairs (easy to explain) or Schulze (seems to be gaining momentum 
for non-governmental purposes, e.g. MTV and Debian), both wv as their 
definitions state. That shouldn't keep us from trying to find things 
like good burial-resistant Condorcet methods, though.



Open source is ESSENTIAL:
 While it encourages quality programming by those who do not want to 
get caught doing otherwise, it also encourages thorough testing by the 
community.

 But, there is a temptation for copying such without paying:
  Perhaps the law should provide a punishment for such.
  Perhaps customers should pay for such code before it
becomes open - and get refunds of such payments if the code, once open,
proves to be unreasonably defective.

The community should be demanding of Congress such support as may help.

While open source could be thought of as just the voting program, 
proper thinking includes the hardware, protecting the program against 
whatever destructive forces may exist, and verifying what happens.


Secret ballot is essential.  While voter should be able to verify the 
vote before submitting such, this is only to verify - goal above is 
election programs that REALLY DO what they promise.


Let's look at this again. What does a voting machine do? It registers 
votes. Surely, that can't be a difficult task, so why use a computer? 
Why not (for Approval or Plurality) just have a simple chip connected to 
a PROM, with the chip in turn connected to a bunch of switches, one for 
each candidate, with a matrix display next to each switch, and a final 
switch to commit the ballot? Such a machine would be provably correct: 
as long as you have a PROM that hasn't been preprogrammed (this can be 
checked at the beginning), and the machine hasn't been compromised 
(rewired switches, backdoor chips), then it'll work as promised.


Reading off the PROMs would require more complex machinery, but it's 
really just an adder. In a Condorcet election, it's a two-loop adder 
(for each candidate, for each ranked below, increment vote_for[a][b]). 
That, too, is not too difficult a task and it should be possible to 
prove that it'll work in all cases.


One might also have to take TEMPEST sniffing and similar things into 
account, but the point is that both actually registering ballots and 
counting the votes is a simple task, and therefore one can inspect the 
device or program to see that it works properly, and more than that, 
that it'll always work properly within 

Re: [EM] Can someone point me at an example of the nonmonotonicity of IRV?

2008-08-15 Thread Kristofer Munsterhjelm

Chris Benham wrote:
 
*Kristofer Munsterhjelm*  wrote (Sun. Aug.10):

There's also the "it smells fishy" reaction that nonmonotonicity - of any kind or
frequency - evokes. I think that's stronger for nonmonotonicity than for
things like strategy vulnerability because it's an error that appears in
the method itself, rather than in the move-countermove game brought on
by strategy, and thus one thinks "if it errs in that way, what more
fundamental errors may be in there that I don't know of?". But that
enters the realm of feelings and opinion.
 
 
 
Kristofer,
The intuition or feeling you refer to is based on the idea that the 
best method/s must be mathematically elegant and that methods tend to
be consistently good or consistently bad. But in the comparison among 
reasonable and good methods, this idea is wrong.
Rather it is the case that many arguably desirable properties (criteria 
compliances) are mutually incompatible. So on discovering that  method X

has some mathematically inelegant or paradoxical flaw one shouldn't
immediately conclude that  X  must be one of the worst methods.  That
flaw may enable X to have some other desirable features.
 
To look at it the other way, Participation is obviously interesting and 
viewed in isolation a desirable property. But I know that it is quite

expensive, so on discovering that method Y meets Participation I know
that it must fail other criteria (that I value) so  I don't expect
Y  to be one of my favourite methods. 


Looking at this further, I think part of the intuition is also one of 
the frequency of the situations that would bring about the paradox. In 
the case of Participation, you'd have to have two districts that later 
join into one, which is not frequent; but for monotonicity, voters just 
have to change their opinions (which voters often do). That's not the 
entire picture, though; perhaps I consider monotonicity an inexpensive 
criterion, and thus one that reasonable methods should follow, or 
perhaps the degree of paradox (winner becomes loser), along with 
Yee-type visualization, makes nonmonotonicity seem all the worse.


The frequency idea is also related to the explanation of criteria 
failure conditions. If a person says that "this method can cause winners 
to become losers when voters change their minds in favor of the 
now-loser", that appears completely ridiculous. On the other hand, 
LNHarm/LNHelp failure could be explained as a consequence of the method 
finding a common acceptable compromise, and so there's at least a 
natural reason why it'd exist. Participation would be more 
difficult, but maybe one could draw parallels to Simpson's paradox in 
statistics, as one would with Consistency failure.


This is like the IRV method-focus versus Condorcet goal-focus, in 
reverse. Criterion failure that is the necessary consequence of some 
desirable trait can work (and even more so when one can easily see that 
there's no way to have both), but criterion failure that's based on how 
the method works rather than what it aims to achieve doesn't pass as easily.



I think that all methods that work by calculating the ranking according
to a positional function, then eliminating one or more candidates, then
repeating until a winner is found will suffer from nonmonotonicity. I
don't know if there's a proof for this somewhere, though.

A positional function is one that gives a points for first place, b
points for second, c for third and so on, and whoever has the highest
score wins, or in the case of elimination, whoever has the lowest score
is eliminated.

Less abstractly, these methods are nonmonotonic if I'm right: Coombs
(whoever gets most last-place votes is eliminated until someone has a
majority), IRV and Carey's Q method (eliminate loser or those with below
average plurality scores, respectively), and Baldwin and Nanson (the
same, but with Borda).
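All of these fit one skeleton: score the remaining candidates with a positional weight vector, eliminate the lowest scorer, repeat. A sketch of that skeleton follows - the function names and the arbitrary tie-breaking are my assumptions; with the plurality weight vector it behaves like IRV:

```python
# Generic "positional scoring, then loser elimination" loop. weights(n)
# returns the positional weight vector for n remaining candidates:
# [1, 0, ..., 0] gives IRV, Borda weights give Baldwin, and scoring by
# negated last-place votes ([0, ..., 0, -1]) would give Coombs.

def positional_elimination(ballots, candidates, weights):
    remaining = list(candidates)
    while len(remaining) > 1:
        w = weights(len(remaining))
        scores = {c: 0.0 for c in remaining}
        for ballot in ballots:
            # the ballot's surviving candidates, in the voter's order
            visible = [c for c in ballot if c in remaining]
            for pos, cand in enumerate(visible):
                scores[cand] += w[pos]
        # eliminate the positional loser (ties broken by list order)
        loser = min(remaining, key=lambda c: scores[c])
        remaining.remove(loser)
    return remaining[0]

def plurality_weights(n):
    return [1] + [0] * (n - 1)  # count only first preferences: IRV
```

For example, with ballots 4 x (A, B, C), 3 x (B, C, A) and 2 x (C, B, A), C is eliminated first and B then beats A five votes to four.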
 
That's right, but I think that Carey's method (which I thought was 
called Improved FPP) is monotonic (meets mono-raise) when there are 
3 candidates (and that is the point of it).


Yes, Carey's method is called IFPP, as defined on 3 candidates. I think 
he used the name "Q method" for IFPP generalized to more than three 
candidates. The Q method is nonmonotonic - see 
http://listas.apesol.org/pipermail/election-methods-electorama.com/2001-September/006656.html


Carey later tried to patch Q for 4 candidates. The first patch failed, 
and he later came up with P4 
(http://listas.apesol.org/pipermail/election-methods-electorama.com/2001-September/006721.html 
), which I haven't tested. Carey said that he didn't get around to 
rewriting it in stages (elimination) form; if that is possible, and the 
result is monotonic, and it can (in theory) be patched to five 
candidates, then to six, and so on up to infinity, then the statement 
would have to be rephrased as "positional loser/below-average 
elimination methods are nonmonotonic". That's a lot of ifs, but to be 
charitable, I'll use that phrasing next

Re: [EM] [Election-Methods] [english 94%] PR favoring racial minorities

2008-08-15 Thread Kristofer Munsterhjelm

[EMAIL PROTECTED] wrote:

  Jobst Heitzig said:

  It is of no help for a minority to be represented proportionally when 
  still a mere 51% majority can make all decisions!


I disagree.  The advantage is that it allows 'on the fly' coalition 
re-organisation.


If all the legislators are elected via a single seat system, then in 
effect, the 2 coalitions must be decided prior to the election.  In

fact, in the US, the Republican and Democrat 'coalitions' last on a

 multi-decade scale.


A block of 15% of the legislature would be a minority.  However, if 
something oppressive was attempted against them, they could switch
sides. 


However, if all the legislators were elected via a single seat method, then
the supporters of those 15% would have to wait until the subsequent election,
and it might be too late by then.


This appears to be, more generally, an issue of feedback. Democracy 
itself does better than dictatorship (even from a purely technical point 
of view, as opposed to a moral one) because the people can steer the 
representatives in the right direction. If the rulers get too detached 
from this correction, they get corrupted by the power and bad things happen.


If that's correct, then we should try to find ways of connecting the 
system even more tightly. Proportional representation would fit within 
this idea set for the reasons you point out, or broadly, that as 
minorities change, the representative-voter links update more quickly 
than they do within a majoritarian system.


Predictions based on that idea would consider the ideal to be direct 
democracy. Next to that would be continuous update of representative 
power (continuous elections). While both of these might work if we 
were machines, the former scales badly and the latter would put an undue 
load on the voters unless they could decide whether to be part of any 
given readjustment.


If we consider the case where decisions have effects that don't appear 
instantly, it gets more complex. For instance, democratic opinion could 
shift more quickly than the decisions made by one side have time to 
settle or actually make any difference. But even there, if we consider 
it an issue of feedback, we have parallels; in this case to oscillations 
or hunting, and to control theory regarding how to keep such 
oscillations from happening.


The feedback point of view is not an end-all-be-all. If there's a static 
or consistent majority that decide to, as an example, exclude 
minorities, that is democratic, but still not a good state of things, 
and no amount of making the democracy more accurately translate the 
wishes of the majority into action can fix that, since the majority 
wants to keep on excluding the minority.



PS
Anyone know a better free mail system that doesn't cause lots of ??? when
I post to this group?
The usual suspects should work: Gmail, Hotmail, Yahoo; or see the 
Wikipedia comparison page at 
http://en.wikipedia.org/wiki/Comparison_of_webmail_providers . Most ISPs 
also provide mail accounts of their own for their subscribers, and 
(without knowing more) I'd assume yours does as well; if that is so, you 
could use that account and a dedicated mail reader like Thunderbird.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] [Election-Methods] [english 94%] PR favoring racialminorities

2008-08-15 Thread Kristofer Munsterhjelm
Also, such a scheme would be, I think, highly susceptible to agenda 
manipulation: who decides which issue is to be effectively on the 
ballot, and who decides that the candidates associated with X and 
not-X are sincere?


Citizens are free to form such lists. Each list may support and oppose 
any topics, and the lists are supposed to collect similar minded 
candidates together. Ballots may be just votes for individual candidates 
(not for issues). I don't see any specific problems in this case.


Does that mean that a single candidate can be a member of more than one 
list? If so, how are ties handled? Depending on how that's done, it 
could cause complex interactions depending on which party a voter 
decides to support.


If a single candidate can't be on more than one list, then agenda 
manipulation still has some power. If a candidate has to commit to a 
list that is based primarily on issue X, but where he also supports Y, 
he has to make a choice (distinct from the choices voters make) of X 
over Y. That could be technically solved by making 2^n lists for n 
issues, but then you'd have to let candidates be on multiple lists, and 
pure party-neutral PR becomes much simpler.


Tree lists would help, but say that a voter likes Y, but doesn't like X 
any more than the candidate in question does. Then he wouldn't want his 
vote to contribute to any of the other X-favoring candidates.




Re: [EM] [Election-Methods] [english 94%] PR favoring racialminorities

2008-08-16 Thread Kristofer Munsterhjelm
I could see a kind of proxy front end to STV elections. I'm not sure I'm 
convinced it would be a good idea, or even practical to implement, but 
suppose that any person or group (including parties) could register an 
STV ranking, and a voter could select that ranking instead of ranking 
individual candidates. The logistical difficulty would be in determining 
how a voter specified their proxy, along with the possibility of 
ambiguity, deliberate or accidental (Sierra Club, John Smith).


There's another difficulty with that idea, and one that Juho has shown 
earlier, as related to the inheritance order of candidates in the 
electoral system of Fiji. Candidates may put preferences in different 
orders than you do, or come to an agreement with other candidates to 
support each other.


To some extent, that could be fixed by publishing the ranking 
beforehand, but one should still be aware of the difficulty.




I think that the simplest way of adding proxying to STV, user interface 
wise, would be to have a delegation mark, where your stated preference 
ordering overrides that of the candidate. For instance


A > B* > C (rest left blank)

with * as the delegation mark, and B having the preference ordering 
B > E > F > C, would give

A > B > E > F > C

by substitution, whereas

A > C > B*

would give

A > C > B > E > F

since the A > C preference that the voter manually stated overrode the 
F > C preference of candidate B.


Paradoxical preferences could be resolved by "highest ranked first". If

A > B* > C*

and B prefers D to E, but C prefers E to D, then the final ordering 
prefers D to E since B is ranked above C. An even more sophisticated 
version could run a single-winner social order election for equal-ranked 
candidates, so that

A > B* = C*

gives A > (result of social ordering, according to single-winner method, 
for those candidates for which B and C gave any preference)
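The substitution rule can be sketched as follows. The ballot representation - a list of (candidate, delegated?) pairs plus each candidate's published ordering - is my assumption; explicit voter preferences override inherited ones, and conflicts between delegates go to the higher-ranked delegate simply because it is processed first:

```python
# Expand a ballot with delegation marks into a full ranking. `orders`
# maps each candidate to that candidate's own published ranking.

def expand(ballot, orders):
    explicit = [cand for cand, _ in ballot]    # voter-ranked candidates
    result = []
    for cand, delegated in ballot:
        result.append(cand)
        if delegated:
            own = orders[cand]
            below = own[own.index(cand) + 1:]  # delegate's lower choices
            for c in below:
                # skip anyone the voter ranked explicitly (voter wins)
                # and anyone already inherited from a higher delegate
                if c not in explicit and c not in result:
                    result.append(c)
    return result
```

With B's published ordering B, E, F, C, the ballot "A, B delegated, C" expands to A, B, E, F, C, while "A, C, B delegated" expands to A, C, B, E, F, matching the examples above.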




Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-17 Thread Kristofer Munsterhjelm

Jonathan Lundell wrote:

On Aug 16, 2008, at 12:54 AM, Kristofer Munsterhjelm wrote:

I am for a record on disk of each ballot, but done in a manner to 
not destroy secrecy.


You have to be very careful when doing so, because there are many 
channels to secure. A vote-buyer might tell you to vote exactly at 
noon so that the disk record timestamp identifies you, or he might, in 
the case of Approval and ranked ballots, tell you to vote for not just 
his preferred candidate, but both the low-support communist and the 
low-support right extremist as well, so that he can tell which ballot 
was yours and that you voted correctly.


In the US, at least, voting by mail has become so prevalent that I 
wonder whether it's worthwhile making voting machinery absolutely 
impregnable to vote-buying. All else being equal, sure, why not, but if 
we trade off other desirable properties to preserve secrecy, and leave 
the vote-by-mail door unlocked


I think it'd be better to lock the vote-by-mail door. One simple way of 
doing that has already been given, with the two envelopes under a 
verified setting. If you like technology, you can achieve the same 
effect, without the need for the physical verified setting, by using 
blind signatures. However, that runs into the same problem where the 
voters may not know what's going on.


The fingerprinting vulnerability of ranked ballots is annoying, because 
I like ranked methods (rated ones would have an even greater 
vulnerability). I can think of a crypto solution where the recording is 
done under k-of-n secret sharing, and the secret-holders don't disclose 
their key parts unless it becomes necessary to do a recount. But yet 
again, how could the voters know that it'll actually work? Even if they 
don't, it may still be better than nothing, though.
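To show the shape of the idea - not the real scheme; an actual deployment would want a threshold scheme like Shamir's k-of-n sharing - here is the simpler n-of-n XOR split, where no proper subset of the shares reveals anything about the record:

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret, n):
    """Split `secret` into n shares; all n are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:          # last share = secret XOR all random shares
        last = xor_bytes(last, s)
    return shares + [last]

def recover(shares):
    """XOR all shares back together to reveal the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out
```

Each trustee would hold one share of the recorded ballots; only when all of them cooperate for a recount does the plaintext reappear.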




Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-17 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:
So you're saying that computers are better than specialized machines? 
I'm not sure that's what you say (rather than that machines are better 
than paper ballots), but I'll assume that.


Your specialized machines can each do a fragment of the task. However, 
dependably composing a capable whole from them requires big efforts from 
humans.

 Composing the same capable whole from a computer and adequate
programming can be easier.


Each does a fragment of the task, yes; that's the point of modular 
design, so that you can treat the local units differently from the 
central units and don't have to prove everything everywhere.


Consider a general computer. Even for general computers, it makes little 
sense to have the district-joining software - which counts the results 
from the various districts and sums them up in the case of a summable 
method - on the individual units. As such, the general-purpose computers 
are already specialized, only in software instead of hardware.


Because the specialized machines are simpler than computers, once mass 
production gets into action, they should be cheaper. The best here 
would probably be to have some sort of independent organization or 
open-source analog draw up the plans, and then have various companies 
produce the components to spec.


They can be cheaper by not doing the complete task - make the task an 
election system and the cost goes up and dependability becomes expensive.


By extension, they can be cheaper by, in concert, doing just enough and 
no more. One doesn't need Turing-completeness to count an election. 
(Perhaps unless it's Kemeny.)


The simplicity of voting could also count against general-purpose 
computers as far as manual labor is concerned. If the machine has been 
proved to work, you don't need to know what Access (yes, Diebold used 
Access) is to count the votes, and you don't need a sysadmin present 
in case the system goes to a blue screen.


You need the equivalent of a sysadmin to sort out getting a whole 
composed of your specialized machines.


The way I would set up the system, there would be different counting 
units. The group of units would need a person to unlock them each time 
a new voter wants to vote; that could be included in the design so that 
you don't need a system administrator for it. Then, once the election 
day is over, gather the read-only media (CD or programmable ROM), and 
either send them or the summable result (given by a second machine) to 
the central. Count and announce as you get higher up in the hierarchy.


If the components are constructed correctly, and proved to be so (which 
can be done because of the units' relative simplicity), then there won't 
be any bluescreens and little need for maintenance - except for cases 
where the machines simply break.


In this manner, the setup is more like paper balloting than it is to 
ordinary computer systems. The read-only media take the place of the 
ballot box, and the aggregating machines the place of the election count 
workers.


Computers get cheaper and cheaper - think of what is hidden inside a 
cell phone.


That's true. Maybe a compromise could be to use cheap computer hardware 
with read-only software and standardized components, and have the 
software not be a full OS, but instead just enough to get the job done 
and be provable. You'd have to rely on there being no hardware 
backdoors, but the existence of such would be very unlikely, and the 
entire thing would have to be put inside some sort of tamper-resistant 
enclosure so hackers can't attach keyloggers or do similar things.


That's true, but it's still fairly simple. Assume the ranked ballot is 
in the form of rank[candidate] = position, so that if candidate X was 
ranked first, rank[X] = 0. (Or 1 for that matter, I just count from 
zero because I know programming)


Then the simple nested loop goes like this:

for (outer = 0; outer < num_candidates; ++outer) {
 for (inner = 0; inner < num_candidates; ++inner) {
  if (rank[outer] < rank[inner]) {  // if outer has higher rank
   condorcet_matrix[outer][inner] += 1; // increment
  }
 }
}


What ran this loop outside a computer?


A chip with just enough transistors to do this task. I'm not a hardware 
expert, but I think it could be done by the use of an HDL like Verilog.


It's less-than instead of greater-than because a lower rank number means 
the rank is closer to the top.


Write-ins could be a problem with the scheme I mentioned, and with 
transmitting Condorcet matrices. One possible option would be to 
prepend the transmission with a lookup list, something similar to:


Candidate 0 is Bush
Candidate 1 is Gore
Candidate 2 is Nader
Candidate 3 is Joe Write-In
Candidate 4 is Robert Write-In, etc

and if the central gets two Condorcet matrices that have the same 
candidates in different orders (or share some candidates), it permutes 
the rows and columns to make the candidate numbering agree before 
adding up.
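A sketch of that alignment step - the data layout (a name list plus a pairwise matrix per district) is an assumption for illustration:

```python
# Merge per-district Condorcet matrices whose candidate lookup lists may
# differ in order or content: build a master candidate list, then remap
# each district's rows and columns onto it before adding.

def align_and_add(districts):
    master = []
    for names, _ in districts:          # union of names, first-seen order
        for n in names:
            if n not in master:
                master.append(n)
    size = len(master)
    total = [[0] * size for _ in range(size)]
    for names, matrix in districts:
        index = {n: master.index(n) for n in names}
        for i, a in enumerate(names):
            for j, b in enumerate(names):
                total[index[a]][index[b]] += matrix[i][j]
    return master, total
```

A district reporting (Gore, Bush) and one reporting (Bush, Gore) then add up correctly, and a write-in seen by only one district simply extends the master list with zero entries elsewhere.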


Do you concede central 

Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-17 Thread Kristofer Munsterhjelm

But murderers get away with murder, police are being bought
off by criminals, government employees steal office supplies. No one knows
exactly how much any of these things happens. We try to limit them
(balancing the degree of the problem and the cost of addressing it), and
we go on with our lives.


OH. So you see it as no big problem to pretend to live in a democracy
(where you can pretend to yourself that most election outcomes are
accurate) and to continue to let elections be the only major industry
where insiders have complete freedom to tamper, because 49 US states
never subjected their election results to any independent checks,
except the wholly unscientific ones in NM.

Even when Utah used to use paper punch card ballots, one person did
all the programming to count all the punch cards for the entire state
of Utah, and no one ever checked after the election to make sure that
any of the machine counts were accurate.

You sure must believe in the 100% infallibility and honesty of this
one person, and all the other persons who have trivially easy access
to rig elections.

Apparently  none of the plethora of evidence that election rigging has
been occurring ubiquitously in the US is of any interest or concern to
you.


I'm not Rob, so excuse the interruption, but some questions and ideas here:

Won't the people, as a last stop, keep fraud from being too blatant? You 
don't need scientific methods to know that something's up if a state was 
80-20 Democratic one cycle and then suddenly becomes 80-20 Republican 
(or vice versa) the next. Fraudsters could swing 45-55 results, but it 
doesn't completely demolish democracy, since the 60% (or whatever 
margin) results would presumably be left alone.


Fraud corrupts results, but it seems to me that fortunately we have some 
room to implement improvements that get us closer to verifiability 
without having the fraud that exists plunge the society directly into 
dictatorship.


New voting methods and improved fraud detection could also strengthen 
the prospects of each other. If you have an election method that 
supports multiple parties (since the dominant parties can't rig all the 
elections everywhere), then instead of only one other party, you have 
n-1 parties actively interested in keeping an eye on what rigging 
attempts do occur, and a lesser chance of entrenched forces colluding to 
ignore each other's attempts, since collusion among multiple entities 
becomes much harder as the number of entities grows.




Re: [EM] PR favoring racial minorities

2008-08-17 Thread Kristofer Munsterhjelm
Predictions based on that idea would consider the ideal to be direct 
democracy. Next to that would be continuous update of representative 
power (continuous elections). While both of these might work if we 
were machines, the former scales badly and the latter would put an 
undue load on the voters unless they could decide whether to be part 
of any given readjustment.


I don't see the burden to voters as a big problem since the system 
allows some voters to follow and influence politics daily and some to 
react only on a yearly basis.


Hence the "unless they could decide whether to be part of any given 
readjustment" part. Irrespective of that, there's also the 
paradox-of-choice type load that one gets upon permitting voters to 
alter their decisions at any time, but perhaps the voters would get used 
to it and down-adjust the effort they exert at any given time, reasoning 
that if they elect wrongly, they can fix it at any later time.


(Continuous elections could also increase the level of participation in 
decision making in the sense that old votes could be valid for a long 
time even if the voter wouldn't bother to change the vote often. Well, 
on the other hand the votes must have some time/event limits after which 
they become invalid. Otherwise the system would e.g. make any changes in 
the party structure very unprofitable.)


Another option that presents itself is that of candidates handing over 
their power to their successors, but one should be very wary of 
unintended consequences if one makes power transferable in 
non-transparent ways. Party-list elections could just have the party 
instead of the candidates gain the power, but I think that would defeat 
some of the dynamic purpose of continuous elections, and possibly lead 
to pseudoparties whose only purpose is to shield the candidates from 
changes of opinion.


If we consider the case where decisions have effects that don't appear 
instantly, it gets more complex. For instance, democratic opinion 
could shift more quickly than the decisions made by one side has time 
to settle or actually do any difference. But even there, if we 
consider it an issue of feedback, we have parallels; in this case to 
oscillations or hunting, and to control theory regarding how to keep 
such oscillations from happening.


When thinking about the problems of continuous elections and direct 
democracy maybe the first problem in my mind is the possibility of too 
fast reactions. Populism might be a problem here. Let's say that the 
economy of a country is in bad shape and some party proposes to raise 
taxes to fix the problem. That could cause this party to quickly lose 
lots of support. These rather direct forms of democracy could be said to 
require the voters to be more mature than in some more indirect 
methods in the sense that the voters should understand the full picture 
and not only individual decisions that may sometimes even hurt them. In 
an indirect democracy painful decisions are typically not made just 
before the elections. This is not an ideal situation either. But all in 
all, the more direct forms of democracy seem attractive if the voters 
are mature enough.


From the feedback point of view, populism would be another form of 
overreaction or of opinion shifting too quickly. Consider the tax case. 
For the sake of the argument, let's say that the tax raise is going to 
make things better in the long run. Then the problem is that the 
adjustment mechanism (the people using the election system) reacts too 
quickly. A common way of fixing this for ordinary feedback systems is to 
introduce smoothing. In a continuous election, this may take the 
following shape: if you change your vote, the power given to the 
previous candidate slowly decreases while the power given to the new 
candidate slowly increases, instead of the change happening immediately. 
This would take the edge off populism and other overreaction-related 
problems while avoiding the representative problem of "don't do anything 
before the elections", since the elections can still be any day of the 
year, and a different day for different supporters of any given candidate.
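As a minimal sketch of such smoothing - the half-life parameter is an illustrative assumption; any decay schedule would do:

```python
# After a voter switches support, the old candidate keeps a decaying
# share of that vote and the new candidate gains the remainder, so the
# total voting power in play is always exactly one vote.

def smoothed_weights(days_since_switch, half_life_days=30.0):
    """Return (old_candidate_share, new_candidate_share) of one vote."""
    decay = 0.5 ** (days_since_switch / half_life_days)
    return decay, 1.0 - decay
```

On the day of the switch the old candidate still holds the whole vote; after one half-life each holds half; the transfer completes only asymptotically, which is exactly the edge-dulling effect described above.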


Still, there are limits. When dealing with machine feedback loops, one 
usually has the luxury of being able to tune the loop characteristics 
(such as the degree of smoothing, the reaction to increasingly large 
changes, and so on) beforehand, which wouldn't be applicable to a 
political process, since the situation of the world may change with 
time. Second, there's no sure way of knowing, ahead of time, whether the 
tax (in the example) really would benefit the society or not, at least 
not without being given more data; so smoothing could both harm and 
help, and knowing what level to set it to, even if we had a completely 
unbiased and trustworthy engineer to adjust the dynamics, seems to be a 
problem for which we can't even know whether any given answer is 
correct. It would be like setting the federal interest rate, yet more 

Re: [EM] [Election-Methods] [english 94%] PRfavoringracialminorities

2008-08-18 Thread Kristofer Munsterhjelm

Juho wrote:


This is a very interesting real life example on how such horizontal 
preference orders may impact the elections and strategies in them.


Do you have a list of the strategies/tricks that are used?


One trick that appears, as has been mentioned in other posts here, is 
vote management. In vote management, parties aim to spread their voters' 
votes across the party's candidates so that the accumulated strength 
behind each is nearly the same, minimizing the chance that any votes get 
lost or go to other candidates.


Schulze wrote about vote management strategies at [ 
http://m-schulze.webhop.net/schulze2.pdf ] as part of his STV method, 
which is intended to have vote management work only when to do otherwise 
would make the method fail Droop proportionality. I'm not completely 
sure if I've got it right, but I think it does it somewhat like DSV, by 
running vote managements for every voter so that manually doing a vote 
management confers no advantage.



I tend to favour counting exact proportionalities at national (=whole
election) level ((if one wants PR in the first place)).


One slight issue here is how to define proportionality.  It is implicitly
assumed that if a voter votes for a candidate, they also support the
candidate's party.  However, as can be seen with personal votes,
this is not always the case.


If candidates are seen as individuals then the rounding errors of such 
small units are typically higher than the rounding errors of big units 
like parties.


(What I was thinking was basically that if there is one quota of voters 
that have opinion X then the representative body could have one 
representative that has opinion X. This could apply to parties but also 
to smaller groupings and individuals as well as other criteria like 
regions (= regional proportionality) (and even representation of other 
orthogonal groups like women, age groups, religions, races if we want to 
make the system more complex).)


This is the idea that I've based my "honest voter" multiwinner 
comparison program on. With a party-neutral voting system, 
proportionality of individuals makes no sense, since each candidate is 
just a single individual - the only way to make that work would be to 
have weighted power in the assembly. Therefore, what is important is 
proportionality of opinion, if we assume that voters prefer those with 
opinions similar to their own to those with opinions less similar to 
their own.


If you want to make it even more abstract, you could say: if there's a 
method to how voters rank (or rate) candidates, and this method ranks 
candidates according to proximity of some standard of information, then 
the assembly should be proportional with regard to that standard of 
information.


Party-neutral methods like STV use no inputs other than the ballots 
themselves, so those have to infer the proximity or proportionality 
directly. In a way, party list cheats (or goes beyond the assumptions) 
because it lets voters give their opinions (on one axis - the party 
membership axis) directly. One could also make a system that similarly 
cheats by knowing the location of every representative and voter, 
minimizing the average distance, for each voter, to the closest 
representative. I think that would be unwieldy, though, and there'd be 
the issue of weighting (how much better/worse is a close representative 
that disagrees with you than a faraway representative that agrees with 
you?).


The extreme would be a voting system where people just say how much they 
agree with an opinion, for all relevant opinions, and then the system 
picks the maximally representative assembly. Such a method is not 
desirable, I think, because it would be very vulnerable to strategy, and 
someone would have to say which opinions were "relevant" and then redo 
the list when voters' priorities change and other opinions become 
relevant. In a simulation, one can do this easily because the voters 
vote mechanically (and so what the opinion really is doesn't matter), 
but in the real world, not so much.



Another option is to allow a voter vote for local candidates and then
as their last choice, vote for a national list.


This is maybe yet one step more complex since now candidates can belong 
to different orthogonal groupings (several local parties; one party 
covers all local regions). Or maybe you meant to allow voting only 
individuals locally, not to support all local candidates of all parties 
as a group.



The local count would be standard PR-STV, but with the same quota
nationwide (and a rule that you must reach the quota to get elected).


Ok. National level proportionality could influence the election of the 
last candidates in the districts.



Unallocated seats would then be assigned using d'Hondt or similar
method based on the amount of votes transferred to the national list.

Also, it could be in effect an open list.  The person elected would be
from the district that transferred the most votes to the party's national
list.


Maybe all 

Re: [EM] [Election-Methods] Re: final attempt for a strategy-free range voting variant, and another proportionally democratic method

2008-08-22 Thread Kristofer Munsterhjelm

Raph Frank wrote:

I had a similar thought previously.

It was based on a legislature rather than individual voters.

I called it 'consumable votes'.
Here is one example, though there was a fair few versions.

http://listas.apesol.org/pipermail/election-methods-electorama.com/2006-March/017903.html


One problem of a straightforward "every candidate gets p voting-money 
units at the beginning of each block of time" scheme is that, on one 
hand, the situation may be serious enough that one needs to pass more 
than p units' worth. In that case, we'll have a problem. On the other hand, it 
may be a calm time, in which case less than p units are used, which 
would also be a problem except if there's a ceiling to how many voting 
units one can hold. But if there's a ceiling, it may inspire frantic 
voting near the end of the block of time so as not to waste the voting 
units. A better solution to that would be to, if there are q days, 
supply p/q (subject to ceiling) every day, or p/(q*24*60) every minute.
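A minimal sketch of the drip-income-with-ceiling idea above (the class, field names, and numbers are all hypothetical choices of mine):

```python
from dataclasses import dataclass

@dataclass
class VotingAccount:
    """Hypothetical consumable-votes account: income drips in each day and
    accumulates only up to a ceiling. All names and numbers are mine."""
    balance: float = 0.0
    ceiling: float = 100.0
    daily_income: float = 1.0   # p/q: the block allowance p spread over q days

    def accrue(self, days: int = 1) -> None:
        # Income above the ceiling is simply lost, which removes the
        # "spend everything before the block ends" panic.
        self.balance = min(self.ceiling, self.balance + self.daily_income * days)

    def spend(self, amount: float) -> bool:
        # The serious-situation problem remains: one can never commit
        # more than the current balance.
        if amount > self.balance:
            return False
        self.balance -= amount
        return True
```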


That still leaves the former problem, though. Reweighting would escape 
it, but the relation to voting money (which is easy to understand) would 
be somewhat obscure. In order to prevent dictatorship of the rich, the 
weights would then be reset, for all candidates, to one at the beginning 
of each period, or for a continuous variant, the differences would be 
smoothed out at a certain rate so that it goes towards equality.
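The continuous smoothing variant could be as simple as moving every weight a fixed fraction of the way back to 1 each period; a sketch, with an arbitrary rate:

```python
def smooth_weights(weights, rate=0.1):
    """Pull each participant's voting weight toward 1 at a fixed rate per
    period, so differences decay gradually instead of being reset abruptly.
    Illustrative sketch; the rate is an arbitrary parameter of mine."""
    return [w + rate * (1.0 - w) for w in weights]
```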


Voting-money or legislative consumable votes/reweighting values might 
also give a back-and-forth effect, but I'm not sure how serious that is. 
One can observe the oscillation in two-party states, where the first 
party spends a lot of its time undoing what the second party passed in 
the previous period. Then the second party is elected later on and uses 
its time tearing down the efforts of the first party. That's really 
wasteful. At least your supermajority clause would help keep this from 
happening.


I suppose one possible way of making an intuitive reweighting system is 
to allow legislators to go into the red as far as they want, but that 
the decision is checked by others. Thus the opposition could cancel a 
proposal if they've used less voting money in the past, no matter how 
much each side has used. There might be unintuitive consequences, 
though; for one, it removes the incentive (on all parties) not to vote 
more than they have to.



 My thoughts were to have accounts decay and each voter
would be given a fixed income. This also handles the effect of new
citizens becoming adults and also elections if it is for a legislature.
However, that creates deflation (or at least encourages 'spending now').


Accumulating up to a ceiling would mitigate this somewhat; as long as the 
current amount isn't close to the ceiling, there's no incentive to spend 
it now. However, when it gets close, the incentive returns.



b) Can voting money really be considered a form of money, and can we
expect it to be linear in individual utilities? My hope is that this is
so because the virtual money is only used to buy immediate voting
power and pay with potential later voting power. I think it would not be
necessary that utilities be comparable between different persons. Just
the utilities some fixed person assigns to options in the decision at
hand need be comparable to utilities the same person assigns to options
in a later decision.


It has the advantage that everyone has equal access to it.
This would eliminate the complaint that people with higher
wealth would end up controlling the process.

It might even be constitutional despite the ban on poll taxes.

Depending on your view of utility, it could be considered
just as valid as any other distribution.

However, there could be complaints since people can lose
their voting power.  In fact, since in nearly all cases, there would
be no change to vote totals and then suddenly, a large
chunk of people would lose some of their votes and thus
power, there could be major complaints (or riots).  Also,
they would have little expectation of increasing their totals again
as it would be a while before another change to the totals.

I think in practice, it would just be treated like a range voting
election and thus changing account totals is like reweighting
people's votes.


It seems to be not only like reweighting, but simply another form of 
reweighting. Well, that depends on how it's used. If it's used to pay 
for a decision, then it's indirect reweighting, but if it's used to 
reduce the strength of votes, then it's direct reweighting.



Finally, with a national election, it is unlikely that the results
would be accurate to a single vote, so even if it was balanced,
a recount would probably change things.


c) How would the exchange rate of this virtual money be established?


I don't think it should be transferable, or otherwise, you
might as well just use normal money.

If you do want it to be exchangeable, then just let the
market decide.


I agree, 

Re: [EM] PR favoring racial minorities

2008-08-22 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 18, 2008, at 12:10 , Kristofer Munsterhjelm wrote:

The extreme would be a voting system where people just say how much 
they agree with an opinion, for all relevant opinions, and then the 
system picks the maximally representative assembly. Such a method is 
not desirable, I think, because it would be very vulnerable to 
strategy, and someone would have to say which opinions were relevant 
and then redo the list when voters' priorities change and other 
opinions become relevant. In a simulation, one can do this easily 
because the voters vote mechanically (and so what the opinion 
really is doesn't matter), but in the real world, not so much.


In principle STV allows (especially if ties are allowed) voters to 
determine any sets of candidates (without requiring someone to fix them 
beforehand). Voters may e.g. list all female candidates. It is also 
possible that any number of such group definitions would be available. 
Candidates could indicate themselves which opinions they support, and 
voters could include references to those lists in their ballot. Also 
opinions created by others than candidates themselves could be 
available. The lists could freely overlap. Someone could vote e.g. Women 
(1st priority), candidates that indicate that they support election 
reform (2nd priority) and candidates that were listed by the election 
reform society (3rd priority). An STV like ballot would be derived from 
this information.


To a limit, yes. But say that you prefer women and leftists. Also assume 
that there are some women who are leftists, some leftists that are not 
women, and some that are both. Then you'd rank those who were both above 
either of the two.


In my simulation, a voter who preferred women and leftists would rank 
male leftists and right-wing women randomly with respect to each other. 
In reality there could be different preferences among those. The point 
is that no concatenation of two lists would produce the correct result. 
If the list is by political ideology, then it could rank men on the left 
ahead of women, and if it was by gender, then it could rank right-wing 
women ahead of left-wing ones.
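The random tie-breaking described here could be sketched as follows (the function, the binary issue encoding, and all names are mine, not the actual simulation code):

```python
import random

def rank_candidates(voter_profile, candidates, rng=random):
    """Rank candidates by how many binary issue positions they share with
    the voter, breaking ties randomly. Hypothetical reconstruction of the
    behaviour described above; names and encoding are mine."""
    def agreement(profile):
        return sum(v == c for v, c in zip(voter_profile, profile))
    names = list(candidates)
    rng.shuffle(names)  # shuffle first; the stable sort then keeps ties random
    return sorted(names, key=lambda n: -agreement(candidates[n]))
```

A voter who prefers women and leftists ranks a left-wing woman first and a right-wing man last, with the two partial matches in random order between them.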


A tree could solve this, but it'd get increasingly more complex for 
numerous opinions. The complexity is probably a true issue - that is, 
not an artifact of the system - and one may wonder if voters would 
compare candidates on all issues in order to figure out a truly 
consistent ballot (even for a party-neutral system). I have no data for 
that, so my simulation assumes the voters do so, since that taxes the 
proportional representation of the method more than if the voters didn't.


That sounds like MMP. I think MMP can work if done right (with STV 
instead of FPTP as base, and reweighting to avoid lista civetta). 
Using party list here is probably better than the party-neutral 
version where you'd rank representatives for local, regional, and 
national levels, and then it keeps the reweighting at each stage; 
simply because there would be an immense number of candidates at the 
national level, and ranking them all would be Herculean.


MMP style is also one option, although I was still thinking of methods 
where all representatives are of the same type. The method would in that 
case have to constrain how the districts elect so that election-wide 
balance is also maintained.


What would the method look like, so that a voter could specify (for 
instance) women ahead of men on both a local and national scale? The 
only thing that seems to work is candidate ranking on both the local and 
national level, which would take a lot of time and produce extremely 
long ballot papers.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-22 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:
You claim that many fragments can be done by specialized machines. 
AGREED, though I do not agree that they can do it any better than a 
normal computer - which has equivalent capability.


In a technical capacity, of course not. Since a computer is 
Turing-complete, it can do anything the specialized machines can. 
However, and this is the point I've been trying to make, the specialized 
machines are simple enough that it's possible to formally prove that 
they do only what they're intended to do, and perhaps also to convince 
the voters that this is the case.


It's kind of like the difference between physics and mathematics. Doing 
tests is analogous to the hypothesis testing of physics: you can say 
that this particular machine does not exhibit any flaws that would 
compromise security, within some margin of error. However, if the 
machines are sufficiently simple, then one can use formal proving to 
show, mathematically, that there are no bugs; that the Condorcet counter 
will turn ballot records into Condorcet matrices and no more - that the 
machine with buttons on it will register votes, register them to the 
candidate shown on the display, and no more, and so on.


Now, the analogy is not total. Even a correct hardware system could be 
compromised by vendors adding backdoors to their fabrication (going 
outside of the spec) and so on, but those errors are much harder to 
conceal than simple software tinkering. Even if the software is open 
source (as you've stated that you want), knowing the full limits of the 
hardware keeps hackers out. The more complex the OS, the greater the 
chance that there's a bug: even Linux has had privilege escalation bugs, 
although they appear much less frequently than in closed-source 
software. What I'm saying here is that if you have to have machines, 
have a way of demonstrating, first to the experts and second (if 
possible) to the ordinary voters as well, that there is no way there 
can be an error.



However, the whole task involves connecting the fragments:
 One way is via computer capability.
 You seem to be doing without such, so what do you have other than 
humans HOPEFULLY correctly following a HOPEFULLY correct and complete 
script?


That's right - the links are the weak spots. The script can be devised 
just as any programming can be, and it would be quite simple, and 
ideally reminiscent of what one does when having a manual count regime. 
The PROMs or CDs are the ballot boxes, and they're transported from one 
location to another as one would ballot boxes.


That leaves the humans. The humans may do weird things, and the ensured 
limits that the specialized hardware would have would obviously not 
apply to them. But since the script is simple, various parties can 
monitor each other. In the worst case, the transportation and 
aggregation parts of the process are as insecure as they would be for 
manual ballots.

If that is still too risky, the ballot boxes could be numbered and 
digitally signed prior to being distributed to the machines for writing, 
so that if any are lost or replaced, it would immediately show up as an 
error. Such a process would add steps to the script, but I think it'd be 
manageable.
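One way the numbering-and-signing step might be sketched, with an HMAC under a shared key standing in for a real digital signature (all names here are hypothetical):

```python
import hashlib
import hmac

def seal(serial: int, key: bytes) -> bytes:
    """Tag ballot box (PROM/CD) number `serial` before distribution.
    Sketch only: an HMAC under a shared key stands in for the digital
    signature mentioned above; a real deployment would sign asymmetrically."""
    return hmac.new(key, str(serial).encode(), hashlib.sha256).digest()

def verify_returned(boxes, expected_serials, key: bytes) -> bool:
    """Every distributed box must come back, each with a valid tag; a lost
    or substituted box immediately shows up as a failure here."""
    serials = [s for s, _ in boxes]
    if sorted(serials) != sorted(expected_serials):
        return False  # a box is missing, duplicated, or has a forged serial
    return all(hmac.compare_digest(seal(s, key), tag) for s, tag in boxes)
```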


Consider a general computer. Even for general computers, it makes 
little sense to have the district-joining software - which counts the 
results from various districts and sums them up in the case of a 
summable method - on the individual units. As such, the 
general-purpose computers are already specialized, only in software 
instead of hardware.


???


That was simply intended to show that you don't need the full powers of 
a computer. It's convenient, but that convenience can tilt in the favor 
of manipulators or hackers as well.


That's true. Maybe a compromise could be using cheap computer hardware 
with read-only software, standardized components, and have the 
software not be a full OS, but instead just enough to get the job done 
and be provable. You'd have to rely on there being no hardware 
backdoors, but the existence of such would be very unlikely, and the 
entire thing would have to be put inside some sort of tamper-resistant 
enclosure so hackers can't attach keyloggers or do similar things.


Hardware backdoors can be hard to find.  Still, if and when one is 
found, can there not be an appropriate punishment to discourage such 
crimes in the future?


Agreed defense against such as keyloggers is essential.

I still say OPEN SOURCE!


I was thinking more of hardware keyloggers, such as those that look like 
keyboard extension cords. Thus the computer should be tamper resistant 
so you can't just do these things. Ideally, for the cheap computer 
compromise, you'd use a cryptoprocessor (like banks use to keep their 
keys, but more general purpose) to run the actual software - perhaps an 
IBM 4758, though since I'm not a hardware expert I don't know if that 
one is sufficiently powerful to do what 

Re: [EM] [Election-Methods] [english 94%] PRfavoringracialminorities

2008-08-22 Thread Kristofer Munsterhjelm

Juho wrote:

I could accept also methods where the voting power of each 
representative is different. The good part is that such a parliament 
would reflect the wishes of the voters more accurately than a parliament 
where all the representatives have the same voting power. Maybe one 
could force the voting power of different candidates within some agreed 
range. That could be done by cutting only the power of the strongest 
representatives and forwarding their excess votes to the nearest group 
(or as indicated by the STV ballots).


Having different amounts of voting power would simplify multiwinner 
election systems considerably. One could, for instance, just do a FPTP 
count and then elect the n highest scoring, giving them voting power 
equal to the share of the total vote they got.
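That simple scheme can be sketched in a few lines (a hypothetical illustration, not a proposal):

```python
from collections import Counter

def weighted_assembly(votes, n):
    """Elect the n highest FPTP scorers, giving each a voting power equal
    to their share of the total vote -- the simple scheme described above
    (a sketch; ties at the cutoff are broken arbitrarily by Counter)."""
    tally = Counter(votes)
    total = sum(tally.values())
    return {cand: count / total for cand, count in tally.most_common(n)}
```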


Still, that doesn't happen, and no assembly is set up that way. Why? 
Does it seem too unfair?




Re: [EM] Why We Shouldn't Count Votes with Machines

2008-08-22 Thread Kristofer Munsterhjelm

Kathy Dopp wrote:

On Thu, Aug 21, 2008 at 10:00 PM, Dave Ketchum [EMAIL PROTECTED] wrote:


First, this is not intended to be used in a zillion precincts - just to
validate the programs.


OK. Well if you don't care about validating the election outcome
accuracy, and just want to verify the small amount of programs on
voting machines that pertain to voting, then you could do parallel
(Election Day) sampling of memory cards (memory cards on most voting 
systems today, unbelievably, have interpreted code on them), like
the University of CT engineering dept. has designed for checking the
voting code on CT's voting systems.


That's bad design. The election machine shouldn't have code that can be 
simply replaced by switching memory cards. The code should be loaded at 
some time prior to the election and then locked in, and the machine 
should verify that it's the right code, perhaps by checking a digital 
signature. Anything less is, well, just bad.
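The verify-it's-the-right-code step might look like this in miniature (a bare hash comparison; a real machine would check a signature over the digest, as the text suggests):

```python
import hashlib

def code_is_authentic(image: bytes, reference_digest: str) -> bool:
    """At lock-in time the machine recomputes the digest of its loaded code
    and compares it to a published reference value; a swapped memory card
    changes the digest. (A real machine would verify a signature over the
    digest rather than a bare hash -- this is only an illustrative sketch.)"""
    return hashlib.sha256(image).hexdigest() == reference_digest
```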




Re: [EM] PR favoring racial minorities

2008-08-22 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On 8/22/08, Juho [EMAIL PROTECTED] wrote:

 In Finland where the number of candidates is relatively high some less
obvious candidates may have some trouble getting in to the lists but on the
other hand some well known figures (that have become popular (and respected)
in other areas than politics) tend to get offers from multiple parties to
join their lists (even as an independent candidate on their list, without
becoming formally a party member).


Would they be expected to vote with the party if they do end up getting elected?
Is the theory that they will pull in more than 1 seat's worth of
votes, so it is worth
having them on the list no matter what they do?

Under PR-STV, the whole vote management thing means that parties cannot
just let their candidates run completely independent campaigns and also that
the number of candidates run must be controlled based on tactical
considerations.


Fortunately, that isn't a limitation to party-neutral multiwinner 
methods in general - not any more than single-winner strategic 
nomination due to a method failing independence of clones is a 
limitation of single-winner methods in general.


If I understand Schulze's STV method correctly, it calculates vote 
management strengths and so does vote management on behalf of the voter 
and on all candidates. I may be wrong, though, and Schulze STV uses a 
very large amount of memory for elections with many candidates and 
winners. Still, it shows the possibility of having a method that resists 
vote management.




Re: [EM] PR favoring racial minorities

2008-08-24 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On 8/22/08, Kristofer Munsterhjelm [EMAIL PROTECTED] wrote:

 What I had in mind was something like this: Say there's a single-winner
election where the plurality winner has 35% support. Then those voters
effectively got 0.5 (+1) worth of the vote with only 0.35 mass. The total
voting power of the entire electorate should not be altered.


Well, they actually got 1 constituency's worth of voting power at
parliament for 35% of the vote in the constituency.  This is just
fundamentally unfair and the problem cannot really be eliminated
while also allowing independents to run.

If there are no independents, then the problem pretty much goes
away.


 Thus

 pA (unscaled) = 0.35/0.5 = 0.7
 pB (unscaled) = 1.3

 For the sake of the example, consider the case of 1000 votes. Then the
scaling factor is x, so that 0.7 * x * 350 + 1.3 * x * 650 = 1000. x is then
about 0.918, so the voters for the winner now have voting power 0.6426 and
all the other voters have voting power 1.1934.


This would mean that if a candidate had 55% support, his voters would
have their voting power boosted (though, I guess in that case, there would
be no change).  It still has the problem that the constituency gets a non-party
candidate and also its voters get to vote for a party.


In MMP, party voters also get to elect a candidate and also contribute 
to the national level (to the extent that the total constituency result 
is disproportionate). The reasoning would then go that even a voter that 
elects a nominal independent may have a preference for a certain party 
(the party that is closest to the views of the independent). Thus the 
system should treat the vote as if it was a vote for the party in 
question, if it has to treat the vote as a party vote in the first 
place, and it should do so in order that all voters have equal power 
(influencing both local and national levels).


As for fairness, consider the case where more than just enough voters 
voted for candidate X. With your "you either get full strength or no 
strength" scheme, some voters are going to look at the result and say 
"hey, my vote wasn't required, yet I have no power. This means my vote 
was wasted, so I'm going to be more careful later." To some extent, this 
unfairness observation would exist in all the cases where some voters' 
votes were unneeded, but at least with continuous reweighting you get 
the counter that your vote did in fact have an effect, in that all the 
others who shared your preference for the independent got more of a say 
in the national round. That's about the best you can do for the 
single-seat local election, since you can't transfer votes when there's 
only one seat.



I think fairer might be to just exclude the voters who voted for the
independent from consideration at the party level (if the independent
is elected for the constituency.)
They have already obtained 1 full seats worth of representation for
1/3 of a seat's worth of votes, there is no point in also giving them
more representation by including them in the party allocation. Voters
who voted for other independents or party members would still be included.


I can see two points of view here. The first is that they got more than 
their share by the extent that they were less than a majority, and the 
second is that they got more than their share by the extent that they 
didn't represent every voter. In either case, I think that there should 
be a continuous function, but the point of view matters when considering 
how much power should be retained in contrast to those who didn't get 
anything at all (that is, whose candidate lost).


Looking at it again, the point of view that it should be with respect to 
100% is probably better than the one that it should be with respect to a 
majority. Consider the case where's there unanimity towards which 
candidate should win. Then I think the right way to treat that is as if 
no votes had been cast at all, rather than to give those who unanimously 
decided to elect the constituency candidate double power in contrast to 
those who did not vote (or a hypothetical voter that'd only vote in the 
national election, if that was possible).



Another option is to have a reasonable number of top up seats.  If 1/3 of
the seats were top-up seats, then 2/3 of a constituency would be enough
to be entitled to a seat.  This would mean that independents would be
able to achieve a quota in 1 constituency.  They would have to obtain 2/3
of the votes to be eligible for election.


If you run the national and local election as a single STV election, I 
think you could get the result where many national candidates get a 
quota and outcrowd the various regional candidates, even if those got 
close to a quota. The problem here is that a local plus national 
election has a subset constraint (on number of local and national 
candidates) which a plain STV election doesn't have.


Or to put it differently, in a general case. Say that you have

Re: [EM] PR favoring racial minorities

2008-08-25 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On 8/22/08, Kristofer Munsterhjelm [EMAIL PROTECTED] wrote:

 If I understand Schulze's STV method correctly, it calculates vote
management strengths and so does vote management on behalf of the voter and
on all candidates. I may be wrong, though, and Schulze STV uses a very large
amount of memory for elections with many candidates and winners. Still, it
shows the possibility of having a method that resists vote management.


That is interesting.  I have had a look at his paper, but
the method itself seems pretty complex.

Is there a simple/basic explanation of the method?


You would have to ask Schulze that. I don't know of any, at least.


- capturing a greater proportion of personal votes for the party

Any surplus of a candidate who easily reaches
quota will be a mix of personal and party votes.
Thus some of the votes that went to a party member
will end up being transferred away.

If fewer party supporters vote for the candidate, then
they can be used at full strength for other party members.
This means that the party gets better use of the personal
vote of the member.

This is the vote management version of Hylland free
riding.  Party voters are en masse downgrading their
top choice as he is likely to get elected anyway.

Meek's method solves the first problem by adjusting the quota
and CPO-STV solves the 2nd problem by not eliminating anyone.
The 3rd one, like Hylland free-riding on an individual level, is
very hard to fix.  (Schulze aims for equality of effect rather than
trying to eliminate it).


If it turns out that we can't get rid of Hylland free-riding, then 
equality of effect might be the best thing to have: while it degrades 
the performance of the method, hopefully it won't degrade it too much, 
and it'll keep the dynamics from going in the wrong direction of 
encouraging party centralization.



Meek's method also solves Woodall free-riding, though I'm not
sure if there is a vote management method that takes advantage
of it.  A party would need to flood the constituency with 'no-hope'
candidates so there is enough of them for all of their supporters
to vote for.  That might be a little too obvious, but it could work.


Schulze considered the case with write-in candidates. Obscure write-ins 
are pretty much ensured not to win, and could be used as Woodall 
free-riding dummies. He then checked an STV election where write-ins 
were permitted (the city council of Cambridge, MA), but found no obvious 
evidence of Woodall free-riding.


See the free-riding section of http://m-schulze.webhop.net/schulze2.pdf .


None of your party's supporters' votes would be wasted electing
candidates for other parties who get elected on the first count.


At least not until the other parties do the same. The absurd result 
would be an STV election with thousands of candidates, none of which can 
win, getting eliminated at the start of the election before the real 
candidates appear.




Re: [EM] PR favoring racial minorities

2008-08-25 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Sun, Aug 24, 2008 at 8:03 PM, Kristofer Munsterhjelm
[EMAIL PROTECTED] wrote:

As for fairness, consider the case where more than just enough voters
voted for candidate X. With your "you either get full strength or no
strength" scheme, some voters are going to look at the result and say "hey,
my vote wasn't required, yet I have no power. This means my vote was wasted,
so I'm going to be more careful later."


With a result like

A: 40%
B: 30%
C: 20%
D: 10%

Each voter for A could still be weighted as

(VA - VB)/VA = (40-30)/40 = 0.25

as only 75% of each of their votes was required to win the constituency.

Under plurality, you don't even need a majority, you just need to beat
the 2nd best candidate.


That could work, since additional votes for A increase the weighting, 
meaning that a vote for A isn't wasted even if A wins.



I can see two points of view here. The first is that they got more than
their share by the extent that they were less than a majority, and the
second is that they got more than their share by the extent that they didn't
represent every voter. In either case, I think that there should be a
continuous function, but the point of view matters when considering how much
power should be retained in contrast to those who didn't get anything at all
(that is, whose candidate lost).


I don't think there is really a way to square this.  If a party had
51% of every constituency, they could guarantee that they win 100% of
the seats.


Let's look at that case, with two parties. Call them A and B. A wins 51% 
of every constituency, and that this amounts to 51 votes (thousand 
votes, whatever) out of 100. Then if there are as many top-up seats as 
there are ordinary seats, nearly all of the latter should go to B. Since 
there are two parties, by the reweighting above, the A voters would have 
strength (51-49)/51 = about 0.04. If there are 90 constituencies and 
thus 90 list seats, and all the A voters vote for A, nationally, as 
well, they'll have 51 * 90 * 0.04 = 183.60 votes worth. Meanwhile, the B 
voters have 49 * 90 = 4410 votes worth. The total is 183.60+4410 = 
4593.60, so A gets round(p * 183.60/4593.60) and B gets round(p * 
4410/4593.60) with p chosen so that the sum is 90. This turns out to be 
p = 90, A gets 4 seats and B gets 86 seats.


In total, A has 94 seats and B has 86. 94 out of 180 is about 52.2%, 
which isn't too bad, considering A had 51% support everywhere.


So reweighting seems to work, at least in this case. If there are fewer 
top-up seats than constituency seats, the equation would have to be 
adjusted.
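The worked example above can be checked mechanically; this sketch uses the exact weight (51-49)/51 rather than the rounded 0.04, and the seat totals come out the same:

```python
# Checking the worked example: A polls 51 of every 100 votes in each of
# 90 constituencies; there are as many top-up seats as constituency seats.
a_local, b_local, districts = 51, 49, 90

weight_a = (a_local - b_local) / a_local         # ~0.0392 per A voter
a_national = a_local * districts * weight_a      # reweighted A votes, ~180
b_national = b_local * districts                 # B votes at full strength, 4410
total = a_national + b_national

a_topup = round(districts * a_national / total)  # top-up seats for A
b_topup = round(districts * b_national / total)  # top-up seats for B

a_seats = districts + a_topup                    # A also wins every constituency
b_seats = b_topup
```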



I think maybe my issue is that constituencies which elect party
candidates are 'playing fair'.  The candidate that they elected gets
added to the party total and thus has an effect on the national level
seats.  A constituency which elects an independent elects an
independent, but that has no effect on the number of seats affecting
each party and yet they still get to decide how the party
proportionality is decided.  This ensures national proportionality and
then they get to add their independent representative on top of that.
This shifts the legislature in the direction of the constituency in
question.


Instead of a party point of view, how about an opinion point of view? 
The voters for independents also have opinions, and so their opinion 
should affect the national level (as it would explicitly if this was a 
national election). As mentioned earlier, the voters that vote for an 
independent would probably vote for the party closest to the point of 
view of that independent if they have to vote for a party. As such, the 
shift in national influence is not an artifact, since the voters do have 
the corresponding opinion. With a good normalization system (reweighting 
or quota-based), the shift won't be very large if the independent 
actually got elected, but it'll exist -- ideally to the extent that 
there was a surplus.



Looking at it again, the point of view that it should be with respect to
100% is probably better than the one that it should be with respect to a
majority. Consider the case where there's unanimity about which candidate
should win. Then I think the right way to treat that is as if no votes had
been cast at all, rather than to give those who unanimously decided to elect
the constituency candidate double power in contrast to those who did not
vote (or a hypothetical voter that'd only vote in the national election, if
that was possible).


Right, to allow them to participate in the national vote would give them
double power.

However, it depends on how many additional seats are being used.  If
1/3 of the seats were national seats, then their votes should still be
counted but at a reduced weighting.

The voters in each constituency elect 1 and 1/3 of the seats, that
means that if a district elects an independent locally, then their
votes should count but with a weight of 1/4 of the votes in other
constituencies.


Seems that we're

Re: [EM] PR favoring racial minorities

2008-08-25 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 24, 2008, at 1:34 , James Gilmour wrote:


Juho   Sent: Saturday, August 23, 2008 9:56 PM

Trying to guarantee proportionality for women at national level may
be tricky if there is no woman party that the candidates and voters
could name (well, the sex of a candidate is typically known, but that
is a special case).


I think you need to define what you mean by proportionality for women 
at national level.  Do you mean numbers of representatives
proportional to the numbers of women among the registered electors 
(typically 52%), or among the voters (women frequently
predominate), or do you mean proportional to the extent that the 
voters wish to be represented by women?  These criteria are all
quite different, and none of them is the usual 50:50 that is commonly 
called for.


I treated women just as a random example of voter indicated preference 
to favour some set of candidates.


(This was Kristofer Munsterhjelm's example. I hope he thought the same 
way. This example group has also the other problem that we know which of 
the candidates are women, but I think this is not intended to limit the 
example either. = Just random sets of candidates.)


It can be treated as either. Some properties, like whether a candidate 
is a woman, can be objectively ascertained. Others, like opinion, can't 
be determined as readily - all we have for opinion is what the 
candidates actually say, as well as their past history, and there's no 
mechanism that can say "yes, he truly believes this" as opposed to "no, 
he's only saying it because that's what the voters want to hear".


Reading my past posts, I intended woman to be just another binary 
property, but in some other branches of this thread, I've also 
considered methods that use the objectively verifiable property of 
being a woman (to balance assemblies if that is desired).



And why should there be guaranteed proportionality for women?


In this example, just because that can be derived from the ballots cast, 
no other reasons (although of course there could be in some other 
elections).



  The logical corollary is guaranteed proportionality for men.


This was not intentional. Since I assumed this to be a random group, this 
just indicated a requirement to guarantee that at least the indicated number 
of women should be elected (and said nothing about the non-women). In 
practice this may lead to proportional representation of non-women too, 
but I didn't consider that to be a requirement.


Depending on how a method treats this kind of freely defined sets, it is 
also possible that only 10% of the voters would indicate support to 
women. This should not be taken to mean that the proportion of women 
should be limited to 10% since many voters may be neutral with respect 
to this particular opinion.


I was going to say that it'd seem, from a binary point of view, that 
valuing proportionality x of those in a given set implies valuing 
proportionality (1-x) of those not in the set. But the binary point of 
view is wrong. As you say, many people would have no particular interest 
in which way the distribution goes. A variant of the argument may still 
apply, though. If 10% really want women candidates and 90% don't care, 
then having 11% or 9% would be equally bad (presumably) from the point 
of view of the 90% of the voters that don't care. Therefore, it's not 
critical that there are exactly 10% women candidates. The slack will be 
tighter in the less-than direction (because the women-preferring voters 
would have no problem with a superproportional allocation of women), but 
it wouldn't be absolute since the other 90% don't seem to care, making 
this a less important issue than if, say, 20% preferred women, 10% men, 
and the other 80% had no opinion.


That is true, but such ranking is currently so unusual that I think it 
would be a fair assumption.


Yes, a good guess, but there could be also situations where e.g. some 
district has high concentration of members of some racial group and most 
candidates are from that group. Ranking only members of that group 
should in this case not be taken as an indication to support all the 
members of this group at national level and in all ideological opinion 
groups.


A better estimate would be whether more candidates of a certain group 
are elected than one would expect from a random sample of the community 
or district in question. Even that isn't foolproof, because there may be 
some properties that people who desire to be a part of the political 
process (i.e. candidates) tend to have with greater frequency than those 
who don't.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] PR favoring racial minorities

2008-08-25 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 22, 2008, at 12:36 , Kristofer Munsterhjelm wrote:

Juho wrote:

On Aug 18, 2008, at 12:10 , Kristofer Munsterhjelm wrote:


If we are talking about methods that rank the candidates, the idea is to 
define a grammar and terminology so that the most common voter opinions 
(orderings or approximations of them) can be expressed using short 
expressions. Bullet votes and tree inheritance is one (very compact) 
option. Giving a complete ordering of the candidates is another 
(complete) option.




That's a good point. Voters probably wouldn't like to rank tens of 
candidates from tens of parties, so to the extent that it would not 
confuse the voters or make the ballot papers too long, there should be 
shortcuts for the most common patterns of voting.


Those shortcuts could be party list, party tree, a ranked ballot on 
parties rather than candidates, or something similar, with an override 
space for the first few preferences (since that's where most of the 
strength lies).


However, the way the ballot's formatted is going to have some influence 
on the voters simply by what it shows to be the path of least 
resistance. The STV ballots in Australia (that are used as a curious 
form of party list by most voters) provide a good, if extreme, example 
of this. In my opinion, parties shouldn't be given a boost or gain an 
advantage for free just because they are parties - this is part of the 
reason that I prefer party-neutral multiwinner methods. Thus one would 
have to be careful when designing the ballot format so that, on one 
hand, the ballots are not too arduous to complete, but on the other, 
they don't obstruct voters that want to submit personal votes.


How such a ballot would be constructed, I don't know. The parameters 
required (how susceptible the voters are to shortcuts, etc) can't be 
arrived at by mere deduction; they depend on the nature of the voters.


For small districts, a ranked ballot like the one used in Ireland is 
probably sufficient. You pay for it by not being able to ensure national 
proportionality by party. The next step up (in fidelity and complexity) 
is the you have two votes form that accompany systems which try to 
correct the disproportionality on the national level.


If the various formats (list, tree, truncated personal vote) should be 
shortcuts rather than the only way to vote, then one needs to use a 
method, or one of a class, that can understand all the formats. I think 
that party-neutral multiwinner systems make up that class of methods. 
The exception is correcting party disproportionality, which it can't do 
by itself.


On the other hand, if you want list (or tree or whatnot) to be the only 
way to fill out the ballot, then you don't need anything as complex as 
PR-STV.


  In order to guarantee proportionality (of any imaginable grouping) at
national level we may need to allow the voters to rank all candidates 
nation wide (as you noted). The next question then is if we allow the 
voters of one district to have a say on which candidates will be elected 
in the other districts. If we allow that then we could simply arrange a 
national level STV election with some further tricks. The trick could be 
e.g. to refuse to nominate any candidates from some district after the 
agreed number of candidates has been elected from that district. (This 
was just one quickly drafted option.)


Another trick related to one that I've referred to before is this: give 
each voter an additional fractional vote where the candidates are ranked 
in order of distance from the voter. Continuous districting, if you 
want. The fraction depends on how much you want locality to matter. 
You'd also have to link the two votes' weight somehow, otherwise it just 
becomes minisum distance, which isn't what we want.




Re: [EM] PR favoring racial minorities

2008-08-26 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 26, 2008, at 1:20 , Raph Frank wrote:


Each candidate can register in any number of polling stations covering
at most N seats' worth of population.  (N=5 might be reasonable).


You might want to keep the sizes of the registered areas of each 
candidate about equal (or to balance the situation in some other way).


Well, since we're already talking about logistics-heavy methods, how 
about this: Take the location of the candidate (his home). Then order 
the polling stations by distance from that location. Find the number p 
at which the circle given by the radius drawn from the candidate's home 
to polling station #p on the sorted list (closer first) encompasses more 
than N seats worth of population. Then the candidate is listed on the 
ballot in polling stations 1 to (p-1) on the sorted list, inclusive.
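The circle rule above can be sketched in code. This is a minimal sketch; the function name and data layout are made up for illustration:

```python
import math

def stations_listing_candidate(home, stations, max_pop):
    """Sketch of the circle rule described above (names hypothetical).

    home: (x, y) location of the candidate's residence.
    stations: list of ((x, y), population) polling stations.
    max_pop: N seats' worth of population.

    Returns the stations, nearest first, whose cumulative population
    stays within max_pop -- i.e. stations 1 to (p-1) on the sorted list.
    """
    # Sort polling stations by distance from the candidate's home
    ordered = sorted(stations, key=lambda s: math.dist(home, s[0]))
    listed, covered = [], 0
    for loc, pop in ordered:
        # Stop before the station that would push us past N seats' worth
        if covered + pop > max_pop:
            break
        listed.append(loc)
        covered += pop
    return listed
```

The sketch assumes population is attached to each polling station; a real implementation would need actual census areas rather than points.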


If the politicians have any influence in where the polling stations are 
placed, they would want to put them more or less evenly so that if, for 
instance, all polling stations are to the North of a candidate, one 
would add some to the South too, to get on more ballots.


Strategic house buying would be funny! Perhaps parties would have 
candidate houses, all of which are carefully located so as to maximize 
the effect, and new candidates are given one of them to stay in for as 
long as he's a candidate.




[EM] A very simple quota method based on Bucklin

2008-08-26 Thread Kristofer Munsterhjelm
As the subject says, this is a very simple multiwinner method that's 
based on Bucklin. I referred to it in another post, and so I think I 
should explain how it works:


Inputs are ranked ballots. Each voter starts with a weight of one. The 
quota is Droop (Hare does much worse).


As in Bucklin, start counting first place votes, then second place, and 
so on. Equal ranks may either count as one vote for each, or as a 
fractional vote for each.


At some point, the counts of one candidate will exceed the quota. That 
candidate is elected. If more than one are above quota, pick the one who 
has the most votes. If they're still tied, break by the first round 
where their numbers differed. If they're still tied, break the tie based 
on a single-winner method, or flip a coin, or somesuch.


For all of those voters that voted for the winner, reweight their 
weights by (new weight = old weight * (votes for winner - quota)/(votes 
for winner)).
Don't alter the quota, but in all other respects, restart the election 
with the winner removed from all ballots, as if he never entered. Keep 
on doing this until enough candidates have been elected.


That's it. For the single-winner case, the method reduces to Bucklin, 
which is monotonic. I'm not sure if the method is monotonic in the 
multiwinner case as well, but I think so.


According to my simulation, the method isn't as proportional as STV.



Re: [EM] PR favoring racial minorities

2008-08-26 Thread Kristofer Munsterhjelm

Raph Frank wrote:

That's fine.  In fact, if you had 50% local and 50% national seats,
then it can be made to work perfectly.

Just say that an independent must get at least 50% of the constituency
to be elected and if he does, each of his voters have their weights
reduced to

(VA-50%)/100% where VA is his percentage vote share

This gives perfect proportionality.

In effect, it 'costs' 50% of a constituency to win a seat, and the
independent has managed it and so gets a seat.  The candidate's
supporters 'spend' enough votes to elect the candidate and the rest go
to the national count.

IRV could even be used to make it a little fairer to possibly push a
candidate over the 50% mark.  (Exhausted ballots wouldn't affect the
quota, so 50% would be a true majority).  Any ballots which end up
with the independent would be de-weighted if he is elected.  This
would mean that voters would be advised not to rank independents that
they don't actually want to get elected.

Or one could use STV instead, and have more local seats. If I'm not 
wrong, the 50% mark of maximum disenfranchisement would be lowered 
significantly by STV, so fewer list seats would be required.



If there were 50 top-up and 100 local, then the A party would get 100
and the B party would be assigned all of the top-up seats.


Ah, I see now. No amount of reweighting would handle that case for FPTP 
because the 100 seats are already elected. As I said in the other post, 
the logical thing to do at this point is to start deweighting 
A-victories, although the A-voters would complain. In this two-party 
case, this results in a problem, however, since exactly 51 out of 100 
voted for A in every district. Thus, deweighting A would do nothing 
until a certain point where A loses all its seats and B gains them. The 
right thing would be that A retains some of the seats while B gets the 
rest, violating the constituency preferences only enough to get 
proportionality. But if all districts are equal in preferences (51 out 
of 100 for A), then which should get their preferences overridden? That 
would be done either randomly or by ignoring neutrality - for instance 
by picking based on geographical properties so that the overridden 
constituencies are as evenly distributed as possible.



Under some rules, the number of top-up seats can depend on the results
of the election.

One rule is that all parties must get at least 1 seat assigned.  An
independent on 50% of a district wouldn't get assigned any seats until
the number of national and local seats was equal.

Another option would be saying that if any independent gets elected,
the number of top-up seats must be equal to the number local seats.

A less severe version would be to activate the rule if more than x%
(say 5%) of the legislature are independents.  This would still cause
slight imbalances, but shouldn't be major.  It would also prevent a
major party using the decoy list strategy since, if they did, it would
trigger the rule and thus cancel out the effect, as there would be
enough seats to completely compensate.


In general, one should be careful with a variable size assembly, since 
the size can go beyond all reason if the disproportionality is driven to 
be severe enough.


Decoy lists don't have to be on independents, though. The strategy that 
was used in Italy was that they had constituency parties and list 
parties. Voters would elect the constituency party for the constituency 
and the list party for the top-up. Because the constituency party 
received no list votes, it could not be balanced in that direction, nor 
could it be balanced in the list direction, because nobody voted for 
the list party in any constituency.


Schulze's STV-MMP method suggests that, in this case (party overhang), 
some voters who voted for party A in the constituency and party B for 
the lists, are set so that a fraction of their list votes go to A instead.



My issue is that there are 2 types of candidates

- party candidates

If a party candidate wins a seat, they get 100% of the representation
for that constituency.  However, this is compensated by the fact that
they count as 1 seat won for the party.

The vote budget for that district is

Winner won a local seat
+ 1 local seat for the party

Winner counts as a party member
- 1 national seat for the party

All party votes added to party list totals as normal (1 person, 1 vote = fair)

By assigning the seat to the local winner, the winner's party loses a
seat at the national level, so it is neutral.

- independents

Winner won a seat
+1 seat for the independent

Candidate doesn't count as a member of any party
No effect on national vote

All party votes added to party list as normal (1 person, 1 vote)

Here, despite the winner winning a seat, all of his supporters still
get full effect at the national level.

Even if their votes were not eligible for the national count, it still
isn't fair, as it hasn't cancelled a full seat's worth of votes.


I 

Re: [EM] PRfavoringracialminorities

2008-08-27 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On 8/26/08, Kristofer Munsterhjelm [EMAIL PROTECTED] wrote:

 No, it uses logarithmic and exponential functions to find the divisor
 that corrects the bias that arises with certain assumptions about the
 distribution of voters. See
http://rangevoting.org/NewAppo.html . Warren
 refers to states and total population, but it works for parties as well
 - the state population is the number of voters that voted for the
 party in question, and the total population is the total number of
 voters -- or for scored single-winner methods, the score for the party
 and the total score, respectively.


Ahh, I think I had read that page before.

Anyway, his conclusion is that his parameter should be set to
d=0.495211255149063832...

Webster sets d to 0.5, so I think it would be easier to use that.

The difference is thus pretty slight and thus the benefit (if any) is
also pretty low.


True. I just gave it as an option for the perfectionists who aren't 
satisfied with Webster, or for the case where the election system is so 
complex that adding the calculation wouldn't be noticed in the grand 
scheme of things (and where every little bit helps).
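Both Webster (d = 0.5) and Warren's adjusted value slot into the same generic divisor scheme, which can be sketched sequentially as follows (names and data layout are illustrative):

```python
def divisor_apportion(votes, seats, d=0.5):
    """Sketch of divisor-method apportionment with parameter d.

    Assigns seats one at a time to the party with the highest quotient
    votes / (s + d), where s is the party's current seat count.
    d = 0.5 gives Webster/Sainte-Lague; Warren Smith's suggested
    d = 0.495211255... is a slight variant of it.
    """
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        # Party with the largest next quotient wins the next seat
        best = max(votes, key=lambda p: votes[p] / (alloc[p] + d))
        alloc[best] += 1
    return alloc
```

As the post notes, the two d values differ so little that they only rarely produce different allocations.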



 Yes. In the same vein, for single-winner methods, a NOTA that actually
does something is preferable to one that has no influence apart from
 showing that people dislike all the candidates.


Yeah.  It could be argued that it is a "leave the seat vacant / hold
another election" vote.

With IRV, it could even be a ranked option.

You can rank NOTA as your lowest option.

In the last round, if the winner doesn't have a majority including
NOTA votes, then the election is declared to have failed and a new one
called or the office left vacant.


For any ranked method, you could have a "new election" option, being 
shorthand for "I'd rather have a new election with the status quo going 
on in the meantime, than elect any of those listed below this option". 
Then, in the social ordering, if this option ranks first, there's a new 
election (and all the candidates of the previous election are barred 
from participating in the next one). If it doesn't rank first, whoever 
wins wins.


For multiwinner elections, you could either redo the entire election, or, 
if one of the seats goes to "new election", give those who were elected 
prior to this their seats and then elect the remaining seats anew. The 
latter option would be very complex, however, because you'd have to make 
sure that it retains proportionality. The obvious way to do so is to retain 
weights, but then you have to match those up with the voters, and doing 
that while keeping the secret ballot secret would be very difficult indeed.



 If we can fix the adjustment for multiple seats, it could be used with
methods that don't reduce to IRV or other nonmonotonic single-winner
methods. Reweighted Range Voting is monotonic, as are all additively
reweighted methods based on monotonic single-winner methods. However, these
don't do very well in my simulation - the best one is reweighted
plurality, which is just plurality, or in other words, SNTV.


RRV still would need the local constituencies to announce a complex
list of results.  To work out the winner, you need to know how many
voters voted A, B, C ... and also A+B, A+C, A+D, B+C ... and so on
(and that is assuming everyone votes approval style).

Actually, it is even more complex, I think for RRV you might need the
individual ballot list.


Almost all party-neutral proportional representation methods are 
nonsummable. I say almost all because Warren claims that Forest solved 
this (that is, made a summable PR method), but I don't have the 
rangevoting.org password and I don't want to join, at least not yet.


What I found of Forest discussing summability in 2007 was this: 
http://lists.electorama.com/pipermail/election-methods-electorama.com/2007-April/020081.html 
. It seems to say, in essence, that since voters' first k preferences are 
what count (for some small k), you can store them and average out the 
rest to make standard ballots that won't lose much from reality.


I think this may be iffy; there are no hard rules for how much 
proportionality you lose, and if there are more than k seats, the 
averaging could upset things.



The only way to get transfers to work would be if there was a very
simple way to handle them.

I really don't like PR-SNTV, but it would work.


It works *if* all parties run what is in essence vote management: they 
have to divide votes so that no allied candidate gets too many nor too 
few votes. The same careful allocation has to happen within each party, 
and this can encourage hierarchical systems where those high up 
apportion votes in the direction of a candidate in return for the 
candidate allying with the higher levels (both inside and outside of 
parties).


But at least it's proportional if parties do this. Majoritarian Borda or 
Condorcet (elect the first n in the social ordering) isn't even that.



Another, possibly better

Re: [EM] A very simple quota method based on Bucklin

2008-08-27 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On 8/26/08, Kristofer Munsterhjelm [EMAIL PROTECTED] wrote:

 Inputs are ranked ballots. Each voter starts with a weight of one. The
quota is Droop (Hare does much worse).


Can a voter skip ranks and also is there a limited number of ranks?

If you allow rank skipping, then a voter can distinguish between

A>B> ... >C
and
A> ... >B>C

E.g.
A:1
B:2
C:10

and

A:1
B:9
C:10

In the second case, the voter will only compromise and vote for B if A
can't get elected even after 9 rounds.




In fact, the notation could include the number of skipped ranks

A>>>B

This means
A:1
B:4

i.e.

A > (empty) > (empty) > B


That would cause exhaustion. Here's an example with six candidates, 
single winner.


10: A>>>D
10: B>>>E
10: C>>>F

The quota is 50% + 1, or 16. However, none of the candidates get more 
than 10 votes.


If the ballots are fully specified, then by pigeonhole, once all ranks 
have been included, each candidate must have got one vote per voter. 
Thus some candidate will be above quota.


One could perhaps fix this by equal-ranking all remaining candidates 
last, below any specified rank.



 For all of those voters that voted for the winner, reweight their weights
by (new weight = old weight * (votes for winner - quota)/(votes for
winner)).
 Don't alter the quota, but in all other respects, restart the election with
the winner removed from all ballots, as if he never entered. Keep on doing
this until enough candidates have been elected.


It might be worth recalculating the quota based on exhausted ballots.
Otherwise, your method might end without electing enough candidates.

You can just recalculate the Droop quota using the new seat total and
the reweighted number of votes.

For example, assuming 100 voters and 4 seats

Q = 100/(4+1) = 20

After round 1, your reweighting will decrease the effective number of
ballots by 20 and seats to 3.  This has no effect on the quota
(assuming no exhausted ballots)

Q = 80/(3+1) = 20

This means that you can just keep recalculating the quota to take
account of exhausted ballots.


It should have no effect on ballots that aren't exhausted, since the 
reweighting reduces the numerator by a quota, and the election of a seat 
reduces the denominator by one, thus canceling out.


Say there are k votes for the winner, and all weights are 1. The quota 
is Q < k. Then the sum of the new weights is k * (k - Q) / k. Cancel out 
the factor of k and we get (k - Q). Call the number of those who didn't 
vote for the winner r. Then the quota was (r + k)/(numseats + 1). 
Afterwards, we have


(r + k - Q) / (numseats),

which has reduced the numerator by a quota, and the denominator by one, 
which was what we wanted.



Another option for weightings is to weight each ballot at

w = 1/(candidates elected + 1)

If the ballot was voting for a candidate who gets elected, its
'candidates elected' count goes up by 1.

This also achieves proportionality.  It works like proportional approval voting.


Does that pass Droop proportionality? It looks like D'Hondt.
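The w = 1/(candidates elected + 1) rule above can be sketched for approval-style ballots; this is the D'Hondt-style sequential form of proportional approval voting. A sketch under the assumption of set-valued (approval) ballots, not anyone's production code:

```python
def sequential_pav(ballots, seats):
    """Sketch of the reweighting rule quoted above.

    ballots: list of sets of approved candidates.  Each round, a ballot
    counts with weight 1/(1 + number of its approved candidates already
    elected), and the highest-scoring unelected candidate wins the seat.
    """
    elected = []
    for _ in range(seats):
        scores = {}
        for b in ballots:
            # D'Hondt-style deweighting: 1/(elected from this ballot + 1)
            w = 1 / (1 + sum(1 for c in elected if c in b))
            for c in b - set(elected):
                scores[c] = scores.get(c, 0) + w
        winner = max(scores, key=scores.get)
        elected.append(winner)
    return elected
```

With party-bloc voting this reproduces D'Hondt seat totals, which fits the "looks like D'Hondt" observation.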


 That's it. For the single-winner case, the method reduces to Bucklin, which
is monotonic. I'm not sure if the method is monotonic in the multiwinner
case as well, but I think so.

 According to my simulation, the method isn't as proportional as STV.


What does this mean?  It looks like the method meets Droop
proportionality, so should be proportional.


It means that if voters and candidates have binary opinion profiles and 
vote in order of Hamming distance (number of opinions where they 
disagree) to each candidate, ranking those with greater Hamming distance 
lower and breaking ties randomly, then the difference of the proportion 
that hold the yes stance on some issue or issues in the assembly 
differ more from that proportion in the population, on average, than 
would be the case for an assembly elected using STV.


The simulation shows different scores even among methods that satisfy 
Droop proportionality. QPQ does best, then STV, then this.



If sims are showing non-proportional effects, it probably means that
votes are 'bleeding' over into other parties.

If I vote

A1>A2>B1

I could end up helping party B get seats instead of my favourite
party.  A better vote (from my point of view) would be

A1>A2

This means none of my vote bleeds over into party B.


That makes sense. Bucklin passes Later-no-help while failing 
Later-no-harm, thus producing an incentive to truncate ballots. IRV (and 
thus STV) passes both, but pays for it by being nonmonotonic.




Re: [EM] PR favoring racial minorities

2008-08-28 Thread Kristofer Munsterhjelm

Juho wrote:
Yes, security might force us to use simpler solutions like ballots to be 
similar, votes to be shorter (e.g. only two or three rankings allowed), 
and even to reduce the number of candidates. The latter two 
simplifications are already vote buying / coercion oriented.


Security might also force us toward more complex solutions, like having 
districts to limit the number of available candidates. Otherwise the 
voter might be asked to vote for some candidate from the other side of 
the country that nobody is expected to vote for.


One more approach to semi-computerized voting. A computer displays the 
personal alternatives and then prints a ballot. This solution hides the 
personalized nature of the ballot and still avoids the problem of a 
voter voting for candidates that he/she should not vote for.


One could augment the semi-computerized voting by making it print all 
candidates, but randomly order (last behind all others) the ones that 
are not applicable to the districts. Then the ballots would have to be 
examined more closely in order to figure out what house is its center.


That's not to say it would make it impervious to such attacks: the 
random ordering might easily have ... > DistantCommunistA > 
DistantRightWingerA > DistantCommunistB > ... because the randomizer 
doesn't know (and can't know) what's reasonable. Filling out the 
random-last with a Markov simulation of other ballots would be more 
reasonable, but that'd require a postprocessing step and it might mess 
with the proportionality, so I don't think that would be worth it.


However we look at it, we return to the problem that ranked ballots can 
be fingerprinted. The only solution I can see for that is to have a 
summable system and add the individual ballot in matrix (or array) 
format instead of ballot format. But most PR methods are not summable! 
Are there other ways of preventing ranked ballot fingerprinting?




Re: [EM] PR favoring racial minorities

2008-08-28 Thread Kristofer Munsterhjelm

Juho wrote:
The idea of an appropriate size circle around candidates home (or home 
district) sounds like a pretty safe and simple approach. That gives also 
the voters a natural explanation to why some of the familiar candidates 
are on the list and some not.


Dynamic districts may also be seen to fix something important. If the 
district borders are considered artificial the circle based approach 
moves the borders further away, and as a result also the problem of 
artificial borders (in the sense that one can not vote for and be 
represented by one's neighbour) may mostly fade away.


One more approach to this would be to provide perfect continuous 
geographical proportionality. One would guarantee political and 
geographical proportionality at the same time. One would try to minimize 
the distance to the closest representative from each voter and make the 
number of represented voters equal to all representatives. In short, 
distribution of representatives would be close to the distribution of 
the voters (while still maintaining also political proportionality).


There would, of course, be limits to the guarantee of having both 
political and geographical proportionality at the same time. If your 
immediate vicinity has only candidates whose opinions you completely disagree 
with, one of geographical proportionality and political proportionality 
will have to sacrifice part of itself for the other. As I've said 
before, in that case I think political proportionality is more 
important. In the long run, the effect might self-stabilize, if for no 
other reason that if there are many Y-ists in an area, one of them is 
going to notice and want to become a candidate.


I'm not quite sure how to do perfectly continuous geographical 
proportionality. My two linked ballots idea would probably work, but I 
think we can do better by using the distance information directly. Just 
how, though, I'm not sure.




Re: [EM] Geographically proportional ballots

2008-08-29 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 28, 2008, at 11:36 , Kristofer Munsterhjelm wrote:

One more approach to semi-computerized voting. A computer displays 
the personal alternatives and then prints a ballot. This solution 
hides the personalized nature of the ballot and still avoids the 
problem of a voter voting for candidates that he/she should not vote for.


One could augment the semi-computerized voting by making it print all 
candidates


That could be thousands, so maybe a subset in many cases.


Just enough to hide the data. One could print out candidates up to the 
one that's, say, a tenth of the population away from the voter.


Here I say that a candidate is N voters away from a voter if it's not 
possible to make a compact region that includes both the voter and the 
candidate, yet has fewer than N voters in it. For simplicity, the region 
might be a circle.


, but randomly order (last behind all others) the ones that are not 
applicable to the districts.


I guess one would need to know the district (or the person if the 
candidate lists are personal) to decipher the vote in the calculation 
process. That information could help also the malicious readers. Or did 
you mean that the random data would be part of the vote but would be 
just noise, since different ballots would cancel out each other's effects.


The latter. The idea is to append a ballot with noise so that a voter 
that votes A > B > C > rest, with the first of "the rest" being (randomly) 
D, cannot be distinguished from a voter somewhat closer to A, B, C, and 
D, voting A > B > C > D > rest, except probabilistically, with a 
resolution too low to discern individual voters.


The voter would also have to trust that the printed ballot is what it 
should be.


True, that's a problem, and it's not even possible to know whether the 
printed ballot has some hidden information in it. For instance, the 
voting machine might have been tampered with so that it encodes a 
timestamp (or copy of the first few ranked votes) into the random-last 
votes, encrypted so that it's indistinguishable from the noise a good 
random process would produce.


The technique that I proposed above would work best with bullet votes 
(e.g. with open lists). Also short votes (that list only few candidates) 
are quite ok (some risk if one votes for the most distant candidates in 
all directions).


Yes; and also large virtual districts.

However we look at it, we return to the problem that ranked ballots 
can be fingerprinted. The only solution I can see for that is to have 
a summable system and add the individual ballot in matrix (or array) 
format instead of ballot format. But most PR methods are not summable! 
Are there other ways of preventing ranked ballot fingerprinting?


One could break a Condorcet ballot A>B>C into separate pairwise 
preferences A>B, A>C and B>C (is this what you meant with the matrix 
format?). If there are also many other candidates (tied at bottom) one 
could use e.g. A>*, B>*, C>*, cancel B>A, cancel C>A, cancel C>B. That 
information could be derived also from a more understandable (= easier 
for the voter to check) set of opinion fragments A, B, C, A>B, A>C, B>C.


That's what I mean, but not restricted to Condorcet matrices. If you 
want to use the ballot data for any sort of positional system (as well 
as those where the positional scores change over time, like Bucklin), 
you'd use an n*n matrix where candidate[0] is the number of times the 
candidate got first place, candidate[1] is the number of times the 
candidate got second place, and so on. Counting a particular positional 
rule requires only an array (a one-dimensional matrix); but all of 
this is made all the more difficult by the fact that no PR method I 
know of is summable without doing quantization on the ballots (like 
Forest's patch does).
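The n*n positional matrix described above can be sketched as follows (a minimal illustration; the function names are the editor's, not from the post). Any positional rule is then a dot product of each candidate's row with the rule's score vector:

```python
# Summing ranked ballots into an n*n positional matrix. Entry [c][r]
# counts how often candidate c was ranked in position r.

def positional_matrix(ballots, n):
    """ballots: lists of candidate indices, full rankings over n candidates."""
    matrix = [[0] * n for _ in range(n)]
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            matrix[candidate][position] += 1
    return matrix

def positional_score(matrix, weights):
    """Score each candidate under a positional rule given per-rank weights."""
    return [sum(w * count for w, count in zip(weights, row)) for row in matrix]

ballots = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]   # three full rankings of A, B, C
m = positional_matrix(ballots, 3)
borda = positional_score(m, [2, 1, 0])        # Borda: 2 points first, 1 second
```

The summed matrix is all a positional rule (or Bucklin, with changing weights) needs; the individual ballots can be discarded.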


I don't know how to capture that formally - perhaps that I know of no 
summable party-neutral PR method that's responsive, if I understand 
responsivity correctly (if there's a tie in the social order, a single 
vote must always be able to resolve that tie). But even that doesn't 
quite get it, for say that some candidate ranked (n+1)th in the 
quantization method has a tie with another. Then a ballot that ranks 
this candidate first would be accepted and resolve the tie (assuming the 
underlying method is responsive, too).


IRV ballots are trickier. Raph Frank already mentioned the idea of 
truncation and combination of this with candidate-given preference lists.


An interesting corollary of this is that if the quantization method is 
acceptable, then the "IRV is not summable" complaints can be effectively 
countered. Just apply the patch to IRV and then you can sum up the ballots.


If you allow me I'd like to advertise trees once more. Trees (= 
hierarchical open lists) can be seen as very truncated ranked votes. 
A bullet vote for one candidate is inherited by his/her nearest group and 
so on. When the tree is formed one can expect all common thinking 
patterns to get

Re: [EM] PR favoring racial minorities

2008-08-29 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Wed, Aug 27, 2008 at 7:59 PM, Kristofer Munsterhjelm
[EMAIL PROTECTED] wrote:

True. I just gave it as an option for the perfectionists who aren't
satisfied with Webster, or for the case where the election system is so
complex that adding the calculation wouldn't be noticed in the grand scheme
of things (and where every little bit helps).


Someone would probably call you on it :).  You would have to justify
why the particular method is best.


Well, yes, but I meant something like this: if you're switching from Meek 
(numerical solution for nonlinear systems) to an election system with a 
divisor component, then a single exponential equation might seem simple 
in comparison. Of course you would have to explain it, but the society 
would already be used to the idea that voting systems may have to be 
complex to give good results, and thus would accept it (if they accepted 
the explanation, or trusted those who accepted the explanation) more 
readily than those who were not used to that.



For any ranked method, you could have a new election option, being
shorthand for "I'd rather have a new election with the status quo going on
in the meantime, than elect any of those listed below this option". Then, in
the social ordering, if this option ranks first, there's a new election (and
all the candidates of the previous election are barred from participating in
the next one). If it doesn't rank first, whoever wins wins.


Well, I guess it depends on the method.  However, if it was IRV, I
think there is a reasonable case for making the NOTA option not be
subject to elimination.


For multiwinner elections, you could either redo the entire election, or if
one of the seats goes to a new election, give those who were elected prior to
this their seats and then elect the remaining seats anew. The latter option
would be very complex, however, because you'd have to make sure that it
retains proportionality. The obvious way to do so is to retain weights, but
then you have to match those up with the voters, and doing that while
keeping the secret ballot secret would be very difficult indeed.


One option is to use Asset voting for that situation.

Your vote can designate a named candidate as responsible for voting
for you if a NOTA option wins a seat.

This might be a separate column.  You rank the candidates in one
column and then mark one of them as your NOTA delegate.



That could work. However, the NOTA list would have to be longer than the 
ordinary list, I think, because in situations where NOTA ranks first, 
that means that the candidates below NOTA are not considered good 
enough, and thus by implication would not be considered good enough to 
give one's vote (as an asset) to. The same argument, if weakened, could 
be used where all but one is below NOTA - presumably only those who gave 
the above-NOTA candidate the asset vote would consider him good enough, 
and so all those who gave below-NOTA candidates the asset would be left 
out in the cold.



What I found of Forest discussing summability in 2007 was this:
http://lists.electorama.com/pipermail/election-methods-electorama.com/2007-April/020081.html
. It seems to say, in essence, that since voters' first k preferences are
what count (for some small k), you can store them and average out the rest
to make standard ballots that won't lose much from reality.

I think this may be iffy; there are no hard rules for how much
proportionality you lose, and if there are more than k seats, the averaging
could upset things.


Hmm, this looks like a more general case.  However, I think you would
maintain proportionality as long as you meet the Droop criterion.

If all voters from a group vote for their candidate as first choice, then
they are guaranteed a seat (assuming the group has a Droop quota).

The only effect is on lower rankings.

One issue is that if a party expects to get more than 3 seats, then
there could be issues.  However, even then it mightn't be a major
problem.

Abuse would require that the abusers vote 1,2,3 for the party and then
try to mess up their 4th rank.  I think that this is likely to
increase the number of votes received by that faction rather than
decrease it.


I don't even think it needs to be malicious. If voters have sufficiently 
many opinions they compare the candidates on, that might cause honest 
differences in the 4th rank and below. Averaging, by necessity, throws 
away some of this data.
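For reference, the Droop quota invoked above is the standard STV threshold: any group holding a full quota of first preferences can force a seat. A one-line sketch:

```python
# The Droop quota: floor(valid_votes / (seats + 1)) + 1. A group with a
# full quota of first preferences is guaranteed a seat under STV.

def droop_quota(valid_votes, seats):
    return valid_votes // (seats + 1) + 1

q = droop_quota(10000, 3)   # 2501: just over a quarter of the votes
```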



I really don't like PR-SNTV, but it would work.

It works *if* all parties run what is in essence vote management:


Yes.  However, vote management strips voters of their power to choose.
 They can't bottom rank a disliked party candidate (without the party
losing a seat).


True, and I don't think you could make a summable SNTV DSV method.


The election system here in Norway is somewhat like this. In the national
election, you vote for a party (closed list PR). For each district,
candidates are allocated to the parliament according

Re: [EM] A computationally feasible method (algorithmic redistricting)

2008-09-01 Thread Kristofer Munsterhjelm

Michael Rouse wrote:


There was a discussion of district-drawing algorithms on the 
election-methods list a few years back. I've always thought that taking 
centroidal Voronoi cells with equal populations was an elegant way to do 
it. Here's an example of standard Voronoi cells and the centroidal 
version I pulled off of Google:

http://www.mrl.nyu.edu/~ajsecord/npar2002/html/stipples-node2.html


To find the district centers (centroids), you have to do what's 
effectively vector quantization. The voters make up the points, and you 
want to choose n codebook points so that the average distance to the 
closest codebook point, for all points, is minimized.


To my knowledge, optimal vector quantization is NP-hard. The good news 
is that there are approximation methods that have proven worst-case time 
complexity. However, they'll not give you the absolutely best possible 
arrangement.


One such algorithm is described here: 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.3529


A simpler algorithm, though one that doesn't give any worst-case bounds, 
is Lloyd's algorithm. Start with random center points, then calculate 
the centroids for the current Voronoi cells. Move the center points 
towards the centroid (shifting the Voronoi cell somewhat) and repeat. Do 
so until they settle. This can get stuck in a local optimum.
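Lloyd's algorithm, as described above, can be sketched in a few lines (illustrative only: this version ignores the equal-population constraint that real districting would add, and the helper names are the editor's):

```python
# A minimal sketch of Lloyd's algorithm on 2-D voter coordinates:
# assign voters to the nearest center (Voronoi cells), move each center
# to its cell's centroid, repeat until (approximately) settled.
import random

def closest(point, centers):
    """Index of the nearest center, by squared Euclidean distance."""
    return min(range(len(centers)),
               key=lambda i: (point[0] - centers[i][0]) ** 2
                           + (point[1] - centers[i][1]) ** 2)

def lloyd(points, k, iterations=50, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                # random initial centers
    for _ in range(iterations):
        cells = [[] for _ in range(k)]
        for p in points:                           # Voronoi assignment
            cells[closest(p, centers)].append(p)
        for i, cell in enumerate(cells):           # move to centroid
            if cell:
                centers[i] = (sum(p[0] for p in cell) / len(cell),
                              sum(p[1] for p in cell) / len(cell))
    return centers

points = [(random.Random(i).random(), random.Random(i + 99).random())
          for i in range(200)]
centers = lloyd(points, 4)
```

As the text notes, this converges to a local optimum only; different starting centers can give different districtings.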


The other possibility I liked was allowing voters to vote for the 
districts they wanted -- either for the next election, or more 
entertainingly, the current one. People have a pretty good feel for what 
mapping is compact and reasonable, and which ones are ridiculous, 
especially if they can compare them. You could have certain criteria 
that must be met -- like all districts must be contiguous  -- and sort 
the maps by some metric, like from shortest to longest aggregate 
perimeter. You could have all qualifying parties submit a map, as well 
as any group that gets above a certain number of signatures in a petition.


Those maps could be pruned so that only the Pareto front remains. That 
is, if there's some map that's worse on all metrics with regards to some 
other map, then that first map isn't included. As long as there are 
enough metrics to give a reasonable choice on the Pareto front, this 
should exclude the worst of the gerrymandered proposals and keep the 
voters from being swamped with millions of frivolous proposals.
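The Pareto-front pruning described above could be sketched like this (metric names and values are made up for illustration; lower is taken to be better on every metric):

```python
# Keep a proposed map only if no other map is at least as good on every
# metric and strictly better on at least one (i.e. drop dominated maps).

def pareto_front(maps):
    """maps: list of (name, metrics); metrics is a tuple, lower = better."""
    front = []
    for name, m in maps:
        dominated = any(all(o <= s for o, s in zip(other, m)) and other != m
                        for _, other in maps)
        if not dominated:
            front.append(name)
    return front

proposals = [("A", (3.0, 2.0)),   # e.g. (aggregate perimeter, county splits)
             ("B", (2.0, 4.0)),
             ("C", (3.5, 2.5))]   # dominated by A on both metrics
front = pareto_front(proposals)
```

Here map C is worse than A on both metrics and gets pruned, while A and B survive as a genuine trade-off.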


I don't think it's necessary to make it that complex, though. If you 
favor actual people doing the final choice, an independent commission 
(like the redistribution commissions of Canada and Australia) could make 
the choice of which nondominated map to use.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] language/framing quibble

2008-09-01 Thread Kristofer Munsterhjelm

rob brown wrote:
On Mon, Sep 1, 2008 at 3:20 AM, Kristofer Munsterhjelm 
[EMAIL PROTECTED] wrote:


Consider Condorcet. One of the greater problems with plurality is
vote-splitting, which favors minorities since it destroys a center
that many think is good but only a few think is great. Thus,
adopting Condorcet would help the majority, not minorities at the
expense of the majority, ...

 
First, I think you are misusing the words majority and minority here 
(as is common).  Personally I think they have no meaning unless there 
are only two candidates (and there were never any other potential 
candidates).


I'd say that one could generalize the above to coalitions. Then a method 
favors a majority if it gives more power to a coalition supported by a 
majority, and favors a minority if it gives more power to a coalition 
supported by a minority.


One example I would use to argue that Plurality can swing in the 
direction of a minority is the 1987 South Korean election. The two 
democratic groups split the vote, giving the election to the general who 
supported an earlier coup.


I'll grant that if we use your definition, that example isn't applicable 
- since there were three parties in all.




Re: [EM] Geographically proportional ballots

2008-09-02 Thread Kristofer Munsterhjelm

Juho wrote:

On Aug 29, 2008, at 15:51 , Kristofer Munsterhjelm wrote:

One more approach to semi-computerized voting. A computer displays 
the personal alternatives and then prints a ballot. This solution 
hides the personalized nature of the ballot and still avoids the 
problem of a voter voting for candidates that he/she should not vote for.


One could augment the semi-computerized voting by making it print 
all candidates

That could be thousands, so maybe a subset in many cases.


Just enough to hide the data. One could print out to the nearest 
candidate that's, say, a tenth of the population away from the voter.


Here I say that a candidate is N voters away from a voter if it's not 
possible to make a compact region that includes both the voter and the 
candidate, yet has fewer than N voters in it. For simplicity, the 
region might be a circle.


One should maybe avoid the possibility of someone deriving the location 
of the voter based on the distribution of all the candidates on the 
ballot. (Also picking fully random candidates may reveal the location 
since there will be one concentration of nearby candidates.)


This could happen if the voter has a compact region significantly 
different from the rest. For instance, vote-buyers may (theoretically) 
advertise that they're just in range of candidate x, so that x will be 
on his ballot whereas x won't be on all the others in his region.


But if that's not the case, then the only way your deduction will work 
is statistically, and when doing so, it'll be hard to separate a single 
person from the rest of the mass you know is in the close vicinity. If 
you tell A to rank yourself in first place, and you then find out that 
one of the ballots from this area has yourself in first place, you still 
don't know if it's A or not (absent intentional ballot fingerprinting by 
the voter).




Re: [EM] A computationally feasible method (algorithmic redistricting)

2008-09-03 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Tue, Sep 2, 2008 at 11:00 PM, Kristofer Munsterhjelm
[EMAIL PROTECTED] wrote:

The reasonable thing to use would be Euclidean distance, since that makes
sense, given the geometric nature of the districting problem. If you want to
be even more accurate, you can use great circle distance instead to reflect
that the districts are on the (near-)spherical Earth, but at the scales
involved, the difference would be slight (and so it would be another
"dotting the i's" refinement :-)


My splitline software used a sphere (and gnomonic projection for the
lines).  However, I think it wasn't necessary, but wanted it to be as
accurate as possible.

OTOH, I think that it might be better to define the map as a 2d map
using Mercator projection.  The problem with the sphere is that each
point is a real number.

If the data is presented as (longitude, latitude), then the numbers
that are input into the algorithm are rational numbers as they are
given with a certain number of digits after the decimal point.  This
allows an exact solution to be found.  However, with reals, different
precisions could give different answers.  Also, if there is a tie, it
may not be possible to determine that one occurs.  This is less of a
problem if a specific algorithm for determining the result is used.

I think splitline can be solved exactly if the map is 2-d and all
coordinates in the input data are rational numbers.


If you use a Mercator projected map, you're just hiding the 
quantization. All maps have some distortion, and since the map 
projection uses trigonometric functions, you can just use the Haversine 
distance directly. If you need the precision and an exact measurement of 
error, you could use a rational number class with sine and cosine 
approximation tables (or Taylor series), but I think real error like the 
Earth not being perfectly spherical will get you before the rounding 
errors do.
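The haversine (great-circle) distance mentioned above, under the spherical-Earth assumption the text itself flags as approximate (the city coordinates below are illustrative):

```python
# Great-circle distance between two (lat, lon) points in degrees,
# assuming a spherical Earth of radius 6371 km.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

d = haversine_km(59.91, 10.75, 60.39, 5.32)   # roughly Oslo to Bergen
```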



Back to the Voronoi diagrams, I think you may have misunderstood
what I meant.

The issue here is that ordinarily, it doesn't matter what power you use.

If you colour each pixel the colour of the nearest point, then you get
a standard Voronoi  diagram.

Likewise, if you say that you colour each pixel the colour of the
point with the lowest (distance)^2, then you get the same diagram.

If you square all the distances, then the lowest distance is still the lowest.

All cells will have straight lines as their edges.

However, if I say that you should colour the pixel the colour of the
nearest point, but that the distance to point 1 is to be decreased by
100km, then you would expect that the cell with point 1 as its centre
would be increased in size as some pixels which went to other cells,
would now go to point 1's cell (as its distance has been decreased).

If you look at the results, then you would see that cell 1 no longer
has straight lines as its boundary.

Now, if instead, you assign the pixels to the cell with the nearest
distance squared and apply an offset to point 1, then you still
maintain straight line edges.

Also, it has the nice feature that you can work out the square distance as

(x0-x1)^2 + (y0-y1)^2 + Cn

You save a square root which takes a long time to calculate.


I see. I thought you were talking about how to calculate the distance in 
the first place. Since squaring and square roots are monotonic, if you 
have squared distance = (x0-x1)^2 + (y0-y1)^2, picking the maximum or 
minimum distance would be the same as picking the maximum or minimum 
squared distance. Weighting would differ, of course, as you note.


Power diagrams on Euclidean distance are still convex polyhedra, though, 
to my knowledge.
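The additively weighted squared-distance assignment Raph describes (a power diagram) can be sketched as follows; the offsets and coordinates are illustrative. The edges stay straight because the comparison is between quadratics with identical leading terms:

```python
# Assign a point to the cell minimizing squared distance plus a
# per-cell offset C (a power diagram). Negative offsets enlarge a cell.

def assign(point, centers, offsets):
    def power(i):
        cx, cy = centers[i]
        return (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + offsets[i]
    return min(range(len(centers)), key=power)

centers = [(0.0, 0.0), (4.0, 0.0)]
cell_plain = assign((2.5, 0.0), centers, [0.0, 0.0])    # nearer center 1
cell_offset = assign((2.5, 0.0), centers, [-8.0, 0.0])  # offset wins it back
```

As Raph notes, no square root is needed, and the unweighted case reduces to an ordinary Voronoi diagram.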



That last one might be ideal for districting.  It would allow a city
to be a circle surrounded by a rural district.


The surrounding district would score pretty badly on the all-pairs 
distance measure, and also on a similar convexity measure (given as the 
probability of the line between two random points being entirely inside 
the district, or where it is outside of the district, being so only over 
water or outside of the state).



For example, if you had a State with 2 cities and 3 seats, it might be
best to split the state into 2 circle districts centred on the cities
and 1 rural district which is everyone else.

Most other methods can't handle having one district as an island
contained in another.


If it's best, the earlier measures are not adequate to discover it.


Also note that it's possible to find the borders of the Voronoi cells (for
the Euclidean metric, at least) much quicker than doing a nearest-neighbor
search on every single pixel. Quantization brought on by the varying sizes
of census blocks may complicate matters, though.


Yeah, most of the automatic methods assume that the 'population' is
uniform density.

What is nice about the reweighted version is that you can expand and
contract a region without having to move the points.

If you increase a region's weight

Re: [EM] Using gerrymandering to achive PR

2008-09-03 Thread Kristofer Munsterhjelm

Raph Frank wrote:

1) Every odd year, an 'election' is held but voters vote for parties

2) based 1), seats are distributed using d'Hondt between the parties


If you're going to have D'Hondt, or PR in general, why bother with the 
districting? Just use open list or a party-neutral proportional 
representation method like STV.
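The d'Hondt rule referred to in step 2 works by handing out seats one at a time to the party with the highest quotient votes/(seats won + 1). A sketch with made-up vote totals:

```python
# D'Hondt (highest averages) seat allocation: repeatedly award a seat
# to the party with the largest votes / (seats_won + 1) quotient.

def dhondt(votes, seats):
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

result = dhondt({"A": 340000, "B": 280000, "C": 160000, "D": 60000}, 7)
```

D'Hondt is known to slightly favor larger parties relative to Sainte-Laguë/Webster, which divides by odd numbers instead.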


It's not impossible to introduce STV; it has been used in many places in 
the US, like New York's experiment with PR from 1937 to 1945. What seems 
to be more difficult is to keep PR in the face of compounding opposition 
from the established parties (though in the aforementioned New York 
case, they got a lot of help from the Cold War situation along with the 
election of Communists; they could then link communism and PR).



Also, if a party cannot be gerrymandered any seats, maybe it should be
eliminated, to allow its supporters votes to be useful.

Another question, would it be illegal to restrict candidates so that
they must come from the designated party?  This would allow each party
to run 2 candidates in their district, thus improving choice.


I suppose that if you want to steer democracy, you could redistrict so 
that a certain fraction (changing for each election) have narrow 
margins. The question would be one of stability on one hand and 
responsive changes on the other (analogous to feedback damping), but 
again, who's to say where the optimum is? That is, if one should steer 
democracy in the first place.




Re: [EM] Free riding

2008-09-03 Thread Kristofer Munsterhjelm

Jonathan Lundell wrote:

On Sep 3, 2008, at 12:28 AM, Juho wrote:

I hope this speculation provided something useful. And I hope I got 
the Meek's method dynamics right.


Meek completely fixes Woodall free riding. That strategy takes advantage 
of the fact that most STV methods (to the extent we're in a STV/Meek/etc 
context) are sensitive to elimination order in how they distribute 
surpluses. In most other STV methods, if I vote for my first and second 
preferences AB first, and A has a surplus, then only a fraction of my 
vote (or a probabilistic whole) transfers to B. But if I rank hopeless 
candidate Z first: ZAB, then (hopefully) A gets elected before Z is 
eliminated, and my whole vote goes to B. If Z gets eliminated first, no 
harm done, I'm left with AB. The hazard, of course, is that so many 
voters do this that Z gets elected and/or AB eliminated.


Meek cures this entirely via its principle that when Z is eliminated, 
the ballots are counted *as if Z had never run*. There's no advantage to 
me in ranking Z first.


In general then, any method that acts like Z had never run (when Z is 
eliminated) would be resistant to Woodall free-riding.


Hylland is another kettle of fish. Here, I vote B > A instead of my 
sincere A > B, because I know that A will be elected without my help, 
and I can afford to spend my entire vote on B.


This is only useful, of course, if I'm competing with other A supporters 
who have some second choice, say A > C voters. They will have only a 
fraction of their votes transfer to C, while I will have my entire vote 
counted for B because I didn't bother to rank A first, even though A is 
my first choice (I'd better be very confident).



There's a risk to the Hylland strategy, of course, if I make a mistake 
in judging that A will be elected without my help. Other than that, 
though, I don't offhand see a way of defending against Hylland free riding.


Hmm.. what could be done here? We could try to find methods that 
resist Hylland free-riding, or find methods where there are few honest 
reasons to use the vote management version.


For the latter, I think PR methods that deal with equally ranked 
candidates as if they were symmetrically completed would have an 
advantage. For a party that expects very few personal votes, equal 
ranking would spread the voting power to a much larger extent than they 
could by running a vote management strategy. For instance, for a 6 
candidate case, there's no way the party could arrange 720 different 
pseudo-bailiwicks. Hopefully, parties that say don't equal rank would 
appear dishonest. There's nothing stopping them from doing so, 
technically, though, and the equal-rank property would make it easier 
for those who actually want to do vote management to do so, as they can 
get the majority to equal rank and then just have a small subgroup vote 
opposite the ordering of the personal voters.


For the former, I think that Approval methods would have some inherent 
safety against this (simply because you can't reorder the candidates). I 
might be wrong (I don't know enough about it), since Plurality doesn't 
let you reorder the candidates either, but SNTV basically requires 
vote-management to work at all.


One could also have a PR method that uses relative information about the 
ranking of strong winners as little as possible. Schulze's STV is one 
example of such a method. Perhaps one could make a method based on DSC 
or DAC in a similar vein (but not PSC-CLE, it scores badly in my 
simulations), since DAC/DSC works based on sets of candidates.


A final option would be to have a method where either not running a vote 
management scheme is a stable equilibrium, or where the risks when 
performing vote management are too high. The latter would probably deter 
individual voters more than parties, since parties can coordinate; but 
parties can't perfectly manage votes either.




Re: [EM] A computationally feasible method (algorithmic redistricting)

2008-09-03 Thread Kristofer Munsterhjelm

Brian Olson wrote:


I guess my time in Computer Science land has left me pretty comfortable 
with the idea that there are lots of problems that are too hard to ever 
reliably get the best solution. I don't know if there's a short-short 
popularizing explanation of how finding a good solution is Hard while 
measuring the quality of a solution is pretty quick.


If anybody asks and it's not the time, place, or audience for discussing 
NP Hard problems, I will wave my hands and say, Hey, look over there! 
Good results, on my one puny computer! With more it'd only get better!


I think puzzles and games make good examples of NP-hard problems. 
Sokoban is PSPACE-complete, and it's not that difficult to show people 
that there are puzzles (like ciphers) where you know if a solution is 
right, but it takes effort to find the solution. That's pretty much the 
point of a puzzle, after all (although not all puzzles are NP-hard; they 
can be fun even if they're not, as long as they do something for which 
it's challenging to find a solution).




Re: [EM] Free riding

2008-09-04 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Wed, Sep 3, 2008 at 10:51 PM, Kristofer Munsterhjelm
[EMAIL PROTECTED] wrote:

In general then, any method that acts like Z had never run (when Z is
eliminated) would be resistant to Woodall free-riding.


Right, you can get that benefit from a lot of methods.  For example,
you could do hand-counted PR-STV with the following changes:

- restart the count if someone is eliminated (excluding them from consideration)
- the quota is recalculated at each restart

This gets almost all the benefit from Meek's method.


At the risk of sounding repetitive, I'll say that this is what the 
multi-round tweak to QPQ does. Whenever someone's eliminated, that 
candidate is permanently excluded, and then the method starts from the 
beginning. Presumably it too would be resistant to Woodall free-riding. 
I don't know if the single-round method is as well.



Hmm.. what could be done here? We could try to find out methods that resist
Hylland free-riding, or find methods where there are few honest reasons to
use the vote management version.


Ultimately, the problem is that you cannot meet Droop proportionality
without allowing it to some extent.


The next best thing is what Schulze calls weak invulnerability to 
Hylland free-riding. A method passes WIHFR if it's vulnerable to 
Hylland free-riding only if to not be so would make it violate Droop 
proportionality.


(He then proves an equivalent condition, but the notation is a bit too 
heavy for me to parse. It seems to be "if voters reorder strong winners, 
then the outcome should not change", where a strong winner is in all 
stable sets, a stable set being a multiwinner generalization of the 
Smith set.)



For instance, for a 6 candidate case, there's no way the party
could arrange 720 different pseudo-bailiwicks.


I can't imagine them trying in that case; they're likely to just try
first-choice vote management.  Maybe more seats would protect
against vote management.


There's one way they could slip this under the radar. The party could 
say that they want all voters to vote A1  A2  A3 instead of A1 = A2 = 
A3, ostensibly to perform the same function as list ordering does in 
party list PR. Then, once the people have become used to not using 
equal-ranking, the party can spice it up by vote-managing.



There's nothing stopping them
from doing so, technically, though, and the equal-rank property would make
it easier for those who actually want to do vote management to do so, as
they can get the majority to equal rank and then just have a small subgroup
vote opposite the ordering of the personal voters.


I think if voters were aware of it, they may react by not giving
personal votes to that party.


Yes. From one point of view, vote-management based on Hylland 
free-riding uses the strength of personal votes to prop up a collective 
(party) ordering that would otherwise collapse. This can be seen from 
the fact that if there are no personal votes, everybody votes by party, 
and the party is indifferent to which of its candidates actually get 
the seats, then equal-ranking would have no disadvantages.



For the former, I think that Approval methods would have some inherent
safety against this (simply because you can't reorder the candidates).


You mean proportional approval voting?

I think that has strategy issues too.


Not necessarily PAV, but a method that's based on Approval and would 
otherwise be as good as STV, if such a beast exists. What kind of 
strategy can be used in PAV?



Schulze's STV is one
example of such a method. Perhaps one could make a method based on DSC or
DAC in a similar vein (but not PSC-CLE, it scores badly in my simulations),
since DAC/DSC works based on sets of candidates.


I wonder if a range ballot would be useful.  The algorithm then
optimally converts it into a ranked ballot.

For example,

1) Each voter submits a range ballot
2) Each voter's algorithm votes for 1 candidate
3) Vote totals are published
4) Repeat step 2,3 say 200 times

Each candidate's score is the sum of votes received in the last 100 rounds.

This allows voters to decide if they want to risk it.  If their favourite
is ahead, they might decide to stop voting for him.  However, if they
have rated him much higher than 2nd, they might continue to vote for
that candidate.

I am not sure what algorithms to use in 2), one option is to allow
voters to pick.


Allowing voters to pick the candidate to vote for would be very tedious; 
the algorithm would have to run 200 rounds of voting. If this was a 
single-winner method, you could have used a cardinal ratings (range) 
equivalent of approval strategy A, but I'm not sure how you'd make a 
multiwinner version of that.
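Raph's 200-round scheme could be sketched as below, in a single-winner form for illustration. The proxy strategy in step 2 (vote for whichever of the two current frontrunners you rate higher) is an editor's assumption; the post deliberately leaves that choice open:

```python
# Iterated DSV sketch: each round every voter's proxy casts one vote
# based on the published totals; the final score sums the last 100
# rounds. The frontrunner-tracking strategy here is one possible
# algorithm, not the one from the post.

def iterate_dsv(ratings, rounds=200, tally_last=100):
    n = len(ratings[0])
    history = []
    current = list(range(n))              # round 1: all candidates viable
    for _ in range(rounds):
        round_votes = [0] * n
        for ballot in ratings:            # each proxy votes for its
            choice = max(current, key=lambda c: ballot[c])   # best viable
            round_votes[choice] += 1
        history.append(round_votes)
        # next round: attention narrows to the two frontrunners
        current = sorted(range(n), key=lambda c: -round_votes[c])[:2]
    return [sum(h[c] for h in history[-tally_last:]) for c in range(n)]

# 5 voters rating candidates A, B, C on a 0-10 range scale:
ratings = [[10, 6, 0], [10, 5, 0], [0, 7, 10], [0, 6, 10], [2, 10, 9]]
score = iterate_dsv(ratings)
```

With these ballots the process settles immediately: B drops out of the frontrunner pair after round one, and C beats A 3-2 in every later round.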


More generally, this could be considered DSV. Within the category of 
DSV, one could probably make a method that vote-manages on behalf of all 
voters more effectively than any party can. I thought Schulze STV did 
that (because of its mention of strength of vote managements

Re: [EM] language/framing quibble

2008-09-04 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Afternoon, Kristofer Munsterhjelm

Thank you for your thoughtful comments.  I understand and agree with you 
on plurality and two-party dominion, and their off-shoots, 
gerrymandering and the various forms of corruption.  The difference 
between our views seems to be the focus on finding a 'better way' to 
count votes when (in my opinion) the real problems are the 'who' and the 
'what' we vote for.  Until we enable the people, themselves, to select 
who and what they will vote for, changing the way the votes are counted 
is an exercise in futility.


Although you didn't specifically say so, I take it you do not consider 
the political duopoly right.  Neither do I.  But neither do I see 
wisdom in fragmentation ... replacing the duopoly with a multitude of 
smaller factions ... because it bypasses the vital step of studying the 
nature of partisanship and how it came to dominate politics, right here 
in the birthplace of 'The Noble Experiment':


   When the Founders of the American Republic wrote the U.S.
Constitution in 1787, they did not envision a role for
political parties in the governmental order.  Indeed, they
sought through various constitutional arrangements such as
separation of powers, checks and balances, federalism, and
indirect election of the president by an electoral college to
insulate the new republic from political parties and factions.
Professor John F. Bibby[1]


You are right in your assumption: I do not consider the political 
duopoly right. On the other hand, I don't consider better election 
methods to be mere tweaks. The construction of organizations and their 
interplay in the domain of politics is, I think, more than anything else 
a process. The process is influenced by both external and internal 
constraints: what weakens and what strengthens.


A plurality system serves as a constraint that gives voice to the two 
most likely choices and consequently silences the rest. However noble 
minor parties or groups may be, there is no way that they can compete, 
at least not without displacing a former major group or party (as the 
Republicans did after the Civil War, occupying the empty space left 
after the collapse of the Whigs).


I think that a proper election method can offer the people a much better 
way of picking their leaders, when that is required. Ideally, such 
leaders would not be required, and we'd all be in a minimally 
hierarchical society, but reality intervenes.


Because organization is a process, changing the election method doesn't 
just change how the people pick their representatives or leaders, but 
also how those leaders react to the now-differing constraints, and how 
the people in turn respond to those changes, and so on. Using a 
party-neutral method (like STV) would also encourage independents to run, 
since they'd actually have a chance, and thus weaken the partisanship you 
refer to. With Duverger's tendency reversed, the multiple parties would 
keep any one party from gaining such dominance that it could push policy 
through unopposed, even more so since the opposition of multiple parties 
would be stronger than the opposition of a single party.


If considered desirable, party power could be weakened further by rules 
similar to those of the consensus government used in some Canadian 
territories. One should still be careful not to consider organization 
itself an evil and reason that since dictatorships are the extreme of 
order, the extreme of chaos, on its own, would be ultimate liberation. 
At the least, one should have something with which to replace the old 
party dynamics, or risk that groups make their own rules (rules that 
favor themselves, naturally).


To sum that up, I am saying that first, altering the methods of election 
can lead to favorable results beyond the immediately obvious. Second, 
perhaps partisan politics can be improved upon, but if there'll still be 
elections, there'll still be a need for good election methods; and 
third, further decentralizing changes will be next to impossible to get 
through when the ruling parties are so few and hence so much a central 
power.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] A computationally feasible method (algorithmic redistricting)

2008-09-04 Thread Kristofer Munsterhjelm

Juho wrote:

On Sep 4, 2008, at 0:59 , Kristofer Munsterhjelm wrote:
I think puzzles and games make good examples of NP-hard problems. 
Sokoban is PSPACE-complete, and it's not that difficult to show people 
that there are puzzles (like ciphers) where you know if a solution is 
right, but it takes effort to find the solution. That's pretty much 
the point of a puzzle, after all (although not all puzzles are 
NP-hard; they can be fun even if they're not, as long as they do 
something for which it's challenging to find a solution).


Puzzles and ciphers are good examples of cases where general 
optimization may typically fail to find even a decent answer (well, in 
these example cases the solution must be 100% good or it is no good at 
all). My assumption was that in the area of voting methods it would be 
typical that general optimization methods are sufficient and will with 
good probability lead to good enough results. Are there any 
counterexamples to this?


That gets harder, since most puzzles are all-or-nothing: a solution is 
either right or it isn't. Games could count, but that muddies the 
situation, because when games are *-complete, they're usually 
PSPACE-complete (since you have to come up with something that works for 
all possible replies).


However, games that have rules governing the dynamics could be used. For 
instance, finding out where to put the pieces in a Tetris game in the 
absolutely best manner possible is NP-complete. Still, people manage to 
play Tetris, because their approximations are good enough (or not, in 
which case they usually lose at higher levels) and because the 
situations aren't critical.


The problem with using these examples is that you lose the explaining 
power. If you tell someone that placing pieces optimally in Tetris is 
NP-complete, he won't get it, since to him Tetris is easy up to a 
certain point (assume this is someone who knows how to play it), and the 
reason he loses at a sufficiently high level is that he can't approximate 
fast enough, which probably has very little to do with asymptotic 
complexity and much more to do with the constant factors.


So you'd have to use puzzles to explain the all-or-nothing version and 
only then go on to the optimization version.


Note that I fudged my own explanation a bit here: optimal anything isn't 
NP-complete, it's NP-hard. It's the decision problem (is there a way of 
doing better than x by some given measure?) that's NP-complete, and you 
can find the optimum (the best achievable value of x) by doing a binary 
search.
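That binary-search trick can be sketched with a stand-in oracle. The oracle below is a toy (a lambda that knows the answer), not a real NP-complete solver; the point is only how repeated decision queries locate the optimum:

```python
def find_optimum(decision, lo, hi):
    """Binary-search the optimum using an NP-complete decision oracle.
    decision(x) answers "is there a solution with value better than x?";
    the returned value is the best achievable value in [lo, hi]."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if decision(mid):   # something better than mid exists
            best = mid + 1
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Toy oracle standing in for the hard problem: the true optimum is 42.
print(find_optimum(lambda x: x < 42, 0, 1000))  # -> 42
```

Each oracle call is one decision-problem instance, so the optimum costs only logarithmically many of them.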




Re: [EM] No geographical districts

2008-09-05 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Fri, Sep 5, 2008 at 2:00 AM, Stéphane Rouillon
[EMAIL PROTECTED] wrote:

Hello Juho,

using age, gender or other virtual dimension to build virtual districts
replaces geographic antagonism by generation antagonism.
The idea is to get equivalent sample that are not opposed by intrinsec
construction.


A simple option would be to convert the date of birth into a number,
but have the year as the least significant part.

16-04-82 would become 160,482

The public could then be sorted by those numbers.  In effect, you are
splitting people by the day of the month they were born on; if there is
a tie, you use the month, and only use the year at the end.

This would give a mix of ages, genders and any other measure in each district.

It is pretty much equivalent to just randomly distributing the voters
between the districts, but unlike a random system, it is harder to
corrupt.
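The key described above could be sketched like this; the numeric encoding is one possible reading of the 160,482 example (day, then month, then two-digit year):

```python
from datetime import date

def district_key(birthdate):
    """Sort key with the year as the least significant part: day first,
    then month, then two-digit year -- so 16-04-82 maps to 160482."""
    return birthdate.day * 10000 + birthdate.month * 100 + birthdate.year % 100

voters = [date(1982, 4, 16), date(1975, 4, 16), date(1990, 1, 2)]
print(sorted(voters, key=district_key))  # 1990-01-02 sorts first (day 2)
```

Districts would then be cut from consecutive runs of this sorted order, mixing ages within each district.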


It could have a result similar to that of alphabetically ordered ballots, 
only with birthdays instead of last names. The selection would be biased 
in the direction of those born close to January. It may not matter, but 
it would appear unfair.


If you have computers, you could just sort by SHA512(name concatenated 
with birthdate concatenated with the year of the election). That's 
probably overkill (since even if you could break SHA-512, which would be 
a feat by itself, you'd have to convince the favored member to change 
his name to something suitable), but then there'd be a sufficient margin 
of safety. Randomness without randomness.
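A minimal sketch of that hash-based ordering, using SHA-512 from Python's standard hashlib; the field separator and input formatting are my own choices, not part of the proposal:

```python
import hashlib

def sort_token(name, birthdate, election_year):
    """Deterministic but pseudorandom ordering token: SHA-512 over the
    name, birthdate, and election year.  Changing the election year
    reshuffles everyone, giving 'randomness without randomness'."""
    data = f"{name}|{birthdate}|{election_year}".encode("utf-8")
    return hashlib.sha512(data).hexdigest()

voters = [("Alice", "1982-04-16"), ("Bob", "1975-11-03"),
          ("Carol", "1990-01-02")]
ordered = sorted(voters, key=lambda v: sort_token(v[0], v[1], 2008))
print([name for name, _ in ordered])
```

Anyone can recompute the ordering, so it's auditable, yet nobody can usefully influence their position without breaking the hash.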




Re: [EM] Free riding

2008-09-06 Thread Kristofer Munsterhjelm

Raph Frank wrote:

I think there is a slight issue.  In PAV, the satisfaction of each
voter is determined by

S(N) = 1 + 1/2 + 1/3 + 1/4 + ... + 1/N

Where N is equal to the number of candidates elected.

An approx function could be created that gives S(N) for non-integer N.
 The easiest would be just linear interpolation.  However, log(N) is
pretty close, but has a slight offset.

So maybe:

C(N) = log(N) - S(N)
Creal(x) = linear interpolate C(N)

Sreal(x) = log(x) - C(x)

This gives the right answer at the integers and a smooth curve between
them (maybe too much detail here :) ).


Why not have just linear interpolation, like RRV? Say that you've voted 
for A and B, and these are elected, and your rating was:


A: 0.9
B: 0.6

Then your satisfaction is 0.9 + 0.6/2. Voting approval-style (A: 1, B: 
1) would give the familiar 1 + 1/2.


If it's Cardinal-n (a limited number of rating levels), then you would 
normalize by the maximum (e.g. 8 out of 10 becomes 0.8) and sum as above.
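The rated satisfaction described above might look like this in code; taking the ratings from highest to lowest before weighting is my assumption, chosen to match the 0.9 + 0.6/2 example:

```python
def rated_pav_satisfaction(ratings):
    """One voter's satisfaction with a set of elected candidates: the
    k-th rating (sorted high to low, an assumption on my part) is
    weighted by 1/k, generalizing PAV's 1 + 1/2 + ... harmonic terms."""
    ordered = sorted(ratings, reverse=True)
    return sum(r / k for k, r in enumerate(ordered, start=1))

print(rated_pav_satisfaction([0.9, 0.6]))  # 0.9 + 0.6/2 = 1.2
print(rated_pav_satisfaction([1.0, 1.0]))  # the familiar 1 + 1/2 = 1.5
```

With all-ones (approval-style) ratings this reduces exactly to the PAV satisfaction S(N).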



N would then be set equal to the sum of the ratings divided by the max
rating.  E.g. if I rate A as 100 and B as 30, and both are in the
result, then that counts as 1.3 candidates elected, so would take 1.3
terms of the above sum.  This would work out as a satisfaction of
around 1.2 with the above formula, which is between a satisfaction of
1.0 for 1 candidate and 1.5 for 2 candidates.


Oh, I see. You're considering a candidate with score 0.3 being not one 
candidate elected at score 0.3, but a third of a candidate elected. 
Let's consider that from basics. In PAV you have a function,


   f(x) = SUM_{n=1}^{x} 1/n

defined for integers, so that e.g. f(2) = 1/1 + 1/2, which fits. Then, 
the continuous version would be the integral. The integral of 1/x is 
log(x), but that doesn't quite give the same results since 1/1 + 1/2 
just sums up two values, whereas the integral takes the entire curve 
between 1/1 and 1/2. Thus you'd have to adjust it somehow.


If done correctly, you wouldn't need any sort of interpolation, though.

A simple regression gives f(x) = log(e + 1.773 * (x-1)). The middle 
constant there increases very slowly: it's 1.7778 for x=14.


Using 1.773, your 1.3 candidate example would give a satisfaction of 
1.17871.
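The fitted formula can be checked numerically against the exact harmonic sum; the 1.773 constant is taken from the text above, and the comparison at x=2 is my own spot check:

```python
import math

def harmonic(n):
    """Exact PAV satisfaction for n approved-and-elected candidates."""
    return sum(1.0 / k for k in range(1, n + 1))

def approx(x):
    """The fitted continuous form f(x) = log(e + 1.773*(x - 1));
    exact at x = 1 since log(e) = 1."""
    return math.log(math.e + 1.773 * (x - 1))

print(approx(1.3))              # about 1.17871, the 1.3-candidate example
print(harmonic(2), approx(2))   # 1.5 versus roughly 1.502
```

The fit is exact at x=1 and stays within about 0.002 of the harmonic numbers for small x, which is why no interpolation is needed.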



The hard part is finding a way of tallying group support. As mentioned,
first preference wouldn't work, because then you'd get a return of Woodall
free-riding: parties would say "just vote for a friend as a write-in, then
the order we give you".


Wouldn't parties want to have as many FPV as possible?


Oops, I must have been thinking about decoy lists. You're right, so the 
distortion would be in the other direction. Counting party support by 
FPV would discourage personal votes, both in one end (because 
independent > A1 > A2 would count for the independent and not at all for 
A) and in the other (because A1 > independent > A2 > ... would count 
completely for A).



In theory, if there was some way to measure vote management, then a
party could be punished for vote management by being assigned a lower
value in a later election, but that is probably too complex.


If there was a method of counting party support, parties with 
superproportional support could be deweighted the next time around. I 
think it would be too unstable, though.




Re: [EM] Geographical districts

2008-09-10 Thread Kristofer Munsterhjelm

James Gilmour wrote:

Raph Frank  Sent: Friday, September 05, 2008 12:35 PM



Also, what is optimal for "Should we use subsidiarity to make
decisions?"


I don't think this question can be answered as you have asked it.
Perhaps it would be more appropriate to ask "Do we want our 
decision-making to be based on the principles of subsidiarity (i.e. 
bottom-up)?"  There are also problems with regard to "optimal". A
benign dictator might make a more optimal decision than any
democratic group, but on the other hand you could say that that could
never be optimal because it was not democratic.  [As Professor Joab
(BBC Brains Trust - long ago!) might have said: "It all depends on
what you mean by 'optimal'!"]


By the standard of freedom, (real) subsidiarity seems only logical. If 
you're an individual and you do something that has no impact on the rest 
of the world, there would appear, by that measure, to be little point 
for the rest of the world to interfere. If we make this more general, 
then if you're a group and you do something that has no impact on anyone 
but your group, then there would be little point for those outside to 
interfere.


There are two problems with this, though. The first is that some may use 
logic similar to a potential argument for a temporary regime of enforced 
equality of minorities or sexes in an assembly: by interfering, if 
briefly, with the dynamics of the system, one can quickly redirect it 
towards a direction it would have found naturally, but that would have 
taken significantly longer without the interference.


The second is that, for some actions, it's not easy to say who will be 
influenced. All may be, because we are social animals. Another argument 
I've heard, regarding public services, is that if you do something that 
would usually only affect yourself, but that you need a public service 
because of this, then that's a concern of all. For instance, in a nation 
with public healthcare, some may argue that if you do risky things, that 
concerns society in general since you could end up getting hurt and need 
the public health service, which the entire society maintains.



In conclusion, it appears that optimization for any given programmatic 
sense of optimality can just as well expose edge or corner cases that 
one didn't think of, as give what was intended. Still, as long as one 
keeps the above in mind, I think subsidiarity is a good idea.




Re: [EM] the 'who' and the 'what'

2008-09-10 Thread Kristofer Munsterhjelm

Michael Allan wrote:

What about an alternative electoral system, in parallel?  If voters
really want to see change - if they really want to choose the 'who'
and the 'what' - a parallel system would give them an opportunity to
vote with their feet.  If nothing else, they might be curious to learn
how the results would differ (who would be Mayor, for example) if the
selection wasn't restricted to party candidates.



That's one way to do it. I think that if electoral reform is to work, 
the voters have to recognize that the new option (the better system) 
really is a better system, and also be interested in changing the system 
in the first place.


One way of showing that the new method works better is to work from the 
local level up. Another is, as you state, to have a parallel instance 
where voters can see that it's better. The parallel instance doesn't 
have to be completely identical, it could be as simple as MTV's use of 
Selectricity (Schulze) for its elections, although in that example, it 
may be harder for voters to identify that it's the voting method that 
makes for better results (since the internals are hidden).


If you take the parallel system strategy to its extreme, you'd get a 
parallel organization where (as an example), a group elects a double 
mayor and support him over the real mayor, essentially building a state 
inside the state. I don't think that's very likely to happen, though; as 
hard as it may be to alter the nation through voting, it's going to be even 
harder to make a duplicate state from nothing, and that duplicate state 
would still have to abide by the laws of the real state.




Re: [EM] language/framing quibble

2008-09-14 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Afternoon, Kristofer

re: This sounds a lot like what I've previously referred to as
'council democracy'.

I hadn't heard that term before or seen the proposal.  I wonder if the 
concepts can be merged, perhaps by an analytical critique of the processes.


I first mentioned it here: 
http://listas.apesol.org/pipermail/election-methods-electorama.com/2008-July/021966.html 
to which Abd replied here: 
http://listas.apesol.org/pipermail/election-methods-electorama.com/2008-July/021968.html 
. I said that I think some unions use this process: they have local 
delegates that form councils that elect regional delegates and so on.



re: The first problem of council democracy is that it magnifies
 opinion in a possibly chaotic manner.

This is, I suspect, a function of the size of the 'council'.  The larger 
it is, the less opportunity each member has to help form its view.


That is, unless we use proportional representation. If the council is of 
size 7, no opinion that holds less than 1/7 of the voters can be 
represented, so if the opinion is spread too thin, it'll be removed from 
the system; but if you have an extreme of a single layer with PR, 
elected nationally, then the number is much lower.


An aspect of this question that troubles me is the backward-looking 
nature of opinion.  Government is (or, at least, ought to be) concerned 
with the present and the future.  We should prize our representatives' 
ability to address contemporary concerns with all the resources at our 
command rather than apply pre-conceived solutions to new, and possibly 
unknown, circumstances.  In other words, opinion must be subject to 
intellect.


Yes, that's true. I'm using opinion mainly as a way to show that 
minority properties can be either attenuated or magnified, based simply 
on how the voters are distributed among the councils. This could apply 
to any preference that may be held by only a minority (or even by a 
majority, as the worst case scenario shows): it could be a preference 
for deliberative or intelligent representatives for that matter.



re: In the very worst case, an opinion held by (2/3)^8 ≈ 4% can
 be held by a majority of the last triad.

I lack the expertise to evaluate the math, but I don't understand the 
point for a different reason:  Is 'an opinion ... held by a majority of 
the last triad' not but one of a multitude of such opinions?  Does a 
person's value rest on a single opinion or on the mix of opinions that 
define the person?  Indeed, is their value not better determined by 
their ability to implement whatever mix of opinions we perceive them to 
have?


Again, I use opinions to make the argument simple. Consider it another 
way: each reduction of many triads to one triad has to, by some measure, 
aggregate minority opinion. In the worst case, only the majority counts 
(as this is majority-based and not a consensus mechanism), and the 
minority preference (opinion, share, whatever) gets shaved off. Since 
the reduction is exponential, even more gets shaved off at each 
instance, and these slices may in the end constitute a majority.
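The worst-case figure can be computed directly. Generalizing the (2/3)^8 calculation to other odd council sizes is my extrapolation of the argument, not something the original message claims:

```python
def worst_case_support(levels, council_size=3):
    """Smallest fraction of the base electorate that can end up holding
    a majority after `levels` rounds of majority-of-majorities, with
    odd councils of the given size (a bare majority at every step)."""
    bare_majority = (council_size // 2 + 1) / council_size  # 2/3 for triads
    return bare_majority ** levels

print(worst_case_support(8))  # (2/3)^8 = 0.039..., the ~4% in the text
```

With triads, each extra level multiplies the required support by only 2/3, which is the exponential shaving described above.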



re: ... but the point holds: because the comparisons are local,
 disproportionality can accumulate.

I'm not clear on this point.  By 'local', do you mean that the 
participants are from a distinct locality?  That is certainly true at 
the very lowest levels, but the distinction blurs as the levels advance. 
 I'm not sure what will be disproportionate.


Here's an example of size 3 of the effect I'm talking about. I hope my 
(and your) mail software won't mangle this too badly.


For the sake of simplicity, again, we'll consider a binary opinion: 
there's a question that has a yes or no answer. The councils are set up 
like this:


L1   YYN   YYN   NNN
      |     |     |
      Y     Y     N
      |     |     |
      +-----+-----+
            |
L2         YYN

Here there are four ayes that overrule the five nays, simply because 
they're better positioned. If you look at the second level, it even 
seems like the ayes have 2/3 of the public support, when that is clearly 
not the case.


In an ordinary council democracy, a conspiracy could stack the councils 
in this manner, but in your proposal, because of random selection, that 
would not be possible. Still, it shows a problem of the process by 
showing a true majority getting assigned a minority of the 
representatives and vice versa.


Weighted votes could ameliorate the case, but they wouldn't fix it 
completely, and they may be unwieldy. In the case above, the unanimous N 
delegate would have strength 3 while the Y delegates have strength 2 
each, giving the ayes 4/7 ≈ 57%. That's lower than the raw 2/3 ≈ 67%, 
but still too high.
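The stacked-council example and its weighted variant can be tallied in a few lines; the weighting scheme (each delegate carrying its own supporter count) follows the strength-2/strength-3 description above:

```python
def council_winner(votes):
    """Majority opinion of one council (1 or True = aye, 0 or False = nay)."""
    return 2 * sum(votes) > len(votes)

# The YYN YYN NNN arrangement from the diagram.
councils = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]

# Naive second level: one unweighted delegate per council.
delegates = [council_winner(c) for c in councils]
print(council_winner(delegates))  # True: four ayes overrule five nays

# Weighted variant: each delegate carries the number of voters behind it.
aye_weight = sum(sum(c) for c, d in zip(councils, delegates) if d)
nay_weight = sum(len(c) - sum(c) for c, d in zip(councils, delegates) if not d)
print(aye_weight / (aye_weight + nay_weight))  # 4/7, about 0.57
```

Even the weighted tally overstates the ayes, because the two nay votes sitting inside aye-majority councils are lost at the first level.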



re: One could reduce the first problem by having a larger group
 that elects more than one member.

The question of group size is worthy of considerable thought.  Rather 
than extend this message, I will post a message titled 'DELIBERATIVE 
GROUP SIZE and PERSUASION' so we can focus on 

Re: [EM] sortition/random legislature Was: Re: language/framing quibble

2008-09-14 Thread Kristofer Munsterhjelm

Raph Frank wrote:

Sorry, pressed reply instead of reply to all

On 9/11/08, Aaron Armitage [EMAIL PROTECTED] wrote:
  It doesn't follow from the fact we choose representatives for ourselves
   that we would lose nothing by being stripped of the means of political
   action. We would lose our citizenship, because citizenship means precisely
   having a share of the right to rule. Registering for a lottery doesn't
   count.


So, any form of randomness is not acceptable?  What about one of the
 the proposed random ballot rules, where if there is consensus, a
 specific candidate wins.  However, if that doesn't work, the winner is
 random.


There's probably a tradeoff here. A completely random legislature would 
have no direct link to the people, except by the people, as a mass, 
changing their opinions. An elected legislature is at the other end of 
the scale: the people can directly influence its composition by 
declining to vote for some candidates and supporting others.


The relative isolation from direct influence is both a random assembly's 
strength and a weakness. It's a strength because, if campaign men can 
influence voters in the wrong direction, then the assembly remains 
impervious to this attack. It's a weakness for the reasons Aaron gives, 
that it severely weakens the voter-representative link.


A random assembly also resists the attack where one corrupts candidates, 
simply because it's not clear who the candidates are going to be. I 
don't know if randomness, or more generally, a weak voter-representative 
link is required for this resistance. It might be, for a single given 
representative, but a method where voters elect groups and some subset 
of each group is taken could also be resistant to this, if it's not 
obvious beforehand which subset is taken.




Re: [EM] A computationally feasible method (algorithmic redistricting)

2008-09-14 Thread Kristofer Munsterhjelm

Juho wrote:
Traditional algorithm complexity research usually covers only finding 
the perfect/optimal result. I'm particularly interested in how the value 
of the result increases as a function of time. Even if it would take 100 
years to guarantee that one has found the best solution, five minutes 
might be enough to find a 99% good solution with 99% probability.


Decrypting ciphered text does not work this way (the results could still 
be worth 0 after 10 years with good probability). But solving e.g. 
CPO-STV may well behave more this way (probably one can find an 80% good 
solution in one minute). Good performance in value/time means that 
general optimization works (and the method can be considered feasible in 
practice despite being theoretically infeasible).


In computer science, there's something called a polynomial time 
approximation scheme (PTAS). If a problem has a PTAS, then for any fixed 
e (epsilon) it's possible to get within a factor (1+e) of optimal in 
time polynomial in the input size, though the running time need not be 
polynomial in 1/e. For instance, an approximation scheme that runs in 
time n^(1 + 1/e) is a PTAS, even though it blows up as e shrinks.


The probabilistic equivalent is the polynomial time randomized 
approximation scheme, which gives a result within a factor (1+e) of 
optimal in polynomial time /with high probability/.




Re: [EM] the 'who' and the 'what'

2008-09-14 Thread Kristofer Munsterhjelm

Michael Allan wrote:

Kristofer Munsterhjelm wrote:
If you take the parallel system strategy to its extreme, you'd get a 
parallel organization where (as an example), a group elects a double 
mayor and support him over the real mayor, essentially building a state 
inside the state. I don't think that's very likely to happen, though; as 
hard as it may be to alter the nation through voting, it's going to be even 
harder to make a duplicate state from nothing, and that duplicate state 
would still have to abide by the laws of the real state.


Or the leading mayoral candidates of the parallel system might
subsequently place themselves on the ballot of the City system.
People would expect more-or-less equivalent results.  They would
expect the City system to reflect and ratify their prior choices.
Then the two electoral systems would not be competitive (as I
implied).  They would be in synergy. The parallel system would be
feeding candidates into the City system.  Its function in that context
would be identical to that of the party electoral systems.  It would
occupy the same political niche.  So the competition would be there,
in that niche.

Similar arguments can be applied to a parallel legislature.  Popular
parallel legislation would naturally find its way onto the legislative
agenda of the state.  Unpopular state legislation would naturally be
voted down in the parallel legislature.  Party discipline might be
undermined.


That is interesting. Perhaps one could have, for example, a Condorcet 
party that pledges to run the Condorcet winner of an earlier internal 
election for president. Then various small parties could nominally join 
up with the Condorcet party, and that party would hold an election (a 
primary of sorts).


The effects predicted by game theory would be a problem, though. A 
losing party could think: "Hey, if I run independently, I may get a 
share, no matter how small, and that's better than the 0% chance I have 
if I stay under the Condorcet party umbrella."


There would also be a duplication of effort since the Condorcet party 
would have to manage its own (secret ballot) elections.




Re: [EM] sortition/random legislature Was: Re: language/framing quibble

2008-09-16 Thread Kristofer Munsterhjelm

Raph Frank wrote:
On Sun, Sep 14, 2008 at 8:56 AM, Kristofer Munsterhjelm 
[EMAIL PROTECTED] wrote:

A random assembly also resists the attack where one corrupts
candidates, simply because it's not clear who the candidates are
going to be.


There is also the effect that a person who wants to be a candidate
may need support to have any chance at all.


What do you mean?


I don't know if randomness, or more generally, a weak
voter-representative link is required for this resistance. It might
be, for a single given representative, but a method where voters
elect groups and some subset of each group is taken could also be
resistant to this, if it's not obvious beforehand which subset is
taken.


Interesting.

You could have a system with PR-STV where half of the elected 
candidates are excluded from consideration and then the election is 
held a second time.


One way of doing this would be to take a leaf from genetic algorithms. 
Using either roulette selection or tournament selection, pick until you 
have the council size.


Here's an example for roulette selection. The strategy would need a 
method that returns an aggregate scored (rated) ballot, where that 
aggregate is a proportional completion. Six candidates, three to be elected:


Score   NameCumulative score
0.9474: A   0.9474
0.6680: B   1.6154
0.3046: C   1.9200
0.2980: D   2.2180
0.1502: E   2.3682
0.0015: F   2.3697

We pick a random number on [0, 2.3697). We get 1.85603, so the first 
with cumulative score greater than 1.85603 is elected. That's C. Next, 
the random number is 2.04665. D is elected. Next, 0.738655. A is elected.


So A, C, and D are elected. The candidates with greater electoral 
support have greater chance of being chosen, but for any candidate, 
there's still a nonzero probability that some other will be selected 
instead.
By running the scores through a function, one could make the method 
regard the electoral results more (by amplifying the gaps in scores) or 
less (by evening them out).
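A rough sketch of that roulette step, using the six-candidate table above. Redrawing when a draw lands on an already-elected candidate is my assumption; the original walkthrough simply happened not to hit any duplicates:

```python
import random

def roulette_select(scores, seats, rng=None):
    """Pick `seats` distinct candidates; each draw lands on a candidate
    with probability proportional to its aggregate score.  Duplicates
    are redrawn (an assumption, not part of the original example)."""
    rng = rng or random.Random()
    total = sum(s for _, s in scores)
    chosen = []
    while len(chosen) < seats:
        r = rng.uniform(0, total)
        cumulative = 0.0
        for name, s in scores:
            cumulative += s
            if r < cumulative:
                if name not in chosen:
                    chosen.append(name)
                break
    return chosen

# The six-candidate example from the table, three to be elected.
scores = [("A", 0.9474), ("B", 0.6680), ("C", 0.3046),
          ("D", 0.2980), ("E", 0.1502), ("F", 0.0015)]
print(roulette_select(scores, 3, random.Random(2008)))
```

Applying a convex or concave function to the scores before building the wheel would amplify or flatten the gaps, as the paragraph above suggests.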



However, in general, there's a problem with such hybrids. The problem is 
that, for elections to work, the people must know the candidates to at 
least some extent. Because of this, candidates are going to have a 
history - they will be persistent, and some candidates will run multiple 
times. But this means that they can be corrupted, since the hypothetical 
conspiracy knows whom to target. If you elect groups instead (or parties), 
the conspiracy or lobbyists are going to target those who decide the 
group composition - the party management in the case of parties. The 
effect will lessen if there are many groups, or the method supports 
independents, but it won't disappear.


There seems to be an inescapable tradeoff here, at least unless one 
thinks outside the box, like with delegable proxy.




Re: [EM] language/framing quibble

2008-09-16 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Morning, Kristofer

Thanks for the link.  I'll check it as soon as I can.

re: If the council is of size 7, no opinion that holds less than
 1/7 of the voters can be represented, so if the opinion is
 spread too thin, it'll be removed from the system; but if
 you have an extreme of a single layer with PR, elected
 nationally, then the number is much lower.

If an opinion is not held by the majority of the electorate, what is the 
rationale (from the point of view of a democratic society) for not 
removing it from the system?


The rationale is that it enables compromise. The compromise on a 
national level might be different from the compromise on a local level, 
meaning that the entire spectrum should be preserved to the extent that 
it is possible. Otherwise, you can get effects similar to primaries 
where the primary electors elect those that are a compromise within 
their own ranks, and then the general election turns out to have 
candidates that are more extremely placed than the voters.


Holders of minority views who wish their view to gain ascendancy have an 
obligation to persuade the majority of their compatriots that their 
(currently minority) view is advantageous for all the people.  If they 
can not do so, they have no 'inherent right' to representation in a 
democratic government.


The problem of democracy is not to provide representation for minority 
views, it is to select representatives with the judgment and intellect 
to contemplate minority views in a rational fashion.  The only reason 
this seems improper is that we have been subject to partisan rule for so 
long it's difficult to see beyond partisanship and the contentious 
society it produces.  A wise electorate will realize their best 
interests are served by electing people with the wit and wisdom to 
listen to, consider, and, when appropriate, accept fresh points of view.


Yes, but to do so, they need the big picture.


re: ... each reduction of many triads to one triad has to, by some
 measure, aggregate minority opinion.

I'm not sure the word 'minority' is proper.  I think it would be better 
to say 'aggregate public opinion'.


That's right. What I meant is that even if you could magic up an 
election method, there will be some reduction of minority opinion. There 
simply isn't enough room in a 200-seat legislature (to use example 
numbers) to perfectly represent opinions that are held by less than a 
200th of the people; if the method tries, then some opinion held by a 
greater share will suffer. On this I think we agree, whether we look at 
things from a PR or majoritarian point of view (with exceptions 
regarding people who can have multiple opinions, or find the best way of 
combining and compromising).




re: In the worst case, only the majority counts ... and the minority
 preference ... gets shaved off.

Why is that the 'worst' case?  This seems to lead back to my original 
comment on this thread to the effect that there is less interest in 
democracy than in schemes to empower minorities.


The majority /of that council/. That need not be the majority of the 
people at large. If the real majority is thinly spread, it can get 
successively shaved off until nothing remains.



re: Since the reduction is exponential, even more gets shaved off at
 each instance, and these slices may in the end constitute a
 majority.

This assertion seems based on the assumption that because someone 
inclines toward a given view they are incapable of responding to any 
other view.  People are not like that.  Political views are a continuum. 
 They range from one side to the other and from mild to extreme.  The 
method we are discussing will reject extremes and advance people with a 
broader perspective.  The attempt to preserve the 'slices' overlooks the 
improvement in the quality of the people selected to advance and their 
ability to grasp and be responsive to the advocates of those 'slices'.


That does weaken the argument, particularly because we can't model the 
aggregating behavior of the councilmembers in any simple way. As long as 
councilmembers in councils with a majority for X will tend towards X 
(and a compromise is going to contain more X than non-X), then the 
effect would persist, but weakened.


But let's say that the members of a council (or triad) can change their 
opinions. Let's also say that the initial triads are randomized in the 
manner you say. Then it seems you'll face a variant of the sortition 
problem mentioned earlier: if a candidate says "Okay, I'll try to 
compromise", gets the votes of the rest of the triad, and then 
escalates, what's keeping the candidate from going back on his promise? 
Presumably you'd expect most people to be honest, but there's still an 
uncertainty, and that uncertainty appears at every level.



re: There's a question that has a yes or no answer. The councils are set
 up like this:

 L1  YYN YYN NNN
  | 

Re: [EM] the 'who' and the 'what'

2008-09-26 Thread Kristofer Munsterhjelm

Michael Allan wrote:

Kristofer Munsterhjelm wrote:
That is interesting. Perhaps one could have, for example, a Condorcet 
party that pledges to run the Condorcet winner of an earlier internal 
election for president. Then various small parties could nominally join up 
with the Condorcet party, and that party would hold an election (a primary 
of sorts).


The effects predicted by game theory would be a problem, though. A losing 
party could think: "hey, if I run independently, I may get a share, no 
matter how small, and that's better than the 0% chance I have if I stay 
under the Condorcet party umbrella."


Or the parallel electoral system (Condorcet party) might undertake a
hostile takeover of the other parties.  It would appeal to their
members and cherry-pick their candidates.  (But I'm uncertain how this
would play out in a PR context, unfamiliar to me.)  It might attract
candidates by the chance to be their own parties, or maybe just to
be independent of any party.  It might attract members (voters) by the
ease of shifting votes across party lines, opening up a wider field of
candidates to them.  (So it would be like a market fair, with
independent vendors.)


It seems this system would be more stable than I originally thought. 
Third parties could run as parts of the Condorcet party without running 
much of a risk, since they would otherwise get no votes at all. The 
defection danger surfaces when the third parties have become 
sufficiently large from using that parallel electoral system. Then a 
party that would win a plurality vote but who isn't a Condorcet winner 
has an incentive to defect.


Following that kind of reasoning, it would appear that conventional 
parties have very little to lose by running Condorcet primaries instead 
of Plurality primaries, more so if there's an open primary. (So why 
don't they?)


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Range-Approval hybrid

2008-09-28 Thread Kristofer Munsterhjelm

Chris Benham wrote:

I have an idea for an FBC-complying method that I think is clearly
better than the version of Range Voting (aka Average Rating or
Cardinal Ratings) defined and promoted by CRV.

  http://rangevoting.org/

I suggest that voters use multi-slot ratings ballots that have the bottom
slots (at least 2 and not more than half) clearly labelled as expressing
disapproval and all others as expressing approval.  The default
rating is the bottom-most.

Compute each candidate X's Approval score and also Approval
Opposition score (the approval score of the most approved candidate
on ballots that don't approve X).

All candidates whose approval score is exceeded by their approval
opposition (AO) score are disqualified.  Elect the undisqualified
candidate that is highest ordered by Average Rating.

I suggest many fewer slots than 99 and no 'no opinion' option, so I
think the resulting method is not more complex for voters.


One way of making it less complex would be to have a cardinal ratings 
(Range) ballot with both positive and negative integers. The voter rates 
every candidate, and those candidates that get below zero points are 
considered disapproved, while those that get above zero are considered 
approved. This idea doesn't specify where those rated at zero (or those 
not rated at all) would appear.


Normalization could be used if required, with either the voter 
specifying "absolutely worst" and "absolutely best" (setting the range), 
or by the lowest and highest rated candidates having those positions. So 
if a voter wants to say that he likes all the candidates, but some are 
better than others, he could vote all positive integers, whereas a 
McCain/Obama/Clinton voter could vote McCain less than zero and the 
other two greater than zero. With normalization, the contribution of


A: 1 pt.
B: -1 pt.

to the raw scores would be the same as

A: 3 pts.
B: 1 pt.

but would have a different effect regarding the approval component (only 
A approved in the first case, both approved in the second).
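A minimal sketch of this two-component reading of a signed-ratings ballot (the function names, and the choice of -3..3 as the normalized range, are my own illustration, not part of the proposal):

```python
def approval_component(ballot):
    """Candidates rated above zero count as approved."""
    return {cand for cand, rating in ballot.items() if rating > 0}

def normalized_scores(ballot, lo=-3, hi=3):
    """Linearly rescale so the worst-rated candidate sits at lo and the
    best-rated at hi (one of the two normalization options mentioned)."""
    mn, mx = min(ballot.values()), max(ballot.values())
    if mn == mx:
        return {cand: 0 for cand in ballot}
    scale = (hi - lo) / (mx - mn)
    return {cand: lo + (r - mn) * scale for cand, r in ballot.items()}

first = {"A": 1, "B": -1}
second = {"A": 3, "B": 1}

# Both ballots normalize to the same raw-score contribution...
assert normalized_scores(first) == normalized_scores(second)
# ...but the approval component differs: only A approved in the first
# case, both A and B in the second.
assert approval_component(first) == {"A"}
assert approval_component(second) == {"A", "B"}
```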




[EM] the 'who' and the 'what' - trying again, again

2008-09-29 Thread Kristofer Munsterhjelm

For some reason, I didn't receive Dave Ketchum's reply to my post about
the Condorcet party. So let's try this again, indeed.

Dave Ketchum wrote:

On Mon, 29 Sep 2008 00:05:28 +0200 Kristofer Munsterhjelm wrote:

Dave Ketchum wrote:


My goal is using Condorcet, but recognizing that everything costs
 money, so we need to be careful as to expenses.

Thus I see: Condorcet as the election method. But then see no
value in a Condorcet party. Also then see no value in
primaries, but know parties see value in such.



The idea of having a Condorcet party is to gradually transform
Plurality elections into Condorcet elections.


Disturbing existing elections by marrying in something from Condorcet
seems very destructive considering possible benefits, so how about: 
Run a phantom Condorcet election with current candidates before the 
existing voting.


Candidates can drop out if they choose: Third party candidates have
little to lose. Major party candidates risk static as to why they did
not dare.

Those who choose to, vote via internet.

Thus we have ballots to count and report on as a sort of poll.


If you're going to have a poll, you don't need the Plurality shell; 
that's true enough. But if you're a third party and you're seeing your 
vote share go close to zero, then uniting with other third parties under 
a Condorcet party could improve your chances, because at least the third 
parties aren't splitting the votes among themselves anymore.


For polling, I would advocate ordinary polling, because internet polls 
would be colored by the effect that those who have good internet 
equipment would affect the results in a disproportionate manner. So 
could foreigners or hacked computers, although in reality those probably 
wouldn't be much of a problem.


Perhaps internet voting biases could be fixed by having a "vote by 
party" adjustment like real polling organizations do. That is, if 53% of 
the people are Democratic, then all Democrat-first voters count for 53% 
of the voting power in the poll, and so on. But that faces another 
problem, because many of those "Yes, I like Democrats" replies (that 
were used to derive the 53%) may be a result of the strategic voting 
that Plurality encourages.



And no value in runoffs - Plurality needs runoffs because of the
way voters cannot express their thoughts - but Condorcet has no
similar problem.



Runoffs are not perfection even in Plurality - look at the recent
French election for which voters thought of rioting when neither
runoff contender was popular.


You're replying to yourself, but I'll agree with you here. Plurality 
plus runoff is not perfect, but it's much better than Plurality without 
runoff. To make a general observation, runoff weakens strategy, and 
Plurality is filled with strategy ("lesser of two evils"). Runoff doesn't 
eliminate the strategy, but then it can't, no matter what voting system 
it is paired with.



With Condorcet they offer little possible value - every voter could
rank A>B, A=B, or B>A at the same time as doing any other desired
ranking.


For public elections I think it's likely that candidates won't 
strategize enough to necessitate further hardening against strategy. Not 
everybody agrees, and I'm simply saying that I can see how someone would 
argue in the favor of having a runoff even with a Condorcet method.



Also, if there is no CW there are at least three candidates in a near
tie - want to put the N candidates in a runoff?


I don't know - is that the case for Plurality ties with Plurality+runoff?


Condorcet runoffs may have value if the people decide to play dirty
and always use strategy. Since the runoff must be honest (with only
two candidates, the optimal strategy is honesty), it hedges the
risk since the best of the two will always win.


How much strategy need concern us with Condorcet?  The plotters need
an accurate picture of their starting point.  The plotting is complex
because of the tournament counting.  Then they must advertise their
plot to their friends while keeping that a secret from their enemies.


I'm rather thinking of uncoordinated strategy, like Burial, here.



Re: [EM] Fw: Range-Approval hybrid

2008-09-30 Thread Kristofer Munsterhjelm

Chris Benham wrote:

Yes. I suggest that those not rated should be interpreted as
disapproved and bottom-most rated.  Those candidates rated zero
should be considered to be half-approved. Candidate X's approval
opposition to Y should be X's approval score (including of course the
half-approvals) plus half X's approval score (likewise) on ballots
that rate Y zero.  Y's Approval Opposition score refers to Y's
maximum approval opposition score from any X.


Here it seems you would have a choice analogous to wv versus margins in 
Condorcet. What you describe would be margins; wv would give no points 
to A nor B in the case of a tie.


Normalization could be used if required, with either the voter 
specifying absolutely worst and absolutely best (setting the

range), or by the lowest and highest rated candidate having those
positions. So if a voter wants to say that he likes all the
candidates, but some are better than others, he could vote all
positive integers, whereas a McCain/Obama/Clinton voter could vote
McCain less than zero and the other two greater than zero. With
normalization, the contribution of

A: 1 pt. B: -1 pt.

to the raw scores would be the same as

A: 3 pts. B: 1 pt.

but would have a different effect regarding the approval component
(only A approved in the first case, both approved in the second).



I don't think I'm that keen on normalization, but I don't really
object to 'automating' the approval cutoff, so that ballots are
interpreted as approving the candidates they rate above the mean of
the ratings they've given (and half-approving those exactly at that
mean).  I can imagine that others would object on various grounds,
and the US voting reform enthusiasts who like FBC-complying methods
like Range and Approval generally seem to prefer their voting methods
to have  'manual transmission'.


The advantage of having zero set the boundary between approved and 
disapproved, instead of the mean doing so, is that you could express a 
general favor (or dislike) of politicians. For instance, if you think 
only one person's mostly decent and the rest are all corrupt (but some 
are more corrupt than others), you could vote the favored candidate 
above zero and the others below zero, whereas above mean would include 
some of the corrupt candidates as well.


I can understand that some would prefer the ballot to have, to use your 
own words, a manual transmission, but I think the concept of an explicit 
approval cutoff would be confusing to most. With the boundary at 0, you 
can just say, implicitly: "give points to those you like, and take 
points away from those you don't like."


When Approval voting has better strategies than plain commonsense 
approval, honest voting will be suboptimal, but hopefully the voters are 
going to be mostly honest so that that's not much of a problem.




Re: [EM] Random and reproductible tie-breaks

2008-10-01 Thread Kristofer Munsterhjelm

Stéphane Rouillon wrote:

Hi,

for an anti-fraud purpose, the capacity to repeat the counting operation 
is a must.

Hence I recommend using a reproducible random procedure to break ties.
This allows the use of different computers to reproduce the counting
operation, while always obtaining the same result despite ties.


This is a somewhat late reply, but here's my suggestion:

For a ranked ballot method, use Random Voter Hierarchy, as by the MAM 
definition, to construct a tie-breaker ordering (ranked ballot). When 
you encounter a tie, use that ordering. If it's a multiwinner method, 
invalidate the tie-breaker after a candidate has been elected, and use 
roulette wheel selection based on reweighted values when constructing 
the random voter hierarchy.


To make the randomness reproducible, there are two methods. The first 
would be to use a strong cryptographic hash, say SHA-256. Each 
candidate can submit a string, and those strings are all sorted and 
concatenated, then fed through the hash. The result is used as a seed 
for a pseudorandom number generator. In case no candidate bothers to 
submit a string, the list of strings should start off with some 
variables, like the year of the election, or perhaps some easily 
verifiable data of higher entropy.
The second is simpler. After the ballots have been gathered, but before 
the election, have the candidates watch a lottery machine. Use its 
output, which is made public, as a seed for the RNG.
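The first, hash-based scheme could look something like this (a sketch; the use of Python's default Mersenne Twister as the seeded generator, and the election-year fallback string, are illustrative assumptions):

```python
import hashlib
import random

def seeded_rng(candidate_strings, fallback=("2008",)):
    """Build a reproducible RNG from strings submitted by candidates.
    The strings are sorted so the seed doesn't depend on submission
    order; if nobody submits anything, fall back to public data such
    as the election year."""
    strings = sorted(candidate_strings) or list(fallback)
    digest = hashlib.sha256("".join(strings).encode("utf-8")).digest()
    return random.Random(int.from_bytes(digest, "big"))

# Any auditor running the same procedure gets the same tie-break stream.
rng1 = seeded_rng(["alpha", "bravo", "charlie"])
rng2 = seeded_rng(["charlie", "alpha", "bravo"])  # order doesn't matter
assert [rng1.random() for _ in range(3)] == [rng2.random() for _ in range(3)]
```

The same function works for the lottery-machine variant: just pass the publicly drawn number as the single "candidate string".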


If you really want to overdo it, you could also translate the ranked 
ballots to integers, then feed all those integers through the hash. Sort 
candidates alphabetically. The problem with this way of doing it is that 
if there's even one miscount, then the hash will be different when the 
audit's done.



As an example of why Random Candidate can't be used even with cloneproof 
methods, consider this election. We'll use Schulze, which is cloneproof.


10: A > B > C
10: B > C > A
10: C > A > B

Classic three-way tie. Random candidate gives A, B, or C with 1/3 
probability (as is fair). Now, let's clone A.


10: A1 = A2 = A3 = A4 > B > C
10: B > C > A1 = A2 = A3 = A4
10: C > A1 = A2 = A3 = A4 > B

Schulze returns A1 = A2 = A3 = A4 = B = C. So the tiebreaker decides. 
With Random Candidate, the A group has probability 4/6, with B at 
probability 1/6, and C at probability 1/6. Thus, cloning paid off for A.
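The failure is easy to verify numerically (a small sketch; Random Candidate simply picks uniformly among the tied winners):

```python
from fractions import Fraction

def random_candidate_probs(tied_winners):
    """Random Candidate: each tied candidate wins with equal probability."""
    p = Fraction(1, len(tied_winners))
    return {cand: p for cand in tied_winners}

before = random_candidate_probs(["A", "B", "C"])
after = random_candidate_probs(["A1", "A2", "A3", "A4", "B", "C"])

assert before["A"] == Fraction(1, 3)
# After cloning, the A faction's total winning probability has doubled:
a_total = sum(after[c] for c in ("A1", "A2", "A3", "A4"))
assert a_total == Fraction(4, 6) == Fraction(2, 3)
assert after["B"] == Fraction(1, 6)
```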


There would be a slight problem here with Random Voter Hierarchy, since 
it'd have to go through all the ballots in order to find out that A1 = 
A2 = A3 = A4 on everybody's ballot; thus the process would be lengthy 
and difficult to audit correctly.




Re: [EM] Why We Shouldn't Count Votes with Machines

2008-10-05 Thread Kristofer Munsterhjelm

Kathy Dopp wrote:

On Sat, Oct 4, 2008 at 3:51 PM, Dave Ketchum [EMAIL PROTECTED] wrote:


In fact some computer scientists just recently mathematically PROVED
that it is impossible to even verify that the certified software is
actually running on a voting machine.

Tell us more, a bit more convincingly as to fact behind this opinion -
assuming proper defenses.


Here is the info. I have not read the proof yet myself:

In 'An Undetectable Computer Virus,' David Chess and Steven White show
that you can always create a vote-changing program (called a 'virus'
there) that no verification software can ever detect.


Without having read the paper, I suspect this is a reduction to the 
Halting problem. Of interest regarding my earlier idea of 
special-purpose machines is that most voting systems don't need full 
Turing capability to find out who the winner is, so one may be able to 
make a program (or chip) for counting votes that can be proven not to 
have modifications (subject to the assumptions of the surrounding, 
less-than-Turing, framework).



It seems to me that most of the persons on this list would rather have
votes fraudulently counted using some alternative voting scheme that
requires an unverifiable unauditable electronic voting system, than
accurately counted using the plurality election method.

Curious.


Say that the losses due to fraud are p. Also say that the losses due to 
using Plurality are q. Then, if there is no fraud at all under Plurality, 
and a lot of fraud under the better method, and p < q, then switching to 
an alternative voting scheme, even if that would lead to fraud, is an 
improvement. This is a quick and dirty argument (because surely there 
can be some fraud under Plurality, and no voting method would work if 
all the ballots have been subject to fraud, i.e. the entire input is 
garbage), but it should get the point across.




Re: [EM] Idea for a free web service for (relatively) secure online voting

2008-10-07 Thread Kristofer Munsterhjelm

Mike Frank wrote:
Hello, I was thinking of building a free public web service, perhaps 
operated by a charitable NPO,  that would allow organizations (including 
perhaps small governments) to operate online elections in a way that 
offers some sophisticated modern security features.


In addition to taking standard security precautions, the site would 
generate a certain form of electronic certificate, made available after 
the election to each registered voter, that is basically a concise, 
easily-verifiable, cryptographically-secure proof which assures that the 
voter's specific ballot information (or their lack of a ballot, if they 
did not submit one) was correctly figured into the official election 
results.  (The voter could verify their certificate using open-source 
software or online services which could be made available by any number 
of independent organizations.)


In such a system, if significant numbers of ballots were being 
electronically altered before tallying (as Diebold has been accused of), 
this kind of tampering could be easily detected by affected voters.  So 
it would be much harder to get away with, would be less likely to 
happen, and so the voters could hopefully have more confidence in the 
system as a whole.


How would this system work? I guess you could use blind signatures to 
submit the actual votes, but how would it ensure the voters that their 
votes are counted? I know of some systems to produce proofs for 
Plurality, but I'm not sure how they could be turned into proofs for, 
say, Schulze. If the system permits ranked or rated votes, you'll also 
have to deal with the fingerprint attack, where a vote-seller asks the 
voter to vote in a particular manner, using a ranking that with high 
probability will be unique.


Such a system wouldn't directly address suspicions that the voter rolls 
in a given election might have been padded with unreal voters; this 
would require verifying the real-world authenticity of voter identities 
through some process of voter registration, but that is a problem that 
could be handled separately offline (e.g. via registration in-person or 
by mail, like voter registration is often done now, and/or by publishing 
of voter rolls for independent verification).  For use in smaller 
organizations where the list of eligible voters is common knowledge 
(e.g. all organization members), padding of rolls would not be an issue 
anyway.


Other possible attacks from the outside could involve coercion ("vote my 
way while I watch") or bribery (same as above, but with a payment if you 
do what I say), and identity confusion (where the person's computer is 
zombified so that the ballot cast differs from what the voter intended). 
If you want to be sophisticated, you could have a vote retraction signal 
(a number or similar) which would nullify your vote if you send it 
before the election, and an external device to confirm the ballot just 
before you submit it (so that you can see it's what you actually wanted).


Of course, a vote retraction signal opens up the possibility for 
coercion or buying of said signal, and it'd also be difficult to 
reconcile the goals of both making it possible for a voter to verify 
that his vote was counted and making it possible for the voter to annul 
his vote. If the annulment makes the signature return "you didn't vote" 
or "your vote didn't count", then a coercer could attack the voter for 
having retracted his vote, whereas if it still makes the signature 
return "you did vote and your vote counted", then that might be used for 
fraud (mass retraction after the polls have officially closed).


Incidentally, the cryptographic certificates attesting to the 
correctness of the ballot-tallying process might be easier to create for 
some election methods than for others - for example, plurality, range, 
and approval voting are all easy to handle, but with ranking-based 
methods it gets a little more complicated (because aggregated subsets of 
ballots couldn't be summarized with just a single number for each 
candidate).  It's still possible, but the certificates might get a lot 
larger.


Would the certificates differ for different Condorcet methods? How about 
IRV, which is very sensitive to changes in ballots?


If the certificates are unmanageable for IRV, that may still not be much 
of a problem, though, since (in my opinion) IRV is not a very good 
system. Others who like IRV may disagree.


But in any event, the site could still allow election organizers to 
select from any of a number of interesting voting methods, such as those 
being discussed on this list.


Anyway, I was wondering if the folks on this list think that such a site 
would be useful - or has it already been tried?  Perhaps I can improve 
in some way on what's been done.


I don't think it's been tried yet. I know of some sites that do election 
counting on demand, but none that have the sort of cryptography setup 
you're talking about.


As for that setup, I think 

Re: [EM] Who comes second in Ranked Pairs?

2008-10-14 Thread Kristofer Munsterhjelm

Scott Ritchie wrote:

I'm writing a ranked pairs counter as practice for learning python, and
I realized I don't know the answer to this question.

Suppose I want to know who comes in second in a ranked pairs election.
Is it:

1) Run ranked pairs algorithm on the ballots, find that candidate A
wins, then purge A from all the ballots and rerun the algorithm to find
a new winner and call him the second place candidate OR

2) Run ranked pairs algorithm on the ballots, lock in all pairs in their
order that don't create cycles, then look at who is second in the graph
(i.e., whoever beats all but A)


Or will these two always be the same?  It'd be nice if I could see an
example where that's not the case.


I think they're the same. I don't have proof of this, but I think it was 
given in an earlier post on this list.


In any event, your #2 answer is the right one. When you lock in 
victories, you'll either go through all 0.5*n*(n-1) possible victories, 
or enough that you have a complete ordering. In either case, you use 
that ordering as the final result.


For instance, if you have a situation with candidates A, B, and C, and

1. A > B
2. B > A
3. A > C
4. B > C
5. C > B

in that order, you lock A > B, A > C, B > C, which gives A > B > C. Thus 
B comes in second.
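The lock-in step can be sketched like this (a sketch, not a full Ranked Pairs implementation: it assumes the victories are already sorted by strength, and uses a simple reachability check for cycle detection):

```python
def lock_pairs(sorted_victories, candidates):
    """Lock in victories in order, skipping any that would create a cycle."""
    beats = {c: set() for c in candidates}  # beats[x] = candidates x beats

    def reachable(src, dst):
        """Is dst reachable from src through locked victories?"""
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(beats[node])
        return False

    for winner, loser in sorted_victories:
        if not reachable(loser, winner):  # locking would otherwise close a cycle
            beats[winner].add(loser)

    # Final ranking: sort by how many candidates each one beats transitively.
    return sorted(candidates,
                  key=lambda c: sum(reachable(c, d) for d in candidates if d != c),
                  reverse=True)

# The example from the text: A > B, B > A, A > C, B > C, C > B, in that order.
order = lock_pairs([("A", "B"), ("B", "A"), ("A", "C"), ("B", "C"), ("C", "B")],
                   ["A", "B", "C"])
assert order == ["A", "B", "C"]  # B comes in second
```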




Re: [EM] Range Condorcet (No idea who started this argument, sorry; I am Gregory Nisbet)

2008-10-15 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:

Reasons why Range is better and always will be.
I would like to end the truce.
 
I'll be generous to the Condorcet camp and assume they suggest something 
reasonable like RP, Schulze or River.
 
Property Related:

favorite betrayal, participation and consistency.
Implications:
1) It is always good to vote and it is always good to rate your favorite 
candidate 100. The only Condorcet method to satisfy favorite betrayal is 
an obscure variant of Minmax which I'll ignore because of its glaring 
flaws (clone dependence *cough*)


MMPO's greatest flaw isn't clone dependence but indefensible Plurality 
failure. Consider this case (by Kevin Venzke):

9999 A > B = C
   1 A = C > B
   1 B = C > A
9999 B > A = C

C wins.

Also, MMPO isn't technically a Condorcet method, since it doesn't pass 
Condorcet. Here's another example, also by Venzke:


30 B > C = A
19 A = B > C
51 A = C > B

The Condorcet Winner is C, but A wins in MMPO.

If you like Range, this may be to your advantage, since you could say 
that instead of there being only one Condorcet method that satisfies 
FBC, there are none at all, or if there is, that this method must be 
very obscure indeed.


2) How does it make sense to be able to divide a region into two 
constituencies each electing A if B is the actual winner? Condorcet 
methods are not additive, which calls into question the actual meaning 
of being elected by a Condorcet method.


I'd consider this problem similar to Simpson's paradox of the means, 
where one can have trends that go one way for the means of two separate 
groups, but where this trend reverses if the groups are aggregated. It's 
unintuitive, but doesn't invalidate the use of means in statistics.



answers to potential majority rule counterarguments:
1) Range voting isn't a majority method.
answer: any majority can impose their will if they choose to exercise it.
concession: it is true that Condorcet methods solve the Burr Dilemma 
fairly well because parties can simultaneously compete for majorities 
and swap second place votes. Range Voting can at best allow voters to 
differentiate between better and worse candidates by one point. So 
Range's ability to emulate this behavior is competitive.
 
I am not aware of another anti-range voting property one could claim 
that is applicable to cardinal methods.


This is really a question of whether a candidate loved by 49% and 
considered kinda okay by 51% should win when compared to a candidate 
hated by the 49% and considered slightly better than the first by the 
51%. A strict interpretation of the majority criterion says that the 
second candidate should win. The spirit of cardinal methods is that the 
first candidate should win, even though it's possible to make cardinal 
methods that pass strict Majority.


Another argument against Range as a cardinal method might be that it 
suffers from compression incentive (with complete knowledge, the best 
strategy is to, for each candidate, either maximize or minimize the 
rating given). Something like, say, a Condorcet method where rating A 
100 and B 20 gives A > B 80 points would not be as susceptible to this 
(though it would probably be vulnerable to other strategies).



Computational Complexity (time):
Range O(c*v)
RP O(c^2*v+c^3) # c^2*v = constructing matrix; c^3 = finding local maximum 
or generating implications c^2 many times.
 
Range Voting is more scalable.


I don't think this is much of a concern. With modern computers, voters 
will have trouble ranking all the candidates long before the computers 
that do the counting would exhaust CPU processing power, and that'll 
hold as long as the complexity is a reasonably sized polynomial.



Voter Experience:
 
Range Voting (based on the existence of Amazon product ratings, YouTube 
video ratings, hotornot.com, and movie star ratings). I cannot find a 
single instance of Condorcet methods besides elections in various open 
source communities. It doesn't qualify as mainstream.


http://en.oreilly.com/oscon2008/public/schedule/detail/3230 mentions 
that MTV uses Schulze internally. The French Wikipedia, as well as the 
Wikimedia Foundation in general, also uses Schulze. The Wikipedia 
article on the Schulze method also lists some other organizations that, 
while small, are not communities organized around open source.



Understandability:
 
Range Voting (I dare anyone to challenge me on this)
 
Bayesian Regret:
 
Range Voting (same comment)


Granted, though DSV methods based on Range do better (and may help with 
the compression incentive - I'm not sure, though). If they help 
sufficiently that one doesn't have to min-max in order to get the most 
voting power, it would keep Range from degrading to Approval and thus 
(absent other problems) fix the Nader-Gore-Bush problem (where Nader 
voters don't know whether they should approve Nader and Gore or just Nader).



Ballot expressiveness:
 
For elections with less than 100 candidates Range 

Re: [EM] Multiwinner Method Yardstick (Gregory Nisbet)

2008-10-16 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:

Proportional Approval Voting
http://www.nationmaster.com/encyclopedia/Proportional-approval-voting
Brief summary of this method:
there are O(c!) (candidates factorial) many pseudocandidates 
consisting of all the possible combinations of candidates.
Let's say we have a voter named Alice and a three person pseudocandidate 
composed of real candidates X, Y, and Z.

If Alice approves of one of them, the score for XYZ += 1;
if she approves of two of them, += (1 + 1/2);
if she approves of three/all, += (1 + 1/2 + 1/3).
 
This way Alice approving of X and Bob approving of X is worth 2 pts 
whereas Alice approving of X and Y and Bob approving of neither is only 
worth 1.5 pts. The procedure isn't iterative; hence the failure of RRV

http://rangevoting.org/RRV.html
to satisfy the multimember equivalent of the participation criterion is 
sidestepped. In other words, voting for a candidate cannot hurt you 
because PAV does not use an elect-candidate-then-punish-supporters 
iteration to achieve its result.
 
However great PAV may be, its O(c!*c*v) (candidates factorial * 
candidates * voters) time complexity is enough to make me think twice 
before seriously considering it.


Perhaps one could use branch-and-bound methods to wrangle this down to 
something more manageable (with high probability, or in the case of 
realistic ballots). One option, if that's impossible, is to reduce the 
ballots to a tree (to thwart fingerprint attacks), make the tree public, 
and then have anybody who wants to submit their proposed council. The 
council with the best score then wins. If there's a PTAS for this 
problem, that might serve as a default.


One could also have a Sainte-Laguë variant of this. In it, the score for 
getting one candidate would be 1, for two candidates 1 + 1/3, for three 
candidates 1 + 1/3 + 1/5, and so on.



Multiwinner Method Yardstick
 
PAV is the basis of the multiwinner analogue of Bayesian regret. Think 
of it this way.

PAV gives us a nice formula for dealing with range values.
Let's use the previous example of Alice and XYZ
Let's pretend Alice votes X = 99, Y = 12, Z = 35
 
with PAV, the formula is (1 + 1/2 + 1/3 + ... + 1/n) for the nth thing;
think of it as sorting the list for that candidate and THEN applying 
(1, 1/2, 1/3, ..., 1/n) to it.

in the previous example if Alice approved X and Z (1,0,1)
we sort the list
(1,1,0)
then multiply by the coefficients
(1*1,1*1/2,0*1/3)
and add
1.5
 
apply the same thing to the current example
 
99,12,35 == 99,35,12
 
and multiply...
 
99*1,35*1/2,12*1/3
 
and add...
 
120.5
 
there, the score for XYZ from Alice is 120.5
 
Thus the procedure for evaluating various multiwinner methods is simple:
 
create some fake voters (make their preferences between 0 and n, 
distributed however you like) 
I'd recommend NOT using negative numbers because I have no idea how they 
will interact with the sorting and tabulating procedure.
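Greg's scoring rule above fits in a few lines. The sketch below is my own illustration (function names are mine, not from the thread), with a flag for the Sainte-Laguë-style weights 1, 1/3, 1/5, ... suggested earlier:

```python
# Sketch of the PAV-style score: sort a voter's ratings of the council
# members in descending order, multiply elementwise by 1, 1/2, 1/3, ...
# (or 1, 1/3, 1/5, ... for the Sainte-Lague variant), and sum.
from fractions import Fraction

def council_score(ratings, sainte_lague=False):
    """One voter's contribution to a proposed council's score.

    ratings: the voter's ratings of the council members
    (1/0 for an approval ballot, or 0..99 range values)."""
    step = 2 if sainte_lague else 1
    weights = (Fraction(1, 1 + step * i) for i in range(len(ratings)))
    return sum(w * r for w, r in zip(weights, sorted(ratings, reverse=True)))

# Alice approving X and Z within council {X, Y, Z}:
print(council_score([1, 0, 1]))     # 1*1 + 1*(1/2) + 0*(1/3) = 3/2
# Alice's range ballot X=99, Y=12, Z=35:
print(council_score([99, 12, 35]))  # 99 + 35/2 + 12/3 = 120.5 (as 241/2)
```

The exact fractions avoid floating-point noise when councils are compared by total score.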


This works *if* PAV is the ultimate solution. That is, if what PAV 
produces is the best of the best, then your scores will give you an idea 
of how good a multiwinner method is, because you can calculate the PAV 
score given any proposed council.


But is that the case? It doesn't seem to readily follow. One may ask, 
even if we have a single universal standard independent of external 
information (such as candidates' opinions), is PAV the best possible 
standard? Why not, for instance, Sainte-Laguë PAV? Or, for that matter, 
Warren's Logarithmic Penalty Voting, defined in his paper #91? As long 
as it's true that approving an additional candidate can only improve 
your satisfaction, they should all pass your multiwinner equivalent of 
participation.


I'm in the process of programming something to actually test this. If 
anyone has a program for STV, CPO-STV, or some other multiwinner 
something or other, I would really appreciate it.
 
Even if it's just a description of a method, it's better than nothing. 
(no party-based or asset voting related methods please.)


I made a program to test multiwinner methods based on a metric one may 
call opinion fidelity. The simulation consists of many rounds, and for 
each round there are a certain number of binary opinions, voters, and 
candidates. Each voter (a candidate is also a voter) is assigned a 
random boolean vector of length equal to the number of opinions. Then 
the simulation counts how many have true (aye) for each opinion, and 
constructs rank ballots for each voter, where the voter ranks those who 
agree with him (lower Hamming distance on the opinion vector) ahead of 
those who don't. Then it sums the ayes for each opinion on the council 
produced by a multiwinner method, and the closer these are to the 
electorate's (by RMSE, Webster measure, Gini, or any other measure), the 
better the multiwinner system in question.
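A minimal sketch of this setup, as I understand the description above (all names are my own, and the multiwinner method itself is left as a plug-in; a random council serves as a baseline):

```python
# Opinion-fidelity sketch: random boolean opinion vectors, rank ballots
# built by Hamming distance, and an RMSE comparison of the electorate's
# and the council's per-issue aye proportions (proportions, so the two
# sides are on the same scale).
import math
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def make_profiles(n, issues, rng):
    return [[rng.random() < 0.5 for _ in range(issues)] for _ in range(n)]

def rank_ballot(voter, candidates):
    # candidates closer in opinion space are ranked higher
    return sorted(range(len(candidates)),
                  key=lambda i: hamming(voter, candidates[i]))

def fidelity_rmse(voters, candidates, council):
    issues = len(voters[0])
    pop = [sum(v[i] for v in voters) / len(voters) for i in range(issues)]
    cou = [sum(candidates[c][i] for c in council) / len(council)
           for i in range(issues)]
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(pop, cou)) / issues)

rng = random.Random(42)
voters = make_profiles(100, issues=8, rng=rng)
candidates = make_profiles(10, issues=8, rng=rng)
ballots = [rank_ballot(v, candidates) for v in voters]
# plug any multiwinner method in here; a random council as a baseline:
council = rng.sample(range(10), 3)
print("RMSE of a random 3-seat council:",
      round(fidelity_rmse(voters, candidates, council), 3))
```

Feeding `ballots` to an actual method (STV, etc.) and comparing its RMSE against the random baseline is then the experiment.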


This program has multiwinner method objects which may be of interest for 
your tests. It implements STV (Meek or ordinary), D'Hondt 

Re: [EM] FW: IRV Challenge - Press Announcement

2008-10-17 Thread Kristofer Munsterhjelm

Markus Schulze wrote:

Dear Jonathan Lundell,

I wrote (7 Oct 2008):


Well, the second paper is more general. Here they use
Arrow's Theorem to argue why monotonicity has to be
sacrificed.


You wrote (7 Oct 2008):


Or at least that something has to be sacrificed. Do
you see that as a problem?


Well, monotonicity is actually not needed in Arrow's
Theorem. Therefore, Arrow's Theorem is frequently
stated as saying that no single-winner election
method can satisfy (1) universal admissibility,
(2) Pareto, (3) nondictatorship, and (4) independence
from irrelevant alternatives.

Therefore, using Arrow's Theorem to argue that
monotonicity should be sacrificed to get
compatibility with the other criteria seems
to be odd.


If you want to be generous, you could read the argument as: all methods 
fail one of Arrow's criteria; monotonicity failure is a result of this, 
and if a method doesn't fail monotonicity, it'll fail something else. 
That's still odd, though, because you can turn the argument around: if 
you think Arrow failure makes all methods equal, there's no disadvantage 
to using Condorcet; and if you think some criteria are more important 
than others, then there's an advantage to using Condorcet. Either way, 
there's no disadvantage to using Condorcet.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] IRV vs Condorcet vs Range/Score

2008-10-17 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

I suggest a two-step resolution:
 Agree to a truce between Condorcet and Range, while they dispose of 
IRV as being less capable than Condorcet.

 Then go back to the war between Condorcet and Range.


I think the problem, or at least a part of it, is that if we (the 
election-methods members) were to advocate a method, to be effective, it 
would have to be the same method. Otherwise, we would split the vote, 
as it were, against the status quo. Therefore, both Condorcet and Range 
groups would prefer their own method to win.


If that's true, then one way of uniting without running into that would 
be to show how IRV is bad, rather than how Condorcet or Range is better. 
If there's to be unity (or a truce) in that respect, those examples 
would focus on the properties where both Range and Condorcet, or for 
that matter, most methods, are better than IRV, such as in being 
monotonic, reversal symmetric, etc.


An expected response is that these properties don't matter because 
failures happen so rarely. To reply to that, I can think of two 
strategies. The first would be to count failures in simulations close to 
how voters would be expected to act, perhaps with a reasoning of "we 
don't know what strategy would be like, but the results would be worse 
than for honesty, so these provide a lower bound". The second would be 
to point to real uses, like Australia's two-party domination with IRV, 
or Abd's argument that TTR states which switched to IRV have results 
much more consistent with Plurality than what used to be the case.
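The counting-failures idea can be illustrated with a toy experiment: random three-candidate impartial-culture elections under IRV, checking how often raising the winner one position on some ballot changes the outcome. This is my own sketch, not code from the thread; ties are broken arbitrarily, which may inflate the count slightly.

```python
# Count mono-raise failures of IRV in random 3-candidate elections.
import itertools
import random
from collections import Counter

CANDS = "ABC"

def irv_winner(ballots):
    # IRV over a fixed candidate set; ties broken arbitrarily (a sketch).
    remaining = set(CANDS)
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots) or len(remaining) == 2:
            return leader
        remaining.remove(min(remaining, key=lambda c: tally[c]))

def raise_winner(ballot, w):
    # Move w up one position on the ballot; None if already on top.
    i = ballot.index(w)
    if i == 0:
        return None
    b = list(ballot)
    b[i - 1], b[i] = b[i], b[i - 1]
    return tuple(b)

rng = random.Random(1)
orders = list(itertools.permutations(CANDS))
TRIALS, failures = 2000, 0
for _ in range(TRIALS):
    ballots = [rng.choice(orders) for _ in range(25)]
    w = irv_winner(ballots)
    for i, b in enumerate(ballots):
        raised = raise_winner(b, w)
        if raised and irv_winner(ballots[:i] + [raised] + ballots[i + 1:]) != w:
            failures += 1
            break
print(f"{failures} of {TRIALS} random elections had a mono-raise failure")
```

Swapping in a spatial or correlated voter model instead of impartial culture would give the "close to how voters would be expected to act" version.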


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Range Condorcet (No idea who started this argument, sorry; I am Gregory Nisbet)

2008-10-17 Thread Kristofer Munsterhjelm

Jobst Heitzig wrote:

Dear Kristofer,

you wrote:
This is really a question of whether a candidate loved by 49% and 
considered kinda okay by 51% should win when compared to a candidate 
hated by the 49% and considered slightly better than the first by the 
51%. A strict interpretation of the majority criterion says that the 
second candidate should win. The spirit of cardinal methods is that 
the first candidate should win, even though it's possible to make 
cardinal methods that pass strict Majority.


How does this spirit help when the result will still be the 2nd 
instead of the 1st candidate, because the method is majoritarian despite 
all its cardinal flavour?


Again looking at my 55/45-example shows clearly that compromise 
candidates are not helped by voters' ability to express cardinal 
preferences but rather by methods which require also majority factions 
to cooperate with minorities in their own best interest, as is the case 
with D2MAC and FAWRB.


Would you bother to answer me on this?


Sorry about that. Because I've been away for some time, I've got a long 
backlog of posts, and I'm working my way through them.


Let's look at your example.

55: A 100 > C 80 > B 0
45: B 100 > C 80 > A 0

Range scores are 5500 for A, 4500 for B, and 8000 for C. So C wins. For 
Condorcet, A wins because he's the CW. So Condorcet is strictly 
majoritarian here, while Range is not.


You may say that, okay, the A voters will know this and so strategize:

55: A 100 > C 1 > B 0
45: B 100 > C 80 > A 0

In which case A wins. This, I think, is what Greg means when he says 
that a majority can exercise its power if it knows that it is, indeed, 
a majority.


As far as I understand, the methods you refer to aim to make this sort 
of strategy counterproductive.


Because Range isn't majoritarian by default, it doesn't elect A in your 
honest-voters scenario. I would say that from this, it's less 
majoritarian, because majorities don't always know that they are 
majorities. However, it's still more majoritarian than your random 
methods, because in the case that the majority does coordinate, it can 
push through its wishes.


To answer your question: the spirit helps because majorities are not 
always of one block, or the same. You have shown that it's possible to 
be less majoritarian than Range, though.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] multiwinner election methods

2008-10-19 Thread Kristofer Munsterhjelm

Warren Smith wrote:

1. the right way to compare election methods is Bayesian Regret
(BR). http://rangevoting.org/BayRegDum.html

For a long time I thought this was only applicable for single-winner 
voting methods. However, I eventually saw how to do it for
multiwinner methods also: 
http://groups.yahoo.com/group/RangeVoting/message/7706


it would be a substantial computer programming project to try to do 
this, and so far, nobody has undertaken that project. But I recommend

it!!  If Gregory Nesbit is looking for a project to undertake for,
e.g. Intel Science Talent Search, he could do it :)

In the absence of BR, one is reduced to comparing voting methods
using properties. I also recommend that, but for multiwinner voting
methods this too is in its infancy. A paper attempting to compare
multiwinner voting methods (using properties) by me is here 
http://www.math.temple.edu/~wds/homepage/works.html (paper #91). 
However this paper is out of date and not fully satisfactory...


2. About RRV (reweighted range voting) 
http://rangevoting.org/RRV.html recent developments are these: Steven

J. Brams found an example (in email to me) in which RRV violates
favorite betrayal. That is, there are elections in which foolishly
voting your true favorite top, causes you to get a worse election
result.

Warren Schudy found a beautiful theorem that EVERY multiwinner 
election method in which the ballots are approval-style or

range-style, must either 1. fail to be proportional 2. fail to be
invariant to reinforcement (IR).

IR means that if a ballot is altered to increase score for X, that 
should not stop X winning; similarly if decrease score for X, that

should not stop X losing.


Does that include multiwinner methods that are based on ranked ballots?
I'm not sure, because on one hand, you only specify approval and range
style, but on the other, any range ballot can be reduced to a ranked
ballot with some additional information (how much better a certain
choice is than another), and so one could construct a rated pseudo-ballot
by running a ranked ballot through a weighted positional system.

I guess the theorem applies neither to asset (unless candidates pledge 
to transfer support in a certain way) nor to closed list PR, since both 
use single-vote type ballots.


4. systems based on every subset of the candidates is a 
pseudocandidate are just nonstarters because there are far too many

pseudocandidates.


Some of these might work if they have a single local optimum, so that a 
hill-climbing algorithm can find that optimum. But then it could be 
restated by including the hill-climbing algorithm into the method, and 
so might be exempt...


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Fixing Range Voting

2008-10-19 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:
Instant Range-off Voting is an interesting idea. I thought about it once 
a while ago too. I didn't renormalize the ballots though, I just set the 
co-highest to 100 and the co-lowest to 0 for each ballot as a sanitation 
measure. I eventually abandoned it due to nonmonotonicity, but I think 
the discussion is a valid one.
 
There are some problems with Range Voting, and perhaps tweaking it or 
adding some new features will fix them, perhaps not.
 
Most of the problems seem to involve voters being coerced into making 
extreme ballots for fear of being outcompeted by strategic rivals. 
Assuming people will be honest out of charity is naive. Some of them 
will, perhaps many of them will, but unscrupulous individuals could 
manipulate an election if there were enough of them. So, in the spirit 
of idiotproofing voting, let's discuss Range Voting spinoffs.
 
so far there is:
 
IRNR (Instant Runoff Normalized Ratings)
 
Cardinal Condorcet http://fc.antioch.edu/~james_green-armytage/cwp13.htm 
 
Various semi-proposed tweakings of Range Voting to include an "elect 
majority winner first" or "elect CW first" clause.
 
All of these have the same goal and that goal is very simple. To either 
encourage honest ratings or force more explicit ratings.


You could also turn approval methods into Range methods. For example, 
the Range version of UncAAO (Uncovered Approval, Approval Opposition) 
would treat Range votes as fractional approval votes. However, for 
UncAAO you'd still need an approval cutoff (I'd rather not have any 
candidates below this value), which would make the ballot complex. 
Also, the methods would have to use the rating information for some 
other purpose, not just as fractional approval votes (otherwise, 
approval strategy would still work).


That being said, I think the most promising area of development here is 
based around the concept of a conditional vote that came up a few 
threads ago. The idea here being that individual ballots should react 
to a particular candidate being kicked out of the hopeful group or 
something like that.


DSV systems would do something like that. You'd submit an honest ballot, 
and then the system would strategize maximally (not just for you, but 
for all others), first on the honest information, then on the previous 
round's strategic information, until the result settles. That would be a 
sort of automatic conditional ballot. The idea would be that the system 
or computer would be so good at strategizing on your behalf (for all 
voters), that it wouldn't pay off to try to manually use strategy.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Range Condorcet (No idea who started this argument, sorry; I am Gregory Nisbet)

2008-10-20 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:



On Wed, Oct 15, 2008 at 3:09 PM, Kristofer Munsterhjelm 
[EMAIL PROTECTED] wrote:


If you like Range, this may be to your advantage, since you could
say that instead of there being only one Condorcet method that
satisfies FBC, there are none at all, or if there is, that this
method must be very obscure indeed.

 
Before writing this, I knew there were about five versions of Minmax, 
all possessing different properties. I think there is one version that 
satisfies Condorcet winner but not Condorcet loser, and various other 
weird combinations of properties such as that. On the topic of whether 
there is a method that satisfies both Condorcet and FBC, 
http://osdir.com/ml/politics.election-methods/2002-11/msg00020.html claims 
that any majority method will violate FBC. 


Strong FBC. But that's already been answered. Even so, I don't think 
there's a method that satisfies both weak FBC and Condorcet. If there 
is, I'm unfamiliar with it; but the simulation results given at 
http://www.mail-archive.com/[EMAIL PROTECTED]/msg06443.html 
may show that Schulze, while technically failing FBC, does so rarely.



2) How does it make sense to be able to divide a region into two
constituencies each electing A if B is the actual winner?
Condorcet methods are not additive, this calls into question the
actual meaning of being elected by a Condorcet method.


I'd consider this problem similar to Simpson's paradox of the means,
where one can have trends that go one way for the means of two
separate groups, but where this trend reverses if the groups are
aggregated. It's unintuitive, but doesn't invalidate the use of
means in statistics. 

 
ONE CRUCIAL DIFFERENCE: Simpson's paradox relies on comparing fractions 
with different denominators to mask statistics. (I know it isn't 
necessarily fractions; it is just different results, compared against 
each other, that are weighted differently in the final average, but 
'denominator' is easier to say/explain than this sentence.)
 
Here is why that analogy fails:

We are not using different districts for each candidate.
 
Let's say I can divide country X two ways. Into Y1 and Y2 and into Z1 and Z2
 
The consistency criterion states that if I divide my country into Y1 and 
Y2, both of them are a victory for candidate A, and B wins overall, this 
IS a violation of the consistency criterion.
 
Now let's say that for candidate A I divide it into Y1 and Y2, and for 
candidate B I divide into Z1 and Z2. In addition to this division not 
making sense, let's say A did manage to win twice (however that would 
work). B wins. This DOES NOT constitute a violation of the consistency 
criterion. The regions you are dividing the country into have exactly 
the same weight for every single candidate.
 
Simpson's paradox is impossible if I am always comparing data of 
like weights.


It can still happen if the method in question weights raw data 
differently, depending on the circumstances. While thinking about this, 
I found an example for Range with Warren's no opinion feature. Consider 
this case:


There are two candidates: A and B, and also two districts.

Range-10 with the no-opinion option.

For the first district, there are 31 voters. All of them have an opinion 
of A, and only 18 of them about B. The magnitude (total) is 200 for A 
and 108 for B, so that you get mean 6.45 for A and mean 6 for B.


For the second district, there are 30 voters. All of them have an 
opinion of B, but only 13 of them about A. The magnitude for A is 124, 
and for B, 280, so the average for A is 9.54, and for B, 9.33 (both 
candidates are very well liked here).


Now, you may guess what happens next. If we sum this up, there are 44 
voters who had an opinion about A, and 48 about B. The total magnitudes 
are 200+124 = 324 for A, 108+280 = 388 for B. Thus B wins with an 
average score of 8.08 against A's 7.36.
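The district arithmetic above checks out; here is a short verification (my own sketch, treating each district as (number with an opinion, total magnitude) pairs):

```python
# Verify the two-district Range (with no-opinion) consistency failure:
# A's mean beats B's in each district, yet B's mean wins overall.
d1 = {"A": (31, 200), "B": (18, 108)}
d2 = {"A": (13, 124), "B": (30, 280)}

def mean(n, total):
    return total / n

for d in (d1, d2):
    assert mean(*d["A"]) > mean(*d["B"])   # A wins each district

# aggregate the two districts
nA, tA = d1["A"][0] + d2["A"][0], d1["A"][1] + d2["A"][1]
nB, tB = d1["B"][0] + d2["B"][0], d1["B"][1] + d2["B"][1]
print(round(mean(nA, tA), 2), round(mean(nB, tB), 2))  # prints: 7.36 8.08
```

The reversal is driven purely by the differing denominators (44 vs 48 opinions), exactly as in Simpson's paradox of the means.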


For Condorcet, I'll be more general and say that the reason is that when 
it's using a completion method, some preferences count more than others. 
Because the data is broken down from orderings to pairwise preferences, 
that means that some ballot may have an effect on many preferences (the 
direction of the beatpaths or whatever), while others have an effect on 
relatively fewer. The argument would be weakened if one could find a 
consistency failure example where all three ballot groups (two districts 
and sum) produce a CW.


That doesn't justify a Condorcet method, though. For that, I'll say 
that consistency is unnecessarily strict. The only methods that pass it 
are those that are summable with vectors of size equal to or less than 
the number of candidates; meaning Approval, Range (without no-opinion), 
and weighted positional methods.


Compression is a problem. A makeshift attempt to avoid it might cause 
more harm than good though. The fact of the matter is that Range at 
least allows voters to express

Re: [EM] Simulation of Duverger's Law

2008-10-20 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Fri, Oct 17, 2008 at 12:44 AM, Kevin Venzke [EMAIL PROTECTED] wrote:

I think what we need to see, are IRV elections to a chamber that is
not parliamentary (i.e. there is no particular prize for one party getting
the most seats). Perhaps in that situation IRV could support more than
two parties.


In Ireland, the rule is that the Taoiseach (PM) needs to obtain the
support of a majority of the Dail before he is appointed.  This seems
pretty fair.  There is no specific incentive to obtain the most seats
(parties can always form a coalition later).

However, it looks like in (nearly?) all other parliamentary countries,
the rule is that the leader of the largest party is appointed.  This
eliminates the need to form a coalition.


Isn't it the case for most parliamentary countries that the government 
needs support of the legislature to stay in power? At least here (in 
Norway), that's the case, which means that a coalition of parties decide 
to populate the government (determine who gets to be in the executive), 
and the PM is, like any other position, determined by said coalition, 
subject to the general approval of the government by the legislature.


Thus, while in majority governments, the largest party gets the PM (and 
that may happen for minority governments, too), for some minority 
governments, a smaller party of the coalition gets the PM (in exchange 
for it staying in the coalition).


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Multiwinner Methods Request

2008-10-20 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:

So far the following multiwinner methods have been suggested or I know of:

CPOSTV

Schulze STV

QBS (this is what I meant by Proportional Borda, sorry!)
http://en.wikipedia.org/wiki/Quota_Borda_system

QanythingS (look at the description of QBS, it effectively allows a
black box single winner method to be used in place of Borda Count).

Naive Adaptations -- you can do this with just about anything. Not
proportional at all but enh.

STV various ballot transfer rules

IRNRSTV (**)

BordaSTV (**)

Sainte-Lague (and the 1.4 divisor variant)

Largest Remainder (various quotas)

D'hondt

All party-flavored methods can be made with open/closed/free lists too
so its great.

SNTV

Limited vote

Block vote

Preferential Block

RRV

PAV

PRV

Cumulative vote

Districted crap

MMP (combination of districted crap and some party alloc.)

Asset Voting (*)

Forest Simmons' methods:
http://www.rangevoting.org/cgi-bin/DoPassword.cgi (I'll include a copy
of the page at the bottom if you don't feel like joining CRV)


I already suggested QPQ, but I think I forgot to mention some others my 
simulation program (or a previous vote-counting program) includes.


D'Hondt without lists:
This is a multiwinner method that can be paired with any Condorcet 
method. First, elect the winner of that method. Second, redo the 
Condorcet matrix, so that all preferences below the winner are 
downweighted by f(1). E.g., if A wins and C > B > A > D > E, then D > E 
has half the strength of C > E. Run the method again. Remove the winner 
from the output social ordering and add whoever places first to the list 
of winners.
Next round, downweight the preferences below one or more winners by 
f(x), where x is how many winners the higher-ranked candidate of the 
pair is below. E.g., if A and E are winners and we encounter a ballot of 
the type B > A > D > E > F > C, then

B > E has strength f(0)
D > F has strength f(0) * f(1)
F > C has strength f(0) * f(1) * f(2).
Continue in this manner until you have all the winners.

For D'Hondt, f(0) = 1, f(1) = 1/2, f(2) = 1/3, etc. Sainte-Laguë is 
probably better. You could also use additive weighting.
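The downweighting step can be sketched as follows (my own code, not from the thread; `f` defaults to the D'Hondt weights 1, 1/2, 1/3, ..., and, following the worked example, the weight of a pair is the product f(0)*f(1)*...*f(x)):

```python
# Build one ballot's weighted contribution to the Condorcet matrix,
# given the set of already-elected winners.
from fractions import Fraction
from itertools import combinations

def weighted_pairs(ballot, winners, f=lambda x: Fraction(1, x + 1)):
    contrib = {}
    for i, j in combinations(range(len(ballot)), 2):
        a, b = ballot[i], ballot[j]   # a is ranked above b on this ballot
        # x = number of winners ranked strictly above a
        x = sum(1 for c in ballot[:i] if c in winners)
        w = Fraction(1)
        for k in range(x + 1):        # product f(0) * f(1) * ... * f(x)
            w *= f(k)
        contrib[(a, b)] = w
    return contrib

# The example ballot B > A > D > E > F > C with winners A and E:
c = weighted_pairs(("B", "A", "D", "E", "F", "C"), winners={"A", "E"})
print(c[("B", "E")], c[("D", "F")], c[("F", "C")])  # prints: 1 1/2 1/6
```

Summing these contributions over all ballots gives the reweighted matrix for the next round; passing `f=lambda x: Fraction(1, 2 * x + 1)` would give the Sainte-Laguë weights instead.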


CFPRM: See 
http://listas.apesol.org/pipermail/election-methods-electorama.com/2002-November/008855.html



===

I do need some single winner methods as well to test for QanythingS,
districted crap, and naive crap. I'm not suggesting all [insert large
number] that we have ever discussed. FPTP and Range make the list.
Schulze too. Any other suggestions? (I'd like to limit it to about ten
if that's OK).


A trick here is to make envelopes which transform one method into 
another. For instance, Eliminate-* (combine with FPTP to get IRV, or 
with Borda to get Baldwin), and Average-Eliminate-* (combine with FPTP 
to get Carey's Q method).


But if you want to keep the number down, add one simple Condorcet 
method to see if the complexity matters. Say, minimax. Also, Borda (or 
some other simple weighted positional method). If you'll support 
approval cutoff ballot formats, you could have one or two of those: 
UncAAO and MDDA, perhaps Condorcet//Approval.




===



Puzzle #15 (open – multiwinner EP & PR voting systems):

[snip]
That's interesting, and not quite what I thought the method was like 
beforehand. It makes sense that the array is limited by the number of 
candidates, since ultimately, opinion space can't be rendered any more 
accurately.


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Idea for a free web service for (relatively) secure online voting

2008-10-21 Thread Kristofer Munsterhjelm

Paul Kislanko wrote:

There are several ways to make ballots-counted public record without
compromising the anonymity of ballots-cast. The trick is to assign a unique
key to each POTENTIAL ballot-cast, and expose said key only to the voter who
casts an actual  ballot. 


The collecting authority publishes the list of keys that are associated with
ballots cast, and the counting authorities for the different items on the
ballot (different for local, state, federal, etc. items on the ballot)
publish the ballot keys COUNTED for each item for which they are
responsible.

The voter, who's the only person who knows the key associated with her
ballot, can verify that her ballot was collected and counted by comparing
her ballot-ID with those listed. Her identity is never known to anyone, but
if she finds her ballot-ID in the collected list but not in any "counted
the way I voted" list, she can present the conflict to an alternate counting
authority who can challenge the count and go back to the collecting
authority to retrieve all ballots and re-count them.


I think we'd have to figure out what the system is supposed to protect 
against. There has been some confusion: Mike said that his system would 
let the voters know that their ballots have been counted, upon which I 
said that this may not be enough, if it would also enable vote-buying 
and coercion attacks.


Does your method only solve Mike's desiderata, or mine as well? As far 
as I can see, your method would be vulnerable to vote-buying/coercion 
because the buyer would demand the seller's ID. The seller might give 
the wrong ID, but then he doesn't get paid (after the election, of 
course). This is more a vulnerability towards coercion, since a 
vote-buyer might want to be paid immediately, but in the case of 
coercion, the mafia could beat up the voter later (or the boss could 
fire the voter, or whatever).


Considering it in greater detail, there are three classes of vote-buying 
or coercion attacks:


Passive immediate - The voter does something, and produces proof that 
that's been done.
Passive delayed - The voter does something, and produces part of a token 
that confirms, after the election, that he voted for the right candidate(s).
Active - The adversary watches the voter the entire time, or the 
adversary can demand pictures from the polling booth. The former regards 
vote-at-home, the latter voting with cameras/etc.


One possible way of making your system safe against passive delayed 
attacks would be to augment the hash. That is, you vote A > B > C, your 
ID is 13, and the hash is 24. When you leave, they give you a random 
number (say 100) and the sum of the two (124). If the vote-buyers wanted 
C > B > A with hash 23, you just tell them your random number was 101. 
This is a bit impractical, though, since you'd have to remember both 
your random number and hash, and those would be significantly larger, 
and you would also have to be able to compute, from the voting booth, 
the hash of any ordering, so you could find the difference to trick the 
vote-buyers.
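Numerically the trick is simple additive blinding; here is a toy sketch (the "hash" values are the placeholders from the example above, not a real cryptographic hash, and the function name is mine):

```python
# The voter keeps only receipt = true_random + true_hash, so any claimed
# ordering can be "proved" by back-computing a plausible random number.
def fake_random(receipt_sum, claimed_hash):
    return receipt_sum - claimed_hash

true_hash = 24              # hash of the real ballot A > B > C
true_random = 100
receipt = true_random + true_hash   # 124, what the voter remembers

# A coercer demands C > B > A, whose hash is 23:
print(fake_random(receipt, 23))     # prints: 101, indistinguishable from real
```

The practical burden noted above is visible here: the voter must be able to compute the hash of the demanded ordering in order to derive the fake random number.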


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Maintenance Elections

2008-10-21 Thread Kristofer Munsterhjelm

Raph Frank wrote:

Another option is to use the original ballots.  In Australia, for
their PR-STV seats, the ballots are reexamined after a vacancy and the
results calculated a second time.  However, no candidate who is still
sitting in the parliament can be eliminated (i.e. you can't lose your
seat because someone else resigns).   This has some potential problems
in the maths, but it should ensure that a candidate similar to the
outgoing member is elected, while allowing the voters' choice to
determine the replacement.

I think that is a good idea, and it encourages a party to run extra
candidates so that they have 'spares' to fill vacancies.  This can
help reduce the ability of parties to perform vote management.


Schulze's STV proposal uses a proportional completion for this purpose. 
As far as I understand, the proportional completion is an extension of 
the PR result, for more seats than really exist. If a party member 
quits/dies/etc, he's replaced by the highest-ranked unelected party 
member on that proportional completion ordering. I'm not sure how this 
works with independents; perhaps they should just appoint a replacement 
ahead of time (that is, as a precaution, like with your VP or EU 
Parliament examples). The risk may be too low for it to be worth the 
bother, in which case that seat could simply be empty.


For list PR, it would be even simpler. The next candidate on the list 
gets the seat. I don't think this should be used if the representative 
decides to vote against the party, but just if he leaves, since party 
list PR grants enough power to the parties as it is.



Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] NPV vs Condorcet

2008-10-21 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

On Mon, 20 Oct 2008 19:51:55 -0700 Bob Richard wrote:
  Some states may not be up to Condorcet instantly.  Let them stay with
  FPTP until they are ready to move up.  Just as a Condorcet voter can
  choose to rank only a single candidate, for a state full of such the
  counters can translate FPTP results into an N*N array.

What would enforcing the truncation of rankings (to a single ranking) 
for part of the electorate -- but not the rest -- do to the formal 
(social choice theoretic) properties of any given Condorcet method? 
Would the effect be the same for all Condorcet-compliant voting methods?


It is not a truncation.  It is interpreting FPTP ballots as if used by 
Condorcet voters.  Should result in pressure on all states to conform ASAP.


I am ONLY considering FPTP and Condorcet.  The exact Condorcet method 
could be stated in the amendment.  Note that this is only a single 
national election, though there would be extreme pressure on other 
government uses of Condorcet to conform.


If you're considering only FPTP and Condorcet, synthesize a Condorcet 
matrix out of the FPTP ranking. That'll fix the consistency problems 
with Condorcet, since if the other state's already Condorcet, you'll be 
adding a real Condorcet matrix and not just a ranking.


On the other hand, perhaps the state will use arguments similar to those 
in favor of winner-takes-all and say "if our method says A > B > C, then 
we have to maximize the chances of A winning, and failing that, that B 
wins". I'm not sure whether the (hypothetical so far) agreement should 
then demand Condorcet matrices, or if it should let the states choose 
whether to use rankings instead.


Range might be more difficult, since one can transform a rating into a 
ranking (and a ranking into a Condorcet matrix), but not easily a 
Condorcet result to a rating, or a ranking to a rating. Some Condorcet 
methods exist that return aggregate rated ballot outputs (a rated 
scoring instead of a social rank ordering), but they're very complex; 
in an earlier post, I mentioned a continuous variant of Schulze that 
uses quadratic programming.


One solution to this might be to have states submit either a Condorcet 
matrix or a range vector (n entries if it's plain Range, 2n if it's with 
 Warren's no-opinion option). Then, at the end, all the Range vectors 
are added and the Range result is computed for this. That becomes one 
ordering, and a Condorcet matrix can be synthesized from it. That 
artificial Condorcet matrix is scaled by the voting power of the Range 
states and then added to the real Condorcet matrix, and the result is 
given based on that.
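A rough sketch of that aggregation (Python; the names are hypothetical, and synthesizing the matrix with a single "voter" per pairwise win - scaled afterwards - is my own simplification):

```python
def combine_range_and_condorcet(range_vectors, condorcet_matrices,
                                range_weight, candidates):
    # Sum the Range vectors, order candidates by total score, synthesize
    # a Condorcet matrix from that order (1 "voter" per pairwise win),
    # scale it by the Range states' combined voting power, and add the
    # real Condorcet matrices on top.
    n = len(candidates)
    totals = [sum(v[i] for v in range_vectors) for i in range(n)]
    order = sorted(range(n), key=lambda i: -totals[i])
    synth = [[0] * n for _ in range(n)]
    for a, i in enumerate(order):
        for j in order[a + 1:]:
            synth[i][j] = 1
    return [[range_weight * synth[i][j]
             + sum(M[i][j] for M in condorcet_matrices)
             for j in range(n)] for i in range(n)]
```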


Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] NPV vs Condorcet

2008-10-22 Thread Kristofer Munsterhjelm

Bob Richard wrote:
I'm obviously missing something really, really basic here. Can someone 
explain to me what it is?


  Take it from the FPTP count and recount it
  into the N*N array by Condorcet rules ...

I still have no idea what this means. Here's an example:

Plurality result:
  Able: 45
  Baker: 40
  Charlie: 15

Here's a (very naive) NxN matrix (fixed-width font required):

         Able  Baker  Charlie
         ----  -----  -------
Able      --     45     45
Baker     40     --     40
Charlie   15     15     --

But it's not a Condorcet count because we have, for example, no idea how 
many of the Able voters prefer Baker to Charlie and how many prefer 
Charlie to Baker. As a result, the pairs of cells above and below the 
diagonal don't add up to 100. I still don't see how we can recount it 
into the NxN matrix by Condorcet rules.


Someone please show me the NxN matrix that Dave Ketchum would use to 
combine these votes with the other votes that had been cast on ranked 
ballots.


If we consider the votes as bullet votes, then we can expand to:

45: Able > Baker = Charlie
40: Baker > Able = Charlie
15: Charlie > Able = Baker

which produces the matrix you gave above.

That's the "consider bullet voters" idea. The other one is to count the 
plurality vote locally, so you get:


100: Able > Baker > Charlie

which gives

   A   B   C
A  0 100 100
B  0   0 100
C  0   0   0

and which could be used for any voting system. I think the first idea is 
better, though.
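For concreteness, a small Python sketch (hypothetical name) of this count-locally interpretation:

```python
def matrix_from_state_ranking(total_votes, ranking, candidates):
    # "Count locally" interpretation: the state's entire vote total is
    # treated as a single strict ranking (the local plurality order).
    idx = {c: i for i, c in enumerate(candidates)}
    n = len(candidates)
    M = [[0] * n for _ in range(n)]
    for a, higher in enumerate(ranking):
        for lower in ranking[a + 1:]:
            M[idx[higher]][idx[lower]] = total_votes
    return M
```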




Re: [EM] Nondeterminism in Multiwinner Methods

2008-10-29 Thread Kristofer Munsterhjelm

Greg Nisbet wrote:

For the record, I am against nondeterminism in single winner methods,
but that is another ball of wax that I want to keep separate.

Anyway, the multiwinner methods can be divided into a few basic types:

1) slow (these take O(candidates!) time. They are non-iterative)
2) fast (these rely on iterations. Usually a kind of elect and punish
cycle (think RRV or STV).)
3) party-based
4) nondeterministic (this includes your collusion-based methods (Asset
Voting) and random ones (e.g. random ballot))
5) naive (without making any changes, use a single winner method)
6) plurality-based (CV, Block vote, Preferential Block etc...)

(1)s tend to become unwieldy.
(2)s suffer bizarre paradoxes
(3)s require parties
(4)s produce lower quality winners on average
(5)s do not produce proportional results...
(6)s are kinda unimpressive

Just a bit of multiwinner voting theory: I suspect it would be
relatively uncontroversial to declare (1) to be best if execution time
weren't an issue. However, it is. What do you do about it?

There are various shortcuts to help a reasonable solution be found
quickly. You could resort to iteration, randomness, parties, or give
up.

Of course, various elements of these can be combined. It would be
possible to have a party-based method with various other methods
inside of it.

Nondeterministic elements do seem to be useful in the case of
multiwinner methods.

It is unlikely that a nondeterministic solution would be perfect, of course.

However, I suspect that it can deliver at least some of the benefits
of group (1) without incurring factorial execution time.

Any thoughts on the matter?


Raph gave an explanation with multiple groups doing the optimization. 
Another option that would probably seem fair would be to find an initial 
council by using a type-2 method, then hill climb from there. The 
outcome may still have strange paradoxes (since the type-2 method may be 
nonmonotonic with the two outcomes being respective local minima), but 
it would be deterministic and may do better.
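As a sketch of that hill-climbing step (Python; the score function and all names are placeholders, not a full type-2 method):

```python
def hill_climb(initial, candidates, score):
    # Deterministic local search: start from the council a fast (type-2)
    # method produced, then keep swapping one member for one outsider
    # whenever the swap raises the score, until no swap helps.
    council = frozenset(initial)
    improved = True
    while improved:
        improved = False
        for out in council:
            for inn in set(candidates) - council:
                trial = (council - {out}) | {inn}
                if score(trial) > score(council):
                    council, improved = trial, True
                    break
            if improved:
                break
    return council
```

The result is a local optimum of the score, which for a nonmonotonic starting method may still differ from the global one.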


This would be a scale, where the nondeterministic pure randomness 
shortcut would be on one end, and this on the other. In the middle you 
would have something like genetic algorithm-based optimization, or 
simulated annealing.


If there's a PTAS for the problem in question, that could also be used. 
Of course, then one has to ask, a polynomial time approximation scheme 
of what? What variable does an election method approximate? That would 
have easier answers for PAV/LPV/etc methods, since those minimize or 
maximize some satisfaction number.
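For PAV the satisfaction number being maximized is easy to state; a minimal sketch (Python, assuming approval ballots given as lists of candidate names):

```python
def pav_score(council, ballots):
    # PAV satisfaction: a ballot approving k elected members contributes
    # the harmonic number 1 + 1/2 + ... + 1/k.
    return sum(sum(1.0 / (i + 1)
                   for i in range(len(set(ballot) & set(council))))
               for ballot in ballots)
```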


Also, one may ask if a strategyproof nondeterministic method exists for 
multiwinner elections (like Random Ballot for single-winner). That 
question may not be of much practical use, but it could give some 
ideas of how multiwinner methods should be constructed. A multiwinner 
analog of random candidate would be vulnerable to cloning, and I don't 
think random ballot (pick n ballots) would be proportional either.




Re: [EM] language/framing quibble

2008-10-30 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Evening, Kristofer

Before responding to your most recent letter, I'd like to revisit a 
topic mentioned in your letter of Fri, 26 Sep.  In discussing the way a 
group of three people might resolve a traffic question involving three 
alternatives, each championed by a member of the group, you mentioned 
the possibility of a fourth, unrepresented, alternative.  I found your 
suggestion stimulating.


It stimulated more than I expected because, in reflecting on it, I 
recalled an aspect of human relations that influenced adoption of the 
triad concept in the first place ... the tendency of small groups of 
problem solvers to experience intuitive leaps.


In the hypothetical case we're discussing, the goal of the group is to 
solve the problem.  It is not uncommon for such efforts to produce 
unanticipated results.  Indeed, some enterprises seek such results with 
'brainstorming' sessions.  The chances of such mental leaps are severely 
restricted (if even possible) when the decision-making group is 
ideologically bound.  The mind is a wonderful thing.  We mustn't chain it.


That is true, and after reading, I think I've given the wrong 
impression. What I want is not so much to reproduce partisanship 
accurately as to reproduce the entire range of ideas accurately. This 
is, I think, the real idea of PR, at least as I see it: that the groups 
are accurate not just by party, but by idea distribution. In the context 
of brainstorming, such a group distributed in a good way would have more 
points of view to draw upon; in your example, they would know about the 
fourth option, so they could take that into account when deciding if, 
perhaps, there is a solution that goes beyond all these yet gets 
reasonably close to the goals of the four options that were proposed.


I'll try to refine this, although I may sound a bit like I'm talking 
about cardboard persons or stickmen again. Part of this is because I 
don't really know how people are going to act, so I'm making a first 
degree approximation, to use such a term.



And, now, to work ...


re: ... why are your web log entries timestamped 2010?

Because the site puts the most recent posts at the point where they are 
the first encountered by visitors.  I asked the site how I could put the 
material in 'book order', and they told me I'd have to reverse the 
dates.  I chose a future date, and made subsequent posts at earlier 
dates to put the material in a logical order for the visitor.


Alright. I haven't seen that kind of format elsewhere - presumably they 
list the most recent entries first so that people who come back know if 
anything's new.



With regard to focusing on the job our representatives do ...

  I can see the point you're making, but I think you should be
   careful not to go to the other extreme, too.  Opinions may
   shift, but at the bottom of things, they're the people's
   priorities of in what direction to take society. The vagaries
   you speak of could be considered noise, and that noise is
   being artificially increased by the two main parties, since if
   they can convince their wing voters they represent their
   opinion (or change their opinion), then those voters are more
   likely to vote for them instead of not voting at all. That
   doesn't mean that there's no signal, though, and where that
   signal does exist, it should not be averaged out of existence
   or amplified in some areas and attenuated in others (as could
   happen if the majority of the majority is not equally much a
   majority of the whole).

re: That doesn't mean that there's no signal, though, and where
 that signal does exist, it should not be averaged out of
 existence or amplified in some areas and attenuated in
 others ...

I agree, but believe the signal is strongest during the selection phase. 
 That is when people focus their attention on their priorities of in 
what direction to take society and select the people they believe will 
lead them in that direction.


Here I'll refer to what I saw in my simulations. They show that, at 
least for simple opinion models, the ideas held by a majority turn 
into a near consensus. Something has to go, and that is the ideas held 
by the minority. As far as I understand, your response to that is that 
people are not static: the ostensible majority learns from the minority 
as we progress up the stages. That may be the case, and if so, that is 
good; but if not, then the method significantly limits the ideas that 
are not common to all.


I'll restate that what I'm talking about is not simple partisanship. 
Maybe a picture would work better. Consider the candidates' reaction to 
certain ideas as a plain. In some areas, you have mountains (where they 
agree very much on the corresponding idea), and in others, you have only 
small hills (where they're indifferent). Proportional representation 
would ideally construct a good replica of the combined landscape of the 
entire population.

Re: [EM] language/framing quibble

2008-11-01 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Morning, Kristofer

There is so much good material in your message that, instead of 
responding to all of it, I'm going to select bits and pieces and comment 
on them, one at a time, until I've responded to all of them.  I hope 
this will help us focus on specific parts of the complex topic we're 
discussing.  For today, I'm going to concentrate on two of your comments 
regarding group (or council) size:


1) Have a council of seven. Use a PR method like STV to pick
four or five. These go to the next level. That may exclude
opinions held by fewer than two of the seven, but it's better
than 50%-1. If you can handle a larger council, have one of
size 12 that picks 9; if seven is too many, a group of five
that elects two.

For small groups like this, it might be possible to make a
simpler PR method than STV, but I'm not sure how.

2) It's more like (if we elect three out of nine and it's
always the second who wins -- to make the diagram easier)

     e  n  w                   Level 2
  behknqtwz                    Level 1

  b  e  h   k  n  q   t  w  z  Level 1
 abcdefghi jklmnopqr stuvwxyzA Level 0

The horizon for all the subsequent members (behknqtwz) is
wider than would be the case if they were split up into
groups of three. In this example, each person at a level
represents three below him, just like what would be the
case if you had groups of two, but, and this is the
important part, they have input from the entire group of
eight instead of just three. Thus some may represent all the
views of less than three, while others represent some of the
views of more than three. The latter type would be excluded,
or at least heavily attenuated, in the triad case.

For convenience, I'll work with a group size of 9 picking 3 by a form of 
proportional representation:


Am I correct in imagining the process would function by having each of 
the 9 people rank the other 8 in preferential order and then resolve the 
preferences to select the 3 people that are most preferred by the 9?


Yes. In a truly unbiased scenario, each of the 9 people could submit a 
complete ranking (including himself), but since you've said you don't 
want voters ranking themselves first, they would rank all but 
themselves. Either the method would consider them equal-first, or have 
some sort of special "no opinion" provision (like Warren's Range).


That seems like a really good idea.  It is, however, a new idea for me, 
so it may take me some time to digest all the ramifications of the 
concept.  Even so, the first thoughts that leap to mind are:


1) It would allow voting secrecy.  In a group size of 3 selecting 1, 
secrecy is not possible; a selection can only be made if 2 of the three 
agree on the selection.  Many people say secrecy is important.  For my 
part, I'm not sure.  It may be important in the kind of electoral 
process we have now, but I'm not sure open agreement of free people is 
not a better option.


You could weaken that in two ways. The first would be to simply make the 
voting public. For instance, each person may need to say "my vote is A, 
then B, then C...", and then that is recorded. The second would be to 
have a consensus step like I considered in my previous post, where the 
proportional representation method is just advisory, and what really 
counts is a supermajority vote for who'll go onwards. The second 
wouldn't really be PR, though - or rather, it would only be so if the 
councilmembers are all good negotiators.


2) It reduces the potential for confrontation that would be likely to 
characterize 3-person groups.  We can make the argument that, in the 
selection of representatives, confrontation is a good thing.  Seeing how 
individuals react in tense situations gives us great insight into their 
ability to represent our interests.  We can also make the argument that 
a pressure-cooker environment is hard on the participants.


This returns us to the subject of optimal council sizes. I already 
talked about Parkinson's coefficient of inefficiency (that committees 
degrade significantly above 20 members), but beyond that, I think the 
only way to really know or to find answers to your questions - such as 
whether confrontation is too intense at three, or if it's too slack at 
nine - is to try it. People are people, so not everything can be derived 
from models.


There is one thing I will add, though. If we have a very simple concept 
of how people negotiate, where each has to know all the others to find 
a good compromise, then the collective burden increases as the second 
power of the number of members (for n members, each has n-1 links, so 
n*(n-1) in total). This means, for the simple model at least, that we pay more by 
increasing the size from 10 to 17 than from 3 to 10.
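Under this simple model the arithmetic checks out; a trivial sketch:

```python
def links(n):
    # Pairwise-acquaintance burden: each of the n members maintains a
    # link to the other n-1, so the collective burden is n*(n-1).
    return n * (n - 1)
```

Going from 10 to 17 members adds links(17) - links(10) = 182 links, while going from 3 to 10 adds only links(10) - links(3) = 84.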


3) Each participant's opportunity to evaluate each other participant is 
reduced; they must evaluate 8 people in the allotted time 

Re: [EM] Methods for Senators, governors, etc.

2008-11-02 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

A few thoughts:
 Plurality or Approval cannot fill need.
 IRV uses about the same ballot as Condorcet - but deserves 
rejection for its method of counting.
 Condorcet can - but I am trying to word this to also accept other 
methods that satisfy need.
     Range does much the same, but needs better words than I have seen 
as to how, simply, to rate SoSo when ranking would be Good>SoSo>Bad.
     Method needs to be understandable by voters (I read complaints about 
handling of Condorcet cycles - I claim that they do not need to be 
understood in detail - mostly that discussing frequency and effect 
should satisfy most).
     The methods that inspired this missive claim to offer some, 
possibly valuable, benefits - at a cost that may be prohibitive - leave 
them to audiences who agree the benefits are worth the cost.


If Schulze's too complex, use MAM (Ranked Pairs) or River. These are at 
least easy to explain. If people are very concerned about FBC, then 
perhaps MDDA - though I don't know how it does with respect to the 
advanced criteria (like clone resistance).


Schulze does have the advantage of wide use, at least compared to the 
two other methods here. While I don't know if potential legislators 
would lend any weight to its use in computer related organizations, the 
others haven't much of a record at all.


One other thing to note is that some multiwinner elections in New 
Zealand use Meek STV. That's not exactly the simplest of methods to 
understand, so it may still be possible to get complex methods through.




Re: [EM] language/framing quibble

2008-11-02 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:
The proposed electoral method uses computers to maintain a database of 
the electorate, generate random groupings, and record the selections 
made at each level.  This makes the process inherently bi-directional. 
Each elected official sits atop a pyramid of known electors, so 
questions on specific issues can easily be transmitted directly to and 
from the electors for the guidance or instruction of the official.


How extensively this capability is used depends on those who implement 
and administer the process.  For example, the town council, state 
legislature, and/or national congress can decree that certain questions 
must be referred to the people who selected the representatives serving 
in the body.  In such an event, the representative would instruct the 
database administrator to send the question down through the chain that 
elected him.


So, in essence, the pyramid structure remains even after selection? This 
sounds a bit like having councils all the way down to adjust the higher 
levels, except that the councils don't meet in the traditional fashion, 
but the messages just propagate when required. That should provide some 
correction and accountability, but I'll try to see how it may err.


You'll lose the "corruption resistance by surprise" property. This would 
mean that external parties could try to corrupt those at the 
next-to-highest level in order to overturn the highest; that is, if the 
next-to-highest council has formal powers. If it doesn't, there's 
nothing to say, in the worst case, that the highest level will listen to 
them, so let's assume it does.


One possible way of handling this would be to make the lower levels 
advisory. A lower level could send a message upwards; at higher levels 
short of the ones actually making up the legislature (or other body), 
these messages would be considered important, but from lower ones, they 
would be more like letters to congressmen. Also, lower levels may 
initiate referenda (initiative or recall) as if a majority of those 
below them in the pyramid had requested it; so if a direct democracy law 
requires ten thousand people to agree (submit signatures) for the 
referendum to be held, a person with twenty thousand below him (majority 
being 50%+1) could request it on his own. That would provide a 
countering force which would not be absolute; the higher level 
delegates would have to pay attention to what those lower to them have 
to say, yet those lower than them cannot be corrupted   at the blink of 
an eye. It's still not perfect, since higher-but-not-highest pyramid 
members may swamp the system, or those who were not selected may decide 
to make matters hard for those who were, just on principle. Still, 
perhaps those problems can be fixed. For PR, instead of 50%+1, it would 
be proportional to those who did elect the person in question, or 
rather, the strength of his support.
Perhaps it could be balanced further by that if a person at level n 
wants to ask the people of something, he must get a majority at levels 
below him to agree, or a majority of the two levels below, or something 
like that. That would work as a filter to prevent the kind of swamping 
or flooding I've considered above.



Another potential problem may lie in that the pyramid structure is 
static between elections, which means that as opinions shift among the 
members that were not elected, they may have inappropriate power. That's 
also a problem with representative democracy (parliamentary or 
presidential) but those have only one level, whereas this has more than one.


This capability should be used with caution, however.  Some of the 
matters public officials must decide do not admit of simple answers. 
Some may be unpopular or painful to the citizenry ... restraining the 
cancerous growth that currently dominates (and threatens) our existence 
will not be accomplished easily.  We want to elect people with the 
courage and wisdom to improve our society, not destroy it.  We can not 
expect to be happy with all their decisions.  We've taken pains to 
select people of integrity and judgment, we should not restrain them 
unnecessarily.


The matter of how and when this option should be used raises several 
questions.  For example, it leaves open the matters of how the questions 
should be framed and evaluation of the responses.  The answers to 
questions that elicit 'yes/no' responses can be influenced by the 
phrasing of the question.  On the other hand, anything more complex than 
a 'yes/no' response requires interpretation which could be difficult, 
since clarity of written expression does not seem to be an inherent 
human trait.


We must also consider how responses are to be transmitted upward.  My 
initial idea was that the people would give their response to the person 
they selected from their group, and that person would pass it upward.  I 
anticipate, though, an objection that this method would preserve the 
biases that influenced 

Re: [EM] language/framing quibble

2008-11-05 Thread Kristofer Munsterhjelm

Fred Gohlke wrote:

Good Morning, Kristofer

re: So, in essence, the pyramid structure remains even after
 selection?

Yes.  We have the capability of retaining the information and it should 
be used to enhance the role of those elected to act as spokesperson for 
a segment of the electorate.  In this connection, we should note the 
random grouping of candidates at each level ensures that the segment of 
the electorate represented by each elected official will be diverse.


How this capability is actually used will depend on those who implement 
the process.  As I've said before, I don't favor rigid monitoring of the 
people we elect.  However, because the process provides a simple means 
of enabling referenda and recall, the implementors should establish 
rules for their use.


Even without rigid monitoring, there should be a counteracting measure. 
After all, the councilmembers work on behalf of the people, so if they 
start consistently diverging from what the people want, there should be 
a way of directing them back. This way must not be too strict, or we get 
short term interest on one hand and populism on the other. It should 
still be there; I think that's partly the point of the bidirectionality 
we're talking about, although it's not limited to counteraction, but 
also involves information (to guide).



re: You'll lose the corruption resistance by surprise property.
 This would mean that external parties could try to corrupt
 those at the next highest level in order to overturn the
 highest; that is, if the next to highest council has formal
 powers. If it doesn't, there's nothing to say, in the worst
 case, that the highest level will listen to them, so let's
 say they have.

The process does not give unselected people any powers.  If they are to 
have powers, they must be granted by those who implement the process. As 
you point out, it would be fairly easy to devise rules that destroy the 
integrity of the system.  Indeed, that is precisely the way the current 
system was devastated, so the risk is real and imminent.  The best 
defense may be to analyze the rule-making aspect as quickly and as 
thoroughly as possible, one of the many considerations I hadn't 
anticipated.


My idea was that even the unselected helped the selected become 
selected. In one sense, they used their power (if one may call it that) 
to reach an agreement. Therefore, the selected are to some extent 
accountable to them; which makes sense if you go all the way down, 
where the people (except the candidates) are all ultimately unselected. 
As you say, formal power might not be the way to do so, though.



re: ... lower levels may initiate referenda (initiative or
 recall) as if a majority of those below them in the pyramid
 had requested it ...

I understand the reasoning but wonder if it's not a bit dangerous to 
assume those at lower levels support a given side of a single issue? 
Would it not be better to ask them, since the means of doing so is at hand?


Your suggestion that "... if a person at level n wants to ask the people 
of something, he must get a majority at levels below him to agree, or a 
majority of the two levels below, or something like that" is probably 
the better option.


There's a scale: on one end of the scale is a referendum. On the other 
end is an immediate decision by the council or government. The former 
end is direct but takes a very long time, while the latter is indirect 
but goes quickly. The suggestion was of something in between. Setting 
"a majority of the two levels below" or similar might work; this all 
depends on how much the selected at one level can be considered to 
represent those below them.


The resolution of this issue is tied to the particulars of the election 
cycle(s), particularly the term of office of the elected officials. 
Where an entire body is replaced every two years, there should be little 
need for referenda.  The combination of the time it takes for a public 
official to perform a misdeed and for that fact to become public, 
combined with the time the referendum takes, will usually exceed the 
time it will take to replace the individual in the next election.  Even 
so, we must have a mechanism to deal with malfeasance during longer terms.


We end up at another element that is hard to predict. If the 
final council (or aggregation of councils, assuming the body in question 
is larger than a triad or council) represents the people well, there's 
little need for a referendum - if they represent the people perfectly 
and would always do what they'd do given a magic zero cost referendum, 
then there would be no need or point at all. To the extent they 
disagree, referenda and initiatives would have a point. If we're unsure 
about whether they're useful, I think we should include them; they could 
be removed later if not used at all, whereas if the system turns more 
flawed than expected, the backup option (of direct 

Re: [EM] In defense of the Electoral College (was Re: Making a Bad Thing Worse)

2008-11-07 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:
With the EC it seems standard to do Plurality - a method with weaknesses 
most of us in EM recognize.


Let's do a Constitutional amendment to move up.

I propose Condorcet.  One advantage is that states could move up to use 
it as soon as ready.  States, and even districts within states, could 
remain with Plurality until able to move up - with their votes counted 
as if they did bullet voting with Condorcet.  Approval voting would be 
permitted the same way.
 To clarify, the US would be a single district, while vote counts 
could be published for states and other contained districts, as might be 
useful.


I think an NPV-style gradual change would have a greater chance of 
succeeding than would a constitutional amendment. The constitutional 
amendment requires a supermajority, and would thus be blocked by the 
very same small states that benefit from the current Electoral College.


As for the system of such a compact, we've discussed that earlier. I 
think the idea of basing it on a Condorcet matrix would be a good one. 
That is, states produce their own Condorcet matrices, and then these are 
weighted and added together to produce a national Condorcet matrix, 
which is run through an agreed-upon Condorcet method.


If all states use Plurality, well, the results are as in Plurality. If 
some use Condorcet, those have an advantage, and if some want to use 
cardinal weighted pairwise, they can do so. Yet it's technically 
possible to use any method that produces a social ordering (by 
submitting, if there are n voters and the social ordering is A>B>C, the 
Condorcet matrix corresponding to n: A>B>C). While imperfect, and 
possibly worse than Plurality-to-Condorcet or simple Condorcet matrix 
addition, the option would be there, and would be better than nothing.




Re: [EM] In defense of the Electoral College (was Re: Making a Bad Thing Worse)

2008-11-08 Thread Kristofer Munsterhjelm

Dave Ketchum wrote:

On Fri, 07 Nov 2008 09:58:30 +0100 Kristofer Munsterhjelm wrote:


I think an NPV-style gradual change would have a greater chance of 
succeeding than would a constitutional amendment. The constitutional 
amendment requires a supermajority, and would thus be blocked by the 
very same small states that benefit from the current Electoral College.


An NPV style change MIGHT have a greater chance than an amendment but:
 It would be incomplete.
 Small states could resist for the same reason.


If the small states resist, the large and middle sized states will 
attain a majority, and thus through the compact/agreement overrule the 
others. At that point, it'll be in the interest of the small states to 
join since their share of power by staying outside the system is 
effectively zero.


Note that small states could retain their advantage with an amendment -- 
as I proposed.  What might all states compromise on?


That would depend on the nature of the agreement. Either it would be 
straight NPV (all states weighted by population) or it would be 
according to current (EC) weighting.


For an amendment, it's possible that small states would oppose the 
amendment if it's population-normalized, whereas large states would 
oppose it if it was electoral-college-normalized.


As for the system of such a compact, we've discussed that earlier. I 
think the idea of basing it on a Condorcet matrix would be a good one. 
That is, states produce their own Condorcet matrices, and then these 
are weighted and added together to produce a national Condorcet 
matrix, which is run through an agreed-upon Condorcet method.


How do we tolerate either weight or not weight without formal agreement 
(amendment)?


I imagine a clause like: "The maximum power of a state shall be its 
population, as a fraction of the population of all states within the 
compact." Call this power p. The state shall be free to pick an x so that 
the weighting for this state is p * x, 0 <= x <= 1. That's for the 
closest thing to NPV. For a continuous electoral college, the first 
sentence would be "The maximum power of a state shall be the sum of its 
number of representatives and senators, divided by the sum of the number 
of representatives and senators for all states within the compact." 
There's no reason to have x < 1 except for future agreements to mutually 
diminish power (to turn an EC compact into a population-normalized one 
or vice versa).
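A sketch of how such a weighting might be computed (Python; all names are hypothetical, and dividing by ballots cast implements the same-power-regardless-of-turnout reading):

```python
def national_matrix(state_matrices, ballots_cast, populations, x_factors):
    # Sum the states' Condorcet matrices, each scaled by p * x, where p
    # is the state's population share.  Dividing by ballots cast first
    # gives each state the same power regardless of its turnout.
    total_pop = sum(populations.values())
    n = len(next(iter(state_matrices.values())))
    result = [[0.0] * n for _ in range(n)]
    for state, M in state_matrices.items():
        weight = (populations[state] / total_pop) \
                 * x_factors[state] / ballots_cast[state]
        for i in range(n):
            for j in range(n):
                result[i][j] += weight * M[i][j]
    return result
```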


I'll add that this phrasing would give states the same power no matter 
the relative turnout. If that's not desired, it could be rephrased 
differently, but giving states the same power is closer to the current 
state of things. The continuous electoral college variant does not take 
into account the 23rd Amendment, either.


If all states use Plurality, well, the results are as in Plurality. If 
some use Condorcet, those have an advantage, and if some want to use 
cardinal weighted pairwise, they can do so. Yet it's technically 
possible to use any method that produces a social ordering (by 
submitting, if there are n voters and the social ordering is A>B>C, 
the Condorcet matrix corresponding to n: A>B>C). While imperfect, 
and possibly worse than Plurality-to-Condorcet or simple Condorcet 
matrix addition, the option would be there, and would be better than 
nothing.
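The conversion from a social ordering to a pairwise matrix is mechanical: if all n voters are treated as voting the ordering A>B>C, then each candidate gets n against every candidate ranked below it. A minimal sketch:

```python
# Sketch: convert a social ordering (from any ranked method) into the
# pairwise matrix that n voters all voting that ordering would produce.

def ordering_to_matrix(ordering, n_voters):
    """ordering: candidate indices, most-preferred first.
    Returns matrix where [i][j] = voters ranking i above j."""
    m = len(ordering)
    matrix = [[0] * m for _ in range(m)]
    for pos, winner in enumerate(ordering):
        for loser in ordering[pos + 1:]:
            matrix[winner][loser] = n_voters
    return matrix

# 1000 voters, social ordering A>B>C (A=0, B=1, C=2):
print(ordering_to_matrix([0, 1, 2], 1000))
# [[0, 1000, 1000], [0, 0, 1000], [0, 0, 0]]
```

This is where information gets lost: the matrix pretends the electorate was unanimous, which is why aggregating, say, IRV results this way can be worse than simple Condorcet matrix addition.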


Actually each state does only the first step of Condorcet - the NxN array:
 If a state does Condorcet, that is exact.
 If a state does Plurality, conversion as if voters did bullet 
voting in Condorcet is exact.
 If a state does something else, it has to be their responsibility 
to produce the NxN array.
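The Plurality case above can be sketched directly: a bullet vote for X ranks X above every other candidate and leaves the rest unranked, so X gains its vote total against each opponent and no other pairwise cell changes. (Candidate indices and vote totals here are hypothetical.)

```python
# Sketch: a state's Plurality totals converted to an NxN pairwise array,
# as if every voter had cast a bullet vote in a Condorcet election.

def plurality_to_matrix(totals):
    """totals[i] = Plurality votes for candidate i."""
    m = len(totals)
    matrix = [[0] * m for _ in range(m)]
    for i, votes in enumerate(totals):
        for j in range(m):
            if j != i:
                matrix[i][j] = votes  # i's voters rank i above every j
    return matrix

print(plurality_to_matrix([500, 300, 200]))
# [[0, 500, 500], [300, 0, 300], [200, 200, 0]]
```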


Yes. What I'm saying is that it's theoretically possible to incorporate 
any voting method into this; however, the results might be suboptimal if 
you try to aggregate, say, IRV results this way, since you'd get the 
disadvantages of both IRV and Condorcet (nonmonotonicity from the former 
and LNH* failure from the latter, for instance).



States have differing collections of candidates:
 In theory, could demand there be a single national list.  More 
practical to permit present nomination process, in case states desire such.
 Thus states should be required to prepare their NxN arrays in a 
manner that permits exact merging with other NxN arrays, without having 
to know what candidates may be in the other arrays.


The easiest way to do this is probably to have the candidates sorted (by 
name or some other property, doesn't really matter). When two matrices 
with different entries are joined, expand the result matrix as 
appropriate. Since the candidate indices are sorted, there'll be no 
ambiguity when joining (unless two candidates have the same names, but 
that's unlikely).
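A sketch of that merging rule, assuming states label their matrices with candidate names (the names and figures are made up). Indexing the merged matrix by the sorted union of names is what makes the join unambiguous regardless of which candidates appear where:

```python
# Sketch: merge NxN pairwise arrays from states with differing
# candidate sets, indexing the result by sorted candidate names.

def merge(named_matrices):
    """named_matrices: list of (names, matrix) pairs per state."""
    all_names = sorted(set().union(*(set(names) for names, _ in named_matrices)))
    index = {name: i for i, name in enumerate(all_names)}
    n = len(all_names)
    merged = [[0] * n for _ in range(n)]
    for names, matrix in named_matrices:
        for a, row in zip(names, matrix):
            for b, value in zip(names, row):
                merged[index[a]][index[b]] += value
    return all_names, merged

state_1 = (["Alice", "Bob"], [[0, 60], [40, 0]])
state_2 = (["Bob", "Carol"], [[0, 30], [20, 0]])
names, merged = merge([state_1, state_2])
print(names)   # ['Alice', 'Bob', 'Carol']
print(merged)  # [[0, 60, 0], [40, 0, 30], [0, 20, 0]]
```

Weighting is omitted here for brevity; each state's matrix would be scaled by its agreed weight before the addition, as discussed earlier.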


Election-Methods mailing list - see http://electorama.com/em for list info

