Seems like a waste, other than perhaps to help dump IRV, but:

Why not a logical matrix that takes less space by storing only elements that have data, even if that costs more space per element:

Each element contains a candidate identifier, a count of usage, and pointers up and right.

Pick up the first ballot. Put each candidate in a new element, with a count of 1 and a right pointer to the next element. Finish with an empty element.

Pick up the next ballot. If the voting is the same, simply add 1 to each count.

Pick up the next ballot. When you come to a difference, create an element in new space, point up to it, and then go right to complete the ballot.

When going up there may already be a chain: go up until you see a match, and go up to a new element if there is no match.

These matrices should be summable. When going to the right, just add the counts. When going up, add counts where the candidates are the same, and add new elements to the chain where needed.

Likewise, when deleting a losing candidate, add its array to wherever it fits in what remains.
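Read as a pointer structure, this is essentially a trie: "right" links the successive preferences of a pattern, and "up" chains the alternative candidates at a given position. A minimal Python sketch under that reading (the names Element, add_ballot, and merge, and the use of None for the terminating empty element, are illustrative assumptions, not from the post):

    class Element:
        """One element: candidate id, usage count, pointers up and right."""
        def __init__(self, candidate):
            self.candidate = candidate   # None marks the terminating empty element
            self.count = 0
            self.up = None               # alternative candidate at this position
            self.right = None            # next preference in this pattern

    def add_ballot(root, ballot):
        """Insert one ballot pattern; returns the root (created on first call)."""
        pattern = list(ballot) + [None]          # None = the empty element
        if root is None:
            root = Element(pattern[0])
        node = root
        for i, candidate in enumerate(pattern):
            while node.candidate != candidate:   # walk the up-chain for a match
                if node.up is None:              # no match: extend the chain
                    node.up = Element(candidate)
                node = node.up
            node.count += 1
            if candidate is not None:
                if node.right is None:
                    node.right = Element(pattern[i + 1])
                node = node.right
        return root

    def merge(dst, src):
        """Sum one structure into another (dst assumed non-empty): add counts
        where candidates match, append new elements to the up-chain otherwise."""
        while src is not None:
            node = dst
            while node.candidate != src.candidate:
                if node.up is None:
                    node.up = Element(src.candidate)
                node = node.up
            node.count += src.count
            if src.right is not None:
                if node.right is None:
                    node.right = Element(src.right.candidate)
                merge(node.right, src.right)
            src = src.up
        return dst

Deleting a losing candidate would be a similar walk: splice that candidate's elements out of each up-chain and merge their right-chains into what remains, as described above.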

Dave Ketchum

On Feb 5, 2010, at 4:10 PM, Abd ul-Rahman Lomax wrote:

At 01:12 PM 2/5/2010, James Gilmour wrote:
> From: Abd ul-Rahman Lomax
> Sent: Friday, February 05, 2010 4:50 PM
<CUT>
> Practically speaking, I'd assume, the precincts would be provided
> with a spreadsheet showing the possible combinations, and they would
> report the combinations using the spreadsheet, transmitting it. So
> some cells would be blank or zero. With 5 candidates on the ballot,
> the spreadsheet has gotten large, but it's still doable. What happens
> if preferential voting encourages more candidates to file, as it
> tends to do? 23 candidates in San Francisco? Even with three-rank
> RCV, it gets hairy.

Respectfully, I would suggest this would NOT be a wise way to collect the data. As I pointed out in my e-mail that correctly listed the maximum possible number of preference profiles for various numbers of candidates, the actual number of preference profiles in any election (or any one precinct) with a significant number of candidates will be limited by the number of voters. Further, because some (many) voters will choose the same profiles of preferences, the actual number of preference profiles will likely be even lower, as in the Dáil Éireann election I quoted.

That's correct; however, there is no practical way to predict which profiles are needed. Sorting the ballots into piles and subpiles until there is a separate pile for every profile strikes me as how it would be done (or the ballots could be sorted in sequence, according to the physical position of the marks, which would probably be faster). Then the data from each pattern would be entered into the matching position on the spreadsheet.
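For illustration, the pile-and-subpile sorting amounts to counting distinct ranking tuples; a sketch in Python with made-up ballots:

    from collections import Counter

    # Each ballot is the tuple of rankings as marked; truncated ballots
    # are simply shorter tuples. Each distinct tuple is one "pile".
    ballots = [
        ("A", "B", "C"),
        ("A", "B", "C"),
        ("B", "A"),
        ("A", "C", "B"),
    ]
    piles = Counter(ballots)
    for profile, count in sorted(piles.items()):
        print(">".join(profile), count)

Only the profiles actually cast appear, which is why the report stays small even when the number of possible profiles is astronomically large.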

Thus a spreadsheet containing all possible preference profiles would be unnecessarily large and the probability of making mistakes in data entry would likely be greater than if each precinct recorded only the numbers for each profile actually found in that precinct.

The probability of making mistakes is not as stated, because there is a check on the spreadsheet data; indeed, there can be several checks. First of all, I'd sort the ballots by first preference and transmit that data. This is merely preliminary, but those totals might decide the election. The sums should equal the number of ballots found.

Then the piles would be sequenced and the totals found for each particular pattern. It may be more efficient to keep A>.>B separate from A>B, because less interpretation is required; i.e., "blank" simply becomes another candidate. That adds to the possibilities, for sure, but simplifies the actual sorting. Blank intermediate votes should be pretty rare with IRV, so this will not materially add to the data that must be transmitted.
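A small sketch of the "blank becomes another candidate" point, using a hypothetical "." marker:

    from collections import Counter

    BLANK = "."   # treat a skipped rank as just another "candidate"
    piles = Counter([("A", BLANK, "B"), ("A", "B"), ("A", "B")])
    # ('A', '.', 'B') and ('A', 'B') remain separate piles; collapsing
    # them is a later interpretation step, not part of the sorting.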

The spreadsheet could be transmitted raw, or it could be edited to remove empty rows (i.e., patterns with no matching ballots found). That reduces the transmitted data but increases local processing and the possibility of error. In either case, however, the check by summing remains, and checking the subpatterns against each first-choice total is an additional error check. The first data transmitted could actually be used to shorten the process: there would be two reports from each precinct, a first report with only first-rank votes, then a wait while central tabulation collects enough precincts to advise on batch elimination, and then an additional transmission with all remaining relevant patterns.
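The checks might look like the following sketch; the function name and the report shapes are assumptions, not anything prescribed above:

    from collections import Counter

    def check_report(first_pref_totals, pattern_counts, num_ballots):
        """first_pref_totals: {candidate: count}, the preliminary report.
        pattern_counts: {ranking tuple: count}, the full report."""
        # Check 1: the preliminary totals must sum to the ballots found.
        assert sum(first_pref_totals.values()) == num_ballots
        # Check 2: the full patterns, grouped by first preference, must
        # reproduce the preliminary report candidate by candidate.
        grouped = Counter()
        for pattern, n in pattern_counts.items():
            grouped[pattern[0]] += n
        assert grouped == Counter(first_pref_totals)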

There is no doubt that IRV can be counted, but the point is that it can get really complex and take a lot of time when an election is close with many candidates. With more than a small handful of candidates, experience has shown that it can be a time-consuming and expensive process when done by hand, and very difficult to audit even when done by computer. That's why the election security people here in the U.S., in general, don't like it.

What is done in practice is to collect and analyze ballot images. This has been done with preprocessing to collapse votes like A>.>B, but that's actually only a minor improvement, and it reduces transparency. If I'm correct, the collection of the data has been done centrally, the equipment not being present at the voting precincts; in short, they truck the ballots to central tabulation. This creates other risks.

> However, the problem with this is that a single error in a precinct
> can require, then, all precincts to have to retabulate.

Yes, this "distributed counting" would work. But there is an even simpler solution - take all the ballots to one counting centre and then sort and count only the ballots that are necessary to determine the winner (or winners in an STV-PR election).

That's what's being done. What experience here shows is that, even centrally counted, errors happen in earlier rounds that then require recounting all later rounds. The possibility of this rises with the number of candidates and the closeness of the election.

That is what has been done for public elections in Ireland and the UK for many decades, and it works well without problems. But I do appreciate that it is far too simple and practical a solution and that it suffers from NMH.

I don't think it's true that it has been "without problems." There are and have been problems. But if IRV were an optimal method, it might be worth the trouble. For multiwinner STV, indeed, it might well be worth the trouble. But for single-winner? I don't think so. There are simpler methods that produce better results, by all objective measures.

(Frankly, there is only one clearly objective measure, which is how a method performs in simulations, particularly with reasonable simulation of actual preference profiles -- full utility profiles -- and of the voting strategies voters are known to use or are likely to use. "Election criteria," like the Condorcet Criterion, tend to be criteria that are intuitively satisfying but that can fail completely and obviously under certain conditions, and a method failing a criterion may mean nothing if the failure is so rare, and requires such unusual voting patterns, that it will never be encountered under realistic conditions. Basically, how do we judge the criteria? There are only two ways that I see: one is through utility analysis, and the other is through basic democratic principles, broadly accepted, such as the right of decision that is held by a majority: a majority of voters voting for a single proposition, with no opposing majority simultaneously voting for a conflicting proposition, must have the right to implementation. When there are multiple majorities, there is not a simple question, and there remains doubt as to the majority decision.)


----
Election-Methods mailing list - see http://electorama.com/em for list info
