On Wed, 2002-04-03 at 00:16, Anthony DeRobertis wrote:

> I assume the following are the goals of verifying a vote. I
> believe these goals are justified:
All the fun discussion piqued my curiosity, so I pulled Schneier down
from the shelf and did some reading. The results are interesting. For
posterity, here's a brief summary. This is from _Applied Cryptography_,
2nd edition, section 6.1. (In my printing, it starts on page 125.)

NOTE: I am NOT criticizing the current voting protocol, or anyone's
integrity, or anything else; this is an intellectual exercise.

First off, Schneier has six requirements for a secure election, plus one
"sometimes" requirement.

> 1) No voter can vote twice

This is his number two.

> 4) No person not registered to vote may vote.

This is his number one.

> 6) Every voter can verify the correct counting of the votes

This is his number six, sort of. Schneier is actually only concerned
with each voter being able to see that his/her own vote was counted.

> 7) No one can determine how another person voted

This is his number three.

> [ I really should grab Applied Crypto and make sure I didn't
> miss any ]

You missed two:

4. No one can duplicate anyone else's vote.
5. No one can change anyone else's vote without being discovered.

And the "optional" requirement:

7. Everyone knows who voted and who didn't.

Interestingly, Schneier considers number four to be the tough
requirement.

> All the shared keys schemes proposed so far have failed to
> follow 5 and 9, and perhaps others. The reason is that nothing
> stops the secretary from adding additional votes.

Your steps 5 and 9 aren't "important"; however, your steps 5 and 6 are
pretty much identical if you define "the vote" as "the voter's intent",
and your step 9 is sort of assumed in all cryptographic protocols - that
no one can break the rules without someone else knowing.

Several protocols are discussed in the chapter. The first ones are
flawed in some way. The last protocol discussed that assumes a central
tabulator is claimed to meet all six of the "real" requirements, and it
has two other nice features:

8. A voter can change his mind within a certain time frame.
9. Voters can correct mistaken votes without compromising the ballot's
   secrecy.

The basic protocol goes something like this (a rough code sketch
follows below):

1. All voters register their intent to vote, and the central tabulating
   facility (CTF) publishes the list of voters intending to vote.
2. Each voter gets a unique ID number anonymously; the CTF has the list
   of IDs, but only the voter can associate himself with a particular
   ID.
3. The voter generates a public/private key pair, combines the ID with
   his/her vote, encrypts that with the public key (s)he generated, and
   sends it anonymously to the CTF along with the plaintext ID.
4. The CTF publishes the encrypted message.
5. Finally, all the voters send their private keys and IDs to the CTF
   (again anonymously), which decrypts all the votes and publishes the
   results, along with all the encrypted messages.

If the voter wants to change his/her vote, or notices it is wrong, (s)he
combines the new vote with the ID, encrypts that, then sends the
plaintext ID, the encrypted message, and the private key to the CTF,
again anonymously.

The trick is distributing the unique IDs securely and anonymously. Two
techniques are mentioned: blind signatures and all-or-nothing disclosure
of secrets (ANDOS). Schneier goes into more detail on the ANDOS version.
Blind signatures amount to a way of signing something without knowing
what it is, but with confidence that it isn't something you don't want
to sign. ANDOS is a way of distributing a set of secrets such that
another person can learn one secret of his/her choice, but no others.
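To make the flow concrete, here is a rough Python sketch of the voter's
side of the numbered steps above. It only illustrates the shape of the
messages, not real cryptography: the tiny textbook RSA numbers, the
packing of ID and vote into one integer, and the hard-coded ID are all
my own stand-ins for this sketch, not anything Schneier specifies.

    # Toy sketch of the voter's side of the CTF protocol above.  The "RSA"
    # uses tiny textbook numbers and encrypts one small integer; it is NOT
    # real cryptography, just enough to show the shape of the messages.

    def toy_rsa_keypair():
        p, q = 61, 53                        # textbook primes, useless for security
        n = p * q                            # modulus (3233)
        e = 17                               # public exponent
        d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)
        return (n, e), (n, d)

    def encrypt(pub, m):
        n, e = pub
        return pow(m, e, n)

    def decrypt(priv, c):
        n, d = priv
        return pow(c, d, n)

    # Step 2: the voter somehow obtains a unique ID anonymously (blind
    # signatures or ANDOS in Schneier's version; here we just pick one).
    voter_id = 42

    # Step 3: generate a key pair, combine the ID and the vote into one
    # number, encrypt it, and send ciphertext plus plaintext ID to the CTF.
    pub, priv = toy_rsa_keypair()
    vote = 1                                 # e.g. 1 = "yes", 0 = "no"
    ballot = voter_id * 10 + vote            # "combine the ID with the vote"
    ciphertext = encrypt(pub, ballot)
    message_to_ctf = (voter_id, ciphertext)  # sent over an anonymous channel

    # Step 4: the CTF publishes the ciphertext next to the ID.
    # Step 5: after voting closes, the voter sends (voter_id, priv), and
    # the CTF decrypts and tallies.
    recovered = decrypt(priv, ciphertext)
    assert recovered == ballot
    print("ID", recovered // 10, "voted", recovered % 10)

Packing the ID and vote into one integer is just the simplest way to
express "combines the ID with his/her vote"; any unambiguous encoding
would do.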
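The blind-signature trick mentioned above can also be shown in a few
lines. This is the standard textbook RSA blinding construction, again
with toy numbers of my own choosing; Schneier's actual ID-distribution
protocol has more moving parts than this.

    # Textbook RSA blind signature: the signer signs a blinded message
    # without learning what it signed; the requester unblinds the result
    # into an ordinary, verifiable signature.  Toy numbers only.

    p, q = 61, 53
    n = p * q                                # 3233
    e = 17                                   # signer's public exponent
    d = pow(e, -1, (p - 1) * (q - 1))        # signer's private exponent

    m = 1234                                 # the ID the voter wants signed
    r = 7                                    # blinding factor, coprime to n

    blinded = (m * pow(r, e, n)) % n         # voter blinds m before sending it
    blind_sig = pow(blinded, d, n)           # signer signs without seeing m
    sig = (blind_sig * pow(r, -1, n)) % n    # voter strips the blinding factor

    assert pow(sig, e, n) == m               # verify with the signer's public key
    print("signature on", m, "is", sig)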
Voters can't cheat with this. They can't use the same ID twice, or pick
their own ID, because the plaintext ID is embedded in the encrypted
message containing the vote, and the CTF knows all the valid IDs. They
can't pretend to be someone else, because they'd have to guess a valid
ID held by someone else. If the same ID comes in encrypted with two
different private keys, the CTF picks one, generates a new unique ID,
and publishes the new ID together with that original encrypted vote.
That voter is then responsible for noticing the new ID and sending in a
new vote, using the new ID from the CTF instead of the original one;
otherwise, the vote is discarded. Thus, at most one person could get
multiple votes; if the IDs are random enough, this won't be a serious
problem. (A toy sketch of this bookkeeping appears at the end of this
message.)

This doesn't eliminate the problem of trusting the CTF, although it
limits the CTF's options; the CTF may still "proxy vote" all non-voting
IDs at the end of the election. Having voters declare their intention to
vote mitigates this somewhat, since it reduces the number of unused IDs;
if a voter declares his/her intention and then doesn't vote, however,
that vote is the CTF's for the taking. Also, the CTF may drop votes, and
the voter cannot prove that (s)he actually voted properly; one would
think, though, that if this happened often enough to influence the
election, enough of a public stink would be raised that the election
results would be suspect.

> The easiest solution is to make sure we can trust our vote
> counter. For other solutions, I will have to pull out Applied
> Crypto.

There is a method described in _AC_ that does not require a CTF. It
isn't practical, though, for any more than a few people.
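For completeness, here is the promised rough sketch of the CTF's
bookkeeping, including the duplicate-ID handling described earlier. The
class, its method names, and the decrypt/packing conventions (reused
from the voter sketch above) are all my own inventions for illustration;
they are not from the book.

    import secrets

    class CTF:
        """Toy central tabulating facility for the protocol sketched above."""

        def __init__(self, valid_ids):
            self.valid_ids = set(valid_ids)   # IDs handed out anonymously
            self.ballots = {}                 # ID -> encrypted ballot
            self.published = []               # the public bulletin board

        def receive(self, voter_id, ciphertext):
            if voter_id not in self.valid_ids:
                return "rejected: unknown ID"
            if voter_id in self.ballots:
                # The same ID arrived twice (presumably under two different
                # keys): keep one ciphertext, retire the contested ID, issue
                # a fresh one, and publish it with the kept ciphertext so the
                # affected voter can notice and re-vote.  Unless that
                # happens, neither submission gets counted.
                kept = self.ballots.pop(voter_id)
                self.valid_ids.discard(voter_id)
                new_id = secrets.randbelow(10**9)
                self.valid_ids.add(new_id)
                self.published.append((new_id, kept))
                return "collision: re-vote using ID %d" % new_id
            self.ballots[voter_id] = ciphertext
            self.published.append((voter_id, ciphertext))
            return "accepted"

        def tally(self, revealed_keys, decrypt):
            # After the deadline, voters anonymously reveal (ID, private
            # key); ballots whose keys are never revealed aren't counted.
            results = {}
            for voter_id, priv in revealed_keys:
                if voter_id in self.ballots:
                    ballot = decrypt(priv, self.ballots[voter_id])
                    vote = ballot % 10        # same "ID * 10 + vote" packing
                    results[vote] = results.get(vote, 0) + 1
            return results

Note that nothing in this sketch addresses the "proxy vote" worry above:
a dishonest CTF could just as easily call receive() on the unused IDs
itself, which is exactly the residual trust problem described earlier.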

