Some, but not all, issues are going to have the correlation you are
talking about. Given that there are two issues that are really just
facets of some third issue, we can just let that third issue be
one of the two dimensions in my model. But surely there will be
some fourth issue that doesn't correlate with the first two.
Nevertheless, it's not a major change. I could make the type of distance
a command-line option so that it will be easy to test the effect of
changing it. Not my first priority, though, since I still have to:
1. Code more methods. Actually, I've done several already, but in the
Condorcet category I've only done Plain Condorcet. I will probably
add SSD and stop there. The sim will then include Random Choice,
Random Ballot, Plurality, IRV, PC, SSD, Borda, and Approval. I also
have to add tie-breakers for the methods I've implemented.
2. Code front-runner strategies for Approval and Plurality. My chief
interest at this point is to determine how sensitive Approval is to
voter strategy.
3. Code non-uniform distributions. Right now, I've only done uniform
distributions, and all methods except the random ones and Plurality
have about 90% success rate or better at picking the candidate with
highest potential. Plurality is somewhere in the 60-70% range (even
IRV gets around 90%). I expect a lot to change when the distributions
of voters and candidates both have some modality.
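To give a rough idea of the setup, here is a simplified sketch of that kind of spatial simulation (not my actual code; every name and parameter here is illustrative): voters and candidates are drawn uniformly from a unit square, voters rank candidates by Euclidean distance, and the Plurality and Borda winners are compared against the candidate with the lowest total distance to the voters, which is what I mean by "highest potential".

```python
import random

def spatial_election(n_voters=1000, n_candidates=5, dims=2, seed=1):
    """Sketch: voters and candidates uniform in a unit hypercube;
    each voter ranks candidates by Euclidean distance."""
    rng = random.Random(seed)
    voters = [[rng.random() for _ in range(dims)] for _ in range(n_voters)]
    cands = [[rng.random() for _ in range(dims)] for _ in range(n_candidates)]

    def dist(v, c):
        return sum((a - b) ** 2 for a, b in zip(v, c)) ** 0.5

    rankings = [sorted(range(n_candidates), key=lambda c: dist(v, cands[c]))
                for v in voters]

    # "Highest potential" candidate: minimizes total distance to the voters.
    best = min(range(n_candidates),
               key=lambda c: sum(dist(v, cands[c]) for v in voters))

    # Plurality: each voter votes for the nearest candidate.
    tallies = [0] * n_candidates
    for r in rankings:
        tallies[r[0]] += 1
    plurality_winner = tallies.index(max(tallies))

    # Borda: a candidate ranked k-th (0-based) gets n_candidates - 1 - k points.
    borda = [0] * n_candidates
    for r in rankings:
        for k, c in enumerate(r):
            borda[c] += n_candidates - 1 - k
    borda_winner = borda.index(max(borda))

    return best, plurality_winner, borda_winner
```

Running this over many seeds and counting how often each winner matches `best` gives the kind of success rates I quoted above.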
On the front-runner strategy question, I will probably select two
front runners by multiplying each candidate's majority potential
by a random number between 1 and 2. That way, centrist candidates
are favored to be front runners, but the effect of non-issue factors
will be modeled, too.
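In code, the selection rule would look something like this (a sketch only; `majority_potential` stands in for however the sim actually computes each candidate's potential):

```python
import random

def pick_front_runners(majority_potential, rng=None):
    """Scale each candidate's majority potential by a random factor
    in [1, 2], then take the top two as front runners."""
    rng = rng or random.Random()
    scores = [p * rng.uniform(1.0, 2.0) for p in majority_potential]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[0], ranked[1]
```

Since the random factor is at most 2, a candidate can only leapfrog candidates whose potential is less than twice its own, so strong centrists stay favored.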
Instead of the two non-zero info strategies for Approval that I
previously suggested, I now plan to implement a single strategy,
which will be to adjust the voter's rating of each front runner up
or down (up for the preferred front runner, down for the other)
before applying the above-the-mean strategy. I haven't decided
on the exact algorithm for this adjustment, though.
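A first cut at that strategy might look like the following (the size of the adjustment is a placeholder, since as I said the exact algorithm isn't decided):

```python
def approval_ballot(ratings, front_runners, bump=0.5):
    """Sketch: bump the preferred front runner's rating up and the
    other's down by a fixed amount ('bump' is a placeholder), then
    approve every candidate rated above the mean adjusted rating."""
    a, b = front_runners
    adjusted = list(ratings)
    if ratings[a] >= ratings[b]:
        pref, other = a, b
    else:
        pref, other = b, a
    adjusted[pref] += bump
    adjusted[other] -= bump
    mean = sum(adjusted) / len(adjusted)
    return [r > mean for r in adjusted]
```

For example, with ratings [0.9, 0.5, 0.1] and front runners 1 and 2, the voter approves candidates 0 and 1 but not 2.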
Richard
Anthony Simmons wrote:
Usually, if there are a whole lot of factors (as in "factor
analysis"), they aren't independent. For example, you'd
imagine that if I'm morally opposed to ice cream, I'd most
likely be opposed to frozen yogurt as well. If you make one
the X coordinate and one the Y coordinate, and plot the
positions of actual moral philosophers, you'd expect to find
a pretty steady correlation between X and Y, with most of the
pack along a straight line through the origin and
representing a third factor (a derived one in this case), Z,
"moral correctness of frozen confections".
It's reasonable to consider Z as basic as X and Y, so we'd
like to be able to rotate our graph so that this factor is
horizontal or vertical and becomes one of the coordinate
axes, without changing any of the relationships between the
variables or the distances between points on the Z line; we
wouldn't want a policy to become more or less extreme on our
scale just because we rotated the scales.
Using the root-sum-of-squares distance makes all of this very
clean. Of course, you could preserve distances in other
ways. If you were using the city block distance, and you
rotated the coordinates so that (1, 1) ended up on the X
axis, you could stretch it to (2, 0). After all, if you're
not using Euclidean distance, then there's no requirement
that (1, 1) rotates onto (1.414, 0). But when you throw out
the way we normally measure distances, you throw out the
underlying geometry, and we all know that the most important
thing about any measurement is the underlying geometric
aesthetics.
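The (1, 1) example is easy to check numerically. This small sketch shows that rotating both points preserves the Euclidean distance between them but not the city-block distance:

```python
import math

def rotate(p, theta):
    """Rotate a 2-D point about the origin by angle theta (radians)."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cityblock(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

p, q = (0.0, 0.0), (1.0, 1.0)
theta = -math.pi / 4          # rotate (1, 1) onto the positive X axis
p2, q2 = rotate(p, theta), rotate(q, theta)
# Euclidean distance is sqrt(2) both before and after the rotation,
# and (1, 1) lands on (sqrt(2), 0) as expected.
# City-block distance changes: 2 before, sqrt(2) after.
```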
Another consideration: In the illustration above, the data
actually lies along one dimension, embedded in a two-
dimensional space. If we were to add popsicles to ice cream
