Ernest Prabhakar writes:

My point about the integer program model is that, while it may
technically be deterministic, such a deterministic process would be
highly sensitive to algorithm details (e.g., do you start from the top
or bottom of the state?) and to tiny population fluctuations.   Minor
errors in input would lead to drastic changes in output, and anyone who
disliked the results would find ample excuses for challenging them.
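
Ernest's sensitivity point can be illustrated with a toy sketch (the town
populations and the greedy fill rule below are invented for illustration, not
any real redistricting algorithm): sweeping the same line of towns from
opposite ends produces different district boundaries, even though each sweep
is fully deterministic.

```python
def greedy_districts(populations, n_districts):
    """Assign consecutive towns to districts, closing a district once it
    reaches the target (equal-share) population."""
    target = sum(populations) / n_districts
    districts, current, total = [], [], 0
    for i, p in enumerate(populations):
        current.append(i)
        total += p
        # Close the district at the target, but keep the last district
        # open so it absorbs any remaining towns.
        if total >= target and len(districts) < n_districts - 1:
            districts.append(current)
            current, total = [], 0
    districts.append(current)
    return districts

towns = [40, 10, 30, 25, 35, 20, 15, 45]  # hypothetical town populations

# Sweep from the "top" of the state.
top_down = greedy_districts(towns, 3)

# Sweep from the "bottom": run the same rule on the reversed list, then
# map the indices back to the original ordering.
bottom_up = [[len(towns) - 1 - i for i in d][::-1]
             for d in greedy_districts(towns[::-1], 3)][::-1]

print(top_down)    # district boundaries from one scan order
print(bottom_up)   # different boundaries from the other scan order
```

The populations never changed; only the (arbitrary) choice of where to start
the scan did, yet the district lines moved.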

Matt responds:

Can someone know in advance what the most likely impact of specific small population 
changes would be on district shapes, sizes, and locations?  I doubt that this is 
likely to be the case in practice.  Details like which corner to start from should be 
standardized and published along with the source code.  It may be that there are 
problems with the way the re-districting or census is done in a given instance, and a 
lawsuit brings the problem to light and forces corrections to be made.  That would be 
good.

Ernest Prabhakar writes:

While such randomness issues would almost certainly NOT benefit any
particular candidate, I think there are other issues involved.
A pure "integer program model" would have no respect for communities,
drawing boundaries wherever it minimized circumference, even if it
meant slicing off small parts of a community at random.  District
boundaries would also tend to change radically when redistricted.
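
The "minimize circumference" objective can be made concrete with a small
sketch (the grid and the scoring rule here are invented): if a two-district
split is scored purely by the number of cut edges between the districts, the
straightest boundary always wins, with no regard for what those edges pass
through.

```python
from itertools import product

ROWS, COLS = 3, 4
cells = list(product(range(ROWS), range(COLS)))

def cut_edges(assignment):
    """Count grid edges whose two endpoint cells fall in different
    districts -- a discrete stand-in for boundary 'circumference'."""
    cut = 0
    for r, c in cells:
        for nr, nc in ((r, c + 1), (r + 1, c)):
            if (nr, nc) in assignment and assignment[(r, c)] != assignment[(nr, nc)]:
                cut += 1
    return cut

# A straight vertical split into two equal districts of 6 cells each...
straight = {(r, c): int(c >= COLS // 2) for r, c in cells}

# ...versus a jagged split of the same sizes (two border cells swapped).
jagged = dict(straight)
jagged[(0, 1)], jagged[(2, 2)] = 1, 0

print(cut_edges(straight), cut_edges(jagged))
```

The straight cut has the shorter boundary, so a pure circumference objective
prefers it; nothing in the score knows or cares whether either boundary
splits a community.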

Matt responds:

What gets respect under the current politicized re-districting process is the interest 
of the majority party at the expense of the minority party.  The biggest impact of 
automating re-districting would be to reduce the disparity in influence between the 
majority and minority parties.  The majority party assigns its loyal representatives 
to chair the committees anyway.  Given the almost autocratic powers that committee 
chairs typically have, I don't think we need to be worried about the loss of 
majority-party influence due to automating the re-districting process.

Ernest Prabhakar writes:

Now, maybe we on this list like the idea of purely random districts
that change dramatically and unpredictably every ten years.    However,
the politicians surely wouldn't -- and not only for selfish reasons.  It
makes it harder to build any sort of coherent connection with your
district and the communities that make it up.   This in turn would
make it much harder to sell to the general public.

Matt responds:

While dramatic changes to district shapes, sizes, and locations are possible with 
small population changes, what is possible is not the same as what is probable.  The 
issue of district stability has to be placed in the context of the current situation, 
where a change in the majority party is also likely to result in major changes to 
districts.  Currently, computers are utilized to help the majority party squeeze as 
many safe districts as possible out of the maps, even if it means districts that look 
like they were drawn by an avant-garde artist.  The current re-districting process 
allows an incumbent majority party supported by less than a majority of voters to 
still win the most seats in the legislature, and thus retain its majority status 
contrary to public opinion.

Ernest Prabhakar writes:

Therefore, as a -practical- matter, I think any such computational
redistricting has to be done in a way that reflects "natural" community
boundaries.  This should lead to:
a) more recognizable and definable districts
b) greater resistance to small fluctuations
c) greater stability across redistricting events

Of course, such initial conditions do have the potential to benefit one
party over another (say, by enhancing the voting weight of cities).  
Therefore, it would be wise to test this out in simulations before
giving it to the politicians.  However, I do think computational
redistricting needs some sort of 'sensibility' check.  Otherwise, you
might end up with the sort of public outcry we've seen with the
computerized college Bowl Championship Series.

Put another way: there are no unbiased algorithms, only hidden biases;
it's better to get them out in the open and explicit so that they can be
debated and decided upon.
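
The "explicit bias" point can be made concrete with a sketch (the grid,
community labels, and penalty weight below are all invented): if edges that
cut through a community cost more than edges along a community border, the
preference for community integrity becomes a visible, debatable parameter
rather than a hidden one.

```python
ROWS, COLS = 3, 4
cells = [(r, c) for r in range(ROWS) for c in range(COLS)]

# Invented community labels: left half is community "A", right half "B".
community = {(r, c): "A" if c < 2 else "B" for r, c in cells}

def weighted_cuts(assign, community_penalty=3):
    """Sum the costs of edges between districts; an edge that cuts
    through the interior of a community costs more than one that runs
    along a community border.  The penalty weight is the explicit,
    tunable 'bias' that can be published and debated."""
    cost = 0
    for r, c in cells:
        for nbr in ((r, c + 1), (r + 1, c)):
            if nbr in assign and assign[(r, c)] != assign[nbr]:
                cost += community_penalty if community[(r, c)] == community[nbr] else 1
    return cost

# A split that follows the A/B community border (6 cells per district)...
along_border = {(r, c): int(c >= 2) for r, c in cells}

# ...versus an equal-size split that slices through both communities.
d0 = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1)}
sliced = {cell: 0 if cell in d0 else 1 for cell in cells}

print(weighted_cuts(along_border), weighted_cuts(sliced))
```

Changing `community_penalty` changes which plans win, and that is exactly the
kind of choice that would be out in the open, in published source code, for
anyone to challenge.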

Matt responds:

Let's be careful to avoid conflating stability and impact with bias.  The 
optimization algorithms and associated models can and should be unbiased.  But the 
only way this can be verified is by publishing the source code.  Most people won't 
understand the algorithm or the software, but I think enough people have the required 
training and knowledge to provide the oversight needed to keep that part honest.

Ernest Prabhakar writes:

> I don't know enough about the other optimization methods, such as
> simulated annealing and genetic algorithms, to comment on them.  I don't
> care what method is used.

Fair enough.  The principle is the main thing.  If we could agree on
that, implementation is a detail.
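
To illustrate that the method really is an interchangeable detail, here is a
minimal simulated-annealing sketch over a cut-edge ("circumference")
objective -- the number of grid edges running between two districts.  The
grid, cooling schedule, and random seed are all made up for illustration.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

ROWS, COLS = 3, 4
cells = [(r, c) for r in range(ROWS) for c in range(COLS)]

def cut_edges(assign):
    """Count grid edges whose two cells lie in different districts."""
    cut = 0
    for r, c in cells:
        for nbr in ((r, c + 1), (r + 1, c)):
            if nbr in assign and assign[(r, c)] != assign[nbr]:
                cut += 1
    return cut

# Start from a deliberately non-compact checkerboard split (6 cells each).
assign = {(r, c): (r + c) % 2 for r, c in cells}
start_cuts = cut_edges(assign)

temp = 5.0
for _ in range(2000):
    a, b = random.sample(cells, 2)
    if assign[a] == assign[b]:
        continue  # swapping within one district changes nothing
    before = cut_edges(assign)
    assign[a], assign[b] = assign[b], assign[a]  # swap keeps sizes equal
    delta = cut_edges(assign) - before
    # Always accept improvements; accept worsenings with a chance that
    # shrinks as the temperature cools.
    if delta > 0 and random.random() >= math.exp(-delta / temp):
        assign[a], assign[b] = assign[b], assign[a]  # undo the swap
    temp *= 0.995

print(start_cuts, cut_edges(assign))  # boundary length before and after
```

Any other search method (an integer program, a genetic algorithm) plugged
into the same objective would be judged by the same score, which is why the
principle matters more than the implementation.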

-- Ernie P.
----
Election-methods mailing list - see http://electorama.com/em for list info
