First, the correction: This correction isn't about eliminating or determining bias with a uniform distribution. What I've said about that situation is correct. Bias-Free is genuinely without bias, in the sense that the expected s/q is the same in every interval between two successive integers, if the probability distribution is assumed uniform.
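For the uniform case, that unbiasedness can be checked directly. The expected s/q over the interval [a, a+1], with rounding point R, is a*ln(R/a) + (a+1)*ln((a+1)/R) (states below R get a seats, states at or above R get a+1). Setting that equal to 1 and solving gives the closed form R = (a+1)^(a+1) / (a^a * e). A minimal sketch (the function names are mine, not from any standard library):

```python
import math

def bias_free_R(a):
    """Rounding point in [a, a+1] making the uniform-density expected
    s/q equal 1; closed form from solving
    a*ln(R/a) + (a+1)*ln((a+1)/R) = 1 for R."""
    return (a + 1) ** (a + 1) / (a ** a * math.e)

def expected_s_over_q(a, R):
    """Expected s/q over [a, a+1] under a uniform density: integrate
    s(q)/q with s(q) = a for q < R and a+1 for q >= R."""
    return a * math.log(R / a) + (a + 1) * math.log((a + 1) / R)

for a in (1, 5, 20, 53):
    R = bias_free_R(a)
    # expected s/q comes out as 1 in every interval
    print(a, round(R, 4), expected_s_over_q(a, R))
```

For a = 1 this gives R = 4/e, about 1.4715, matching the 1.47 figure cited below for the lowest interval.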
And, regardless of the probability-density distribution, Bias-Free is genuinely unbiased in the sense that the average s/q in every interval between two successive integers is the same. That statement doesn't involve any assumption about the probability-density distribution. It speaks of an average s/q over all q in the interval.

The correction is about the case of a non-uniform probability-density. I'd previously said that the expected s/q in the interval between consecutive integers a and b is:

The integral from a to b of (s(q)/q)*F(q), with respect to q,

where s(q) is the number of seats as a function of q, and F(q) is the probability-density as a function of q.

But, to get the expected s/q in the interval, the above integral has to be divided by the integral from a to b of F(q), with respect to q.

The expression for the expected s/q in the interval is set equal to 1, and the resulting equation is solved for R, the rounding point.

Now the clarifications:

I'd said:

I wasn't trying to achieve anything relating to Warren's solution. And, if the distribution is non-monotonic, has a peak, and, at the low end, is an increasing function, then Warren's exponential assumption can't be of any use for making an unbiased method. Not if there are any states that could be in the region where it's an increasing function.

[endquote]

That's true, but it's also true that if any states could even be in the region in which F has a negative second derivative, then Warren's exponential won't be accurate. But yes, Warren's exponential is at its worst if there are states in the region of increasing F.

I'd said:

As for Warren's solution: His main recommendation, a rounding point that, in the interval between integers n and n+1, is n+.495, results in an allocation method whose bias is about the same as that of Webster. Not really an improvement.

[endquote]

Shall I justify that statement?
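The corrected procedure can be sketched numerically. This is only one convenient way to do it (midpoint-rule integration plus bisection; the function names are mine), but it shows the division by the integral of F:

```python
import math

def expected_s_over_q(a, R, F, n=4000):
    """Expected s/q over [a, a+1], per the corrected formula: the
    integral of (s(q)/q)*F(q) dq divided by the integral of F(q) dq.
    s(q) = a below the rounding point R, a+1 at or above it."""
    h = 1.0 / n
    num = den = 0.0
    for i in range(n):
        q = a + (i + 0.5) * h          # midpoint rule
        s = a if q < R else a + 1
        w = F(q) * h
        num += (s / q) * w
        den += w
    return num / den

def unbiased_R(a, F):
    """Bisect for the R in [a, a+1] that makes expected s/q equal 1.
    Expected s/q falls as R rises (more q's are kept at a seats)."""
    lo, hi = a, a + 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if expected_s_over_q(a, mid, F) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A uniform density recovers Bias-Free's rounding point in [1, 2],
# about 1.4715; any other density F just plugs in as a function of q.
print(unbiased_R(1, lambda q: 1.0))
```

With a uniform F the division by the integral of F changes nothing (it's just dividing by the interval width), which is why the correction only matters in the non-uniform case.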
Assuming a uniform probability-density, the expected s/q in the interval between two consecutive integers a and b is obtained by integrating s(q)/q, with respect to q, between a and b.

For Warren's rounding point of a+.4954, evaluate that expected s/q for the interval from 1 to 2, and for the interval from 53 to 54.

Why those intervals? Because 1 to 2 is the lowest interval for which we can expect and demand equal s/q, and the most populous state (California) is getting around 53 seats.

Divide the expected s/q in the higher interval by the expected s/q in the lower interval. Do the same comparison for Webster. The results will be about the same. That can also be seen from the fact that Warren's rounding point is almost exactly where Webster's rounding point is.

For comparison, BF's rounding point in the lower interval is at about 1.47.

Just from the fact that BF is unbiased, under the conditions assumed, and that Webster's and Warren's methods have rounding points that are both much greater than that of BF, and much closer to each other than they are to BF, it's clear that my statement in my previous post is correct.

Now, that's for a uniform probability-distribution. What if the probability-density is the exponential that Warren's method is based on?

Well, the exponential's greater-magnitude negative first derivative in the lower region means that the exponential probability-density will bring increased large-bias. But Warren's method is already large-biased, even with the uniform distribution, almost as much so as Webster. Therefore it will be even more large-biased with Warren's exponential distribution function, which is the basis of Warren's allocation method.

Q.E.D.

> So I guess we would have something between Warren's exact solution

I replied:

Warren's exact solution for what? Certainly not for a rounding point for an unbiased divisor method.

[endquote]

See above.
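That comparison can be carried out directly under the uniform-density assumption. A sketch (using 0.495 for Warren's offset, as quoted earlier; using .4954 instead changes nothing material, and the function names are mine):

```python
import math

def expected_s_over_q(a, R):
    """Uniform-density expected s/q over [a, a+1] with rounding point R:
    the integral of s(q)/q, with s = a below R and a+1 at or above R."""
    return a * math.log(R / a) + (a + 1) * math.log((a + 1) / R)

def bias_ratio(delta):
    """Expected s/q in [53, 54] divided by expected s/q in [1, 2],
    for a rounding point of a + delta in each interval."""
    return expected_s_over_q(53, 53 + delta) / expected_s_over_q(1, 1 + delta)

def bf_R(a):
    """Bias-Free rounding point in [a, a+1] (uniform density)."""
    return (a + 1) ** (a + 1) / (a ** a * math.e)

warren = bias_ratio(0.495)    # Warren's recommended rounding point
webster = bias_ratio(0.5)     # Webster: round at the midpoint
bf = expected_s_over_q(53, bf_R(53)) / expected_s_over_q(1, bf_R(1))
print(warren, webster, bf)
```

Both ratios come out near 1.02, i.e. the high interval is favored by about 2% over the low one in either method, while Bias-Free's ratio is exactly 1. That is the justification for saying Warren's rounding point is not really an improvement on Webster.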
I'd said:

If F is approximated by a Taylor polynomial, then of course the antidifferentiation is analytical. But it certainly looks to me, for the reason that I gave earlier, that the resulting equation wouldn't be analytically solvable for R, and would require the use of an iterative equation-solving method.

[endquote]

I was speaking of approximating the complicated function (log-normal, or the more complicated function that Kristofer had spoken of) by a Taylor polynomial.

I'd said:

One could choose a polynomial approximation for F(q) that would give an analytical solution for R.

[endquote]

That needs clarification. I wasn't referring to a polynomial approximation of one of the complicated functions. Of course there can be only one polynomial that approximates in the way that the Taylor polynomial does. I was referring instead to a reasonable polynomial approximation, by interpolation or least squares, as I'd described in an earlier posting:

Number the states, starting with the smallest. Those are the cumulative state-numbers for the various states. Regard the cumulative state-number as a function of q; I call it G(q).

By interpolation, or by least squares, using the G(q) data points consisting of each state's cumulative state-number and its q, approximate G(q) with a polynomial function. Differentiate that polynomial function. That gives you the probability-density, F(q).

Mike Ossipoff
----
Election-Methods mailing list - see http://electorama.com/em for list info
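That G(q) procedure can be sketched with NumPy's least-squares polynomial fit. The quota list below is made-up illustrative data, not real state quotas, and the degree-3 fit is an arbitrary choice:

```python
import numpy as np

# Hypothetical quotas q for ten "states", sorted smallest to largest.
quotas = np.array([0.6, 0.9, 1.1, 1.5, 2.2, 3.0, 4.1, 5.5, 7.4, 9.8])
G = np.arange(1, len(quotas) + 1)      # cumulative state-number for each q

# Least-squares polynomial approximation of G(q).
coeffs = np.polyfit(quotas, G, deg=3)

# Differentiate the fitted polynomial: F(q) = G'(q) is the
# (unnormalized) probability-density estimate.
F_coeffs = np.polyder(coeffs)

# Density estimate at some q in the data's range.
print(np.polyval(F_coeffs, 2.0))
```

Because F ends up as a polynomial, the integrals in the expected-s/q equation antidifferentiate analytically, which is the point of choosing this kind of approximation; whether the resulting equation is then solvable for R in closed form depends on the degree.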
