Forest Simmons wrote:
> That still leaves open at least two important questions.
>
> (1) How do we ascertain voter utilities accurately? The uncertainty
> principle operates here; the measurement process inevitably introduces
> uncertainties. How do we minimize this uncertainty to the extent
> possible?
>
> (2) Voter utility is a vector valued function. Which scalar combination
> of components of that function are we trying to maximize? Their sum?
> Their median value? Their mode? Their sum of squares? Their minimum?
> Their lowest quartile value? The distance from 100 percent utility in all
> components as measured by some metric or another?
>
> There are infinitely many possible choices for (2), and they all affect
> (1).
>
> How many methods have actually been designed with these considerations at
> the forefront?
>
> How many methods, by sheer luck or the good intuition of the inventor,
> stand up reasonably well under the light of these questions?
>
> Not many.
>
> Forest
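
To make question (2) concrete, here is a minimal Python sketch using a made-up 4-voter, 3-candidate utility matrix (the mode is omitted since it is not well defined for continuous ratings). Even on this tiny example the scalarizations listed above disagree about the winner:

import numpy as np

# Hypothetical utilities: rows are voters, columns are candidates A, B, C.
# Three voters love A and find B acceptable; one voter loves C instead.
U = np.array([[1.0, 0.7, 0.0],
              [1.0, 0.7, 0.0],
              [1.0, 0.7, 0.0],
              [0.0, 0.6, 1.0]])

# Each entry maps a scalarization name to a function returning one score
# per candidate (higher is better).
scalarizations = {
    "sum":              lambda u: u.sum(axis=0),
    "median":           lambda u: np.median(u, axis=0),
    "sum of squares":   lambda u: (u ** 2).sum(axis=0),
    "minimum":          lambda u: u.min(axis=0),
    "lowest quartile":  lambda u: np.quantile(u, 0.25, axis=0),
    # Distance from 100 percent utility in every component (Euclidean
    # metric); negated so that "larger score wins" still applies.
    "-dist from ideal": lambda u: -np.linalg.norm(1.0 - u, axis=0),
}

for name, f in scalarizations.items():
    scores = f(U)
    print(f"{name:>17}: winner = {'ABC'[int(np.argmax(scores))]}, "
          f"scores = {np.round(scores, 2)}")

With these particular numbers the sum, median, sum of squares, and lowest quartile all elect the majority favorite A, while the minimum and the distance-from-ideal metric elect the compromise B, which is exactly how the choice in (2) feeds back into (1).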
L2-normalized CR (cardinal ratings), under zero-information conditions, gives the voter no incentive to exaggerate or distort ratings. Of course, the presence of information about outcome probabilities will introduce such incentives, and may even cause inversion of preferences. Approval avoids that inversion, but only by sacrificing resolution.

Approval might be thought of as the best first-order approximation. It seems to me that, in non-zero-information situations, any higher-order approximation will introduce errors that could be even more significant than the quantization error in Approval. Since Approval introduces only quantization noise, I suspect it does a good job of averaging out the influence of information-driven strategies, given a diverse electorate. (Perhaps a good electronics analogy would be a delta-sigma converter -- think of the 1-bit D/A converters found in many CD players.)

-- Richard
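
For concreteness, a small Python sketch of the two ballot formats being compared above: the sincere utility vector is scaled to unit L2 norm for CR, and the same vector is collapsed to a one-bit approval ballot. (Thresholding at the voter's mean utility is just one common zero-information convention, an assumption of this sketch rather than part of the argument.)

import numpy as np

def l2_normalized_cr(utilities):
    """Scale a sincere utility vector to unit Euclidean (L2) norm."""
    u = np.asarray(utilities, dtype=float)
    norm = np.linalg.norm(u)
    return u / norm if norm > 0 else u

def approval_ballot(utilities):
    """Collapse the same utilities to one bit per candidate.

    Thresholding at the voter's mean utility is one zero-information
    heuristic; it is an assumption here, not the only possible rule.
    """
    u = np.asarray(utilities, dtype=float)
    return (u > u.mean()).astype(int)

# Hypothetical sincere utilities for candidates A, B, C.
sincere = [0.9, 0.5, 0.1]
print("L2-normalized CR ballot:", np.round(l2_normalized_cr(sincere), 3))
print("Approval (1-bit) ballot:", approval_ballot(sincere))

Summing many such one-bit ballots over a diverse electorate is where the delta-sigma analogy would come in: the per-ballot quantization error tends to average out.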
