I agree that for simple simulations, simple models and functions may
well be enough. I thought about different possibilities a bit and
drafted one function that captures quite well what I had in mind. One
function hopefully says more than a thousand words.
f(x,y) = c*max(x,y) + (1-c)*min(x,y) + min(c,1-c)*((2a-1)*x + (1-2a)*y)
This function takes two arguments, x and y, that are opinions in two
dimensions.
I assumed that human opinions / preferences can be expressed as values
on a fixed range. In this case the x and y values are in the range
0..1. This means that if one wants to describe values with an infinite
range, one must first use some function to map them to this fixed
range. That question is thus separate from the question I intend to
discuss here: how to combine opinions from two different opinion
dimensions. The results of the function are also in the range 0..1.
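The formula can be written directly as a short Python sketch (this is
just a transcription of the function above, with the same parameter
names; the defaults c=0.5, a=0.5 are my own choice for the neutral
case):

```python
# Sketch of the proposed two-dimensional opinion-combination function.
# x and y are opinions in [0, 1]; c and a are the two adjustable
# parameters described below. The output is also in [0, 1].
def f(x, y, c=0.5, a=0.5):
    return (c * max(x, y)
            + (1 - c) * min(x, y)
            + min(c, 1 - c) * ((2 * a - 1) * x + (1 - 2 * a) * y))
```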
There are two adjustable parameters, c and a. Parameter c describes
whether the voter/person puts more emphasis on getting good results in
both dimensions, or whether getting good results in one dimension is
enough. If c=0 the function becomes simply min(x,y). This means that
the voter is maximally happy only if both targets (dimensions) are met
fully, and she is unhappy if either of the dimensions fails. If c=1
the function becomes max(x,y), and the voter is happy if she gets good
results in either of the dimensions, and is unhappy only if both
targets fail. If c=0.5 and a=0.5 the function is neutral with respect
to min/max behaviour. It becomes (x+y)/2, and both dimensions have the
same impact on the output. (In this case the two dimensions could just
as well be merged into a single dimension.)
Parameter a indicates which of the dimensions is more important. If
a=1 then x has more weight than y, and vice versa for a=0. If c=0.5
(neutral with respect to min/max behaviour) the function becomes
a*x + (1-a)*y, and parameter a simply tilts the plane in one direction
or the other (x has more impact and y less, or the other way around).
I made the function such that parameter c is stronger than a, in the
sense that as c approaches 0 or 1 the influence of a shrinks and
finally vanishes. Other approaches are possible too, but this one
seemed intuitive enough.
These two parameters seem to cover a somewhat sensible variation of
possible human behaviour/utility patterns. What do you think? Any use
in simulations of human behaviour, or elsewhere?
One simple approach to handling more than two dimensions would be to
build the multi-dimensional space from multiple two-parameter
functions of this kind. For example, if I want world peace and a clean
environment, and I think that neither of them is very enjoyable
without the other, but alternatively I might also be happy if I got
enough money, then my utility function in some election could be as
follows. I'll use the notation f(c,a,x,y).
utility(i) = f( 0.9, 0.4, f( 0.1, 0.6, world_peace(i), clean_environment(i) ), money(i) )
where "world_peace(i)" = "utility of the anticipated level of world
peace if candidate i was elected", etc.
In short this means "(world peace and clean environment) ... or
money". "And" corresponds to "min", and "or" to "max".
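As a sketch, the nested example could be coded like this (the
per-candidate scores used below are made-up illustration values, not
anything from this post):

```python
# f(c, a, x, y): argument order as in the notation used above.
def f(c, a, x, y):
    return (c * max(x, y) + (1 - c) * min(x, y)
            + min(c, 1 - c) * ((2 * a - 1) * x + (1 - 2 * a) * y))

# "(world peace AND clean environment) OR money"
def utility(world_peace, clean_environment, money):
    return f(0.9, 0.4,
             f(0.1, 0.6, world_peace, clean_environment),
             money)
```

With, say, world_peace=0.1 and clean_environment=0.1 but money=0.9,
the "or money" branch dominates and the utility stays high.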
Juho
P.S. Use Mac and Grapher and set parameters to:
View / Frame Limits... => x: 0...1 y: 0...1 z: 0...1
c = 0.2
a = 0.9
z = c*max(x,y) + (1-c)*min(x,y) + min(c,1-c)*((2a-1)*x+(1-2a)*y)
P.P.S. There's some resemblance to fuzzy logic.
On Jun 9, 2010, at 5:21 PM, Kevin Venzke wrote:
Hi Warren,
--- On Tue, 8.6.10, Warren Smith <[email protected]> wrote:
1. I think using
utility = -distance
is not as realistic as something like
utility = 1/sqrt(1+distance^2)
I claim the latter is more realistic both near 0 distance and near
infinite distance.
Why would that be? Do you mean it's more intuitive?
--because utility is not unboundedly large. If a candidate gets
further from you, utility does not get worse and worse, dropping to
-infinity.
No. Eventually, as he moves away, the candidate approaches the worst
he can be for you, which is, say, advocating your death; and then
moving the candidate twice as far away doesn't make him twice as bad
from your perspective, and 10X as far doesn't make him 10X worse. It
only makes him a little worse.
So my formula behaves better near infinity.
A difficulty with this is that you have to know where this reduction
in effect (of distance) occurs in comparison to where the voters are.
In other words, are there really voters who advocate policies so bad
for me that I can't feel any difference among them, while they can?
Also, near 0 distance, it seems plausible there is a smooth generic
peak, like the valley in U, not in V, which has a corner. Hence again
my formula is more realistic near 0.
Why should there be a singularity at 0? Shouldn't utility depend
smoothly on location?
If it should, then you must refuse to permit corners.
This seems to have the same difficulty, of where the curve is. Suppose
the issue is how close the bus will drop me off to my stop. Maybe
there is a curve... Maybe 1 meter isn't twice as good as 2 meters. But
maybe 1 mile is twice as good as 2 miles. Within a simulation it's not
clear what we're talking about.
Incidentally, the formula could be
A/sqrt(B+distance^2)
where A and B are positive constants chosen to yield reasonable
results.
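A minimal sketch of this bounded-utility shape (the default A=B=1
reproduces the 1/sqrt(1+distance^2) formula quoted above; other values
of A and B are just illustration):

```python
import math

# Bounded utility: finite at distance 0 and tending to 0 (rather than
# -infinity) as distance grows. A and B are positive constants.
def bounded_utility(distance, A=1.0, B=1.0):
    return A / math.sqrt(B + distance ** 2)
```

Unlike utility = -distance, this has a smooth peak at distance 0 and
flattens out at large distances.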
2. It has been argued that L2 distance may not be as realistic as L1
distance.
L2 = Euclidean
L1 = taxicab
That's interesting. I wonder what arguments were used.
--well, it was claimed. It's debatable. If I differ from you on 3
issues, that ought to be 3X as bad as 1 issue, not sqrt(3) times as
bad. It seems to make some sense.
Yes, I'm thinking it makes sense at the moment.
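A quick illustration of the L1-vs-L2 point (a hypothetical sketch, not
code from either post): differing by one unit on each of three issues
is 3X as bad under L1, but only sqrt(3) times as bad under L2.

```python
# L1 (taxicab) and L2 (Euclidean) distances between opinion vectors.
def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def l2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```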
Well, it would be better to cycle over some of the locations, but
taking the average over all possible locations would not be very good
evidence either, since not all locations are equally likely.
--average over the correct nonuniform distribution of location-tuples.
I admit, what that is, is not obvious :)
But eventually you'll have to summarize in one number, which means you
have to do this. With some luck it may turn out not to matter too much
which distribution is chosen from among a few reasonable ones.
I think it's more likely that rather than guess at the correct
distribution, I would try to categorize all possible scenarios
according to noteworthy effects.
It's pretty clear to me that if you just toss out candidates randomly,
RangeNS will usually win. It just happens that in the scenarios I pick
out as being of interest to me, RangeNS isn't usually winning. So I
would like to investigate this to find exactly what circumstances
cause methods like Bucklin or DAC to prevail.
Kevin Venzke
----
Election-Methods mailing list - see http://electorama.com/em for list info