Owen writes:

"The NFL paper certainly gave me some doubts but it seemed amazing how 
effective GA's, Ant Algorithms, and so on were .. at least in their own domain."


GA's are not an effective way to solve NP-hard, high-dimensional constrained 
optimization problems (> 1000 variables) -- problems like the distribution of 
shared resources.  For that you need algorithms designed for the job and a lot 
of brute force (see SCIP, CPLEX, etc.).  In public and private organizations we 
see dimensional reduction by the introduction of hierarchy.  Individualists 
(sic) pretend the shared resources aren't finite, or think that by taking as 
much ground as they can, they can better control the shared resource -- that 
their hierarchy is the best one.


Marcus

________________________________
From: Friam <[email protected]> on behalf of Owen Densmore 
<[email protected]>
Sent: Saturday, December 29, 2018 11:05:58 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] 2019 - The end of Trumpism

This reminded me of a seriously ancient post on Arrow's theorem, see forward 
below.

I particularly liked the examples in
    http://www1.udel.edu/johnmack/frec444/444voting.html
showing the surprises that can pop up. The first showed an example where the 
majority favorite was also the most disliked!
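As a toy illustration of that kind of surprise (with made-up ballot counts, not 
the numbers from the linked page), plurality can crown a candidate whom a 
majority of voters ranks dead last:

```python
from collections import Counter

# Hypothetical ballots (illustrative numbers only): each ballot ranks
# candidates A, B, C from most to least preferred.
ballots = (
    [["A", "B", "C"]] * 8 +   # 8 voters put A first ...
    [["B", "C", "A"]] * 7 +   # ... but the other 13 voters rank A last
    [["C", "B", "A"]] * 6
)

first_place = Counter(b[0] for b in ballots)
last_place = Counter(b[-1] for b in ballots)

plurality_winner = first_place.most_common(1)[0][0]
most_disliked = last_place.most_common(1)[0][0]

print(plurality_winner, first_place[plurality_winner])  # A wins with 8 of 21
print(most_disliked, last_place[most_disliked])         # A is ranked last by 13 of 21
```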

That led me, when I first arrived here in 2002 after the 2000 SFI Complexity 
summer school, to work my way through:
    How to Solve It: Modern Heuristics
    Zbigniew Michalewicz & David B. Fogel
"Stochastic Processes" certainly seemed a bit magic.

The NFL paper certainly gave me some doubts but it seemed amazing how effective 
GA's, Ant Algorithms, and so on were .. at least in their own domain. Here are 
two:
http://backspaces.github.io/as-app3d/models/?ants
http://backspaces.github.io/as-app3d/models/?tsp
(The TSP stops after 500 steps w/o improvement. Open console for info.)
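For concreteness, here is a minimal mutation-only GA sketch for a small random 
TSP instance -- a generic illustration of the approach, not the as-app3d models 
linked above:

```python
import math
import random

random.seed(0)

# A toy (mu + lambda) evolutionary search on a 12-city random TSP.
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    """Reverse a random segment (a 2-opt-style move) to get a child tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Mutation-only evolution, to keep the sketch short: each generation,
# add 30 mutated children and keep the 30 shortest tours.
pop = [random.sample(range(12), 12) for _ in range(30)]
for _ in range(300):
    pop += [mutate(random.choice(pop)) for _ in range(30)]
    pop = sorted(pop, key=tour_length)[:30]

best = pop[0]
print(round(tour_length(best), 3))
```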

   -- Owen

---------- Forwarded message ---------
From: Owen Densmore <[email protected]>
Date: Thu, Dec 3, 2009 at 6:06 PM
Subject: Arrow's Impossibility Theorem
To: Roger Critchlow <[email protected]>, Nicholas Thompson 
<[email protected]>, Jim Gattiker <[email protected]>, Chip Garner 
<[email protected]>, Frank Wimberly <[email protected]>
Cc: Owen Densmore <[email protected]>


Here's a very old post (Dec 03) from the FRIAM list that discusses
Arrow's Impossibility Theorem.  It proves the impossibility of uniquely
resolving social preferences from individual preferences given more
than two things to choose among.

    -- Owen

During the last Friam, we got talking about voting and Arrow's
Impossibility Theorem came up.  It basically discusses anomalies in
voting when there are more than two choices being voted upon.

The result depends strongly on how the votes are tallied.  So for
example, in our last election, due to having three candidates, we
entered the Arrow regime.  But "spoilers" like Ralph are not the only
weirdness.
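Condorcet's classic three-ballot example shows the basic anomaly directly: 
every candidate loses some pairwise majority contest, so "the majority's 
choice" is not even well defined:

```python
from itertools import combinations

# Condorcet's three-voter cycle: each ballot ranks A, B, C,
# most preferred first.
ballots = [["A", "B", "C"],
           ["B", "C", "A"],
           ["C", "A", "B"]]

def prefers(ballot, x, y):
    """True if this ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

def majority_winner(x, y):
    """Winner of a head-to-head majority vote between x and y."""
    votes_for_x = sum(prefers(b, x, y) for b in ballots)
    return x if votes_for_x * 2 > len(ballots) else y

for x, y in combinations("ABC", 2):
    print(x, "vs", y, "->", majority_winner(x, y))
# A beats B, B beats C, yet C beats A: a majority cycle.
```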

The html references below have interesting examples, and the pdf
reference is a paper by SFI's John Geanakoplos who gave a public
lecture last year.

"Fair voting" schemes are getting some air-time nowadays.  There are
several forms, but the most popular, I think, has you rank your
candidates in order of preference, the "top-most" being your
current vote.  Successive run-offs eliminate the poorest performer and
let you vote again, with the highest of your remaining ranks becoming
your vote.  This ensures you always have a vote if you want one.  This
would have won the election here for Gore, for example, presuming the
Nader votes favored Gore.
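That run-off scheme (instant-runoff voting) can be sketched in a few lines, 
here with made-up ballots illustrating the Gore/Nader scenario:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: each round, tally every ballot's highest
    surviving choice; if no candidate has a majority, eliminate the
    weakest and let the ballots transfer to their next choice."""
    candidates = set(c for b in ballots for c in b)
    while True:
        tally = Counter(next(c for c in b if c in candidates) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        candidates.remove(min(tally, key=tally.get))

# Toy three-way split: plurality would elect "Bush" (9 of 20 first
# choices), but eliminated "Nader" ballots transfer to "Gore".
ballots = ([["Bush", "Gore", "Nader"]] * 9 +
           [["Gore", "Nader", "Bush"]] * 8 +
           [["Nader", "Gore", "Bush"]] * 3)
print(instant_runoff(ballots))  # Gore
```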

Various web pages with examples:
  http://www.udel.edu/johnmack/frec444/444voting.html
  http://www.sjsu.edu/faculty/watkins/arrow.htm
  http://www.math.upenn.edu/~kazdan/210/voting/notes_on_arrow.html
Three proofs by John Geanakoplos
  http://cowles.econ.yale.edu/P/cd/d11a/d1123-r.pdf

Owen Densmore          908 Camino Santander       Santa Fe, NM 87505
[email protected]<mailto:[email protected]>    Cell: 505-570-0168         
Home: 505-988-3787

On Sat, Dec 29, 2018 at 6:31 AM Eric Smith <[email protected]> wrote:
Steve,

I wonder if there is a game theory problem to be worked on here.

Referring to your statement:

>> Arrow's Impossibility is real but no more significant IMO than the 
>> real-world ambiguities and paradoxes introduced by practical realities such 
>> as voter suppression and fraud, system hacking and mechanical errors (e.g. 
>> hanging chads)…

The Impossibility Theorem has the character of a case-existence proof: for any 
algorithm, there is a case of voter preferences for which that algorithm 
produces an unwanted outcome.  In the sense of only counting cases, it reminds 
me of no-free-lunch theorems: for any algorithm that is fast to solve one 
problem of combinatorial search, there is some other problem for which it is 
slow.  However, the NFL _theorem_ — that no algorithm is any better than any 
other — depends on an appropriately symmetric search space and a suitable 
associated uniform measure over problems on that space.  If search and 
optimization are embedded in a larger dynamic where correlation between 
algorithms is allowed, there can be globally better or worse approaches.  I don’t
(as in every other area) have details and references ready in memory, but David 
Wolpert wrote some of his later papers on NFL addressing the ways it ceases to 
apply under changed assumptions.
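The counting flavor of NFL can be checked exhaustively on a toy finite case: 
averaged uniformly over all 81 functions from a 4-point domain to {0, 1, 2}, 
any two fixed search orders take exactly the same expected number of 
evaluations to find the maximum:

```python
from itertools import product

# Two fixed deterministic search orders over the 4-point domain.
orders = {"forward": [0, 1, 2, 3], "shuffled": [2, 0, 3, 1]}

def evals_to_max(f, order):
    """Number of evaluations until this order first hits f's maximum."""
    best = max(f)
    for k, x in enumerate(order, 1):
        if f[x] == best:
            return k

totals = {name: 0 for name in orders}
n_funcs = 0
for f in product(range(3), repeat=4):   # all 3**4 = 81 objective functions
    n_funcs += 1
    for name, order in orders.items():
        totals[name] += evals_to_max(f, order)

# Averaged over ALL functions, the two orders perform identically:
for name in orders:
    print(name, totals[name] / n_funcs)
```

The equality is exact because permuting the domain maps the set of all 
functions onto itself, which is precisely the symmetry assumption Eric notes 
the theorem depends on.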

I wonder if anyone has done an analysis of Arrow Impossibility in a context of 
a kind of ecosystem of adversaries.  To game any algorithm, crucially with the 
outcome that not only _some_ voter is handled poorly, but that _a sufficiently 
large pool_ of voters is handled poorly that the algorithm is not best, 
requires arranging the preference case that violates the algorithm for suitably 
many voters.  Is this coordination problem harder for some preference-orders 
than for others?  Is there something akin to “canalization” in evolutionary 
biology, where some algorithms live further from the boundary of being 
collectively tipped into producing the wrong outcome than others?  Thus, are 
there measures of robustness for statistical violation of algorithms based on 
what happens in most cases rather than what happens in the worst case, as there 
are for spin-glass phase transition problems?

Another thing it seems unlikely I will ever put time into being serious about.  
Or maybe there is already a large standing literature that claims to have 
addressed this.

Eric
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
