I just ran into the same problem (on a smaller scale) in another context. I want to evolve a strategy for what's called "combat" in the AI Challenge contest <http://aichallenge.org/> from last fall. (It's over now, but I'm using it in a class.)
The problem is that combat is between two (or more) teams (of ants). How well a combat strategy does depends on the strategies of the other team(s) at the time of the evaluation. I want to evolve all strategies against each other, but it's not clear how to take into account the dependency of a fitness value on the population in existence at the time it is calculated. I don't remember whether a good approach to that has been developed. I couldn't think of anything very clever.

-- Russ Abbott
_____________________________________________
Professor, Computer Science
California State University, Los Angeles
Google voice: 747-999-5105
Google+: https://plus.google.com/114865618166480775623/
vita: http://sites.google.com/site/russabbott/
_____________________________________________

On Mon, May 21, 2012 at 11:25 AM, Roger Critchlow <[email protected]> wrote:
>
> 2012/5/21 Owen Densmore <[email protected]>
>
>> Either of you finish the paper? Comments?
>>
>> -- Owen
>
> No, I can't seem to read anything these days.
>
> But the paper on the neural networks evolving strategies to play the
> prisoners' dilemma with each other was very much a comment.
>
> The fitness of an inherited strategy is defined entirely by the population
> of strategies it is born into, so you cannot evaluate the fitness of a
> neural network independent of the environment in which it plays. That
> environment is determined by the fitness of the strategies in play in the
> last generation of the game and by random number generators. The entire
> system is deterministic: you can integrate from any initial state by
> running the simulation, and you can generate an ensemble of outcomes by
> varying random number seeds and running simulations. Now, having run as
> many simulations as your budget allows, what do you know about the laws
> governing the system?
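[A common way to make the population-dependence explicit, which may be what Russ is after, is to define fitness only relatively: score each strategy by playing it against every other member of the current generation in a round-robin tournament, so a strategy's fitness is a property of the population it lives in rather than of the strategy alone. A minimal sketch, assuming a toy `play_match` payoff function and a strategy represented as a single attack probability; neither comes from the actual contest:]

```python
import random

def play_match(a, b):
    """Toy stand-in for a combat engine: returns (score_a, score_b).
    Each 'strategy' here is just a probability of attacking."""
    move_a = random.random() < a
    move_b = random.random() < b
    if move_a and not move_b:
        return 1.0, 0.0
    if move_b and not move_a:
        return 0.0, 1.0
    return 0.5, 0.5  # draw: split the point

def relative_fitness(population, rounds=10):
    """Round-robin fitness: every pair of strategies plays `rounds`
    matches, so each score is defined only relative to the current
    population, as in the discussion above."""
    scores = {i: 0.0 for i in range(len(population))}
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            for _ in range(rounds):
                sa, sb = play_match(population[i], population[j])
                scores[i] += sa
                scores[j] += sb
    return scores

random.seed(42)
pop = [random.random() for _ in range(8)]
fit = relative_fitness(pop)
# Selection would then pick parents in proportion to these scores;
# re-run with a different population and the same strategy's
# fitness changes, which is exactly the dependency in question.
```

[Each match distributes exactly one point between the two players, so the total fitness in the population is fixed: the only way for one strategy to score higher is for another to score lower, which is the zero-sum character of coevolutionary fitness.]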
> You know that the "organisms" evolve larger neural networks even though
> size is penalized, and that you don't have sufficient budget to enumerate
> the possible neural networks, or the possible populations of neural
> networks, or the possible random number streams mutating the outcomes of
> generations, or the possible encounter schedules between members of a
> population (which will matter when the neural nets learn to implement
> reinforcement learning). Although you know everything about the bits and
> the deterministic rules that make up a particular simulation, you don't
> know squat about laws that would allow you to predict the outcome of the
> next simulation. Each generation of simulation is a law unto itself.
>
> -- rec --
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
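[Roger's ensemble-of-seeds point can be made concrete: each run is fully deterministic given its seed, so the only empirical handle on the system's "laws" is the distribution of outcomes across seeds. A toy sketch, where `simulate` is an invented stand-in for any seeded evolutionary run, not a model of the prisoners' dilemma paper:]

```python
import random
import statistics

def simulate(seed, generations=50):
    """Stand-in for a deterministic evolutionary run: the same seed
    always reproduces the same outcome; a different seed is, in
    Roger's phrase, a law unto itself."""
    rng = random.Random(seed)  # private stream; no global state
    trait = 0.5
    for _ in range(generations):
        # drift the evolving trait, clamped to [0, 1]
        trait = min(1.0, max(0.0, trait + rng.gauss(0, 0.05)))
    return trait

# Determinism: rerunning one seed reproduces its outcome exactly.
assert simulate(1) == simulate(1)

# Ensemble: vary the seed and study the spread of outcomes --
# the best you can do on a finite simulation budget.
outcomes = [simulate(s) for s in range(100)]
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```

[The ensemble statistics describe the system without predicting any individual run, which is the asymmetry Roger is pointing at: total knowledge of the rules, almost none of the next outcome.]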
