On 7/11/2012 3:47 PM, Alberto G. Corona wrote:
Stephen:

Well, it's not cooperation between computer programs, but cooperation of entities at an abstract level. This can be described mathematically or simulated in a computer program. In both cases, it starts with a game whose rules, goals, wins and losses are defined.

If the game is simple and/or played by a small number of players (for example, two), the game can be analyzed with game-theoretic techniques to obtain the stable strategies with which each player optimizes its winnings, in the sense that no player can win more by unilaterally changing strategy, so the outcome is immune to attacks from other players. This is a Nash equilibrium.

http://en.wikipedia.org/wiki/Nash_equilibrium
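To make the idea concrete, here is a minimal sketch (payoff numbers are the classic Prisoner's Dilemma values, chosen for illustration) that checks each strategy profile of a two-player game for the Nash property: neither player can gain by unilaterally switching.

```python
# Illustrative two-player game: the classic Prisoner's Dilemma.
# (row_strategy, col_strategy) -> (row_payoff, col_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ["cooperate", "defect"]

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player can improve
    its payoff by unilaterally switching to another strategy."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    best_row = all(PAYOFFS[(alt, col)][0] <= r_pay for alt in STRATEGIES)
    best_col = all(PAYOFFS[(row, alt)][1] <= c_pay for alt in STRATEGIES)
    return best_row and best_col

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # mutual defection is the only pure-strategy equilibrium here
```

With these payoffs, mutual cooperation is not an equilibrium (each player gains by defecting), which is exactly why the further machinery below (memory, punishment, forgiveness) is needed.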

But when the game is too complex, or the players use different strategies, or they evolve and adapt (especially when the successful entities give birth to new generations with mutant strategies and/or strategies that mix those of the parents, in a way defined by the game), then it is necessary to simulate it with a computer program. This is part of the work of Axelrod. The evolution of generations is modeled with a genetic program:

http://en.wikipedia.org/wiki/Genetic_programming
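A toy sketch of this kind of simulation (this is my own illustration, not Axelrod's actual code): each entity's genome is simply its probability of cooperating, entities earn payoffs in round-robin one-shot games, successful ones reproduce, and offspring mutate slightly.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def move(p):
    """An entity with genome p cooperates with probability p."""
    return "C" if random.random() < p else "D"

def fitness(pop):
    """Round-robin one-shot games; each entity accumulates payoff."""
    scores = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = move(pop[i]), move(pop[j])
            pa, pb = PAYOFF[(a, b)]
            scores[i] += pa
            scores[j] += pb
    return scores

def next_generation(pop, mutation=0.05):
    """Parents are sampled proportionally to fitness; offspring mutate."""
    scores = fitness(pop)
    parents = random.choices(pop, weights=scores, k=len(pop))
    return [min(1.0, max(0.0, p + random.gauss(0, mutation))) for p in parents]

pop = [random.random() for _ in range(30)]
for _ in range(100):
    pop = next_generation(pop)
print(round(sum(pop) / len(pop), 2))  # mean cooperation probability after selection
```

Because these are one-shot games with no memory, defection takes over the population, which motivates the memory and recognition requirements discussed next.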

To summarize, any entity that collaborates needs a memory of past interactions with each other entity. In other words, it needs individual-recognition abilities and a form of "moral evaluation" of each individual.
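A minimal sketch of that requirement (the class name, partner names and threshold are illustrative assumptions, not from the game-theory literature): an entity keeps a per-partner record of past moves and derives a simple "moral evaluation" from it.

```python
from collections import defaultdict

class Collaborator:
    def __init__(self, threshold=0.5):
        self.history = defaultdict(list)  # partner id -> list of past moves
        self.threshold = threshold

    def remember(self, partner, cooperated):
        """Record one past interaction with an individually recognized partner."""
        self.history[partner].append(cooperated)

    def evaluation(self, partner):
        """Fraction of past cooperative moves; unknown partners get
        the benefit of the doubt."""
        moves = self.history[partner]
        return sum(moves) / len(moves) if moves else 1.0

    def will_collaborate(self, partner):
        return self.evaluation(partner) >= self.threshold

a = Collaborator()
a.remember("bob", True)
a.remember("bob", False)
a.remember("eve", False)
print(a.will_collaborate("bob"), a.will_collaborate("eve"))  # True False
```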

It also needs to punish free riders, even at the cost of its own well-being, in a way that makes the net gain of free riders negative; otherwise the collaborators will fail and the defectors/free riders will expand.

So the collaborators also need to collaborate in the task of punishing free riders, because this is crucial for the stability of collaboration in other tasks.
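The arithmetic behind both points can be sketched with illustrative numbers (the specific values are my assumptions, not from the text): punishment is costly for each punisher, and only enough coordinated punishers drive the free rider's net gain below zero.

```python
BENEFIT = 4       # what a free rider extracts from the group
PUNISH_COST = 1   # cost paid by each punisher for punishing
PENALTY = 3       # harm inflicted on the free rider per punisher

def net_gains(num_punishers):
    """Net payoff of the free rider and of each punisher."""
    free_rider = BENEFIT - PENALTY * num_punishers
    each_punisher = -PUNISH_COST
    return free_rider, each_punisher

# One punisher is not enough here; two coordinated punishers make
# free riding a losing strategy, at a cost to themselves.
print(net_gains(1))  # (1, -1)
print(net_gains(2))  # (-2, -1)
```

Since each punisher ends up worse off, punishing is itself a collaborative task that can be free-ridden on, which is the point of the sentence above.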

Forgiveness is another requirement of collaboration, especially when the entities produce spurious episodes of non-collaboration but collaborate most of the time. A premature punishment could make a collaborator punish in response, so the collaboration ends.
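This can be demonstrated with a toy simulation (my own sketch, with assumed noise and forgiveness rates): two tit-for-tat players whose moves are occasionally flipped to defection by noise. Without forgiveness they spiral into mutual retaliation; a small forgiveness probability lets them recover.

```python
import random

def run(forgiveness, rounds=2000, noise=0.05, seed=7):
    """Fraction of cooperative moves between two noisy tit-for-tat players."""
    random.seed(seed)
    last = {"a": "C", "b": "C"}   # each starts by cooperating
    coop = 0
    for _ in range(rounds):
        # tit-for-tat: copy the partner's last move, but with probability
        # `forgiveness` let a defection pass unpunished
        a = last["b"] if last["b"] == "C" or random.random() > forgiveness else "C"
        b = last["a"] if last["a"] == "C" or random.random() > forgiveness else "C"
        # noise: spurious non-collaboration despite a cooperative intent
        if random.random() < noise:
            a = "D"
        if random.random() < noise:
            b = "D"
        coop += (a == "C") + (b == "C")
        last = {"a": a, "b": b}
    return coop / (2 * rounds)

print(run(forgiveness=0.0) < run(forgiveness=0.2))
```

With zero forgiveness, two accidental defections lock the pair into permanent mutual punishment; the forgiving pair sustains a much higher cooperation rate.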

In these games the goals are fixed. In more realistic games the goals vary, and the means to obtain them depend on knowledge and assumptions/beliefs, so homogeneity within the group around both is required for collaboration. There is surely a trade-off between mind sharing and punishment: the less mind sharing, the more violent the punishment needed for a stable collaboration. To verify mind sharing and investment in the group collaboration, periodic public meetings are needed, where protocols/rituals of mutual recognition are repeated to assure each member that the others are in line. For example: visiting a temple each week, discussing the same newspaper, or attending minority rock concerts (or mutually exchanging checksums of the program content of each entity).
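The last idea in that paragraph can be sketched directly for program-entities (the belief strings and names are illustrative assumptions): each entity hashes a canonical serialization of its belief content, and matching checksums certify that the other is "in line".

```python
import hashlib

def checksum(beliefs):
    """Canonical serialization first, so equal belief-sets hash identically."""
    data = "\n".join(sorted(beliefs)).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

alice = {"punish free riders", "forgive occasional defection"}
bob   = {"forgive occasional defection", "punish free riders"}
eve   = {"exploit cooperators"}

print(checksum(alice) == checksum(bob))  # True: in line with the group
print(checksum(alice) == checksum(eve))  # False: flagged for scrutiny
```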

But this is not the last word. It is a world of infinite complexity. For example, a strategy for avoiding free riders or for mind sharing can be exploited by meta-free-riders. Among humans, when trust is scarce, sacrifices in the temples, blood pacts and violent punishments become necessary to avoid free riders and keep the collaboration stable.

All of this does not change whether the entities are humans, robots or programs. Evolutionary game theory is a field of active research by economists, lawyers, moralists, computer scientists, philosophers, psychologists, etc.

Matt Ridley's "What is human" is a good introduction.

Gintis's "Game Theory Evolving" is very good.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
