On 1/6/2013 5:47 AM, Alberto G. Corona wrote:
Because this is a form of guided question, I will not hide my cards; I will state my conclusions up front:

Once some actor (call it a robot) collaborates with my robot, I will mark it as faithful; therefore I will believe what it says.

If I detect that what he says is false, I will mark this event as an act of non-collaboration, and this will influence my next collaboration with him. He will know it, so he will not lie to my robot next time except for a good reason; otherwise, he will lose the valuable collaboration of my robot.
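The marking rule above can be sketched in a few lines of code. This is only my own illustration of the idea (the class and method names are invented for the example): trust an actor while it is marked faithful, and revoke that mark when a statement turns out false.

```python
# Minimal sketch of the trust-marking rule: a successful collaboration
# marks an actor as faithful; a detected lie revokes that mark.

class TrustLedger:
    def __init__(self):
        self.faithful = {}  # actor name -> currently trusted?

    def record_collaboration(self, actor):
        # A successful collaboration marks the actor as faithful.
        self.faithful[actor] = True

    def record_lie(self, actor):
        # A detected falsehood counts as an act of non-collaboration.
        self.faithful[actor] = False

    def believes(self, actor):
        # Believe only actors currently marked faithful;
        # unknown actors are not believed by default.
        return self.faithful.get(actor, False)

ledger = TrustLedger()
ledger.record_collaboration("robot-B")
print(ledger.believes("robot-B"))  # True: marked faithful
ledger.record_lie("robot-B")
print(ledger.believes("robot-B"))  # False: the lie revokes trust
```

Note that the lie does not erase the history; it just flips the current mark, which is what makes the next collaboration decision depend on it.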

But situations of scarcity, when collaboration is most necessary, are precisely the moments when non-collaboration may be egoistically profitable: he might say, for example, that there is a piece somewhere, or that he will take care of my pieces, and then steal them. I can return and take revenge, inflicting a cost such that further actions of this type would be unprofitable for him.

The dynamic of retaliation is well known: it deters future offenses in the medium term, but in the short term the cost is that, after the revenge, both parties end up in a situation much worse than at the beginning.
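The short-term cost of revenge can be made concrete with an iterated Prisoner's Dilemma. The payoff numbers below are the standard textbook values (my assumption, not from the post): one theft followed by a spiral of retaliation leaves both robots with less than sustained collaboration would have given them.

```python
# Standard Prisoner's Dilemma payoffs (assumed values, not from the post):
# R = mutual collaboration, T = temptation to steal,
# S = being stolen from, P = mutual defection after revenge.
R, T, S, P = 3, 5, 0, 1

def payoff(a, b):
    """Per-round payoffs for moves 'C' (collaborate) or 'D' (defect)."""
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    return table[(a, b)]

# Round 1: both collaborate.  Round 2: he steals (defects).
# Rounds 3-4: my robot retaliates and he keeps defecting in response.
rounds = [("C", "C"), ("C", "D"), ("D", "D"), ("D", "D")]
mine = theirs = 0
for a, b in rounds:
    pa, pb = payoff(a, b)
    mine, theirs = mine + pa, theirs + pb

print(mine, theirs)  # 5 10 -- versus 12 each had both kept collaborating
```

Even the thief ends up behind the cooperative path (10 versus 12), which is why retaliation deters him in the medium term; but my robot pays the larger short-term bill (5), which is the cost the paragraph above describes.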

What can my robot, and the many robots that usually collaborate, do to avoid such lies, revenges, and misunderstandings?

Herbert Gintis has addressed many of these questions mathematically in "The Bounds of Reason" and "Game Theory Evolving".


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.