Your robot does not have time to know the whole truth. He would not speculate
on the nature of his programmer, or on why he is here, at least until the
problems of survival are solved by means of a stable collaboration. Even
then, he may never have the opportunity to know the programmer. He does not
know the nature of the other robots, except that they need the same things
he does. He must choose which truths to know, and how deeply he needs to
know them, in order to obtain collaboration. He has to focus on obtaining
collaboration.

In game experiments, collaboration appears spontaneously when the actors
individually remember each other's past actions.

Let me give my starting assumptions. The first truth algorithm in this
context (in the absence of communication) is: "if this actor collaborated
with me in the past, he is faithful and will collaborate with me in the
future". If we confront automata with different programs playing the
iterated prisoner's dilemma, the outcome is the one discovered by Axelrod:
tit-for-tat (TFT) was the simplest successful collaborator that was not
preyed upon by the others. TFT collaborates with those who collaborate and
defects against those who don't. But in the presence of noise or imperfect
information, if a TFT fails to collaborate a single time, for whatever
reason, he is abandoned by the rest. For this reason a forgiving TFT, one
that does not hold occasional random non-collaboration against its
partners, becomes more successful.
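
To make this concrete, here is a minimal Python sketch of the effect (my
own illustration, not Axelrod's tournament code; the payoffs and noise rate
are arbitrary): plain TFT and a forgiving variant play a noisy iterated
prisoner's dilemma, and the forgiving pair recovers from accidental
defections that lock plain TFT pairs into rounds of retaliation.

    import random

    # Payoffs indexed by (my_move, other_move); C = collaborate, D = defect.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tft(history):
        """Tit-for-tat: collaborate first, then copy the partner's last move."""
        return history[-1] if history else 'C'

    def forgiving_tft(history, forgiveness=0.2):
        """Like TFT, but sometimes overlooks a defection, treating it
        as possible noise rather than betrayal."""
        if history and history[-1] == 'D' and random.random() > forgiveness:
            return 'D'
        return 'C'

    def play(strategy_a, strategy_b, rounds=200, noise=0.05):
        """Iterated game in which noise randomly flips an intended move."""
        hist_a, hist_b = [], []   # what each player has seen the OTHER do
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
            if random.random() < noise:
                move_a = 'D' if move_a == 'C' else 'C'
            if random.random() < noise:
                move_b = 'D' if move_b == 'C' else 'C'
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    print("TFT vs TFT:            ", play(tft, tft))
    print("forgiving vs forgiving:", play(forgiving_tft, forgiving_tft))

Run repeatedly, the forgiving pair scores higher on average, because a
single noisy defection does not poison all subsequent rounds.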

With time, if a form of variation and selection is incorporated into the
game, more sophisticated evaluations of the others appear. Still, the
truths the actor needs to know are the ones relevant to his survival: that
is, the truths about whether his fellow actors will collaborate or not.

My question is: *in the presence of communication,* where the actors can
lie or tell the truth, how is the game modified, and how do the algorithms
that obtain data for action change?

Because this is a form of guided question, I will not hide my cards; I
will state my conclusions:

Once some actor (call it a robot) collaborates with my robot, I would mark
it as faithful; therefore I will believe what it says.

If I detect that what he says is false, I will mark this event as an act
of non-collaboration, and this will influence my next collaboration with
him. He will know it, so he will not lie to my robot next time without a
good reason, or else he will lose the valuable collaboration of my robot.
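
A sketch of that bookkeeping, with invented names (my own illustration,
not a standard algorithm): a detected lie is simply recorded as one more
act of non-collaboration, so the same memory that drives tit-for-tat also
decides whether to believe a partner.

    class Reputation:
        """Per-partner memory: collaborations and defections.
        A detected lie is charged to the same account as a defection."""

        def __init__(self):
            self.collaborations = 0
            self.defections = 0

        def record_collaboration(self):
            self.collaborations += 1

        def record_defection(self):
            self.defections += 1

        def record_lie_detected(self):
            # The key move of the argument: lying IS non-collaboration.
            self.defections += 1

        def is_faithful(self):
            """Believe (and collaborate with) partners whose record
            is mostly collaborative; trust strangers by default."""
            total = self.collaborations + self.defections
            return total == 0 or self.collaborations / total > 0.5

    rep = Reputation()
    rep.record_collaboration()   # it helped my robot once -> marked faithful
    print(rep.is_faithful())     # True: so my robot believes what it says
    rep.record_lie_detected()    # a statement turned out to be false
    rep.record_lie_detected()
    print(rep.is_faithful())     # False: future claims are distrusted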

But situations of scarcity, when collaboration is most necessary, are
exactly the moments when non-collaboration may be egoistically profitable:
he might say, for example, that there is a spare part somewhere, or that
he will take care of my parts, and then steal them. I can return and take
revenge, inflicting on him a damage such that further actions of this type
would be unprofitable for him.

The dynamic of retaliation is well known: it deters future offenses in the
medium term, but in the short term the cost is that, after the revenge,
both are in a situation much worse than at the beginning.

What can my robot, and the many robots that usually collaborate, do to
avoid such lies, revenges, misunderstandings, and so on?

To alleviate the cost of punishing non-collaborators individually, the
best way is to collaborate in punishing them. But for that, the offense
must be against a common good. The common good may be material, but it can
also be a rule. The rules would be, of course, the rules of collaboration:
in which situations it is mandatory to help a member. All of these group
rules are to be accepted as unquestionable truths by all members.
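
The economics can be shown with a back-of-the-envelope sketch (the numbers
are invented for illustration): punishment pays when the damage it deters
exceeds the enforcer's share of its cost, and sharing that cost across the
group is what tips the balance.

    def worth_punishing(damage_deterred, punishment_cost, group_size=1):
        """Punishment pays off when the future damage it deters exceeds
        the enforcer's share of the cost of inflicting it."""
        cost_per_member = punishment_cost / group_size
        return damage_deterred > cost_per_member

    # Illustrative numbers: deterring 5 units of future theft by a
    # punishment that costs 8 units to carry out.
    print(worth_punishing(5, 8))                 # False: lone revenge is a net loss
    print(worth_punishing(5, 8, group_size=10))  # True: shared, it costs 0.8 each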

Then the problem becomes how one gets to be a member. That delimitation of
membership is very important, because every new member will receive the
benefit of our robot's help, and must in turn be willing to incur the cost
of helping the others, with as little enforcement cost as possible.

Membership in a group works like an insurance company. A robot cannot be
allowed to enter the group when he needs a repair and leave as soon as he
has received the benefit; that would destroy the group's collaboration.
Membership must last a long time, long enough to reciprocate many times.
To prevent desertion right after receiving a benefit, an initial
investment in the group is necessary: for example, some spare parts, or a
sacrifice (surrendering one of his hands to the pool of group parts) until
something else has been done for the group.
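
A sketch of that admission rule, with hypothetical names and thresholds of
my own: a robot posts a deposit on entry and can claim a repair only after
it has reciprocated enough times, so joining just to collect a benefit and
leave becomes a losing move.

    class Group:
        """Membership as insurance: entry requires a deposit (parts, or a
        'hand'), and benefits unlock only after enough reciprocation."""

        ENTRY_DEPOSIT = 5          # parts paid into the common pool on joining
        MIN_RECIPROCATIONS = 3     # help given before help can be claimed

        def __init__(self):
            self.pool = 0
            self.help_given = {}   # member -> times it has helped others

        def join(self, member):
            self.pool += self.ENTRY_DEPOSIT   # the deposit is sunk: leaving forfeits it
            self.help_given[member] = 0

        def record_help(self, member):
            self.help_given[member] += 1

        def may_claim_repair(self, member):
            return self.help_given.get(member, 0) >= self.MIN_RECIPROCATIONS

    g = Group()
    g.join("robot_7")
    print(g.may_claim_repair("robot_7"))   # False: benefit before reciprocation is blocked
    for _ in range(3):
        g.record_help("robot_7")
    print(g.may_claim_repair("robot_7"))   # True: it has earned its insurance cover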

The rules of group membership are added to the list of truths to be
defended and enforced.

We end up with a long list of rules that every robot member must know and
accept (and refresh). It is also necessary to have a sort of periodic show
of commitment, in which the group members refresh their memories of the
rules, recognize one another, and show their willingness to defend the
rules and punish the offenders. This is a synchronization not only of
knowledge, but also of intentions. Some sort of visual rite is necessary,
with some cue, perhaps a red light on top of each robot, governed by a
well-known ROM program that verifies the list of truths of the group. The
rite could be replaced by a remote communication protocol, but some robots
may prefer to see the red lights for themselves, because perhaps they do
not trust the communication protocol.

The verifier ROM program becomes a guarantee, but slightly different rules
hold in different groups, and I do not know in advance what the rules will
be. So the ROM will not carry the rules themselves, only the verification
of rules. Hence the rite is necessary.
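
One way to picture the verifier (a hypothetical protocol of mine, using an
ordinary cryptographic hash): the ROM stores no particular rule set; it
computes a digest of whatever rule list the robot holds and lights the red
lamp only when that digest matches the one the group publishes. Different
groups publish different digests, but the same ROM serves them all.

    import hashlib

    def rules_digest(rules):
        """Canonical fingerprint of a rule list; order-independent."""
        canonical = "\n".join(sorted(rules)).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    def red_light_on(my_rules, group_digest):
        """The ROM verifies rules, it does not contain them: light the
        lamp only if this robot's rule list matches the group digest."""
        return rules_digest(my_rules) == group_digest

    group_rules = ["help a member who asks for repair",
                   "punish detected lies as defections",
                   "renew commitment at every assembly"]
    published = rules_digest(group_rules)

    print(red_light_on(group_rules, published))                         # True: lamp on
    print(red_light_on(group_rules + ["secret exemption"], published))  # False: lamp off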

Probably the first groups of collaborators formed around robots that were
especially cooperative and, for that reason, especially valuable and, by
the first rule, especially credible, and that enunciated the set of rules
around which the group was created. Undoubtedly, showing high appreciation
of such a robot was in itself a sign of agreement with its rules, so rites
of appreciation of this founder robot would be the first form of rule
synchronization and group formation.

Other groups may have formed around exceptionally aggressive robots that
promised parts to their followers.

I still have not reached the society of collaborating groups, but such
societies probably form by accretion, through aggressive and/or peaceful
interactions. Finally, the "psychology" of these robots will be really
complex.


Truth as truth is important per se, because it helps with prediction of
the future, but collaborative or social truths are equally important, and
sometimes the two collide. At first sight they are very different, but I
will show that they are not so different after all. Social truths are like
the goods used for exchange: they are goods and they are money. A group
may base its collaboration on lies, just as a society can print arbitrary
paper money without backing in a valuable good like gold. But in the long
term their value, whether monetary or social, disappears.


2012/12/19 meekerdb <meeke...@verizon.net>

> of course knowing true things is not the same as saying true things to
> enhance




-- 
Albe
