On 7/12/2012 7:04 PM, Alberto G. Corona wrote:
2012/7/12 Stephen P. King <[email protected]>
On 7/11/2012 6:47 PM, Alberto G. Corona wrote:
Stephen:
Well, it's not cooperation between computer programs, but
cooperation of entities at the abstract level. This can be
described mathematically or simulated in a computer program. In
both cases, it starts when a game is created with its rules,
goals, wins and losses.
Hi Alberto,
OK, but can we think of the abstract level as the dual of a
physical level where physical objects play out their scattering
games? What is described by mathematics and/or simulated by
computer program does not have to just be some abstraction. We
cannot assume absolute closure and any implied externality is just
semantics of the abstractions. Abstractions simply cannot exist as
free floating entities, for this leads inevitably to contradictions.
Concerning the question of individuality, a good selfish collaborator
must develop an individuality and "self-consciousness" (and we are
talking about collaboration between selfish entities that want as
much benefit from the collaboration as possible).
Hi Alberto,
I suspect that the self has a good reason for existing! I will try
to reconstruct the rationale that occurred to me the first time I read
this posting of yours. Very good stuff, I must say! Basically, the idea
is that if there were no "self" to refer to, then all agents would be free
riders, as there would ultimately be no consequence for defection
strategies. Free riders and other parasites live only so long as the host
they infect is not yet dead. They have no inherent or autonomous
structure to preserve over arbitrarily many iterations of the game.
The point is that the entity must evaluate other individuals, but he
in turn is evaluated by others.
Right, there is a symmetry involved.
So, to know if others will collaborate with him, he must evaluate
himself in relation to the others. That is, if entity A wants to
know what to expect from B, it does evaluate B, but it also has to
evaluate what it, A, did to B in the past. This self starts to have
the attributes of a conscious moral being. A measure of self-esteem
becomes necessary to modulate what he can realistically demand from
the others, and so on.
It is here that we get self-reference and its behaviors and
phenomena! Jon Barwise (with Seligman) discusses this sort of stuff in
his wonderful book Information Flow: The Logic of Distributed Systems
<http://www.amazon.com/Information-Flow-Distributed-Cambridge-Theoretical/dp/0521070996/ref=sr_1_1?s=books&ie=UTF8&qid=1342154191&sr=1-1&keywords=jon+barwise%2C+seligman>
. I highly recommend it. You can preview it here
<http://assets.cambridge.org/97805215/83862/frontmatter/9780521583862_frontmatter.pdf>.
In a computer program, the individuality would be composed of its
memory of relevant interactions with others and the evaluation
algorithms. It seems that humans can store the details of about 150
other individuals. That's why companies with fewer than 150 persons can
work efficiently without bureaucracy. This information is very important
and must be synchronized with the others. Most of the talk is about
who did what to whom, and who deserves something from me because in the
past he did something to my friend. Beyond 150, external memory is
necessary: written records, registration cards, ID numbers, money.
Interesting and very proprietary information! It reminds me of the
small-network stuff that Ball discussed in his book Critical Mass: How
One Thing Leads to Another
<http://en.wikipedia.org/wiki/Critical_Mass_%28book%29>. Where could I
read more on this? Is there a cyclical property that acts as a memory of
sorts in a network of that size (or less)? I get the image of something
like a round robin tournament
<http://en.wikipedia.org/wiki/Round-robin_tournament> going on....
If the game is simple and/or played by a small number of players
(for example, two), the game is analyzed with Game Theory
techniques to obtain the stable strategies that let each player
optimize its winnings in such a way that no player can win more,
and the outcome is immune to attacks from other players. This is a
Nash equilibrium.
http://en.wikipedia.org/wiki/Nash_equilibrium
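To make this concrete, here is a minimal sketch of my own (not from the original post), using the standard Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0: a brute-force check of which action pairs leave neither player able to gain by deviating alone.

```python
# Pure-strategy Nash equilibria of the Prisoner's Dilemma.
# Payoffs (row player, column player) for actions C (cooperate) and D (defect).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player gains by deviating alone."""
    ra, ca = payoffs[(a, b)]
    row_ok = all(payoffs[(a2, b)][0] <= ra for a2 in actions)
    col_ok = all(payoffs[(a, b2)][1] <= ca for b2 in actions)
    return row_ok and col_ok

equilibria = [(a, b) for a in actions for b in actions if is_nash(a, b)]
print(equilibria)  # mutual defection is the only pure-strategy equilibrium
```

This is exactly the "can not win more" condition above: at (D, D) a unilateral switch to C drops a player's payoff from 1 to 0, so the equilibrium is immune to lone deviators even though both would prefer (C, C).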
I understand and agree! My point is that equilibria do obtain,
but we cannot substitute the abstract descriptions of games for
the actual playing of the games. There is a duality involved that
cannot be collapsed without stultifying both sides.
But when the game is too complex, or the players use different
strategies, or they evolve and adapt (especially when the successful
entities give birth to new generations with mutant strategies and/or
strategies which are a mix of the parents' ones, in a way defined
in the game), then it is necessary to simulate it within a
computer program. This is part of the work of Axelrod. Evolution
of generations is modeled with genetic programming:
http://en.wikipedia.org/wiki/Genetic_programming
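As an illustration of my own (a simplified genetic-algorithm sketch rather than full genetic programming), strategies can be encoded as three-gene genomes (first move, reply to cooperation, reply to defection), scored in a round-robin iterated Prisoner's Dilemma, and evolved by survival of the top half plus mutated offspring:

```python
import random

random.seed(0)  # reproducible run

# Row player's payoff in one round of the Prisoner's Dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def score(s, t, rounds=50):
    """Total payoff of strategy s against t over an iterated game.
    A strategy is (first move, reply to opponent's C, reply to opponent's D)."""
    m1, m2, total = s[0], t[0], 0
    for _ in range(rounds):
        total += PAYOFF[(m1, m2)]
        m1, m2 = (s[1] if m2 == "C" else s[2]), (t[1] if m1 == "C" else t[2])
    return total

def mutate(s):
    """Flip one randomly chosen gene."""
    g = list(s)
    i = random.randrange(3)
    g[i] = "C" if g[i] == "D" else "D"
    return tuple(g)

def generation(pop):
    """Rank by round-robin fitness; top half survives and reproduces with mutation."""
    ranked = sorted(pop, key=lambda s: sum(score(s, t) for t in pop), reverse=True)
    survivors = ranked[: len(pop) // 2]
    return survivors + [mutate(random.choice(survivors)) for _ in survivors]

pop = [tuple(random.choice("CD") for _ in range(3)) for _ in range(20)]
for _ in range(30):
    pop = generation(pop)
print(max(set(pop), key=pop.count))  # most common genome after 30 generations
```

Note that ("C", "C", "D") in this encoding is tit-for-tat, the strategy Axelrod's tournaments made famous; which genome dominates here depends on the random initial population.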
Yes! This is where we get into law-of-large-numbers situations
and have some chance of discovering the emergence of aspects of
reality that we have just been assuming to be a priori given. Some
examples of this are Penrose's "spin networks" and Reg Cahill's
"Process physics".
To summarize, any entity that collaborates needs a memory of past
interactions with each other entity. In other words, it needs
individual-recognition abilities and a form of "moral evaluation"
of each individual.
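A minimal sketch of such an agent (my own illustration; the class name, the ledger scheme, and the weighting of defections are all assumptions, not anything from the post):

```python
class Collaborator:
    """A selfish collaborator that remembers past interactions and
    cooperates only with partners whose net record is non-negative."""

    def __init__(self, name):
        self.name = name
        self.ledger = {}  # partner name -> net "moral score" of past interactions

    def decide(self, partner):
        # Unknown partners get the benefit of the doubt.
        return "C" if self.ledger.get(partner, 0) >= 0 else "D"

    def record(self, partner, their_move):
        # Defections are weighted more heavily than cooperations.
        delta = 1 if their_move == "C" else -2
        self.ledger[partner] = self.ledger.get(partner, 0) + delta

a = Collaborator("A")
a.record("B", "D")
a.record("B", "D")
print(a.decide("B"))  # "D": B's record is negative, so A withholds cooperation
```

The ledger is exactly the "memory of relevant interactions" plus "evaluation algorithm" described above, and its size grows with the number of distinct partners tracked, which is where the 150-individual limit would bite.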
I agree, but how do we treat the notion of memory such that an
arbitrary entity has the capacity to access it? We humans have a
large memory capacity that we carry around in our craniums...
It also needs to punish free riders, even at the cost of its own
well-being, in a way that makes the net gain of free riders
negative; or else the collaborators will fail and the
defectors/free riders will expand.
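The arithmetic behind "the net gain of free riders is negative" can be made explicit. A toy sketch of altruistic punishment (the numbers are illustrative assumptions of mine, not from the post):

```python
# Altruistic punishment: each punisher pays a cost c to impose a fine f
# on a defector. Collaboration is stable only if the defector's one-shot
# gain g is outweighed by the total fines: n * f > g.
g = 5        # free rider's gain from defecting once
c, f = 1, 2  # each punisher's personal cost, and the fine it imposes
n = 3        # number of collaborators willing to punish

net_free_rider = g - n * f   # 5 - 6 = -1: defection no longer pays
net_punisher = -c            # punishers pay for stability out of pocket
print(net_free_rider, net_punisher)
```

This shows the second-order problem in the next paragraph: each punisher ends up at -1, so punishing is itself a public good that free riders can shirk.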
I suspect that free-riders will be, like the poor, always with
us.
We each are free riders, because we are selfish collaborators. A
twist on selfish collaboration is self-deception: our memory is
unconsciously distorted to support our case. We think that we deserve
more than the fair share, etc.
These are the pathologies that I find interesting. What kinds of
strategies tend to minimize the "sociopaths"? Maybe the best strategies
are the ones that distract sociopathic choices by nominally increasing
the pay-off for what appears to be selfish short-term gain. But these
would have to be compensated for further down the line of iterations...
Not simple....
So the collaborators need to collaborate too in the task of
punishing free riders because this is crucial for the stability
of collaboration in other tasks.
But there is a problem with this. There does not exist any
finite and pre-given list of what defines a free rider!
We all are! The Christian analogy of fallen beings is a perfect image of
what evolutionary game theory teaches about selfish collaboration under
Darwinian selection!
I agree 100%.
Forgiveness is another requirement of collaboration, especially
when the entities produce spurious behaviours of
non-collaboration but collaborate most of the time. A premature
punishment could make a collaborator punish in response, so the
collaboration ends.
This rule is a form of pruning, so we can easily see what
effects it has in networks of collaborators. It is an aspect of
currying or concurrency.
In these games the goals are fixed.
This is only for the sake of closure, but closed systems have
very short life spans, if any life at all. The trick is to get
close to closure but not into it completely. Life exists as an
exploitation of this possibility.
In more realistic games the goals vary, and the means to obtain
them depend on knowledge and assumptions/beliefs, so
homogeneity within the group around both things should be
required for collaboration.
Right!
For sure there is a trade-off between mind sharing and punishment:
the less mind sharing, the more violent the punishment necessary
for a stable collaboration.
Yes, but can you see how this rapidly suppresses any potential
for further evolution? It is in effect the establishment of
closure that seals off those involved. North Korea is a nice
real-world example of this.
That is very true! Successful groups fix basic dogmas, but maintain
inside controlled Darwinian variation/selection games among
individuals for the benefit of the whole group. The market of goods
and services operates in this way, under the "dogmas" of trade laws:
the offer of goods and services is the variation; the demand for each
of them is the selection. In the process, wealth is created because
internal needs are satisfied. The same happens in politics, science,
sports, etc.
OK, how do we communicate this to a wider audience?
To verify mind sharing and investment in the group collaboration,
periodic public meetings are held where protocols/rituals of mutual
recognition are repeated to assure each member that the others
are in line: for example, visiting a temple each week, discussing
the same newspaper, or attending minority rock
concerts (or mutually interchanging checksums of the program
content of each entity).
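That last parenthetical can be sketched directly (my own illustration; the "creed" string and function name are hypothetical):

```python
import hashlib

def program_checksum(source: str) -> str:
    """SHA-256 digest of an agent's 'program content' -- the machine
    analogue of the mutual-recognition rituals described above."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

creed = "cooperate first; punish defection; forgive occasional lapses"
alice = program_checksum(creed)
bob = program_checksum(creed)
print(alice == bob)  # True: identical contents hash identically, so both are 'in line'
```

Exchanging short digests instead of whole programs is cheap, and any divergence in the shared "dogmas" shows up immediately as a checksum mismatch.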
Certainly! This suggests a rationale for the "rituals" that we see
as "traditions" in cultures, for example.
But this is not the last word. It is a world of infinite
complexity. For example, a strategy for avoiding free riders or
for mind sharing can be exploited by meta-free-riders. Among humans,
when trust is scarce, sacrifices in the temples, blood pacts and
violent punishments become necessary to avoid free riders and
keep the collaboration stable.
Are you familiar with Hypergames
<http://www.sci.brooklyn.cuny.edu/%7Eparsons/events/gtdt/gtdt06/vane.pdf>?
Novelty is the result of openness, but at the cost of allowing
free riders. They are a necessary evil.
Yes, see above. However, dogmas are necessary. The point is a good
combination of dogmas, rules, rites, traditions and punishments so that
selfishness (perceived internally as freedom) works for the good of
the group, and deleterious (antisocial) selfishness is suppressed.
We are faced with a situation where there is a pay-off for ignoring
these facts. People are in a "head in the sand
<http://www.inkcinct.com.au/web-pages/cartoons/past/2009/2009-152--Australian-ostrich-20th-March-.gif>"
mode. :'(
All of this does not change whether the entities are humans,
robots or programs. Evolutionary game theory is a field under active
research by economists, lawyers, moralists, computer scientists,
philosophers, psychologists, etc.
Good stuff!
Matt Ridley's "what is human" is a good introduction.
I will add this to my list. Thanks! I found this,
http://www.scribd.com/doc/47413560/69/MATT-RIDLEY , so far...
2012/7/11 Stephen P. King <[email protected]>
On 7/11/2012 4:29 AM, Alberto G. Corona wrote:
2012/7/10 meekerdb <[email protected]>
Why would you not expect a theory-of-everything to
include the behavior of people? Note that 'govern' does
not imply 'predictable'.
A physicist's theory of everything, despite the popular
belief, does not "govern" the behaviour of people, any
more than binary logic governs the behaviour of
computer programs. I can program in binary logic whatever I
want, without limitations. The wetware whose activity
produces the human mind could execute potentially any kind
of behaviour. Our behaviour is not governed by anything
related to a physical TOE, but by the laws of natural
selection applied to social beings. I can observe the
evolution of such behaviours (in a schematic way) in a
binary world within a computer program as well. Robert
Axelrod
<http://www.google.es/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&sqi=2&ved=0CEoQFjAA&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FEvolution_of_cooperation&ei=jTj9T77NB6iy0QXah8mmBw&usg=AFQjCNGua7j080q_oP5ft9ABtXu7bG99dg&sig2=KKUr0FxQezNKKU0MNCQ1vw>
did it for the first time.
On the contrary, the anthropic principle tells you that
the mind is the determinant element for the existence of a
TOE. A physical TOE is just the playing field and the
stuff from which things are made.
Dear Alberto,
Interesting that you bring up
http://en.wikipedia.org/wiki/Evolution_of_cooperation ! Could
you elaborate a bit on your thoughts? Do you have any ideas
how to model cooperation between computer programs? The main
problem that I have found is in defining the interface
between computations. How does one define "identity" for a
given computation such that it is distinguished from all others?
--
Onward!
Stephen
"Nature, to be commanded, must be obeyed."
~ Francis Bacon
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.