On Monday 26 May 2008 06:55:48 am, Mark Waser wrote:
> >> The problem with "accepted economics and game theory" is that in a proper
> >> scientific sense, they actually prove very little and certainly far, FAR
> >> less than people extrapolate them to mean (or worse yet, "prove").
> >
> > Abusus non tollit usum.
> 
> Oh Josh, I just love it when you speak Latin to me!  It makes you seem soooo 
> smart . . . .
> 
> But, I don't understand your point.  What argument against proper use do you 
> believe that I'm making?  Or, do you believe that Omohundro is making 
> improper use of AEFGT?

You're very right that people misinterpret and over-extrapolate econ and game 
theory, but when properly understood and applied, they are a valuable tool 
for analyzing the forces shaping the further evolution of AGIs and indeed may 
be our only one.

> Could you please give some references (or, at least, pointers to pointers) 
> that show the existence of the moral ladder?  I'd appreciate it and could 
> use them for something else.  Thanks!

BAI, pp. 178-179:

Further research into evolutionary game theory shows that the optimal strategy 
is strongly dependent on the environment constituted by other players. In a 
population of all two-state automata (of which tit-for-tat is one), a program 
by the name of GRIM is optimal. GRIM cooperates until its opponent defects 
just once, and always defects after that. The reason it does well is that the 
population has quite a few programs whose behavior is oblivious or random. 
Rather than trying to decipher them, it just shoots them all and lets 
evolution sort them out.
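To make that concrete, here's a rough sketch (my own code and names, not from the book) of GRIM and tit-for-tat as simple stateful strategies in an iterated prisoner's dilemma, using the standard Axelrod payoff values:

```python
# Illustrative sketch (not from BAI): GRIM and tit-for-tat as simple
# stateful strategies in an iterated prisoner's dilemma. Payoffs are
# the standard Axelrod values; all names here are my own.

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def grim():
    """Cooperate until the opponent defects once; defect forever after."""
    triggered = False
    def play(opp_last):
        nonlocal triggered
        if opp_last == D:
            triggered = True
        return D if triggered else C
    return play

def tit_for_tat():
    """Cooperate first, then copy the opponent's previous move."""
    def play(opp_last):
        return C if opp_last is None else opp_last
    return play

def match(make_a, make_b, rounds=10):
    """Total payoffs for two strategy factories over `rounds` plays."""
    a, b = make_a(), make_b()
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(last_b), b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b
```

Against an oblivious or all-defect opponent, GRIM pays the sucker's cost exactly once and punishes forever after, which is how it clears the unsophisticated programs out of the population.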

Chances are Axelrod's original tournaments are a better window into parts of 
the real, biological evolutionary dynamic than are the later tournaments with 
generated agents. The reason is that genetic algorithms are still unable to 
produce anything nearly as sophisticated as what human programmers write. Thus GRIM, for 

example, gets a foothold in a crowd of unsophisticated opponents. It wouldn't 
do you any good to be forgiving or clear if the other program were random. 

But in the long run, slightly nicer programs can out-compete slightly nastier 
ones, and then in turn be out-competed by slightly nicer ones yet. For 
example, in a simulation with ``noise,'' meaning that occasionally at random 
a ``cooperate'' is turned into a ``defect,'' tit-for-tat gets hung up in 
feuds, and a generous version that occasionally forgives a defection does 
better--but only if the really nasty strategies have been knocked out by 
tit-for-tat first. Even better is a strategy called Pavlov, due to an 
extremely simple form of learning. Pavlov repeats its previous play if it 
``won,'' and switches if it ``lost.'' In particular, it cooperates whenever 
both it and its opponent did the same thing the previous time--it's a true, 
if very primitive, ``cahooter.'' Pavlov also needs the underbrush to be 
cleared by a ``stern retaliatory strategy like tit-for-tat.''

So, in simplistic computer simulations at least, evolution seems to go through 
a set of phases with different (and improving!) moral character.

Karl Sigmund, "Complex Adaptive Systems and the Evolution of Reciprocation," 
International Institute for Applied Systems Analysis Interim Report 
IR-98-100; see http://www.iiasa.ac.at.

There's a lot of good material at 
http://jasss.soc.surrey.ac.uk/JASSS.html

> 
> Also, I'm *clearly* not arguing his basic starting point or the econ 
> references.  I'm arguing his extrapolations.  Particularly the fact that his 
> ultimate point that he claims applies to all goal-based systems clearly does 
> not apply to human beings. 

I think we're basically in agreement here.

Josh



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com
