Kaj, Richard, et al.,

On 5/5/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>
> > > Drive 2: AIs will want to be rational
> > > This is basically just a special case of drive #1: rational agents
> > > accomplish their goals better than irrational ones, and attempts at
> > > self-improvement can be outright harmful if you're irrational in the
> > > way that you try to improve yourself. If you're trying to modify
> > > yourself to better achieve your goals, then you need to make clear to
> > > yourself what your goals are. The most effective method for this is to
> > > model your goals as a utility function and then modify yourself to
> > > better carry out the goals thus specified.
> >
> >  Well, again, what exactly do you mean by "rational"?  There are many
> > meanings of this term, ranging from "generally sensible" to "strictly
> > following a mathematical logic".
> >
> >  Rational agents accomplish their goals better than irrational ones?
> > Can this be proved?  And with what assumptions?  Which goals are better
> > accomplished .... is the goal of "being rational" better accomplished by
> > "being rational"?  Is the goal of "generating a work of art that has true
> > genuineness" something that needs rationality?
> >
> >  And if a system is trying to modify itself to better achieve its goals,
> > what if it decides that just enjoying the subjective experience of life is
> > good enough as a goal, and then realizes that it will not get more of that
> > by becoming more rational?


This was worked through in some depth in the 1950s by Herman Kahn of the RAND
Corporation, who is credited with formulating MAD (Mutually Assured
Destruction), a doctrine built on the credible threat of vengeance.

Level 1: People are irrational, so a rational strategy may play on that
irrationality, and hence be irrational against an unemotional opponent.

Level 2: By appearing to be irrational you also appear dangerous/violent, and
hence there is POWER in apparent irrationality, most especially on a national
and thermonuclear scale. Accordingly, a maximally capable AGI may appear quite
crazy to us all-too-human observers.

Story: I recently attended an SGI Buddhist meeting with a friend who was a
member there. After listening to their discussions, I asked whether anyone
there (of ~30 people) had ever found themselves in a position of having to
kill or injure another person, as I have; such experiences tend to change
people's outlook on pacifism. There were none. I then mentioned how Herman
Kahn's MAD solution to avoiding an almost certain WW3 involved an extremely
non-Buddhist approach, gave a thumbnail account of the historical situation,
and asked whether anyone there had a Buddhist-acceptable solution. Not only
were no other solutions advanced, they didn't even want to THINK about such
things! These people would now be DEAD if not for Herman Kahn, yet they
weren't even willing to examine the situation he found himself in!

The ultimate power on earth: An angry 3-year-old with a loaded gun.

Hence, I come to quite the opposite conclusion - that AGIs will want to appear
to be IRrational, like the 3-year-old, taking bold steps that force
capitulation.

I have played tournament chess. However, when faced with a REALLY GREAT chess
player (e.g. a national champion), as I have had the pleasure of being on a
couple of occasions, they at first appear to play like novices, making unusual
and apparently stupid moves that I can't quite capitalize on, only to pull
things together later and soundly beat me. While retrospective analysis would
show them to be brilliant, that would not be my evaluation early in these
games.

Steve Richfield

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/