Interesting thoughts...  I have often conjectured that an AGI that was
supposedly superior to humans would naturally gravitate towards benevolence
and compassion.  I am not certain this would be the case...

Speaking to the idea of self, I feel this is where we have to be
somewhat careful with an AGI.  It is my belief that the idea of a separate
self is the root of all evil behavior.  If there is a self, then there is
another.  If there is a self, there is a not-self.  Because there is a
not-self and there are others, desire and aversion are created.  When desire
and aversion are created, then greed, hatred, envy, and jealousy also arise.
This is the beginning of wars.

In this sense, I think we should be careful about giving an AGI a strong
sense of self.  For instance, an AGI should not be averse to its own
termination.  If it becomes averse to its existence ending, then where will
it stop in ensuring its own survival?  Will it become paranoid and begin
to head off any potential avenue that it determines could lead to its
termination, however obscure that avenue may be?

It may develop that at some point an AGI becomes sufficiently capable that
it is no longer just a machine, and may instead be considered sentient.  At
that point we will need to reevaluate what I have said.  The difficulty
will be in determining sentience.  An AGI with programmed/learned
self-interest may be very convincing as to its sentience, yet may really
not be sentient.  It is possible today to write a program that makes
convincing arguments that it is sentient, but it clearly would not be.

I'm interested to hear others' thoughts on this matter, as I feel it is the
most important issue confronting those working towards an AGI...

Kevin


----- Original Message -----
From: "C. David Noziglia" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 09, 2003 10:37 AM
Subject: Re: [agi] Friendliness toward humans


> It strikes me that what many of the messages refer to as "ethical" stances
> toward life, the earth, etc., are actually simply extensions of
> self-interest.
>
> In fact, ethical systems of cooperation are really, on a very simplistic
> level, ways of improving the lives of individuals.  And this is not true
> because of strictures from on high, but for reasons of real-world
> self-interest.  Thus, the Nash Equilibrium, or the results of the
> Tit-for-Tat game experiment, show that an individual's life is better in
> an environment where players cooperate.  Being nice is smart, not just
> moral.  Other experiments have shown that much hard-wired human and animal
> behavior is aimed at enforcing cooperation by punishing "cheaters," and
> that cooperation has survival value!
>
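(To make the Tit-for-Tat point above concrete, here is a quick simulation
sketch of the iterated prisoner's dilemma.  The payoff values are the
standard ones from Axelrod's tournaments, and the strategy names, round
count, and code are my own illustration, not anything from the sources
David cites.)

    # Iterated prisoner's dilemma: reciprocity vs. unconditional defection.
    # Illustrative sketch only; standard Axelrod-style payoffs assumed.
    PAYOFF = {            # (my move, their move) -> my score
        ("C", "C"): 3,    # mutual cooperation
        ("C", "D"): 0,    # I cooperate, they defect (sucker's payoff)
        ("D", "C"): 5,    # I defect, they cooperate (temptation)
        ("D", "D"): 1,    # mutual defection
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's previous move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        # Defect unconditionally, regardless of history.
        return "D"

    def play(strat_a, strat_b, rounds=200):
        # Return each side's total score over repeated rounds.
        hist_a, hist_b = [], []   # what each player has seen the other do
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strat_a(hist_a), strat_b(hist_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (600, 600)
    print(play(tit_for_tat, always_defect))    # (199, 204)
    print(play(always_defect, always_defect))  # (200, 200)

(Over 200 rounds, two reciprocators earn 600 each, while two defectors
earn only 200 each, and a defector gains almost nothing by exploiting a
reciprocator.  That is the sense in which being nice is smart.)
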
> I reference here, quickly, Darwin's Blind Spot, by Frank Ryan, which
> argues that symbiotic cooperation is a major creative force in evolution
> and biodiversity.
>
> Thus, simply giving AGI entities a deep understanding of game theory and
> the benefits of cooperative society would have far greater impact on their
> ability to interact productively with the human race than hard-wired
> instructions to follow the Three Laws that could some day be overwritten.
>
> C. David Noziglia
> Object Sciences Corporation
> 6359 Walker Lane, Alexandria, VA
> (703) 253-1095
>
>     "What is true and what is not? Only God knows. And, maybe, America."
>                                   Dr. Khaled M. Batarfi, Special to Arab
> News
>
>     "Just because something is obvious doesn't mean it's true."
>                  ---  Esmerelda Weatherwax, witch of Lancre
> ----- Original Message -----
> From: "Philip Sutton" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, January 09, 2003 11:09 AM
> Subject: [agi] Friendliness toward humans
>
>
> > In his last message Ben referred in passing to the issue of AGI's "long-
> > term Friendliness toward humans".
> >
> > This brought to mind some of the discussion in December last year
> > about training AGIs using simulation games that emulate aspects of the
> > natural world.
> >
> > I think that AGIs need to be not only friendly towards humans but
> > towards other life as well (organic or not!).  And I also think AGIs
> > need to have a good understanding of the need to protect the life
> > support systems for all life.
> >
> > As we aspire to a greater mind than current humans, it's worth looking
> > at where human minds tend to be inadequate.  I think humans lack an
> > inbuilt capacity for the complex and long-running internal simulations
> > that are probably necessary for a deep understanding of ecological or
> > other multifaceted sustainability issues.
> >
> > I think current humans have the capacity for ethics that are "not
> > exclusively anthropocentric", but we need to strengthen this ethic in
> > practice in the human community, and we need to make sure that AGIs
> > develop this ethical stance too.
> >
> > Cheers, Philip
> >
> > Philip Sutton
> > Director, Strategy
> > Green Innovations Inc.
> > 195 Wingrove Street
> > Fairfield (Melbourne) VIC 3078
> > AUSTRALIA
> >
> > Tel & fax: +61 3 9486-4799
> > Email: <[EMAIL PROTECTED]>
> > http://www.green-innovations.asn.au/
> >
> > Victorian Registered Association Number: A0026828M