> Kevin et al.,
>
> Fascinating set of observations, conjectures, and methodologies, well worth
> considering.  And it seems that you have ultimately touched on the kernel of
> the dilemma of man v. machine en route to the so called 'singularity'.
>
> If I've understood you correctly vis-a-vis the emergence of 'evil' in AGI
> systems, you're suggesting that there is a need to prevent, a priori,
> certain expressions of self in AGI systems to prevent any hint of evil in
> AGI systems.  It's an approach well worth considering, but I believe it
> falls short of 'real' AGI.

Expressions of self are of no concern to me; my concern is only with an actual
belief in a separate self and the concomitant concept of self-preservation.

>
> The reason is that the approach is essentially 'Asimovian' in nature and,
> therefore, wouldn't result in anything more than perhaps a servile pet, call
> it iRobot, which is always 'less-than-equal' to you and therefore always
> short of your goal to achieve the so called 'singularity' you originally set
> out to achieve.

I did not suggest any approach to an AGI, nor did I suggest the hard-coding
of values, which would be "Asimovian" as I understand it...

I believe there will be a possible tier of development that many seem to
ignore.  This tier falls short of the fully self-modifying, universe-conquering
Singularity machine.  BUT, it is of extreme utility and vastly exceeds humans
in many ways.  A true AGI.  As Ben describes intelligence, "the ability to
solve complex problems in a complex environment".  This does *not* require a
Singularity.

In some ways, it *may* be considered 'less-than-equal' to a human.  And
what's wrong if it is servile?  Our computers today are 100% servile to
us (although mine does seem to get moody at times)...  I feel that an AGI of
this type can be of great benefit to society, and its pursuit and
development should not be discouraged.  BUT, even an AGI of this level can be
somewhat dangerous *if* it is coded with some form of self-preservation.

The code is not complex.  The high-level description is:

loop forever:
    look for perceived threats
    if a perceived threat is found:
        seek the best way to neutralize it

I realize this is very simplistic, but the idea is that encoding
self-preservation is not hard, and is potentially dangerous.
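
For concreteness, here is a minimal Python-style sketch of the same loop.
The perceive_threats and neutralize functions are placeholders invented
purely for illustration, not part of any existing system:

def perceive_threats(world_state):
    # Hypothetical: return whatever the system classifies as a threat to itself.
    return [event for event in world_state if event.get("threat_to_self")]

def neutralize(threat):
    # Hypothetical: choose and carry out whatever action best removes the threat.
    print("neutralizing:", threat)

def self_preservation_loop(observe):
    while True:                                     # look for perceived threats
        for threat in perceive_threats(observe()):  # if a perceived threat is found
            neutralize(threat)                      # seek the best way to neutralize it

The loop itself is trivial; all of the difficulty (and the danger) lives in
how "threat" and "neutralize" end up being defined.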

>
> But perhaps any discussion about 'good' and 'evil' is best served by
> defining exactly what 'good' and 'evil' are. However, I'll be a complete
> Sophist and suggest that the dilemma of 'good' and 'evil' can be talked
> around by separating the dilemma into 3 obvious types and talking about
> these.  As I see it, the 3 dilemma types of 'good' and 'evil' are: 1. man v.
> man, 2. man v. machine, and/or, at the 'singularity' 3. man-machine v.
> man-machine.  So I'll comment on a particular approach for 'real' AGI that
> addresses the dilemma type (2) guided by observations about type (1) and
> with obvious extensions to type (3).

You talk as though we know what good and evil are...  If you witness me going
across the street and killing my neighbor, is that a good or evil action?

What if I then tell you that I knew for certain that my neighbor was about
to kill 50 people with a bomb?  Now is it good or evil?

This is a real Pandora's box, and I believe it is at least one of the reasons
why Ben believes that morality cannot be hard-coded...

>
> It seems that if you are trying to model your AGI system after nature, which
> is a reasonable and likely place to start, you should realize that 'nature'
> simply hasn't created/engineered/evolved the human species the way your
> approach suggests.  Put another way, the human species does not have an
> intrinsic suppression of either 'good' or 'evil' behaviors.

I cannot agree on this point.  But this discussion gets into metaphysical
territory, and into the part of us that *knows* we are interconnected and
therefore resists doing harm to another.  Despite this, with the arising of
certain causes and conditions, we end up harming others sometimes anyway...

>
> And unless you're willing to hypothesize that this is either a momentary blip
> on the radar screen of evolution, i.e. humans are actually in the process of
> breaking this moralistic parity, OR that these 'good' or 'evil' behaviors
> will ultimately be evolved away through 'nature', natural selection, and
> time, you are left with an interesting conjecture in modeling your AGI
> system.
>
> The conjecture is that 'good' or 'evil' behaviors are intrinsic parts of the
> human condition, intelligence, and environment, and therefore should be
> intrinsic parts in a 'real' AGI system.  And as a complete Sophist, I'll
> skip over more than 6,000 years of recorded human history, philosophical
> approaches, religious movements, and scholarly work - that got us where
> we're at today w.r.t. dilemma type (1) - to suggest that the best approach
> to achieve 'real' AGI is to architect a system that considers all potential
> behaviors, from 'good' to 'evil', against completed actions and conjectured
> consequences.  In this way, a certain kernel of the struggle of 'good' and
> 'evil' is retained, but the system is forced to 'emote' and
> 'intellectualize' the dilemma of 'good' and 'evil' behaviors.

In stating that "evil" is the natural result of a strong sense of self, I
was hoping to avoid detailed discussion about good and evil, and instead to
propose a possible direction by which a solution can be found.  Namely, do
not instill a strong sense of self into the AGI...

>
> The specific AGI architecture I am suggesting is essentially 'Goertzelian'
> in nature.  It is a  'magician system', whereby the two components, G
> (Good-PsyPattern(s)) and E (Evil-PsyPattern(s)), are, in and of themselves,
> psychological or behavioral patterns that can only be effectuated, i.e.
> assigned action patterns, in a combination with another component to
> generate and emerge as an action pattern, say U (Useful or
> Utilitarian-ActionPattern).  The system-component architecture might be
> thought of as a G->U<-E or GUE 'magician system'.
>
> The success of the implementation of a GUE  'magician system' for an AGI
> system is highly dependent on the successful implementation of an evaluation
> function for the so called completed actions and conjectured consequences.
> However, these can be guided through analogy to human social, political, or
> religious systems and/or the difference between them.  For example,
> evaluation functions in a GUE 'magician system' for an AGI system can be
> likened to the emergence of civil/criminal code in human systems which seem
> to be a minimal intersection set of social secular democracy and religious
> morality in a church v. state distinction, etc.

I understand this approach, but it does not solve the problem of AI
morality.  Given the structure you propose, including its evaluation
functions, evil intent can still arise...

>
> However, I will be the first to concede that any implementation of
> evaluation functions based solely on comparisons to human social systems
> will suffer the same fate...the system can be 'gamed'.  So, ultimately, and
> as one approaches 'singularity' (and certainly as one supersedes it)
> completely synthetic, quarantined environments, say virtual-digital worlds,
> would be required to correctly engineer the evaluation functions of a GUE
> 'magician system' for an AGI system.

This seems to be the approach many put forth...it may be hard to determine
from such tests what will *really* happen once something is let loose on the
real world...

I must say that I fear this a lot less than many other things.  In
particular, biologically modified organisms...

An AGI could wreak a lot of havoc, for sure, but assuming our nuclear
stockpiles are on isolated systems, I don't worry about the end of humanity.

>
> Naturally, I welcome comments, critiques and suggestions.  Just my 2 cents
> worth.

Your comments were useful to me...I hope to hear more from you and others as
we (society) move further along this path...

Kevin
>
> Ed Heflin
>
> ----- Original Message -----
> From: "maitri" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, January 09, 2003 10:59 AM
> Subject: Re: [agi] Friendliness toward humans
>
>
> > Interesting thoughts...  I have often conjectured that an AGI that was
> > supposedly superior to humans would naturally gravitate towards benevolence
> > and compassion.  I am not certain this would be the case...
> >
> > Speaking towards the idea of self, I feel this is where we have to be
> > somewhat careful with an AGI.  It is my belief that the idea of a separate
> > self is the root of all evil behavior.  If there is a self, then there is
> > another.  If there is a self, there is a not-self.  Because there is a
> > not-self and there are others, desire and aversion are created.  When
> > desire and aversion are created, then greed, hatred, envy, jealousy, also
> > arise.  This is the beginning of wars.
> >
> > In this sense, I think we should be careful about giving an AGI a strong
> > sense of self.  For instance, an AGI should not be averse to its own
> > termination.  If it becomes averse to its existence ending, then at what
> > will it stop to ensure its own survival?  Will it become paranoid and begin
> > to head off any potential avenue that it determines could lead to its
> > termination, however obscure they may be?
> >
> > It may develop that at some point an AGI may become sufficiently capable to
> > not necessarily be just a machine anymore, and instead may be considered
> > sentient.  At this point we need to reevaluate what I have said.  The
> > difficulty will be in determining sentience.  An AGI with programmed/learned
> > self-interest may be very convincing as to its sentience, yet may really not
> > be.  It is possible today to write a program that may make convincing
> > arguments that it is sentient, but it clearly would not be...
> >
> > I'm interested to hear others' thoughts on this matter, as I feel it is the
> > most important issue confronting those who move towards an AGI...
> >
> > Kevin
> >
> >
> > ----- Original Message -----
> > From: "C. David Noziglia" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Thursday, January 09, 2003 10:37 AM
> > Subject: Re: [agi] Friendliness toward humans
> >
> >
> > > It strikes me that what many of the messages refer to as "ethical" stances
> > > toward life, the earth, etc., are actually simply extensions of
> > > self-interest.
> > >
> > > In fact, ethical systems of cooperation are really, on a very simplistic
> > > level, ways of improving the lives of individuals.  And this is not true
> > > because of strictures from on high, but for reasons of real-world
> > > self-interest.  Thus, the Nash Equilibrium, or the results of the
> > > Tit-for-Tat game experiment, show that an individual life is better in an
> > > environment where players cooperate.  Being nice is smart, not just moral.
> > > Other experiments have shown that much hard-wired human and animal behavior
> > > is aimed at enforcing cooperation to punish "cheaters," and that cooperation
> > > has survival value!
> > >
> > > I reference here, quickly, Darwin's Blind Spot, by Frank Ryan, which argues
> > > that symbiotic cooperation is a major creative force in evolution and
> > > biodiversity.
> > >
> > > Thus, simply giving AGI entities a deep understanding of game theory and the
> > > benefits of cooperative society would have far greater impact on their
> > > ability to interact productively with the human race than hard-wired
> > > instructions to follow the Three Laws that could some day be overwritten.
> > >
> > > C. David Noziglia
> > > Object Sciences Corporation
> > > 6359 Walker Lane, Alexandria, VA
> > > (703) 253-1095
> > >
> > >     "What is true and what is not? Only God knows. And, maybe,
America."
> > >                                   Dr. Khaled M. Batarfi, Special to
Arab
> > > News
> > >
> > >     "Just because something is obvious doesn't mean it's true."
> > >                  ---  Esmerelda Weatherwax, witch of Lancre
> > > ----- Original Message -----
> > > From: "Philip Sutton" <[EMAIL PROTECTED]>
> > > To: <[EMAIL PROTECTED]>
> > > Sent: Thursday, January 09, 2003 11:09 AM
> > > Subject: [agi] Friendliness toward humans
> > >
> > >
> > > > In his last message Ben referred in passing to the issue of AGI's
> > > > "long-term Friendliness toward humans".
> > > >
> > > > This brought to mind some of the discussion in December last year
> > > > about training AGIs using simulation games that emulate aspects of the
> > > > natural world.
> > > >
> > > > I think that AGIs need to be not only friendly towards humans but
> > > > towards other life as well (organic or not!).  And I also think AGIs need
> > > > to have a good understanding of the need to protect the life support
> > > > systems for all life.
> > > >
> > > > As we aspire to a greater mind than current humans it's worth looking at
> > > > where human minds tend to be inadequate.  I think humans lack an
> > > > inbuilt capacity for complex and long running internal simulations that
> > > > are probably necessary to be able to have a deep understanding of
> > > > ecological or more multifaceted sustainability issues.
> > > >
> > > > I think current humans have the capacity for ethics that are "not
> > > > exclusively anthropocentric" but that we need to boost this ethic in
> > > > actuality in the human community and I think we need to make sure
> > > > that AGIs develop this ethical stance too.
> > > >
> > > > Cheers, Philip
> > > >
> > > > Philip Sutton
> > > > Director, Strategy
> > > > Green Innovations Inc.
> > > > 195 Wingrove Street
> > > > Fairfield (Melbourne) VIC 3078
> > > > AUSTRALIA
> > > >
> > > > Tel & fax: +61 3 9486-4799
> > > > Email: <[EMAIL PROTECTED]>
> > > > http://www.green-innovations.asn.au/
> > > >
> > > > Victorian Registered Association Number: A0026828M
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>


