Kevin,

I woke up this morning thinking about future AGIs' "sense of self", and I
decided that you are right.

OK ok that's an overstatement ;-)

Actually, I decided that you are *partly* right, and that I need to adjust
my perspective to be closer to yours...

Consider an analogy.  In human culture, there is a rigid distinction between
"man" and "woman."  This makes sense, because there are very few
intermediate cases.  True hermaphroditism is about one in a million; and
"ambiguous genitalia" are seen in maybe one in 10,000 - 100,000....  On the
other hand, imagine a society in which hermaphroditism and ambiguous
genitalia were the rule, so that there was a commonly experienced continuum
between men and women.  In that case, the man/woman distinction would no
longer be nearly so interesting.  Minds would stop thinking in terms of that
rigid distinction.

However, even though the distinction between man and woman is real, we still
overemphasize it.  We make the distinction too rigid, and have trouble
dealing with transvestites, transsexuals and the like.  We don't want to
admit the existence of phenomena that break the rigid man/woman distinction
we've created.  Even though this distinction is a reification our minds have
created, a pattern in the world, we treat it sometimes implicitly as though
it were an absolute.

Similarly, in human physical life, there is a fairly rigid distinction
between self and nonself.  This is part of being a physical organism.  (Note
that the immune system makes this distinction too -- one of the key concepts
in immunology is self/nonself discrimination!!)  There are things that we
directly sense and control (our arms and legs, for instance), and things
that we don't (trees, other people, the moon, ...).  This distinction is an
important one for us to have.  Even a mystic needs to know that it's easier
for him to wave his arm than to telekinetically alter the orbit of the moon.

However, we reify the self/nonself distinction beyond the point where it's
useful.  In fact there are many parts of ourselves that we control very
poorly -- our romantic emotions, our digestive systems, etc.  And there are
parts of the outside world ("other") that we are closer to than modern
culture habitually admits -- nature, family, etc.

Now, an AGI program may well not be embodied in the same sense as we are.
Even if it has control over robot bodies, the same AGI could have control
over MANY robot bodies.  It could also get direct sensory input from robots
controlled by other AGIs, from weather satellites, from medical sensors in
use in hospitals or in free-ranging individuals, from all over the Internet,
etc.  It will have more different ways of affecting the world than we have,
too -- slightly tweaking the parameters of a satellite here, sending an
e-mail there, and so on.

For such a distributedly-embodied AGI program, the self/nonself distinction
will objectively not be as rigid as for a human.  The distinction between
"that which I directly sense and control" and its opposite is far less
rigid.  There is more of a continuum between the extremes of "directly
sensible/controllable" and "fully external to me."

So, I'd expect that in an AGI of that nature, a completely different
psychology would develop, in which a rigid heuristic distinction between
"self" and "nonself" would not play a role, but would be replaced with
different concepts.  The nature of this psychology is something I'll be
thinking about more, in spare moments, over the next few weeks...

I don't believe that eliminating the rigid self/nonself distinction from an
entity's psychology will eliminate all evil from that entity.  I think that
is an oversimplification.  But it's certainly an interesting perspective to
think about further....

I also wonder whether a totally different psychology might lead to new types
of evil that humans can't anticipate, simply because of our unfamiliarity
with them.
Maybe most human evil is rooted in the self/nonself distinction, but AGI
evil will be rooted in other sorts of psychodynamics...  ???


-- Ben Goertzel






> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of maitri
> Sent: Friday, January 10, 2003 9:13 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Friendliness toward humans
>
>
> Well, that's a rather thorny question, isn't it?
>
> I will have a hard time answering your question.  I cannot even determine
> exactly where *my* own sense of self arises, which is interesting since I
> haven't been able to find anything I can call "self".  Yet, the sense of
> self persists...
>
> Babies seem to have a sense of self, although it is much less present than
> in adults, suggesting that life experience reinforces this sense
> of self...
>
> Regarding an AGI's sense of self...this is even thornier...
>
> There are two different paths which are apparent to me..
>
> 1) an AGI grows and grows and self-modifies, until it reaches a point
> where it will give birth to a sense of self in the same vein that humans
> have a sense of self.
>
> 2) an AGI is programmed with or learns self-like behaviors, which are not
> really akin to a human sense of self, but which make the program act as if
> it really had a sense of self.
>
> I am a little less worried about 1 at this point because I am not
> convinced of its plausibility.  Should it happen, we will have to rethink
> a lot of things, as we will now be dealing with a life form.  There were
> some Star Trek episodes that dealt with this issue rather well in relation
> to Commander Data.  Such a being is potentially more dangerous and also
> potentially more benevolent than the run-of-the-mill AGI...IMO
>
> Number 2 is what i worry about.  Let's say an AGI is not programmed with a
> sense of self per se, but can be taught. I tell it:
>
> "You are distinct and separate from the world external to you".
>
> "Death is undesirable for distinct entities that are conscious of their
> distinction."
>
> This alone could be enough to make the system act in rather erratic or
> self-interested ways that are potentially destructive, depending on how it
> perceives threats.  Another area of concern is desire fulfillment, which
> really does not require any self-awareness, only goals directed towards
> self-interest.
>
> I tell the AGI
>
> "it is important to be happy"
>
> "fulfillment of desires makes us happy"
>
> Again, undesirable behaviors can and most likely will result..
>
> Ben, I know you have thought these types of examples out in detail.
> Novamente is encoded with pleasure nodes and goal nodes etc.
> Clearly there
> is a lot of unpredictability as to what will emerge.  I worry less about a
> lab version trained by Ben Goertzel than an NM available to anyone.  We
> all represent our parents' training to a certain degree, and with an AGI
> this will be much, much more so.
>
> I wish I could be more clear on this... I am fumbling a bit...
>
> As painful as it potentially is, it seems we won't know the answers until
> something emerges.  Just as complexity theory states, the parts don't mean
> much except in relation to the whole.  So until something emerges from the
> sum of the parts, everything is conjecture in relation to morality...
>
> Kevin





-------
To unsubscribe, change your address, or temporarily deactivate your
subscription,
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
