Kevin,

I am not sure that we mean the same thing by "sense of self."

I wonder if you could clarify your definition.

Ben

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of maitri
> Sent: Thursday, January 09, 2003 4:19 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Friendliness toward humans
>
>
>
> > 1)
> > Since we humans will be teaching the AGI, and it will be learning by
> > interacting with humans and reading human literature, it will absorb
> > something of the human sense of self
>
> I agree that our embodiment, along with our senses, is a primary source of
> our sense of self.  As I look out my window, I see trees and houses.  When
> I look down, I see my legs and hands.  So there is *me* here, and all that
> other *stuff* out there.  So I *must* be a separate self.
>
> An intelligent person thinking more deeply realizes that what we call the
> self is made totally of non-self items.  So setting aside metaphysical
> concepts for the moment, I can even practically see that I am made of stars
> and oceans and clouds and dirt and animals and air and conversations, etc.
> So the idea of non-self is not so great a leap... but I digress...
>
> For a computer, the idea of self will be more nebulous, for sure.  But I am
> not comforted by the idea that just because it has a more disparate self,
> it will be in any sense less harmful.  In fact, if its ego equates with its
> size, it may even be worse than humans!! ;)
>
> I'm not convinced that conversing with humans will make it more human, or
> that it will develop a sense of self.  It's all in the code, as I see it.
> How is the structure set up?  Are there links where the idea of self
> preservation can develop?  The machine does not *really* need a sense of
> self to be dangerous; just having an algorithm that encodes self-protective
> actions will be enough to spawn potentially dangerous behavior... IMO
>
> I'm not convinced that a sense of self is required to develop an AGI.  Of
> course, a computer that understands that it's a computer, and that humans
> and the rest of the world are "out there", is more useful than one that
> doesn't understand this most basic of concepts.  But this level of
> understanding does not constitute a *self* that I would be worried about...
>
> I think an AGI can exceed humans in many or most ways, yet still have no
> sense of self or self-preservation...
>
> In fact, we have computers that do this today, but only in specific
> domains.  I am stating that I think the same is possible for a more general
> intelligence as well...
>
> But I think we can all admit that once an AGI grows and grows, and
> especially if it can self-modify, something tantamount to a *self* or
> consciousness might emerge...
>
> Kevin

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
