Hello Richard,

If it's not too lengthy or unwieldy to answer, or at least to give a general 
sense of why you and various researchers think so...

Why is it that in the same e-mail you can so confidently state that "ego," or 
a sense of selfhood, is not something the naive observer should expect to just 
emerge naturally as a consequence of succeeding in building an AGI (and whose 
qualities, such as altruism, will have to be specifically designed in), while 
you just as confidently state that consciousness itself will merely arise 'for 
free' as an undesigned, emergent gift of building an AGI?

I'm really curious about researchers' thinking on this and similar points.  It 
seems to lie at the core of what is so socially controversial about 
singularity-seeking in the first place.

Thanks,
~Robert S.

-------------- Original message from Richard Loosemore <[EMAIL PROTECTED]>: 
-------------- 


> albert medina wrote: 
> > Dear Sirs, 
> > 
> > I have a question to ask and I am not sure that I am sending it to the 
> > right email address. Please correct me if I have made a mistake. From 
> > the outset, please forgive my ignorance of this fascinating topic. 
> > 
> > All sentient creatures have a sense of self, about which all else 
> > revolves. Call it "egocentric singularity" or "selfhood" or 
> > "identity". The most evolved "ego" that we can perceive is in the human 
> > species. As far as I know, we are the only beings in the universe who 
> > "know that we do not know." This fundamental "deficiency" is the basis 
> > for every desire to acquire things, as well as knowledge. 
> > 
> > One of the Terminator movies described the movie's computer system as 
> > becoming "self-aware". It became territorial and malevolent, similar to 
> > the reaction many human egos have when faced with fear or threat, or 
> > when possessed by greed. 
> > 
> > My question is: AGI, as I understand your explanation of it, is when a 
> > computer gains/develops an ego and begins to consciously plot its own 
> > existence and make its own decisions. 
> > 
> > Do you really believe that such a thing can happen? If so, is this the 
> > phenomenon you are calling "singularity"? 
> > 
> > Thanks for your reply, 
> > 
> > Al 
> 
> Al, 
> 
> You should understand that no one has yet come anywhere near to building 
> an AGI, so when you hear people (on this list and elsewhere) try to 
> answer your question, bear in mind that a lot of what they say is 
> guesswork, or is specific to their own point of view and not necessarily 
> representative of other people working in this area. For example, I 
> already disagree strongly with some of the things that have been said in 
> answer to your question. 
> 
> Having said that, I would offer the following. 
> 
> The "self" or "ego" of a future AGI is not something that you should 
> think of as just appearing out of nowhere after a computer is made 
> intelligent. In a very important sense, this is something that will be 
> deliberately designed and shaped before the machine is built. 
> 
> My own opinion is that the first AGI systems to be built will have 
> extremely passive, quiet, peaceful "egos" that feel great empathy for 
> the needs and aspirations of the human species. They will understand 
> themselves, and know that we have designed them to be extremely 
> peaceful, but will not feel any desire to change their state to make 
> themselves less benign. After the first ones are built this way, all 
> other AGIs that follow will be the same way. If we are careful when we 
> design the first few, the chances of any machine ever becoming like the 
> standard malevolent science fiction robots (e.g., the one in Terminator) 
> can be made vanishingly small, essentially zero. 
> 
> The question of whether these systems will be "conscious" is still open, 
> but I and a number of others believe that consciousness is something 
> that automatically comes as part of a certain type of intelligent system 
> design, and that these AGI systems will have it just as much as we do. 
> 
> The term "singularity" refers to what would happen if such machines were 
> built: they would produce a flood of new discoveries on such an immense 
> scale that we would jump from our present technology to the 
> technology of the far future in a matter of a few years. 
> 
> Hope that clarifies the situation. 
> 
> 
> 
> Richard Loosemore 
> 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56630632-482e33
