OK Richard, let's talk about your scenario... define 'mad', 'irrational' and 'improbable scenario'? Let's look at some of Charles Darwin's and Richard Dawkins's theories as examples in this 'improbable' scenario. Ever heard of a meme, or of memetic theory? I would say that, based on these and the very probable scenario of AI combined with cognitive thought processes, the chances for 'survival of the fittest' start to look very rosy indeed. Remember, it's one superpower against another, whether that be man against man or country against country. Furthermore, you need to look at the bigger picture... the ultimate goal here is, as A. Yost mentioned, dollar signs and superpower status for the organisations involved.

Candice

> Date: Mon, 22 Oct 2007 19:55:43 -0400
> From: [EMAIL PROTECTED]
> To: [email protected]
> Subject: Re: [singularity] QUESTION
>
> candice schuster wrote:
> > I think you are very right... why build something that in turn could lead to our destruction, not that we aren't on the downward spiral anyhow. We need to perhaps ponder the thought... why in the first place? We should be gaining super intelligence on an individual level, which is not hard to achieve: build something that would aid our progress, but not something that you give free 'thought rein' to.
>
> Why not address the scenario I described, rather than just contradict it and insert a mad, irrational, improbable scenario without explaining how it could occur?
>
> What is the matter with people?
>
> > Perhaps we are these robots in the first place... ever thought of that?
>
> > > Subject: RE: [singularity] QUESTION
> > > Date: Mon, 22 Oct 2007 11:59:51 -0700
> > > From: [EMAIL PROTECTED]
> > > To: [email protected]
> > >
> > > ...but the "singularity" advanced by Kurzweil includes the integration of human brains with digital computation... or computers (http://www.ece.ubc.ca/~garyb/BCI.htm, http://wtec.org/bci/). Since war is the pampered offspring of the technosphere, it is highly likely that we can expect to see relatively rapid development of "singular" technologies in defense or "offense" industries (if indeed the technology has the potential to be developed/emerge). Those that have lots of $ (oil exec control of gov), "direct mental access" to high-speed digital computation, expanded memory storage and retrieval, and access to advanced weapon systems will also have enormous amounts of power. I think there is cause for monitoring who and where singular (brain-digital interface) technologies are being developed and how they evolve in the coming years. Supersapience is likely to lead to superpower.
> > >
> > > A. Yost
> > >
> > > -----Original Message-----
> > > From: Richard Loosemore [mailto:[EMAIL PROTECTED]]
> > > Sent: Monday, October 22, 2007 11:15 AM
> > > To: [email protected]
> > > Subject: Re: [singularity] QUESTION
> > >
> > > albert medina wrote:
> > > > Dear Sirs,
> > > >
> > > > I have a question to ask and I am not sure that I am sending it to the right email address. Please correct me if I have made a mistake. From the outset, please forgive my ignorance of this fascinating topic.
> > > >
> > > > All sentient creatures have a sense of self, about which all else revolves. Call it "egocentric singularity" or "selfhood" or "identity". The most evolved "ego" that we can perceive is in the human species. As far as I know, we are the only beings in the universe who "know that we do not know." This fundamental "deficiency" is the basis for every desire to acquire things, as well as knowledge.
> > > >
> > > > One of the Terminator movies described the movie's computer system as becoming "self-aware". It became territorial and malevolent, similar to a reaction which many human egos have when faced with fear or threat, or when possessed by greed.
> > > >
> > > > My question is: AGI, as I perceive your explanation of it, is when a computer gains/develops an ego and begins to consciously plot its own existence and make its own decisions.
> > > >
> > > > Do you really believe that such a thing can happen? If so, is this the phenomenon you are calling the "singularity"?
> > > >
> > > > Thanks for your reply,
> > > >
> > > > Al
> > >
> > > Al,
> > >
> > > You should understand that no one has yet come anywhere near to building an AGI, so when you hear people (on this list and elsewhere) try to answer your question, bear in mind that a lot of what they say is guesswork, or is specific to their own point of view and not necessarily representative of other people working in this area. For example, I already disagree strongly with some of the things that have been said in answer to your question.
> > >
> > > Having said that, I would offer the following.
> > >
> > > The "self" or "ego" of a future AGI is not something that you should think of as just appearing out of nowhere after a computer is made intelligent. In a very important sense, this is something that will be deliberately designed and shaped before the machine is built.
> > >
> > > My own opinion is that the first AGI systems to be built will have extremely passive, quiet, peaceful "egos" that feel great empathy for the needs and aspirations of the human species. They will understand themselves, and know that we have designed them to be extremely peaceful, but will not feel any desire to change their state to make themselves less benign. After the first ones are built this way, all other AGIs that follow will be the same way. If we are careful when we design the first few, the chances of any machine ever becoming like the standard malevolent science fiction robots (e.g. the one in Terminator) can be made vanishingly small, essentially zero.
> > >
> > > The question of whether these systems will be "conscious" is still open, but I and a number of others believe that consciousness is something that automatically comes as part of a certain type of intelligent system design, and that these AGI systems will have it just as much as we do.
> > >
> > > The term "singularity" refers to what would happen if such machines were built: they would produce a flood of new discoveries on such an immense scale that we would be jumped from our present technology to the technology of the far future in a matter of a few years.
> > >
> > > Hope that clarifies the situation.
> > >
> > > Richard Loosemore
----- This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=4007604&id_secret=56558045-b66d32
