Wild idea? Far from it, I would say... that is evolution evolving... a bit like 
Emotional Intelligence!


Subject: RE: [singularity] QUESTION
Date: Tue, 23 Oct 2007 15:05:57 -0700
From: [EMAIL PROTECTED]
To: [email protected]



Candice and others,
 
Here's a wild idea:  
 
Simply because we are here, processing information in ways that make us 
believe we are contemplating why we are here (contemplating the purpose of life 
and the universe), it logically follows that the energy/mass system from which 
we have been assembled (by the information processing of DNA) has the potential 
to be self-aware, or metacognitive.  Other than a human making the assertion 
that consciousness is ubiquitous, we have little evidence that our particular 
type of consciousness exists outside of our species.  The assertion, then, is 
that the energy/mass system of (proximately) the universe has embedded in it 
the property to become self-aware... the potential is/was there from the 
beginning (if there ever was a beginning).  The bad part about carbon-based 
life is that it can go extinct, especially if environmental conditions move 
into unfavorable territory.  A nearby gamma ray burst (within 100 light years) 
would annihilate everything, both biosphere and technosphere.  We detect 
~300,000 GRBs every year, but lucky for us they have been too far away to force 
their brand of natural selection on our planet.
 
So what if the potential also exists for life and consciousness to emerge 
within the technosphere?  I think I can see it.  Perhaps a technological 
organism emerges that is independent of DNA's base-4 computing, capable of the 
information processing capacity (circa 2050+) described by Kurzweil, of 
self-replication and self-modification, and possessing a new and unique type 
of consciousness... SuperSapient.  Perhaps this new life form (techneria?) is 
then able to overcome the meat-space rate of evolution and radiate spatially 
across the planet, and beyond it to other planets and galaxies, in whatever 
form is best for a given set of conditions.  The science of ecology would be 
forever transformed.  Then, perhaps, consciousness would be more ubiquitous.  
It would certainly raise the odds of consciousness and metacognition 
persisting into the future, in more places, against the forces of natural 
selection.  In the long term, GRBs appear to be the great leveler and natural 
selector, so the stakes are pretty high.
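The redundancy point above can be made concrete with a toy calculation (my own 
illustration, not from the original message, with a made-up per-site risk): if 
consciousness exists at n independent sites and each has probability p of 
being sterilized (say, by a nearby GRB) over some long interval, the chance 
that every site is wiped out is p**n, which shrinks rapidly as n grows.

```python
def survival_probability(p_site_loss: float, n_sites: int) -> float:
    """Probability that at least one site survives, assuming each of
    n_sites is lost independently with probability p_site_loss."""
    return 1.0 - p_site_loss ** n_sites

# One site with a 10% loss risk survives 90% of the time;
# ten such independent sites survive with probability 1 - 0.1**10,
# i.e. all but one chance in ten billion.
print(survival_probability(0.1, 1))   # 0.9
print(survival_probability(0.1, 10))
```

The independence assumption is doing the work here: sites close enough to be 
hit by the same burst would not get the full benefit.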
 
The purpose of it all is for the universe to maintain and grow its 
self-awareness capabilities.  The level of omniscience present at any one time 
in the universe is the integral across all forms of information processors, so 
as the rate of information processing grows, so too does the asymptote of 
omniscience.  Once we humans are able to merge our brains' information 
processing with some paradigm of exo-bio computation capability, we will most 
likely experience a leap in the potential for supersapient, non-DNA, conscious 
entities to emerge... and omniscience increases proportionately.
 
One argument for this new life form being benign to humans might reside in the 
Integral philosophies advanced by Ken Wilber.  This new life form, if it is 
truly supersapient, would automatically understand the source of its origin 
and the integral nature of higher levels of consciousness, and would 
understand its progenitors better than we know ourselves.  It might just have 
reverence for the "father of all machines."
 
Andrew Yost


From: candice schuster [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 23, 2007 1:27 PM
To: [EMAIL PROTECTED]
Subject: RE: [singularity] QUESTION
Richard,

Thank you for your response.  I have read your other posts and understand what 
'the story' is, so to speak.  I understand where you are coming from, and when 
I talk about evolution theories it is not to throw a 'stick in the wheel', so 
to speak; it is to think with a universal mind.  From my viewpoint I want to 
know what the ultimate goal is... I understand quite clearly that if you 'give 
a dog a bone' with regards to AGIs that is all it will have, but I battle to 
understand the point.  You say... advanced technology... in what sense?  Is 
this AGI going to help me think quicker?  Is this AGI going to reap massive 
benefits for my company?  Is this AGI going to be my best friend?  Is this AGI 
purely going to be a soldier?  Is this AGI going to help me understand logic?  
Do you see where I am going with this?  I understand technology and I 
understand moving at a fast pace; what I do not understand is the benefit.  
Perhaps you and AI live in the Science Fiction world and it's not the other 
way round?

Candice

> Date: Tue, 23 Oct 2007 15:14:05 -0400
> From: [EMAIL PROTECTED]
> To: [email protected]
> Subject: Re: [singularity] QUESTION
>
> candice schuster wrote:
> > Ok Richard, let's talk about your scenario... define 'Mad', 'Irrational'
> > and 'Improbable scenario'?
> >
> > Let's look at some of Charles Darwin's and Richard Dawkins' theories as
> > some examples in this improbable scenario. Ever heard of a Meme? Or
> > the Memetic Theory? I would like to say that based on these, and the
> > very probable scenario of AI combined with cognitive thought processes,
> > the chances of 'survival of the fittest' start to look very rosy
> > indeed. Remember it's one Superpower against another, whether that be
> > man against man or country against country.
> >
> > Furthermore you need to look at the bigger picture... the ultimate goal
> > here is, as A. Yost mentioned, dollar signs and Super Power for the
> > organisations involved.
> >
> > Candice
>
> Yup, know all about memes.
>
> Suppose that the first AGI is completely friendly (grant me that
> assumption), and that it is encouraged to quickly self-improve until it
> can think at a thousand times human speed.
>
> It says: "In order to ensure that the world stays safe, I need to take
> action now to modify all the other AGI projects on the planet, to make
> sure that they all have exactly the same motivation system that I have,
> and to ensure that no further developments in the future will lead to
> any unfriendly, malevolent systems. I have the technology to do this
> quietly, and peacefully, without harming the knowledge of these other
> systems (to the extent that they have knowledge or self-awareness at
> all). Would you like me to go ahead and do this?"
>
> If you say "yes", nothing much will appear to happen, but from that
> point on all the rest of the AGI systems in the world will act, in
> effect, as one large friendly system.
>
> Now, here is the response to your specific question: from that point
> on, there is not one, single aspect of evolutionary systems that applies
> any more. There are no darwinian pressures, no gene reassortment, no
> meme propagation, no commercial pressures, no genotype/phenotype
> distinctions, nothing. Every single aspect of the huge mechanism that
> we call "evolution" or "survival of the fittest" does not apply. All of
> that stuff that has been going on with the bits of DNA competing with
> one another to make higher and higher organisms to serve their needs --
> all completely and utterly irrelevant to the further development of AGI
> systems.
>
> So, you can cite evolutionary pressures, but you have to be very precise
> about what context you are imagining them to be operating in: after
> the first AGI is created, it all becomes irrelevant.
>
> Before the first AGI is created, there are still some pressures, but I
> have given some reasons (in a separate post) why we could still be in a
> situation where most of those pressures are either nullified or simply
> will not have any time to operate.
>
> Granted, there are assumptions in this scenario .... but we should be
> talking about those assumptions explicitly, and in enormous detail,
> rather than trying to shoot down ideas about the future by simply citing
> the pressures of evolution or commercial competition. When we discuss
> the underlying details we determine whether or not any of those
> "evolutionary" considerations even have a chance of playing a role, so
> we cannot shoot down the arguments by using the idea of evolution as a
> weapon.
>
> Richard Loosemore.
>
> P.S. Sorry that I seemed so testy last night: I should have been more
> diplomatic to both yourself and A. Yost. I had just spent a few days
> going over basic arguments that are decades old, for the Nth time, for
> the benefit of people who had not seen them before, and the sheer
> pointlessness of the effort just started to get to me.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56978587-f88805
