****
 This will occur before the dates predicted by experts in the field of Singularity prediction, because their predictions assume a constant Moore's Law and they overestimate the computational capacity required for human-level AGI.  Their dates range from 2016 to 2030, depending on whether they use the 18-month doubling figure or the 12-month figure.  Moore's Law is currently at 9 months and falling.  My calculations based on a falling Moore's Law put the Singularity on April 28th, 2005.
 
 This human-level AGI in a computer will be quite superior to a human because of several advantages that machines have over gray matter.  These advantages are: upgradability, self-improvement through redesign, self-editability, reliability, functional parallelism, accuracy, and speed.  This superiority will be quantitative, not qualitative.  It will be superior but completely comprehensible to us.  The belief in a radically different form of advanced thought incomprehensible to present humans is philosophical in nature, not based on evidence.
**** 
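
A minimal sketch of the "falling Moore's Law" arithmetic the quoted passage appears to assume: if each successive doubling takes a constant fraction of the time of the one before it, the total time to arbitrarily many doublings is a convergent geometric series, which is how a shrinking doubling time can yield a specific finite date.  The 9-month starting figure is taken from the quote; the shrink factor of 0.5 is a made-up illustration, so this does not reproduce the quoted April 28th, 2005 date.

# Hypothetical illustration; the shrink factor is assumed, not from the quote.
def months_until_divergence(first_doubling=9.0, shrink=0.5, tol=1e-9):
    """Sum the geometric series of ever-shorter doubling times.

    If each doubling takes `shrink` times as long as the one before,
    the total time to infinitely many doublings converges to
    first_doubling / (1 - shrink).
    """
    total, step = 0.0, first_doubling
    while step > tol:
        total += step
        step *= shrink
    return total

print(months_until_divergence())  # -> ~18.0 months under these assumed numbers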
 
 
Mike,
 
Is it really true that Moore's Law is at 9 months and falling?  Do you have some references on this?
 
Even if this were the case, it wouldn't cause the Singularity by 2005.  Processing power is not the only bottleneck!
 
It's true that with faster, cheaper processing power, more people will be able to experiment with more significant AGI systems. 
 
But even with a correct AGI design, and adequate funding, computing power, and staffing, I think it's going to take anyone several years to get from AGI design to a teachable human-level system.  That is the nature of engineering complex software systems based on complex ideas.   And of course it may take some time to get from a teachable human-level system to a superhuman-level system as well!!!   ;-p
 
So, I think that the most wildly optimistic projection we can rationally hope for is superhuman intelligence (the "Singularity") by 2010.
 
But this could only be achieved if *everything goes right*...  And of course, I don't know how to estimate the odds that everything goes right.  An example of "everything going right" would be: one of the currently-in-development AGI designs (say, Novamente or A2I2 or NARS) turns out to be almost entirely correct, AND gets adequately funded, AND teaching a human-level AGI to productively self-modify toward unlimited intelligence turns out to be a matter of a couple of years, not a decade.  This is a lot of ANDs, Mike -- an awful lot of ANDs ...
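
To make the "lot of ANDs" point concrete: independent conditions that must all hold multiply their probabilities, so even individually generous odds compound into a small joint probability.  The three numbers below are made up purely for illustration; they are not estimates from this email.

# All three probabilities are hypothetical, chosen only to illustrate
# how conjunctive ("AND") conditions multiply down.
p_design_correct = 0.3  # some in-development AGI design is nearly right
p_funded = 0.5          # that project gets adequate funding
p_fast_takeoff = 0.4    # self-modification takes a couple of years, not a decade

p_everything_goes_right = p_design_correct * p_funded * p_fast_takeoff
print(f"P(everything goes right) = {p_everything_goes_right:.3f}")  # -> 0.060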
 
-- Ben
 
