This will occur before the dates predicted by the experts in the field of Singularity prediction, because their predictions assume a constant Moore's Law and they overestimate the computational capacity required for human-level AGI. Their dates vary from 2016 to 2030, depending on whether they use the 18-month figure or the 12-month figure. Moore's Law is currently at 9 months and falling. My calculations, based on a falling Moore's Law, put the Singularity on April 28th, 2005.
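The arithmetic behind such projections isn't shown in the message, but the way a predicted date scales with the assumed doubling time can be sketched. The capacity figures below are purely illustrative assumptions for the sketch, not numbers taken from this thread:

```python
import math

def years_until(target_ops, current_ops, doubling_time_years):
    """Years until capacity reaches target_ops, assuming capacity
    doubles every doubling_time_years (a pure extrapolation)."""
    doublings_needed = math.log2(target_ops / current_ops)
    return doublings_needed * doubling_time_years

# Illustrative assumptions (NOT from the thread): suppose human-level
# AGI needs 1e16 ops/sec and a contemporary machine delivers 1e11.
TARGET, CURRENT = 1e16, 1e11

for months in (18, 12, 9):
    years = years_until(TARGET, CURRENT, months / 12)
    print(f"{months}-month doubling: {years:.1f} years")
# → 18-month doubling: 24.9 years
# → 12-month doubling: 16.6 years
# →  9-month doubling: 12.5 years
```

Under these (hypothetical) capacity assumptions, shortening the doubling time from 18 to 9 months halves the wait but does not by itself collapse it to two years; a near-term date also requires a much smaller capacity gap than the one assumed here.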
This human-level AGI in a computer will be quite superior to a human because of several advantages that machines have over gray matter. These advantages are: upgradability, self-improvement through redesign, self-editability, reliability, functional parallelism, accuracy, and speed. This superiority will be quantitative, not qualitative. It will be superior but completely comprehensible to us. The belief in a radically different form of advanced thought, incomprehensible to present humans, is philosophical in nature, not based on evidence.
****
Mike,
Is it really true that Moore's Law is at 9 months and falling? Do you have some references on this?
Even if this were the case, it wouldn't cause the Singularity by 2005. Processing power is not the only bottleneck!
It's true that with faster, cheaper processing power, more people will be able to experiment with more significant AGI systems.
But even with a correct AGI design, and adequate funding, computing power and staffing, I think it's going to take anyone several years to get from AGI design to teachable human-level system. That is the nature of engineering complex software systems based on complex ideas. And of course it may take some time to get from teachable-human-level system to superhuman-level system as well !!! ;-p
So, I think that the most wildly optimistic projection we can rationally hope for is superhuman intelligence (the "Singularity") by 2010.
But this could only be achieved if *everything goes right*... And of course, I don't know how to estimate the odds that everything goes right. An example of "everything going right" would be: one of the currently-in-development AGI designs (say, Novamente or A2I2 or NARS) turns out to be almost entirely correct, AND gets adequately funded... and teaching a human-level AGI to productively self-modify toward unlimited intelligence turns out to be a matter of a couple of years, not a decade. This is a lot of ANDs, Mike -- an awful lot of ANDs...
--
Ben