On Sat, Jan 17, 2015 at 8:30 AM, Tim Tyler via AGI <[email protected]> wrote:

>  - http://edge.org/response-detail/26066
...
> The idea that the growth of intelligent machines will be
> inherently self-limiting, due to the lack of any new information
> to learn once the machines become as smart as humanity, seems
> stupid to me. There's a whole universe out there, brimming with
> information. Machines can learn by trial-and-error - not just
> via instructional learning from human mentors. Chess computers
> didn't stop improving when they reached human-level competence.
> Nor is it likely that other types of intelligent machine will do so.

de Grey is arguing against the scenario where a recursively
self-improving AI in a box goes FOOM! Intelligence is limited by
knowledge and computing power. An isolated self-improving program
gains neither.

If you want to consider the risks of AI you need to examine the limits
at which AI can learn and acquire computing power. A chess playing
computer gains knowledge rapidly at first from the humans that
programmed it. After that it can learn by playing against itself, but
only up to the limits of the computational resources needed to store
and apply what it has learned. Progress is incremental, not
exponential, to use de Grey's terms. Deep Blue had less chess
knowledge than Kasparov, but was able to consider 200 million board
positions per second vs. about 3 for a human.

Most of what AI in general already knows comes from humans. AI cannot
learn human knowledge faster than humans can communicate, about 5-10
bits per second, but that is faster than anything else. Once all human
knowledge has been acquired, the rate will slow to the speed at which
it can do experiments. For example, if the question is what
interventions will help humans live longer, it will take decades
to do experiments that yield one bit of information. That is why we
know so little about this subject. No matter how smart an AI is, it
can't do any better.

The other question is computing power. Currently, world computing
capacity is 10^20 operations per second (OPS) and 10^22 bits of
storage. These are both increasing by a factor of 10 every 5 years,
which is 20 times the rate of human reproduction. Global human
computing capacity (10^10 brains) is 10^26 OPS and 10^24 bits. At the
current rate, we will surpass both of these in 2045. The computing
power of the biosphere is 10^33 DNA-RNA-amino acid OPS and 10^37 bits
of DNA based memory. At the current rate, we will surpass these in
2090.
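The crossover years follow from straightforward extrapolation: each target is some number of orders of magnitude away from the 2015 baseline, and growth is one order of magnitude per 5 years. A quick sketch (all baseline and target figures are taken from the paragraph above; the helper function itself is just illustrative):

```python
START_YEAR = 2015       # baseline: 10^20 OPS, 10^22 bits of storage
YEARS_PER_TENFOLD = 5   # capacity grows 10x every 5 years

def crossover_year(current_exp10, target_exp10):
    """Year when capacity starting at 10^current reaches 10^target."""
    return START_YEAR + (target_exp10 - current_exp10) * YEARS_PER_TENFOLD

# Human level: 10^26 OPS (from 10^20) and 10^24 bits (from 10^22).
# OPS is the binding constraint.
human = max(crossover_year(20, 26), crossover_year(22, 24))

# Biosphere level: 10^33 OPS and 10^37 bits of DNA-based memory.
# Here storage is the binding constraint.
biosphere = max(crossover_year(20, 33), crossover_year(22, 37))

print(human, biosphere)  # 2045 2090
```

Note that the later of the two crossovers (OPS vs. storage) sets the date, which is why the biosphere figure is driven by its 10^37 bits of DNA memory rather than its OPS.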

After surpassing human level, improvement will depend on experiments
that can be done quickly. If the question is how to acquire the atoms
and energy needed for computation, the learning rate will be one bit
per generation, which will favor small, fast replicators with short
life spans. Biological computation is already near the thermodynamic
limit of 10^-19 J per operation (10^9 better than silicon and 10^4
better than the brain), but uses only 0.1% of the solar energy
reaching the Earth. The best we can do within our solar system without
speeding up the rate at which the sun burns hydrogen, while capturing
all of its energy, is about 10^48 OPS at the CMB temperature of 3 K.
At the current
rate of Moore's Law we will surpass that in 2155.
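The 10^48 figure can be sanity-checked against the Landauer limit (kT ln 2 per bit erased) at 3 K, applied to the Sun's entire output. A back-of-the-envelope sketch; the solar luminosity and Boltzmann constant below are standard physical values, not from the text:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
SOLAR_LUMINOSITY = 3.8e26   # W, total power output of the Sun
T_CMB = 3.0                 # K, radiating waste heat at the CMB temperature

# Landauer limit: minimum energy to erase one bit at temperature T.
energy_per_op = K_B * T_CMB * math.log(2)   # ~2.9e-23 J per operation

# Dividing the Sun's power by that cost gives roughly 10^48 to 10^49 OPS.
max_ops = SOLAR_LUMINOSITY / energy_per_op

# Moore's-law extrapolation from 10^20 OPS in 2015, 10x every 5 years:
# reaching 10^48 takes 28 orders of magnitude, i.e. 28 * 5 = 140 years.
year = 2015 + (48 - 20) * 5
print(year)  # 2155
```

The exact answer lands slightly above 10^48, but at one order of magnitude of precision the 2155 date is insensitive to that difference.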

-- 
-- Matt Mahoney, [email protected]

