Longevity researcher Aubrey de Grey recently argued against
self-improving machines, as follows:

"I quite strongly suspect that recursive self-improvement is
 mathematically impossible. In analogy with the so-called
 "halting problem" concerning determining whether any program
 terminates, I suspect that there is a yet-to-be-discovered
 measure of complexity by which no program can ever write
 another program (including a version of itself) that is
 an improvement.

 The program written may be constrained to be, in a precisely
 quantifiable sense, simpler than the program that does the
 writing. It's true that programs can draw on the outside
 world for information on how to improve themselves—but I
 claim (a) that that really only delivers far-less-scary
 iterative self-improvement rather than recursive, and (b)
 that anyway it will be inherently self-limiting, since
 once these machines become as smart as humanity they
 won't have any new information to learn."

 - http://edge.org/response-detail/26066

Standard complexity metrics (e.g. Kolmogorov complexity) do apparently
have the feature that Aubrey is looking for. However, arguments
about self-improvement in a cognitive vacuum seem generally
reminiscent of arguments about angels dancing on pinheads - and
both types of argument are irrelevant. Intelligent machines
typically exist in a complex world, and interact with it.
Under those circumstances, organisms can of course produce
more capable descendants - we have an existence proof
demonstrating that this happens.
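The complexity point can be made precise: the standard invariance-style
bound says that for any fixed program f, K(f(x)) <= K(x) + c_f, so a
closed system cannot manufacture complexity out of nothing. But as soon
as the writer can read from its environment, that bound constrains
nothing. Here's a minimal sketch of the idea - the function name and the
use of os.urandom as a stand-in for sensor data are my own illustration,
not anything from the post:

```python
# Sketch: a tiny program-writer whose output program can carry more
# Kolmogorov complexity than the writer itself, because the extra
# complexity is drawn from the outside world rather than generated
# internally.

import os

def write_successor(environment_data: bytes) -> str:
    # Emit a new program that embeds data read from the environment.
    # The emitted program's complexity is bounded below by the
    # complexity of environment_data, which can be arbitrarily large,
    # while this writer stays fixed in size.
    return f"DATA = {environment_data!r}\nprint(len(DATA))\n"

# Stand-in for sensor input from the outside world.
observation = os.urandom(32)
successor = write_successor(observation)
print(len(successor) > 32)  # the successor embeds all 32 observed bytes
```

The emitted string is itself a runnable Python program, which is the
whole trick: the "improvement" is smuggled in from outside, exactly as
the iterative-self-improvement clause of Aubrey's argument concedes.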

The idea that the growth of intelligent machines will be
inherently self-limiting, due to the lack of any new information
to learn once the machines become as smart as humanity, seems
stupid to me. There's a whole universe out there, brimming with
information. Machines can learn by trial-and-error - not just
via instructional learning from human mentors. Chess computers
didn't stop improving when they reached human-level competence,
and it seems unlikely that other types of intelligent machine will either.

Aubrey de Grey seems to imagine a future filled with immortal
fleshy robots. That seems rather different from the future I
imagine - in which most remaining humans are liberated from
their bodies and get sucked into the matrix. However, if these
are Aubrey's reasons for failing to fully incorporate the rise
of intelligent machines into his world view, I don't think we
need to take him seriously. I know he's not a researcher in
the field, but even so, these ideas don't seem particularly coherent.
--
__________
 |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424