Hank Conn wrote:
Here are some of my attempts at explaining RSI (recursive
self-improvement)...
(1)
As a given instance of intelligence, defined as the algorithm of an
agent capable of achieving complex goals in complex environments,
approaches the theoretical limits of efficiency for this class of
algorithms, its intelligence approaches infinity. Since increasing the
computational resources available to an algorithm is itself a complex
goal in a complex environment, the more intelligent an instance of
intelligence becomes, the more capable it is of increasing the
computational resources available to the algorithm, and of optimizing
the algorithm for maximum efficiency, thus increasing its intelligence
in a positive feedback loop.
(2)
Suppose an instance of a mind has direct access to some means of
improving and expanding both the hardware and the software of its
particular implementation. Suppose also that the goal system of this
mind gives rise to a strong goal that directs its behavior to
aggressively take advantage of these means. Given each increase in the
capability of the mind's implementation, it could (1) increase the
speed at which its hardware is upgraded and expanded, (2) more quickly,
cleverly, and elegantly optimize its existing software base to maximize
capability, (3) develop better cognitive tools and functions more
quickly and in greater quantity, and (4) optimize its implementation at
successively lower levels by researching and developing better,
smaller, more advanced hardware. This would create a positive feedback
loop: the more capable its implementation, the more capable it is of
improving its implementation.
How fast could RSI plausibly happen? Is RSI inevitable, and how soon
will it arrive? How do we truly maximize the benefit to humanity?
It is my opinion that this could happen extremely quickly once a
completely functional AGI is achieved. I think it's plausible it could
happen against the will of the designers (and go on to pose an
existential risk), and quite likely that it would proceed in line with
the designers' intentions; however, this opens the door to existential
disasters in the form of so-called Failures of Friendliness. I think
it's fairly implausible that the designers would suppress this process,
except for those concerned with completely working out issues of
Friendliness in the AGI design.
Hank,
First, I will say what I always say when faced with arguments that
involve the goals and motivations of an AI: your argument crucially
depends on assumptions about what its motivations would be. Because you
have made extremely simple assumptions about the motivation system, AND
because you have chosen assumptions that involve basic unfriendliness,
your scenario is guaranteed to come out looking like an existential
threat.
Second, your arguments both have the feel of a Zeno's Paradox argument:
they look as though they imply an ever-increasing rapaciousness on the
part of the AI, whereas in fact there are so many assumptions built
into your statements that in practice your arguments are consistent
with *any* growth scenario, including ones where growth plateaus. It is
a little like arguing that every infinite sum involves adding things
together, so every infinite sum must go off to infinity... a spurious
argument, of course, because such sums can converge just as well as
diverge.
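To make the point concrete, here is the same toy recurrence as in the
sketch above, with two assumed returns curves (both assumptions of
mine, not anything in Hank's argument):

    # Two returns curves for the same loop. Each step still adds
    # capability, but the second one's increments are 1/2, 1/4, 1/8, ...
    # a convergent geometric series, so growth plateaus at 2 instead of
    # running off to infinity.
    def grow(returns, steps, c=1.0):
        for _ in range(steps):
            c += returns(c)
        return c

    print(grow(lambda c: 0.5 * c, steps=50))          # ~6.4e8: diverges
    print(grow(lambda c: 0.5 * (2.0 - c), steps=50))  # ~2.0: plateaus

Both are "positive feedback" in the sense that every step adds
capability; only the assumed shape of the returns curve determines
whether the total diverges or converges.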
Richard Loosemore