Hank Conn wrote:
On 11/17/06, Richard Loosemore wrote:
Hank Conn wrote:
> Here are some of my attempts at explaining RSI...
>
> (1)
> As a given instance of intelligence (defined as an algorithm of an agent
> capable of achieving complex goals in complex environments) approaches the
> theoretical limits of efficiency for this class of algorithms, its
> intelligence approaches infinity. Since increasing the computational
> resources available to an algorithm is a complex goal in a complex
> environment, the more intelligent an instance of intelligence becomes,
> the more capable it is of increasing the computational resources available
> to the algorithm, and of optimizing the algorithm for maximum efficiency,
> thus increasing its intelligence in a positive feedback loop.
>
> (2)
> Suppose an instance of a mind has direct access to some means of both
> improving and expanding both the hardware and software capability of its
> particular implementation. Suppose also that the goal system of this mind
> elicits a strong goal that directs its behavior to aggressively take
> advantage of these means. Given each increase in the capability of the
> mind's implementation, it could (1) increase the speed at which its
> hardware is upgraded and expanded, (2) more quickly, cleverly, and
> elegantly optimize its existing software base to maximize capability,
> (3) develop better cognitive tools and functions more quickly and in
> greater quantity, and (4) optimize its implementation on successively
> lower levels by researching and developing better, smaller, more advanced
> hardware. This would create a positive feedback loop: the more capable its
> implementation, the more capable it is of improving its implementation.
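
To make the loop described in (1) and (2) concrete, here is a toy recursion.
It is purely illustrative: the update rule, the reinvestment fraction, and
the "returns" exponent are assumptions of the sketch, not features of any
actual AGI design.

    # Toy model of the feedback loop: each cycle, current capability C is
    # spent on improving the implementation, and the payoff of that effort
    # scales as C**returns.  All names and numbers are illustrative only.
    def rsi_trajectory(c0=1.0, reinvest=0.5, returns=1.2, steps=25):
        c = c0
        history = [c]
        for _ in range(steps):
            c += reinvest * c ** returns   # better implementation -> better improver
            history.append(c)
        return history

    if __name__ == "__main__":
        fast = rsi_trajectory(returns=1.2)   # strongly super-linear payoff per cycle
        slow = rsi_trajectory(returns=0.5)   # diminishing payoff per unit of capability
        print(f"returns=1.2 after 25 steps: {fast[-1]:.3g}")   # explosive growth
        print(f"returns=0.5 after 25 steps: {slow[-1]:.3g}")   # slow, steady growth

The same recursion gives wildly different trajectories depending on that one
assumed exponent, which is the crux of the later disagreement about how fast
RSI could plausibly go.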
>
> How fast could RSI plausibly happen? Is RSI inevitable, and how soon will
> it happen? How do we truly maximize the benefit to humanity?
>
> It is my opinion that this could happen extremely quickly once a
> completely functional AGI is achieved. I think it's plausible it could
> happen against the will of the designers (and go on to pose an
> existential risk), and quite likely that it would move along quite well
> with the designers' intentions; however, this opens the door to
> existential disasters in the form of so-called Failures of Friendliness.
> I think it's fairly implausible that the designers would suppress this
> process, except those that are concerned about completely working out
> issues of Friendliness in the AGI design.
Hank,
First, I will say what I always say when faced with arguments that involve
the goals and motivations of an AI: your argument crucially depends on
assumptions about what its motivations would be. Because you have made
extremely simple assumptions about the motivation system, AND because you
have chosen assumptions that involve basic unfriendliness, your scenario is
guaranteed to come out looking like an existential threat.
Yes, you are exactly right. The question is: which of my assumptions are
unrealistic?
Well, you could start with the idea that the AI has "... a strong goal that
directs its behavior to aggressively take advantage of these means...". It
depends on what you mean by "goal" (an item on the task stack, or a
motivational drive? They are different things), and it begs the question of
who the idiot was that designed it to pursue this kind of aggressive
behavior rather than some other!

There is *so* much packed into your statement that it is difficult to go
into it in detail.
Just to start with, you would need to cross-compare the above statement
with the account I gave recently of how a system should be built with a
motivational system based on large numbers of diffuse constraints. Your
description is one particular, rather dangerous, design for an AI; it is
not an inevitable design.
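
To make the contrast concrete, here is a caricature of the two kinds of
system (the names, weights, and scoring rule are invented for this sketch;
neither fragment is taken from a real design):

    # Caricature of the distinction: an agent driven by whatever single goal
    # sits on top of its stack, versus an agent whose choices are scored
    # against a set of weighted soft constraints (a real system would have a
    # great many of them).  Illustrative assumptions only.
    def goal_stack_agent(goal_stack, actions):
        """Pop the top goal and pick whatever action serves it best,
        ignoring everything else."""
        goal = goal_stack.pop()
        return max(actions, key=lambda a: a["effect"].get(goal, 0.0))

    def diffuse_constraint_agent(constraints, actions):
        """Score every action against all the constraints at once, so the
        choice reflects a balance of drives rather than one dominant goal."""
        def score(a):
            return sum(w * a["effect"].get(c, 0.0) for c, w in constraints.items())
        return max(actions, key=score)

    actions = [
        {"name": "grab_all_hardware", "effect": {"acquire_resources": 1.0, "respect_humans": -1.0}},
        {"name": "negotiate_upgrade", "effect": {"acquire_resources": 0.4, "respect_humans": 0.5}},
    ]
    print(goal_stack_agent(["acquire_resources"], actions)["name"])   # grab_all_hardware
    weights = {"acquire_resources": 0.3, "respect_humans": 1.0, "be_honest": 0.8}
    print(diffuse_constraint_agent(weights, actions)["name"])         # negotiate_upgrade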
Also, if you meant to exclude the type of system I described (if you meant
a system with a goal stack and no motivational system), you might well be
describing a system design that, in my opinion, would not be very dangerous
because it would never actually make it to human-level intelligence. In
that case none of us would have much to be worried about.
Richard Loosemore
Second, your arguments both have the feel of a Zeno's Paradox argument:
they look as though they imply an ever-increasing rapaciousness on the part
of the AI, whereas in fact there are so many assumptions built into your
statement that in practice your arguments could result in *any* growth
scenario, including ones where it plateaus. It is a little like arguing
that every infinite sum involves adding stuff together, so every infinite
sum must go off to infinity... a spurious argument, of course, because such
sums can go in any direction.
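
(The analogy can be made precise: a sum of positive terms such as
1 + r + r^2 + r^3 + ... is "adding stuff together" at every step, yet it
converges to 1/(1 - r) whenever 0 < r < 1 and only diverges when r >= 1.
Whether the improvement loop blows up or levels off likewise depends on the
assumed return from each round of self-improvement, which is exactly the
assumption being smuggled in.)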
Of course any scenario is possible post-Singularity, including ones we
can't even imagine. Building an AI in such a way that you are capable of
proving causal or probabilistic bounds on its behavior through recursive
self-improvement is the way to be sure of a Friendly outcome.
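
One concrete reading of "probabilistic bounds" (a sketch only, not a claim
about any existing proof or design): if every self-modification can be
verified to preserve the intended goal invariant with probability at least
1 - eps, then by a simple union bound the chance of the invariant ever being
violated over n successive modifications is at most n * eps. A guarantee of
that form degrades only linearly with the number of self-modifications;
without some per-step verification there is no comparable bound to appeal to.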
Richard Loosemore