On 11/30/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


>     Hank Conn wrote:
[snip...]
>      > I'm not asserting any specific AI design. And I don't see how
>      > a motivational system based on "large numbers of diffuse
>      > constraints" inherently prohibits RSI, or really has any
>      > relevance to this. "A motivation system based on large numbers
>      > of diffuse constraints" does not, by itself, solve the problem -
>      > if the particular constraints do not form a congruent mapping
>      > to the concerns of humanity, regardless of their number or level
>      > of diffuseness, then we are likely facing an Unfriendly outcome
>      > of the Singularity, at some point in the future.
>
>     Richard Loosemore wrote:
>     The point I am heading towards, in all of this, is that we need to
>     unpack some of these ideas in great detail in order to come to
>     sensible conclusions.
>
>     I think the best way would be in a full length paper, although I did
>     talk about some of that detail in my recent lengthy post on
>     motivational systems.
>
>     Let me try to bring out just one point, so you can see where I am
>     going when I suggest it needs much more detail.  In the above, you
>     really are asserting one specific AI design, because you talk about
>     the goal stack as if this could be so simple that the programmer
>     would be able to insert the "make paperclips" goal and the machine
>     would go right ahead and do that.  That type of AI design is very,
>     very different from the Motivational System AI that I discussed
>     before (the one with the diffuse set of constraints driving it).
>
>
>     Here is one of many differences between the two approaches.
>
>     The goal-stack AI might very well turn out simply not to be a
>     workable design at all!  I really do mean that:  it won't become
>     intelligent enough to be a threat.  Specifically, we may find that
>     the kind of system that drives itself using only a goal stack never
>     makes it up to full human level intelligence because it simply
>     cannot do the kind of general, broad-spectrum learning that a
>     Motivational System AI would do.
>
>     Why?  Many reasons, but one is that the system could never learn
>     autonomously from a low level of knowledge *because* it is using
>     goals that are articulated using the system's own knowledge base.
>     Put simply, when the system is in its child phase it cannot have
>     the goal "acquire new knowledge" because it cannot understand the
>     meaning of the words "acquire" or "new" or "knowledge"!  It isn't
>     due to learn those words until it becomes more mature (develops
>     more mature concepts), so how can it put "acquire new knowledge"
>     on its goal stack and then unpack that goal into subgoals, etc?
>
>
>     Try the same question with any goal that the system might have when
>     it is in its infancy, and you'll see what I mean.  The whole
>     concept of a system driven only by a goal stack with statements
>     that resolve on its knowledge base is that it needs to be already
>     very intelligent before it can use them.
>
>
>
> If your system is intelligent, it has some goal(s) (or "motivation(s)").
> For most really complex goals (or motivations), RSI is an extremely
> useful subgoal (sub-...motivation). This makes no further assumptions
> about the intelligence in question, including those relating to the
> design of the goal (motivation) system.
>
>
> Would you agree?
>
>
> -hank

Recursive Self-Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.

That means:  the system would *not* choose to do any RSI if the RSI
could not be done in such a way as to preserve its current motivational
priorities:  to do so would be to risk subverting its own most important
desires.  (Note carefully that the system itself would put this
constraint on its own development; it would have nothing to do with us
controlling it.)
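The self-imposed constraint described above can be sketched in toy form. This is purely my own illustration of the argument, not any actual AI design; every class, name, and number here is a hypothetical stand-in.

```python
# Toy sketch: an agent that vets any proposed self-modification against
# its current motivational priorities, and refuses rewrites that would
# change them, however much capability they offer.

class Agent:
    def __init__(self, priorities, capability):
        # 'priorities' is an ordered list of motivations, highest first.
        self.priorities = list(priorities)
        self.capability = capability

    def consider_rsi(self, successor):
        """Accept a self-improvement only if it preserves priorities."""
        if successor.priorities != self.priorities:
            return False  # would risk subverting its own desires
        if successor.capability <= self.capability:
            return False  # not actually an improvement
        self.capability = successor.capability
        return True

agent = Agent(["preserve human wellbeing", "learn"], capability=1.0)

# An improvement that keeps the same priorities is accepted:
ok = agent.consider_rsi(Agent(["preserve human wellbeing", "learn"], 2.0))

# A more capable successor with reordered priorities is refused:
bad = agent.consider_rsi(Agent(["learn", "preserve human wellbeing"], 3.0))
```

The point of the sketch is only that the veto lives inside the agent's own decision procedure: nothing external is controlling it.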

There is a bit of a problem with the term "RSI" here:  to answer your
question fully we might have to get more specific about what that would
entail.

Finally:  the usefulness of RSI would not necessarily be indefinite.
The system could well get to a situation where further RSI was not
particularly consistent with its goals.  It could live without it.


Richard Loosemore



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



Yes. Now, the point is that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else
were to launch an AGI with a faster RSI loop, your AGI would lose
control to the other AGI wherever the goals of the two AGIs differed.

What I'm saying is that the outcome of the Singularity is going to be
exactly the target goal state of the AGI with the strongest RSI curve.

The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.

The goal of ... humanity ... is that the AGI implemented with the
strongest RSI curve is also one whose actual target goal state is
exactly congruent to the actual target goal state of humanity.

This is assuming AGI becomes capable of RSI before any human does. I think
that's a reasonable assumption (this is the AGI list after all).
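The "strongest RSI curve wins" claim can be illustrated with a toy compounding model. This is my own hedged sketch, not anything from the thread; the function name, rates, and step counts are arbitrary illustrations.

```python
# Toy model: two AGIs whose capability compounds at different RSI rates.
# Whichever has the steeper curve eventually dominates, regardless of
# head start, so its target goal state would set the outcome.

def capability(initial, rsi_rate, steps):
    """Capability after 'steps' rounds of compounding self-improvement."""
    c = initial
    for _ in range(steps):
        c *= (1.0 + rsi_rate)
    return c

# AGI A starts far ahead but improves slowly; AGI B starts behind with
# a faster RSI loop.
a = capability(initial=100.0, rsi_rate=0.01, steps=200)
b = capability(initial=1.0, rsi_rate=0.05, steps=200)
```

Early on A's head start dominates, but after enough rounds B's faster loop overtakes it, which is the race dynamic described above.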


-hank
