Good distinction!

Edward W. Porter

 -----Original Message-----
From: Derek Zahn [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 03, 2007 3:22 PM
Subject: RE: [agi] RSI

Edward W. Porter writes:

> As I say, what is, and is not, RSI would appear to be a matter of
> definition. But so far the several people who have gotten back to me,
> including yourself, seem to take the position that that is not the type
> of self-improvement they consider to be "RSI." Some people have drawn
> the line at coding: RSI, they say, includes modifying one's own code.
> But code, of course, is a relative concept, since code can come in
> higher and higher level languages, and it is not clear where the
> distinction between code and non-code lies.

As I had included comments along these lines in a previous conversation, I
would like to clarify.  That conversation was not specifically about a
definition of RSI; it had to do with putting restrictions on the types of
RSI we might consider prudent, in terms of reducing the risk of creating
intelligent entities whose abilities grow faster than we can handle.

One way to think about that problem is to consider that building an AGI
involves taking a theory of mind and embodying it in a particular
computational substrate, using one or more layers of abstraction built on
the primitive operations of the substrate.  That implementation is not the
same thing as the mind model; it is one expression of the mind model.
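To make the distinction concrete, here is a minimal sketch: the "theory of mind" is an abstract interface, and a concrete class is one expression of it on a particular substrate. All class and method names here are illustrative, not taken from any real system.

```python
from abc import ABC, abstractmethod

class MindModel(ABC):
    """The theory of mind: which cognitive operations exist."""
    @abstractmethod
    def associate(self, a, b): ...
    @abstractmethod
    def retrieve(self, a): ...

class DictSubstrate(MindModel):
    """One implementation, built on Python dicts as the primitive substrate.
    A different substrate (arrays, neural nets) would be another expression
    of the same model."""
    def __init__(self):
        self._links = {}
    def associate(self, a, b):
        self._links[a] = b
    def retrieve(self, a):
        return self._links.get(a)

m = DictSubstrate()
m.associate("smoke", "fire")
print(m.retrieve("smoke"))  # fire
```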

If we do not give the system arbitrary access to the mind model itself or to
its implementation, it seems safer than if we do, because this limits the
extent to which RSI is possible: neither the efficiency of the model's
implementation nor the capabilities of the model can change.  Those
capabilities might of course still be larger than expected, so this is not
a safety guarantee; further analysis based on the particulars of the model
and its implementation should also be considered.
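One way to picture that restriction is an agent whose learned state can grow without bound, but whose learning rule (its "mind model" implementation) is private and has no API for modifying it. This is only a toy sketch of the idea, with hypothetical names:

```python
class SandboxedAgent:
    """Hypothetical agent: knowledge can accumulate, but the update rule
    itself is fixed at construction time and never exposed."""

    def __init__(self):
        self._knowledge = {}  # mutable learned state

    def learn(self, key, value):
        # Within-model improvement: the agent gains knowledge, but there
        # is no method here for rewriting how learning works.
        self._knowledge[key] = value

    def recall(self, key):
        return self._knowledge.get(key)

agent = SandboxedAgent()
agent.learn("capital_of_france", "Paris")
print(agent.recall("capital_of_france"))  # Paris
```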

RSI in the sense of "learning to learn better" or "learning to think
better" within a particular theory of mind seems necessary for any
practical AGI effort, so that we don't have to code the details of every
cognitive capability from scratch.
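A toy illustration of "learning to learn better" within a fixed model: the step size adapts based on observed error, yet the adaptation mechanism itself never changes. The function and its parameters are hypothetical, purely for illustration.

```python
def fit(targets, steps=100):
    """Track the target values, adapting the learning rate as we go."""
    estimate, lr = 0.0, 0.5
    prev_error = None
    for t in targets * steps:
        error = t - estimate
        # Meta-level adjustment: when the error changes sign (oscillation),
        # halve the step size. This rule is part of the fixed model; the
        # agent improves its learning without rewriting the rule.
        if prev_error is not None and error * prev_error < 0:
            lr *= 0.5
        estimate += lr * error
        prev_error = error
    return estimate

print(fit([4.0]))  # converges to 4.0
```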


This list is sponsored by AGIRI:
To unsubscribe or change your options, please go to: