RE: Matt Mahoney's Mon 10/1/2007 12:01 PM post, which said in part:

"IN MY LAST POST I HAD IN MIND RSI AT THE LEVEL OF SOURCE CODE OR MACHINE
CODE."

Thank you for clarifying this, at least with regard to what you meant.

But that raises the question: is there any uniform agreement about this
definition in the AGI community, or is it currently a vaguely defined term?

As stated in my previous posts, a Novamente-level system would have a form
of Recursive Self-Improvement that recursively improves cognitive,
behavioral, and goal patterns.  Is the distinction between RSI at that
level and RSI at the C++ level that, with Novamente-type RSI, one can hope
that all or the vital portions of the C++ code could be kept off-limits to
the higher-level RSI, and thus that certain goals and behaviors could
remain hardcoded into the machine's behavioral control system?

I assume that maintaining such a distinction is what you considered
important.  Is that correct?

If so, that seems to make sense to me, at least at this point in my
thinking.  But one can easily think of all sorts of ways a human-level AGI
with Novamente-level RSI could try to get around this limitation, if it
broke sufficiently free from, or sufficiently reinterpreted, its presumably
human-friendly goals in a way that allowed it to want to do so.  For
example, it could try to program other systems that don't have such
limitations, such as by hacking machines on the net.

But hopefully it would not do so if the hardcoded goals could maintain
their dominance.  As I said in my 9/30/2007 7:11 PM post, I don't really
have much understanding of how robust an initial set of hardcoded goals
and values is likely to remain against new goals and subgoals that are
defined by automatic learning and that are needed to interpret the
original goals in a changing world.  Like human judges, these systems
might routinely dilute or substantially change the meaning of the laws
they are meant to uphold.  This is particularly true because almost any
set of goals for "human friendliness" is going to be vaguely defined, and
the world is likely to generate many situations in which various subgoals
of being human-friendly will conflict.

That is why I think keeping humans in the loop, Intelligence Augmentation,
and Collective Intelligence are so important.

But in any case, it would seem that being able to hardcode certain parts
of the machine's behavioral, value, and goal system, and to make it as
difficult as possible for the machine to change those parts, would at
least make it substantially harder for an AGI to develop a set of goals
contrary to those originally intended for it.  It is my belief that a
Novamente-type system could have considerable room to learn and adapt while
still being restrained from pursuing certain goals and behaviors.  After
all, most of us humans are.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 12:01 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content


In my last post I had in mind RSI at the level of source code or machine
code.  Clearly we already have RSI in more restricted computational
models, such as a neural network modifying its objective function by
adjusting its weights.
This type of RSI is not dangerous because it cannot interact with the
operating system or remote computers in ways not intended by the
developer.


--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:

> To Matt Mahoney.
>
> Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn
> and implied RSI (which I assume from context is a reference to
> Recursive Self
> Improvement) is necessary for general intelligence.
>
> When I said -- in reply to Derek's suggestion that RSI be banned --
> that I didn't fully understand the implications of banning RSI, I said
> that largely because I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its current
> behavior by changing to a new state with a modified behavior, and then
> from that new state (arguably "recursively") improving behavior to yet
> another new state, and so on and so forth?  If so, why wouldn't any
> system doing ongoing automatic learning that changed its behavior be
> an RSI system?
>
> Is it any system that does the above, but only at a code level?  And,
> if so, what is the definition of code level?  Is it machine code, C++
> level code, Prolog level code, code at the level Novamente's MOSES
> learns through evolution, code at the level of learned goals and
> behaviors, or code at all those levels?  If the latter were true, then
> again, it would seem the term covered virtually any automatic learning
> system capable of changing its behavior.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
> -----Original Message-----
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> Sent: Sunday, September 30, 2007 8:36 PM
> To: agi@v2.listbox.com
> Subject: RE: [agi] Religion-free technical content
>
>
> --- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> > To Derek Zahn
> >
> > Your 9/30/2007 10:58 AM post is very interesting.  It is the type
> > of discussion of this subject -- potential dangers of AGI and how
> > and when do we deal with them -- that is probably most valuable.
> >
> > In response I have the following comments regarding selected
> > portions of your post (shown in all-caps).
> >
> > "ONE THING THAT COULD IMPROVE SAFETY IS TO REJECT THE NOTION THAT
> > AGI PROJECTS SHOULD BE FOCUSED ON, OR EVEN CAPABLE OF, RECURSIVE
> > SELF IMPROVEMENT IN THE SENSE OF REPROGRAMMING ITS CORE
> > IMPLEMENTATION."
> >
> > Sounds like a good idea to me, although I don't fully understand the
> > implications of such a restriction.
>
> The implication is you would have to ban intelligent software
> productivity tools.  You cannot do that.  You can make strong
> arguments for the need for tools for proving software security.  But
> any tool that is capable of analysis and testing with human level
> intelligence is also capable of recursive self improvement.
>
> > "BUT THERE'S AN EASY ANSWER TO THIS:  DON'T BUILD AGI THAT WAY.  IT
> > IS CLEARLY NOT NECESSARY FOR GENERAL INTELLIGENCE "
>
> Yes it is.  In my last post I mentioned Legg's proof that a system
> cannot predict (understand) a system of greater algorithmic
> complexity.  RSI is necessarily an evolutionary algorithm.  The
> problem is that any goal other than rapid reproduction and acquisition
> of computing resources is unstable. The first example of this was the
> 1988 Morris worm.
>
> It doesn't matter if Novamente is a "safe" design.  Others will not
> be. The first intelligent worm would mean the permanent end of being
> able to trust your computers.  Suppose we somehow come up with a
> superhumanly intelligent intrusion detection system able to match wits
> with a superhumanly intelligent worm.  How would you know if it was
> working? Your computer says "all is OK". Is that the IDS talking, or
> the worm?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]
