To Matt Mahoney.

Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied that RSI (which I assume from context refers to Recursive Self
Improvement) is necessary for general intelligence.

When I said -- in reply to Derek's suggestion that RSI be banned -- that I
didn't fully understand the implications of banning RSI, I said that
largely because I didn't know exactly what the term covers.

So could you, or someone, please define exactly what the term means?

Is it any system capable of learning how to improve its current behavior
by changing to a new state with a modified behavior, and then from that
new state (arguably "recursively") improving its behavior to reach yet
another new state, and so on?  If so, why wouldn't any system doing
ongoing automatic learning that changes its behavior be an RSI system?

Is it any system that does the above, but only at a code level?  And, if
so, what is the definition of code level?  Is it machine code; C++-level
code; Prolog-level code; code at the level Novamente's MOSES learns
through evolution; code at the level of learned goals and behaviors; or
code at all of those levels?  If the last were true, then again, the term
would seem to cover virtually any automatic learning system capable of
changing its behavior.
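To make the question concrete: here is a minimal, purely illustrative sketch of the generic improvement loop I am asking about. Nothing here is taken from any actual system; all names and the toy "behavior" are hypothetical.

```python
# Hypothetical sketch of the generic loop in question: a system that
# evaluates its current behavior, searches for a modification that
# scores better, and adopts it -- then repeats from the new state.
# Whether this counts as "RSI" is exactly the definitional question above.

def improve(state, evaluate, propose_change, steps=10):
    """Repeatedly replace `state` with a better-scoring variant of itself."""
    score = evaluate(state)
    for _ in range(steps):
        candidate = propose_change(state)   # e.g. tweak parameters, rules, or code
        candidate_score = evaluate(candidate)
        if candidate_score > score:         # keep only strict improvements
            state, score = candidate, candidate_score
    return state, score

# Toy instantiation: "behavior" is a single number, and "improvement"
# means moving it closer to a target value.
target = 42
final_state, final_score = improve(
    state=0,
    evaluate=lambda s: -abs(target - s),
    propose_change=lambda s: s + 1,
    steps=50,
)
print(final_state)  # 42
```

On this reading, any hill-climbing learner fits the template, which is why the question of where "code level" begins seems to matter.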

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 8:36 PM
To: [email protected]
Subject: RE: [agi] Religion-free technical content


--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> To Derek Zahn
>
> Your 9/30/2007 10:58 AM post is very interesting.  It is the type of
> discussion of this subject -- potential dangers of AGI and how and
> when do we deal with them -- that is probably most valuable.
>
> In response I have the following comments regarding selected portions
> of your post (shown in all caps).
>
> "ONE THING THAT COULD IMPROVE SAFETY IS TO REJECT THE NOTION THAT AGI
> PROJECTS SHOULD BE FOCUSED ON, OR EVEN CAPABLE OF, RECURSIVE SELF
> IMPROVEMENT IN THE SENSE OF REPROGRAMMING ITS CORE IMPLEMENTATION."
>
> Sounds like a good idea to me, although I don't fully understand the
> implications of such a restriction.

The implication is that you would have to ban intelligent software
productivity tools.  You cannot do that.  You can make strong arguments
for the need for tools that prove software security.  But any tool
capable of analysis and testing with human-level intelligence is also
capable of recursive self-improvement.

> "BUT THERE'S AN EASY ANSWER TO THIS:  DON'T BUILD AGI THAT WAY.  IT IS
> CLEARLY NOT NECESSARY FOR GENERAL INTELLIGENCE "

Yes it is.  In my last post I mentioned Legg's proof that a system cannot
predict (understand) a system of greater algorithmic complexity.  RSI is
necessarily an evolutionary algorithm.  The problem is that any goal other
than rapid reproduction and acquisition of computing resources is
unstable.  The first example of this was the 1988 Morris worm.

It doesn't matter if Novamente is a "safe" design.  Others will not be.
The first intelligent worm would mean the permanent end of being able to
trust your computers.  Suppose we somehow come up with a superhumanly
intelligent intrusion detection system able to match wits with a
superhumanly intelligent worm.  How would you know if it was working?
Your computer says "all is OK".  Is that the IDS talking, or the worm?


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;
