One of the problems in defining RSI in a mathematically rigorous way is coming 
up with a definition that is also useful. If a system has input, then there is 
really no definition that distinguishes self-improvement from learning, at 
least not one that people can agree on.

Of course, a practical AGI is going to have input, so a definition restricted 
to input-free systems may seem of little practical use. Nevertheless, there are 
proposals along these lines, and my goal is to prove the limitations of such 
systems.

One example of a system without input would be a chess playing program that 
improved its game by playing itself. One could imagine many approaches. For 
example, suppose the program makes random variations in its source code and 
plays these copies against each other in timed matches, keeping only the 
winning variations. What are the limitations of this approach? Or consider a 
more general approach to intelligence, where the parent gives its offspring 
hard problems. Is it possible for superhuman intelligence to arise 
spontaneously?
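The mutate-and-keep-the-winner loop described above can be sketched in a few lines. This is a toy stand-in, not the paper's model: the "program" here is just a weight vector rather than actual source code, and the "game" is a made-up fitness function (TARGET, play_match, and mutate are all illustrative names, not anything from the paper).

```python
import random

# Toy stand-in for the chess example: a "program" is a list of numeric
# weights (in place of source code), and a "match" is won by whichever
# program scores higher on a fixed evaluation game.

TARGET = [0.7, -0.2, 0.5]  # hypothetical optimum that the game rewards

def score(program):
    """Fitness: negative squared distance from the game's optimum."""
    return -sum((w - t) ** 2 for w, t in zip(program, TARGET))

def play_match(a, b):
    """Return the winner; ties go to the incumbent."""
    return a if score(a) >= score(b) else b

def mutate(program, rate=0.1):
    """Random variation of the 'source': perturb each weight slightly."""
    return [w + random.gauss(0, rate) for w in program]

def self_improve(generations=200, seed=0):
    """Repeatedly mutate the champion and keep whichever copy wins."""
    random.seed(seed)
    champion = [0.0, 0.0, 0.0]  # initial, unimproved program
    for _ in range(generations):
        challenger = mutate(champion)            # random variation
        champion = play_match(champion, challenger)  # keep the winner
    return champion
```

Even this toy makes one limitation visible: the champion can only improve relative to the fixed game it is judged by. Nothing in the loop adds knowledge that is not already implicit in the fitness measure, which is the distinction the paper turns on.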

What I show is that if we measure intelligence by computational efficiency, 
then the answer is yes; but if we measure it by amount of knowledge, then no.

Anyway, I appreciate any comments that can be used to improve the paper. AGI is 
a hard subject to write about, given the wide range of opinions and the lack of 
proven results.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Mon, 11/24/08, Trent Waddington <[EMAIL PROTECTED]> wrote:

> From: Trent Waddington <[EMAIL PROTECTED]>
> Subject: Re: [agi] JAGI submission
> To: [email protected]
> Date: Monday, November 24, 2008, 7:58 PM
> I read the paper.
> 
> <pause to gather diplomatic tone>
> 
> Although I see what you're trying to achieve in this paper, I think
> your conclusions are far from being, well, conclusive. You've taken a
> couple of terms that are thrown around the AI/Singularity community,
> assigned an arbitrary mathematical definition of your own devising,
> then claimed (not very rigorously, I might add) that your hypothesis
> is right.
> 
> This is basically half of a straw man paper. You've come up with a
> definition that you must agree no-one else shares, and then you've
> failed to knock it down.
> 
> But at least for a little while this paper managed to capture my
> interest, and for that I thank you.
> 
> <something with a little more meat>
> 
> RSI has nothing to do with quines. It's neat that you can write a
> program that will output itself, but I'm not aware of anyone who has
> ever thought of RSI as involving such. The futility of this paper is
> summed up in the last two words of the abstract: "without input". Who
> ever said that RSI had anything to do with programs that had no input?
> The whole freakin' purpose of intelligence is to react to a non-random
> but *complex* environment. Input is what makes intelligence hard. A
> program which has "more intelligence" than another program is the one
> that reacts better to the environment by some measure of fitness. By
> taking input out of the argument you've taken intelligence out of the
> argument.
> 
> On Tue, Nov 25, 2008 at 10:20 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > I submitted my paper "A Model for Recursively Self Improving
> > Programs" to JAGI and it is ready for open review. For those who
> > have already read it, it is essentially the same paper except that
> > I have expanded the abstract. The paper describes a mathematical
> > model of RSI in closed environments (e.g. boxed AI) and shows that
> > such programs exist in a certain sense. It can be found here:
> >
> > http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx
> >
> > JAGI has an open review process where anyone can comment, but you
> > will need to register to do so. You don't need to register to read
> > the paper. This is a new journal started by Pei Wang.
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> >
> > -------------------------------------------
> > agi
> > Archives:
> https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription:
> https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
> 
> 

