OK, well now you are backing away from your claim of a mathematical disproof
of RSI!!

What you did, IMHO, was to prove there is limited value in RSI by defining RSI
in a very limited way, and then measuring the value of this limited RSI in a
manner that does not capture the practical value of any kind of RSI...

I don't agree that an AGI will be programmed by billions of humans.  I think
an AGI will be created by a fairly small team of programmers and
scientists.  Of course, this effort will build atop the prior work of a
large number of other scientists and engineers -- the ones who built the
computer chips, the Internet, the programming languages, and so forth.  But
I see no reason why the actual programming and design of the AGI can't be
done by a fairly small team...

I agree that RSI is not how human intelligence predominantly works, but my
goal is not to replicate human intelligence; rather, it is to create better
forms of intelligence that can help humans more than we can help ourselves
directly ... and that can also move on to levels inaccessible to humans...

-- Ben G

On Tue, Oct 14, 2008 at 12:36 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> Ben,
> If you want to argue that recursive self improvement is a special case of
> learning, then I have no disagreement with the rest of your argument.
>
> But is this really a useful approach to solving AGI? A group of humans can
> generally make better decisions (more accurate predictions) by voting than
> any member of the group can. Did these humans improve themselves?
>
> My point is that a single person can't create much of anything, much less
> an AI smarter than himself. If it happens, it will be created by an
> organization of billions of humans. Without this organization, you would
> probably not think to create spears out of sticks and rocks.
>
> That is my problem with the seed AI approach. The seed AI depends on the
> knowledge and resources of the economy to do anything. An AI twice as smart
> as a human could not do any more than 2 people could. You need to create an
> AI that is billions of times smarter to get anywhere.
>
> We are already doing that. Human culture is improving itself by
> accumulating knowledge, by becoming better organized through communication
> and specialization, and by adding more babies and computers.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On Mon, 10/13/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> From: Ben Goertzel <[EMAIL PROTECTED]>
> Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
> To: agi@v2.listbox.com
> Date: Monday, October 13, 2008, 11:46 PM
>
> On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
>
> Ben,
>
> Thanks for the comments on my RSI paper. To address your comments,
>
> You seem to be addressing minor lacunae in my wording, while ignoring my
> main conceptual and mathematical point!!!
>
> 1. I defined "improvement" as achieving the same goal (utility) in less
> time or achieving greater utility in the same time. I don't understand your
> objection that I am ignoring run time complexity.
>
>
> OK, you are not "ignoring run time completely" ... BUT ... in your
> measurement of the benefit achieved by RSI, you're not measuring the amount
> of run-time improvement achieved, you're only  measuring algorithmic
> information.
>
>
> What matters in practice is, largely, the amount of run-time improvement
> achieved.   This is the point I made in the details of my reply -- which you
> have not counter-replied to.
>
> I contend that, in my specific example, program P2 is a *huge* improvement
> over P1, in a way that is extremely important to practical AGI yet is not
> captured by your algorithmic-information-theoretic measurement method.  What
> is your specific response to my example??
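>
> To make the general phenomenon concrete -- this is only a toy sketch in
> Python, not the actual P1/P2 pair from my earlier message, and it uses
> compressed source length as a very crude stand-in for algorithmic
> information -- two programs of nearly the same description length can
> differ astronomically in run time:
>
> import inspect, time, zlib
>
> def fib_slow(n):   # "P1": exponential time
>     return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)
>
> def fib_fast(n):   # "P2": linear time, same input/output behavior
>     a, b = 0, 1
>     for _ in range(n):
>         a, b = b, a + b
>     return a
>
> for f in (fib_slow, fib_fast):
>     size = len(zlib.compress(inspect.getsource(f).encode()))
>     start = time.time()
>     f(32)
>     print(f.__name__, size, "bytes of compressed source,",
>           round(time.time() - start, 3), "s")
>
> The description lengths come out on the same order, yet the run times
> differ by orders of magnitude -- and that difference is exactly the kind of
> practical improvement your measure does not register.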
>
> 2. I agree that an AIXI type interactive environment is a more appropriate
> model than a Turing machine receiving all of its input at the beginning. The
> problem is how to formally define improvement in a way that distinguishes it
> from learning. I am open to suggestions.
>
> To see why this is a problem, consider an agent that, after a long time,
> guesses the environment's program and is able to achieve maximum reward from
> that point forward. The agent could "improve" itself by hard-coding the
> environment's program into its successor and thereby achieve maximum reward
> right from the beginning.
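>
> (A toy sketch, purely hypothetical, of why this looks more like transferred
> learning than self-improvement -- the "successor" below scores higher only
> because the environment's program has been copied into it:)
>
> import random
>
> N = 20
> secret = [random.randint(0, 9) for _ in range(N)]  # the environment's program
> stream = secret * 5                                 # the environment repeats it
>
> def score(agent):
>     memory, total = {}, 0
>     for t, symbol in enumerate(stream):
>         total += (agent(t, memory) == symbol)       # +1 reward per correct guess
>         memory[t % N] = symbol                      # the agent observes the symbol afterward
>     return total
>
> a1 = lambda t, m: m.get(t % N, -1)   # has to guess blindly through the first pass
> a2 = lambda t, m: secret[t % N]      # successor with the program hard-coded in
>
> print(score(a1), score(a2))          # 80 vs 100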
>
> Recursive self-improvement **is** a special case of learning; you can't
> completely distinguish them.
>
> 3. A computer's processor speed and memory have no effect on the
> algorithmic complexity of a program running on it.
>
> Yes, I can see I didn't phrase that point properly, sorry.  I typed that
> prior email too hastily as I'm trying to get some work done ;-)
>
>
> The point I *wanted* to make in my third point was that if you take a
> program with algorithmic information K, and give it the ability to modify
> its own hardware, then it can achieve algorithmic information M > K.
>
>
> However, it is certainly true that this can happen even without the program
> modifying its own hardware -- especially if you make fanciful assumptions
> like Turing machines with huge tapes ... but it can happen even without such
> fanciful assumptions.
>
>
> The key point, which I did not articulate properly in my prior message, is
> that: ** by engaging with the world, the program can intake new information,
> which can increase its algorithmic information **
>
> The new information a program P1 takes in from the **external world** may
> be random with regard to P1, yet may not be random with regard to {P1 + the
> new information taken in}.
>
>
> As self-modification may involve the intake of new information, causing
> algorithmic information to increase arbitrarily much, your argument does not
> hold for a program interacting with a world that has much higher algorithmic
> information than it does.
>
>
> And this of course is exactly the situation people are in.
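>
> A crude sketch of this point, using compressed length as a stand-in for
> algorithmic information -- obviously not a real measure of K, but it shows
> the direction:
>
> import os, zlib
>
> def k(x):                                  # compressed length as a crude proxy for K(x)
>     return len(zlib.compress(x))
>
> program = b"def act(percept): return 0"    # a stand-in for P1's source code
> world_data = os.urandom(10000)             # information taken in from a richer world
>
> print(k(program))                          # small
> print(k(program + world_data))             # grows with whatever is taken in
>
> Nothing bounds world_data by the program's original algorithmic information,
> which is the loophole I am pointing at.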
>
> For instance, a program may learn that "In the past, on 10 occasions, I
> have taken in information from Bob that was vastly beyond my algorithmic
> information content at that time.  In each case this process helped me to
> achieve my goals, though in ways I would not have been able to understand
> before taking in the information.  So, once again, I am going to trust Bob
> to alter me with info far beyond my current comprehension and algorithmic
> information content."
>
>
> Sounds a bit like a child trusting their parent, eh?
>
> This is a separate point from my point about P1 and P2 in point 1.  But the
> two phenomena intersect, of course.
>
> -- Ben G



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


