--- On Tue, 10/14/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > Ben,
> > If you want to argue that recursive self improvement
> > is a special case of
> > learning, then I have no disagreement with the rest of
> > your argument.
> >
> You are slipping from a strained interpretation of the technical
> argument to the informal point that the argument was intended to
> rationalize. If the interpretation of a technical argument is weaker
> than the original informal argument it was invented to support, then
> there is no point to the technical argument. Citing the fact that
> 2+2=4 won't lend technical support to, e.g., the philosophy of
> solipsism.

I did not say that I agree with Ben's definition of RSI to include learning.

But no matter. Whichever definition you accept, RSI is not a viable path to 
AGI. An AI that is twice as smart as a human can make no more progress than 
two humans. You don't get automatic self improvement until you have AI that 
is billions of times smarter, and a team of a few people isn't going to build 
that. The cost of training such a system with 10^17 to 10^18 bits of useful 
knowledge runs to quadrillions of dollars, even if the hardware is free and 
the problem of brain emulation is solved. Until then, you have manual self 
improvement.

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com