Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins
Matt Mahoney wrote: --- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote: It seems clear that without external inputs the amount of improvement possible is stringently limited. That is evident from inspection. But why the "without input"? The only evident reason is to ensure the

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Matt Mahoney
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote: Matt Mahoney wrote: There is currently a global brain (the world economy) with an IQ of around 10^10, and approaching 10^12. Oh man. It is so tempting in today's economic morass to point out the obvious stupidity of this

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-16 Thread peter . burton
Nicole, yes, Rosato I think, across the road. Ok with me. Cheers Peter Peter G Burton PhD http://homepage.mac.com/blinkcentral [EMAIL PROTECTED] intl 61 (0) 400 194 333 On Wednesday, October 15, 2008, at 09:08PM, Ben Goertzel [EMAIL PROTECTED] wrote: Matt wrote, in reply to me: An AI

RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote: It seems clear that without external inputs the amount of improvement possible is stringently limited. That is evident from inspection. But why the "without input"? The only evident reason is to ensure the truth of the

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
What I am trying to debunk is the perceived risk of a fast takeoff singularity launched by the first AI to achieve superhuman intelligence. In this scenario, a scientist with an IQ of 180 produces an artificial scientist with an IQ of 200, which produces an artificial scientist with an IQ
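The fast-takeoff scenario Matt describes is, at bottom, a recurrence: each generation designs a successor smarter by some fixed factor. A minimal sketch of that recurrence, purely illustrative — the starting IQ and the constant gain are assumptions taken from the 180-to-200 example, and the thread itself disputes whether any such closed-loop gain is achievable:

```python
# Toy model of the recursive self-improvement chain under debate:
# each generation (human or artificial scientist) designs a successor
# whose intelligence is multiplied by a fixed relative gain.

def takeoff(iq_start=180.0, gain=200.0 / 180.0, generations=10):
    """Return the IQ of each generation under a constant relative gain."""
    iqs = [iq_start]
    for _ in range(generations):
        iqs.append(iqs[-1] * gain)
    return iqs

chain = takeoff()
# With a constant gain the sequence is geometric: 180, 200, ~222.2, ...
```

The point of the sketch is only that a constant multiplicative gain yields geometric growth; the substantive dispute in the thread is whether the gain stays constant at all without external input.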

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Among other reasons: Because, in the real world, the scientist with an IQ of 200 is **not** a brain in a vat with the inability to learn from the external world. Rather, he is able to run experiments in the external

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote: Your paper does **not** prove anything whatsoever about real-world situations. You are correct. My RSI paper only applies to self-improvement of closed systems. In the interest of proving the safety of AI, I think this is a good
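The closed-system restriction Matt concedes here rests on a standard fact from algorithmic information theory (this restatement is mine, not a quote from the RSI paper): if a program p, run with no input, halts and prints y, then p itself is a description of y, so

```latex
% Kolmogorov-complexity bound for a closed system:
% \ell(p) is the length of program p, c a machine-dependent constant.
K(y) \;\le\; \ell(p) + c
```

That is, a system with no input cannot output anything algorithmically more complex than itself plus a constant — which bounds "improvement" in the complexity sense, but says nothing about systems that, like Ben's scientist, can run experiments on the external world.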

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Hi, Also, you are right that it does not apply to many real world problems. Here my objection (as stated in my AGI proposal, but perhaps not clearly) is that creating an artificial scientist with slightly above human intelligence won't launch a singularity either, but for a different reason.

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote: An AI twice as smart as any human could figure out how to use the resources at its disposal to help it create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal.

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Matt wrote, in reply to me: An AI twice as smart as any human could figure out how to use the resources at its disposal to help it create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal. It depends on what you