Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins
Matt Mahoney wrote: --- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote: It seems clear that without external inputs the amount of improvement possible is stringently limited. That is evident from inspection. But why the "without input"? The only evident reason is to ensure the

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Matt Mahoney
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote: Matt Mahoney wrote: There is currently a global brain (the world economy) with an IQ of around 10^10, and approaching 10^12. Oh man. It is so tempting in today's economic morass to point out the obvious stupidity of this

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-16 Thread peter . burton
Nicole, yes, Rosato I think, across the road. Ok with me. Cheers Peter Peter G Burton PhD http://homepage.mac.com/blinkcentral [EMAIL PROTECTED] intl 61 (0) 400 194 333 On Wednesday, October 15, 2008, at 09:08PM, Ben Goertzel [EMAIL PROTECTED] wrote: Matt wrote, in reply to me: An AI

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 5:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote: Matt, Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Interstellar void must be astronomically intelligent, with all its incompressible noise... How do you know it's not compressible? Compression is not computable. To give a concrete example, the output of RC4 looks like random noise
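
A minimal sketch of the RC4 point, assuming only the Python standard library: the keystream below is emitted by a tiny program, so its algorithmic complexity is at most a few hundred bytes, yet a general-purpose compressor cannot shrink it at all. No computable compressor can recognize every compressible string.

    import zlib

    def rc4_keystream(key: bytes, n: int) -> bytes:
        # Key-scheduling algorithm (KSA): permute S under the key
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA): emit n keystream bytes
        out = bytearray()
        i = j = 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    stream = rc4_keystream(b"a short key", 10**6)
    # At maximum effort zlib still finds no redundancy: the output is slightly larger
    print(len(zlib.compress(stream, 9)))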

RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote: It seems clear that without external inputs the amount of improvement possible is stringently limited. That is evident from inspection. But why the "without input"? The only evident reason is to ensure the truth of the
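
A compact way to state the closed-system bound being appealed to here (a reconstruction from standard algorithmic information theory, not a quote from the paper): if the system starts as a program P on a fixed universal machine and receives no input, its state s_t after t steps is computable from P and t alone, so

    K(s_t) \le |P| + O(\log t)

Running longer can add at most about \log t bits of algorithmic information; that is the precise sense in which improvement without input is stringently limited. The bound says nothing about systems that do receive input, which is exactly the objection raised elsewhere in the thread.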

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
What I am trying to debunk is the perceived risk of a fast takeoff singularity launched by the first AI to achieve superhuman intelligence. In this scenario, a scientist with an IQ of 180 produces an artificial scientist with an IQ of 200, which produces an artificial scientist with an IQ
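
The scenario reads as a simple recurrence. A toy sketch (the 180 and 200 are from the example above; the constant per-generation ratio is an assumption made only for illustration):

    # Hypothetical takeoff chain: each generation builds a successor
    # smarter by the same fixed factor. Parameters are illustrative only.
    iq, gen = 180.0, 0
    while iq < 1000:
        print(f"generation {gen}: IQ {iq:.0f}")
        iq *= 200 / 180  # assumed constant improvement ratio
        gen += 1

Whether any constant ratio is sustainable across generations is precisely the point in dispute.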

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Among other reasons: Because, in the real world, the scientist with an IQ of 200 is **not** a brain in a vat with the inability to learn from the external world. Rather, he is able to run experiments in the external

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote: Your paper does **not** prove anything whatsoever about real-world situations. You are correct. My RSI paper only applies to self improvement of closed systems. In the interest of proving the safety of AI, I think this is a good

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Hi, Also, you are right that it does not apply to many real world problems. Here my objection (as stated in my AGI proposal, but perhaps not clearly) is that creating an artificial scientist with slightly above human intelligence won't launch a singularity either, but for a different reason.

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Charles Hixson
[EMAIL PROTECTED] wrote: From: Charles Hixson [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com Date: Tuesday, October 14, 2008, 2:12 PM If you want to argue this way (reasonable), then you need a specific definition of intelligence. One that allows

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote: An AI twice as smart as any human could figure out how to use the resources at his disposal to help him create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal.

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Terren Suydam
Objecting to it on the basis of the difficulty/impossibility of measuring intelligence seems like a bit of a tangent. --- On Wed, 10/15/08, Charles Hixson [EMAIL PROTECTED] wrote: From: Charles Hixson [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Matt wrote, in reply to me: An AI twice as smart as any human could figure out how to use the resources at his disposal to help him create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal. It depends on what you

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, If you want to argue that recursive self improvement is a special case of learning, then I have no disagreement with the rest of your argument. But is this really a useful approach to solving AGI? A group of humans

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
2008/10/14 Terren Suydam [EMAIL PROTECTED]: --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Hi Will, I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with some

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote: --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary engineers in the same timeframe. Do you really

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Matt, Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the advantage of being easily formalizable, but has the disadvantage of missing a necessary aspect of intelligence. I have yet to see from
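
A deliberate caricature of the measure being objected to (a sketch for illustration; the function name, inputs, and formula are all assumptions, not Matt's actual definition): score a system by raw resource figures and nothing else.

    import math

    # Hypothetical resource metric: every name and the formula below are
    # illustrative assumptions, not a formula from the thread.
    def resource_score(storage_bits, ops_per_sec, io_bits_per_sec, knowledge_bits):
        # Sum of logs, so each resource contributes additively in bits
        return sum(map(math.log2, (storage_bits, ops_per_sec,
                                   io_bits_per_sec, knowledge_bits)))

    print(round(resource_score(1e15, 1e16, 1e7, 1e9)))  # one scalar, blind to organization

Nothing in such a score distinguishes a genius from a warehouse of disks, which is the objection in a sentence.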

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
Hi Terren, I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Ben Goertzel [EMAIL PROTECTED] wrote: Here is how I see this exchange... You proposed a so-called *mathematical* debunking of RSI. I presented some detailed arguments against this so-called debunking, pointing out that its mathematical assumptions and its

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Charles Hixson
If you want to argue this way (reasonable), then you need a specific definition of intelligence. One that allows it to be accurately measured (and not just in principle). IQ definitely won't serve. Neither will g. Neither will GPA (if you're discussing a student). Because of this, while I

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Eric Burton
An AI that is twice as smart as a human can make no more progress than 2 humans. Actually I'll argue that we can't make predictions about what a greater-than-human intelligence would do. Maybe the summed intelligence of 2 humans would be sufficient to do the work of a dozen. Maybe

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
[EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com Date: Tuesday, October 14, 2008, 2:12 PM If you want to argue this way (reasonable), then you need a specific definition of intelligence. One that allows it to be accurately measured (and not just

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote: From: William Pearson [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com Date: Tuesday, October 14, 2008, 1:13 PM Hi Terren, I think humans provide ample evidence that intelligence

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread BillK
On Tue, Oct 14, 2008 at 2:41 PM, Matt Mahoney wrote: But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans. I can't say I've noticed two dogs being smarter than one dog. Admittedly, a

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, If you want to argue that recursive self improvement is a special case of learning, then I have no disagreement with the rest of your argument.

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Ben Goertzel
Matt, But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans. You don't have automatic self improvement until you have AI that is billions of times smarter. A team of a few people isn't

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Mike Tintner
Will: There is a reason why lots of the planet's biomass has stayed as bacteria. It does perfectly well like that. It survives. Too much processing power is a bad thing, it means less for self-preservation and affecting the world. Balancing them is a tricky proposition indeed. Interesting thought.

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote: Matt, Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the advantage of being easily formalizable, but has the disadvantage of

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
I was eager to debunk your supposed debunking of recursive self-improvement, but I found that when I tried to open that PDF file, it looked like a bunch of gibberish (random control characters) in my PDF reader (Preview on OS X Leopard) ben g On Mon, Oct 13, 2008 at 12:19 PM, Matt Mahoney [EMAIL

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote: I was eager to debunk your supposed debunking of recursive self-improvement, but I found that when I tried to open that PDF file, it looked like a bunch of gibberish (random control characters) in my PDF reader (Preview on OSX

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Abram Demski
I can read the pdf just fine. I am also using mac's Preview program. So it is not that... --Abram On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote: I was eager to debunk your supposed debunking of recursive
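
An aside on the PDF mystery: a quick generic diagnostic (not something anyone in the thread reports running) is to check the magic bytes, since every valid PDF begins with "%PDF-". A local copy that fails the test was mangled in transit, which would explain gibberish in one reader but not another. The filename is a placeholder.

    def looks_like_pdf(path: str) -> bool:
        # Every valid PDF starts with the 5-byte signature %PDF-
        with open(path, "rb") as f:
            return f.read(5) == b"%PDF-"

    print(looks_like_pdf("rsi.pdf"))  # placeholder filename; False suggests a corrupted download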

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Eric Burton
On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote: That's odd. Maybe you should run Windows :-( No. You should not run Windows

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
Hi, OK, I read the supposed refutation of recursive self-improvement at http://www.mattmahoney.net/rsi.html There are at least three serious problems with the argument. 1) By looking only at algorithmic information (defined in terms of program length) and ignoring runtime complexity,
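
The first objection can be made precise with time-bounded complexity (the standard definitions of plain and Levin complexity). Plain algorithmic information ignores how long the shortest program runs, while Levin's Kt charges for time:

    K(x)  = \min_p \{ |p| : U(p) = x \}
    Kt(x) = \min_p \{ |p| + \log_2 \mathrm{time}_U(p) : U(p) = x \}

Under K, a system that merely gets faster has not improved at all; under Kt, or any other runtime-sensitive measure, it has. That gap is what the objection points to.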

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
speed and memory have no effect on the algorithmic complexity of a program running on it. -- Matt Mahoney, [EMAIL PROTECTED] --- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com
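
The textbook fact behind that sentence is the invariance theorem: for any two universal machines U and V there is a constant c_{UV}, independent of x, with

    K_U(x) \le K_V(x) + c_{UV}

So a faster machine with more memory shifts complexities by at most an additive constant, which is why a purely algorithmic-information notion of improvement cannot register hardware gains.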

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, Thanks for the comments on my RSI paper. To address your comments, You seem to be addressing minor lacunae in my wording, while ignoring my main conceptual and mathematical point!!! 1. I defined improvement as

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com Date: Monday, October 13, 2008, 11:46 PM On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, Thanks for the comments on my

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: [agi] Updated AGI proposal (CMR v2.1) To: agi@v2.listbox.com Date: Monday, October 13, 2008, 11:46 PM On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, Thanks for the comments on my RSI paper. To address your