--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
There is currently a global brain (the world economy) with an IQ of around 10^10, and approaching 10^12.
Oh man. It is so tempting in today's economic morass to point out the obvious stupidity of this ...
--- On Wed, 10/15/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Interstellar void must be astronomically intelligent, with all its incompressible noise...
How do you know it's not compressible? Compression is not computable. To give a concrete example, the output of RC4 looks like random noise ...
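Matt's RC4 point can be made concrete with a short sketch (my own illustration, not code from the thread). A general-purpose compressor like zlib cannot shrink the keystream at all, yet the few lines that generate it are themselves a complete, tiny description of arbitrarily much of that "noise", so its algorithmic complexity is low even though no practical compressor can discover that.

```python
import zlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream for the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

data = rc4_keystream(b"Key", 100_000)
packed = zlib.compress(data, 9)
# zlib finds no structure: the "compressed" form is no smaller than
# the input, even though this whole program is a short description
# of the data.
print(len(data), len(packed))
```

This asymmetry is the point about compression not being computable: whether a string has a short description (a small program plus a key) cannot in general be decided by inspecting the string.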
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
It seems clear that without external inputs the amount of improvement possible is stringently limited. That is evident from inspection. But why the "without input"? The only evident reason is to ensure the truth of the ...
What I am trying to debunk is the perceived risk of a fast takeoff singularity launched by the first AI to achieve superhuman intelligence. In this scenario, a scientist with an IQ of 180 produces an artificial scientist with an IQ of 200, which produces an artificial scientist with an IQ ...
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Among other reasons: because, in the real world, the scientist with an IQ of 200 is **not** a brain in a vat, unable to learn from the external world. Rather, he is able to run experiments in the external ...
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Your paper does **not** prove anything whatsoever about real-world situations.
You are correct. My RSI paper only applies to self-improvement of closed systems. In the interest of proving the safety of AI, I think this is a good ...
Also, you are right that it does not apply to many real-world problems. Here my objection (as stated in my AGI proposal, but perhaps not clearly) is that creating an artificial scientist with slightly above-human intelligence won't launch a singularity either, but for a different reason.
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
An AI twice as smart as any human could figure out how to use the resources at his disposal to help him create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal.
... Objecting to it on the basis of the difficulty/impossibility of measuring intelligence seems like a bit of a tangent.
--- On Wed, 10/15/08, Charles Hixson [EMAIL PROTECTED] wrote:
Matt wrote, in reply to me:
An AI twice as smart as any human could figure out how to use the resources at his disposal to help him create an AI 3 times as smart as any human. These AIs will not be brains in vats. They will have resources at their disposal.
It depends on what you ...
On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Ben,
If you want to argue that recursive self improvement is a special case of learning, then I have no disagreement with the rest of your argument.
But is this really a useful approach to solving AGI? A group of humans ...
2008/10/14 Terren Suydam [EMAIL PROTECTED]:
--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AI that is twice as smart as a human can make no more progress than 2 humans.
Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary engineers in the same timeframe. Do you really ...
Hi Will,
I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with some ...
Matt,
Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the advantage of being easily formalizable, but has the disadvantage of missing a necessary aspect of intelligence.
I have yet to see from ...
--- On Tue, 10/14/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Here is how I see this exchange...
You proposed a so-called *mathematical* debunking of RSI. I presented some detailed arguments against this so-called debunking, pointing out that its mathematical assumptions and its ...
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
If you want to argue this way (reasonable), then you need a specific definition of intelligence. One that allows it to be accurately measured (and not just in principle). IQ definitely won't serve. Neither will G. Neither will GPA (if you're discussing a student). Because of this, while I ...
An AI that is twice as smart as a human can make no more progress than 2 humans.
Actually I'll argue that we can't make predictions about what a greater-than-human intelligence would do. Maybe the summed intelligence of 2 humans would be sufficient to do the work of a dozen. Maybe ...
On Tue, Oct 14, 2008 at 2:41 PM, Matt Mahoney wrote:
But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans.
I can't say I've noticed two dogs being smarter than one dog. Admittedly, a ...
Matt,
But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans. You don't have automatic self improvement until you have AI that is billions of times smarter. A team of a few people isn't ...
Will: There is a reason why lots of the planet's biomass has stayed as bacteria. It does perfectly well like that. It survives.
Too much processing power is a bad thing; it means less for self-preservation and affecting the world. Balancing them is a tricky proposition indeed.
Interesting thought.
I updated my AGI proposal from a few days ago.
http://www.mattmahoney.net/agi2.html
There are two major changes. First I clarified the routing strategy and justified it on an information-theoretic basis. An organization is optimally efficient when its members specialize with no duplication of ...
I was eager to debunk your supposed debunking of recursive self-improvement, but I found that when I tried to open that PDF file, it looked like a bunch of gibberish (random control characters) in my PDF reader (Preview on OS X Leopard).
ben g
I can read the PDF just fine. I am also using the Mac's Preview program. So it is not that...
--Abram
On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
That's odd. Maybe you should run Windows :-(
No. You should not run Windows.
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed:
Hi,
OK, I read the supposed refutation of recursive self-improvement at http://www.mattmahoney.net/rsi.html
There are at least three extremely major problems with the argument.
1) By looking only at algorithmic information (defined in terms of program length) and ignoring runtime complexity, ...
... speed and memory have no effect on the algorithmic complexity of a program running on it.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Ben,
Thanks for the comments on my RSI paper. To address your comments,
You seem to be addressing minor lacunae in my wording, while ignoring my main conceptual and mathematical point!!!
1. I defined improvement as ...