Matt Mahoney wrote:
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
It seems clear that without external inputs the amount of improvement
possible is stringently limited. That is evident from inspection. But
why the "without input"? The only evident reason is to ensure the ...
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
There is currently a global brain (the world economy) with an IQ of
around 10^10, and approaching 10^12.
Oh man. It is so tempting in today's economic morass to point out the
obvious stupidity of this ...
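The thread never shows a derivation for the 10^10 figure. One hedged
back-of-envelope reading (an assumption of this note, not Matt's stated
reasoning) is that "IQ" here stands for the number of human-equivalent
problem solvers the world economy aggregates:

    \[
      \mathrm{IQ}_{\text{global}} \sim N_{\text{people}} \approx 7 \times 10^{9} \approx 10^{10},
      \qquad
      \mathrm{IQ}_{\text{future}} \sim 10^{10} \times 10^{2} = 10^{12},
    \]

where the extra factor of roughly 10^2 would come from networked machines
adding on the order of a hundred human-equivalents per person.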
Nicole, yes, Rosato I think, across the road. Ok with me.
Cheers
Peter
Peter G Burton PhD
http://homepage.mac.com/blinkcentral
[EMAIL PROTECTED]
intl 61 (0) 400 194 333
On Wednesday, October 15, 2008, at 09:08PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Matt wrote, in reply to me:
An AI ...
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
It seems clear that without external inputs the amount of improvement
possible is stringently limited. That is evident from inspection. But
why the "without input"? The only evident reason is to ensure the truth
of the ...
What I am trying to debunk is the perceived risk of a fast takeoff
singularity launched by the first AI to achieve superhuman intelligence. In
this scenario, a scientist with an IQ of 180 produces an artificial
scientist with an IQ of 200, which produces an artificial scientist with
an IQ ...
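Matt's chain (180 -> 200 -> ...) is easy to put into toy-model form. The
sketch below is an illustration only, not anything from the thread; the
returns exponent, which decides whether the sequence takes off or slows
down, is precisely the assumption the two sides disagree about.

    # Toy model of the "IQ 180 -> 200 -> ..." chain (illustrative only;
    # the `returns` exponent is an assumption, not a measured quantity).

    def takeoff(iq=180.0, gain=20.0, returns=1.0, generations=10):
        """Each generation adds gain * (iq / 180) ** returns IQ points.

        returns > 1 : super-exponential growth (fast takeoff)
        returns = 1 : roughly exponential growth
        returns < 1 : sub-exponential growth (no takeoff)
        """
        history = [iq]
        for _ in range(generations):
            iq += gain * (iq / 180.0) ** returns
            history.append(iq)
        return history

    for r in (0.5, 1.0, 1.5):
        print(f"returns={r}:", [round(x) for x in takeoff(returns=r)])

With returns below 1, each successor is a smaller relative step than its
parent, which is roughly the diminishing-returns position Matt argues;
above 1 the sequence runs away, which is the scenario he is debunking.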
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Among other reasons: because, in the real world, the scientist with an IQ of
200 is **not** a brain in a vat, unable to learn from the external world.
Rather, he is able to run experiments in the external ...
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Your paper does **not** prove anything whatsoever about real-world
situations.
You are correct. My RSI paper only applies to self-improvement of closed
systems. In the interest of proving the safety of AI, I think this is a good ...
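For readers without the paper: the usual formalization of why a closed
system cannot improve without bound (a sketch of the standard
algorithmic-information argument, not necessarily the exact form in
Matt's paper) is that a program p running with no input can only emit
what it already encodes, so any child program c it produces satisfies

    \[
      K(c) \le K(p) + O(1),
    \]

where K is Kolmogorov complexity. Any notion of intelligence that grows
with algorithmic complexity is therefore capped near the complexity of
the original system; only external input can raise the cap.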
Hi,
Also, you are right that it does not apply to many real-world problems.
Here my objection (as stated in my AGI proposal, but perhaps not clearly) is
that creating an artificial scientist with slightly above-human intelligence
won't launch a singularity either, but for a different reason. ...
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
An AI twice as smart as any human could figure out how to use the
resources at his disposal to help him create an AI 3 times as smart as
any human. These AIs will not be brains in vats. They will have
resources at their disposal.
Matt wrote, in reply to me:
An AI twice as smart as any human could figure out how to use the
resources at his disposal to help him create an AI 3 times as smart as
any human. These AIs will not be brains in vats. They will have
resources at their disposal.
It depends on what you ...
10 matches