Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins

Matt Mahoney wrote:

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

  

It seems clear that without external inputs the amount of improvement
possible is stringently limited.  That is evident from inspection.  But
why the "without input"?  The only evident reason is to ensure the truth
of the proposition, as it doesn't match any intended real-world scenario
that I can imagine.  (I've never considered the Oracle AI scenario [an
AI kept within a black box that will answer all your questions without
inputs] to be plausible.)



If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is currently a global brain (the world economy) with an IQ of around 
10^10, and approaching 10^12.


Oh man.  It is so tempting in today's economic morass to point out the 
obvious stupidity of this purported super-super-genius.  Why would you 
assign such an astronomical intelligence to the economy?  Even from the 
POV of the best of Austrian micro-economic optimism, it is not at all 
clear that billions of minds of human-level IQ interacting with one 
another can be said to produce such a large exponential of the 
average human IQ.  How much of the advancement of humanity is the 
result of a relatively few exceptionally bright minds rather than the 
billions of lesser intelligences?  Are you thinking more of the entire 
cultural environment rather than specifically the economy?



- samantha





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Matt Mahoney
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  There is currently a global brain (the world economy) with an IQ of
  around 10^10, and approaching 10^12.

 Oh man.  It is so tempting in today's economic morass to point out the
 obvious stupidity of this purported super-super-genius.  Why would you
 assign such an astronomical intelligence to the economy?

Without the economy, or the language and culture needed to support it, you 
would be foraging for food and sleeping in the woods. You would not know that 
you could grow crops by planting seeds, or that you could make a spear out of 
sticks and rocks and use it for hunting. There is a 99.9% chance that you would 
starve because the primitive earth could only support a few million humans, not 
a few billion.

I realize it makes no sense to talk of an IQ of 10^10 when current tests only 
go to about 200. But by any measure of goal achievement, such as dollars earned 
or number of humans that can be supported, the global brain has enormous 
intelligence. It is a known fact that groups of humans collectively make more 
accurate predictions than their members, e.g. prediction markets. 
http://en.wikipedia.org/wiki/Prediction_market
Such markets would not work if the members did not individually think that they 
were smarter than the group (i.e. disagree). You may think you could run the 
government better than the current leadership, but it is a fact that people are 
better off (as measured by GDP and migration) in democracies than in 
dictatorships. Group decision making is also widely used in machine learning, 
e.g. the PAQ compression programs.
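
To make the group-prediction point concrete, here is a toy Python sketch (just
an illustration, not PAQ's actual mixing code; PAQ combines bit predictions in
the logistic domain with learned weights). By Jensen's inequality, the squared
error of the averaged forecast never exceeds the average squared error of the
individual forecasters:

import random

def brier(p, outcome):
    # Squared error of a probability forecast p against an outcome in {0, 1}.
    return (p - outcome) ** 2

random.seed(1)
outcome = 1  # the event in question happens

# Ten noisy forecasters, each estimating the probability individually.
forecasts = [min(1.0, max(0.0, 0.7 + random.gauss(0, 0.2))) for _ in range(10)]

group = sum(forecasts) / len(forecasts)  # simple unweighted pooling
avg_error = sum(brier(p, outcome) for p in forecasts) / len(forecasts)

# Convexity guarantees brier(group, outcome) <= avg_error for any outcome
# and any set of forecasts -- the group is at least as good on average.
print("average individual error:", avg_error)
print("pooled forecast error:   ", brier(group, outcome))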

 How much of the advancement of humanity is the 
 result of a relatively few exceptionally bright minds
 rather than the  billions of lesser intelligences? 

Very little, because agents at any intelligence level cannot detect higher 
intelligence. Socrates was executed. Galileo was arrested. Even today, there is 
a span of decades between pioneering scientific work and its recognition with a 
Nobel prize. So I don't expect anyone to recognize the intelligence of the 
economy. But your ability to read this email depends more on circuit board 
assemblers in Malaysia than you are willing to give the world credit for.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-16 Thread peter . burton
Nicole, yes, Rosato I think, across the road. Ok with me.
Cheers
Peter

Peter G Burton PhD
http://homepage.mac.com/blinkcentral
[EMAIL PROTECTED]
intl 61 (0) 400 194 333

 
On Wednesday, October 15, 2008, at 09:08PM, Ben Goertzel [EMAIL PROTECTED] 
wrote:
[snip]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel


 What I am trying to debunk is the perceived risk of a fast takeoff
 singularity launched by the first AI to achieve superhuman intelligence. In
 this scenario, a scientist with an IQ of 180 produces an artificial
 scientist with an IQ of 200, which produces an artificial scientist with an
 IQ of 250, and so on. I argue it can't happen because human level
 intelligence is the wrong threshold. There is currently a global brain (the
 world economy) with an IQ of around 10^10, and approaching 10^12. THAT is
 the threshold we must cross. And that seed was already planted 3 billion
 years ago.

 To argue this point, I need to discredit certain alternative proposals,
 such as an intelligent agent making random variations of itself and then
 testing the children with puzzles of the parent's choosing. My paper proves
 that proposals of this form cannot work.



Your paper does **not** prove anything whatsoever about real-world
situations.

Among other reasons: because, in the real world, the scientist with an IQ of
200 is **not** a brain in a vat, unable to learn from the external world.

Rather, he is able to run experiments in the external world (which has far
higher algorithmic information content than he does, by the way), which give
him **new information** about how to go about making the scientist with an IQ
of 220.

Limitations on the rate of self-improvement of scientists who are brains in
vats are not really that interesting...

(And this is separate from the other critique I made, which is that using
algorithmic information as a proxy for IQ is a very poor choice, given the
critical importance of runtime complexity in intelligence.  As an aside,
note there are correlations between human intelligence and speed of neural
processing!)

-- Ben G





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 [snip]

 Limitations on the rate of self-improvement of scientists who are brains in
 vats are not really that interesting...


Brain-in-a-vat self-improvement is also an interesting and worthwhile
endeavor. One problem to tackle, for example, is developing more
efficient optimization algorithms that can more quickly find better
plans according to the system's goals (and naturally applying these
algorithms to decision-making during further self-improvement).
Advances in algorithms can bring great efficiency, and looking at what
modern computer science has come up with, this efficiency rarely requires
an algorithm of any significant complexity. There is plenty of ground
to cover in the space of simple things; limitations on complexity are
pragmatically void.
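
As a trivial illustration of how much leverage a simple algorithmic change
gives (a toy example, nothing to do with any AGI code): adding a one-line
cache to a naive recursion turns exponential time into linear time.

import time
from functools import lru_cache

def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)  # the entire "advance": remember past results
def fib_memo(n):
    # Identical logic, now linear time.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for f in (fib_naive, fib_memo):
    start = time.perf_counter()
    f(32)
    print(f.__name__, round(time.perf_counter() - start, 4), "seconds")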

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Your paper does **not** prove anything whatsoever about real-world
 situations.

You are correct. My RSI paper only applies to self-improvement of closed 
systems. In the interest of proving the safety of AI, I think this is a good 
thing. It proves that various scenarios where an AI rewrites its source code or 
makes random changes and tests them will not work without external input, even 
if computing power is unlimited. This removes one possible threat of a fast 
takeoff singularity.
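
Schematically, the kind of loop being ruled out looks like this (a toy sketch
with stand-in mutation and scoring functions, not the paper's formal
construction). The essential feature is that both the candidate children and
the test puzzles are generated by the parent itself, so no information from
outside the system ever enters the loop:

import hashlib
import random

def mutate(program, rng):
    # Variation chosen by the parent's own pseudo-random generator.
    i = rng.randrange(len(program))
    return program[:i] + ('1' if program[i] == '0' else '0') + program[i + 1:]

def score(program, puzzle):
    # Stand-in fitness test: how well the program "answers" the puzzle.
    return hashlib.sha256((program + puzzle).encode()).digest()[0]

def rsi_closed_loop(program, seed, generations=1000):
    # No external input: every child and every puzzle is a deterministic
    # function of (program, seed), so the algorithmic information of the
    # whole run is essentially bounded by that of the starting state.
    rng = random.Random(seed)
    for _ in range(generations):
        child = mutate(program, rng)
        puzzle = format(rng.getrandbits(32), 'x')  # puzzle of the parent's choosing
        if score(child, puzzle) > score(program, puzzle):
            program = child  # "improvement" certified only by the parent's own test
    return program

print(rsi_closed_loop('0' * 64, seed=7))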

Also, you are right that it does not apply to many real world problems. Here my 
objection (as stated in my AGI proposal, but perhaps not clearly) is that 
creating an artificial scientist with slightly above human intelligence won't 
launch a singularity either, but for a different reason. It is not the 
scientist who creates a smarter scientist; it is the whole global economy 
that creates it. George Will expresses the idea better than I do in 
http://www.newsweek.com/id/158752 : nobody can make a pencil, much less an AI.

The global brain *is* self-improving, both by learning and by reorganizing 
itself to be more efficient. Without input, the self-organization would reach a 
maximum and stop. Growth requires input as well as increased computing power 
from adding people and computers.

As for using algorithmic complexity as a proxy for intelligence (an upper 
bound, actually), perhaps you can suggest an alternative. Algorithmic 
complexity is how much we know. Less well-defined measures seem to break down 
into philosophical arguments over exactly what intelligence is.
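
To be precise about the upper bound (this is standard algorithmic information
theory, not a result specific to the paper): if a child agent $C$ is computed
from a parent agent $P$ with no external input, then

\[
K(C) \;\le\; K(P) + O(1),
\]

so a closed chain of self-modifications $P \to C_1 \to C_2 \to \cdots$ can
never raise the Kolmogorov complexity beyond that of the starting agent plus a
constant, whatever "knowledge" $K$ is taken to measure.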

-- Matt Mahoney, [EMAIL PROTECTED]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Hi,


 Also, you are right that it does not apply to many real world problems.
 Here my objection (as stated in my AGI proposal, but perhaps not clearly) is
 that creating an artificial scientist with slightly above human intelligence
 won't launch a singularity either, but for a different reason. It is not the
 scientist who creates a smarter scientist; it is the whole global
 economy that creates it. George Will expresses the idea better than I do in
 http://www.newsweek.com/id/158752 : nobody can make a pencil, much less an
 AI.


This strikes me as a very, very bad argument.

An AI twice as smart as any human could figure out how to use the resources
at his disposal to help him create an AI 3 times as smart as any human.
These AI's will not be brains in vats.  They will have resources at their
disposal.

Also, when we can build one AI twice as smart as any human, we can build a
million of them soon thereafter.  Unlike humans, software can easily be
copied.  So don't think about just one smart AI.  Think about a huge number
of them, with all the resources in the world at their potential disposal.




 As for using algorithmic complexity as a proxy for intelligence (an upper
 bound, actually), perhaps you can suggest an alternative. Algorithmic
 complexity is how much we know. Less well-defined measures seem to break
 down into philosophical arguments over exactly what intelligence is.


Algorithmic complexity is an abstraction of how much we know declaratively
rather than procedurally.

I am suggesting that one proxy for intelligence is the complexity of the
problems that a system can solve within a certain, fixed period of time.
This can be formalized in many ways, including using algorithmic information
theory to formalize problem complexity.  But the point is the
incorporation of running speed...
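
One possible way to write that proxy down, just as a sketch (the notation is
ad hoc, taken neither from Ben's message nor from Legg's thesis): letting
$D(p)$ be the difficulty of problem $p$ in the algorithmic-information sense,
define

\[
I(A, T) \;=\; \max \{\, D(p) \;:\; A \text{ solves } p \text{ within time } T \,\},
\]

so intelligence is indexed by the time budget $T$, and a faster system is
genuinely more intelligent at small $T$ even when it knows nothing more.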

-- Ben G





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 An AI twice as smart as any human could figure
 out how to use the resources at his disposal to
 help him create an AI 3 times as smart as any
 human.  These AI's will not be brains in vats.
 They will have resources at their disposal.

It depends on what you mean by "twice as smart." Do you mean twice as many 
brain cells? Twice as much memory? Twice as fast? Twice as much knowledge? Able 
to score 200 on an adult IQ test (if such a thing existed)?

Unless you tell me otherwise, I have to assume that it means able to do what 2 
people can do (or 3 or 10, the exact number isn't important). In that case, I 
have to argue it is the global brain that is creating the AI with a very tiny 
bit of help from the parent AI. You would get the same result by hiring more 
people.

The fact is we have been creating smarter-than-human machines for 50 years now, 
depending on what intelligence test you use. And they have greatly increased 
our productivity by doing well the things that humans do poorly, much more than 
you could have gotten by hiring more people.

 Also, when we can build one AI twice as smart
 as any human, we can build a million of them
 soon thereafter.

All of whom will know exactly the same thing. Training each of them to do a 
specialized task will not be cheap. And no, they will not just learn on their 
own without human effort. On-the-job training has real costs in mistakes and 
lost productivity. Not everything they need to know is written down.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Matt wrote, in reply to me:


  An AI twice as smart as any human could figure
  out how to use the resources at his disposal to
  help him create an AI 3 times as smart as any
  human.  These AI's will not be brains in vats.
  They will have resources at their disposal.

 It depends on what you mean by "twice as smart." Do you mean twice as many
 brain cells? Twice as much memory? Twice as fast? Twice as much knowledge?
 Able to score 200 on an adult IQ test (if such a thing existed)?

 Unless you tell me otherwise, I have to assume that it means able to do
 what 2 people can do (or 3 or 10, the exact number isn't important). In
 that case, I have to argue it is the global brain that is creating the AI
 with a very tiny bit of help from the parent AI. You would get the same
 result by hiring more people.



Whatever ...

You are IMO just distracting attention from the main point by making odd
definitions...

No, of course my colloquial phrase "twice as smart" does not mean "as smart
as two people put together."  That is not the accepted interpretation of
that colloquialism and you know it!

To make my statement clearer, one approach is to forget about quantifying
intelligence for the moment...

Let's talk about qualitative differences in intelligence.  Do you agree that
a dog is qualitatively much more intelligent than a roach, and a human is
qualitatively much more intelligent than a dog?

In this sense I could replace

 An AI twice as smart as any human could figure
 out how to use the resources at his disposal to
 help him create an AI 3 times as smart as any
 human.  These AI's will not be brains in vats.
 They will have resources at their disposal.

with

 An AI that is qualitatively much smarter than
 any human could figure out how to use the
 resources at its disposal to help it create an
 AI that is qualitatively much smarter than it.
 These AI's will not be brains in vats.
 They will have resources at their disposal.


On the other hand, if you insist on mathematical definitions of
intelligence, we could talk about, say, the intelligence of a system as
the total prediction difficulty of the set S of sequences with the
property that the system can predict S during a period of time of
length T.  We can define prediction difficulty as Shane Legg does in
his PhD thesis.  We can then average this over various time-lengths T,
using some appropriate weighting function.

(I'm not positing the above as an ideal definition
of intelligence ... just throwing one definition
out there... my conceptual point is quite independent
of the specific definition of intelligence you choose)
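
Written out, that reads (again in ad hoc notation): let $S_T(A)$ be the set of
sequences that system $A$ can predict within a period of length $T$, let
$D(\cdot)$ be Legg-style prediction difficulty, and let $w(T)$ be the
weighting function; then

\[
I(A) \;=\; \sum_{T} w(T)\, D\big(S_T(A)\big).
\]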

Using this sort of definition, my statement is surely
true, though it would take work to prove it.

Using this sort of definition, a system A2 that is twice as smart as
system A1, if allowed to interact with an appropriate environment
vastly more complex than either of the systems, would surely be capable
of modifying itself into a system A3 that is twice as smart as A2.

This seems extremely obvious and I don't want to
spend time right now proving it formally.  No doubt
writing out the proof would reveal various mathematical
conditions on the theorem statement...

-- Ben G


