Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins

Matt Mahoney wrote:

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

  

It seems clear that without external inputs the amount of improvement possible
is stringently limited.  That is evident from inspection.  But why the "without
input"?  The only evident reason is to ensure the truth of the proposition, as
it doesn't match any intended real-world scenario that I can imagine.  (I've
never considered the Oracle AI scenario [an AI kept within a black box that
will answer all your questions without inputs] to be plausible.)



If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is currently a global brain (the world economy) with an IQ of around 
10^10, and approaching 10^12.


Oh man.  It is so tempting in today's economic morass to point out the
obvious stupidity of this purported super-super-genius.   Why would you
assign such an astronomical intelligence to the economy?   Even from the
POV of the best of Austrian micro-economic optimism it is not at all
clear that billions of minds of human-level IQ interacting with one
another can be said to produce such a large exponential of the
average human IQ.   How much of the advancement of humanity is the
result of a relatively few exceptionally bright minds rather than the
billions of lesser intelligences?   Are you thinking more of the entire
cultural environment rather than specifically the economy?



- samantha





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Matt Mahoney
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  There is currently a global brain (the world economy) with an IQ of
  around 10^10, and approaching 10^12.

 Oh man.  It is so tempting in today's economic morass to point out the
 obvious stupidity of this purported super-super-genius.  Why would you
 assign such an astronomical intelligence to the economy?

Without the economy, or the language and culture needed to support it, you 
would be foraging for food and sleeping in the woods. You would not know that 
you could grow crops by planting seeds, or that you could make a spear out of 
sticks and rocks and use it for hunting. There is a 99.9% chance that you would 
starve, because the primitive earth could only support a few million humans, not 
a few billion.

I realize it makes no sense to talk of an IQ of 10^10 when current tests only 
go to about 200. But by any measure of goal achievement, such as dollars earned 
or number of humans that can be supported, the global brain has enormous 
intelligence. It is a known fact that groups of humans collectively make more 
accurate predictions than their members, e.g. prediction markets. 
http://en.wikipedia.org/wiki/Prediction_market
Such markets would not work if the members did not individually think that they 
were smarter than the group (i.e. disagree). You may think you could run the 
government better than current leadership, but it is a fact that people are 
better off (as measured by GDP and migration) in democracies than 
dictatorships. Group decision making is also widely used in machine learning, 
e.g. the PAQ compression programs.
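(For concreteness, the combination step used by PAQ-style context-mixing
compressors can be sketched as logistic mixing of several models' bit
predictions. The snippet below is a simplified illustration, not PAQ's actual
code; the function name, probabilities, and weights are all made up for the
example.)

import math

def logistic_mix(probs, weights):
    # Map each model's probability to the logistic ("stretched") domain,
    # take a weighted sum, and squash back to a probability. This is a
    # simplified version of the mixing step in PAQ-style context mixing.
    stretched = [math.log(p / (1.0 - p)) for p in probs]
    s = sum(w * x for w, x in zip(weights, stretched))
    return 1.0 / (1.0 + math.exp(-s))

# Three hypothetical models' estimates that the next bit is a 1:
print(logistic_mix([0.9, 0.6, 0.55], [0.5, 0.3, 0.2]))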

 How much of the advancement of humanity is the result of a relatively few
 exceptionally bright minds rather than the billions of lesser intelligences?

Very little, because agents at any intelligence level cannot detect higher 
intelligence. Socrates was executed. Galileo was arrested. Even today, there is 
a span of decades between pioneering scientific work and its recognition with a 
Nobel prize. So I don't expect anyone to recognize the intelligence of the 
economy. But your ability to read this email depends more on circuit board 
assemblers in Malaysia than you are willing to give the world credit for.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-16 Thread peter . burton
Nicole, yes, Rosato I think, across the road. Ok with me.
Cheers
Peter

Peter G Burton PhD
http://homepage.mac.com/blinkcentral
[EMAIL PROTECTED]
intl 61 (0) 400 194 333

 





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 5:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote:

 Matt,

 Your measure of intelligence seems to be based on not much
 more than storage capacity, processing power, I/O, and
 accumulated knowledge. This has the advantage of being
 easily formalizable, but has the disadvantage of missing a
 necessary aspect of intelligence.

 Usually when I say intelligence I mean amount of knowledge, which can
 be measured in bits. (Well not really, since Kolmogorov complexity is not
 computable). The other measures reduce to it. Increasing memory allows more
 knowledge to be stored. Increasing processing power and I/O bandwidth allows
 faster learning, or more knowledge accumulation over the same time period.

 Actually, amount of knowledge is just an upper bound. A random string has
 high algorithmic complexity but is not intelligent in any meaningful sense. My
 justification for this measure is based on the AIXI model. In order for an agent
 to guess an environment with algorithmic complexity K, the agent must be able
 to simulate the environment, so it must also have algorithmic complexity K. An
 agent with higher complexity can guess a superset of environments that a lower
 complexity agent could, and therefore cannot do worse in accumulated reward.
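(Stated a bit more formally: if a program p exactly predicts an environment e,
then e can be reconstructed from p plus a constant-size wrapper, so

    K(e) \le K(p) + O(1),

and an agent with larger algorithmic complexity can therefore cover a superset
of environments.)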


Interstellar void must be astronomically intelligent, with all its
incompressible noise...

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Interstellar void must be astronomically intelligent, with
 all its incompressible noise...

How do you know it's not compressible?

Optimal compression is not computable. To give a concrete example, the output of 
RC4 looks like random noise if you don't know the key, yet it is algorithmically 
extremely simple.
http://en.wikipedia.org/wiki/RC4

More generally, the universe might be simulated by the following algorithm: 
enumerate all Turing machines until life is found, running the n'th machine for 
n steps. In this case, the universe (including your interstellar void) has a 
complexity of log2 H = 407 bits, where H is the Bekenstein bound of the Hubble 
radius, 2.91 x 10^122 bits.
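(Both claims are easy to check numerically. A minimal sketch using only the
Python standard library; the key string and stream length are arbitrary:
generate an RC4 keystream from a short key and see that a general-purpose
compressor cannot shrink it, then confirm that log2 of 2.91 x 10^122 is about
407.)

import math
import zlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Standard RC4: key scheduling followed by the pseudo-random generator.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

stream = rc4_keystream(b"short key", 100000)
print(len(zlib.compress(stream, 9)) / len(stream))  # about 1.0: looks incompressible
print(math.log2(2.91e122))                          # about 406.9, i.e. ~407 bits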

(Anyway, this is aside from my point, which you apparently missed, that 
algorithmic complexity is only an upper bound on intelligence.)

-- Matt Mahoney, [EMAIL PROTECTED]





RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

 It seems clear that without external inputs the amount of improvement possible
 is stringently limited.  That is evident from inspection.  But why the "without
 input"?  The only evident reason is to ensure the truth of the proposition, as
 it doesn't match any intended real-world scenario that I can imagine.  (I've
 never considered the Oracle AI scenario [an AI kept within a black box that
 will answer all your questions without inputs] to be plausible.)

If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is currently a global brain (the world economy) with an IQ of around 
10^10, and approaching 10^12. THAT is the threshold we must cross. And that 
seed was already planted 3 billion years ago.

To argue this point, I need to discredit certain alternative proposals, such as 
an intelligent agent making random variations of itself and then testing the 
children with puzzles of the parent's choosing. My paper proves that proposals 
of this form cannot work.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel


 What I am trying to debunk is the perceived risk of a fast takeoff
 singularity launched by the first AI to achieve superhuman intelligence. In
 this scenario, a scientist with an IQ of 180 produces an artificial
 scientist with an IQ of 200, which produces an artificial scientist with an
 IQ of 250, and so on. I argue it can't happen because human level
 intelligence is the wrong threshold. There is currently a global brain (the
 world economy) with an IQ of around 10^10, and approaching 10^12. THAT is
 the threshold we must cross. And that seed was already planted 3 billion
 years ago.

 To argue this point, I need to discredit certain alternative proposals,
 such as an intelligent agent making random variations of itself and then
 testing the children with puzzles of the parent's choosing. My paper proves
 that proposals of this form cannot work.



Your paper does **not** prove anything whatsoever about real-world
situations.

Among other reasons: Because, in the real world, the scientist with an IQ of
200 is **not** a brain in a vat with the inability to learn from the
external world.

Rather, he is able to run experiments in the external world (which has far
higher algorithmic information content than he does, by the way), which give him **new
information** about how to go about making the scientist with an IQ of 220.

Limitations on the rate of self-improvement of scientists who are brains in
vats are not really that interesting...

(And this is separate from the other critique I made, which is that using
algorithmic information as a proxy for IQ is a very poor choice, given the
critical importance of runtime complexity in intelligence.  As an aside,
note there are correlations between human intelligence and speed of neural
processing!)

-- Ben G





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Among other reasons: Because, in the real world, the scientist with an IQ of
 200 is **not** a brain in a vat with the inability to learn from the
 external world.

  Rather, he is able to run experiments in the external world (which has far
  higher algorithmic information content than he does, by the way), which give him **new
  information** about how to go about making the scientist with an IQ of 220.

  Limitations on the rate of self-improvement of scientists who are brains in
  vats are not really that interesting...

 (And this is separate from the other critique I made, which is that using
 algorithmic information as a proxy for IQ is a very poor choice, given the
 critical importance of runtime complexity in intelligence.  As an aside,
 note there are correlations between human intelligence and speed of neural
 processing!)


Brain-in-a-vat self-improvement is also an interesting and worthwhile
endeavor. One problem to tackle, for example, is to develop more
efficient optimization algorithms that can find better plans faster
according to the goals (and naturally to apply these algorithms to
decision-making during further self-improvement).
Advances in algorithms can bring great efficiency, and looking at what
modern computer science has come up with, this efficiency rarely requires
an algorithm of any significant complexity. There is plenty
of ground to cover in the space of simple things; limitations on
complexity are pragmatically void.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Your paper does **not** prove anything whatsoever about real-world
 situations.

You are correct. My RSI paper only applies to self improvement of closed 
systems. In the interest of proving the safety of AI, I think this is a good 
thing. It proves that various scenarios where an AI rewrites its source code or 
makes random changes and tests them will not work without external input, even 
if computing power is unlimited. This removes one possible threat of a fast 
takeoff singularity.

Also, you are right that it does not apply to many real world problems. Here my 
objection (as stated in my AGI proposal, but perhaps not clearly) is that 
creating an artificial scientist with slightly above-human intelligence won't 
launch a singularity either, but for a different reason. It is not the 
scientist who creates a smarter scientist; it is the whole global economy 
that creates it. George Will expresses the idea better than I do in 
http://www.newsweek.com/id/158752 -- nobody can make a pencil, much less an AI.

The global brain *is* self improving, both by learning and by reorganizing 
itself to be more efficient. Without input, the self-organization would reach a 
maximum and stop. Growth requires input as well as increased computing power, 
which comes from adding people and computers.

As for using algorithmic complexity as a proxy for intelligence (an upper 
bound, actually), perhaps you can suggest an alternative. Algorithmic 
complexity is how much we know. Less well-defined measures seem to break down 
into philosophical arguments over exactly what intelligence is.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Hi,


 Also, you are right that it does not apply to many real world problems.
 Here my objection (as stated in my AGI proposal, but perhaps not clearly) is
 that creating an artificial scientist with slightly above human intelligence
 won't launch a singularity either, but for a different reason. It is not the
 scientist who creates a smarter scientist, but it is the whole global
 economy that creates it. George Will expresses the idea better than I do in
 http://www.newsweek.com/id/158752 Nobody can make a pencil, much less an
 AI.


This strikes me as a very, very bad argument.

An AI twice as smart as any human could figure out how to use the resources
at his disposal to help him create an AI 3 times as smart as any human.
These AI's will not be brains in vats.  They will have resources at their
disposal.

Also, when we can build one AI twice as smart as any human, we can build a
million of them soon thereafter.  Unlike humans, software can easily be
copied.  So don't think about just one smart AI.  Think about a huge number
of them, with all the resources in the world at their potential disposal.




 As for using algorithmic complexity as a proxy for intelligence (an upper
 bound, actually), perhaps you can suggest an alternative. Algorithmic
 complexity is how much we know. Less well-defined measures seem to break
 down into philosophical arguments over exactly what intelligence is.


Algorithmic complexity is an abstraction of how much we know declaratively
rather than procedurally.

I am suggesting that one proxy for intelligence is the complexity of the
problems that a system can solve within a certain, fixed period of time.
This can be formalized in many ways, including using algorithmic information
theory to formalize problem complexity.  But the point is the
incorporation of running speed...
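(One standard way to fold running time into such a formalization, offered here
only as an illustration and not as the intended definition, is Levin's
time-bounded complexity

    Kt(x) = \min_{p \,:\, U(p) = x} \big( |p| + \log_2 t(p) \big),

where U is a universal machine and t(p) is the running time of p; problems
could then be graded by Kt rather than by plain algorithmic complexity.)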

-- Ben G





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Charles Hixson
It doesn't need to satisfy everyone, it just has to be the definition 
that you are using in your argument, and which you agree to stick to.


E.g., if you define intelligence to be the resources used (given some 
metric) in solving some particular selection of problems, then that is a 
particular definition of intelligence.  It may not be a very good one, 
though, as it looks like a system that knows the answers ahead of time 
and responds quickly would win over one that understood the problems in 
depth.  Rather like a multiple choice test rather than an essay.


I'm sure that one could fudge the definition to skirt that particular 
pothole, but it would be an ad hoc patch.  I don't trust that entire 
mechanism of defining intelligence.  Still, if I know what you mean, I 
don't have to accept your interpretations to understand your argument.  
(You can't average across all domains, only across some pre-specified 
set of domains.  Infinity doesn't exist in the implementable universe.)


Personally, I'm not convinced by the entire process of measuring 
intelligence.  I don't think that there *IS* any such thing.  If it 
were a disease, I'd call intelligence a syndrome rather than a 
diagnosis.  It's a collection of partially related capabilities given 
one name to make them easy to think about, while ignoring details.  As 
such it has many uses, but it's easy to mistake it for some genuine 
thing, especially as it's an intangible.


As an analogy consider "the gene for blue eyes".  There is no such 
gene.  There is a combination of genes that yields blue eyes, and it's 
characterized by the lack of genes for other eye colors.  (It's more 
complex than that, but that's enough.)


E.g., there appears to be a particular gene which is present in almost 
all people which enables them to parse grammatical sentences.  But a few 
people have been found in one family where this gene is damaged.  
The result is that about half the members of that family can't speak or 
understand language.  Are they unintelligent?  Well, they can't parse 
grammatical sentences, and they can't learn language.  In most other 
ways they appear as intelligent as anyone else.


So I'm suspicious of ALL definitions of intelligence which treat it as 
some kind of global thing.  But if you give me the definition that you 
are using in an argument, then I can at least attempt to understand what 
you are saying.



Terren Suydam wrote:

Charles,

I'm not sure it's possible to nail down a measure of intelligence that's going 
to satisfy everyone. Presumably, it would be some measure of performance in 
problem solving across a wide variety of novel domains in complex (i.e. not 
toy) environments.

Obviously among potential agents, some will do better in domain D1 than others, 
while doing worse in D2. But we're looking for an average across all domains. 
My task-specific examples may have confused the issue there, you were right to 
point that out.

But if you give all agents identical processing power and storage space, then 
the winner will be the one that was able to assimilate and model each problem 
space the most efficiently, on average. Which ultimately means the one which 
used the *least* amount of overall computation.

Terren

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

  

From: Charles Hixson [EMAIL PROTECTED]
Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 2:12 PM

If you want to argue this way (reasonable), then you need a specific
definition of intelligence.  One that allows it to be accurately measured
(and not just in principle).  IQ definitely won't serve.  Neither will G.
Neither will GPA (if you're discussing a student).

Because of this, while I think your argument is generally reasonable, I
don't think it's useful.  Most of what you are discussing is task specific,
and as such I'm not sure that "intelligence" is a reasonable term to use.
An expert engineer might be, e.g., a lousy bridge player.  Yet both are
thought of as requiring intelligence.  I would assert that in both cases a
lot of what's being measured is task-specific processing, i.e., narrow AI.

(Of course, I also believe that an AGI is impossible in the true sense of
"general", and that an approximate AGI will largely act as a coordinator
between a bunch of narrow AI pieces of varying generality.  This seems to
be a distinctly minority view.)

Terren Suydam wrote:

Hi Will,

I think humans provide ample evidence that intelligence is not necessarily
correlated with processing power. The genius engineer in my example solves
a given problem with *much less* overall processing than the ordinary
engineer, so in this case intelligence is correlated with some measure of
cognitive efficiency (which I will leave undefined). Likewise, a
grandmaster chess player looks at a given position and can calculate a
better move in one second than you or me could come up

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Matt Mahoney
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 An AI twice as smart as any human could figure
 out how to use the resources at his disposal to
 help him create an AI 3 times as smart as any
 human.  These AI's will not be brains in vats.
 They will have resources at their disposal.

It depends on what you mean by twice as smart. Do you mean twice as many 
brain cells? Twice as much memory? Twice as fast? Twice as much knowledge? Able 
to score 200 on an adult IQ test (if such a thing existed)?

Unless you tell me otherwise, I have to assume that it means able to do what 2 
people can do (or 3 or 10, the exact number isn't important). In that case, I 
have to argue it is the global brain that is creating the AI with a very tiny 
bit of help from the parent AI. You would get the same result by hiring more 
people.

The fact is we have been creating smarter-than-human machines for 50 years now, 
depending on what intelligence test you use. And they have greatly increased 
our productivity by doing well the things that humans do poorly, far more than 
you could have gotten by hiring more people.

 Also, when we can build one AI twice as smart
 as any human, we can build a million of them
 soon thereafter.

All of whom will know exactly the same thing. Training each of them to do a 
specialized task will not be cheap. And no, they will not just learn on their 
own without human effort. On-the-job training has real costs in mistakes and 
lost productivity. Not everything they need to know is written down.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Terren Suydam

The small point I was trying to make was that cognitive architecture is much 
more important to the realization of AGI than the amount of processing power 
you have at your disposal, or some other such platform-related considerations. 

It doesn't seem like a very controversial point to me. Objecting to it on the 
basis of the difficulty/impossibility of measuring intelligence seems like a 
bit of a tangent. 

--- On Wed, 10/15/08, Charles Hixson [EMAIL PROTECTED] wrote:

 From: Charles Hixson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Wednesday, October 15, 2008, 8:09 PM
 It doesn't need to satisfy everyone, it just has to be the definition that
 you are using in your argument, and which you agree to stick to.
 
 E.g., if you define intelligence to be the resources used (given some
 metric) in solving some particular selection of problems, then that is a
 particular definition of intelligence.  It may not be a very good one,
 though, as it looks like a system that knows the answers ahead of time and
 responds quickly would win over one that understood the problems in depth.
 Rather like a multiple choice test rather than an essay.
 
 I'm sure that one could fudge the definition to skirt that particular
 pothole, but it would be an ad hoc patch.  I don't trust that entire
 mechanism of defining intelligence.  Still, if I know what you mean, I
 don't have to accept your interpretations to understand your argument.
 (You can't average across all domains, only across some pre-specified set
 of domains.  Infinity doesn't exist in the implementable universe.)
 
 Personally, I'm not convinced by the entire process of measuring
 intelligence.  I don't think that there *IS* any such thing.  If it were a
 disease, I'd call intelligence a syndrome rather than a diagnosis.  It's a
 collection of partially related capabilities given one name to make them
 easy to think about, while ignoring details.  As such it has many uses,
 but it's easy to mistake it for some genuine thing, especially as it's an
 intangible.
 
 As an analogy consider "the gene for blue eyes".  There is no such gene.
 There is a combination of genes that yields blue eyes, and it's
 characterized by the lack of genes for other eye colors.  (It's more
 complex than that, but that's enough.)
 
 E.g., there appears to be a particular gene which is present in almost all
 people which enables them to parse grammatical sentences.  But a few
 people have been found in one family where this gene is damaged.  The
 result is that about half the members of that family can't speak or
 understand language.  Are they unintelligent?  Well, they can't parse
 grammatical sentences, and they can't learn language.  In most other ways
 they appear as intelligent as anyone else.
 
 So I'm suspicious of ALL definitions of intelligence which treat it as
 some kind of global thing.  But if you give me the definition that you are
 using in an argument, then I can at least attempt to understand what you
 are saying.
 
 
 Terren Suydam wrote:
  Charles,
 
  I'm not sure it's possible to nail down a measure of intelligence that's
  going to satisfy everyone. Presumably, it would be some measure of
  performance in problem solving across a wide variety of novel domains in
  complex (i.e. not toy) environments.
 
  Obviously among potential agents, some will do better in domain D1 than
  others, while doing worse in D2. But we're looking for an average across
  all domains. My task-specific examples may have confused the issue
  there, you were right to point that out.
 
  But if you give all agents identical processing power and storage space,
  then the winner will be the one that was able to assimilate and model
  each problem space the most efficiently, on average. Which ultimately
  means the one which used the *least* amount of overall computation.
 
  Terren
 
  --- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
 
  From: Charles Hixson [EMAIL PROTECTED]
  Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
  To: agi@v2.listbox.com
  Date: Tuesday, October 14, 2008, 2:12 PM
  If you want to argue this way (reasonable), then you need a specific
  definition of intelligence.  One that allows it to be accurately
  measured (and not just in principle).  IQ definitely won't serve.
  Neither will G.  Neither will GPA (if you're discussing a student).
 
  Because of this, while I think your argument is generally reasonable, I
  don't think it's useful.  Most of what you are discussing is task
  specific, and as such I'm not sure that "intelligence" is a reasonable
  term to use.  An expert engineer might be, e.g., a lousy bridge player.
  Yet both are thought of as requiring intelligence.  I would assert that
  in both cases a lot of what's being measured is task-specific
  processing, i.e., narrow AI

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Matt wrote, in reply to me:


  An AI twice as smart as any human could figure
  out how to use the resources at his disposal to
  help him create an AI 3 times as smart as any
  human.  These AI's will not be brains in vats.
  They will have resources at their disposal.

 It depends on what you mean by twice as smart. Do you mean twice as many
 brain cells? Twice as much memory? Twice as fast? Twice as much knowledge?
 Able to score 200 on an adult IQ test (if such a thing existed)?

 Unless you tell me otherwise, I have to assume that it means able to do
 what 2 people can do (or 3 or 10, the exact number isn't important). In
 that case, I have to argue it is the global brain that is creating the AI
 with a very tiny bit of help from the parent AI. You would get the same
 result by hiring more people.



Whatever ...

You are IMO just distracting attention from the main point, by making odd
definitions...

No, of course my colloquial phrase "twice as smart" does not mean "as smart
as two people put together."   That is not the accepted interpretation of
that colloquialism and you know it!

To make my statement clearer, one approach is to forget about quantifying
intelligence for the moment...

Let's talk about qualitative differences in intelligence.  Do you agree that
a dog is qualitatively much more intelligent than a roach, and a human is
qualitatively much more intelligent than a dog?

In this sense I could replace

 An AI twice as smart as any human could figure
 out how to use the resources at his disposal to
 help him create an AI 3 times as smart as any
 human.  These AI's will not be brains in vats.
 They will have resources at their disposal.

with


An AI that is qualitatively much smarter than any human could figure out
how to use the resources at its disposal to help it create an AI that is
qualitatively much smarter than it.

These AI's will not be brains in vats.
They will have resources at their disposal.


On the other hand, if you insist on mathematical definitions of
intelligence, we could talk about, say, the intelligence of a system as
the total prediction difficulty of the set S of sequences, with the
property that the system can predict S during a period of time of length
T.  We can define prediction difficulty as Shane Legg does in his PhD
thesis.  We can then average this over various time-lengths T, using some
appropriate weighting function.
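(Schematically, writing S_T(A) for the set of sequences the system A can
predict within time T, D(s) for Legg-style prediction difficulty, and w(T) for
the weighting over time-lengths, the definition above reads roughly as

    \mathrm{Int}(A) = \sum_{T} w(T) \sum_{s \in S_T(A)} D(s),

with these symbols introduced only to transcribe the verbal description.)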

(I'm not positing the above as an ideal definition of intelligence ...
just throwing one definition out there... my conceptual point is quite
independent of the specific definition of intelligence you choose)

Using this sort of definition, my statement is surely true, though it
would take work to prove it.

Using this sort of definition, a system A2 that is twice as smart as
system A1, if allowed to interact with an appropriate environment vastly
more complex than either of the systems, would surely be capable of
modifying itself into a system A3 that is twice as smart as A2.

This seems extremely obvious and I don't want to spend time right now
proving it formally.  No doubt writing out the proof would reveal various
mathematical conditions on the theorem statement...

-- Ben G





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Ben,
 If you want to argue that recursive self improvement is a special case of
 learning, then I have no disagreement with the rest of your argument.

 But is this really a useful approach to solving AGI? A group of humans can
 generally make better decisions (more accurate predictions) by voting than any
 member of the group can. Did these humans improve themselves?

 My point is that a single person can't create much of anything, much less an
 AI smarter than himself. If it happens, it will be created by an organization 
 of
 billions of humans. Without this organization, you would probably not think to
 create spears out of sticks and rocks.

 That is my problem with the seed AI approach. The seed AI depends on the
 knowledge and resources of the economy to do anything. An AI twice as smart
 as a human could not do any more than 2 people could. You need to create an
 AI that is billions of times smarter to get anywhere.

 We are already doing that. Human culture is improving itself by accumulating
 knowledge, by becoming better organized through communication and
 specialization, and by adding more babies and computers.



You are slipping from a strained interpretation of the technical
argument to the informal point that the argument was intended to
rationalize. If the interpretation of the technical argument is weaker than
the original informal argument it was invented to support, there is no
point in the technical argument. Using the fact that 2+2=4 won't give
technical support to, e.g., the philosophy of solipsism.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
2008/10/14 Terren Suydam [EMAIL PROTECTED]:


 --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 An AI that is twice as smart as a
 human can make no more progress than 2 humans.

 Spoken like someone who has never worked with engineers. A genius engineer 
 can outproduce 20 ordinary engineers in the same timeframe.

 Do you really believe the relationship between intelligence and output is 
 linear?

I'm going to use this post as a place to grind one of my axes; apologies, Terren.

The relationship between processing power and results is not
necessarily linear or even positively correlated. And an increase
in intelligence above a certain level requires increased processing
power (or perhaps not? anyone disagree?).

When the cost of adding more computational power outweighs the amount
of money or energy that you acquire from adding the power, there is
not much point adding the computational power.  Apart from if you are
in competition with other agents that can outsmart you. Some of the
traditional views of RSI neglect this and think that increased
intelligence is always a useful thing. It is not very

There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.

Too much processing power is a bad thing; it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed.

  Will Pearson




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam

Hi Will,

I think humans provide ample evidence that intelligence is not necessarily 
correlated with processing power. The genius engineer in my example solves a 
given problem with *much less* overall processing than the ordinary engineer, 
so in this case intelligence is correlated with some measure of cognitive 
efficiency (which I will leave undefined). Likewise, a grandmaster chess 
player looks at a given position and can calculate a better move in one second 
than you or me could come up with if we studied the board for an hour. 
Grandmasters often do publicity events where they play dozens of people 
simultaneously, spending just a few seconds on each board, and winning most of 
the games.

Of course, you were referring to intelligence above a certain level, but if 
that level is high above human intelligence, there isn't much we can assume 
about that since it is by definition unknowable by humans.

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
 The relationship between processing power and results is not
 necessarily linear or even positively correlated. And an increase
 in intelligence above a certain level requires increased processing
 power (or perhaps not? anyone disagree?).
 
 When the cost of adding more computational power outweighs the amount
 of money or energy that you acquire from adding the power, there is
 not much point adding the computational power.  Apart from if you are
 in competition with other agents that can outsmart you. Some of the
 traditional views of RSI neglect this and think that increased
 intelligence is always a useful thing. It is not very
 
 There is a reason why lots of the planet's biomass has stayed as
 bacteria. It does perfectly well like that. It survives.
 
 Too much processing power is a bad thing; it means less for
 self-preservation and affecting the world. Balancing them is a tricky
 proposition indeed.
 
   Will Pearson
 
 


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote:

 --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
  An AI that is twice as smart as a human can make no more progress than 2 humans.
 
 Spoken like someone who has never worked with engineers. A genius engineer
 can outproduce 20 ordinary engineers in the same timeframe.
 
 Do you really believe the relationship between intelligence and output is linear?

You are right, it is not, but that does not detract from my main point.

Two brains have twice as much storage capacity, processing power, and I/O as 
one brain. They have less than twice as much knowledge because some of it is 
shared. They can do less than twice as much work because the brain has a fixed 
rate of long term learning (2 bits per second), and a portion of that must be 
devoted to communicating with the other brain.

The intelligence of 2 brains is between 1 and 2 depending on the degree to 
which the intelligence test can be parallelized. The degree of parallelization 
is generally higher for humans than it is for dogs because humans can 
communicate more efficiently. Ants and bees communicate to some extent, so we 
observe that a colony is more intelligent (at finding food) than any individual.
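(One way to make "between 1 and 2" concrete, offered only as an illustration
and not as the intended model, is an Amdahl's-law-style estimate, with s the
non-parallelizable fraction of the test:

    \mathrm{effective}(n) = \frac{1}{s + (1 - s)/n} = \frac{n}{s(n - 1) + 1},

which for n = 2 runs from 1 when s = 1, fully serial, to 2 when s = 0, fully
parallel.)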

I have said many times that humans cannot test for higher-than-human 
intelligence. Here is a proof. We know from experiments that groups of humans 
make better predictions (by voting) than individuals. However, if individuals 
recognized that the group was smarter, then they would never disagree with it. 
But if they never disagreed, then the group would not be smarter.
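(A toy illustration of the first premise, that majority voting beats individual
voters: assume, hypothetically, 101 independent voters who are each right 60%
of the time. The numbers and the independence assumption are illustrative, not
a model of any real group.)

import random

def majority_accuracy(n_voters=101, p_correct=0.6, trials=10000):
    # Condorcet-style toy model: each voter is independently right with
    # probability p_correct; return how often the majority vote is right.
    wins = 0
    for _ in range(trials):
        votes = sum(random.random() < p_correct for _ in range(n_voters))
        if votes > n_voters // 2:
            wins += 1
    return wins / trials

print(majority_accuracy())  # roughly 0.98, versus 0.6 for any single voter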

With regard to RSI, we now have a global economy of 10^10 brains, which I 
estimate is about 10^8 times smarter (and growing) than any individual. It is 
less than 10^10 because of less than optimal organization. I estimate the 
inefficiency based on the cost of replacing an employee in lost productivity. 
So even an AGI that is 1000 times smarter than a human would only have the 
impact of adding a few thousand more people, whether you measure intelligence 
by instructions per second, memory, I/O bandwidth, or bits of knowledge.

I think Ben could see just how much his team depends on the (unrecognized) 
intelligence of the global brain if they imagined going back 100 years in time 
and asked how much progress they would be making toward AGI then.

-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam


--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 An AI that is twice as smart as a
 human can make no more progress than 2 humans. 

Spoken like someone who has never worked with engineers. A genius engineer can 
outproduce 20 ordinary engineers in the same timeframe. 

Do you really believe the relationship between intelligence and output is 
linear?

Terren


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam

Matt,

Your measure of intelligence seems to be based on not much more than storage 
capacity, processing power, I/O, and accumulated knowledge. This has the 
advantage of being easily formalizable, but has the disadvantage of missing a 
necessary aspect of intelligence.

I have yet to see from you any acknowledgment that cognitive architecture is at 
all important to realized intelligence. Even your global brain requires an 
explanation of how cognition actually happens at each of the nodes, be they 
humans or AI. 

Cognitive architecture (whatever form that takes) determines the efficiency of 
an intelligence given more external constraints like processing power etc.  I 
assume that it is this aspect that is the primary target of significant 
(disruptive) improvement in RSI schemes.

Terren

--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 Two brains have twice as much storage capacity, processing
 power, and I/O as one brain. They have less than twice as
 much knowledge because some of it is shared. They can do
 less than twice as much work because the brain has a fixed
 rate of long term learning (2 bits per second), and a
 portion of that must be devoted to communicating with the
 other brain.
 
 The intelligence of 2 brains is between 1 and 2 depending
 on the degree to which the intelligence test can be
 parallelized. The degree of parallelization is generally
 higher for humans than it is for dogs because humans can
 communicate more efficiently. Ants and bees communicate to
 some extent, so we observe that a colony is more intelligent
 (at finding food) than any individual.



  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
Hi Terren,

 I think humans provide ample evidence that intelligence is not necessarily 
 correlated with processing power. The genius engineer in my example solves a 
 given problem with *much less* overall processing than the ordinary engineer, 
 so in this case intelligence is correlated with some measure of cognitive 
 efficiency (which I will leave undefined). Likewise, a grandmaster chess 
 player looks at a given position and can calculate a better move in one 
 second than you or me could come up with if we studied the board for an hour. 
 Grandmasters often do publicity events where they play dozens of people 
 simultaneously, spending just a few seconds on each board, and winning most 
 of the games.


What I meant was that at processing power/memory Z, there is a maximum
problem-solving ability Y. To increase the problem-solving ability above Y
you would have to increase processing power/memory. That is when cognitive
efficiency reaches one, in your terminology. Efficiency is normally
measured in ratios, so that seems natural.

There are things you can't model within limits of processing
power/memory, which restricts your ability to solve them.

 Of course, you were referring to intelligence above a certain level, but if 
 that level is high above human intelligence, there isn't much we can assume 
 about that since it is by definition unknowable by humans.


Not quite what I meant.

  Will




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Here is how I see this exchange...

 You proposed a so-called *mathematical* debunking of RSI.
 
 I presented some detailed arguments against this so-called debunking,
 pointing out that its mathematical assumptions and its quantification of
 improvement bear little relevance to real-world AI now or in the future.

I can only disprove a mathematical argument. I think I have disproved RSI based 
on a model of self introspection without input. If you want to allow input, 
then you need to make a clear distinction between self improvement and learning.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Charles Hixson
If you want to argue this way (reasonable), then you need a specific 
definition of intelligence.  One that allows it to be accurately 
measured (and not just in principle).  IQ definitely won't serve.  
Neither will G.  Neither will GPA (if you're discussing a student).


Because of this, while I think your argument is generally reasonable, I 
don't think it's useful.  Most of what you are discussing is task 
specific, and as such I'm not sure that "intelligence" is a reasonable 
term to use.  An expert engineer might be, e.g., a lousy bridge player.  
Yet both are thought of as requiring intelligence.  I would assert that 
in both cases a lot of what's being measured is task-specific 
processing, i.e., narrow AI. 

(Of course, I also believe that an AGI is impossible in the true sense 
of "general", and that an approximate AGI will largely act as a 
coordinator between a bunch of narrow AI pieces of varying generality.  
This seems to be a distinctly minority view.)


Terren Suydam wrote:

Hi Will,

I think humans provide ample evidence that intelligence is not necessarily correlated 
with processing power. The genius engineer in my example solves a given problem with 
*much less* overall processing than the ordinary engineer, so in this case intelligence 
is correlated with some measure of cognitive efficiency (which I will leave 
undefined). Likewise, a grandmaster chess player looks at a given position and can 
calculate a better move in one second than you or me could come up with if we studied the 
board for an hour. Grandmasters often do publicity events where they play dozens of 
people simultaneously, spending just a few seconds on each board, and winning most of the 
games.

Of course, you were referring to intelligence above a certain level, but if 
that level is high above human intelligence, there isn't much we can assume about that 
since it is by definition unknowable by humans.

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
  

The relationship between processing power and results is not
necessarily linear or even positively correlated. And an increase
in intelligence above a certain level requires increased processing
power (or perhaps not? anyone disagree?).

When the cost of adding more computational power outweighs the amount
of money or energy that you acquire from adding the power, there is
not much point adding the computational power.  Apart from if you are
in competition with other agents that can outsmart you. Some of the
traditional views of RSI neglect this and think that increased
intelligence is always a useful thing. It is not very

There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.

Too much processing power is a bad thing; it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed.

  Will Pearson




  






Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Eric Burton
An AI that is twice as smart as a human can make no more progress than 2
humans.

Actually I'll argue that we can't make predictions about what a
greater-than-human intelligence would do. Maybe the summed
intelligence of 2 humans would be sufficient to do the work of a
dozen. Maybe




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Charles,

I'm not sure it's possible to nail down a measure of intelligence that's going 
to satisfy everyone. Presumably, it would be some measure of performance in 
problem solving across a wide variety of novel domains in complex (i.e. not 
toy) environments.

Obviously among potential agents, some will do better in domain D1 than others, 
while doing worse in D2. But we're looking for an average across all domains. 
My task-specific examples may have confused the issue there; you were right to 
point that out.

But if you give all agents identical processing power and storage space, then 
the winner will be the one that was able to assimilate and model each problem 
space the most efficiently, on average. Which ultimately means the one which 
used the *least* amount of overall computation.

Terren

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

 From: Charles Hixson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Tuesday, October 14, 2008, 2:12 PM
 If you want to argue this way (reasonable), then you need a
 specific 
 definition of intelligence.  One that allows it to be
 accurately 
 measured (and not just in principle).  IQ
 definitely won't serve.  
 Neither will G.  Neither will GPA (if you're discussing
 a student).
 
 Because of this, while I think your argument is generally
 reasonable, I 
 don't think it's useful.  Most of what you are
 discussing is task 
 specific, and as such I'm not sure that
 intelligence is a reasonable 
 term to use.  An expert engineer might be, e.g., a lousy
 bridge player.  
 Yet both are thought of as requiring intelligence.  I would
 assert that 
 in both cases a lot of what's being measured is task
 specific 
 processing, i.e., narrow AI. 
 
 (Of course, I also believe that an AGI is impossible in the
 true sense 
 of general, and that an approximate AGI will largely act
 as a 
 coordinator between a bunch of narrow AI pieces of varying
 generality.  
 This seems to be a distinctly minority view.)
 
 Terren Suydam wrote:
  Hi Will,
 
  I think humans provide ample evidence that
 intelligence is not necessarily correlated with processing
 power. The genius engineer in my example solves a given
 problem with *much less* overall processing than the
 ordinary engineer, so in this case intelligence is
 correlated with some measure of cognitive
 efficiency (which I will leave undefined). Likewise, a
 grandmaster chess player looks at a given position and can
 calculate a better move in one second than you or I could
 come up with if we studied the board for an hour.
 Grandmasters often do publicity events where they play
 dozens of people simultaneously, spending just a few seconds
 on each board, and winning most of the games.
 
  Of course, you were referring to intelligence
 above a certain level, but if that level is high
 above human intelligence, there isn't much we can assume
 about that since it is by definition unknowable by humans.
 
  Terren
 
  --- On Tue, 10/14/08, William Pearson
 [EMAIL PROTECTED] wrote:

  The relationship between processing power and results is not necessarily
  linear or even positively correlated. And an increase in intelligence
  above a certain level requires increased processing power (or perhaps
  not? anyone disagree?).
 
  When the cost of adding more computational power outweighs the amount of
  money or energy that you acquire from adding the power, there is not much
  point adding the computational power, apart from if you are in
  competition with other agents that can outsmart you. Some of the
  traditional views of RSI neglect this and think that increased
  intelligence is always a useful thing. It is not very
 
  There is a reason why lots of the planet's biomass has stayed as
  bacteria. It does perfectly well like that. It survives.
 
  Too much processing power is a bad thing; it means less for
  self-preservation and affecting the world. Balancing them is a tricky
  proposition indeed.
 
    Will Pearson
 
  
 

 
 
 


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
 There are things you can't model with limits of processing power/memory,
 which restrict your ability to solve them.

Processing power, storage capacity, and so forth, are all important in the 
realization of an AI but I don't see how they limit your ability to model or 
solve problems except in terms of performance... i.e. can a problem be solved 
within time T. Those are factors outside of the black box of intelligence. 

Cognitive architecture is the guts of the black box. Any attempt to create AGI 
cannot be taken seriously if it doesn't explain what intelligence does, inside 
the black box, whether you're talking about an individual agent or a globally 
distributed one.

(By the way, it's worth noting that problem solving ability Y is uncomputable 
since it's basically just a twist on Kolmogorov Complexity. Which is to say, 
you can never prove that you have the perfect (un-improvable) cognitive 
architecture given finite resources.)

With toy problems like chess, increasing computing power can compensate for 
what amounts to a wildly inefficient cognitive architecture. In the real world 
of AGI, you have to work on efficiency first because the complexity is just too 
high to manage. So while you can get linear improvement on Y by increasing 
out-of-the-black-box factors, it's inside the box you get the non-linear, 
punctuated gains that are in all likelihood necessary to create AGI.

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:

 From: William Pearson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Tuesday, October 14, 2008, 1:13 PM
 Hi Terren,
 
  I think humans provide ample evidence that
 intelligence is not necessarily correlated with processing
 power. The genius engineer in my example solves a given
 problem with *much less* overall processing than the
 ordinary engineer, so in this case intelligence is
 correlated with some measure of cognitive
 efficiency (which I will leave undefined). Likewise, a
 grandmaster chess player looks at a given position and can
 calculate a better move in one second than you or I could
 come up with if we studied the board for an hour.
 Grandmasters often do publicity events where they play
 dozens of people simultaneously, spending just a few seconds
 on each board, and winning most of the games.
 
 
 What I meant was that at processing power/memory Z, there is a
 problem-solving ability Y which is the maximum. To increase the
 problem-solving ability above Y you would have to increase processing
 power/memory. That is when cognitive efficiency reaches one, in your
 terminology. Efficiency is normally measured in ratios, so that seems
 natural.
 
 There are things you can't model with limits of processing power/memory,
 which restrict your ability to solve them.
 
  Of course, you were referring to intelligence
 above a certain level, but if that level is high
 above human intelligence, there isn't much we can assume
 about that since it is by definition unknowable by humans.
 
 
 Not quite what I meant.
 
   Will
 
 


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread BillK
On Tue, Oct 14, 2008 at 2:41 PM, Matt Mahoney wrote:
 But no matter. Whichever definition you accept, RSI is not a viable path to 
 AGI. An AI that is twice as smart as a
 human can make no more progress than 2 humans.


I can't say I've noticed two dogs being smarter than one dog.
Admittedly, a pack of dogs can hunt better, but they are not 'smarter'.
Numbers just increase capabilities.

Two humans can lift a heavier object than one human, but they are not
twice as smart.

As Ben says, I don't see a necessary connection between RSI and 'smarts'.
It's a technique applicable from very basic levels.


BillK




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Ben,
  If you want to argue that recursive self improvement
  is a special case of
  learning, then I have no disagreement with the rest of
  your argument.
 
  You are slipping from a strained interpretation of the technical
  argument to the informal point that the argument was intended to
  rationalize. If the interpretation of the technical argument is weaker than
  the original informal argument it was invented to support,
  there is no point in the technical argument. Using the fact that 2+2=4
  won't give technical support to, e.g., the philosophy of solipsism.

I did not say that I agree with Ben's definition of RSI to include learning.

But no matter. Whichever definition you accept, RSI is not a viable path to 
AGI. An AI that is twice as smart as a human can make no more progress than 2 
humans. You don't have automatic self improvement until you have AI that is 
billions of times smarter. A team of a few people isn't going to build that. 
The cost of training such a system with 10^17 to 10^18 bits of useful knowledge 
is in the quadrillions of dollars, even if the hardware is free and the problem 
of brain emulation is solved. Until then, you have manual self improvement.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Ben Goertzel
Matt,


 But no matter. Whichever definition you accept, RSI is not a viable path to
 AGI. An AI that is twice as smart as a human can make no more progress than
 2 humans. You don't have automatic self improvement until you have AI that
 is billions of times smarter. A team of a few people isn't going to build
 that. The cost of training such a system with 10^17 to 10^18 bits of useful
 knowledge is in the quadrillions of dollars, even if the hardware is free
 and the problem of brain emulation is solved. Until then, you have manual
 self improvement.


Here is how I see this exchange...

You proposed a so-called *mathematical* debunking of RSI.

I presented some detailed arguments against this so-called debunking,
pointing out that its mathematical assumptions and its quantification of
improvement bear little relevance to real-world AI now or in the future.

You then responded by ignoring my detailed arguments, and retreating into
informal, nonmathematical generalizations ... and furthermore, ones that
don't seem to make much sense to me (or others on this list, if the
responses are indicative...)

I don't know what you mean by twice as smart but I'm sure I can make more
than twice as much progress at science and engineering as someone with half
my IQ ;-p ... my IQ is around 180 whereas someone with an IQ of 90 couldn't
even understand this email let alone design an AGI or a machine learning
algorithm, etc. ... they probably couldn't even do my taxes for me ;-p

It is not clear why you think an AGI needs to be billions of times smarter
than a human to undergo dramatic RSI.  It might not need to be *any* smarter
than a smart human ... maybe an AGI with the same IQ as a smart human but an
underlying architecture built with RSI in mind, could be able to rapidly
self-improve.  In fact I strongly suspect this is the case, though I can't
prove it ... and nor can you disprove it, without making unrealistic
assumptions that render your disproof irrelevant!!

-- Ben G





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Mike Tintner

Will: There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.
Too much processing power is a bad thing; it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed.

Interesting thought. But do you (or anyone else) have any further thoughts 
about what the proper balance between brain and body is, relative to a given 
set of functions/behaviours, or how it is determined or adjusted? (Obviously 
a very difficult question.)







Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote:

 Matt,
 
 Your measure of intelligence seems to be based on not much
 more than storage capacity, processing power, I/O, and
 accumulated knowledge. This has the advantage of being
 easily formalizable, but has the disadvantage of missing a
 necessary aspect of intelligence.

Usually when I say intelligence I mean amount of knowledge, which can be 
measured in bits. (Well not really, since Kolmogorov complexity is not 
computable). The other measures reduce to it. Increasing memory allows more 
knowledge to be stored. Increasing processing power and I/O bandwidth allows 
faster learning, or more knowledge accumulation over the same time period.

Actually, amount of knowledge is just an upper bound. A random string has high 
algorithmic complexity but is not intelligent in any meaningful sense. My 
justification for this measure is based on the AIXI model. In order for an 
agent to guess an environment with algorithmic complexity K, the agent must be 
able to simulate the environment, so it must also have algorithmic complexity 
K. An agent with higher complexity can guess a superset of environments that a 
lower complexity agent could, and therefore cannot do worse in accumulated 
reward.

 I have yet to see from you any acknowledgment that
 cognitive architecture is at all important to realized
 intelligence. Even your global brain requires an explanation
 of how cognition actually happens at each of the nodes, be
 they humans or AI.

Cognitive architecture is not relevant to Legg and Hutter's universal 
intelligence (expected reward in random AIXI environments). It is only 
important for specific subsets of possible goals, like the ones that are 
important to us. If you define intelligence by the Turing test, then obviously 
the cognitive architecture should model a human brain.
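For reference (standard notation, not from this thread), Legg and Hutter's universal intelligence of an agent \pi is the complexity-weighted expected reward over all computable environments:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the class of computable, reward-bounded environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward of \pi in \mu. No term in the definition refers to the agent's internal architecture, which is the sense in which architecture is irrelevant to that measure.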

In my global brain model, nodes trade messages when the receivers can compress 
them smaller than the senders, achieving distributed data compression. In 
general, compression is not computable regardless of architecture. In practice 
the messages are natural language text, so the architecture is important. It 
will probably be a neural language model.
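A minimal sketch of that trade rule, with zlib standing in for the much stronger text compressor the model assumes (the peer databases and names below are invented for illustration):

    import zlib

    def cost_to_store(message: str, database: str) -> int:
        """Approximate K(message | database): extra compressed bytes needed
        to encode the message once the peer's existing data is accounted for."""
        return (len(zlib.compress((database + message).encode()))
                - len(zlib.compress(database.encode())))

    def should_trade(message: str, sender_db: str, receiver_db: str) -> bool:
        """Trade is mutually beneficial when the receiver can compress
        (store) the message more tightly than the sender can."""
        return cost_to_store(message, receiver_db) < cost_to_store(message, sender_db)

    # Toy databases; with strings this short the zlib overhead dominates,
    # so treat the output as illustrative only.
    sender_db = "bond yields interest rates inflation equities"
    receiver_db = "protein folding enzyme kinetics gene expression"
    print(should_trade("new protein folding results", sender_db, receiver_db))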

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
I updated my AGI proposal from a few days ago.
http://www.mattmahoney.net/agi2.html

There are two major changes. First I clarified the routing strategy and 
justified it on an information theoretic basis. An organization is optimally 
efficient when its members specialize with no duplication of knowledge or 
skills. To achieve this, we use a market economy to trade messages where 
information has negative value. It is mutually beneficial for peers to trade 
messages when the receivers can compress them more tightly than the senders. 
This results in convergence to an optimal mapping of peers to clusters of data 
in semantic space.

The routing strategy is for a peer to use cached messages from its neighbors as 
estimates of each neighbor's database. For a message X and each neighbor j, it 
computes the distance D(X,Y_j), where Y_j is a concatenation of cached messages 
from peer j. It then routes X to the neighbor j that minimizes D(X,Y_j), because 
it estimates that this j can store X most efficiently. Routing stops when j is 
itself.

The distance function is non-mutual information: D(X,Y) = K(X|Y) + K(Y|X) where 
K is Kolmogorov complexity, the size of the shortest program that can output X 
or Y given the other message as input. When I wrote my thesis, I assumed a 
vector space language model, but I just now realized that D is a distance measure, 
compatible with Euclidean distance in the vector space model. K is not 
computable, but we can approximate K using the output size of a text 
compressor. The economic model rewards good compression algorithms.
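A sketch of how a peer might approximate D and pick a route, with zlib as a crude stand-in for a real compressor; the peer names and cached messages are invented for illustration:

    import zlib

    def C(s: str) -> int:
        """Compressed size in bytes, a rough upper bound on K(s)."""
        return len(zlib.compress(s.encode()))

    def K_cond(x: str, y: str) -> int:
        """Stand-in for K(x|y): extra bytes to encode x after y is known."""
        return max(C(y + x) - C(y), 0)

    def D(x: str, y: str) -> int:
        """Non-mutual information distance D(X,Y) = K(X|Y) + K(Y|X), approximated."""
        return K_cond(x, y) + K_cond(y, x)

    def route(message: str, neighbor_cache: dict) -> str:
        """Send the message to the neighbor j whose cached messages Y_j
        minimize D(message, Y_j); the caller stops when the winner is itself."""
        return min(neighbor_cache, key=lambda j: D(message, neighbor_cache[j]))

    neighbor_cache = {
        "peer_finance": "bond yields fell as the central bank cut interest rates",
        "peer_biology": "the enzyme assay showed increased protein expression levels",
    }
    print(route("protein folding results from the latest enzyme assay", neighbor_cache))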

The second change is a new section (5) addressing long term safety. I think I 
have debunked RSI, proving that the friendly seed AI approach could not work 
even in theory. This leaves an evolutionary improvement model in which peers 
compete for resources in a hostile environment. The other risks I have 
identified are competition from uploads with property rights, intelligent 
worms, and a singularity that redefines humanity making the question of human 
extinction moot. I don't have good solutions to these risks. I did not mention 
all possible risks, e.g. gray goo.

To answer Mike Tintner's remark, yes, $1 quadrillion is expensive, but I think 
that AGI will pay for itself many times over. It won't address the basic 
instability and unpredictability of speculative investment markets. It will 
probably make matters worse by enabling nonstop automated trading and waves of 
panic selling traveling at the speed of light.

As before, comments are welcome.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
I was eager to debunk your supposed debunking of recursive self-improvement,
but I found that when I tried to open that PDF file, it looked like a bunch
of gibberish (random control characters) in my PDF reader (Preview on OSX
Leopard)

ben g

On Mon, Oct 13, 2008 at 12:19 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 I updated my AGI proposal from a few days ago.
 http://www.mattmahoney.net/agi2.html

 There are two major changes. First I clarified the routing strategy and
 justified it on an information theoretic basis. An organization is optimally
 efficient when its members specialize with no duplication of knowledge or
 skills. To achieve this, we use a market economy to trade messages where
 information has negative value. It is mutually beneficial for peers to trade
 messages when the receivers can compress them more tightly than the senders.
 This results in convergence to an optimal mapping of peers to clusters of
 data in semantic space.

 The routing strategy is for a peer to use cached messages from its
 neighbors as estimates of the neighbor's database. For a message X and each
 neighbor j, it computes the distance D(X,Y_j) where Y_j is a concatenation
 of cached messages from peer j. Then it routes X to j because it estimates
 that j can store X most efficiently. Routing stops when j is itself.

 The distance function is non-mutual information: D(X,Y) = K(X|Y) + K(Y|X)
 where K is Kolmogorov complexity, the size of the shortest program that can
 output X or Y given the other message as input. When I wrote my thesis, I
 assumed a vector space language model, but I just now realized that D is a
 measure, compatible with Euclidean distance in the vector space model. K is
 not computable, but we can approximate K using the output size of a text
 compressor. The economic model rewards good compression algorithms.

 The second change is a new section (5) addressing long term safety. I think
 I have debunked RSI, proving that the friendly seed AI approach could not
 work even in theory. This leaves an evolutionary improvement model in which
 peers compete for resources in a hostile environment. The other risks I have
 identified are competition from uploads with property rights, intelligent
 worms, and a singularity that redefines humanity making the question of
 human extinction moot. I don't have good solutions to these risks. I did not
 mention all possible risks, e.g. gray goo.

 To answer Mike Tintner's remark, yes, $1 quadrillion is expensive, but I
 think that AGI will pay for itself many times over. It won't address the
 basic instability and unpredictability of speculative investment markets. It
 will probably make matters worse by enabling nonstop automated trading and
 waves of panic selling traveling at the speed of light.

 As before, comments are welcome.

 -- Matt Mahoney, [EMAIL PROTECTED]








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 I was eager to debunk your supposed debunking of recursive self-improvement, 
 but I found that when I tried to open that PDF file, it looked like a bunch 
 of gibberish (random control characters) in my PDF reader (Preview on OSX 
 Leopard)

That's odd. Maybe you should run Windows :-(

Anyway I posted an HTML version. Not sure why PDF wouldn't work. I created both 
in OpenOffice.

http://www.mattmahoney.net/rsi.pdf
http://www.mattmahoney.net/rsi.html

Anyone else have trouble reading the PDF version?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Abram Demski
I can read the pdf just fine. I am also using mac's Preview program.
So it is not that...

--Abram

On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 I was eager to debunk your supposed debunking of recursive self-improvement, 
 but I found that when I tried to open that PDF file, it looked like a bunch 
 of gibberish (random control characters) in my PDF reader (Preview on OSX 
 Leopard)

 That's odd. Maybe you should run Windows :-(

 Anyway I posted an HTML version. Not sure why PDF wouldn't work. I created 
 both in OpenOffice.

 http://www.mattmahoney.net/rsi.pdf
 http://www.mattmahoney.net/rsi.html

 Anyone else have trouble reading the PDF version?

 -- Matt Mahoney, [EMAIL PROTECTED]








Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Eric Burton
 On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 That's odd. Maybe you should run Windows :-(

No. You should not run Windows




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
Hi,

OK, I read the supposed refutation of recursive self-improvement at

http://www.mattmahoney.net/rsi.html

There are at least three extremely major problems with the argument.

1)
By looking only at algorithmic information (defined in terms of program
length) and ignoring runtime complexity, you are ignoring much of the value
to be achieved via RSI.

Suppose program P1 can solve problems of class C and size 500 in 3 hours per
problem.   Then, suppose P1 spends 50 hours transforming itself into a new
program, P2, that can solve problems of class C and size 500 in one second
per problem.

Furthermore, suppose the RAM available in the machine at hand cannot hold
both P1 and P2 at the same time.

In this case, it's obvious there's a huge advantage involved in P1 replacing
itself with P2 ... if solving problems of class C is important for P1
achieving its goals, and if P2 is oriented toward achieving the same goal.

Your argument is blind to this advantage because it ignores runtime
complexity.  Your argument is fixated on the fact that P2 can be generated
by information consisting of {P1 plus the data P1 has observed} ... but so
what?   Program length is not, in itself, all that useful a thing to be looking
at in the context of real-world computing.  We need to be thinking about
both space and time complexity.
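To make the point concrete with a toy example of my own choosing (not Ben's): the two programs below have almost identical lengths, so a program-length measure barely distinguishes them, yet the second runs exponentially faster.

    import functools, time

    def fib_slow(n: int) -> int:
        # Exponential-time recursion: tiny source, terrible runtime.
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    @functools.lru_cache(maxsize=None)
    def fib_fast(n: int) -> int:
        # One added line (memoization) barely changes program length,
        # but collapses the runtime from exponential to linear.
        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

    for f in (fib_slow, fib_fast):
        t0 = time.perf_counter()
        f(32)
        print(f.__name__, round(time.perf_counter() - t0, 4), "seconds")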

2)
You don't consider the program as interacting with an environment.  IMO you
should be using the mathematical setup that Hutter uses in his main theorems
about AIXI and AIXItl.  In this setup, the AI is an agent that takes actions
in an environment, which then responds to its actions.

Furthermore, you should enhance Hutter's setup to consider the case where
the agent has not only fixed RAM (together potentially with a larger amount
of memory that is slower to access), but also a processing cycle that is
defined in terms of the cycle time of the environment, so that it only
gets N internal processing cycles per opportunity to sense/act.

Considering the argument in this kind of more realistic setting, the
critical importance of runtime as I noted above would immediately become
apparent.
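A toy version of that bounded setting (my own construction, not Hutter's formalism): the agent gets only n_cycles of internal computation per sense/act turn, and more cycles translate directly into more reward.

    import itertools

    def run_episode(agent_step, env_step, n_cycles: int, horizon: int) -> float:
        """Interactive loop: observe, deliberate within a cycle budget, act."""
        obs, reward, total = 0, 0.0, 0.0
        for _ in range(horizon):
            action = agent_step(obs, reward, n_cycles)
            obs, reward = env_step(action)
            total += reward
        return total

    def agent_step(obs, reward, n_cycles):
        # Spend the cycle budget evaluating candidate actions; a bigger
        # budget searches more of the action space before committing.
        best, best_score = 0, float("-inf")
        for action in itertools.islice(itertools.cycle(range(10)), n_cycles):
            score = -abs(action - obs)  # pretend evaluation of the action
            if score > best_score:
                best, best_score = action, score
        return best

    def env_step(action):
        target = 7
        return target, (1.0 if action == target else 0.0)  # observation reveals the target

    print(run_episode(agent_step, env_step, n_cycles=3, horizon=20))   # starved of cycles
    print(run_episode(agent_step, env_step, n_cycles=20, horizon=20))  # enough cycles to find 7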

3)
You don't consider that a smarter program might be able to figure out ways
to increase its processor speed or RAM capacity, thus breaking your
theoretical assumptions altogether.  In this case, P2 could have an
arbitrarily larger algorithmic information than P1, contradicting your
result (by using a different, more realistic assumption).

...

In short, what you have shown is that, according to an uninteresting measure
(algorithmic information), RSI is not very dramatically useful in an artificial
situation (no environment, no restrictions on processor cycle consumption, no
ability for intelligence to lead to hardware modification).


-- Ben G


p.s. I read many PDF files each day using the same OS and viewer, and have
never before seen the kind of problem I did with your pdf file.  But I don't
know what the source of the problem was.  Anyway I read the HTML file just
fine, thanks!


On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  I was eager to debunk your supposed debunking of recursive
 self-improvement, but I found that when I tried to open that PDF file, it
 looked like a bunch of gibberish (random control characters) in my PDF
 reader (Preview on OSX Leopard)

 That's odd. Maybe you should run Windows :-(

 Anyway I posted an HTML version. Not sure why PDF wouldn't work. I created
 both in OpenOffice.

 http://www.mattmahoney.net/rsi.pdf
 http://www.mattmahoney.net/rsi.html

 Anyone else have trouble reading the PDF version?

 -- Matt Mahoney, [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
Ben,
Thanks for the comments on my RSI paper. To address your comments,

1. I defined improvement as achieving the same goal (utility) in less time or 
achieving greater utility in the same time. I don't understand your objection 
that I am ignoring run time complexity.
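One way to write that definition down (my notation, not the paper's): letting U_P(t) be the utility program P has accumulated by time t, and T_P(u) the time P needs to reach utility level u, P' improves on P if

    T_{P'}(u) < T_P(u) \ \text{for the same utility level } u,
    \qquad\text{or}\qquad
    U_{P'}(t) > U_P(t) \ \text{for the same time budget } t.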

2. I agree that an AIXI type interactive environment is a more appropriate 
model than a Turing machine receiving all of its input at the beginning. The 
problem is how to formally define improvement in a way that distinguishes it 
from learning. I am open to suggestions.

To see why this is a problem, consider an agent that after a long time, guesses 
the environment's program and is able to achieve maximum reward from that point 
forward. The agent could improve itself by hard-coding the environment's 
program into its successor and thereby achieve maximum reward right from the 
beginning.

3. A computer's processor speed and memory have no effect on the algorithmic 
complexity of a program running on it.


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
To: agi@v2.listbox.com
Date: Monday, October 13, 2008, 8:33 PM


Hi,

OK, I read the supposed refutation of recursive self-improvement at

http://www.mattmahoney.net/rsi.html

There are at least three extremely major problems with the argument.  


1)
By looking only at algorithmic information (defined in terms of program length) 
and ignoring runtime complexity, you are ignoring much of the value to be 
achieved via RSI.

Suppose program P1 can solve problems of class C and size 500 in 3 hours per 
problem.   Then, suppose P1 spends 50 hours transforming itself into a new 
 program, P2, that can solve problems of class C and size 500 in one second per 
problem.


Furthermore, suppose the RAM available in the machine at hand cannot hold 
 both P1 and P2 at the same time.

In this case, it's obvious there's a huge advantage involved in P1 replacing 
 itself with P2 ... if solving problems of class C is important for P1 achieving 
its goals, and if P2 is oriented toward achieving the same goal.


Your argument is blind to this advantage because it ignores runtime 
complexity.  Your argument is fixated on the fact that P2 can be generated by 
 information consisting of {P1 plus the data P1 has observed} ... but so what?   
 Program length is not, in itself, all that useful a thing to be looking at in the 
context of real-world computing.  We need to be thinking about both space and 
time complexity.


2)
You don't consider the program as interacting with an environment.  IMO you 
 should be using the mathematical setup that Hutter uses in his main theorems 
about AIXI and AIXItl.  In this setup, the AI is an agent that takes actions in 
an environment, which then responds to its actions.  


Furthermore, you should enhance Hutter's setup to consider the case where the 
 agent has not only fixed RAM (together potentially with a larger amount of 
 memory that is slower to access), but also a processing cycle that is defined 
 in terms of the cycle time of the environment, so that it only gets N 
 internal processing cycles per opportunity to sense/act.


Considering the argument in this kind of more realistic setting, the critical 
importance of runtime as I noted above would immediately become apparent.

3)

You don't consider that a smarter program might be able to figure out ways to 
increase its processor speed or RAM capacity, thus breaking your theoretical 
assumptions altogether.  In this case, P2 could have an arbitrarily larger 
algorithmic information than P1, contradicting your result (by using a 
different, more realistic assumption).


...

In short, what you have shown is that, according to an uninteresting measure 
(algorithmic information), RSI is not very dramatically useful in an artificial 
situation (no environment, no restrictions on processor cycle consumption, no 
ability for intelligence to lead to hardware modification).



-- Ben G






Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Ben,
 Thanks for the comments on my RSI paper. To address your comments,



You seem to be addressing minor lacunae in my wording, while ignoring my
main conceptual and mathematical point!!!




 1. I defined improvement as achieving the same goal (utility) in less
 time or achieving greater utility in the same time. I don't understand your
 objection that I am ignoring run time complexity.


OK, you are not ignoring run time completely ... BUT ... in your
measurement of the benefit achieved by RSI, you're not measuring the amount
of run-time improvement achieved, you're only  measuring algorithmic
information.

What matters in practice is, largely, the amount of run-time improvement
achieved.   This is the point I made in the details of my reply -- which you
have not counter-replied to.

I contend that, in my specific example, program P2 is a *huge* improvement
over P1, in a way that is extremely important to practical AGI yet is not
captured by your algorithmic-information-theoretic measurement method.  What
is your specific response to my example??



 2. I agree that an AIXI type interactive environment is a more appropriate
 model than a Turing machine receiving all of its input at the beginning. The
 problem is how to formally define improvement in a way that distinguishes it
 from learning. I am open to suggestions.

 To see why this is a problem, consider an agent that after a long time,
 guesses the environment's program and is able to achieve maximum reward from
 that point forward. The agent could improve itself by hard-coding the
 environment's program into its successor and thereby achieve maximum reward
 right from the beginning.


Recursive self-improvement **is** a special case of learning; you can't
completely distinguish them.



 3. A computer's processor speed and memory have no effect on the
 algorithmic complexity of a program running on it.


Yes, I can see I didn't phrase that point properly, sorry.  I typed that
prior email too hastily as I'm trying to get some work done ;-)

The point I *wanted* to make in my third point, was that if you take a
program with algorithmic information K, and give it the ability to modify
its own hardware, then it can achieve algorithmic information M > K.

However, it is certainly true that this can happen even without the program
modifying its own hardware -- especially if you make fanciful assumptions
like Turing machines with huge tapes ... but even without such fanciful
assumptions.

The key point, which I did not articulate properly in my prior message, is
that: ** by engaging with the world, the program can intake new information,
which can increase its algorithmic information **

The new information a program P1 takes in from the **external world** may be
random with regard to P1, yet may not be random with regard to {P1 + the new
information taken in}.

As self-modification may cause the intake of new information causing
algorithmic information to increase arbitrarily much, your argument does not
hold in the case of a program interacting with a world that has much higher
algorithmic information than it does.
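Stated with the standard chain rule for prefix Kolmogorov complexity (my restatement of the point, not Ben's wording): if the successor P2 is built from P1 together with external data D, then

    K(P_2) \;\le\; K(P_1, D) + O(1) \;=\; K(P_1) + K(D \mid P_1) + O(\log),

and K(D | P_1) is unbounded when D comes from an environment of much greater algorithmic information than P_1, so K(P_2) can exceed K(P_1) by an arbitrarily large amount (e.g., by simply incorporating D).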

And this of course is exactly the situation people are in.

For instance, a program may learn that "In the past, on 10 occasions, I have
taken in information from Bob that was vastly beyond my algorithmic
information content at that time.  In each case this process helped me to
achieve my goals, though in ways I would not have been able to understand
before taking in the information.  So, once again, I am going to trust Bob
to alter me with info far beyond my current comprehension and algorithmic
information content."

Sounds a bit like a child trusting their parent, eh?

This is a separate point from my point about P1 and P2 in point 1.  But the
two phenomena intersect, of course.

-- Ben G







Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Matt Mahoney
Ben,
If you want to argue that recursive self improvement is a special case of 
learning, then I have no disagreement with the rest of your argument.

But is this really a useful approach to solving AGI? A group of humans can 
generally make better decisions (more accurate predictions) by voting than any 
member of the group can. Did these humans improve themselves?

My point is that a single person can't create much of anything, much less an AI 
smarter than himself. If it happens, it will be created by an organization of 
billions of humans. Without this organization, you would probably not think to 
create spears out of sticks and rocks.

That is my problem with the seed AI approach. The seed AI depends on the 
knowledge and resources of the economy to do anything. An AI twice as smart as 
a human could not do any more than 2 people could. You need to create an AI 
that is billions of times smarter to get anywhere.

We are already doing that. Human culture is improving itself by accumulating 
knowledge, by becoming better organized through communication and 
specialization, and by adding more babies and computers.


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
To: agi@v2.listbox.com
Date: Monday, October 13, 2008, 11:46 PM





On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

Ben,

Thanks for the comments on my RSI paper. To address your comments,

You seem to be addressing minor lacunae in my wording, while ignoring my  main 
conceptual and mathematical point!!!
 





1. I defined improvement as achieving the same goal (utility) in less time or 
achieving greater utility in the same time. I don't understand your objection 
that I am ignoring run time complexity.


OK, you are not ignoring run time completely ... BUT ... in your measurement 
of the benefit achieved by RSI, you're not measuring the amount of run-time 
improvement achieved, you're only  measuring algorithmic information.


What matters in practice is, largely, the amount of run-time improvement 
achieved.   This is the point I made in the details of my reply -- which you 
have not counter-replied to. 

I contend that, in my specific example, program P2 is a *huge* improvement over 
P1, in a way that is extremely important to practical AGI yet is not captured 
by your algorithmic-information-theoretic measurement method.  What is your 
specific response to my example??

 

2. I agree that an AIXI type interactive environment is a more appropriate 
model than a Turing machine receiving all of its input at the beginning. The 
problem is how to formally define improvement in a way that distinguishes it 
from learning. I am open to suggestions.




To see why this is a problem, consider an agent that after a long time, guesses 
the environment's program and is able to achieve maximum reward from that point 
forward. The agent could improve itself by hard-coding the environment's 
program into its successor and thereby achieve maximum reward right from the 
beginning.



Recursive self-improvement **is** a special case of learning; you can't 
completely distinguish them.
 


3. A computer's processor speed and memory have no effect on the algorithmic 
complexity of a program running on it.
Yes, I can see I didn't phrase that point properly, sorry.  I typed that prior 
email too hastily as I'm trying to get some work done ;-)


The point I *wanted* to make in my third point, was that if you take a program 
with algorithmic information K, and give it the ability to modify its own 
hardware, then it can achieve algorithmic information M > K.


However, it is certainly true that this can happen even without the program 
modifying its own hardware -- especially if you make fanciful assumptions like 
Turing machines with huge tapes ... but even without such fanciful assumptions.


The key point, which I did not articulate properly in my prior message, is 
that: ** by engaging with the world, the program can intake new information, 
which can increase its algorithmic information **

The new information a program P1 takes in from the **external world** may be 
random with regard to P1, yet may not be random with regard to {P1 + the new 
information taken in}.  


As self-modification may cause the intake of new information causing 
algorithmic information to increase arbitrarily much, your argument does not 
hold in the case of a program interacting with a world that has much higher 
algorithmic information than it does.


And this of course is exactly the situation people are in.

For instance, a program may learn that In the past, on 10 occasions, I have 
taken in information from Bob that was vastly beyond my algorithmic information 
content at that time.  In each case this process helped me to achieve my goals, 
though in ways I would not have been able to understand before taking

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Ben Goertzel
OK, well now you are backing away from your claim of a mathematical disproof
of RSI!!

What you did IMHO was to prove there is limited value in RSI by defining RSI
in a very limited way, and then measuring the value of this limited-RSI in a
manner that does not capture the practical value of any kind of RSI...

I don't agree that an AGI will be programmed by billions of humans.  I think
an AGI will be created by a fairly small team of programmers and
scientists.  Of course, this effort will build atop the prior work of a
large number of other scientists and engineers -- the ones who built the
computer chips, the Internet, the programming languages, and so forth.  But
I see no reason why the actual programming and design of the AGI can't be
done by a fairly small team...

I agree that RSI is not how human intelligence predominantly works, but my
goal is not to replicate human intelligence, rather to create better forms
of intelligence that can help humans better than we can help ourselves
directly ... and can also move on to levels inaccessible to humans...

-- Ben G

On Tue, Oct 14, 2008 at 12:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Ben,
 If you want to argue that recursive self improvement is a special case of
 learning, then I have no disagreement with the rest of your argument.

 But is this really a useful approach to solving AGI? A group of humans can
 generally make better decisions (more accurate predictions) by voting than
 any member of the group can. Did these humans improve themselves?

 My point is that a single person can't create much of anything, much less
 an AI smarter than himself. If it happens, it will be created by an
 organization of billions of humans. Without this organization, you would
 probably not think to create spears out of sticks and rocks.

 That is my problem with the seed AI approach. The seed AI depends on the
 knowledge and resources of the economy to do anything. An AI twice as smart
 as a human could not do any more than 2 people could. You need to create an
 AI that is billions of times smarter to get anywhere.

 We are already doing that. Human culture is improving itself by
 accumulating knowledge, by becoming better organized through communication
 and specialization, and by adding more babies and computers.


 -- Matt Mahoney, [EMAIL PROTECTED]

 --- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Monday, October 13, 2008, 11:46 PM





 On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED]
 wrote:

 Ben,

 Thanks for the comments on my RSI paper. To address your comments,

 You seem to be addressing minor lacunae in my wording, while ignoring my
 main conceptual and mathematical point!!!






 1. I defined improvement as achieving the same goal (utility) in less
 time or achieving greater utility in the same time. I don't understand your
 objection that I am ignoring run time complexity.


 OK, you are not ignoring run time completely ... BUT ... in your
 measurement of the benefit achieved by RSI, you're not measuring the amount
 of run-time improvement achieved, you're only  measuring algorithmic
 information.


 What matters in practice is, largely, the amount of run-time improvement
 achieved.   This is the point I made in the details of my reply -- which you
 have not counter-replied to.

 I contend that, in my specific example, program P2 is a *huge* improvement
 over P1, in a way that is extremely important to practical AGI yet is not
 captured by your algorithmic-information-theoretic measurement method.  What
 is your specific response to my example??



 2. I agree that an AIXI type interactive environment is a more appropriate
 model than a Turing machine receiving all of its input at the beginning. The
 problem is how to formally define improvement in a way that distinguishes it
 from learning. I am open to suggestions.




 To see why this is a problem, consider an agent that after a long time,
 guesses the environment's program and is able to achieve maximum reward from
 that point forward. The agent could improve itself by hard-coding the
 environment's program into its successor and thereby achieve maximum reward
 right from the beginning.



 Recursive self-improvement **is** a special case of learning; you can't
 completely distinguish them.



 3. A computer's processor speed and memory have no effect on the
 algorithmic complexity of a program running on it.
 Yes, I can see I didn't phrase that point properly, sorry.  I typed that
 prior email too hastily as I'm trying to get some work done ;-)


 The point I *wanted* to make in my third point, was that if you take a
 program with algorithmic information K, and give it the ability to modify
  its own hardware, then it can achieve algorithmic information M > K.


 However, it is certainly true that this can happen even without the program
 modifying