Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Eric Burton
A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should have n ways of generating those
methods, or n sets of ways to generate ways of generating those methods,
etc., I don't know.

On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:
 The premise is that if humans can create agents with above-human
 intelligence, then so can those agents. What I am questioning is whether
 agents at any intelligence level can do this. I don't believe that agents
 at any level can recognize higher intelligence, and therefore they cannot
 test their creations.

 The premise is not necessary to arrive at greater-than-human
 intelligence. If a human can create an agent of equal intelligence, that
 agent will rapidly become more intelligent (in practical terms) if
 advances in computing technologies continue to occur.

 An AGI with intelligence equivalent to that of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second. I would argue that
 by any sensible definition of intelligence, we would have a
 greater-than-human intelligence that was not created by a being of
 lesser intelligence.






Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Abram Demski
I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

On Thu, Aug 28, 2008 at 9:04 PM, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above-human
 intelligence, then so can those agents. What I am questioning is whether
 agents at any intelligence level can do this. I don't believe that agents
 at any level can recognize higher intelligence, and therefore they cannot
 test their creations.

 The premise is not necessary to arrive at greater-than-human
 intelligence. If a human can create an agent of equal intelligence, that
 agent will rapidly become more intelligent (in practical terms) if
 advances in computing technologies continue to occur.

 An AGI with intelligence equivalent to that of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second. I would argue that
 by any sensible definition of intelligence, we would have a
 greater-than-human intelligence that was not created by a being of
 lesser intelligence.






Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 10:09 AM, Abram Demski wrote:

I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram


Exactly. A better transistor, or a lower-complexity algorithm for a
computational bottleneck in an AGI (and the implementation of either), is a
self-improvement that improves the AGI's ability to make further
improvements -- i.e., RSI.
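
To make that concrete, here is a toy sketch (an illustration of the general
point, not anything proposed in the thread): replacing a quadratic-time
bottleneck with a linear-time one.

    # Toy bottleneck: does a list contain a duplicate?
    # The naive version is O(n^2); the improved version is O(n) using a
    # set. Swapping one for the other is a tiny instance of improving the
    # machinery you then use to make further improvements.
    def has_duplicate_naive(xs):
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def has_duplicate_fast(xs):
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False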


Likewise, it is not inconceivable that we will soon be able to improve 
human intelligence by means such as increasing neural signaling speed 
(assuming the increase doesn't have too many negative effects, which it 
might) and improving other *individual* aspects of brain biology. This 
would be RSI, too.






Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above-human
 intelligence, then so can those agents. What I am questioning is whether
 agents at any intelligence level can do this. I don't believe that agents
 at any level can recognize higher intelligence, and therefore they cannot
 test their creations.

 The premise is not necessary to arrive at greater-than-human
 intelligence. If a human can create an agent of equal intelligence, that
 agent will rapidly become more intelligent (in practical terms) if
 advances in computing technologies continue to occur.

 An AGI with intelligence equivalent to that of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second.

Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

How long most learning systems take to learn something is constrained
not by processing power (AIXI excepted) but by the speed of running
experiments.

  Will Pearson




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Matt Mahoney
It seems that the debate over recursive self-improvement depends on what you 
mean by improvement. If you define improvement as greater intelligence as 
measured by the Turing test, then RSI is not possible, because the Turing test 
does not test for superhuman intelligence. If you mean more memory, a faster 
clock speed, more network bandwidth, etc., then yes, I think it is reasonable 
to expect Moore's law to continue after we are all uploaded. If you mean 
improvement in the sense of competitive fitness, then yes, I expect evolution 
to continue, perhaps very rapidly if it is based on a computing substrate other 
than DNA. Whether you can call that self-improvement, or whether the result is 
desirable, is debatable. We are, after all, pondering the extinction of Homo 
sapiens and its replacement by some unknown species, perhaps gray goo. Will the 
nanobots look back at this as an improvement, the way we view the extinction of 
Homo erectus?

My question is whether RSI is mathematically possible in the context of 
universal intelligence, i.e. expected reward or prediction accuracy over a 
Solomonoff distribution of computable environments. I believe it is possible 
for Turing machines if and only if they have access to true random sources, so 
that each generation can create successively more complex test environments in 
which to evaluate its offspring. But this is troubling, because in practice we 
can construct pseudo-random sources that no polynomial-time test can 
distinguish from truly random ones (but none that are *provably* so).
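
To make the parent/offspring picture concrete, here is a toy Python sketch
(my illustration only; the function names and the bit-sequence "environment"
are invented for the example, not part of any formal result):

    # A parent draws a seed from a random source, expands it into a test
    # environment (here: a bit sequence), and scores an offspring on how
    # well it predicts each bit from the preceding prefix. With a true
    # random source, the offspring cannot have the seed encoded in
    # advance; with a fixed pseudo-random generator it could, in principle.
    import os
    import random

    def environment_bits(seed, n):
        rng = random.Random(seed)
        return [rng.randint(0, 1) for _ in range(n)]

    def score(predict, bits):
        hits = sum(predict(bits[:i]) == bits[i] for i in range(1, len(bits)))
        return hits / (len(bits) - 1)

    seed = os.urandom(16)              # stand-in for a true random source
    bits = environment_bits(seed, 1000)
    majority = lambda prefix: int(sum(prefix) > len(prefix) / 2)
    print(score(majority, bits))       # ~0.5: no better than chance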

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 01:29 PM, William Pearson wrote:

2008/8/29 j.k. [EMAIL PROTECTED]:

An AGI with intelligence equivalent to that of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later point in
time, doing nothing differently except running 31 million times faster,
will accomplish one genius-year of work every second.

Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.


Yes, you're right. It doesn't follow that its productivity will 
necessarily scale linearly, but the larger point I was trying to make 
was that it would be much faster, and that being much faster would 
represent an improvement that improves its ability to make future 
improvements.

The numbers are unimportant, but I'd argue that even if there were just 
one such human-level AGI running at 1 million times normal speed, and even 
if it did require regular interaction just as most humans do, it would 
still be hugely productive and would represent a phase shift in 
intelligence in terms of what it accomplishes. Solving one difficult 
problem is probably not highly parallelizable in general (many problems 
are not at all parallelizable), but solving tens of thousands of such 
problems across many domains over the course of a year or so probably is. 
The human-level AGI running a million times faster could simultaneously 
interact with tens of thousands of scientists at their pace, so there is 
no reason to believe it need be starved for interaction to the point 
that its productivity would be limited to near-human levels.
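
As a rough sanity check on the interaction budget (illustrative numbers
only, mine rather than anything from the thread):

    # Subjective-time budget of a 1,000,000x-speed AGI holding 10,000
    # simultaneous human-pace conversations. Each real-time second spent
    # attending to one conversation costs one subjective second.
    speedup = 1_000_000
    conversations = 10_000
    free = speedup - conversations  # subjective seconds left per real second
    print(f"{free / speedup:.1%}")  # 99.0% of its time is still free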








Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/29/2008 01:29 PM, William Pearson wrote:

 2008/8/29 j.k. [EMAIL PROTECTED]:

 An AGI with intelligence equivalent to that of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second.

 Will it? It might be starved for lack of interaction with the world
 and other intelligences, and so be a lot less productive than
 something working at normal speeds.

 Yes, you're right. It doesn't follow that its productivity will
 necessarily scale linearly, but the larger point I was trying to make
 was that it would be much faster, and that being much faster would
 represent an improvement that improves its ability to make future
 improvements.

 The numbers are unimportant, but I'd argue that even if there were just
 one such human-level AGI running at 1 million times normal speed, and even
 if it did require regular interaction just as most humans do, it would
 still be hugely productive and would represent a phase shift in
 intelligence in terms of what it accomplishes. Solving one difficult
 problem is probably not highly parallelizable in general (many problems
 are not at all parallelizable), but solving tens of thousands of such
 problems across many domains over the course of a year or so probably is.
 The human-level AGI running a million times faster could simultaneously
 interact with tens of thousands of scientists at their pace, so there is
 no reason to believe it need be starved for interaction to the point
 that its productivity would be limited to near-human levels.

Only if it had millions of times normal human storage capacity and
memory bandwidth (otherwise it couldn't keep track of all the
conversations), and sufficient network bandwidth for ten thousand VOIP
calls at once.

We should perhaps clarify what you mean by speed here. The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor clock speed hasn't gone up appreciably since the
heady days of 3.8 GHz Pentium 4s in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines, and multiple cores). The hard disk is probably what is
holding back current computers at the moment.
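
For scale, a back-of-the-envelope figure (my numbers, assuming the plain
G.711 telephony codec at 64 kbit/s per call):

    # Rough bandwidth for 10,000 simultaneous voice calls at G.711 rates.
    calls = 10_000
    kbit_per_call = 64
    print(calls * kbit_per_call / 1000, "Mbit/s per direction")  # 640.0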


  Will Pearson







Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 03:14 PM, William Pearson wrote:

2008/8/29 j.k. [EMAIL PROTECTED]:

... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the point that its productivity would be
limited to near-human levels.

Only if it had millions of times normal human storage capacity and
memory bandwidth (otherwise it couldn't keep track of all the
conversations), and sufficient network bandwidth for ten thousand VOIP
calls at once.

And sufficient electricity, etc. There are many other details that would 
have to be spelled out if we were trying to give an exhaustive list of 
every possible requirement. But the point remains that *if* the 
technological advances that we expect to occur actually do occur, then 
there will be greater-than-human intelligence that was created by 
human-level intelligence -- unless one thinks that memory capacity, chip 
design and throughput, disk, system, and network bandwidth, etc., are 
close to as good as they'll ever get. On the contrary, there are more 
promising new technologies on the horizon than one can keep track of 
(not to mention current technologies that can still be improved), which 
makes it extremely unlikely that any of these or the other relevant 
factors are close to practical maximums.

We should perhaps clarify what you mean by speed here. The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor clock speed hasn't gone up appreciably since the
heady days of 3.8 GHz Pentium 4s in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines, and multiple cores).

I didn't mean that we could drop a 3 THz chip (if such a thing were 
physically possible) onto an existing motherboard and have everything scale 
linearly, or that a better transistor would be the *only* improvement to 
occur. When I said 31 million times faster, I meant that the system 
as a whole would be 31 million times faster at achieving its 
computational goals. This will obviously require many improvements in 
processor design, system architecture, memory, bandwidth, physics and 
materials science, and elsewhere, but the scenario I was trying to discuss 
was one in which these sorts of advances have occurred.
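
Incidentally, a toy Amdahl's-law calculation (invented numbers) shows why
the whole stack has to improve for a system-wide speedup like that:

    # If even 1% of the workload runs on an unimproved component, the
    # overall speedup caps at 100x no matter how fast the rest becomes.
    def overall_speedup(improved_fraction, component_speedup):
        return 1 / ((1 - improved_fraction)
                    + improved_fraction / component_speedup)

    for s in (1e3, 1e6, 1e9):
        print(f"{s:.0e}x on 99% of the work -> "
              f"{overall_speedup(0.99, s):.0f}x overall")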


This is getting quite far off topic from the point I was trying to make 
originally, so I'll bow out of this discussion now.


j.k.




RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Matt Mahoney
Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html

 
The premise is that if humans can create agents with above-human 
intelligence, then so can those agents. What I am questioning is whether 
agents at any intelligence level can do this. I don't believe that agents at 
any level can recognize higher intelligence, and therefore they cannot test 
their creations. We rely on competition in an external environment to make 
fitness decisions. The parent isn't intelligent enough to make the correct 
choice.

-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

Matt: If RSI is possible, then there is the additional threat of a fast 
takeoff of the kind described by Good and Vinge

Can we have an example of just one or two subject areas or domains where a 
takeoff has been considered (by anyone) as possibly occurring, and what 
form such a takeoff might take? I hope the discussion of RSI is not entirely 
one of airy generalities, without any grounding in reality.




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Mike Tintner

Thanks. But like I said, airy generalities.

That machines can become faster and faster at computations and accumulating 
knowledge is certain. But that's narrow AI.


For general intelligence, you have to be able, first, to integrate as well 
as accumulate knowledge. We have learned vast amounts about the brain in the 
last few years, for example - perhaps more than in all previous history. But 
this hasn't led to any comparably fast advance in integrating that 
knowledge.


You also have to be able, second, to discover knowledge - to be creative, to 
fill in some of the many gaping holes in every domain of knowledge. That 
again doesn't march to a mathematical formula.


Hence, I suggest, you don't see any glimmers of RSI in any actual domain of 
human knowledge. If it were possible at all, you should see some signs of it, 
however small.


The whole idea of RSI strikes me as high-school naive - completely lacking 
in any awareness of the creative, systemic structure of how knowledge and 
technology actually advance in different domains.


Another example: try to recursively improve the car. Like every piece of 
technology, it's not a solitary thing but is bound up in vast technological 
ecosystems (here - roads, oil, gas stations, etc.) that cannot be improved 
in a simple, linear fashion.


Similarly, I suspect each individual's mind/intelligence depends on complex 
interdependent systems and paradigms of knowledge. And so of necessity would 
any AGI's mind. (Not that mind is possible without a body).





Matt: Here is Vernor Vinge's original essay on the singularity.

http://mindstalk.net/vinge/vinge-sing.html


The premise is that if humans can create agents with above-human 
intelligence, then so can those agents. What I am questioning is whether 
agents at any intelligence level can do this. I don't believe that agents at 
any level can recognize higher intelligence, and therefore they cannot test 
their creations. We rely on competition in an external environment to make 
fitness decisions. The parent isn't intelligent enough to make the correct 
choice.


-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))


Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge

Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might take? I hope the discussion of RSI is not entirely
one of airy generalities, without any grounding in reality.




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread j.k.

On 08/28/2008 04:47 PM, Matt Mahoney wrote:

The premise is that if humans can create agents with above-human 
intelligence, then so can those agents. What I am questioning is whether 
agents at any intelligence level can do this. I don't believe that agents 
at any level can recognize higher intelligence, and therefore they cannot 
test their creations.

The premise is not necessary to arrive at greater-than-human 
intelligence. If a human can create an agent of equal intelligence, that 
agent will rapidly become more intelligent (in practical terms) if 
advances in computing technologies continue to occur.

An AGI with intelligence equivalent to that of a 99.-percentile human 
might be creatable, recognizable and testable by a human (or group of 
humans) of comparable intelligence. That same AGI at some later point in 
time, doing nothing differently except running 31 million times faster, 
will accomplish one genius-year of work every second. I would argue that 
by any sensible definition of intelligence, we would have a 
greater-than-human intelligence that was not created by a being of 
lesser intelligence.
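
To spell out the arithmetic behind "one genius-year per second" (a quick
check, using nothing beyond the number of seconds in a year):

    # A year is ~31.5 million seconds, so a 31-million-fold speedup
    # yields roughly one subjective year of work per wall-clock second.
    seconds_per_year = 365.25 * 24 * 3600  # about 31.6 million
    speedup = 31_000_000
    print(speedup / seconds_per_year)      # ~0.98 genius-years per second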




