--- On Sun, 6/22/08, William Pearson <[EMAIL PROTECTED]> wrote:

> From: William Pearson <[EMAIL PROTECTED]>
> > Two questions:
> > 1) Do you know enough to estimate which scenario is
> more likely?
> 
> Well since intelligence explosions haven't happened previously in our
> light cone, it can't be a simple physical pattern, so I think
> non-exploding intelligences have the evidence for being simpler on
> their side. So we might find them more easily. I also think I have
> solid reasoning to think intelligence exploding is unlikely, which
> requires paper length rather than post length. So while I think I do,
> should I trust my own rationality?

I agree. I raised this question recently on SL4 but I don't think it has been 
resolved. Namely, is there a non-evolutionary model for recursive self 
improvement? By non-evolutionary, I mean that the parent AI, and not the 
environment, chooses which of its children are more intelligent.

I am looking for a mathematical model, or a model that could be experimentally 
verified. It could use a simplified definition of intelligence, for example, 
ability to win at chess. In this scenario, an agent would produce a modified 
copy of itself and play its copy to the death. After many iterations, a 
successful model should produce a good chess-playing agent. If this is too 
computationally expensive or too complex to analyze mathematically, you could 
substitute a simpler game like tic-tac-toe or prisoner's dilemma. Another 
variation would use mathematical problems that we believe are hard to solve but 
easy to verify, such as traveling salesman, factoring, or data compression.
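To make the protocol concrete, here is a toy sketch of the last variation in Python. Everything here is invented for illustration: the "agent" is just a preference order over trial divisors, the challenge is factoring a small semiprime, and the contest rule is "faster factorer survives".

```python
import random

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def steps_to_factor(order, n):
    """Count trial divisions until a factor of n is found."""
    for i, p in enumerate(order):
        if n % p == 0:
            return i + 1
    return len(order) + 1

def mutate(order):
    """Child = parent with two candidate divisors swapped."""
    child = order[:]
    i, j = random.randrange(len(child)), random.randrange(len(child))
    child[i], child[j] = child[j], child[i]
    return child

def contest(rounds=200, seed=0):
    """Parent spawns a modified copy; they race on a shared challenge;
    the loser 'dies'. Returns the surviving agent."""
    random.seed(seed)
    parent = PRIMES[:]                     # initial agent: ascending order
    for _ in range(rounds):
        child = mutate(parent)
        n = random.choice(PRIMES) * random.choice(PRIMES)  # the challenge
        if steps_to_factor(child, n) <= steps_to_factor(parent, n):
            parent = child                 # faster factorer survives
    return parent
```

The point of the sketch is only the selection loop; any task with a scoreable outcome could be substituted for the factoring step.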

I find the absence of such models troubling. One problem is that there are no 
provably hard problems. Problems like tic-tac-toe and chess are known to be 
easy, in the sense that they can be fully analyzed with sufficient computing 
power. (Perfect chess is O(1) using a giant lookup table.) Once an agent plays 
perfectly, the next generation would have to switch to a harder problem that was 
not considered in the original design. Thus, the design is not friendly.
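The "fully analyzable" claim is easy to demonstrate for tic-tac-toe: a few lines of memoized game-tree search exhaust the game and prove that perfect play is a draw, leaving nothing for further "improvement" to optimize. (This is a standard negamax sketch, not anything from the original post.)

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for `player` to move: +1 win, 0 draw, -1 loss."""
    if winner(board):
        return -1            # the previous mover just completed a line
    if '.' not in board:
        return 0             # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, other))
    return best
```

Evaluating `value('.' * 9, 'X')` exhausts the whole game tree; the result is 0, i.e. a guaranteed draw under perfect play.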

Other problems like factoring can always be scaled by using larger numbers, but 
there is no proof that the problem is harder to solve than to verify. We only 
believe so because all of humanity has failed to find a fast solution (which 
would break RSA), but this is not a proof. Even if we use provably uncomputable 
problems, such as optimal data compression or the halting problem, there is no 
provably correct algorithm for selecting from among them a subset of problems of 
which at least half are hard to solve.
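The solve-versus-verify asymmetry for factoring can be stated in code: verifying a claimed factorization is a single multiplication, while solving (here by naive trial division, just for illustration) is a search whose cost grows exponentially in the bit length of the input.

```python
def verify(n, p, q):
    """Verifying a claimed factorization is one multiplication."""
    return p > 1 and q > 1 and p * q == n

def solve(n):
    """Solving means search: trial division up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d       # found a nontrivial factorization
        d += 1
    return None                    # n is prime
```

The conjecture, unproven, is that no algorithm closes this gap: `solve` is believed to be fundamentally slower than `verify` for large n, but nobody can prove it.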

One counterargument is that maybe human-level intelligence is required for 
RSI. But there is a vast difference between human intelligence and humanity's 
intelligence. Producing an AI with an IQ of 200 is not self-improvement if you 
use any knowledge that came from other humans. RSI would be humanity producing 
an AI that is smarter than all of humanity. I have no doubt that will happen 
for some definition of "smarter", but without a model of RSI I don't believe it 
will be humanity's choice, just as you can have children, some of whom will be 
smarter than you, without knowing in advance which ones.

Another counterargument is that we could proceed without proof: if problem X is 
hard, then RSI is possible. However, we lack models even with this relaxation. 
Suppose factoring is hard. An agent makes a modified copy of itself and 
challenges its child to a factoring contest. Last one to answer dies. This 
might work, except that most mutations would be harmful, and there would be 
enough randomness in the test that intelligence would decline over time. I 
would be interested if anyone could get a model like this to work for any X 
believed to be harder to solve than to verify.
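The decline argument can itself be put in a toy numeric model (all numbers here are invented assumptions, not measurements): give the agent a scalar "skill", make mutations almost always harmful, and make the contest noisy enough that the worse agent often wins anyway.

```python
import random

def noisy_rsi(rounds=1000, seed=1):
    """Parent spawns a mutated child; they compete on a noisy test;
    the apparent winner survives. Returns the final true skill."""
    random.seed(seed)
    skill = 0.0                                    # parent's true ability
    for _ in range(rounds):
        child = skill + random.gauss(-0.1, 0.02)   # mostly harmful mutation
        # Noisy contest: measured score = true skill + large test noise,
        # so the worse agent wins roughly half the time.
        if child + random.gauss(0, 2.0) > skill + random.gauss(0, 2.0):
            skill = child                          # child won, by luck or merit
    return skill
```

With these assumed parameters the test noise swamps the mutation effect, selection degenerates to a near coin flip, and the surviving lineage's skill drifts steadily downward, which is the failure mode described above.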

I believe that RSI is necessarily evolutionary (and therefore not controllable 
by us), because you can't test for any level of intelligence without already 
being that smart. However, I don't believe the issue is settled, either.


-- Matt Mahoney, [EMAIL PROTECTED]


