--- On Sun, 9/28/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:

> Recently, someone on this list (I apologize for not making a
> note of this person's name) raised the question whether we might
> find a "shortcut" to AGI. 

That was me.

> The author went on to opine that, because the problems associated
> with achieving AGI had been considered by some of the world's most
> brilliant minds at various times over thousands of years of human history
> and because the problem, nonetheless, remains unsolved, it was extremely
> unlikely such a shortcut would ever be found.

I meant that a more productive approach would be to try to understand why the 
problem is so hard. Two other hard, high-payoff problems come to mind. One is 
energy. The other is certain classes of problems in computer science.

Early engineers hacked away at the energy problem by designing complex machines 
that could power themselves, known as perpetual motion machines. Hundreds of 
clever designs were tried, but all failed when built. In trying to understand 
these failures, physicists developed the laws of thermodynamics and the 
principle of conservation of energy. As a result, engineers became more 
productive, because they designed power generation systems consistent with 
these laws rather than wasting their time on ever more intricate mechanisms for 
extracting free energy. We did this even though there is no proof that 
conservation of energy is correct; rather, it is a theory consistent with 
thousands of experiments. (In fact, the law as originally stated was wrong: 
Einstein later modified it to include mass.)

The second example concerns certain hard problems such as Boolean 
satisfiability, subset-sum, and the traveling salesman problem. Early 
programmers worked on these one at a time, trying to find the most efficient 
solutions. The great insight was the introduction of complexity classes and the 
notion of NP-completeness: that a solution to any one of these problems (and 
there are thousands) could be used to solve all the others. This is not a proof 
that the problems are hard. We have not proven that P != NP. But it does mean 
that if we can prove that a problem is NP-complete, then we are probably better 
off looking at a different approach. The result is that programmers are more 
productive.
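To make the difficulty concrete, here is a brute-force subset-sum solver in 
Python (my illustration, not something from the original discussion). It tries 
all 2^n subsets; because subset-sum is NP-complete, no known algorithm avoids 
this kind of exponential worst-case blowup.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: try all 2^n subsets of nums.
    Subset-sum is NP-complete, so no known algorithm
    avoids exponential worst-case time."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset sums to target

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # finds [4, 5]
```

Adding one element to nums doubles the number of subsets to check, which is 
exactly why proving a problem NP-complete is a signal to look for a different 
approach (approximation, heuristics, restating the problem) instead.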

I make a distinction between unlikely events, like finding a violation of the 
laws of thermodynamics or proving P = NP, and provably impossible events (with 
high perceived payoffs) such as solving the halting problem or recursive data 
compression. A lot of people waste time on impossible problems too, because 
they don't understand the proofs.

I put AI in the former category. My interest is to understand why AI is so 
hard, so that we can put our effort where it will be most productive. There is 
a lot more work to be done in this area. Based on what I have done so far, I 
estimate that automating the economy using AI will cost US $1 quadrillion over 
30 years, largely for the cost of software and customized job training 
(including the indirect costs of mistakes made by partially trained AI). I 
estimate that the hardware cost of natural language modeling is similar to 
that of vision: in the range of tens to hundreds of gigabytes of memory and 1 
teraflop for real-time performance (meaning 10 years to compress 1 GB of text). 
We would need
10^10 of these. The total complexity would be 10^17 to 10^18 bits of knowledge 
that needs to be extracted from human brains.
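As a sanity check, the total-complexity arithmetic can be sketched as follows. 
The bits-per-person figure here is my assumption, chosen only to be consistent 
with the 10^17 to 10^18 bit total stated above; it is not an independently 
measured value.

```python
# Back-of-envelope check of the estimate above. bits_per_person is an
# assumption chosen to reproduce the stated 10^17 to 10^18 bit total,
# not a measured quantity.
models = 10**10                 # roughly one language model per person
bits_per_person = 10**7         # assumed unique (non-shared) knowledge each
total_low = models * bits_per_person        # 10^17 bits
total_high = models * bits_per_person * 10  # 10^18 bits
print(f"{total_low:e} to {total_high:e} bits of knowledge")
```

The point of the exercise is that the total is dominated by how much knowledge 
is unique to each person, since shared knowledge only needs to be extracted 
once.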

I have looked at shortcuts. The most promising of these at the moment seems to 
be recursive self-improvement (RSI): the idea that a program could rewrite its 
own code to achieve some goal more efficiently. The argument is that if humans 
can produce a superhuman intelligence, then that intelligence can do the same, 
only faster. I question that. Intelligence is not a point on a line. Computers 
have been smarter than humans in some areas for 50 years. Exactly what 
threshold has to be crossed?

If RSI is practical, then we should have working mathematical, software, or 
physical models of it. I have created a trivial self-improving program in C
(see http://www.mattmahoney.net/rsi.pdf ) and proved that all such systems grow 
extremely slowly (O(log n)) in algorithmic complexity. However, this proof is 
for a narrow, formal definition of RSI. A study of more general forms of RSI is 
needed. (Any suggestions?)
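The self-rewriting mechanics (though not the complexity argument) can be shown 
with a toy Python sketch. This is my sketch, not the C program from rsi.pdf: 
each generation emits its own successor, but the "improvement" is just a 
counter bumped by one, a cartoon of how little algorithmic complexity 
self-rewriting alone adds per generation.

```python
# Toy self-rewriting program: each generation's source differs from its
# parent only by an incremented constant. Illustrative only; not the
# formal RSI model from the paper.
TEMPLATE = (
    "GEN = {n}\n"
    "def next_source():\n"
    "    return TEMPLATE.format(n=GEN + 1)\n"
)

def run_generation(source):
    """Execute one generation; return its GEN value and its successor's source."""
    env = {"TEMPLATE": TEMPLATE}
    exec(source, env)
    return env["GEN"], env["next_source"]()

src = TEMPLATE.format(n=0)
for _ in range(3):
    gen, src = run_generation(src)
    print("ran generation", gen)
```

All the interesting structure lives in the fixed template; each rewrite adds 
only the information in one incremented integer, which is in the spirit of the 
O(log n) growth bound above.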

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/