Eric Burton <[EMAIL PROTECTED]> wrote:

>>These have profound impacts on AGI design. First, AIXI is (provably) not
>>computable, which means there is no easy shortcut to AGI. Second, universal
>>intelligence is not computable because it requires testing in an infinite
>>number of environments. Since there is no other well accepted test of
>>intelligence above human level, it casts doubt on the main premise of the
>>singularity: that if humans can create agents with greater than human
>>intelligence, then so can those agents.
>
>I don't know for sure that these statements logically follow from one
>another.

They don't. I cannot prove that there is no non-evolutionary model of recursive 
self-improvement (RSI). Nor can I prove that there is. But it is a question we 
need to answer before an evolutionary model becomes technically feasible, 
because an evolutionary model is definitely unfriendly.

>Higher intelligence bootstrapping itself has already been proven on
>Earth. Presumably it can happen in a simulation space as well, right?

If you mean the evolution of humans, that is not an example of RSI. One 
requirement of friendly AI is that an AI cannot alter its human-designed goals. 
(Another is that we get the goals right, which is unsolved.) However, in an 
evolutionary environment, the parents do not get to choose the goals of their 
children. Evolution chooses goals that maximize reproductive fitness, 
regardless of what you want.

I have challenged this list as well as the singularity and SL4 lists to come up 
with a mathematical, software, biological, or physical example of RSI, or at 
least a plausible argument that one could be created, and nobody has. To 
qualify, an agent has to modify itself or create a more intelligent copy of 
itself, according to an intelligence test chosen by the original. The following 
are not examples of RSI:

1. Evolution of life, including humans.
2. Emergence of language, culture, writing, communication technology, and 
computers.
3. A chess playing (or tic-tac-toe, or factoring, or SAT solving) program that 
makes modified copies of itself by randomly flipping bits in a compressed 
representation of its source code, and plays its copies in death matches (a toy 
sketch of this scheme follows the list).
4. Selective breeding of children for those that get higher grades in school.
5. Genetic engineering of humans for larger brains.
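
To make item 3 concrete, here is a minimal sketch of the mutate-and-death-match 
scheme in Python. The genome representation, mutation rate, and the toy fitness 
function standing in for the intelligence test are all illustrative assumptions 
on my part, not anyone's actual proposal:

import random

def fitness(genome):
    # Toy stand-in for the intelligence test chosen by the original
    # agent (here: just count the 1 bits).
    return sum(genome)

def mutate(genome, rate=0.01):
    # Randomly flip bits, as in the compressed-source mutation of item 3.
    return [b ^ (random.random() < rate) for b in genome]

parent = [random.randint(0, 1) for _ in range(256)]
for _ in range(10000):
    child = mutate(parent)
    # "Death match": the child replaces the parent only if it beats
    # it on the test the parent applies.
    if fitness(child) > fitness(parent):
        parent = child
print(fitness(parent))  # climbs toward 256, the ceiling built into fitness()

Note that the agent only ever gets as "smart" as the fixed fitness function can 
distinguish, which is why the choice of test matters in the rebuttals below.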

1 fails because evolution is smarter than all of human civilization if you 
measure intelligence in bits of memory. A model of evolution uses 10^37 bits 
(10^10 bits of DNA per cell x 10^14 cells in the human body x 10^10 humans x 
10^3 ratio of total biomass to human biomass). Human civilization has at most 
10^25 bits (10^15 synapses per human brain x 10^10 humans).
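
For anyone who wants to check the arithmetic, a quick back-of-envelope in 
Python (every constant is just the order-of-magnitude estimate quoted above, 
not a measurement):

import math

dna_bits_per_cell = 1e10
cells_per_human   = 1e14
humans            = 1e10
biomass_ratio     = 1e3   # total biomass relative to human biomass

evolution_bits    = dna_bits_per_cell * cells_per_human * humans * biomass_ratio
civilization_bits = 1e15 * humans     # 10^15 synapses per brain

print("evolution:    10^%.0f bits" % math.log10(evolution_bits))     # 10^37
print("civilization: 10^%.0f bits" % math.log10(civilization_bits))  # 10^25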

2 fails because individual humans are not getting smarter with each generation, 
at least not nearly as fast as civilization is advancing. Rather, there are 
more humans, and we are getting better organized through specialization of 
tasks. Human brains are not much different than they were 10,000 years ago.

3 fails because there are no known classes of problems that are provably hard 
to solve but easy to verify. Tic-tac-toe and chess have bounded complexity. It 
has not been proven that factoring is harder than multiplication. We don't know 
that P != NP, and even if we did, many NP-complete problems have special cases 
that are easy to solve (for example, 2-SAT is solvable in polynomial time even 
though general SAT is NP-complete), and we don't know how to program the parent 
to avoid these easy cases through successive generations.
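
As an illustration of "hard to solve but easy to verify" (believed, not proven, 
in the case of factoring), compare verifying a factorization with naively 
searching for one. This is only a toy sketch; nothing here settles the open 
question:

def verify(n, p, q):
    # Verifying a claimed factorization is a single multiplication.
    return p > 1 and q > 1 and p * q == n

def factor(n):
    # Naive search by trial division; exponential in the bit length of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 2021
print(factor(n))          # (43, 47)
print(verify(n, 43, 47))  # True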

4 fails because there is no evidence that, above a certain level (about IQ 
200), childhood intelligence correlates with adult success. The problem is that 
adults of average intelligence can't agree on how success should be measured*.

5 fails for the same reason.

*For example, the average person recognizes Einstein as a genius not because 
they are awed by his theory of general relativity, but because other people 
have said so. If you just read his papers (without understanding their great 
insights) and knew that he never learned to drive a car, you might conclude 
differently.

 -- Matt Mahoney, [EMAIL PROTECTED]


