I think if an artificial intelligence of length n were able to fully
grok itself and had a workspace of at least n in which to try out
modifications, it would be fairly simple for that intelligence to
notice when the intelligences it is engineering in that space exhibit
new capabilities in areas where it falls short, at least in terms of
suitability as an ethical replacement for human decision-making, or
some such measure. Surely!

Eric B

On 8/27/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> Matt,
>
> Thanks for the reply. There are 3 reasons that I can think of for
> calling Goedel machines "bounded":
>
> 1. As you assert, once a solution is found, it stops.
> 2. It will be on a finite computer, so it will eventually reach the
> one best version of itself that it can reach.
> 3. It can only make provably correct steps, which is very limiting
> thanks to Goedel's incompleteness theorem.
>
> I'll try to argue that each of these limits can be overcome in
> principle, and we'll see if the result satisfies your RSI criteria.
>
> First, I do not think it is terribly difficult to define a Goedel
> machine that does not halt. It interacts with its environment, and
> there is some utility value attached to this interaction, and it
> attempts to rewrite its code to maximize this utility.
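>
> To make this concrete, here is a minimal Python sketch of the loop I
> have in mind. It is only an illustration of the idea, not
> Schmidhuber's actual formalism; propose_rewrite() and
> proves_improvement() are hypothetical placeholders for the proof
> search.
>
>     # A Goedel-machine-style agent that never halts: it keeps interacting
>     # with its environment and keeps looking for provably better rewrites.
>     def run_forever(agent, environment):
>         total_utility = 0.0
>         while True:                                    # no stopping condition
>             action = agent.act(environment.observe())
>             total_utility += environment.step(action)  # utility of the interaction
>             candidate = agent.propose_rewrite()        # hypothetical proof search
>             if candidate is not None and agent.proves_improvement(candidate):
>                 agent = candidate                      # adopt the better self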
>
> The second and third need to be tackled together, because the main
> reason a Goedel machine can't improve its own hardware is that there
> is uncertainty involved, so no hardware change would ever be provably
> better: there is always some chance of hardware malfunction. So I
> think it is necessary to grant the possibility of modifications that
> are merely very probably correct. Once this is done, 2 and 3 fall
> fairly easily, assuming that the machine begins life with a good
> probabilistic learning system. That is a big assumption, but I think
> we can grant it for the moment.
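>
> As a sketch of what "merely very probably correct" could mean in code
> (prob_improvement() is a hypothetical call into that probabilistic
> learning system, and the threshold is an arbitrary example):
>
>     # Accept a rewrite when it is very probably, not provably, an improvement.
>     CONFIDENCE_THRESHOLD = 0.999    # arbitrary example value
>
>     def maybe_accept(agent, candidate):
>         p = agent.prob_improvement(candidate)   # hypothetical estimate
>         if p >= CONFIDENCE_THRESHOLD:
>             return candidate                    # adopt the probably-better self
>         return agent                            # otherwise keep the current one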
>
> For the sake of concreteness, let's say that the utility value is some
> (probably very complex) attempt to logically describe Eliezer-style
> Friendliness, and that the probabilistic learning system is an
> approximation of AIXI (which the system will improve over time along
> with everything else). (These two choices don't reflect my personal
> tastes; they are just examples.)
>
> By tweaking the allowances the system makes, we might have either a
> slow self-improver that is, say, 99.999% certain to make only genuine
> improvements to itself over the next 100 years, or a faster
> self-improver that carries only a 50% guarantee.
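>
> A back-of-the-envelope check on that tradeoff, assuming
> (unrealistically) that each accepted modification is independently
> correct with probability p and that the machine makes some fixed
> number of modifications:
>
>     # If all k modifications must be genuine improvements with overall
>     # probability q, each step needs p >= q**(1/k).
>     k = 10_000                      # hypothetical number of modifications
>     for q in (0.99999, 0.5):
>         p = q ** (1.0 / k)
>         print(f"overall guarantee {q}: per-step p >= {p:.9f}")
>     # roughly: 0.99999 overall needs p >= 1 - 1e-9 per step,
>     # while 0.5 overall only needs about 0.99993 per step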
>
> Does this satisfy your criteria?
>
> On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>> Abram Demski <[EMAIL PROTECTED]> wrote:
>>>Matt,
>>>
>>>What is your opinion on Goedel machines?
>>>
>>> http://www.idsia.ch/~juergen/goedelmachine.html
>>
>>
>> Thanks for the link. If I understand correctly, this is a form of bounded
>> RSI, so it could not lead to a singularity. A Goedel machine is
>> functionally equivalent to AIXI^tl in that it finds the optimal
>> reinforcement learning solution given a fixed environment and utility
>> function. The difference is that AIXI^tl does a brute-force search of all
>> machines up to length l, running each for time t, so it runs in O(t 2^l)
>> time. A Goedel machine achieves the same result more efficiently through
>> a series of self-improvements, by proving that each proposed modification
>> (including modifications to its own proof search code) is an actual
>> improvement. It does this by using an instruction set such that it is
>> impossible to construct incorrect proof verification code.
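>>
>> Roughly, that brute-force search looks like this in Python (a sketch
>> only; run_policy() is a hypothetical stand-in for simulating a
>> candidate machine against the environment for t steps):
>>
>>     # Try every machine up to length l for t steps each and keep the best:
>>     # about 2^l candidates, t steps each, hence O(t * 2^l) time.
>>     from itertools import product
>>
>>     def brute_force_search(l, t, environment):
>>         best_program, best_reward = None, float("-inf")
>>         for length in range(1, l + 1):
>>             for bits in product("01", repeat=length):
>>                 program = "".join(bits)
>>                 reward = environment.run_policy(program, steps=t)  # hypothetical
>>                 if reward > best_reward:
>>                     best_program, best_reward = program, reward
>>         return best_program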
>>
>> What I am looking for is unbounded RSI capable of increasing intelligence.
>> A Goedel machine doesn't do this because once it finds a solution, it
>> stops. This is the same problem as a chess playing program that plays
>> randomly modified copies of itself in death matches. At some point, it
>> completely solves the chess problem and stops improving.
>>
>> Ideally we should use a scalable test for intelligence such as Legg and
>> Hutter's universal intelligence, which measures expected accumulated
>> reward over a Solomonoff distribution of environments (random programs).
>> We can't compute this exactly because it requires testing an infinite
>> number of environments, but we can approximate it to arbitrary precision
>> by randomly sampling environments.
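>>
>> A sketch of that sampling scheme (random_program() and run_agent() are
>> hypothetical helpers; drawing the program length geometrically is only
>> a crude stand-in for the 2^-l Solomonoff weighting):
>>
>>     import random
>>
>>     def sampled_universal_intelligence(agent, samples=1000, max_len=64):
>>         total = 0.0
>>         for _ in range(samples):
>>             # each extra bit survives with probability 1/2, so longer
>>             # environments are exponentially less likely
>>             bits = []
>>             while len(bits) < max_len and random.random() < 0.5:
>>                 bits.append(random.choice("01"))
>>             env = random_program("".join(bits))   # hypothetical constructor
>>             total += run_agent(agent, env)        # hypothetical episode reward
>>         return total / samples                    # Monte Carlo estimate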
>>
>> RSI would require a series of increasingly complex test environments,
>> because otherwise there is an exact solution, and RSI would stop once it
>> is found. For any environment with Kolmogorov complexity l, an agent can
>> guess all environments up to length l. But this means that RSI cannot be
>> implemented by a Turing machine: a parent with complexity l cannot test
>> its children, because it cannot create environments with complexity
>> greater than l.
>>
>> RSI would be possible with a true source of randomness. A parent could
>> create arbitrarily complex environments by flipping a coin. In practice,
>> we usually ignore the difference between pseudo-random sources and true
>> random sources. But in the context of Turing machines that can execute
>> exponential complexity algorithms efficiently, we can't do this because
>> the child could easily guess the parent's generator, which has low
>> complexity.
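>>
>> To illustrate: if the parent's "random" environments really come from a
>> pseudo-random generator with a small seed, a child can in principle just
>> enumerate the seeds and predict every environment the parent will ever
>> produce. A sketch, assuming the child knows the generator family:
>>
>>     import random
>>
>>     def recover_parent_seed(observed_bits, seed_bits=32):
>>         # Brute-force the seed of a known low-complexity generator.
>>         for seed in range(2 ** seed_bits):
>>             rng = random.Random(seed)
>>             if all(rng.getrandbits(1) == b for b in observed_bits):
>>                 return seed      # the parent's "randomness" is now predictable
>>         return None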
>>
>> One could argue that the real universe does have true random sources, such
>> as quantum mechanics. I am not convinced. The universe does have a
>> definite quantum state, but it is not possible to know it because a memory
>> within the universe cannot have more information than the universe.
>> Therefore, any theory of physics must appear random to observers inside it.
>>  -- Matt Mahoney, [EMAIL PROTECTED]
>>
>>
>>
>> ----- Original Message ----
>> From: Abram Demski <[EMAIL PROTECTED]>
>> To: agi@v2.listbox.com
>> Sent: Monday, August 25, 2008 3:30:59 PM
>> Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
>> Necessity of Embodiment)
>>
>> Matt,
>>
>> What is your opinion on Goedel machines?
>>
>> http://www.idsia.ch/~juergen/goedelmachine.html
>>
>> --Abram
>>
>> On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney <[EMAIL PROTECTED]>
>> wrote:
>>> Eric Burton <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>>>> These have profound impacts on AGI design. First, AIXI is (provably)
>>>>> not computable, which means there is no easy shortcut to AGI. Second,
>>>>> universal intelligence is not computable because it requires testing in
>>>>> an infinite number of environments. Since there is no other well
>>>>> accepted test of intelligence above human level, it casts doubt on the
>>>>> main premise of the singularity: that if humans can create agents with
>>>>> greater than human intelligence, then so can those agents.
>>>>
>>>>I don't know for sure that these statements logically follow from one
>>>>another.
>>>
>>> They don't. I cannot prove that there is no non-evolutionary model of
>>> recursive self improvement (RSI). Nor can I prove that there is. But it
>>> is a question we need to answer before an evolutionary model becomes
>>> technically feasible, because an evolutionary model is definitely
>>> unfriendly.
>>>
>>>>Higher intelligence bootstrapping itself has already been proven on
>>>>Earth. Presumably it can happen in a simulation space as well, right?
>>>
>>> If you mean the evolution of humans, that is not an example of RSI. One
>>> requirement of friendly AI is that an AI cannot alter its human-designed
>>> goals. (Another is that we get the goals right, which is unsolved).
>>> However, in an evolutionary environment, the parents do not get to choose
>>> the goals of their children. Evolution chooses goals that maximize
>>> reproductive fitness, regardless of what you want.
>>>
>>> I have challenged this list as well as the singularity and SL4 lists to
>>> come up with an example of a mathematical, software, biological, or
>>> physical example of RSI, or at least a plausible argument that one could
>>> be created, and nobody has. To qualify, an agent has to modify itself or
>>> create a more intelligent copy of itself according to an intelligence
>>> test chosen by the original. The following are not examples of RSI:
>>>
>>> 1. Evolution of life, including humans.
>>> 2. Emergence of language, culture, writing, communication technology, and
>>> computers.
>>> 3. A chess playing (or tic-tac-toe, or factoring, or SAT solving)
>>> program that makes modified copies of itself by randomly flipping bits
>>> in a compressed representation of its source code, and playing its
>>> copies in death matches (see the sketch after this list).
>>> 4. Selective breeding of children for those that get higher grades in
>>> school.
>>> 5. Genetic engineering of humans for larger brains.
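>>>
>>> For item 3, a sketch of the kind of scheme I mean (play_match() is a
>>> hypothetical stand-in for a death match between the two programs):
>>>
>>>     import random, zlib
>>>
>>>     def next_generation(source_code: bytes) -> bytes:
>>>         packed = bytearray(zlib.compress(source_code))
>>>         i = random.randrange(len(packed))
>>>         packed[i] ^= 1 << random.randrange(8)      # flip one random bit
>>>         try:
>>>             child = zlib.decompress(bytes(packed)) # may be corrupted
>>>         except zlib.error:
>>>             return source_code                     # stillborn mutant
>>>         return child if play_match(child, source_code) else source_code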
>>>
>>> 1 fails because evolution is smarter than all of human civilization if
>>> you measure intelligence in bits of memory. A model of evolution uses
>>> 10^37 bits (10^10 bits of DNA per cell x 10^14 cells in the human body x
>>> 10^10 humans x 10^3 ratio of biomass to human mass). Human civilization
>>> has at most 10^25 bits (10^15 synapses in the human brain x 10^10
>>> humans).
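>>>
>>> (A quick check of that arithmetic; the exponents simply add.)
>>>
>>>     evolution = 10**10 * 10**14 * 10**10 * 10**3  # DNA bits x cells x humans x biomass ratio
>>>     civilization = 10**15 * 10**10                # synapses per brain x humans
>>>     assert evolution == 10**37 and civilization == 10**25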
>>>
>>> 2 fails because individual humans are not getting smarter with each
>>> generation, at least not nearly as fast as civilization is advancing.
>>> Rather, there are more humans, and we are getting better organized
>>> through specialization of tasks. Human brains are not much different than
>>> they were 10,000 years ago.
>>>
>>> 3 fails because there are no known classes of problems that are provably
>>> hard to solve but easy to verify. Tic-tac-toe and chess have bounded
>>> complexity. It has not been proven that factoring is harder than
>>> multiplication. We don't know that P != NP, and even if we did, many
>>> NP-complete problems have special cases that are easy to solve, and we
>>> don't know how to program the parent to avoid these cases through
>>> successive generations.
>>>
>>> 4 fails because there is no evidence that, above a certain level (about
>>> IQ 200), childhood intelligence correlates with adult success. The
>>> problem is that adults of average intelligence can't agree on how
>>> success should be measured*.
>>>
>>> 5 fails for the same reason.
>>>
>>> *For example, the average person recognizes Einstein as a genius not
>>> because they are
>>> awed by his theories of general relativity, but because other people
>>> have said so. If you just read his papers (without understanding their
>>> great insights) and knew that he never learned to drive a car, you might
>>> conclude differently.
>>>
>>>  -- Matt Mahoney, [EMAIL PROTECTED]