-- Matt Mahoney, [EMAIL PROTECTED]
--- On Wed, 9/10/08, Rene de Visser <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I propose the following.
>
> Define
>
> _a = (output - resource_usage) / resource_usage ; a measure of efficiency
>
> _b = d_a / dt ; i.e. the derivative of _a with respect to time
>
> where output and resource_usage are appropriately calculated depending
> on context (scarcity/need).
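[As a concrete sketch of the proposed measure — the function names and the
finite-difference approximation of d_a/dt are my own illustration, not part
of the proposal:]

    #include <stdio.h>

    /* _a = (output - resource_usage) / resource_usage, where both inputs
       are assumed to be scalar scores supplied by some context-dependent
       evaluation (not specified in the proposal). */
    double efficiency(double output, double resource_usage) {
        return (output - resource_usage) / resource_usage;
    }

    /* _b = d_a / dt, approximated by a finite difference over one
       measurement interval of length dt. */
    double efficiency_gain(double a_prev, double a_now, double dt) {
        return (a_now - a_prev) / dt;
    }

    int main(void) {
        double a0 = efficiency(10.0, 5.0);   /* (10-5)/5 = 1.0 */
        double a1 = efficiency(15.0, 5.0);   /* (15-5)/5 = 2.0 */
        printf("_a: %.2f -> %.2f, _b = %.2f\n",
               a0, a1, efficiency_gain(a0, a1, 1.0));
        return 0;
    }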
This leads to programs like:

    #include <stdio.h>
    int main() { while (1) printf("paperclip\n"); }

which output lots of text without using many resources, and are therefore
"intelligent" by this measure.
Unfortunately we cannot define what output is useful above our own intelligence
level. We have to rely on external tests by systems that are more complex than
a single human brain. I know of two such systems. One is the economy, which
measures intelligence in dollars. The other is evolution, which measures
intelligence in number of descendants.
> We could then consider an AGI as a system that, given a series of function
> definitions f(n) from a particular domain D and the ability to evaluate
> these functions over a set of parameters _x to give an output _y,
>
> improves its efficiency in calculating _x for a given _y and f(n), both
> for n fixed and for related f(n) (i.e. an inverse problem);
>
> i.e. _y = eval f(n) _x ; the application of a particular function f(n)
> from D to _x.
>
> With this framework, measuring resource usage is a necessary part of
> measuring intelligence, as is measuring change in efficiency over time
> (learning).
>
> What work already exists in this direction?
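[A minimal sketch of the inverse-problem framing above. The toy family
f_n(x) = x*x + n, the search bound, and the use of evaluation count as the
resource measure are all my own illustration; the proposal leaves f(n), _x,
and _y abstract:]

    #include <stdio.h>

    /* A toy family of functions f(n) over domain D: f_n(x) = x*x + n. */
    long f(int n, long x) { return x * x + n; }

    /* The inverse problem: search for _x with eval f(n) _x == _y.
       Returns the number of evaluations spent (the resource usage),
       or -1 if no solution exists in the search range. */
    long invert(int n, long y, long *x_out) {
        long evals = 0;
        for (long x = 0; x <= 1000; x++) {
            evals++;
            if (f(n, x) == y) { *x_out = x; return evals; }
        }
        return -1;
    }

    int main(void) {
        long x;
        long cost = invert(3, 52, &x);   /* 7*7 + 3 = 52 */
        printf("x = %ld found in %ld evaluations\n", x, cost);
        return 0;
    }

A smarter solver (e.g. one that learned to search near sqrt(y - n)) would
spend fewer evaluations for the same _y, which is exactly the improvement in
efficiency the framework proposes to measure.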
Legg and Hutter have proposed a definition of universal intelligence as the
expected accumulated reward in an environment simulated by random programs [1].
In practice we have to randomly sample from the infinite set of possible
environments. Alternatively, the Turing test [2] is widely accepted, and is
equivalent to text compression [3]. But the Turing test only defines human
level intelligence, nothing higher. Universal intelligence avoids this problem,
but we don't know which environments are useful. If we knew that, we would
already be that intelligent, or at least have an iterative process to get there.

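[For reference, Legg and Hutter's measure in [1] is, transcribing from
memory (check the paper for the exact notation):]

    % Universal intelligence of agent \pi: expected total reward
    % V^\pi_\mu over all computable environments \mu in E, weighted by
    % simplicity, where K(\mu) is the Kolmogorov complexity of \mu.
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}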
I have asked this list as well as the singularity and SL4 lists whether there
are any non-evolutionary models (mathematical, software, physical, or
biological) for recursive self improvement (RSI), i.e. where the parent and not
the environment decides what the goal is and measures progress toward it. But
as far as I know, the answer is no.

Your example is one of a type that I have proposed, in which an agent chooses a
mathematical puzzle that is hard to solve but easy to verify, generates a
random puzzle, then makes randomly modified copies of itself and competes to
the death with the copies to solve the puzzle first. There are many problems
which we believe to be hard to solve but easy to check, such as subset-sum,
chess, factoring, cryptanalysis, and data compression, but none that are
provably hard.

Also, you can't just proceed on the assumption that a problem is hard. You have
to prove that a problem is hard when scaled to arbitrarily high levels of
intelligence (as measured by ability to solve it). For example, you might
choose subset-sum (given a set of integers, find a nonempty subset that sums
to 0) without waiting for a proof that P != NP, and say that you have RSI if
it is true. But that is not
enough. You also have to choose puzzles in such a way as to avoid shortcuts.
For example you have to avoid sets that contain both x and -x, and sets with
all positive numbers (provably no solution). More intelligent agents could
exploit more subtle shortcuts, which means that agents have to get smarter to
generate puzzles that avoid them. However, *that* aspect of intelligence is not
the one being optimized.
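[A minimal sketch of the generate-and-verify step. The instance size, value
range, and the two shortcut checks are my own illustration; the subtler
shortcuts discussed above are exactly what this sketch does not handle:]

    #include <stdio.h>
    #include <stdlib.h>

    #define N 8

    /* Reject instances with the trivial shortcuts named above: a zero
       element, a pair {x, -x}, or an all-positive / all-negative set
       (which provably has no solution). */
    int has_trivial_shortcut(const int *s, int n) {
        int pos = 0, neg = 0;
        for (int i = 0; i < n; i++) {
            if (s[i] == 0) return 1;           /* {0} is itself a solution */
            if (s[i] > 0) pos++; else neg++;
            for (int j = i + 1; j < n; j++)
                if (s[i] == -s[j]) return 1;   /* {x, -x} sums to 0 */
        }
        return pos == n || neg == n;           /* provably no solution */
    }

    /* Verification is easy: sum the subset selected by the bit mask.
       Solving is the (believed) hard direction. */
    int verifies(const int *s, int n, unsigned mask) {
        long sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1u << i)) sum += s[i];
        return mask != 0 && sum == 0;
    }

    int main(void) {
        int s[N];
        do {
            for (int i = 0; i < N; i++)
                s[i] = rand() % 201 - 100;     /* integers in [-100, 100] */
        } while (has_trivial_shortcut(s, N));
        printf("instance accepted; each candidate subset verifies in O(n)\n");
        return 0;
    }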
The question of RSI is obviously important to friendly AI. If RSI is possible,
then the problem is how to make the seed AI friendly and make its goal stable
through improvement, which is a very hard problem. If RSI is not possible, then
we know that friendliness is not just hard, it is impossible; intelligence will
be a strictly evolutionary process.
References
1. Legg, Shane, and Marcus Hutter (2006), A Formal Measure of Machine
Intelligence, Proc. Annual machine learning conference of Belgium and The
Netherlands (Benelearn-2006). Ghent, 2006.
http://www.vetta.org/documents/ui_benelearn.pdf
2. Turing, A. M., (1950) Computing Machinery and Intelligence, Mind, 59:433-460.
3. http://cs.fit.edu/~mmahoney/compression/rationale.html