> Eliezer/Ben,
>
> When you've had time to draw breath can you explain, in non-obscure,
> non-mathematical language, what the implications of the AIXI-tl
> discussion are?
>
> Thanks.
>
> Cheers, Philip


Here's a brief attempt...

AIXItl is an impractical AGI software design, which basically consists of
two parts:

* a metaprogram
* an operating program

The operating program controls the system's actions.  The metaprogram works by
searching the set of all programs of size less than l that finish running in
less than t time steps, finding the best one, and installing this best one as
the operating program.
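
To make the two-part structure concrete, here is a toy Python sketch.
Everything in it -- the lookup-table "programs", the toy reward score, the
specific parameters -- is a made-up stand-in chosen just to show the shape of
the control flow, not the actual AIXItl construction:

from itertools import product

# Toy sketch of the metaprogram / operating-program split.  Real AIXItl
# searches arbitrary programs of length <= l that halt within t steps and
# scores them with a formal expected-reward estimate; here "programs" are
# just 0/1 lookup tables and the score is a toy match count.

def enumerate_programs(l):
    """Every toy 'program' of length <= l (a tuple of 0/1 actions)."""
    for length in range(1, l + 1):
        yield from product([0, 1], repeat=length)   # roughly 2^l candidates

def run(program, context, t):
    """'Simulate' a program for at most t steps; these toy programs finish
    in one step, but this is where the t cutoff would apply."""
    if t < 1:
        return None                                 # would not finish in time
    return program[context % len(program)]

def toy_reward(program, history, t):
    """Toy stand-in for the expected-reward estimate: +1 for each past
    observation the program would have matched."""
    return sum(1 for i, obs in enumerate(history) if run(program, i, t) == obs)

def metaprogram(l, t, history):
    """Search the whole program space and install the best candidate."""
    return max(enumerate_programs(l), key=lambda p: toy_reward(p, history, t))

def agent_step(l, t, history, context):
    """One cycle of the agent: redo the brute-force search from scratch,
    then let the winning operating program choose the action."""
    operating_program = metaprogram(l, t, history)
    return run(operating_program, context, t)

print(agent_step(l=4, t=10, history=[1, 1, 0, 1], context=4))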

Clearly this is a very slow approach to AI since it has to search a huge
space of programs each time it does anything.

There is a theorem that says...

Given any AI system at all, if you give AIXItl a big enough t and l, then it
can outperform the other AI system.  (Roughly speaking, this is because once
t and l are big enough, the other system's own program is among the candidates
the metaprogram searches.)

Note that this is an unfair contest, because the AIXItl is effectively being
given a lot more compute power than the other system.

But basically, what the theorem shows is that if you don't need to worry
about computing resources, then AI design is trivial -- you can just use
AIXItl, which is a very simple program.

This is not pragmatically useful at all, because in reality we DO have to
worry about computing resources.

What Eliezer has pointed out is that AIXItl's are bad at figuring out what
each other are going to do.

If you put a bunch of AIXItl's in a situation where they have to figure out
what each other are going to do, they will probably fail.  The reason is
that what each AIXItl does is to evaluate a lot of programs that run much
faster than it does, and choose one to be its operating program.  An AIXItl is
not configured to study programs that are as slow as it is, so it's not
configured to model other AIXItl's, i.e. its own clones or other programs of
similar complexity to itself.
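
A rough way to see the resource mismatch, with purely illustrative numbers
(the ~2^l candidate count and the parameters below are my own back-of-the-
envelope assumptions, not figures from the theorem):

# Illustrative arithmetic for the self-modeling problem: an AIXItl grants
# each candidate program at most t steps, but its own decision procedure
# costs roughly (number of candidates) * t steps, so a clone of itself can
# never fit inside the per-candidate budget.

l, t = 20, 1_000                              # arbitrary example parameters

candidates = 2 ** l                           # rough count of programs of length <= l
per_candidate_budget = t                      # steps granted to any one candidate
cost_of_one_clone_decision = candidates * t   # what simulating a clone would need

print(f"budget for any one candidate : {per_candidate_budget:,} steps")
print(f"cost of simulating a clone   : ~{cost_of_one_clone_decision:,} steps")
print(f"shortfall factor             : ~{candidates:,}x")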

On the other hand, humans are dumber than AIXItl's (for big t and l), but
they are smarter at figuring out what *each other* are going to do, because
they are built to evaluate programs (other humans) that are roughly as slow
as they are.

This is a technical reflection of the basic truth that

* just because one AI system is a lot smarter than another when given any
problem of fixed complexity to solve
* doesn't mean the smarter AI system is better at figuring out and
interacting with others of *its kind* than the dumber one is at figuring
out and interacting with others of *its kind*.

Of course, I glossed over a good bit in trying to summarize the ideas
nonmathematically...

In this way, Novamentes are more like humans than AIXItl's.

-- Ben G
