Eliezer,

> A (selfish) human upload can engage in complex cooperative
> strategies with
> an exact (selfish) clone, and this ability is not accessible to AIXI-tl,
> since AIXI-tl itself is not tl-bounded and therefore cannot be simulated
> by AIXI-tl, nor does AIXI-tl have any means of abstractly
> representing the
> concept "a copy of myself".  Similarly, AIXI is not computable and
> therefore cannot be simulated by AIXI.  Thus both AIXI and AIXI-tl break
> down in dealing with a physical environment that contains one or more
> copies of them.  You might say that AIXI and AIXI-tl can both do anything
> except recognize themselves in a mirror.

I disagree with the bit about 'nor does AIXI-tl have any means of abstractly
representing the concept "a copy of myself".'

It seems to me that AIXI-tl is capable of running programs that contain such
an abstract representation.  Why not?  If t and l are large enough, it can
run programs vastly more complex than a human brain upload...

For example, an AIXI-tl can run a program that contains the AIXI-tl
algorithm, as described in Hutter's paper, with t and l left as free
variables.  This program can then carry out reasoning, using predicate logic,
about AIXI-tl in general and about AIXI-tl for various values of t and l.
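
Just to make this concrete, here is a toy Python sketch of the sort of
program I mean.  It is purely illustrative (the class name, the fact strings,
and the quoted time bound are my own shorthand, not Hutter's formalism): it
holds an abstract description of AIXI-tl with t and l as free parameters and
derives a few simple statements about it once they are instantiated.

# Purely illustrative toy, *not* Hutter's construction: a program whose data
# includes an abstract, parameterized description of AIXI-tl (t and l left
# free), about which it can then derive simple schematic statements.
from dataclasses import dataclass
from itertools import product
from typing import Optional

@dataclass(frozen=True)
class AIXItlSpec:
    """Abstract description of AIXI-tl; t and l are free until instantiated."""
    t: Optional[int] = None   # per-cycle time bound on candidate programs
    l: Optional[int] = None   # length bound (in bits) on candidate programs

    def facts(self):
        """Derive a few schematic facts about the described agent."""
        facts = {"agent(AIXI_tl)", "resource_bounded(AIXI_tl)"}
        if self.l is not None:
            # At most 2^(l+1) - 1 binary programs of length <= l exist.
            facts.add(f"num_candidate_programs(AIXI_tl) <= {2 ** (self.l + 1) - 1}")
        if self.t is not None and self.l is not None:
            # Schematic restatement of the per-cycle cost quoted in Hutter's paper.
            facts.add(f"per_cycle_time(AIXI_tl) = O({self.t} * 2**{self.l})")
        return facts

# "Reason" about AIXI-tl for various values of t and l.
for t, l in product([100, 1000], [8, 16]):
    print(f"t={t}, l={l}:", sorted(AIXItlSpec(t, l).facts()))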

Similarly, AIXI can run a program that contains a mathematical description
of AIXI similar to the one in Hutter's paper.  This program can then prove
theorems about AIXI using predicate logic.

For instance, if AIXI were rewarded for proving math theorems about AGI,
eventually it would presumably learn to prove theorems about AIXI, extending
Hutter's theorems and so forth.

> The simplest case is the one-shot Prisoner's Dilemma against your own
> exact clone.  It's pretty easy to formalize this challenge as a
> computation that accepts either a human upload or an AIXI-tl.  This
> obviously breaks the AIXI-tl formalism.  Does it break AIXI-tl?  This
> question is more complex than you might think.  For simple problems,
> there's a nonobvious way for AIXI-tl to stumble onto incorrect hypotheses
> which imply cooperative strategies, such that these hypotheses are stable
> under the further evidence then received.  I would expect there to be
> classes of complex cooperative problems in which the chaotic attractor
> AIXI-tl converges to is suboptimal, but I have not proved it.  It is
> definitely true that the physical problem breaks the AIXI formalism and
> that a human upload can straightforwardly converge to optimal cooperative
> strategies based on a model of reality which is more correct than any
> AIXI-tl is capable of achieving.
>
> Ultimately AIXI's decision process breaks down in our physical universe
> because AIXI models an environmental reality with which it interacts,
> instead of modeling a naturalistic reality within which it is embedded.
> It's one of two major formal differences between AIXI's foundations and
> Novamente's.  Unfortunately there is a third foundational difference
> between AIXI and a Friendly AI.

I don't agree at all.

In a Prisoner's Dilemma between two AIXI-tls, why can't each one run a
program that (see the sketch below):

* uses an abstract mathematical representation of AIXI-tl, similar to the
one given in the Hutter paper
* uses predicate logic to prove theorems about the behavior of the other
AIXI-tl
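
Here is a crude toy sketch of that kind of reasoning, just to show its shape.
The setup, the payoff numbers, and the symmetry shortcut are my own
illustration, not anything drawn from Hutter's paper and not a claim about
what a real AIXI-tl would actually converge to.

# Another purely illustrative toy: two copies of one program play a one-shot
# Prisoner's Dilemma, and each decides by reasoning from an abstract
# description of its opponent rather than by simulating it.

PAYOFFS = {             # row player's payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def choose(my_description, opponent_description):
    """Pick an action from abstract descriptions alone, without simulation."""
    if opponent_description == my_description:
        # Provable from the descriptions alone: an exact clone given the same
        # inputs picks the same action, so only (C, C) and (D, D) are
        # reachable, and (C, C) pays better.
        return "C" if PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")] else "D"
    return "D"          # otherwise fall back to the dominant action

spec = "agent(X) :- runs(abstract_reasoner(t, l))"   # shared abstract description
a1, a2 = choose(spec, spec), choose(spec, spec)
print(a1, a2, "payoff:", PAYOFFS[(a1, a2)])          # -> C C payoff: 3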

How is this so different from what two humans do when reasoning about each
other's behavior?  A given human cannot contain within itself a detailed
model of its own clone; in practice, when a human reasons about the behavior
of its clone, it uses some abstract representation of that clone and does
some precise or uncertain reasoning based on that abstract representation.

-- Ben G