Eliezer,

I will print your message and read it more slowly tomorrow morning when my
brain is better rested.

But I can't resist some replies now, albeit on 4 hours of sleep ;)

> Because AIXI-tl is not an entity deliberately allocating computing power;
> its control process is fixed.  AIXI-tl will model a process that proves
> theorems about AIXI-tl only if that process is the best predictor of the
> environmental information seen so far.

Well... a human's control process is fixed too, in a way.  We cannot rewire
our brains or our biological motivators.  And a human will accurately model
other humans only if their fixed motivators have (directly or indirectly)
led them to do so...

Of course, humans are very different from AIXI-tl, because in humans there
is a gradation from totally hard-wired to totally ephemeral/flexible, whereas
in AIXI-tl there's a rigid dichotomy between the hard-wired control program
and the ephemeral operating program.

In this way Novamentes will be more like humans, but with the flexibility to
change their hard-wired motivators as well, if they REALLY want to...


[snipped out description of problem scenario]

> Lee Corbin can work out his entire policy in step (2), before step (3)
> occurs, knowing that his synchronized other self - whichever one he is -
> is doing the same.

OK -- now, if AIXI-tl started out with the right operating program, it could
do this too, because the program could reason "that other AIXI-tl is gonna do
the same thing as me, so, based on this knowledge, what should I do...."

But you seem to be assuming that

a) the Lee Corbin starts out with a head full of knowledge gained through
experience

b) the AIXI-tl starts out without a reasonable operating program, and has
to learn everything from scratch during the experiment

What if, for the competition, you used a Lee Corbin with a tabula rasa brain,
an infant Lee Corbin?  It wouldn't perform very well, as it wouldn't
even understand the competition.

Of course, if you put a knowledgeable human up against a new baby AIXI-tl,
the knowledgeable human can win an intelligence contest.  You don't need
the Prisoner's Dilemma to prove this.  Just ask them both what 2+2 equals.
The baby AIXI-tl will have no way to know.

Now, if you give the AIXI-tl enough time and experience to learn about
Prisoner's Dilemma situations -- or, to learn about selves and minds and
computer systems -- then it will evolve an operating program that knows
how to reason somewhat like a human does, with concepts like "that other
AIXI-tl is just like me, so it will think and act like I do."
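
To make concrete the kind of reasoning I mean, here's a minimal sketch
(purely illustrative -- the payoff numbers and names are mine, not anything
from the AIXI-tl formalism) of a policy that exploits the "my clone will do
what I do" assumption, written as a few lines of Python:

    # Illustrative sketch: a one-shot Prisoner's Dilemma against an exact
    # clone.  The key (assumed) premise is that the clone's choice mirrors
    # my own, so each action can be evaluated as if it also fixes the
    # clone's action.

    # payoff[(my_action, clone_action)] -> my reward
    PAYOFF = {
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"):    0,
        ("defect",    "cooperate"): 5,
        ("defect",    "defect"):    1,
    }

    def choose_against_clone(actions=("cooperate", "defect")):
        """Pick the best action, assuming the clone mirrors my choice."""
        return max(actions, key=lambda a: PAYOFF[(a, a)])

    print(choose_against_clone())  # -> "cooperate", since 3 > 1

The point is just that this reasoning is a small, learnable program; whether
AIXI-tl's fixed control process would ever promote such a program into its
operating program is exactly what's in dispute.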


> The major point is as follows:  AIXI-tl is unable to arrive at a valid
> predictive model of reality because the sequence of inputs it sees, on
> successive rounds, are being produced by AIXI-tl trying to model the
> inputs using tl-bounded programs, while in fact those inputs are really
> the outputs of the non-tl-bounded AIXI-tl.  If a tl-bounded program
> correctly predicts the inputs seen so far, it will be using some
> inaccurate model of the actual reality, since no tl-bounded program can
> model the actual computational process AIXI-tl uses to select outputs.

Yah, but Lee Corbin can't model (in perfect detail) the actual computational
process the other Lee Corbin uses to select outputs, either.  So what?


> Humans can use a naturalistic representation of a reality in which they
> are embedded, rather than being forced like AIXI-tl to reason about a
> separated environment; consequently humans are capable of rationally
> reasoning about correlations between their internal mental processes and
> other parts of reality, which is the key to the complex cooperation
> problem with your own clone - the realization that you can actually
> *decide* your clone's actions in step (2), if you make the right
> agreements with yourself and keep them.

I don't see why an AIXI-tl with a clever operating program coming into the
competition couldn't make the same realization that a human does.

So your argument is that a human baby mind, exposed ONLY to Prisoner's
Dilemma interactions as its environment, would somehow learn to "realize
it can decide its clone's actions", whereas a baby AIXI-tl mind exposed
only to these interactions cannot carry out this learning?

> (b)  This happens because of a hidden assumption built into the
> formalism,
> wherein AIXI devises a Cartesian model of a separated environmental
> theatre, rather than devising a model of a naturalistic reality that
> includes AIXI.

It seems to me this has to do with the nature of AIXI-tl's operating
program.

With the right operating program, AIXI-tl would model reality in a way that
included AIXI-tl.  Of course, it would only do so if this operating program
were useful to it....

For example, if you wrapped up AIXI-tl in a body with skin and actuators and
sensors, it would find that modeling the world as containing AIXI-tl was a
very useful strategy -- just as baby humans find that modeling the world as
containing themselves is a very useful strategy...

> (c)  There's no obvious way to repair the formalism.  It's been
> diagonalized, and diagonalization is usually fatal.  The AIXI homunculus
> relies on perfectly modeling the environment shown on its Cartesian
> theatre; a naturalistic model includes the agent itself embedded in
> reality, but the reflective part of the model is necessarily imperfect
> (halting problem).

But the reflective part of the human mind is ALSO necessarily imperfect...
I don't see how you've shown AIXI-tl to have a deficiency not also shared
by the human mind's learning algorithms...

> (d)  It seems very likely (though I have not actually proven it) that in
> addition to breaking the formalism, the physical challenge
> actually breaks
> AIXI-tl in the sense that a tl-bounded human outperforms it on complex
> cooperation problems.

I am very unconvinced of this.

> (e)  This conjectured outperformance reflects the human use of a type of
> rational (Bayesian) reasoning apparently closed to AIXI, in that humans
> can reason about correlations between their internal processes
> and distant
> elements of reality, as a consequence of (b) above.

It seems to me that AIXI-tl can reason about correlations between its
internal processes and other elements of reality -- especially if it is
given a "codic modality", i.e. the ability to sense its own internal
processes and reason about them.

It seems like you are arguing that there are problems an embodied,
experienced mind can solve better than a tabula rasa, unembodied mind.
This has nothing to do with the comparison of AIXI-tl to other learning
algorithms, though.

What it questions, I think, is the AIXI-theory characterization of
intelligence as "pure, long-term optimization ability."

-- Ben G

