> Even if a (grown) human is playing PD2, it outperforms AIXI-tl playing
> PD2.

Well, in the long run, I'm not at all sure this is the case.  You haven't
proved this to my satisfaction.

In the short run, it certainly is the case.  But so what?  AIXI-tl is damn
slow at learning, we know that.

The question is whether after enough trials AIXI-tl figures out it's playing
some entity similar to itself and learns how to act accordingly....  If so,
then it's doing what AIXI-tl is supposed to do.
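To make concrete what "figuring out it's playing an entity similar to itself" buys you, here's a toy sketch of the PD2 situation as I understand it (my reading, not Hutter's formalism): one-shot Prisoner's Dilemma against an exact copy of yourself, where an agent that reasons "my clone's move is necessarily my move" picks the best diagonal outcome, while a naive best-responder just defects.

```python
# Toy PD2 sketch (illustrative only): each agent plays the one-shot
# Prisoner's Dilemma against an exact clone of itself.  Standard payoffs.

PAYOFF = {  # (my move, opponent's move) -> my reward
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def naive_best_response(_opponent_source):
    """Models the opponent as an independent fixed player.
    Defection dominates, so this agent always defects."""
    return "D"

def self_modeling_agent(opponent_source):
    """Abstracts over its own internal state: if the opponent runs the
    same program, both moves are necessarily identical, so only the
    diagonal outcomes (C,C) and (D,D) are reachable -- pick the better."""
    if opponent_source == self_modeling_agent.__name__:
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    return "D"

def play_pd2(agent):
    """Run one PD2 round: the opponent is a clone, so its move is
    identical to the agent's own move."""
    move = agent(agent.__name__)
    return PAYOFF[(move, move)]
```

The self-modeling agent nets 3 per round where the naive best-responder nets 1 -- which is the sense in which "acting accordingly" is what a PD2 player ought to learn.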

A human can also learn to solve vision-recognition problems faster than
AIXI-tl, because we're wired for it (as we're wired for social gameplay),
whereas AIXI-tl has to learn it from scratch.


> Humans can recognize a much stronger degree of similarity in human Other
> Minds than AIXI-tl's internal processes are capable of recognizing in any
> other AIXI-tl.

I don't believe that is true.

> Again, as far as I can tell, this
> necessarily requires abstracting over your own internal state and
> recognizing that the outcome of your own (internal) choices are
> necessarily reproduced by a similar computation elsewhere.
> Basically, it
> requires abstracting over your own halting problem to realize that the
> final result of your choice is correlated with that of the process
> simulated, even though you can't fully simulate the causal process
> producing the correlation in advance.  (This doesn't *solve* your own
> halting problem, but at least it enables you to *understand* the
> situation
> you've been put into.)  Except that instead of abstracting over your own
> halting problem, you're abstracting over the process of trying to
> simulate
> another mind trying to simulate you trying to simulate it, where
> the other
> mind is sufficiently similar to your own.  This is a kind of reasoning
> qualitatively closed to AIXI-tl; its control process goes on abortively
> trying to simulate the chain of simulations forever, stopping and
> discarding that prediction as unuseful as soon as it exceeds the t-bound.
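The regress described in that last quoted paragraph can be sketched in a few lines (an illustration of the argument, not of AIXI-tl's actual control process): naive mutual simulation never bottoms out and gets discarded at the resource bound, while the abstraction step yields the correlation immediately.

```python
def simulate_opponent(depth, t_bound):
    """Naive mutual simulation: to predict the opponent, simulate it
    simulating me simulating it...  The regress never bottoms out, so
    once the resource bound (standing in for AIXI-tl's t-bound) is hit,
    the prediction is abandoned as unuseful."""
    if depth > t_bound:
        return None  # prediction discarded
    return simulate_opponent(depth + 1, t_bound)

def abstract_over_self(my_choice):
    """The shortcut the quoted passage describes: without running the
    regress at all, note that an identical computation elsewhere must
    produce an identical result, so the opponent's output simply *is*
    my own output -- the correlation is known without simulation."""
    return my_choice
```

So `simulate_opponent(0, t_bound=500)` burns its whole budget and returns nothing, while `abstract_over_self("C")` returns the correlated answer in one step. The question at issue is whether AIXI-tl's architecture can ever learn the second move.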

OK... here's where the fact that you have a tabula rasa AIXI-tl in a very
limiting environment comes in.

In a richer environment, I don't see why AIXI-tl, after a long enough time,
couldn't learn an operating program that implicitly embodied an abstraction
over its own internal state.

In an environment consisting solely of PD2, it may be that AIXI-tl will
never have the inspiration to learn this kind of operating program.  (I'm
not sure.)

To me, this says mostly that PD2 is an inadequate environment for any
learning system to use, to learn how to become a mind.  If it ain't good
enough for AIXI-tl to use to learn how to become a mind, over a very long
period of time, it probably isn't good for any AI system to use to learn how
to become a mind.

> Anyway... basically, if you're in a real-world situation where the other
> intelligence has *any* information about your internal state, not just
> from direct examination, but from reasoning about your origins, then that
> also breaks the formalism and now a tl-bounded seed AI can outperform
> AIXI-tl on the ordinary (non-quined) problem of cooperation with a
> superintelligence.  The environment can't ever *really* be constant and
> completely separated as Hutter requires.  A physical environment that
> gives rise to an AIXI-tl is different from the environment that
> gives rise
> to a tl-bounded seed AI, and the different material implementations of
> these entities (Lord knows how you'd implement the AIXI-tl) will have
> different side effects, and so on.  All real world problems break the
> Cartesian assumption.  The questions "But are there any kinds of problems
> for which that makes a real difference?" and "Does any
> conceivable kind of
> mind do any better?" can both be answered affirmatively.

Welll....  I agree with only some of this.

The thing is, an AIXI-tl-driven AI embedded in the real world would have a
richer environment to draw on than the impoverished data provided by PD2.
This AI would eventually learn how to model itself and reflect in a rich way
(by learning the right operating program).

However, AIXI-tl is a horribly bad AI algorithm, so it would take a VERY
VERY long time to carry out this learning, of course...

-- Ben
