Ben Goertzel wrote:
>> Even if a (grown) human is playing PD2, it outperforms AIXI-tl
>> playing PD2.
> Well, in the long run, I'm not at all sure this is the case. You
> haven't proved this to my satisfaction.

PD2 is very natural to humans; we can take for granted that humans excel
at PD2. The question is AIXI-tl.

> In the short run, it certainly is the case. But so what? AIXI-tl is
> damn slow at learning, we know that.

AIXI-tl is most certainly not "damn slow" at learning any environment that
can be tl-bounded. For problems that don't break the Cartesian formalism,
AIXI-tl learns only slightly slower than the fastest possible tl-bounded
learner. It's got t2^l computing power, for gossakes! From our
perspective it learns faster than the fastest rate humanly imaginable -

You appear to be thinking of AIXI-tl as a fuzzy little harmless baby being
confronted with some harsh trial. That fuzzy little harmless baby, if the
tl-bound is large enough to simulate Lee Corbin, is wielding something
like 10^10^15 operations per second, which it is using to *among other
things* simulate every imaginable human experience. AIXI-tl is larger
than universes; it contains all possible tl-bounded heavens and all
possible tl-bounded hells. The only question is whether its control
process makes any good use of all that computation.

More things from the list of system properties that Friendliness
programmers should sensitize themselves to: Just because the endless
decillions of alternate Ben Goertzels in torture chambers are screaming
to God to stop it doesn't mean that AIXI-tl's control process cares.

> The question is whether after enough trials AIXI-tl figures out it's
> playing some entity similar to itself and learns how to act
> accordingly.... If so, then it's doing what AIXI-tl is supposed to do.
AIXI-tl *cannot* figure this out because its control process is not
capable of recognizing tl-computable transforms of its own policies and
strategic abilities, *only* tl-computable transforms of its own direct
actions. Yes, it simulates entities who know this; it also simulates
every other possible kind of tl-bounded entity. The question is whether
that internal knowledge registers as an advantage with the control
process; given AIXI-tl's formal definition, it does not appear to.
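
Concretely, the control process under discussion works roughly like this
(a minimal Python sketch from memory of Hutter's construction; the
function name and the policy interface are mine, and real AIXItl
additionally demands a *proof* of each policy's claimed value before
trusting it):

```python
def aixi_tl_step(policies, history, t_steps):
    """One crude AIXItl cycle: rate every tl-bounded candidate
    policy, then emit the action of the best-rated one.

    `policies` stands in for the set of all programs of length <= l;
    each must return (claimed_value, action) within t_steps steps.
    """
    best_value, best_action = float("-inf"), None
    for policy in policies:
        value, action = policy(history, t_steps)
        if value > best_value:
            best_value, best_action = value, action
    return best_action
```

The point of the sketch: the selection criterion ranges over actions and
claimed values only; nothing in it can notice that some candidate policy
is a tl-computable transform of the selection loop itself.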

In my humble opinion, one of the (many) critical skills for creating AI is
learning to recognize what systems *really actually do* and not just what
you project onto them. See also Eliza effect, failure of GOFAI, etc.

> A human can also learn to solve vision recognition problems faster than
> AIXI-tl, because we're wired for it (as we're wired for social
> gameplaying), whereas AIXI-tl has to learn

AIXI-tl learns vision *instantly*. The Kolmogorov complexity of a
visual field is much less than the length of its raw string, and the
compact representation can be computed by a tl-bounded process. It
develops a visual cortex on the
same round it sees its first color picture.
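
The compression claim is easy to check with an ordinary compressor
standing in for an upper bound on Kolmogorov complexity (zlib is my
stand-in here; it has nothing to do with AIXI-tl itself):

```python
import random
import zlib

# A "visual field" with structure: a 256x256 uniform gray image, raw bytes.
structured = bytes([128] * 65536)

# Incompressible noise of the same length, for contrast.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(65536))

print(len(zlib.compress(structured)))  # tiny: the structure is cheap to describe
print(len(zlib.compress(noise)))       # near 65536: no structure to exploit
```

A real visual field isn't uniform gray, but the same asymmetry holds:
whatever regularity the scene has, a short tl-bounded program can
exploit it, and that is all the inductive part of AIXI-tl needs.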

>> Humans can recognize a much stronger degree of similarity in human
>> Other Minds than AIXI-tl's internal processes are capable of
>> recognizing in any other AIXI-tl.
> I don't believe that is true.

Mentally simulate the abstract specification of AIXI-tl instead of using
your intuitions about the behavior of a generic reinforcement process.
Eventually the results you learn will be integrated into your
intuitions, and you'll be able to see directly the dependencies between
specifications and reflective modeling abilities.

> OK... here's where the fact that you have a tabula rasa AIXI-tl in a
> very limiting environment comes in.
> In a richer environment, I don't see why AIXI-tl, after a long enough
> time, couldn't learn an operating program that implicitly embodied an
> abstraction over its own internal state.

Because it is physically or computationally impossible for a tl-bounded
program to access or internally reproduce the previously computed
policies or t2^l strategic ability of AIXI-tl.
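
To put a number on that gap: a tl-bounded program gets t steps per
cycle, while AIXI-tl burns t*2^l. For any l big enough to encode a
mind, the factor is beyond astronomical (the l below is purely
illustrative):

```python
# A tl-bounded program gets t steps per cycle; AIXI-tl gets t * 2**l.
# Even for a modest program length l, the ratio dwarfs any physical resource.
l = 384  # bits; purely illustrative
gap = 2 ** l
print(gap > 10 ** 100)  # the gap alone exceeds a googol
```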

> In an environment consisting solely of PD2, it may be that AIXI-tl will
> never have the inspiration to learn this kind of operating program.
> (I'm not sure.)
> To me, this says mostly that PD2 is an inadequate environment for any
> learning system to use, to learn how to become a mind. If it ain't
> good enough for AIXI-tl to use to learn how to become a mind, over a
> very long period of time, it probably isn't good for any AI system to
> use to learn how to become a mind.

Marcus Hutter has formally proved your intuitions wrong. In any
situation that does *not* break the formalism, AIXI-tl learns to equal
or outperform any other process, despite being a tabula rasa, no matter
how rich or poor its environment.
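
For reference, my paraphrase-from-memory of the AIXItl theorem (consult
Hutter's paper for the exact conditions on "valid" policies) is:

```latex
% For every policy p of length \ell(p) \le l and per-cycle runtime \le t
% whose claimed value bound is provable, AIXItl does at least as well:
\ell(p) \le l,\ \mathrm{time}(p) \le t
  \;\Longrightarrow\; V^{\mathrm{AIXI}tl} \ge V^{p},
\quad \text{at a cost of } t \cdot 2^{l} \text{ computation per cycle.}
```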

>> Anyway... basically, if you're in a real-world situation where the
>> other intelligence has *any* information about your internal state,
>> not just from direct examination, but from reasoning about your
>> origins, then that also breaks the formalism and now a tl-bounded
>> seed AI can outperform AIXI-tl on the ordinary (non-quined) problem
>> of cooperation with a superintelligence. The environment can't ever
>> *really* be constant and completely separated as Hutter requires. A
>> physical environment that gives rise to an AIXI-tl is different from
>> the environment that gives rise to a tl-bounded seed AI, and the
>> different material implementations of these entities (Lord knows how
>> you'd implement the AIXI-tl) will have different side effects, and so
>> on. All real world problems break the Cartesian assumption. The
>> questions "But are there any kinds of problems for which that makes a
>> real difference?" and "Does any conceivable kind of mind do any
>> better?" can both be answered affirmatively.
> Welll.... I agree with only some of this.
> The thing is, an AIXI-tl-driven AI embedded in the real world would
> have a richer environment to draw on than the impoverished data
> provided by PD2. This AI would eventually learn how to model itself and
> reflect in a rich way (by learning the right operating program).
> However, AIXI-tl is a horribly bad AI algorithm, so it would take a
> VERY VERY long time to carry out this learning, of course...

Measured in computing cycles, yes. Measured in rounds of information
required, no. AIXI-tl is defined to run on a very VERY fast computer.
Marcus Hutter has formally proved your intuitions about the requirement
of a rich environment or prior training to be wrong; I am trying to
show that your intuitions about what AIXI-tl is capable of learning are
wrong.

But to follow either Hutter's argument or my own requires mentally reproducing more of the abstract properties of AIXI-tl, given its abstract specification, than your intuitions currently seem to be providing. Do you have a non-intuitive mental simulation mode?

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
