Eliezer S. Yudkowsky wrote:
> Bill Hibbard wrote:
> > On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
> >
> >>It *could* do this but it *doesn't* do this.  Its control process is such
> >>that it follows an iterative trajectory through chaos which is forbidden
> >>to arrive at a truthful solution, though it may converge to a stable
> >>attractor.
> >
> > This is the heart of the fallacy. Neither a human nor an AIXI
> > can know "that his synchronized other self - whichever one
> > he is - is doing the same". All a human or an AIXI can know is
> > its observations. They can estimate but not know the intentions
> > of other minds.
>
> The halting problem establishes that you can never perfectly understand
> your own decision process well enough to predict its decision in advance,
> because you'd have to take into account the decision process including the
> prediction, et cetera, establishing an infinite regress.
>
> However, Corbin doesn't need to know absolutely that his other self is
> synchronized, nor does he need to know his other self's decision in
> advance.  Corbin only needs to establish a probabilistic estimate, good
> enough to guide his actions, that his other self's decision is correlated
> with his *after* the fact.  (I.e., it's not a halting problem where you
> need to predict yourself in advance; you only need to know your own
> decision after the fact.)
>
> AIXI-tl is incapable of doing this for complex cooperative problems
> because its decision process only models tl-bounded things and AIXI-tl is
> not *remotely close* to being tl-bounded.

Now you are using a different argument. Your previous argument was:

> Lee Corbin can work out his entire policy in step (2), before step
> (3) occurs, knowing that his synchronized other self - whichever one
> he is - is doing the same.

Now you have Corbin merely estimating his clone's intentions.
While it is true that AIXI-tl cannot completely simulate itself,
it can also estimate another AIXI-tl's future behavior from its
observed behavior.
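
To make concrete the kind of estimate I have in mind (a minimal
sketch of my own, nothing like AIXI-tl's actual machinery): an agent
can guess its counterpart's next move in a prisoner's dilemma from
the moves it has already seen.

def estimate_cooperation(observed_moves, prior=0.5, prior_weight=2.0):
    """Estimated probability that the other agent cooperates next round,
    with a simple Laplace-style correction so the estimate is defined
    even before any moves have been observed."""
    coops = sum(1 for m in observed_moves if m == "C")
    return (coops + prior * prior_weight) / (len(observed_moves) + prior_weight)

print(estimate_cooperation([]))                    # 0.5 with no observations yet
print(estimate_cooperation(["C", "C", "C", "D"]))  # ~0.67 after mostly cooperation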

Your argument is now that Corbin can do it better. I don't
know if this is true or not.

> . . .
> Let's say that AIXI-tl takes action A in round 1, action B in round 2, and
> action C in round 3, and so on up to action Z in round 26.  There's no
> obvious reason for the sequence {A...Z} to be predictable *even
> approximately* by any of the tl-bounded processes AIXI-tl uses for
> prediction.  Any given action is the result of a tl-bounded policy but the
> *sequence* of *different* tl-bounded policies was chosen by a t2^l process.

Your example sequence is pretty simple and should match a nice,
simple universal Turing machine program in an AIXI-tl, well within
its bounds. Furthermore, two AIXI-tls will probably converge on a
simple sequence in the prisoner's dilemma. But I have no idea
whether they can do it better than Corbin and his clone.
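
To illustrate why (a toy sketch of my own, with made-up candidate
rules rather than a real universal prior): a predictor that prefers
the simplest hypothesis consistent with the observed actions locks
onto the alphabetical rule almost immediately, at negligible cost in
time or program length.

import string

observed = list("ABCDEFG")  # the actions seen so far, one per round

# A few hypothetical candidate rules, listed roughly from simplest to
# more complex; a real induction scheme would enumerate programs instead.
candidates = [
    ("repeat A",  lambda n: "A"),
    ("alphabet",  lambda n: string.ascii_uppercase[n % 26]),
    ("alternate", lambda n: "AB"[n % 2]),
]

def simplest_consistent(candidates, observed):
    """Return the first (i.e. simplest) rule that reproduces the data."""
    for name, rule in candidates:
        if all(rule(n) == x for n, x in enumerate(observed)):
            return name, rule
    return None, None

name, rule = simplest_consistent(candidates, observed)
print(name, rule(len(observed)))  # "alphabet H": predicts the next action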

Bill
