Brian Atkins wrote:
> Ben Goertzel wrote:
>>
>> So your basic point is that, because these clones are acting by
>> simulating programs that finish running in <T timesteps, they're not
>> going to be able to simulate each other very accurately.
>>
>> Whereas, a pair of clones each possessing a more flexible control
>> algorithm could perform better in the game. Because, if a more
>> flexible player wants to simulate his opponent, he can choose to
>> devote nearly ALL his thinking-time inbetween moves to simulating his
>> opponent. Because these more flexible players are not constrained to
>> a rigid control algorithm that divides up their time into little
>> bits, simulating a huge number of fast programs.
>
> From my bystander POV I got something different out of this exchange of
> messages... it appeared to me that Eliezer was not trying to say that
> his point was regarding having more time for simulating, but rather
> that humans possess a qualitatively different "level" of reflectivity
> that allows them to "realize" the situation they're in, and therefore
> come up with a simple strategy that probably doesn't even require much
> simulating of their clone. It is this reflectivity difference that I
> thought was more important to understand... or am I wrong?

The really fundamental difference is that humans can invent new reflective choices in their top-level control processes, choices that correlate with distant reality and so act as actions unavailable to AIXI-tl. This is what's going on when you decide your own clone's strategy in step (2). Corbin is "acting for his clone". He can do this because of a correlation between himself and his environment, a correlation AIXI is unable to take advantage of because AIXI is built on the assumption of a Cartesian theatre.

Being able to simulate processes that think naturalistically doesn't necessarily help; you need to be able to do it in the top level of your control process. Why? Because the only way the Primary and Secondary AIXI-tl could benefit from policies that simulate identical decisions is if the Primary and Secondary chose identical policies, which would require a kind of intelligence in their top-level decision processes that AIXI-tl doesn't have. The Primary and Secondary can only choose identical or sufficiently similar policies by coincidence or through strange attractors, because they don't have the reflective intelligence to do it deliberately. They don't even have enough reflective intelligence to decide on and store complete plans in step (2).
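A toy sketch may make the contrast concrete. The game, the payoffs, and both agents below are hypothetical illustrations of my own invention, not the actual AIXI-tl construction: a "reflective" player uses the top-level premise "my opponent is my clone, so it outputs whatever this very computation outputs," while an "enumerator" player (a crude stand-in for policy enumeration) can only model its opponent as one of a pool of fast programs that cannot include itself.

```python
import random

# Toy one-shot coordination game between an agent and its exact clone.
# Both players pick a move in {0..4}; the payoff is the move's value if
# the moves match, and 0 otherwise. (Hypothetical illustration only --
# not the real AIXI-tl formalism.)

MOVES = range(5)

def payoff(a, b):
    return a if a == b else 0

def reflective_agent():
    # A reflective player can use "my clone will output whatever I
    # output" as a top-level fact, so it just picks the best symmetric
    # move.
    return max(MOVES, key=lambda m: payoff(m, m))

def enumerator_agent(candidate_programs, rng):
    # Crude stand-in for policy enumeration: model the opponent as one
    # of a fixed pool of fast programs, simulate a guess, best-respond.
    # The pool cannot contain the enumerator itself, so its prediction
    # of a true clone is just a guess over the candidates.
    predicted = rng.choice(candidate_programs)()
    return max(MOVES, key=lambda m: payoff(m, predicted))

candidates = [lambda: 0, lambda: 1, lambda: 2, lambda: 3, lambda: 4]

# Reflective clones coordinate on the best move every time.
r = reflective_agent()
print(payoff(r, reflective_agent()))  # both clones output 4 -> payoff 4

# Enumerator clones each best-respond to an independent guess about the
# other, so they coordinate only by coincidence.
wins = sum(payoff(enumerator_agent(candidates, random.Random(i)),
                  enumerator_agent(candidates, random.Random(i + 1000)))
           for i in range(100))
print(wins)
```

The reflective pair scores the maximum on every round; the enumerator pair matches only when their independent guesses happen to coincide, which is the "coincidence or strange attractors" case above.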

In a naturalistic universe, where there is no sharp boundary between the physics of you and the physics of the rest of the world, the capability to invent new top-level internal reflective choices can be pragmatically very important. If there is any breakage at all of the Cartesian boundary, any correlation between your mindstate and the rest of the environment, then properties of distant reality directly correlate with your choices, and you can exploit that correlation to your benefit.

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
