Ben Goertzel wrote:
>> In a naturalistic universe, where there is no sharp boundary between
>> the physics of you and the physics of the rest of the world, the
>> capability to invent new top-level internal reflective choices can be
>> very important, pragmatically, in terms of properties of distant
>> reality that directly correlate with your choice to your benefit, if
>> there's any breakage at all of the Cartesian boundary - any
>> correlation between your mindstate and the rest of the environment.
>
> Unless, you are vastly smarter than the rest of the universe. Then you
> can proceed like an AIXItl and there is no need for top-level internal
> reflective choices ;)

Actually, even if you are vastly smarter than the rest of the entire universe, you may still lose out when dealing with lesser entities (superintelligences, at least, if not humans) who have any information at all about your initial conditions, unless you can make top-level internal reflective choices.

The chance that environmental superintelligences will cooperate with you in PD situations may depend on *their* estimate of *your* ability to generalize over the choice to defect and realize that a similar temptation exists on both sides. In other words, it takes a top-level internal reflective choice to adopt a cooperative ethic on the one-shot complex PD, rather than blindly trying to predict and outwit the environment for maximum gain, which is built into the definition of AIXI-tl's control process. A superintelligence may cooperate with a comparatively small, tl-bounded AI, yet be unable to cooperate with an AIXI-tl, provided there is any inferable information about initial conditions.

In one sense AIXI-tl "wins": it always defects, which formally is a "better" choice than cooperating on the one-shot PD regardless of what the opponent does - assuming that the environment is not correlated with your decisionmaking process. But anyone who knows that this assumption is built into AIXI-tl's initial conditions will always defect against AIXI-tl. A small, tl-bounded AI that can make reflective choices has the capability of adopting a cooperative ethic; provided that both entities know or infer something about the other's initial conditions, they can arrive at a knowably correlated reflective choice to adopt cooperative ethics.
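A toy sketch of the payoff logic, with illustrative (assumed) PD payoffs. The identity check below is a crude stand-in for "any inferable information about the other's initial conditions": a reflective player cooperates exactly when it can verify the opponent runs the same decision procedure, so its choice is knowably correlated with the opponent's. An unconditional defector models the AIXI-tl-style assumption that the environment is uncorrelated with its own decision process.

```python
# Illustrative one-shot PD payoffs (assumed values, standard ordering):
# payoff to "me" given (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation
    ("D", "D"): 1,  # mutual defection
}

def defector(opponent):
    """Treats the opponent as uncorrelated environment; since D strictly
    dominates C against an independent opponent, always defects."""
    return "D"

def mirror_cooperator(opponent):
    """Reflective player: cooperates iff the opponent is knowably running
    this same decision procedure (identity check stands in for inferring
    the opponent's initial conditions)."""
    return "C" if opponent is mirror_cooperator else "D"

def play(a, b):
    """One-shot game: each player decides given (information about) the
    other; returns the pair of payoffs (to a, to b)."""
    move_a, move_b = a(b), b(a)
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

# D dominates C if the choices are independent...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]

# ...yet two mirror cooperators, whose choices are knowably correlated,
# each outscore a pair of unconditional defectors, and the mirror
# cooperator is never suckered by a defector:
print(play(mirror_cooperator, mirror_cooperator))  # (3, 3)
print(play(defector, defector))                    # (1, 1)
print(play(mirror_cooperator, defector))           # (1, 1)
```

The dominance argument and the correlated-cooperation outcome are both true at once; they answer different questions ("what if my choice is independent of theirs?" vs. "what if it is knowably correlated?").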

AIXI-tl can learn the iterated PD, of course; just not the one-shot complex PD.
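A quick sketch of why the iterated case is different, using the same illustrative payoffs and a hypothetical tit-for-tat opponent: in repeated play the correlation is carried by the observable history, so a plain reward-maximizing predictor can notice that defection gets punished and cooperation repaid, with no reflection over its own decision procedure required.

```python
# Same illustrative PD payoffs as before: payoff to "me"
# given (my_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    """Cooperate first, then copy the opponent's last move."""
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):
    return "D"

def always_cooperate(opp_history):
    return "C"

def score(player, opponent, rounds=100):
    """Total payoff to `player` over repeated play; each side sees only
    the other's past moves, so the correlation lives in the history."""
    p_hist, o_hist = [], []
    total = 0
    for _ in range(rounds):
        p_move = player(o_hist)      # player sees opponent's past moves
        o_move = opponent(p_hist)    # and vice versa
        total += PAYOFF[(p_move, o_move)]
        p_hist.append(p_move)
        o_hist.append(o_move)
    return total

# Against tit-for-tat, steady cooperation outscores steady defection
# (300 vs. 5 + 99*1 = 104 over 100 rounds), so the iterated game is
# learnable from reward feedback alone:
print(score(always_cooperate, tit_for_tat))  # 300
print(score(always_defect, tit_for_tat))     # 104
```

In the one-shot game there is no such feedback channel: the only route to cooperation runs through reasoning about the correlation itself, which is exactly the reflective step AIXI-tl's control process excludes.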

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
