Ben Goertzel wrote:
AIXI and AIXItl are systems that are designed to operate with an initial
fixed goal.  As defined, they don't modify the overall goal they try to
achieve, they just try to achieve this fixed goal as well as possible
through adaptively determining their actions.

Basically, at each time step, AIXI searches through the space of all
programs to find the program that, based on its experience, will best
fulfill its given goal.  It then lets this "best program" run and determine
its next action.  After taking that action, it repeats the program-space
search at the next time step, and so on.

AIXItl does the same thing, but it restricts the search to a finite space of
programs of bounded length and runtime, hence it's a computable (but totally
impractical) algorithm.
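The loop Ben describes can be caricatured in a few lines of code. This is a toy sketch, not Hutter's actual formalism: the "program space" here is just every lookup table from the last observation to an action, and each candidate is scored by how much of the recorded reward it would have matched. The environment, the scoring rule, and all names (`score`, `best_action`, the `(observation, action, reward)` history format) are illustrative assumptions, standing in for AIXItl's length-l, time-t bounded program search.

```python
import itertools

# Toy stand-in for AIXItl's bounded search: enumerate a finite space of
# "programs" (here, policies mapping observation -> action), score each
# against past experience, and act on the best one.

ACTIONS = [0, 1]
OBSERVATIONS = [0, 1]

def score(policy, history):
    """Reward the policy would have collected on past experience.

    history: list of (observation, action, reward) triples.  A policy is
    credited only for steps where it agrees with the action actually
    taken -- a crude stand-in for expected future reward.
    """
    return sum(r for (o, a, r) in history if policy[o] == a)

def best_action(history, last_obs):
    # Finite "program space": every mapping from observation to action.
    policies = [dict(zip(OBSERVATIONS, choice))
                for choice in itertools.product(ACTIONS, repeat=len(OBSERVATIONS))]
    best = max(policies, key=lambda p: score(p, history))
    return best[last_obs]

# Example: past experience rewards action 1 after observation 0,
# and action 0 after observation 1.
history = [(0, 1, 1.0), (0, 0, 0.0), (1, 0, 1.0)]
print(best_action(history, 0))  # -> 1
```

The real AIXItl searches over proofs and programs rather than lookup tables, but the shape of the loop is the same: exhaustive bounded search each step, with the goal entering only through the reward signal being scored.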

The harmfulness or benevolence of an AIXI system is therefore closely tied
to the definition of the goal that is given to the system in advance.

Actually, Ben, AIXI and AIXI-tl are both formal systems; there is no internal component of either formalism that corresponds to a "goal definition".  There is only the algorithm that the humans use to determine when, and how hard, they will press the reward button.

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
