Ben Goertzel wrote:
> Bill Hibbard wrote:
>>
>> Solomonoff Induction (http://www.idsia.ch/~marcus/kolmo.htm) provides
>> a good theoretical basis for intelligence, and in that context
>> behavior is determined by only two things:
>>
>> 1. The behavior of the external world.
>> 2. Reinforcement values.
>>
>> Real systems include lots of other stuff, but only to create a
>> computationally efficient approximation to the behavior of Solomonoff
>> Induction (which is basically uncomputable). You can try to build
>> ethics into this "other stuff", but then you aren't "building them in
>> from the start".
>
> In the "Artificial General Intelligence" (formerly known as "Real AI")
> edited volume we're putting together, you can see these connections
> forming...
>
> We have, for example,
>
> * a paper by Marcus Hutter giving a Solomonoff induction based theory
> of general intelligence

Interesting you should mention that. I recently read through Marcus
Hutter's AIXI paper, and while Hutter has done valuable work on a formal
definition of intelligence, AIXI is not a solution to Friendliness (nor
do I have any reason to believe Hutter intended it as one). In fact, as
one who specializes in AI morality, I was immediately struck by two
obvious-seeming conclusions on reading his formal definition of
intelligence:
1) There is a class of physically realizable problems which humans can
solve easily for maximum reward, but which - as far as I can tell - AIXI
cannot solve even in principle.

2) While an AIXI-tl of limited physical and cognitive capabilities might
serve as a useful tool, AIXI is unFriendly and cannot be made Friendly
regardless of *any* pattern of reinforcement delivered during childhood.
Before I post further, is there *anyone* who sees this besides me?
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence