Ben Goertzel wrote:
>
> So, yeah, intuitively it's trivial to build a super powerful AI given
> infinite computational resources.
Intuitively it's trivial to wipe out all life on Earth given infinite
computational resources. Anything more positive than that is nontrivial.
> And I think the proof of this will be trivial when the surrounding math
> concepts are formulated more nicely. Now the proof is complicated
> because the surrounding math concepts are immature and expressed in
> overcomplicated ways.
>
> And work like Hutter's is going to be part of the process of arriving
> at the "right" formulations of the surrounding mathematical concepts
> regarding intelligence, goals, etc.
>
> Anyway, perhaps these comments have made my attitude clearer.
>
> As an AGI designer, Hutter's work tells me NOTHING of any use. In that
> sense I feel it's "not that big a deal." On the other hand, it's
> excellent math/science, and it's part of a large process that may
> eventually lead to a deep useful general theory of AGI. In that sense
> it's certainly worthwhile.
It seems to me that this answer *assumes* that Hutter's work is completely
right, an assumption in conflict with the uneasiness you express in your
previous email. If Novamente can do something AIXI cannot, then Hutter's
work is very highly valuable, because it provides a benchmark against which
that difference becomes clear.
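(For concreteness: as I recall Hutter's definition, and with the notation
only approximate, AIXI at cycle k picks its action by an expectimax
expression over all environment programs q for a universal machine U that
are consistent with the interaction history, out to a horizon m. Roughly:

  \[
    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \bigl( r_k + \cdots + r_m \bigr)
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
  \]

where the a's are actions, the o's and r's are observations and rewards,
and \ell(q) is the length of program q. The point is that the benchmark is
a fully specified mathematical object, even though it assumes unbounded
computation.)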
If you intuitively feel that Novamente has something AIXI doesn't, then
Hutter's work is very highly valuable whether your feeling proves correct
or not, because it's by comparing Novamente against AIXI that you'll learn
what this valuable thing really *is*. This holds true whether the answer
turns out to be "It's capability X that I didn't previously really know
how to build, and hence didn't see as obviously lacking in AIXI" or "It's
capability X that I didn't previously really know how to build, and
hence didn't see as obviously emerging from AIXI".
So do you still feel that Hutter's work tells you nothing of any use?
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
