Ben Goertzel wrote:
>
> It's "right" as mathematics...
>
> I don't think his definition of intelligence is the maximally useful
> one, though I think it's a reasonably OK one.
>
> I have proposed a different but related definition of intelligence,
> before, and have not been entirely satisfied with my own definition,
> either. I like mine better than Hutter's... but I have not proved any
> cool theorems about mine...
Can Hutter's AIXI satisfy your definition?
>> If Novamente can do something AIXI cannot, then Hutter's work is very
>> highly valuable because it provides a benchmark against which this
>> becomes clear.
>>
>> If you intuitively feel that Novamente has something AIXI doesn't,
>> then Hutter's work is very highly valuable whether your feeling
>> proves correct or not, because it's by comparing Novamente against
>> AIXI that you'll learn what this valuable thing really *is*. This
>> holds true whether the answer turns out to be "It's capability X that
>> I didn't previously really know how to build, and hence didn't see as
>> obviously lacking in AIXI" or "It's capability X that I didn't
>> previously really know how to build, and hence didn't see as
>> obviously emerging from AIXI".
>>
>> So do you still feel that Hutter's work tells you nothing of any use?
>
> Well, it hasn't so far.
>
> It may in the future. If it does I'll say so ;-)
>
> The thing is, I (like many others) thought of algorithms equivalent to
> AIXI years ago, and dismissed them as useless. What I didn't do is
> prove anything about these algorithms; I just thought of them and
> ignored them... partly because I didn't see how to prove the
> theorems, and partly because I thought that even once I had proved the
> theorems, I wouldn't have anything pragmatically useful...
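To make "algorithms equivalent to AIXI" concrete: the brute-force idea
you're describing amounts to something like the sketch below. It's a
minimal toy, not Hutter's construction (run_program is an invented
stand-in for a universal machine, and it looks only one step ahead
instead of over AIXI's full horizon; all the names are illustrative):

from itertools import product

MAX_LEN = 10  # program-length bound; true AIXI has no such bound

def run_program(program: str, step: int, action: int) -> int:
    """Toy environment semantics: the reward after taking `action` at
    `step` is a bit read out of the program string. An invented
    stand-in for running a program on a universal Turing machine."""
    return int(program[(2 * step + action) % len(program)])

def consistent(program: str, history: list) -> bool:
    """Does this candidate environment reproduce the observed
    (action, reward) history?"""
    return all(run_program(program, t, a) == r
               for t, (a, r) in enumerate(history))

def aixi_like_action(history: list, actions=(0, 1)) -> int:
    """Pick the action maximizing 2^-length-weighted predicted reward
    over every program up to MAX_LEN consistent with the history, as
    in Solomonoff induction. The loop visits ~2^(MAX_LEN+1) candidate
    programs, which is why this is easy to state and hopeless to run."""
    t = len(history)
    best_action, best_value = actions[0], float("-inf")
    for action in actions:
        value = 0.0
        for length in range(1, MAX_LEN + 1):
            for bits in product("01", repeat=length):
                program = "".join(bits)
                if consistent(program, history):
                    value += 2.0 ** -length * run_program(program, t, action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# After seeing action 1 rewarded twice, the program mixture favors 1.
print(aixi_like_action([(1, 1), (1, 1)]))  # prints 1

The exhaustive enumeration over programs is presumably the "useless"
part you dismissed.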
It's not *about* the theorems. It's about whether the assumptions
**underlying** the theorems are good assumptions to use in AI work. If
Novamente can outdo AIXI, then AIXI's assumptions must be 'off' in some
way, and knowing this *explicitly*, as opposed to having a vague
intuition about it, cannot help but be valuable.
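For concreteness, the construct whose assumptions are at issue is
AIXI's action rule. Up to notation (I'm quoting from memory), Hutter's
expectimax formula is:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_k + \cdots + r_m)
          \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum over programs q is where the environment is assumed to be
computable on a fixed universal machine U; the horizon m is where a
definite lookahead is assumed; and the outer argmax and sums are where
unlimited computing power is assumed. Those are the specific places
where an 'off' assumption would have to live.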
Again, it sounds to me like, in this message, you're taking it for
*granted* that AIXI and Novamente have the same theoretical foundations,
and hence that the only issues are design and how much computing power is
needed. In that case I can see why it would seem intuitively
straightforward to you that (a) Novamente is a better approach than AIXI
and (b) AIXI has nothing to say to you about the pragmatic problem of
designing Novamente, nor are its theorems relevant to building Novamente.
But that's exactly the question I'm asking you. *Do* you believe that
Novamente and AIXI rest on the same foundations?
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
