You can always build the utility function into the assumed universal Turing machine underlying the definition of algorithmic information...
I guess this will improve the learning rate by some additive constant, in the long run ;)

ben

On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox <[email protected]> wrote:
> This has probably been discussed at length, so I would appreciate a
> reference on this:
>
> Why does Legg's definition of intelligence (following on Hutter's AIXI and
> related work) involve a reward function rather than a utility function? For
> this purpose, reward is a function of the world state/history which is
> unknown to the agent, while a utility function is known to the agent.
>
> Even if we replace the former with the latter, we can still have a
> definition of intelligence that integrates optimization capacity over
> all possible utility functions.
>
> What is the real significance of the difference between the two types of
> functions here?
>
> Joshua

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
[email protected]

"When nothing seems to help, I go look at a stonecutter hammering away at his rock, perhaps a hundred times without as much as a crack showing in it. Yet at the hundred and first blow it will split in two, and I know it was not that blow that did it, but all that had gone before."
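[For readers joining the thread: the "additive constant" remark can be sketched in Legg–Hutter notation. This is a rough sketch, not anything from the original message; Upsilon, K, and V are as in Legg and Hutter's universal intelligence measure.]

```latex
% Legg--Hutter universal intelligence of a policy \pi,
% with environment weights given by prefix complexity K_U
% relative to a reference universal Turing machine U:
\[
  \Upsilon_U(\pi) \;=\; \sum_{\mu \in E} 2^{-K_U(\mu)} \, V^{\pi}_{\mu}
\]
% By the invariance theorem, for any two universal machines U and U'
% there is a constant c_{U U'} (independent of \mu) with
\[
  K_{U'}(\mu) \;\le\; K_U(\mu) + c_{U U'}
\]
% So a machine U' with a particular utility/reward convention built in
% shifts each environment's complexity by at most a constant, and hence
% rescales each weight 2^{-K(\mu)} by at most a constant factor:
\[
  \Upsilon_{U'}(\pi) \;\ge\; 2^{-c_{U U'}} \, \Upsilon_U(\pi)
\]
```

In other words, the choice of reference machine (and whatever utility function is wired into it) changes the measure only up to a multiplicative constant in the weights, which is why building the utility function into the UTM cannot change the definition in any asymptotically meaningful way.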
