Defining universal intelligence over all utility functions implies that the 
utility function is unknown to the agent. Otherwise there would be a fixed 
solution.
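
A toy sketch of that point (my own illustration, not from the thread): in a 
two-armed bandit where the "utility" is simply which arm pays off, a policy 
fixed in advance is optimal for one utility but scores poorly averaged over 
all of them, while an agent that learns from reward does well on average.

```python
# Toy illustration (hypothetical names, not from Legg or Hutter's papers):
# "intelligence" scored as average reward over a uniform prior on which
# of two bandit arms pays. Known utility -> a fixed policy suffices;
# unknown utility -> only an adaptive agent scores well on average.

def fixed_policy(history):
    # Always pull arm 0: optimal only if the utility rewards arm 0.
    return 0

def adaptive_policy(history):
    # Try each arm once, then commit to whichever arm paid.
    if len(history) < 2:
        return len(history)
    best = max(history, key=lambda h: h[1])
    return best[0]

def run(policy, paying_arm, steps=10):
    # Total reward of a policy in the environment where `paying_arm` pays 1.
    history, total = [], 0
    for _ in range(steps):
        arm = policy(history)
        reward = 1 if arm == paying_arm else 0
        history.append((arm, reward))
        total += reward
    return total

def universal_score(policy, arms=2, steps=10):
    # Uniform prior over utilities (a stand-in for a simplicity prior).
    return sum(run(policy, a, steps) for a in range(arms)) / arms

print(universal_score(fixed_policy))     # -> 5.0 (perfect on one utility, zero on the other)
print(universal_score(adaptive_policy))  # -> 9.0 (near-optimal on average)
```

The fixed policy is unbeatable once the utility is fixed and known, which is 
why averaging over all utilities is what makes the measurement nontrivial.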

 -- Matt Mahoney, [email protected]




________________________________
From: Joshua Fox <[email protected]>
To: agi <[email protected]>
Sent: Sun, June 27, 2010 4:22:19 PM
Subject: [agi] Reward function vs utility


This has probably been discussed at length, so I will appreciate a reference on 
this:

Why does Legg's definition of intelligence (following Hutter's AIXI and 
related work) involve a reward function rather than a utility function? For 
this purpose, a reward is a function of the world state/history that is unknown 
to the agent, while a utility function is known to the agent. 

Even if we replace the former with the latter, we can still have a definition 
of intelligence that integrates optimization capacity over all possible utility 
functions. 

What is the real significance of the difference between the two types of 
functions here?
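
For reference, the measure under discussion (Legg & Hutter, "Universal 
Intelligence: A Definition of Machine Intelligence") averages expected reward 
over all computable environments, weighted by a simplicity prior:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where E is the class of computable reward-bounded environments, K(mu) is the 
Kolmogorov complexity of environment mu, and V_mu^pi is the expected total 
reward of agent pi in mu. The reward is generated by the environment, so the 
agent cannot know it in advance; that is the feature the question is probing.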

Joshua

