On Mon, Dec 1, 2008 at 8:04 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> The value of AIXI is not that it solves the general intelligence problem,
> but rather it explains why the problem is so hard.

It doesn't explain why it's hard (is "impossible" the same thing as
"hard"?). That you can't solve a problem exactly doesn't mean that there
is no simple satisfactory solution.


> It also justifies a general principle that is
> already used in science and in practical machine learning algorithms:
> to choose the simplest hypothesis that fits the data. It formally defines
> "simple" as the length of the shortest program that outputs a description
> of the hypothesis.

That's Solomonoff's universal induction, a much earlier result. Hutter
generalized Solomonoff induction to sequential decision-making and proved
some new results, but the simplicity prior over hypotheses, and the proof
that prediction based on it converges to the true distribution, are
Solomonoff's.

See http://www.scholarpedia.org/article/Algorithmic_probability for an
introduction.
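
For concreteness, here's a toy sketch of the simplicity-prior idea. This
is my own illustration, not Solomonoff's actual construction (which sums
over all programs on a universal machine and is incomputable): it just
weights each candidate hypothesis by 2^(-description length) and keeps
the ones consistent with the observed data. The hypothesis names and
predicate functions are made up for the example.

# Toy simplicity prior: weight each hypothesis by 2^(-description length),
# keep only hypotheses consistent with the data, then normalize.
def simplicity_posterior(hypotheses, data):
    # hypotheses: list of (program_text, fits) pairs, where
    # fits(data) returns True if the hypothesis reproduces the data.
    weights = {}
    for program_text, fits in hypotheses:
        if fits(data):  # discard hypotheses inconsistent with the data
            weights[program_text] = 2.0 ** -len(program_text)
    if not weights:     # no hypothesis fits
        return {}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# Two made-up hypotheses for the sequence [0, 1, 0, 1, 0, 1]: a short
# "alternate 0 and 1" rule versus a longer rote-memorization "program".
hyps = [
    ("alt01", lambda d: d == [i % 2 for i in range(len(d))]),
    ("memorize010101xyz", lambda d: d == [0, 1, 0, 1, 0, 1]),
]
print(simplicity_posterior(hyps, [0, 1, 0, 1, 0, 1]))

Both hypotheses fit the data, but the shorter one gets almost all of the
posterior weight, which is the "choose the simplest hypothesis" principle
in miniature.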

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

