On Tue, Dec 11, 2012 at 6:25 PM, Tim Tyler <[email protected]> wrote:

> "Differences between Kolmogorov Complexity and Solomonoff Probability: 
> Consequences for AGI"
>
>  - http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_7.pdf
>
> It's Occam's razor refuted :-)

Not really. Both formalize Occam's Razor. Kolmogorov complexity is an
approximation of Solomonoff induction: it keeps only the shortest
program M that outputs the data, where Solomonoff averages over all
such programs, weighting each by 2^-|M|. The approximation is valid
because the shortest M dominates the sum, so the two agree to within
a constant factor. What the paper shows is that the weighted average
gives better predictions than the approximation. This agrees with
what we already knew from many other machine learning experiments. I
use weighted mixtures of predictions in the PAQ compression
algorithm. We knew hundreds of years ago that 12 jurors are
collectively smarter than 1 judge.
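
To make the mixture idea concrete, here is a toy sketch in Python
(nothing like PAQ's real mixer; the models, data, and numbers are
invented for illustration). Each bit predictor is weighted by the
probability it assigned to the data so far, i.e. by 2^-(its code
length in bits), the same role 2^-|M| plays above:

import math

def predict_const(p):
    # Model that always predicts P(next bit = 1) = p.
    return lambda history: p

def predict_repeat(history):
    # Model that expects the next bit to repeat the previous one.
    if not history:
        return 0.5
    return 0.9 if history[-1] == 1 else 0.1

models = [predict_const(0.2), predict_const(0.5), predict_const(0.8),
          predict_repeat]
weights = [1.0 / len(models)] * len(models)   # uniform prior over models

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 10    # toy data, mostly 1s
history = []
mix_loss = 0.0                                 # mixture code length in bits
model_loss = [0.0] * len(models)               # each model's code length

for bit in data:
    preds = [m(history) for m in models]
    # Mixture prediction: weighted average of the individual predictions.
    p_mix = sum(w * p for w, p in zip(weights, preds))
    mix_loss += -math.log2(p_mix if bit else 1.0 - p_mix)
    for i, p in enumerate(preds):
        p_i = p if bit else 1.0 - p
        model_loss[i] += -math.log2(p_i)
        weights[i] *= p_i                      # Bayes update: weight ~ 2^-(code length)
    total = sum(weights)
    weights = [w / total for w in weights]     # renormalize
    history.append(bit)

print("mixture:           %6.1f bits" % mix_loss)
print("best single model: %6.1f bits" % min(model_loss))

With a uniform prior the mixture's code length is guaranteed to stay
within log2(number of models) bits of the best model chosen in
hindsight, while committing to any single model up front risks a much
larger loss. That is the sense in which the weighted average is safer
than a single-program approximation.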

In the second part of the paper the authors speculate that using
mixtures would shift reinforcement learners toward more exploration
relative to exploitation. They don't back this up with experiments,
however. I think the exploration/exploitation tradeoff is a credit
assignment problem unrelated to model mixing.

--
-- Matt Mahoney, [email protected]

