At 10:59 AM +0200 4/3/02, Juergen Schmidhuber wrote:

>The theory of inductive inference is Bayesian, of course.
>But Bayes' rule by itself does not yield Occam's razor.
"By itself?" No one said it did. Of course assumptions must be made. At minimum one always has to choose priors in Bayesian inference. Our paper shows that there is a Bayesian interpretation that yields something very suggestive of Ockham's razor. It is appealing in that if one has a "simple" versus a "complex" hypothesis, "simple" meaning that the prior probability is concentrated and "complex" meaning that it is vague and spread out, "simple" meaning that you don't have many knobs to tweak, "complex" meaning the opposite, then the "simple" hypothesis will be favored over the "complex" one unless the data lie well away from where the "simple" hypothesis has placed its bets. Bayesians distinguish this from Ockham's formulation by calling it the "Bayesian Ockham's razor", recognizing that it is not what William of Ockham wrote, "Entia non sunt multiplicanda sine necessitate" (or one of his other genuine formulations). Please don't read more into our article than is there. "By itself." First you said that the AP "by itself" has no predictive power. I missed the "by itself" so misunderstood you, but when I understood what you were saying I agreed. Now you say that Bayes' rule "by itself" does not yield Ockham's razor. Jim and I never said that it did. I am hard pressed to see how anything nontrivial relating to the real world can be gotten from any principle "by itself," so I don't regard these comments as very profound, or very useful. [Remainder of article snipped] Bill

