I've made a number of updates to my website on Jaynes's book, 
_Probability Theory: The Logic of Science_ 
(http://leuther-analytics.com/jaynes/index.html).  The most significant 
is an extended note on Jaynes's treatment of the Marginalization Paradox 
(Chapter 15), in which I conclude that Jaynes was wrong.  As Jaynes 
does, I focus on the specific example of the change-point problem. 
Here are some of the things I show:

- Bayesian B_2 can obtain his result only by the invalid step of taking 
a divergent integral, which arises because the improper prior over 
\eta makes p(y | z) improper as well.

- If B_2 follows Jaynes's own advice (which Jaynes unaccountably fails 
to follow in this discussion) to solve the problem with proper priors 
and then take the limit of the solutions as the improper prior is 
approached, he gets the same answer as B_1.  (A schematic of this 
limiting procedure is given after the list.)

- One way to see the source of the problem: as we approach the improper 
prior over \eta, p(y | z) retains significant probability mass in 
precisely the region where p(\zeta | y, z) is far from convergence. 
(This is an issue of non-uniform convergence.)
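
For readers who want the structure at a glance, here is a schematic of 
the generic marginalization-paradox setup in the notation above.  (The 
symbols \pi(\eta), \pi(\zeta), \pi_k(\eta), and the subscripted 
posteriors are mine, introduced only to fix ideas; the sampling 
distributions specific to the change-point model are worked out in the 
note itself.)  B_1 uses the full model with the factored prior 
\pi(\eta) \pi(\zeta), where \pi(\eta) is improper, and marginalizes 
out \eta:

    p_1(\zeta | y, z) \propto \pi(\zeta) \int p(y, z | \eta, \zeta) \pi(\eta) \, d\eta ,

which in the paradoxical examples depends on the data only through z. 
B_2 observes that p(z | \eta, \zeta) = p(z | \zeta) and works directly 
with

    p_2(\zeta | z) \propto \pi(\zeta) \, p(z | \zeta) ,

and the two posteriors disagree.  The limiting procedure in the second 
point replaces \pi(\eta) by a sequence of proper priors \pi_k(\eta) 
approaching the improper prior, computes

    p_k(\zeta | y, z) \propto \pi(\zeta) \int p(y, z | \eta, \zeta) \pi_k(\eta) \, d\eta ,

and then lets k grow.  The claim in the note is that this limit 
reproduces B_1's answer, with the third point describing why: the 
convergence of p_k(\zeta | y, z) is non-uniform in y, and p_k(y | z) 
keeps appreciable mass exactly where that convergence has not yet 
taken hold.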

If you have read Chapter 15 of PTLOS or are familiar with the 
Marginalization Paradox, I would greatly appreciate your feedback on 
what I have written.  If you are reading PTLOS and have puzzled over the 
discussion of the MP, I hope the note helps clarify things.
