Let me summarize the discussion. We have:
1) John Bailey: http://www.frontiernet.net/~jmb184/
2) Mike Hardy: http://www-math.mit.edu/~hardy/
3) Herman Rubin: http://www.stat.purdue.edu/people/hrubin/

Bailey: Entropy is an important concept in Bayesian Inference.
Hardy: Few people working in Bayesian Inference care about Entropy.
Rubin: People who use entropy or any other so-called "neutral" priors
are relying on unjustified computational copouts.

My position:
1) Hurray for Bailey!
2) Sure, Mike, but they should know better.
3) I disagree with Rubin's position with all the energy in my
reproductive system.

First of all, as far as we know today, Entropy, Probability, and (more
recently discovered) Codes (as in binary codes) are pretty much aspects
of the same thing. At a fundamental level, Entropy is just the number of
available distinguishable possibilities measured on a log scale
(equivalently, the negative log of the uniform probability), so that
exp(-Entropy) = 1/N = the uniform probability over the space of
distinguishable states. Moreover, there is a one-to-one correspondence
between probability distributions and codes (or rather the code lengths
of prefix-free codes); see, e.g., Grunwald's tutorial at
http://quantrm2.psy.ohio-state.edu/injae/workshop.htm . Thus, anyone who
cares about the meaning and use of Probability theory (Bayesians and
members of the National Rifle Association alike) ought to care about
Entropy and Codes.
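
To make this concrete, here is a minimal sketch in plain Python (my own
illustration, not something taken from Grunwald's tutorial) of the two
claims: exp(-Entropy) recovers the uniform probability 1/N, and Shannon
code lengths -log2 p(x) satisfy the Kraft inequality, i.e. they are
realizable as a prefix-free code.

import math

# Uniform distribution over N distinguishable states: H = log N (nats),
# so exp(-H) gives back the uniform probability 1/N.
N = 8
p_uniform = [1.0 / N] * N
H = -sum(p * math.log(p) for p in p_uniform)
assert abs(math.exp(-H) - 1.0 / N) < 1e-9

# Probability <-> code-length correspondence: rounding -log2 p(x) up to
# integers gives lengths that satisfy the Kraft inequality, hence a
# prefix-free code with exactly those lengths exists.
p = [0.5, 0.25, 0.125, 0.125]
lengths = [math.ceil(-math.log2(pi)) for pi in p]
assert sum(2.0 ** -length for length in lengths) <= 1.0
print("entropy (bits):", -sum(pi * math.log2(pi) for pi in p))
print("code lengths  :", lengths)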

Second. More than seventy (70) years of de Finetti/Savage subjectivism
have produced ZIP beyond beautiful suntans from the coasts of Spain!

Third. The current action in fundamental statistical inference (aside
from computational issues) is about objective (or as objective as
possible) quantifications of prior information. Information geometry,
the MDL principle, Entropic Priors, Bayesian Networks, and Statistical
Learning Theory are pushing the envelope.
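
As a toy illustration of the maximum-entropy construction that comes up
in the quoted thread below, here is a sketch in plain Python; the state
space 0..9, the single prior-mean constraint, and the helper names are
my own choices, not anything from the references above. With no
constraint, maximum entropy returns the uniform prior; with a mean
constraint, it returns a prior of Gibbs form, p_i proportional to
exp(-lam * i).

import math

STATES = list(range(10))

def gibbs(lam):
    # Maximum-entropy form under a single linear (mean) constraint.
    w = [math.exp(-lam * x) for x in STATES]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * px for x, px in zip(STATES, p))

def maxent_with_mean(m, lo=-5.0, hi=5.0, iters=80):
    # Bisect on the multiplier lam so that mean(gibbs(lam)) matches m.
    # mean(gibbs(lam)) is decreasing in lam; assumes roughly 0 < m < 9.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(gibbs(mid)) > m:
            lo = mid
        else:
            hi = mid
    return gibbs(0.5 * (lo + hi))

prior = maxent_with_mean(2.0)   # maxent prior on 0..9 with prior mean 2
print([round(pi, 3) for pi in prior])
print("mean:", round(mean(prior), 3))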

[EMAIL PROTECTED] (Herman Rubin) wrote in message 
news:<a8fhr9$[EMAIL PROTECTED]>...
> In article <[EMAIL PROTECTED]>,
> John Bailey <[EMAIL PROTECTED]> wrote:
> >On 01 Apr 2002 18:21:18 GMT, [EMAIL PROTECTED] (Michael J Hardy) wrote:
>  
> >>> >    Look:  Most practitioners of Bayesian inference probably do not
> >>> >know what entropy is.  That appears to contradict what you said in
> >>> >your posting that I first answered.  Can you dispute that?
>  
>  In an earlier post, John Bailey's response to Hardy's statement was:
> >>> I will definitely dispute the first part.  
> >>> I suppose there may be *practitioners of Bayesian inference
> >>> who are weak on the concept of entropy* but it is clearly and
> >>> unambiguously a part of the theory of its use.
>  
>  Mike Hardy then replied:
> >>     I don't doubt that people you worked with are familiar with
> >>entropy, nor that some people who do Bayesian inference use entropy,
> >>but it is perfectly obvious that such familiarity is not needed in
> >>order to do Bayesian inference.  Why do you call it "clearly and
> >>unambiguously a part of the theory of its use"?
>  
> >In my exposures to Bayesian methodology all have included a discussion
> >of how to determine a neutral Bayesian prior and the use of maximum
> >entropy as a means to that end.
>  
> >John 
> 
> Statistics is not methodology.  Treating it as such causes
> people to use totally inappropriate procedures.
> 
> The first thing is to state the problem, and stating a 
> mathematically convenient formulation can be worse than
> useless.  Bayesian reasoning requires that the USER be
> the provider of the loss-prior combination.  Now one
> might want to use something simpler if it can be proved
> to be reasonably good.
> 
> So we can use least squares without normality, as the
> Gauss-Markov Theorem tells us that the results are just
> about as good without normality as with.  This is not
> true for using mathematically convenient but inappropriate
> priors.  Also, it is not how well the prior is approximated,
> but how well the solution is.
> 
> Bayesian priors should not be "neutral", unless it can
> be shown that not much is lost by using such a prior.
> Conjugate priors, "uninformative" priors, maximum entropy
> priors, as such are unjustified computational copouts.
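
For what it is worth, here is a minimal simulation sketch (again plain
Python, my own construction, not anything from the thread) of the
Gauss-Markov point in the quoted message: ordinary least squares stays
essentially unbiased for the slope even when the errors are decidedly
non-normal. The uniform noise below is an arbitrary choice for
illustration.

import random

def ols_slope(xs, ys):
    # Plain least-squares slope for the model y = a + b*x.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

random.seed(0)
true_b = 2.0
xs = [i / 10.0 for i in range(50)]
estimates = []
for _ in range(2000):
    # Non-normal errors: uniform on [-1, 1], mean zero.
    ys = [1.0 + true_b * x + random.uniform(-1.0, 1.0) for x in xs]
    estimates.append(ols_slope(xs, ys))

# The average estimate lands close to the true slope 2.0 despite the
# non-normal noise, as the Gauss-Markov argument suggests.
print("mean OLS slope over 2000 replications:",
      round(sum(estimates) / len(estimates), 4))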