On 8/22/05 12:53 PM, "Marcus David Collins"
<[EMAIL PROTECTED]> wrote:

> I think many people would disagree, arguing that LS does represent a choice of
> error distribution when it is not otherwise known.  In fact, LS makes several
> assumptions about the errors (errors are independent, have the same variance,
> expectation is zero...; see the Wikipedia page from the original message).
> Just because we do not actively choose an error distribution does not mean
> that one is not chosen.  When we use LS, and claim a "best fit" of the data,
> we are making the assumption that the errors are normal.

This is incorrect. The statistical justification for LS (the Gauss-Markov
theorem) assumes nothing about the form of the error distribution aside from
(1) zero expectation, (2) noncorrelation (*not* independence), and (3) equal
variance.  In fact, weighted least squares can correct exactly for
violations of (2) and (3), so WLS assumes only (1). The error distribution
can certainly be non-normal, and the optimal properties guaranteed by the
G-M theorem will still hold.
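
To make this concrete, here is a quick numpy sketch (my own
illustration, not from anyone's post; the model, seed, and noise
scales are arbitrary).  Even with decidedly non-normal (uniform)
errors, ordinary LS recovers the parameters, and reweighting by
1/sigma restores equal variance when condition (3) fails:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = np.linspace(0.0, 10.0, n)
    X = np.column_stack([np.ones(n), x])    # design: intercept + slope
    beta_true = np.array([2.0, 0.5])

    # Uniform errors: non-normal, but zero mean and equal variance,
    # so the Gauss-Markov conditions hold.
    y = X @ beta_true + rng.uniform(-1.0, 1.0, size=n)
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Violate (3): error spread grows with x.
    sigma = 0.1 + 0.3 * x
    y_het = X @ beta_true + rng.uniform(-1.0, 1.0, size=n) * sigma

    # Weighted LS: scale each row by 1/sigma to equalize variances.
    w = 1.0 / sigma
    beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y_het * w,
                                   rcond=None)

    print("true:", beta_true)  # [2.  0.5]
    print("OLS :", beta_ols)   # close to true despite uniform errors
    print("WLS :", beta_wls)   # close to true despite unequal variance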

> On Mon, 22 Aug 2005, Ian Tickle wrote:

[snip]

>> The ML results are by definition the most likely

This also is not really right. ML gives the parameter value that produces
the observed data with the highest likelihood. That is very different from
giving the parameter value that is most likely, which can only be found via
Bayesian methods. The two approaches produce the same estimate only when
the mode of the Bayesian prior distribution of the parameter coincides
with the ML estimate (with a flat prior, for example, the posterior mode
and the ML estimate are identical).
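
A toy example may help keep the two apart (again my own sketch; the
Beta prior parameters are arbitrary).  For k heads in n coin tosses,
the ML estimate of the heads probability is k/n, while the mode of the
Bayesian posterior under a Beta(a, b) prior is
(k + a - 1)/(n + a + b - 2); with a flat prior the two coincide, and an
informative prior pulls the estimate away from the MLE:

    # ML: the parameter under which the observed data are most likely.
    def mle(k, n):
        return k / n

    # Bayesian posterior mode (MAP): the most likely parameter given
    # the data and a Beta(a, b) prior -- the posterior is
    # Beta(k + a, n - k + b).
    def map_estimate(k, n, a, b):
        return (k + a - 1) / (n + a + b - 2)

    k, n = 7, 10
    print(mle(k, n))                  # 0.70
    print(map_estimate(k, n, 1, 1))   # 0.70, flat prior: same as ML
    print(map_estimate(k, n, 5, 5))   # 0.61, pulled toward the prior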

Cheers,

Douglas


