[EMAIL PROTECTED] (dave martin) wrote:

> Fisher's F-test can be used to quantitatively compare
> two models of data.
I assume that you mean here to treat the ratio of mean square errors as
the test statistic for an F test. I'm not seeing the justification for
this: an ordinary F test assumes that the numerator and denominator are
independent, and that is almost certainly not the case when the errors
come from two models fit to the same data.

More fundamentally, an F test, or any significance test, cannot tell you
what you really want to know. More on this below.

> The F-test can answer the question "Are these two models
> significantly different at the X% level?".

This is an interesting question, but it is certainly not the question
answered by a significance test. You know at the outset that the two
models are different, so with a large enough data set you will almost
certainly reject the null hypothesis (assuming that you can correctly
compute the distribution of the test statistic). So a significance test
mostly tells you whether or not you have a large data set, which is not
very interesting.

All the standard statistics books say that statistical significance is
not the same as practical significance. True enough, but why bother,
then, with hundreds of pages on statistical significance when you know
it's not what you really want?

For what it's worth,
Robert Dodier

-- 
``Whenever I hear bagpipes I want to go home, mainly because my home
doesn't have anyone playing bagpipes in it.'' -- Naich
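
P.S. To make the dependence point concrete, here is a minimal simulation
sketch, assuming numpy and scipy are at hand; the particular pair of
models (mean-only versus straight line, fit to pure noise) is my own
illustration, not anything from the original post. Both mean square
errors are computed from the same responses, so their ratio turns out to
be far more concentrated around 1 than the F distribution the naive test
would compare it to:

    # Illustrative only: made-up models and data, not from the original post.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 30
    x = np.linspace(0.0, 1.0, n)
    n_sim = 20_000

    ratios = np.empty(n_sim)
    for i in range(n_sim):
        y = rng.normal(size=n)            # pure noise: no real structure
        # Model 1: mean only, n - 1 residual degrees of freedom
        mse1 = np.sum((y - y.mean()) ** 2) / (n - 1)
        # Model 2: least-squares straight line, n - 2 residual degrees of freedom
        coefs = np.polyfit(x, y, 1)
        mse2 = np.sum((y - np.polyval(coefs, x)) ** 2) / (n - 2)
        ratios[i] = mse1 / mse2

    # If the ratio really followed F(n - 1, n - 2), about 5% of the
    # simulated values would exceed the nominal 95th percentile.
    # In fact almost none do, because the two MSEs rise and fall together.
    crit = stats.f.ppf(0.95, n - 1, n - 2)
    print("nominal 5% critical value:", round(crit, 2))
    print("observed exceedance rate: ", np.mean(ratios > crit))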
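
And on the large-sample point, another sketch under the same assumptions:
the true slope below is nonzero but practically negligible (it explains
roughly 0.02% of the variance), yet the usual test of "slope = 0" rejects
comfortably once the sample is large enough. The p-value is mostly
telling you about n:

    # Illustrative only: made-up data with a tiny but nonzero true slope.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_slope = 0.05     # real but negligible: about 0.02% of the variance

    for n in (1_000, 10_000, 100_000, 1_000_000):
        x = rng.uniform(0.0, 1.0, size=n)
        y = true_slope * x + rng.normal(size=n)
        p = stats.linregress(x, y).pvalue   # standard test of "slope = 0"
        print(f"n = {n:>9,d}   p-value = {p:.3g}")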
