This happens when you fit mixed-model ANOVAs. The statistics
programs make different assumptions about the error structure and
therefore compute different F values. The issue is described in:
Ayres, M. P., and D. L. Thomas. 1990. Alternative formulations of the
mixed-model ANOVA applied to quantitative genetics. Evolution
44:221-226.
Hocking, R. R. 1973. A discussion of the two-way mixed model. Amer.
Statist. 27:148-152.
McLean, R. A., W. L. Sanders, and W. W. Stroup. 1991. A unified
approach to mixed linear models. Amer. Statist. 45:54-64.
When I ran into this myself, I discussed it with Dr. Brunner, a
professor of statistics at the University of Göttingen. He advised
against the SAS formulas because they rest on the assumption of
negatively correlated interaction terms, which he considered
unlikely.
I deal with the issue by having my statistics program (JMP) calculate
the sums of squares and then computing the rest in Excel, following
the formulas recommended by a statistics text I trust (e.g. Kirk,
Winer, or Zar).
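As a rough illustration of that workflow (sketched in Python rather than Excel; the function name and all the numeric values are invented for the example), here is the hand computation of the F ratios for a two-way design with one fixed and one random factor, using the restricted-model denominators found in texts like Kirk, Winer, and Zar: the fixed effect is tested against the interaction mean square, the random effect against the within-cell error. Formulations based on the unrestricted model test the random effect against the interaction instead, which is one source of the between-package discrepancies discussed above.

```python
# Hand computation of two-way mixed-model F ratios from sums of squares.
# Factor A is fixed, factor B is random. Denominators follow the
# restricted mixed model; this is a sketch, not any package's exact code.

def mixed_model_f(ss_a, df_a, ss_b, df_b, ss_ab, df_ab, ss_error, df_error):
    ms_a = ss_a / df_a            # mean square, fixed factor A
    ms_b = ss_b / df_b            # mean square, random factor B
    ms_ab = ss_ab / df_ab         # mean square, A x B interaction
    ms_error = ss_error / df_error
    f_a = ms_a / ms_ab            # fixed effect: tested against interaction MS
    f_b = ms_b / ms_error         # random effect: tested against error MS
    return f_a, f_b

# Invented example: A with 3 levels, B with 4 levels, within-cell df = 24
f_a, f_b = mixed_model_f(ss_a=120.0, df_a=2,
                         ss_b=90.0, df_b=3,
                         ss_ab=30.0, df_ab=6,
                         ss_error=48.0, df_error=24)
print(f_a, f_b)  # 12.0 15.0 with these made-up sums of squares
```

The point of doing this by hand is exactly that the sums of squares themselves usually agree between packages; it is the choice of denominator (error term) that differs, and that choice is what the cited papers argue about.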
Martin
On 2009-06-10 at 04:09, MaryBeth Voltura wrote:
I am reviewing an old dataset that I had originally analyzed in
Statview (5.0.1), and re-ran some statistics in SPSS (v16.0), with
very different results. I am running an ANOVA on food intake, using
body mass as a covariate, with 3 experimental diet groups. The two
programs produce different sums of squares and use different degrees
of freedom for the independent variables, thus producing very
different p-values.
Has anyone working with these two programs run into anything similar?
BTW, if I run the ANOVA with no covariate, the sums of squares,
F-statistics, and p-values match between Statview and SPSS.
Any ideas?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mary Beth Voltura, Assistant Professor
Department of Biological Sciences
SUNY Cortland
Cortland NY 13045
607-753-2713
[email protected]