"Pedro J. Aphalo" <[EMAIL PROTECTED]> writes:

> Douglas Bates wrote:
> >
> > [EMAIL PROTECTED] (Bjørn-Helge Mevik) writes:
> >
> > > Mona Riihimaki <[EMAIL PROTECTED]> writes:
> > >
> > > > I've done lme-analysis with R; [...] I'd need also the mean squares.
> > >
> > > AFAIK, lme doesn't calculate sums of squares (or mean squares).  It
> > > maximises the likelihood (or restricted likelihood) and uses tests
> > > based on likelihood ratios.
> >
> > Yes - you are correct.
>
> Although the function is called anova.lme, is it still correct to talk
> about "anova results" when referring to the results of these tests?  And
> in the case of the Wald tests in the single-lme-object case?
Anova applied to lme objects generates different types of tests according
to whether it is used with one argument or with more than one argument.
(We took Oscar Wilde's admonition that "Consistency is the last refuge of
the unimaginative" to heart.)

With more than one argument, likelihood ratio statistics and their
p-values are returned.  These are appropriate for comparing models in
which the random-effects structure has changed.  Bear in mind that the
p-values can be conservative because the null hypothesis is usually on
the boundary of a constrained parameter space.

With a single argument, F-tests on terms in the fixed-effects part of the
model are returned.  These tests are conditional on the values of the
parameters determining the random-effects distribution.  This is usually
not a problem because those parameters are asymptotically uncorrelated
with the fixed-effects parameters.

I would refer to the results for more than one argument as "conservative
likelihood ratio tests" or just "likelihood ratio tests".

______________________________________________
[EMAIL PROTECTED] mailing list
http://www.stat.math.ethz.ch/mailman/listinfo/r-help
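For concreteness, a small R sketch of the two calling styles.  The model
formulas and the use of the Orthodont data set shipped with nlme are my
own illustration, not something from the original thread:

```r
## A sketch, assuming the nlme package and its Orthodont data set.
library(nlme)

## Single argument: F-tests on the fixed-effects terms, conditional on
## the estimated parameters of the random-effects distribution.
fm1 <- lme(distance ~ age + Sex, data = Orthodont,
           random = ~ 1 | Subject)
anova(fm1)

## More than one argument: a likelihood ratio test comparing two
## random-effects structures.  The fixed effects are identical in both
## fits, so the default REML fits are comparable here.
fm2 <- update(fm1, random = ~ age | Subject)
anova(fm1, fm2)
```

As noted above, the p-value printed by `anova(fm1, fm2)` can be
conservative, since the null hypothesis (a zero variance for the random
`age` slope) lies on the boundary of the parameter space.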
