OK, I do see that there was a problem in my first email. I have noticed this with repeated-measures designs. Elsewhere, of course, there is only one error term for all factors; with repeated-measures designs this is not the case.


On Friday, July 11, 2003, at 10:00 PM, Spencer Graves wrote:


People tend to get the quickest and most helpful responses when they provide a toy problem that produces what they think are anomalous results.

Here is an admittedly poor example with two within-subject factors, a and b, and subjects s.


a <- factor(rep(c(0, 1), 12))       # within-subject factor a
b <- factor(rep(c(0, 0, 1, 1), 6))  # within-subject factor b
s <- factor(rep(1:6, each = 4))     # 6 subjects, 4 observations each
x <- c(49.5, 62.8, 46.8, 57, 59.8, 58.5, 55.5, 56, 62.8, 55.8, 69.5, 55,
       62, 48.8, 45.5, 44.2, 52, 51.5, 49.8, 48.8, 57.2, 59, 53.2, 56)
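
As a quick check that this is a fully crossed within-subjects layout, with each subject contributing exactly one observation per a-by-b cell, one can tabulate the design:

ftable(table(s, a, b))  # every cell count should be 1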


Now

summary(aov(x ~ a * b + Error(s/(a * b))))

gives a table of results, but if one wants to generate a confidence interval for factor b, one needs to collapse the data and reanalyze as follows:


ss <- aggregate(x, list(s = s, b = b), mean)  # average over the levels of a
summary(aov(x ~ b + Error(s/b), data = ss))

This yields an error term half the size of the one reported for b in the combined ANOVA. I would suggest that the way the SS and MSE are reported is erroneous, since one should be able to use them directly to calculate confidence intervals or make mean comparisons without having to collapse and reanalyze for every effect.
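
For example, here is a sketch of how one might pull the error MS out of the collapsed fit and build a 95% confidence interval for the difference between the two b means (the stratum name "Error: s:b" and the count of 6 subject means per level follow from the collapsed model above):

fit <- aov(x ~ b + Error(s/b), data = ss)
tab <- summary(fit)[["Error: s:b"]][[1]]  # error stratum for b
mse <- tab[nrow(tab), "Mean Sq"]          # residual mean square
dfe <- tab[nrow(tab), "Df"]               # residual df (here 5)
m   <- tapply(ss$x, ss$b, mean)           # one mean per level of b
(m[2] - m[1]) + c(-1, 1) * qt(0.975, dfe) * sqrt(2 * mse / 6)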

Furthermore, I am guessing that this problem makes it impossible to get a correct average MSE that includes the interaction term. OK, far from impossible, but it is very difficult to verify that such a term is correct.
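
If one wanted such a pooled term anyway, a df-weighted average of the b and a:b error strata from the full model is one candidate. This is only a sketch, and whether a pooled MSE is appropriate at all depends on the usual sphericity assumptions:

fit2 <- aov(x ~ a * b + Error(s/(a * b)))
sm   <- summary(fit2)
bt   <- sm[["Error: s:b"]][[1]]    # error stratum for b
abt  <- sm[["Error: s:a:b"]][[1]]  # error stratum for a:b
(bt[nrow(bt), "Sum Sq"] + abt[nrow(abt), "Sum Sq"]) /
  (bt[nrow(bt), "Df"] + abt[nrow(abt), "Df"])  # pooled MSE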

Note that the F for b is the same in the first ANOVA and the second; collapsing over the two levels of a halves both the b sum of squares and its error sum of squares, so the ratio is unchanged.

