My thanks to Andy, James, and Florian for their responses to my question. The replies were, as always, prompt, helpful, and lucid. I have a couple of quick further questions about model comparison.

First, I think all three replies suggested using likelihood ratio tests to assess the significance of a single fixed factor in the model. How reliable is this? As far as I can recall, Baayen, both in his book and in the JML paper, uses this approach only to evaluate random factors, and the paper by Bolker et al. that Andy cited recommends against it for fixed factors. Are there good alternatives?

Second, a quick follow-up question regarding Florian's six-step procedure, reproduced below. In step 5 you suggest I interpret the coefficients in the full _or_ the reduced model. So is it acceptable to look at the coefficients of a factor or an interaction even if that factor or interaction does not "survive" a likelihood ratio test, i.e. does not significantly contribute to the fit of the model?

I hope that makes sense. Thank you again for all the help!

Jakke
1) l <- lmer(logRT ~ A*B + (1+A*B|Subject) + (1+A*B|Item), data)

2) Follow the procedure outlined on our lab blog to figure out which random effects you need: http://hlplab.wordpress.com/2009/05/14/random-effect-should-i-stay-or-should-i-go/

3) Take the resulting model and compare it against a model without the interaction, using anova(l, l.woInteraction).

4) If removal of the interaction is not significant, you could further compare the model against a model with only A (see above).

5) Interpret coefficients in the full model or in the reduced model (I would do the former unless I don't have much data or cannot reduce collinearity, but you may prefer the latter).

6) If you find any of the scripts or references given above useful, cite/refer to them, so that others can find them ;)
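For concreteness, a minimal sketch of steps 1, 3, and 4 in lme4 syntax. The variable names (logRT, A, B, Subject, Item) follow the quoted procedure; the model names l.woInteraction and l.Aonly are illustrative, and step 2 (pruning the random effects) is only indicated by a comment:

```r
library(lme4)

# Step 1: full model with the maximal random-effect structure
l <- lmer(logRT ~ A*B + (1 + A*B | Subject) + (1 + A*B | Item), data = data)

# Step 2: prune random effects following the HLP lab blog procedure
# (not shown here)

# Step 3: drop the fixed-effect interaction and compare; anova() on two
# lmer fits performs a likelihood ratio test (refitting with ML if the
# models were fit with REML)
l.woInteraction <- update(l, . ~ . - A:B)
anova(l, l.woInteraction)

# Step 4: if the interaction does not significantly improve fit,
# further drop B to test a model with only A
l.Aonly <- update(l.woInteraction, . ~ . - B)
anova(l.woInteraction, l.Aonly)
```

Note that update() here changes only the fixed-effects part of the formula; whether to prune the corresponding random slopes as well is exactly what step 2 addresses.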
_______________________________________________ R-lang mailing list [email protected] http://pidgin.ucsd.edu/mailman/listinfo/r-lang
