[R] mixed effects or fixed effects?

2007-01-24 Thread dan kumpik
Hi,

I am running a learning experiment in which both training subjects and 
controls complete a pretest and posttest. All analyses are being 
conducted in R. We are looking to compare two training methodologies, 
and so have run this experiment twice, once with each methodology. 
Methodology is a between-subjects factor. Trying to run this analysis 
with every factor included (i.e., subject as a random factor, and session 
nested within group nested within experiment) seems to me (after having 
tried) clumsy and probably uninterpretable.
My favoured model for the analysis is a linear mixed-effects model, and 
to combine the data meaningfully, I have collated all the pretest data 
for controls and trained subjects from each experiment, and assumed this 
data to represent a population sample for naive subjects for each 
experiment. I have also ditched the posttest data for the controls, and 
assumed the posttest training data to represent a population sample for 
trained subjects for each experiment. I have confirmed the validity of 
these assumptions by ascertaining that a) controls and trained listeners 
did not differ significantly at pretest for either experiment; and b) 
control listeners did not learn significantly between pretest and 
posttest (and therefore their posttest data are not relevant). This was 
done using a linear mixed-effects model for each experiment, with 
subject as a random factor and session (pretest vs posttest) nested 
within Group (trained vs control).
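For reference, each of these per-experiment checks was fitted along the 
following lines (a sketch only; the data frame and variable names here 
stand in for my actual ones):

```r
library(nlme)  # lme() lives in the nlme package

# Per-experiment check: subject as a random intercept,
# session (pre vs post) nested within Group (trained vs control).
# 'expdata', 'score', 'Group', 'session', 'subject' are stand-in names.
check.lme <- lme(score ~ Group/session, random = ~1 | subject,
                 data = expdata)
summary(check.lme)  # Group and Group:session terms give checks a) and b)
```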
Therefore, the model I want to use to analyse the data would ideally be 
a linear mixed-effects model, with subject as a random factor, and 
session (pre vs post) nested within experiment. Note that my removal of 
the Group (Trained vs Control) factor simplifies the model somewhat, and 
makes it more interpretable in terms of evaluating the relative effects 
of each experiment.
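In code, the combined model I have in mind looks roughly like this 
(again only a sketch; 'alldata' and the column names are stand-ins for 
my actual ones):

```r
library(nlme)

# Combined model: subject as a random intercept, session (pre vs post)
# nested within experiment; 'alldata' and the column names are stand-ins.
comb.lme <- lme(score ~ experiment/session, random = ~1 | subject,
                data = alldata)
summary(comb.lme)  # experiment:session terms compare learning across methodologies
```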
What I would like to know is: a) would people agree that this is a 
meaningful way to combine my data? I believe the logic is sound, but am 
slightly concerned that I am ignoring a whole block of posttest data for 
the controls (even though this block does not account for a significant 
amount of the variance); and b) given that each of my trained subjects 
appears twice (once in the pretest and once in the posttest) while the 
controls appear only once (in the pretest sample), is there any problem 
with making subject a random factor? Conceptually, I see no problem with 
this, but I would like to be sure before I finish writing up.

Many thanks for your time

Dan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R interpretation

2007-01-22 Thread dan kumpik
Hi,

I am new to R (and not really a stats expert) and am having trouble 
interpreting its output. I am running a human learning experiment, with 
6 scores per subject in both the pretest and the posttest. I believe I 
have fitted the correct model for my data: a mixed-effects design, with 
subject as a random factor and session (pre vs post) nested within group 
(trained vs control).

I am confused about the output. The summary command gives me this table:


   D.lme <- lme(score ~ GROUP/session, random = ~1 | subject, data = ILD4L)
   summary(D.lme)


Linear mixed-effects model fit by REML
   Data: ILD4L
Subset: EXP == F
  AIC   BIC   logLik
-63.69801 -45.09881 37.84900

Random effects:
   Formula: ~1 | subject
  (Intercept)  Residual
StdDev:   0.1032511 0.1727145

Fixed effects: score ~ GROUP/session
   Value  Std.Error  DF   t-value p-value
(Intercept) 0.10252778 0.05104328 152  2.008644  0.0463
GROUPT  0.09545347 0.06752391  12  1.413625  0.1829
GROUPC:sessionpost -0.00441389 0.04070919 152 -0.108425  0.9138
GROUPT:sessionpost -0.23586042 0.03525520 152 -6.690090  0.0000
   Correlation:
 (Intr) GROUPT GROUPC
GROUPT -0.756
GROUPC:sessionpost -0.399  0.301
GROUPT:sessionpost  0.000 -0.261  0.000

Standardized Within-Group Residuals:
  Min  Q1 Med  Q3 Max
-2.66977386 -0.52935645 -0.08616759  0.57215015  3.26532101

Number of Observations: 168
Number of Groups: 14


I believe the fixed-effects section of this output to be telling me that
my model intercept (which I assume to be the control group pretest?) is
significantly different from 0, and that GROUPT (i.e. the trained group)
does not differ significantly from the intercept- therefore no pretest
difference between groups?
The next line, I believe, shows that the GROUPC x sessionpost
interaction (i.e., control posttest scores?) is not significantly
different from the intercept (i.e., control pretest scores). Finally, I
am interpreting the final line as indicating that the GROUPT x
sessionpost interaction (i.e., trained posttest scores?) is significantly
different from the trained pretest scores (GROUPT). A treatment contrast 
that I would like to apply is Control-post vs Trained-post, to 
see if the groups differ after training, but I'm not sure how to do 
this, and I feel I am probably overcomplicating the matter.
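One possibility I have wondered about (and would appreciate confirmation 
of) is simply reversing the nesting, so that group is nested within 
session; if I understand the parameterisation, the sessionpost:GROUPT 
coefficient would then test trained vs control at posttest directly:

```r
# Same data and names as the model above; only the nesting is reversed.
# If I have the parameterisation right, sessionpost:GROUPT tests the
# Control-post vs Trained-post difference.
D.lme2 <- lme(score ~ session/GROUP, random = ~1 | subject, data = ILD4L)
summary(D.lme2)
```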

Also, I am confused about how to report this output in my publication. 
For instance, what should I be reporting for df? Those shown in the 
output of the anova() table?

Would it be possible to look through this for me and indicate how to
interpret the R output, and also how I should be reporting this? 
Apologies for asking such basic questions, but I would like to start 
using R more regularly and to make sure I am doing so correctly.

Many thanks,

Dan
