Dear Wayne,
Here is what I would do:
setwd('/Users/mk/myTeach/2012-1-7720/analyses/GrayWayne')
gw <- read.table('data_110608.txt', header = TRUE)
gw <- gw[, -c(1, 4)]
names(gw) <- c('subject', 'block', 'density', 'time')
gw$subject <- factor(gw$subject)
library(lattice)
bwplot(time ~ density, data = gw)
# fit a full fixed-effects model to look for the best normalizing
# transformation of the response
gw.lm <- lm(time ~ density * subject * block, data = gw)
library(car)
boxCox(gw.lm) # reciprocal seems like the best (not perfect) candidate
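# (Not part of the script above; just a sketch.) If you'd rather have a
# numerical estimate of the Box-Cox lambda than eyeball the plot, car's
# powerTransform() reports a point estimate and confidence interval:
summary(powerTransform(gw.lm))
# a lambda near -1 would support the reciprocal transformation used below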
gw$speed <- 1000/gw$time
library(lme4)
# first mixed model: density, block, and their interaction as fixed effects,
# with a random intercept for subject
gw.mm <- lmer(speed ~ density * block + (1 | subject), data = gw)
library(arm)
display(gw.mm)
# drop the interaction
gw.mm1 <- lmer(speed ~ density + block + (1 | subject), data = gw)
display(gw.mm1)
# block does have an effect; density doesn't
m1.res <- resid(gw.mm1) # check normality of resids
qqnorm(m1.res)
qqline(m1.res) # not too bad
# see if the effect of density varies by subject (uncorrelated random slope)
gw.mm2 <- lmer(speed ~ density + block + (1 | subject) + (0 + density | subject),
               data = gw)
display(gw.mm2)
# gw.mm2 is not a better model; stick with gw.mm1
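# (A sketch, not part of the script above.) If you want a formal comparison to
# back up those judgments, anova() on the fitted lmer models gives
# likelihood-ratio tests (recent lme4 refits by ML before comparing):
anova(gw.mm1, gw.mm)   # does the density:block interaction improve the fit?
anova(gw.mm1, gw.mm2)  # does the by-subject density slope improve the fit?
# note: the LRT for a variance component is conservative (null on the boundary)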
# CI on the effect of density: [-0.07, 0.05], which confirms what you thought,
# but I don't see which interactions you have in mind other than the one with
# block
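# (A sketch of how such an interval can be computed, not shown above.) With a
# current lme4 you can ask for profile confidence intervals, or simulate from
# the fitted model with arm's sim(); the name of the density term depends on
# how the factor is coded, so read it off the output rather than assuming a
# particular column:
confint(gw.mm1, method = "profile")
sims <- sim(gw.mm1, n.sims = 2000)
apply(sims@fixef, 2, quantile, probs = c(0.025, 0.975))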
______________________________________________
Professor Michael Kubovy
University of Virginia
Department of Psychology
For mail: P.O. Box 400400, Charlottesville, VA 22904-4400, USA
For FedEx or UPS: Gilmer Hall, Room 102, McCormick Road, Charlottesville, VA 22903
Office: Room B011, +1-434-982-4729
Lab: Room B019, +1-434-982-4751
WWW: http://www.people.virginia.edu/~mk9y/
On Jul 2, 2011, at 3:12 PM, Wayne Gray wrote:
> Greetings.
>
> I am not totally sure where to post this query, so forgive me if this is the
> wrong SIG. However, I do teach stats in conjunction with experimental design,
> and the question is of considerable interest right now to several of my grad
> students and me - hence this is at least a weak rationale for sending the
> query to this listserv.
>
> As background, I am very familiar with Type III marginal SS for ANOVAs.
> However, we have a situation where a reviewer is insisting on an analysis
> that requires thin slicing our data so that we do not have observations in
> some of the cells for some of our Ss. I think I understand what R is telling
> me, but I am not positive that I do. Even worse, I don't know how to explain
> the analysis (assuming I have interpreted it correctly) to the editor or to
> the readers of the journal who, like me, are familiar with Type III ANOVAs.
>
> I have tried to attach the R file plus the data file to this email. I am not
> sure whether the listserv will allow attachments. If not, these files can be
> found here:
>
> Rcode: files.me.com/graywayne/7z7db3
> data: files.me.com/graywayne/688878
>
> What I think is the "takeaway" point is that there is no evidence in our data
> that the factor "dens.targ" is significant or that any of its interactions
> are significant. Given that the other analyses strongly support our
> interpretation of the results, I would like to conclude that any effect of
> density of the target stimuli on response time is very weak at best. This is
> a very satisfactory conclusion to me; however, I want to go the extra mile to
> show the editor that we tested this as best we could. Although this analysis
> might appear in the paper, it is not clear that it would; it may be the sort
> of thing one presents to the editor but leaves out of the final version of
> the paper.
>
> Any help, comments, pointers, etc will be much appreciated.
>
> BTW: This is a visual search paradigm where the factor of interest is the
> density of distractors in the quadrant in which the target is found. The data
> are limited to cases in which the initial visual saccade is to a "dense
> quadrant" or a "nondense quadrant" and in which the initially saccaded-to
> quadrant also contained the "target." The DV, TRIAL.TIME, is the
> time from the beginning of the trial to the point where the subject indicates
> they have found the target by clicking a key.
>
> Yours,
>
> Wayne Gray
>
>> anova.e1sq <- with(e1sq, summary(aov(TRIAL.TIME ~ BLOCK.NUMBER*dens.targ +
>> Error(SUBJECT/(BLOCK.NUMBER*dens.targ)))))
>> anova.e1sq
>
> Error: SUBJECT
> Df Sum Sq Mean Sq F value Pr(>F)
> BLOCK.NUMBER 3 9237318 3079106 0.9906 0.5030
> dens.targ 1 8661 8661 0.0028 0.9612
> BLOCK.NUMBER:dens.targ 3 254237 84746 0.0273 0.9927
> Residuals 3 9324782 3108261
>
> Error: SUBJECT:BLOCK.NUMBER
> Df Sum Sq Mean Sq F value Pr(>F)
> BLOCK.NUMBER 3 2516104 838701 1.4333 0.2557
> dens.targ 1 166064 166064 0.2838 0.5987
> BLOCK.NUMBER:dens.targ 3 362702 120901 0.2066 0.8909
> Residuals 26 15213849 585148
>
> Error: SUBJECT:dens.targ
> Df Sum Sq Mean Sq F value Pr(>F)
> dens.targ 1 272316 272316 2.4674 0.1602
> BLOCK.NUMBER:dens.targ 3 404184 134728 1.2207 0.3710
> Residuals 7 772557 110365
>
> Error: SUBJECT:BLOCK.NUMBER:dens.targ
> Df Sum Sq Mean Sq F value Pr(>F)
> BLOCK.NUMBER:dens.targ 3 367036 122345 0.9171 0.4444
> Residuals 30 4002169 133406
>
> Error: Within
> Df Sum Sq Mean Sq F value Pr(>F)
> Residuals 418 59572437 142518
>
_______________________________________________
[email protected] mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-teaching