Yes, whenever you remove lower-order terms but keep the higher-order
terms, you get the simple effects at all levels of the lower-order terms
you 'removed'.
Quite convenient, I think ;)
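[A minimal sketch of this equivalence, using base R's lm() with invented balanced data rather than the original lmer model (factor names A/B are made up):]

```r
# Sketch: dropping a 'main' effect while keeping its interaction does not
# actually reduce the model -- R reparameterizes, and the fit is identical.
# (Made-up balanced data; lm() instead of lmer() to keep it self-contained.)
set.seed(1)
d <- expand.grid(A = factor(c("a1", "a2")),
                 B = factor(c("b1", "b2", "b3")))
d <- d[rep(seq_len(nrow(d)), 10), ]
d$y <- rnorm(nrow(d))

m.full    <- lm(y ~ A * B, d)
m.dropped <- lm(y ~ A * B - A, d)  # remove the 'main' effect of A

# Same design-matrix width and identical fitted values:
ncol(model.matrix(m.full)) == ncol(model.matrix(m.dropped))  # TRUE
isTRUE(all.equal(fitted(m.full), fitted(m.dropped)))         # TRUE
```

Because R fills in a full set of indicator columns for A inside the interaction, the two design matrices span the same space, so the deviances compared by anova() come out identical, as in the example below.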

On Tue, Sep 20, 2016, 16:52 João Veríssimo <jl.veriss...@gmail.com> wrote:

> Besides the case in which a 'main' effect is removed but the interaction
> is included, I've also noticed the same behavior when removing a 2-way
> interaction, but keeping the 3-way interaction included:
>
> > m1 <- lmer(RT ~ Type * Form * Group + (1|Subject) + (1|Item), mydf)
> > m2 <- update(m1, . ~ . - Type:Form)
>
> > anova(m1, m2)
> m1: RT ~ Type * Form * Group + (1 | Subject) + (1 | Item)
>
> m2: RT ~ Type + Form + Group + (1 | Subject) + (1 | Item) +
> m2:     Type:Group + Form:Group + Type:Form:Group
>
>    Df    AIC    BIC  logLik deviance Chisq Chi Df Pr(>Chisq)
> m1 15 1657.2 1743.1 -813.59   1627.2
> m2 15 1657.2 1743.1 -813.59   1627.2     0      0          1
>
> Best,
> João
>
> On Tue, 2016-09-20 at 19:27 +0000, T. Florian Jaeger wrote:
> > Guys,
> >
> >
> > just a quick note, in case it's not apparent to everyone (I had
> > emailed this earlier to Rachel): what happens in Rachel's model is
> > simply that R defaults to simple effects coding when a 'main' effect
> > is removed while the interaction is still included (note that this, I
> > think, overrides whatever contrasts you have specified for the factor
> > you remove). That's actually a very useful default. To me, the thing
> > that was puzzling at first is the same thing that Roger commented on:
> > it should be just the same whether you remove a two-way or a three-way
> > term. Indeed, when I tried to replicate Rachel's problem, I did/do
> > get the same (simple-effects reparameterization) regardless of how
> > many levels the factor that I remove has.
> >
> >
> > Florian
> >
> > On Tue, Sep 20, 2016 at 2:41 PM Wednesday Bushong
> > <wednesday.bush...@gmail.com> wrote:
> >
> >         Let me also say something w.r.t. coding because I think you
> >         also expressed doubt about what kind of coding scheme to use.
> >
> >
> >         The crucial thing to remember when interpreting coefficients
> >         from R model summary output is that a coefficient gives the
> >         change in the outcome when moving from 0 to 1 on that
> >         particular variable, with the values of the other variables
> >         held at 0.
> >
> >
> >         In the case of dummy coding, then, the "main effect" of
> >         Listener is actually the difference in log-odds going from the
> >         first level of Listener to the second level of Listener when
> >         the two SyntaxType dummy variables are at 0 -- that is, when
> >         SyntaxType is at the first level. So this is really just a
> >         pairwise comparison between two groups, and doesn't have
> >         anything to say about the average effect of Listener across
> >         the SyntaxType groups. In order to get the interpretation of
> >         Listener to be across the average of all SyntaxType groups,
> >         you would have to contrast code SyntaxType (b/c then 0 will be
> >         the avg of all the levels). Similar interpretations in a fully
> >         dummy-coded model go for the other 'main effect' terms (i.e.,
> >         each SyntaxType effect is interpreted w.r.t. the reference
> >         level of Listener) and the interaction terms (each
> >         Listener:SyntaxType coefficient is the difference between the
> >         Listener effect at that SyntaxType level and the Listener
> >         effect at the reference level; notice that this isn't the
> >         ANOVA-style "interaction" you might expect! So be careful
> >         with coding!).
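[To make the contrast concrete, here is a hedged sketch with invented balanced data and plain lm(); the level names L1/L2 and S1/S2/S3 and the injected effect size are made up:]

```r
# Sketch: the same 'main effect' coefficient means different things under
# dummy (treatment) vs. sum-to-zero coding. Data and effect sizes invented.
set.seed(2)
d <- expand.grid(Listener   = factor(c("L1", "L2")),
                 SyntaxType = factor(c("S1", "S2", "S3")))
d <- d[rep(seq_len(nrow(d)), each = 20), ]
d$y <- rnorm(nrow(d)) + ifelse(d$Listener == "L2" & d$SyntaxType == "S1", 1, 0)

# Treatment (dummy) coding: 'ListenerL2' is the L1-vs-L2 difference
# *at the reference level S1 only* -- a simple effect, not an average.
m.dummy <- lm(y ~ Listener * SyntaxType, d)

# Sum-to-zero coding: 'Listener1' is half the L1-vs-L2 difference
# averaged over all three SyntaxType levels.
m.sum <- lm(y ~ Listener * SyntaxType, d,
            contrasts = list(Listener = contr.sum(2), SyntaxType = contr.sum(3)))

cm <- tapply(d$y, list(d$Listener, d$SyntaxType), mean)  # cell means
coef(m.dummy)["ListenerL2"]  # equals cm["L2","S1"] - cm["L1","S1"]
coef(m.sum)["Listener1"]     # equals mean(cm["L1",] - cm["L2",]) / 2
```

Since the interaction model is saturated, each coefficient is an exact function of the cell means, which makes the two interpretations easy to verify directly.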
> >
> >
> >         Of course, you can mix and match your coding schemes -- for
> >         instance, if you want to get the main effect of Listener at
> >         the avg. of SyntaxType but wanted pairwise comparisons of
> >         SyntaxType within one particular Listener group, you could
> >         contrast code SyntaxType and dummy code Listener appropriately
> >         -- but in general, the most common thing to do will be
> >         contrast coding all factors, which will give you the standard
> >         ANOVA output interpretation.
> >
> >
> >         -Wed
> >
> >         On Tue, Sep 20, 2016 at 1:58 PM Wednesday Bushong
> >         <wednesday.bush...@gmail.com> wrote:
> >
> >                 Hi Rachel,
> >
> >
> >                 I think at times like this it's useful to look at
> >                 exactly how R assigns factors. When you add
> >                 interactions, R does a lot of behind-the-scenes work
> >                 that isn't immediately apparent. One way to look into
> >                 this in more detail is this really nice function
> >                 "model.matrix", which given a data frame and a model
> >                 formula, will show you all of the coding variables
> >                 that are created in order to fit the model and what
> >                 their values are for each combination of factors in
> >                 the dataset. See the example below.
> >
> >
> >                 # create data frame w/ each factor level combo
> >                 d <- data.frame(Listener.f = rep(c("Listener1", "Listener2"), 3),
> >                   SyntaxType.f = c(rep("Syntax1", 2), rep("Syntax2", 2), rep("Syntax3", 2)),
> >                   Target_E2_pref = rnorm(6))
> >                 # make factor
> >                 d$Listener.f <- factor(d$Listener.f)
> >                 d$SyntaxType.f <- factor(d$SyntaxType.f)
> >
> >
> >                 # create model formulas corresponding to full and reduced model
> >                 mod.formula <- formula(~ 1 + Listener.f * SyntaxType.f, d)
> >                 mod.formula.reduced <- formula(~ 1 + SyntaxType.f +
> >                   Listener.f:SyntaxType.f, d)
> >
> >                 # get var assignments for all factor level combos
> >                 mod.matrix <- model.matrix(mod.formula, d)
> >                 mod.matrix.reduced <- model.matrix(mod.formula.reduced, d)
> >
> >
> >                 If you look at mod.matrix and mod.matrix.reduced,
> >                 you'll see that they each have the same
> >                 dimensionality. Digging in further, we can see why
> >                 this is. Let's look at the column names of each model
> >                 matrix:
> >
> >
> >                 colnames(mod.matrix)
> >                 [1] "(Intercept)"
> >                 [2] "Listener.fListener2"
> >                 [3] "SyntaxType.fSyntax2"
> >                 [4] "SyntaxType.fSyntax3"
> >                 [5] "Listener.fListener2:SyntaxType.fSyntax2"
> >                 [6] "Listener.fListener2:SyntaxType.fSyntax3"
> >
> >
> >                 colnames(mod.matrix.reduced)
> >                 [1] "(Intercept)"
> >                 [2] "SyntaxType.fSyntax2"
> >                 [3] "SyntaxType.fSyntax3"
> >                 [4] "SyntaxType.fSyntax1:Listener.fListener2"
> >                 [5] "SyntaxType.fSyntax2:Listener.fListener2"
> >                 [6] "SyntaxType.fSyntax3:Listener.fListener2"
> >
> >
> >                 Note the differences in the interaction columns. Now
> >                 don't ask me why, but
> >                 the way that R appears to handle subtracting a main
> >                 effect from a model but keeping the interaction is to
> >                 add in another interaction dummy variable that makes
> >                 the model equivalent. (If you look at the values that
> >                 each factor combo takes on, you'll see that this
> >                 particular dummy variable is 1 when Listener =
> >                 Listener2 and SyntaxType = Syntax1, and 0 otherwise).
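[A self-contained check of that claim, re-creating the small example data frame from the code above; the grepl() column lookup is an editorial convenience, not from the original:]

```r
# Re-create the example data frame and the reduced model matrix, then verify
# that the extra dummy column is 1 exactly when Listener = Listener2 and
# SyntaxType = Syntax1, and 0 otherwise.
d <- data.frame(Listener.f   = factor(rep(c("Listener1", "Listener2"), 3)),
                SyntaxType.f = factor(rep(c("Syntax1", "Syntax2", "Syntax3"),
                                          each = 2)))
mm <- model.matrix(~ 1 + SyntaxType.f + Listener.f:SyntaxType.f, d)

# pick out the one column involving both Syntax1 and Listener2
extra <- mm[, grepl("Syntax1", colnames(mm)) & grepl("Listener2", colnames(mm))]
all(extra == (d$Listener.f == "Listener2" & d$SyntaxType.f == "Syntax1"))  # TRUE
```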
> >
> >
> >                 The way to solve this is presented in Roger's paper he
> >                 linked above (pg. 4 being the most relevant here). His
> >                 particular example is for contrast coding but you can
> >                 make it work in the exact same way with dummy
> >                 coding (but make sure that dummy coding is what you
> >                 really want to use given the specific hypothesis
> >                 you're testing!):
> >
> >
> >                 # make numeric versions of factors
> >                 # (can easily replace w/ whatever coding scheme you want)
> >                 d$Listener.numeric <- sapply(d$Listener.f, function(i) contr.treatment(2)[i, ])
> >                 d$Syntax1.numeric <- sapply(d$SyntaxType.f, function(i) contr.treatment(3)[i, ])[1, ]
> >                 d$Syntax2.numeric <- sapply(d$SyntaxType.f, function(i) contr.treatment(3)[i, ])[2, ]
> >
> >
> >                 # check model matrix
> >                 mod.formula.new <- formula(~ 1 + Syntax1.numeric +
> >                   Syntax2.numeric + Listener.numeric:Syntax1.numeric +
> >                   Listener.numeric:Syntax2.numeric, d)
> >                 mod.matrix.new <- model.matrix(mod.formula.new, d)
> >                 colnames(mod.matrix.new)
> >
> >
> >                 [1] "(Intercept)"
> >                 [2] "Syntax1.numeric"
> >                 [3] "Syntax2.numeric"
> >                 [4] "Syntax1.numeric:Listener.numeric"
> >                 [5] "Syntax2.numeric:Listener.numeric"
> >
> >
> >                 Now things are as they should be: no more mysterious
> >                 extra dummy variable containing information about the
> >                 main effect of Listener! This last model is the one
> >                 you should compare your original model against to get
> >                 the significance of the main effect of Listener.
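[A hedged sketch of that final comparison, using lm() and invented data since the original lmer model and data aren't shown; as.integer() indexing replaces the sapply() approach above purely for brevity:]

```r
# Sketch of the model comparison that tests the Listener 'main effect':
# full model vs. a reduced model built from numeric predictor copies.
set.seed(3)
d <- expand.grid(Listener.f   = factor(c("Listener1", "Listener2")),
                 SyntaxType.f = factor(c("Syntax1", "Syntax2", "Syntax3")))
d <- d[rep(seq_len(nrow(d)), each = 15), ]
d$Target_E2_pref <- rnorm(nrow(d))

# numeric (treatment-coded) copies of the factors
d$Listener.numeric <- contr.treatment(2)[as.integer(d$Listener.f), ]
d$Syntax1.numeric  <- contr.treatment(3)[as.integer(d$SyntaxType.f), 1]
d$Syntax2.numeric  <- contr.treatment(3)[as.integer(d$SyntaxType.f), 2]

m.full <- lm(Target_E2_pref ~ Listener.numeric * (Syntax1.numeric + Syntax2.numeric), d)
m.reduced <- lm(Target_E2_pref ~ Syntax1.numeric + Syntax2.numeric +
                  Listener.numeric:Syntax1.numeric +
                  Listener.numeric:Syntax2.numeric, d)

anova(m.reduced, m.full)  # a 1-df test of the Listener 'main effect'
```

Because the predictors are numeric, R cannot silently re-expand the dropped term, so the reduced model really does have one parameter fewer than the full model.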
> >
> >
> >                 Hope this was helpful!
> >
> >
> >                 Best,
> >                 Wednesday
>
>
>
