In the GLM, the adjusted SS for any term in the model equals SStot
times the squared semi-partial correlation between Y and
that term, controlling for all of the other terms (that term's
unique contribution).
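A quick way to check that identity numerically (a rough numpy sketch
of my own, run on simulated data rather than the IQ example quoted
below) is to compute the unique SS for a covariate as the drop in SSE
when it enters the model last, and compare it to SStot times the
squared semi-partial correlation obtained directly from residuals:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 20
    iq  = rng.normal(100, 15, n)                      # covariate
    grp = np.repeat([0.0, 1.0], n // 2)               # two-group indicator
    y   = 0.5 * iq + 8.0 * grp + rng.normal(0, 5, n)  # outcome

    def ols_resid(dep, *cols):
        # residuals from an OLS fit of dep on the given columns (plus intercept)
        X = np.column_stack([np.ones(len(dep)), *cols])
        beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
        return dep - X @ beta

    ss_total = np.sum((y - y.mean()) ** 2)

    # adjusted (unique) SS for IQ: reduction in SSE when IQ enters after Group
    adj_ss_iq = np.sum(ols_resid(y, grp) ** 2) - np.sum(ols_resid(y, iq, grp) ** 2)

    # squared semi-partial r: correlate Y with the part of IQ not shared with Group
    sr2 = np.corrcoef(y, ols_resid(iq, grp))[0, 1] ** 2

    print(adj_ss_iq, ss_total * sr2)   # identical, up to floating-point rounding

The two printouts agree, which is all the Adj SS column is reporting.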
For the data in question, the adjusted SS for IQ is greater than the
sequential SS for IQ when IQ is entered first in the model (2057.9 >
1539.9).
This means that Corr(Y,IQ.G)^2 > Corr(Y,IQ)^2
... the squared semi-partial r is greater than the squared
simple r (dividing each SS by SStot = 3200.9 gives .643 versus .481).
This identifies a situation called complementarity, or suppression,
in which the joint contribution of a set of regressors exceeds the
sum of their individual contributions: the obtained multiple R-squared
exceeds the sum of the squared simple validities, which is the value
R-squared would take if the regressors were independent.
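A made-up pair of validities shows how this can happen (these numbers
are for illustration only, not from the data below): with
Corr(Y,X1) = .5, Corr(Y,X2) = 0, and Corr(X1,X2) = .5, the two-predictor
formula R^2 = (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2) gives
(.25 + 0 - 0)/(1 - .25) = .33, larger than the sum of the squared
validities (.25 + 0 = .25). X2 predicts nothing by itself, yet adding it
improves prediction because it removes irrelevant variance from X1; that
is the classic suppressor.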
This is tough to teach, especially when you use overlapping
Venn diagrams (a la Cohen and Cohen) to illustrate how predicted
variance is accounted for by redundant regressors, because Venn
diagrams fail to reveal what happens under conditions of
complementarity or suppression.
I explore the weird world of partial relationships with
my students using a Minitab macro (see attached file)
that lets you define Corr(Y,X1) and Corr(Y,X2) and then
graphs the beta coefficients, the semi-partial and partial
correlation coefficients, and the multiple R as functions of
the correlation between the regressors. As Corr(X1,X2)
varies between its minimum and maximum possible values,
the graphed functions reveal patterns that Venn diagrams
fail to show.
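For anyone without Minitab, here is a rough Python sketch of the same
idea (my own translation, not the attached macro; the validities .5 and
.3 are arbitrary placeholders): fix Corr(Y,X1) and Corr(Y,X2), sweep
Corr(X1,X2) over the range that keeps the 3x3 correlation matrix
positive definite, and plot the standardized betas, the semi-partial and
partial correlations, and the multiple R at each point.

    import numpy as np
    import matplotlib.pyplot as plt

    ry1, ry2 = 0.5, 0.3   # Corr(Y,X1), Corr(Y,X2); arbitrary values for illustration

    # Corr(X1,X2) is constrained by positive definiteness of the correlation matrix:
    # ry1*ry2 - sqrt((1-ry1^2)*(1-ry2^2)) <= r12 <= ry1*ry2 + sqrt((1-ry1^2)*(1-ry2^2))
    half_width = np.sqrt((1 - ry1**2) * (1 - ry2**2))
    r12 = np.linspace(ry1*ry2 - half_width + 1e-3, ry1*ry2 + half_width - 1e-3, 400)

    beta1 = (ry1 - ry2 * r12) / (1 - r12**2)        # standardized weight for X1
    beta2 = (ry2 - ry1 * r12) / (1 - r12**2)        # standardized weight for X2
    sr1   = (ry1 - ry2 * r12) / np.sqrt(1 - r12**2) # semi-partial r of X1 (controlling X2)
    pr1   = sr1 / np.sqrt(1 - ry2**2)               # partial r of X1
    R     = np.sqrt((ry1**2 + ry2**2 - 2*ry1*ry2*r12) / (1 - r12**2))  # multiple R

    for curve, label in [(beta1, "beta1"), (beta2, "beta2"), (sr1, "semi-partial r1"),
                         (pr1, "partial r1"), (R, "multiple R")]:
        plt.plot(r12, curve, label=label)
    plt.axhline(0, color="gray", lw=0.5)
    plt.xlabel("Corr(X1,X2)")
    plt.legend()
    plt.show()

For these placeholder values, the semi-partial r climbs above the simple
validity near the ends of the admissible range and one of the betas
changes sign along the way, which is exactly the behavior a static Venn
diagram cannot depict.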
dennis roberts wrote:
> some time ago, i sent out a note about a handout i had re: ancova. now, in
> that handout, i illustrated a very simple case of how ancova might account
> for some of the within groups 'error'. in that handout, i showed, near the
> end ... some minitab output for the analysis. now, in that output ... the
> adjusted SS adds up to MORE than what the simple anova adds to. NOTE: the
> dependent measure in the Exp and Cont group example was performance on a
> test .. and the covariate was IQ.
>
> the one way shows:
>
> One-way Analysis of Variance
>
> Analysis of Variance
> Source     DF       SS      MS      F      P
> Factor      1      252     252   1.54  0.231
> Error      18     2949     164
> Total      19     3201
>
> and the ancova shows:
>
> Analysis of Variance for TOTY, using Adjusted SS for Tests
>
> Source   DF   Seq SS   Adj SS   Adj MS       F       P
> TOTIQ     1   1539.9   2057.9   2057.9   39.26   0.000
> Group     1    770.0    770.0    770.0   14.69   0.001
> Error    17    891.0    891.0     52.4
> Total    19   3200.9
>
> in the handout, i showed that the adjusted SS(TOT) equals the sum of the
> 770 and 891 values for Group and Error in the Adj SS columns ... but where
> does the 2057 come from and, when you add it to the 770 and 891 values ... you
> get a much larger value than the original 3201?
>
> what would be the simplest way to discuss this with students? in what way
> could you use the original data on the dependent measure ... and show how
> this new SS(TOT) value could be obtained?
>
> thanks
>
> ----------------------------------------------
> 208 Cedar Bldg., University Park, PA 16802
> AC 814-863-2401 Email mailto:[EMAIL PROTECTED]
> WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
> FAX: AC 814-863-1002
--
| David V. Cross | Office: (631) 632-7820 |
| Department of Psychology | Home: (631) 751-1238 |
| SUNY at Stony Brook | Fax: (631) 751-1389 |
| Stony Brook, NY, 11794-2500 | email: [EMAIL PROTECTED] |
b-vs-r12.mac