In response to my comment ...
> >You have not described your "post hoc" analysis. For openers, which
> >"Tukey's test" did you use (there are at least three post hoc tests due,
> >or at any rate attributed, to Tukey)?
... JE replied:
> I used Tukey's Honestly Significant Difference Test. I'll try
> Scheffe's post hoc.
This still does not describe what you used the test ON. Were they simply
pairwise comparisons (treatment 0 vs treatment 1, 0 vs 2, 0 vs 3, 1 vs 2,
1 vs 3, 2 vs 3)? -- which is what Tukey's HSD is usually applied to.
Or did they involve more complex contrasts, e.g.
	(treatment 0 + treatment 2 + treatment 3)  vs  3*(treatment 1)
as suggested in my next paragraph?
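To make the distinction concrete: a complex contrast like the one above is tested by taking the weighted sum of the cell means and dividing by its standard error. A minimal sketch, with invented group values (not JE's data) standing in for the four treatment cells:

```python
# Sketch: testing the complex contrast (t0 + t2 + t3) - 3*t1.
# The group values below are invented for illustration, not JE's data.
import numpy as np
from scipy import stats

groups = [np.array([100., 98., 101., 99.]),    # treatment 0: no A, no B
          np.array([50., 51., 49., 52.]),      # treatment 1: A only
          np.array([102., 101., 103., 100.]),  # treatment 2: B only
          np.array([99., 103., 98., 101.])]    # treatment 3: A and B

c = np.array([1, -3, 1, 1])                # contrast coefficients (sum to 0)
means = np.array([g.mean() for g in groups])
n = np.array([len(g) for g in groups])
N, k = n.sum(), len(groups)

# Pooled within-group (error) mean square, on N - k degrees of freedom
mse = sum(((g - g.mean())**2).sum() for g in groups) / (N - k)

estimate = c @ means                       # value of the contrast
se = np.sqrt(mse * (c**2 / n).sum())       # its standard error
t = estimate / se
p = 2 * stats.t.sf(abs(t), N - k)
print(estimate, se, t, p)
```

A pairwise comparison is just the special case c = (1, -1, 0, 0); the machinery is identical, only the coefficients change.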
> > If this is what you see, your post hoc analysis should compare
> >the mean of (1,0) with the average of the means of the other three cells
> >(the Scheffe' method would be preferable to the Tukey method).
There were two points in invoking the Scheffe' method:
(1) The Scheffe' method controls the error rate experiment-wise rather
than merely comparison-wise.
(2)  The other method with an experiment-wise error rate, called the
Tukey method (but it is not the HSD method you used), has shorter
confidence intervals than the Scheffe' method for pairwise comparisons;
but for more complex comparisons (like the contrast specified above)
the Scheffe' method has the shorter confidence intervals.
The HSD test is not properly a post-hoc test at all (if by "post
hoc" one means that one is implicitly entertaining ANY contrast among the
several subgroup means, and letting the choice(s) of contrast(s) be
determined by the pattern(s) observed in the data). It's what one would
use for comparisons (= contrasts) determined a priori, or what are called
"planned comparisons", when one is deliberately eschewing the notion of
"all possible comparisons/contrasts" in favour of the greater sensitivity
available by the planned-ness (so to speak).
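The sensitivity gain from planned-ness shows up directly in the critical multipliers: a single planned contrast needs only the ordinary t critical value, which sits below both the Tukey all-pairwise multiplier and the Scheffe' all-contrasts multiplier. A sketch with illustrative k and df (not JE's):

```python
# Sketch of the sensitivity ordering: planned t < Tukey < Scheffe'
# (on the scale of a pairwise mean difference).  k and df illustrative.
from math import sqrt
from scipy import stats

k, df, alpha = 4, 36, 0.05
t_planned = stats.t.ppf(1 - alpha / 2, df)        # one planned contrast
tukey = stats.studentized_range.ppf(1 - alpha, k, df) / sqrt(2)
scheffe = sqrt((k - 1) * stats.f.ppf(1 - alpha, k - 1, df))

print(t_planned, tukey, scheffe)
```

The price of the smaller multiplier is that the error rate is controlled only per comparison, and only for contrasts chosen before looking at the data.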
On Sun, 9 Jan 2000, JE wrote in part:
> After performing a basic GLM unianova that showed significance with
> drug A, B, and A*B, I created another column that was numbered 0-3 for
> the 4 different combinations of treatments: 0 = no A, no B; 1 = A, no
> B; 2 = no A, B; 3 = A and B.
Notice that (for A = {0,1} and B = {0,1}) your new variable is A + 2B
(or, equivalently, the two-digit binary variable BA, with values
00, 01, 10, 11).
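That recoding, spelled out:

```python
# The recoding described above: Treatment = A + 2*B, i.e. the
# two-digit binary number BA.
codes = {(A, B): A + 2 * B for A in (0, 1) for B in (0, 1)}
for (A, B), t in sorted(codes.items(), key=lambda kv: kv[1]):
    print(f"A={A} B={B} -> Treatment {t} (binary BA = {t:02b})")
```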
> Whether doing this represents a complete post hoc analysis; I'm not sure.
Depends on how many formal comparisons (contrasts) you examined, and
which ones they were.
> A pseudo spreadsheet of the data (percentagewise just about the same
> as my data) looks like this in SPSS
I'm not sure what you mean by "percentagewise".
> Drug A   Drug B   Treatment   Set   Cell Number
>   0        0          0        1       100
>   1        0          1        1        50
>   0        1          2        1       102
>   1        1          3        1        99
>   0        0          0        2        98
>   1        0          1        2        51
>   0        1          2        2       101
>   1        1          3        2       103
>
> etc.
>
> When I performed the primary unianova analysis, Drug A and Drug B were
> the two independent variables, and Cell Number was the Dependent
> variable. The post hoc comparison I used was to plug the Treatment
> column (independent variable) into a univariate anova analysis, and
> compare treatment with cell number (dependent variable) using Tukey's
> HSD as a post hoc.
You cannot possibly mean what this appears to say. I presume you were
comparing some treatment(s) with (an)other(s) using mean cell number as
the quantity being compared; you will not have been comparing Treatment
(having values 0, 1, 2, 3) with Cell Number (having values 50 and 100,
approximately). Anybody can see that Treatment and Cell Number are
different, and the difference is surely not interesting.
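For what it's worth, the one-way anova on the 0-3 Treatment variable and the two-way A, B, A*B anova fit exactly the same four cell means, so in a balanced design the one-way between-groups sum of squares partitions into SS(A) + SS(B) + SS(A*B). A sketch with invented data resembling the pseudo-spreadsheet:

```python
# Sketch: in a balanced 2x2 design, the one-way between-groups SS for
# the 0-3 Treatment variable equals SS_A + SS_B + SS_AB.
# Data invented to resemble the pseudo-spreadsheet, 2 replicates/cell.
import numpy as np

# y indexed as [A][B][replicate]
y = np.array([[[100., 98.], [102., 101.]],   # A=0: (B=0), (B=1)
              [[50., 51.], [99., 103.]]])    # A=1: (B=0), (B=1)

grand = y.mean()
cell = y.mean(axis=2)               # 2x2 table of cell means
a_mean = y.mean(axis=(1, 2))        # marginal means of A
b_mean = y.mean(axis=(0, 2))        # marginal means of B
r = y.shape[2]                      # replicates per cell

ss_a = 2 * r * ((a_mean - grand)**2).sum()
ss_b = 2 * r * ((b_mean - grand)**2).sum()
ss_ab = r * ((cell - a_mean[:, None] - b_mean[None, :] + grand)**2).sum()
ss_cells = r * ((cell - grand)**2).sum()   # one-way between-groups SS

print(ss_a, ss_b, ss_ab, ss_cells)
```

So the one-way recoding is not "technically wrong"; it merely lumps the three effects into a single 3-df between-groups term unless contrasts are used to pull them apart again.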
> (Drug A and Drug B columns are not used when I
> performed post hoc analysis. Set (experimental set) is also excluded
> during analysis, and is just here for illustration.)
Why is Set excluded? Looks like another between-subjects factor to me,
and if so it ought to have been included in a three-way anova.
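The practical consequence of leaving Set out: if Set has a real effect, its variation is dumped into the error term, inflating MSE and blunting every subsequent comparison. A simulation sketch with invented effects (a 5-unit Set shift, assumed purely for illustration):

```python
# Sketch: omitting a real Set effect inflates the error mean square.
# All effects and data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
treat_eff = np.array([100., 50., 102., 99.])   # 4 treatment cell means
set_eff = np.array([0., 5.])                   # Set 2 runs 5 units higher

rows = []
for t in range(4):
    for s in range(2):
        for _ in range(5):                     # 5 replicates per cell
            rows.append((t, s, treat_eff[t] + set_eff[s] + rng.normal(0, 1)))
t_col = np.array([r[0] for r in rows])
s_col = np.array([r[1] for r in rows])
y = np.array([r[2] for r in rows])

def mse_from_cells(*factors):
    # residual mean square after fitting a mean to every distinct cell
    labels = np.stack(factors, axis=1)         # one row of labels per obs
    cells = {tuple(row) for row in labels}
    ss = 0.0
    for c in cells:
        mask = (labels == c).all(axis=1)
        ss += ((y[mask] - y[mask].mean())**2).sum()
    return ss / (len(y) - len(cells))

mse_without_set = mse_from_cells(t_col)        # Treatment cells only
mse_with_set = mse_from_cells(t_col, s_col)    # Treatment x Set cells
print(mse_without_set, mse_with_set)
```

With the Set effect in the model the error mean square drops back toward the true residual variance; without it, the Set shift masquerades as noise.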
> I know this is a relatively simple-minded approach, but it's the best
> method I could come up with at the time. The fundamental question is
> whether it is technically wrong to perform the analysis the way I did;
> is this method I chose going to screw up my conclusions because the
> analysis yields erroneous results?
What leads you to think the results are erroneous? Although you have
been pretty fuzzy about what you actually did, and you have neither
reported nor described your results, you've provided no clear evidence of
"technically wrong" (whatever that might mean). Your analyses may have
been inefficient, and the results less insightful than they might have
been, but neither of those is "wrong". It is possible, I suppose, that
the results are _misleading_; not knowing what those results are, I
cannot address this. And if there are other factors in the experiment
that you have not taken into account in your analyses (Set, for instance;
and for that matter perhaps sex) you may not be seeing all that there is
to be seen in this corner of the universe.
-- DFB.
------------------------------------------------------------------------
Donald F. Burrill [EMAIL PROTECTED]
348 Hyde Hall, Plymouth State College, [EMAIL PROTECTED]
MSC #29, Plymouth, NH 03264 603-535-2597
184 Nashua Road, Bedford, NH 03110 603-471-7128