On Thu, 05 Apr 2007 14:49:28 -0700, Annete Taylor wrote:
>Dear knowledgeable tipsters:

I'm somewhat knowledgeable but there might be a few (e.g.,
Karl Wuensch) who may be more knowledgeable.  Your 
questions are probably more appropriate for one of the 
stat or SPSS mailing lists/Usenet groups.  Which reminds 
me, is Dave Nichols still associated with SPSS?

>Sorry for the cross posting to tips and Psychteach; please 
>delete if you have  already seen this question in the other list. 
>
>I have some SPSS questions:
>Please answer me off list.

Oh, what the hell! ;-)

>First of all, I am running a mixed ANOVA with one repeated 
>measures variable with 5 levels and one between-subjects variable 
>with 2 levels. I wanted to run planned comparisons but SPSS 12 
>won't let me. It tells me that I need at least 3 groups and that I 
>don't have three groups. Can someone explain this to me and 
>tell me how to run my analysis?

SPSS has always had a bizarre implementation, IMHO, for
the analysis of repeated-measures/mixed designs, far inferior 
to the programs of BMDP (RIP).  SAS has had its own 
peculiarities but I think that they've improved in recent years 
(I admit to not being a SAS person).  You don't mention which 
procedure you're using in SPSS -- I assume that you're using 
MANOVA, though I realize that you might be using GLM (I'm 
not really sure in which version of SPSS GLM became 
available).  In any event, it's possible that whatever procedure 
you're using, SPSS is balking at doing multiple comparisons 
with a two-level factor because the F ratio is a direct test of that 
factor (in this case the F ratio is equivalent to the squared value 
of the t-test for the two means involved in the main effect).  I 
suspect, however, that you want to test components of the 2x5
interaction and apply multiple comparisons to the main
effects.  Is this correct?  SPSS may not allow this to be done,
though the rationale may not be clear.
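
For what it's worth, the F = t^2 equivalence for a two-level
factor is easy to verify outside of SPSS.  Here's a minimal
sketch in Python (the data are hypothetical and scipy is
assumed):

import numpy as np
from scipy import stats

# Hypothetical scores for a two-level between-subjects factor
g1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
g2 = np.array([16.0, 18.0, 17.0, 15.0, 19.0])

# Independent-samples t-test (pooled variance)
t, p_t = stats.ttest_ind(g1, g2)

# One-way ANOVA on the same two groups
F, p_F = stats.f_oneway(g1, g2)

# With only two groups, F equals t squared and the p-values
# agree -- which is why a separate multiple-comparison step
# adds nothing for this factor.
print(t**2, F)   # identical up to rounding
print(p_t, p_F)  # identical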

>Second, SPSS has several (about 12) different planned 
>comparisons I can run. I know that some are more conservative 
>and some less conservative, but how does one decide 
>between so very many which ones to run?

There are no hard and fast rules for this but one can use
the following criteria:

(1)  The LSD procedure (i.e., multiple t-tests) is the most
powerful multiple comparison procedure, but it also has the
highest overall/familywise Type I error rate.  If there are
statistically significant results, they may be real or they may
be Type I errors; nonetheless, you'll find the largest number 
of significant results with this procedure.

(2)  The Scheffe procedure, I believe, is still the most
conservative procedure, that is, it has the least power, but
it will allow one to perform all possible multiple
comparisons, that is, pairwise comparisons and combinations
of means.  If a result is significant by Scheffe, it's likely to be
significant by all other procedures, but this procedure will
provide the fewest significant results.

(3)  I believe that all other procedures provide intermediate
degrees of liberalism/conservatism, that is, intermediate
degrees of power and control of the overall Type I error rate.
The choice of one procedure over another may be as
dependent upon the specific conditions of one's data as on
one's experience/attitude/knowledge of the different tests.
I have a fondness for Bonferroni-corrected t-tests, but this
is mostly motivated by the simplicity of the test and its
conceptual basis -- there are other tests which can be more
powerful depending upon the number of means being
compared (a small Bonferroni sketch follows this list).
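
To make the Bonferroni idea concrete: with k comparisons,
each individual test is simply run at alpha/k.  A minimal
sketch in Python (hypothetical data, scipy assumed); the LSD
approach would be the same code with alpha itself used on
every test:

import itertools
import numpy as np
from scipy import stats

# Hypothetical data: five groups (e.g., five factor levels)
groups = {
    "a": np.array([10.0, 12.0, 11.0, 13.0]),
    "b": np.array([14.0, 15.0, 13.0, 16.0]),
    "c": np.array([11.0, 10.0, 12.0, 11.0]),
    "d": np.array([17.0, 18.0, 16.0, 19.0]),
    "e": np.array([12.0, 13.0, 12.0, 14.0]),
}

pairs = list(itertools.combinations(groups, 2))  # all 10 pairs
alpha = 0.05
alpha_per_test = alpha / len(pairs)              # Bonferroni

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "significant" if p < alpha_per_test else "n.s."
    print(f"{name1} vs {name2}: t={t:.2f} p={p:.4f} {verdict}")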

>Third, for planned comparisons, can't I just run t-tests for 
>the comparisons of interest, rank order them by p-value, and 
>compare each obtained p-value to alpha divided by the total 
>number of comparisons for the lowest p-value, alpha divided by 
>n-1 comparisons for the next, and so on, until I reach the 
>first non-significant comparison?

Planned comparisons assume (a) that a subset of comparisons
will be made relative to all possible comparisons and (b) that
there is some theoretical/rational basis for choosing certain
comparisons.  In this case, one just does the ANOVA to get the
appropriate error term.  Look at the latest edition of Kirk and
his chapter on multiple comparisons for guidance.  In earlier
editions, Kirk pointed out that many researchers simply used
alpha=.05 for each planned comparison, especially if the number
of such comparisons was small.  It seems to make more sense to
use a Bonferroni correction, divide the overall alpha=.05 by
the number of comparisons being made, and use this for each
individual test (though one could allocate a higher per-comparison
alpha for more "important" comparisons).  If this practice has
changed, I'd like to hear about it.
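
Incidentally, the step-down scheme described in the question
(smallest p tested against alpha/k, the next against
alpha/(k-1), and so on, stopping at the first failure) is
essentially Holm's sequentially rejective Bonferroni
procedure.  A minimal sketch in Python, assuming one already
has the p-values from the planned t-tests:

def holm(p_values, alpha=0.05):
    """Holm's sequentially rejective Bonferroni procedure."""
    k = len(p_values)
    order = sorted(range(k), key=lambda i: p_values[i])
    rejected = [False] * k
    for step, i in enumerate(order):
        # smallest p tested at alpha/k, next at alpha/(k-1), ...
        if p_values[i] <= alpha / (k - step):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values fail too
    return rejected

# Hypothetical p-values from four planned comparisons
print(holm([0.001, 0.04, 0.012, 0.20]))
# -> [True, False, True, False]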

>If I do that, then how do I get effect size analyses? Effect size 
>analyses in SPSS seem to be tied to post-hoc comparisons. Is 
>it sufficient to say that my confidence intervals don't overlap?

I'm not sure I understand what you're saying here.  The usual
effect size measure provided by SPSS is partial eta squared which,
if memory serves, is somewhat equivalent to a semi-partial
correlation coefficient squared.  Your statement about confidence
intervals suggests that you're focusing on something else.  Are you
talking about standardized differences between means?  If so,
it might be easiest to calculate these by hand, unless I'm missing
something.
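
If it is standardized differences you're after, a Cohen's d
with a pooled standard deviation takes only a few lines.  A
minimal sketch with hypothetical data:

import numpy as np

def cohens_d(x, y):
    """Standardized difference between two means (pooled SD)."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * np.var(x, ddof=1)
                   + (ny - 1) * np.var(y, ddof=1))
                  / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical scores for two conditions
g1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
g2 = np.array([16.0, 18.0, 17.0, 15.0, 19.0])
print(cohens_d(g1, g2))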

>Well, one more finally, why in the world would I want to do an 
>omnibus post-hoc when I have a hypothesis driving planned 
>comparisons and how does all this work out in SPSS?

The simple answer, I think, is that once you know what equations
you need to use for the procedure you're doing, you use SPSS
to provide you with the components of the test you want to do
and do the rest by hand.  That way you're assured that the analysis
you want done is actually being done (it's not always clear what
SPSS is doing or why it is doing it).  If one is expert in SPSS
programming, especially in the use of its matrix manipulation
procedure and in the use of scripts, I imagine that one can
make SPSS jump through these hoops.  Otherwise, it may make
more sense to select a specific test that one can do by hand (or
program the equation either in SPSS or another program like
Excel) and use the components from the SPSS analysis of variance
procedure necessary for the test (e.g., the Mean Square error from 
the ANOVA -- ignoring the F-tests, since planned comparisons
imply that one isn't interested in these).
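
For example, a planned contrast can be assembled entirely
from the cell means, cell sizes, Mean Square error, and error
df copied out of the SPSS ANOVA table.  A minimal sketch in
Python -- every number below is hypothetical:

import numpy as np
from scipy import stats

# Hypothetical values read off an SPSS one-way ANOVA table
means    = np.array([12.0, 14.5, 11.0, 17.5, 12.8])  # cell means
n        = np.array([8, 8, 8, 8, 8])                 # cell sizes
ms_error = 4.2                                       # MS error
df_error = 35                                        # error df

# Planned contrast: level 4 vs. the average of levels 1 and 2
c = np.array([-0.5, -0.5, 0.0, 1.0, 0.0])  # weights sum to zero

psi = np.dot(c, means)                      # contrast estimate
se  = np.sqrt(ms_error * np.sum(c**2 / n))  # its standard error
t   = psi / se
p   = 2 * stats.t.sf(abs(t), df_error)      # two-tailed p
print(f"psi={psi:.2f}, t({df_error})={t:.2f}, p={p:.4f}")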

>YUCK! Why can't stats be what they were 30 years ago when 
>I was in grad school?

Because we've come a long way since then?  And though programs
like SPSS have also progressed, SPSS still doesn't seem to be able
to do certain analyses (e.g., those involving repeated measures)
in reasonable ways.

Just my 2 cents.

Take care,
-Mike Palij
New York University
[EMAIL PROTECTED]



---
To make changes to your subscription go to:
http://acsun.frostburg.edu/cgi-bin/lyris.pl?enter=tips&text_mode=0&lang=english
