1. From this I infer that what you want to do is compare the two groups (between subjects) at each of the five levels of the repeated factor (if you wanted to test the repeated factor at each of the two levels of the between factor, SPSS should have complied). In my limited experience with mixed designs like this, homogeneity of variance is iffy, so I am inclined to use individual rather than pooled error terms. This is easily accomplished by asking SPSS to do a good old-fashioned t test at each level of the repeated factor. Suppose that R1, R2, R3, R4, and R5 are the five variables coding the repeated effect and G is the grouping variable. Compare Means, Independent-Samples T Test, G is the grouping variable, R1 to R5 the test variables. OK. Worried that you might burn in hell if you allow familywise error to exceed .05? Just use a Bonferroni-adjusted criterion of .01, but be aware that Satan smiles every time we make a Type II error. Want an F instead of a t? As Mike suggested, just square the t. The p will be the same.
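Outside SPSS, the same idea -- five Welch t tests (individual error terms), a Bonferroni-adjusted .01 criterion, and the F = t-squared relation -- can be sketched in Python with scipy. The simulated data, variable names, and effect sizes below are illustrative assumptions, not part of the original question:

```python
# A minimal sketch (not SPSS) of five between-groups t tests, one per level
# of the repeated factor, with a Bonferroni-adjusted criterion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20
group = np.repeat([0, 1], n)                 # between-subjects factor G
data = {f"r{i}": rng.normal(loc=0.2 * i * group, scale=1.0)
        for i in range(1, 6)}                # five levels of the repeated factor

alpha_per_test = 0.05 / 5                    # Bonferroni: .05 / 5 = .01
for name, scores in data.items():
    # Welch t test (equal_var=False) uses individual rather than pooled
    # error terms, matching the concern about homogeneity of variance.
    t, p = stats.ttest_ind(scores[group == 0], scores[group == 1],
                           equal_var=False)
    f = t ** 2                               # squaring t gives F with the same p
    flag = "sig" if p < alpha_per_test else "ns"
    print(f"{name}: t={t:.3f}  F=t^2={f:.3f}  p={p:.4f}  ({flag})")
```

The per-test criterion of .01 holds the familywise error at no more than .05 across the five comparisons, at the cost of the Type II risk Karl jokes about.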
2. If you have only three groups, use Fisher's procedure. As Mike pointed out, it is more powerful. What he did not point out is that it does cap alpha familywise at the nominal level, so there is no good reason to use a more conservative procedure, unless you just really want to make Satan smile again. More than three groups? Use the REGWQ, which will hold the familywise error at no more than the nominal level and is more powerful than the Tukey. For special cases there may be better choices.

3. Some say it does not matter whether your comparisons are planned or not; others say it does. If you belong to the latter camp, you can just tell yourself that you planned to make every possible comparison among means and thus you don't have to worry about familywise error. :-)

4. If, as I suspect, you are simply comparing two means (five times), I can provide an SPSS script that will compute the value of g (an estimate of Cohen's d) and put a confidence interval on it. "Percentage of variance explained" statistics are commonly misinterpreted, so I avoid them if I can. Deciding between eta-squared (or the similar omega-squared) and partial eta-squared can be a challenge -- can you or can you not justify removing from the total variance the variance accounted for by the other factor(s)? With partial eta-squared in a factorial design you can end up accounting for over 100% of the variance.

5. To make Satan smile again. You would probably not have much difficulty convincing me that the omnibus test is silly and that a set of focused contrasts that address your research questions is the better way to go.

Cheers,
Karl W.

-----Original Message-----
From: Mike Palij [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 05, 2007 11:33 PM
To: Teaching in the Psychological Sciences (TIPS)
Cc: Mike Palij
Subject: [tips] re: SPSS help

1. >First of all, I am running a mixed ANOVA with one repeated
>measures variable with 5 levels and one between measures variable
>with 2 levels.
>I wanted to run planned comparisons but SPSS 12
>won't let me. It tells me that I need at least 3 groups and that I
>don't have three groups. Can someone explain this to me and
>tell me how to run my analysis?

2. >Second, SPSS has several (about 12) different planned
>comparisons I can run. I know that some are more conservative
>and some less conservative, but how does one decide
>between so very many which ones to run?

3. >Third, for planned comparisons, can't I just run t-tests for
>the comparisons of interest?

4. >Then how do I get effect size analyses? Effect size
>analyses in SPSS seem to be tied to post-hoc comparisons. Is
>it sufficient to say that my confidence intervals don't overlap?

5. >Well, one more finally, why in the world would I want to do an
>omnibus post-hoc <test> when I have a hypothesis driving planned
>comparisons and how does all this work out in SPSS?

---
To make changes to your subscription go to:
http://acsun.frostburg.edu/cgi-bin/lyris.pl?enter=tips&text_mode=0&lang=english
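P.S. For the effect size question (point 4 above), here is a rough Python sketch -- not the SPSS script Karl offers -- of Hedges' g, the small-sample bias-corrected estimate of Cohen's d, with an approximate large-sample confidence interval. An exact interval would invert the noncentral t distribution instead; the sample data are made up for illustration:

```python
# Hedges' g for two independent groups, with an approximate normal-theory CI.
import numpy as np
from scipy import stats

def hedges_g_ci(x, y, conf=0.95):
    """Bias-corrected standardized mean difference with an approximate CI."""
    n1, n2 = len(x), len(y)
    df = n1 + n2 - 2
    s_pooled = np.sqrt(((n1 - 1) * np.var(x, ddof=1) +
                        (n2 - 1) * np.var(y, ddof=1)) / df)
    d = (np.mean(x) - np.mean(y)) / s_pooled     # Cohen's d
    j = 1 - 3 / (4 * df - 1)                     # small-sample bias correction
    g = j * d                                    # Hedges' g
    # Large-sample standard error of d; an exact CI uses the noncentral t.
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return g, (g - z * se, g + z * se)

rng = np.random.default_rng(1)
g, (lo, hi) = hedges_g_ci(rng.normal(0.5, 1, 25), rng.normal(0, 1, 25))
print(f"g = {g:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Unlike "percentage of variance explained" statistics, g is in standard-deviation units, and the interval answers the non-overlap question directly: if the CI for g excludes zero, the two means differ at the corresponding alpha.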