Thanks for your answer...

Donald Burrill wrote:
> The mean you calculate for a subject is the proportion of times that
> person chose "1". If you are willing to assume that the 8 decisions
> are repeats of the same performance (at least in the sense of being
> exchangeable), the number of "1"s is binomially distributed. Unless
> the aforementioned assumption is wildly incorrect, the proportions of
> the 20 Ss may be assumed to be approximately normally distributed
> with mean P (the population proportion) and variance P(1-P)/N. You
> can then apply standard tests of the hypotheses
>   (1) that P(experimental Ss) = P(control Ss);
>   (2) that P(experimental Ss) = 0.5;
>   (3) that P(control Ss) = 0.5.
> I presume that your experimental manipulation implies two groups of
> 10 Ss each. This may be a little thin for the normal approximation.
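Just to check that I follow: the normal-approximation test you describe
would, I think, look roughly like the Python sketch below. I am only
looking at your hypothesis (2), P = 0.5, since that is the case I have,
and the 0/1 decisions in the sketch are random placeholders, not my
real data.

  # Sketch of the normal-approximation test of H0: P = 0.5 described
  # above.  The decision matrix is a random placeholder (20 Ss x 8
  # decisions), not the real data.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  decisions = rng.integers(0, 2, size=(20, 8))   # placeholder 0/1 choices

  subject_props = decisions.mean(axis=1)         # proportion of "1"s per S
  p0 = 0.5

  # Under H0 (and exchangeability) each subject proportion has variance
  # p0*(1-p0)/8, so the mean of the 20 proportions has standard error
  # sqrt(p0*(1-p0)/160).
  se = np.sqrt(p0 * (1 - p0) / decisions.size)
  z = (subject_props.mean() - p0) / se
  print("z =", z, " two-sided p =", 2 * stats.norm.sf(abs(z)))

If I am not mistaken, this is numerically the same as pooling all 160
decisions into one binomial sample, which is presumably why the
exchangeability assumption matters.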
To your question about the groups: actually no, I have one experimental
group and no control group, and I want to test whether the mean of the
'individual subject means' differs from a given value (here 0.5). So my
H0 would be
  -> that P(experimental Ss) = 0.5,
and the one-tailed alternative would be
  -> that P(experimental Ss) > 0.5.

> If that worries you, you could carry out the test assuming only that
> the binomial distribution applies (which entails rather more
> computation) and compare those results with results obtained assuming
> normality.
>
> On Thu, 26 Sep 2002, Jan Malte Wiener wrote:
>
>>I have data that I do not exactly know how to statistically analyse:
>>
>>Subjects are repeatedly asked to make a decision (e.g. left-right ->
>>coded as 0 or 1). I have 20 subjects, and each subject made 8
>>decisions.
>>
>>I now want to analyse whether my experimental manipulation induced a
>>systematic bias in the subjects' answers. If it did not, I would
>>expect answers at chance level, 0.5 (50% left, 50% right).
>>
>>The way I am analysing my data right now is to calculate the mean of
>>the single trials for each subject (mean of (0,1,1,1,1,0,0,1) =
>>0.625). I then have a vector of single subjects' preferences.
>>
>>If this distribution were normally distributed, I could perform a
>>one-sample t-test against the chance level (e.g. 0.5).
>
> Did the experimental manipulation apply to all 20 subjects, or did
> you have a control group that was not manipulated?
>
>>Obviously my data are not normally distributed ->
>
> As remarked above, they may be approximately normal.
>
>>So I guess my question really is: which non-parametric test tests a
>>distribution against a given theoretical value?
>
> You mean, "test the mean of a distribution vs. a given value"?

yes

>>Someone told me to use a 1-sample Wilcoxon signed-rank test ???
>
> Could do, I suppose; but the basic rank-sum tests don't perform well
> when there are lots of ties, and you only have 9 possible values (0/8
> to 8/8) for each of your 20 Ss. And the "large-sample" form of the
> Wilcoxon assumes approximate normality anyway.

I thought the Wilcoxon test was a non-parametric test and would
therefore make no or minimal assumptions about the distribution of the
data? (I have put a quick side-by-side of the exact binomial test, the
t-test, and the Wilcoxon test below my signature.)

greetinx
jan wiener

--
Jan Malte Wiener
Max-Planck-Institute for Biological Cybernetics
Spemannstr. 38, 72076 Tuebingen, Germany
Tel.: +49 7071 601 631
Email: [EMAIL PROTECTED]
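P.S. Here is a rough side-by-side, in Python, of the three tests we
have been discussing (exact binomial on the pooled decisions, one-sample
t-test on the subject means, and the Wilcoxon signed-rank test), again
on random placeholder data, so the printed numbers only illustrate the
mechanics.

  # Compare the exact binomial test on the pooled decisions, the
  # one-sample t-test on the 20 subject means, and the Wilcoxon
  # signed-rank test against the chance level 0.5.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  decisions = rng.integers(0, 2, size=(20, 8))   # placeholder 0/1 choices
  subject_props = decisions.mean(axis=1)

  # Exact binomial test on all 160 decisions pooled (relies on the
  # exchangeability assumption; needs scipy >= 1.7 for binomtest).
  exact = stats.binomtest(int(decisions.sum()), n=decisions.size, p=0.5)
  print("exact binomial p =", exact.pvalue)

  # One-sample t-test of the subject means against 0.5.
  t = stats.ttest_1samp(subject_props, popmean=0.5)
  print("t-test p =", t.pvalue)

  # Wilcoxon signed-rank test of the subject means against 0.5; with
  # only nine possible values (0/8 ... 8/8) there are many ties, and
  # scipy may warn and fall back to its normal approximation, which
  # echoes the point about the large-sample form above.
  w = stats.wilcoxon(subject_props - 0.5)
  print("Wilcoxon p =", w.pvalue)

Swapping the real 20 x 8 decision matrix in for the placeholder is all
that is needed to run this on the actual data.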
