My two cents: Decide on what is the smallest effect that you would
consider to be of importance. If you think Type I and Type II errors are
equally serious, then set both alpha and beta to .05, that is, find N for 95%
power. G*Power does this with ease, but you are unlikely to like the answer.
If precision of estimation of the effect size is of importance, even
bigger samples are better (narrower confidence intervals for the effect size).
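To see why the answer for 95% power is unlikely to please, here is a quick sketch using the large-sample normal approximation n = 2(z_{1-alpha/2} + z_{power})^2 / d^2 per group (an approximation, not G*Power's exact noncentral-t computation, which will run a participant or two higher; the function name is mine):

```python
from math import ceil
from statistics import NormalDist

nd = NormalDist()

def approx_n_per_group(d, alpha=0.05, power=0.95):
    """Large-sample normal approximation to the per-group n for a
    two-tailed independent samples t test."""
    z_a = nd.inv_cdf(1 - alpha / 2)   # critical z for two-tailed alpha
    z_b = nd.inv_cdf(power)           # z corresponding to the desired power
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# For a medium effect (d = .50), demanding 95% power instead of the
# conventional 80% costs about two-thirds more participants per group:
print(approx_n_per_group(0.50, power=0.80))   # 63
print(approx_n_per_group(0.50, power=0.95))   # 104
```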
With respect to the independent samples t test, you can use the procedure
that G*Power specifies for that design, with d as the effect size, or you can
use the point-biserial correlation procedure, with r as the effect size.
Do note that the size of the point-biserial r is greatly affected by the
ratio of the two sample sizes, which is not true of d. See
http://core.ecu.edu/psyc/wuenschk/StatHelp/d-r.htm
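The dependence of the point-biserial r on the group-size split can be seen with the standard d-to-r conversion, r = d / sqrt(d^2 + 1/(p*q)), where p and q are the proportions of cases in the two groups (a sketch; the function name is mine, and the formula ignores the small df correction):

```python
from math import sqrt

def d_to_r_pointbiserial(d, n1, n2):
    """Approximate point-biserial r implied by a given Cohen's d
    and the two group sizes: r = d / sqrt(d^2 + 1/(p*q))."""
    p = n1 / (n1 + n2)   # proportion of cases in group 1
    q = n2 / (n1 + n2)   # proportion of cases in group 2
    return d / sqrt(d ** 2 + 1.0 / (p * q))

# The same d = .53 yields a noticeably smaller point-biserial r
# as the group sizes become more unequal:
print(round(d_to_r_pointbiserial(0.53, 50, 50), 3))   # balanced 50/50: 0.256
print(round(d_to_r_pointbiserial(0.53, 90, 10), 3))   # lopsided 90/10: 0.157
```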
When discussing issues of power and effect size, always pay attention
to speakers from NYU. :-)
Cheers,
Karl L. Wuensch
-----Original Message-----
From: Mike Palij [mailto:[email protected]]
Sent: Tuesday, August 27, 2013 3:06 PM
To: Teaching in the Psychological Sciences (TIPS)
Cc: Michael Palij
Subject: RE: [tips] Sample Size: How to Determine it?
I was going to stay out of this discussion but I have to address a couple of
points, one of which is made by Rick at the end of his post:
(1) The major problem with power analysis is that it requires one to have
knowledge of POPULATION PARAMETERS, that is, the means, standard deviations,
correlations, and so on. NOTE: a researcher has sample data from which
descriptive and inferential statistics are calculated; these will contain
sampling error, and possibly other kinds of error, that can make the sample
estimates of the mean, standard deviation, correlation, etc., misleading. The
proper thing to do before collecting the data is to conduct an a priori power
analysis. But an a priori power analysis assumes that one knows the relevant
population means, standard deviations, correlations, effect size, and so on
that are involved. This is a problem because far too many researchers don't
have a clue what these values are or should be. If you don't know what the
population parameters are, step away from the data and let a professional try
to do something with it.
(2) Rick Froman below refers to Russ Lenth's website where one can use his
software for some calculations -- I suggest one use G*Power instead -- as well
as his position that retrospective or observed power analysis is bad, m'kay? I
suggest that one instead read Geoff Cumming's "Understanding the New
Statistics," which goes into much more detail about effect sizes, confidence
intervals, and meta-analysis -- all of which are inter-related; see:
http://www.amazon.com/Understanding-The-New-Statistics-Meta-Analysis/dp/041587968X/ref=sr_1_1?ie=UTF8&qid=1377628036&sr=8-1&keywords=cummings+meta-analysis
Cumming makes a stronger argument than Lenth. However, I would also suggest
that one read my review of Cumming's book in PsycCRITIQUES, which takes issue
with the anti-retrospective or anti-observed power analysis position; see:
Palij, M. (2012). New statistical rituals for old. PsycCRITIQUES, 57(24).
(3) Pragmatically, most psychologists who do statistical analysis rely almost
solely on the sample information to reach conclusions about the population
parameters. This is where concerns arise about whether one's obtained
statistic, such as a t, is statistically significant, and about what to do if
one has p(obt t) = .06.
The p-value doesn't really matter if you know that the two sample means you
have come from different populations, right?
Which is why one is urged to use confidence intervals instead.
But psychologists will look at the observed power level provided by SPSS's
MANOVA or GLM procedures after they have done an ANOVA, because they did not
select the power level before they collected their data. And it is only then
that they might realize: Oops! I don't really have enough statistical power to
reject a false null hypothesis.
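The crux of the objection to that habit is that "observed power" computed by plugging the obtained effect back in, as if it were the population effect, is just a re-expression of the p-value. A sketch under the large-sample normal approximation (the function name is mine):

```python
from statistics import NormalDist

nd = NormalDist()

def observed_power(p_two_tailed, alpha=0.05):
    """'Observed' (post hoc) power obtained by treating the sample effect
    as the population effect, via the large-sample normal approximation.
    Note that it depends only on the obtained p-value and alpha."""
    z_obt = nd.inv_cdf(1 - p_two_tailed / 2)   # |z| implied by the p-value
    z_crit = nd.inv_cdf(1 - alpha / 2)         # two-tailed critical z
    return nd.cdf(z_obt - z_crit) + nd.cdf(-z_obt - z_crit)

# A result at exactly p = .05 implies observed power of about .50, and
# p = .06 implies a bit less -- no new information beyond p itself:
print(round(observed_power(0.05), 2))   # 0.5
print(round(observed_power(0.06), 2))   # 0.47
```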
But this is an old tale that all Tipsters should be familiar with given our
current statistical practices -- see Cumming's book if one needs a refresher on
what some consider proper statistical analysis in contemporary psychological
research.
Then, again, really knowing the phenomenon you're studying and having strong
theory, such as signal detection theory in psychophysics or recognition memory
research, may go a much longer way than wondering whether one has a
statistically significant result.
-Mike Palij
New York University
[email protected]
------------ Original Message ---------------- On Tue, 27 Aug 2013 10:53:07
-0700, Rick Froman wrote:
I am assuming this was an independent samples t test where some participants
heard the "mother nature" language and others didn't. Using the d of .53 they
obtained as my estimate of what effect size they would be interested in
obtaining (or that they think would be worthwhile to note), it appears that,
with a df of 50, they had less than a 50/50 chance of finding a significant
result of that size if one existed in the population. As others have pointed
out, you need to determine, before the study begins, what effect size you are
interested in detecting. For example, you may believe that even a .05 effect
size (1/20th of a standard deviation difference between the two means) could be
meaningful given the question. If so, you are going to need a very large sample
size to have a high probability of finding a significant result if such a small
difference exists in the population. By my calculations*, if you wanted to have
at least an 80 percent chance of detecting an effect size of at least .50 (half
a standard deviation difference between the means) with an independent samples
t test, you would need to have 128 participants in the study (64 in each
group). If you wanted an 80% chance of detecting a .05 effect size (1/20th of a
standard deviation) in such a case, you would need 12560 participants (6280 in
each group).
*My power calculations came from
http://homepage.stat.uiowa.edu/~rlenth/Power/ .
The author has a nice discussion of power and why retrospective power analysis
is worthless under the Advice section on that page.
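For what it's worth, Rick's figures can be reproduced to within a participant or two with the large-sample normal approximation n = 2(z_{1-alpha/2} + z_{power})^2 / d^2 per group (a sketch, not Lenth's exact noncentral-t computation; the function name is mine):

```python
from math import ceil
from statistics import NormalDist

nd = NormalDist()

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-tailed independent samples t test
    via the large-sample normal approximation; the exact noncentral-t
    answer runs a participant or so higher for small samples."""
    z_a = nd.inv_cdf(1 - alpha / 2)   # two-tailed critical z
    z_b = nd.inv_cdf(power)           # z corresponding to the desired power
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

print(n_per_group(0.50))   # 63 per group (exact t-based answer: 64)
print(n_per_group(0.05))   # 6280 per group, matching the figure above
```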