Mike Granaas wrote:
I think that we might agree: I would say that studies need a clear a
priori rationale (theoretical or empirical) prior to being conducted. It
is only in that context that effect sizes can become meaningful. If a ...
Even then, standardized effect sizes may not be very ...
I remember reading somewhere about different effect size measures,
and now I found the spot: a book by Michael Oakes, U. of Sussex,
Statistical Inference, 1990. The measures were (xbar - ybar)/s,
proportion misclassified, r squared (biserial corr.), and w squared
(which I think means the same as Rsq ...
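For concreteness, a rough sketch of those four measures for two independent samples, in Python. This is my reading of the list, not Oakes's own definitions; in particular, the midpoint-cut rule below is just one simple way to operationalize "proportion misclassified":

import numpy as np

def effect_sizes(x, y):
    """d, proportion misclassified, r squared, and w squared for two groups."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # pooled standard deviation
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp                  # (xbar - ybar) / s
    # proportion misclassified if cases are assigned to groups by a
    # cut at the midpoint of the two means
    cut = (x.mean() + y.mean()) / 2
    hi, lo = (x, y) if x.mean() >= y.mean() else (y, x)
    pmis = (np.sum(hi < cut) + np.sum(lo >= cut)) / (nx + ny)
    # r squared: correlate the scores with a 0/1 group code
    g = np.concatenate([np.ones(nx), np.zeros(ny)])
    r2 = np.corrcoef(g, np.concatenate([x, y]))[0, 1] ** 2
    # w squared (omega squared) from the two-group layout
    t2 = d ** 2 * nx * ny / (nx + ny)               # t statistic squared
    w2 = (t2 - 1) / (t2 + nx + ny - 1)
    return d, pmis, r2, w2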
I think some other folks are being sloppy about effect sizes.
Power Analysis for the Social Sciences is a book that
defines small, medium, and large effects in terms that are
convenient and *usually* appropriate
for the *social sciences* -- it makes no pretense
that these are universally ...
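For reference, the conventional anchors in that book are, if I recall the defaults correctly, d of about .2, .5, and .8 (and r of about .1, .3, and .5) for small, medium, and large:

# Cohen's conventional anchors -- convenient defaults for the social
# sciences, not universal standards (exactly the point above)
COHEN_D = {"small": 0.2, "medium": 0.5, "large": 0.8}
COHEN_R = {"small": 0.1, "medium": 0.3, "large": 0.5}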
Hi, this is about Jim Clark's reply to dennis roberts.
On 12 Sep 2001, dennis roberts wrote:
At 07:23 PM 9/12/01 -0500, jim clark wrote:
What your table shows is that _both_ dimensions are informative.
That is, you cannot derive effect size from significance, nor
significance from effect size ...
Rolf Dalin wrote:
Yes it would be the same debate. No matter how small the p-value it
gives very little information about the effect size or its practical
importance.
Neither do standardized effect sizes.
Thom
Hi
On 13 Sep 2001, Rolf Dalin wrote:
Hi, this is about Jim Clark's reply to dennis roberts.
I'm not sure how "both informative" gets translated into "neither
very informative". Seems like a perverse way of thinking to me.
Moreover, your original question was then what benefit is there
to ...
At 02:33 PM 9/13/01 +0100, Thom Baguley wrote:
Rolf Dalin wrote:
Yes it would be the same debate. No matter how small the p-value it
gives very little information about the effect size or its practical
importance.
Neither do standardized effect sizes.
agreed ... of course, we would all be ...
On Thu, 13 Sep 2001, Paul R. Swank wrote in part:
Dennis said:
other than being able to say that the experimental group ... ON AVERAGE ...
had a mean that was about 1.11 times (control group sd units) larger than
the control group mean, which is purely DESCRIPTIVE ... what can you say
that is important? ...
jim clark wrote:
Sometimes I think that people are looking for some magic
bullet in statistics (i.e., significance, effect size,
whatever) that is going to avoid all of the problems and
misinterpretations that arise from existing practices. I think
that is a naive belief and that we ...
Hi
On 13 Sep 2001, Herman Rubin wrote:
jim clark [EMAIL PROTECTED] wrote:
Or consider a study with a small effect size that is significant.
The fact that the effect is significant indicates that some
non-chance effect is present and it might very well be important
theoretically or even ...
Hi
I found the Rosenthal reference that addresses the following
point:
On 13 Sep 2001, Herman Rubin wrote:
The effect size is NOT small, or it would not save more
than a very small number of lives. If it were small,
considering the dangers of aspirin, it would not be used
for this purpose.
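The figures usually cited for that aspirin trial (the Physicians' Health Study; check the original 1988 report before leaning on these numbers) make both points at once -- the standardized effect is tiny while the practical effect is not. A quick sketch:

import math

# MI counts / sample sizes as usually quoted from the 1988 report
aspirin_mi, aspirin_n = 104, 11037
placebo_mi, placebo_n = 189, 11034

p1, p2 = aspirin_mi / aspirin_n, placebo_mi / placebo_n
print(f"risk ratio ~ {p1 / p2:.2f}")     # about 0.55: risk nearly halved

# phi coefficient for the 2x2 table -- the "tiny" standardized effect
a, b = aspirin_mi, aspirin_n - aspirin_mi
c, d = placebo_mi, placebo_n - placebo_mi
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi ~ {abs(phi):.3f}, phi squared ~ {phi * phi:.4f}")  # ~ .034, ~ .001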
here are some data ... say we randomly assigned 30 Ss ... 15 to each
condition and found the following:
MTB > desc c1 c2

Descriptive Statistics: exp, cont

Variable    N    Mean  Median  TrMean  StDev  SE Mean
exp        15   26.13   27.00   26.00    ...      ...
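To make the arithmetic behind the "1.11 control group sd units" figure quoted elsewhere in the thread explicit: the effect size comes straight from summary statistics like these. The listing truncates the output, so the numbers below are placeholders, not the real c1/c2 data:

# summary stats; sd_exp and the whole cont row are hypothetical here
mean_exp, sd_exp, n_exp = 26.13, 5.0, 15
mean_con, sd_con, n_con = 20.0, 5.5, 15

# control-group sd as the unit (Glass's delta)
d_control = (mean_exp - mean_con) / sd_con

# pooled sd as the unit (Cohen's d)
sp = (((n_exp - 1) * sd_exp ** 2 + (n_con - 1) * sd_con ** 2)
      / (n_exp + n_con - 2)) ** 0.5
d_pooled = (mean_exp - mean_con) / sp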
Dennis Roberts [EMAIL PROTECTED] wrote:
: given a simple effect size calculation ... some mean difference compared to
: some pooled group or group standard deviation ...
: that is ... can we not get both NS or sig results ... when calculated
: effect sizes are small, medium, or large?
: if that is true ... then what benefit is there to look at ...
On Thu, 13 Sep 2001, Dennis Roberts wrote:
... see the article that focuses on this even if they do report effect sizes ...)
what we need in all of this is REPLICATION ... and the accumulation of
evidence about the impact of independent variables that we consider to have
important potential ...
Dennis said:
other than being able to say that the experimental group ... ON AVERAGE ...
had a mean that was about 1.11 times (control group sd units) larger than
the control group mean, which is purely DESCRIPTIVE ... what can you say
that is important?
However, can you say even that unless it ...
In article [EMAIL PROTECTED],
jim clark [EMAIL PROTECTED] wrote:
Hi
On 13 Sep 2001, Rolf Dalin wrote:
Hi, this is about Jim Clark's reply to dennis roberts.
Sometimes I think that people are looking for some magic
bullet in statistics (i.e., significance, effect size, whatever) ...
In article [EMAIL PROTECTED],
jim clark [EMAIL PROTECTED] wrote:
Hi
On 12 Sep 2001, Dennis Roberts wrote:
that is ... can we not get both NS or sig results ... when calculated
effect sizes are small, medium, or large?
if that is true ... then what benefit is there to look at
significance AT ALL ...
given a simple effect size calculation ... some mean difference compared to
some pooled group or group standard deviation ... is it not possible to
obtain the following combinations (assuming some significance test is done)
effect size
small ...
At 04:04 PM 9/12/01 -0400, you wrote:
if that is true ... then what benefit is there
to look at significance AT ALL
To get published, get tenure, and avoid having to live in a cardboard box
in the park. Ha ha!
Lise
Hi
On 12 Sep 2001, dennis roberts wrote:
At 07:23 PM 9/12/01 -0500, jim clark wrote:
What your table shows is that _both_ dimensions are informative.
That is, you cannot derive effect size from significance, nor
significance from effect size. To illustrate why you need both,
consider a study with small n that happened to get a ...
Hi
On 12 Sep 2001, Dennis Roberts wrote:
given a simple effect size calculation ... some mean difference compared to
some pooled group or group standard deviation ... is it not possible to
obtain the following combinations (assuming some significance test is done)
At 07:23 PM 9/12/01 -0500, jim clark wrote:
Hi
What your table shows is that _both_ dimensions are informative.
That is, you cannot derive effect size from significance, nor
significance from effect size. To illustrate why you need both,
consider a study with small n that happened to get a ...
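A quick way to see why the table needs both dimensions: with two equal groups, t = d * sqrt(n/2), so the same d can land on either side of .05 depending only on n. A sketch (scipy assumed; the d and n values are arbitrary illustrations):

from scipy import stats

def p_from_d(d, n):
    """Two-sided p for a two-group t test, n per group, effect size d."""
    t = d * (n / 2) ** 0.5
    return 2 * stats.t.sf(abs(t), df=2 * n - 2)

# small & significant, small & NS, large & significant, large & NS
for d, n in [(0.2, 1000), (0.2, 15), (1.1, 15), (1.1, 4)]:
    print(f"d={d:<4} n={n:<5} p={p_from_d(d, n):.4f}")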