On Wed, 02 Mar 2016 14:31:18 -0800, Lenore Frigo wrote:
For a research methods class, I'm in search of some examples
where results report a Pearson's r with a confidence interval (with
or without a p-value/NHST). Finding such examples has been
surprisingly difficult (searches hit articles about confidence
intervals,
not those that happen to report them).
About 20 or so years ago I asked a senior researcher in public
health with whom I was doing research the following:
"Why do researchers report the odds-ratio with its confidence
interval but they don't do the same for the Pearson r?"
NOTE: this was for journal publishing research on HIV/AIDS
and substance use.
His answer was that it was simply the reporting style people
were used to, though the confidence interval for the Pearson r
should be reported (we didn't -- when in Rome....).
I think that something similar has occurred in psychology. The
Pearson r is one of the oldest statistics we have and pre-dates
the concept of confidence interval by decades, so there is a
history of not reporting the confidence interval. When Neyman
came up with the confidence interval, using it implied that one was
in Neyman's "camp" in contrast to Sir Ronald Fisher's "camp,"
where confidence intervals were considered to be as dumb
as a bag of hammers. Fisher argued that the confidence interval
was a ridiculous concept because it was based on the
belief that one would replicate the study 100 times.
Remember: the confidence interval does not provide
the probability that the interval contains the population parameter
of interest (it either contains it [p = 1.00] or it doesn't [p = 0.00]);
rather, it says that if the study/process that produced the confidence
interval were repeated 100 times, 95% of these new intervals
would contain the population parameter (that is, if one uses a
95% confidence interval). Fisher argued that confidence intervals
were appropriate for a manufacturing practice that puts out
a large number of samples and not individual experiments.
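A quick simulation makes the repeated-sampling definition concrete. This is a minimal sketch of my own (not from the original post): draw many samples from a population with a known mean, compute a 95% interval from each sample, and count how often the intervals contain the true mean. The population values and sample size are made up for illustration, and the interval uses a known sigma to keep the sketch simple.

```python
import numpy as np

# Simulate the frequentist coverage claim: roughly 95% of the
# 95% intervals computed from repeated samples will contain
# the true population mean.
rng = np.random.default_rng(0)
true_mean, sigma, n = 10.0, 2.0, 30   # made-up illustration values
z = 1.96                              # normal critical value for 95%

reps = 10_000
covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sigma, n)
    half_width = z * sigma / np.sqrt(n)   # known-sigma interval for simplicity
    m = sample.mean()
    if m - half_width <= true_mean <= m + half_width:
        covered += 1

print(covered / reps)  # close to 0.95
```

Note that any one interval either contains the true mean or it doesn't; the 95% describes the long-run behavior of the procedure, not the single interval in hand.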
Fisher attempted to come up with something called fiducial intervals
which would represent an interval with a 95% chance of
containing the population parameter but this turns out to be
much more difficult to do, and Fisher did not come up with
a useful solution.
For the history of these ideas see the following book:
Lehmann, E. L. (2011). Fisher, Neyman, and the creation of
classical statistics. New York, NY: Springer.
However, as Lehmann points out, most people interpret
confidence intervals as though they are fiducial intervals,
something that distressed both Neyman and Fisher. The
reason, I think, is obvious: the Neyman definition doesn't
really make much sense (who is going to replicate a study
100 times?), while the Fisherian definition does make sense
but does not apply to confidence intervals.
So, I think that there is a basic argument about whether
one should really report confidence intervals at all. For a
single correlation it provides the same information as the
t-test for the Pearson r, namely, whether the Pearson r equals
zero. If one is seriously interested in the variability of the
Pearson r, that's why God created the standard error which,
conceptually, may be easier to understand than a confidence
interval.
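To make that equivalence concrete, here is a minimal sketch of my own (not from the original post) of the standard Fisher z-transformation interval for a single Pearson r, alongside the t statistic for testing r = 0: an interval that excludes zero corresponds to a significant t-test. The values r = 0.45 and n = 50 are made up for illustration.

```python
import math

def pearson_r_ci_95(r, n):
    """Approximate 95% CI for Pearson's r via the Fisher z-transformation."""
    z = math.atanh(r)             # z = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)   # approximate standard error of z
    crit = 1.96                   # normal critical value for 95%
    # Transform back to the r scale with tanh.
    return math.tanh(z - crit * se), math.tanh(z + crit * se)

def r_t_stat(r, n):
    """t statistic (df = n - 2) for testing H0: rho = 0."""
    return r * math.sqrt((n - 2) / (1 - r * r))

lo, hi = pearson_r_ci_95(0.45, 50)   # made-up illustration values
print(lo, hi)                        # interval excludes zero here...
print(r_t_stat(0.45, 50))            # ...and, consistently, t is well past 2
```

Notice the interval is asymmetric around r (wider toward zero), which is one thing the single-number standard error does not convey.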
I'd greatly appreciate any leads on examples that have r and
confidence intervals reported. Or even any suggestions for how
to search for that sort of thing? (Or much more broadly, any thoughts
on teaching CIs and going beyond NHST?)
Like I say above, it has not become standard practice to
report confidence intervals for individual correlations, so
I doubt that you'll find too many examples (especially in situations
where the researcher cherry-picked the correlation from a correlation
matrix and would have to calculate the CI by hand). It is easier
to find confidence intervals for the intraclass correlation coefficients and
other statistics where it has become standard practice to do
so (that is, an agreed upon statistical ritual has been developed).
One proponent of the use of CIs and related statistics is Geoff
Cumming, and you might want to look at his book; here it is
on Amazon:
http://www.amazon.com/gp/product/041587968X?keywords=cumming%20%26%2334%3Bnew%20statistsics%26%2334%3B&qid=1456961421&ref_=sr_1_fkmr0_2&sr=8-2-fkmr0
For a contrary view, you can read my review of Cumming's
book; see:
https://www.researchgate.net/publication/236866116_New_statistical_rituals_for_old
Ultimately, it comes down to doing "mindful statistics", that is,
not relying on statistical rituals to guide one's statistical analysis.
-Mike Palij
New York University
[email protected]