Hi

For me the only "it" that is wrong is this article ... in many ways.

1.  Is it really the case that replication is rare in the social sciences?  I 
don't think so ... or else the increasing number of meta-analyses we have is 
somehow a fluke.  The authors do criticize meta-analyses (e.g., as valid only 
when similar protocols are used???), but I do not know that literature well.  
On the face of it, collapsing across diverse studies (i.e., dissimilar 
protocols??) and finding a common effect strikes me as stronger rather than 
weaker evidence for the effect.  Or one can search for moderator variables.
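
The pooling logic can be sketched with fixed-effect, inverse-variance 
weighting (the effect sizes and standard errors below are hypothetical):

```python
import math

# Hypothetical effect sizes (d) and standard errors from diverse studies
effects = [0.45, 0.30, 0.52, 0.38]
ses     = [0.20, 0.15, 0.25, 0.18]

# Fixed-effect meta-analysis: weight each study by 1 / SE^2
weights   = [1 / se**2 for se in ses]
pooled    = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# A common effect across dissimilar protocols, estimated with a smaller
# SE than any single study, reads as stronger, not weaker, evidence.
print(round(pooled, 3), round(pooled_se, 3))
```

The pooled standard error is smaller than that of any contributing study, 
which is the sense in which convergence across protocols strengthens the case.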

2.  As many people at the site mention, misuse of a tool (statistics, e=mc**2, 
...) says more about the users (and their instructors) than about the tool.  
And is citing a passage from one stats book evidence of "widespread" 
misinterpretation?

3.  The relationship between statistical significance and substantive 
importance is not at all a simple one.  In addition to ignoring effect size, 
as someone mentioned, the article fails to mention that small differences can 
be important (e.g., the aspirin study) and that large effect sizes absent 
statistical significance should be treated extremely cautiously (replication, 
anyone?).
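
Both halves of that point can be illustrated with a normal approximation to 
the two-sample test (the effect sizes and sample sizes are made up for the 
illustration):

```python
import math

def two_sample_p(d, n):
    """Approximate two-sided p for standardized effect d with n per group,
    using a normal approximation to the two-sample test statistic."""
    z = d * math.sqrt(n / 2)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

# Tiny effect, huge sample (the aspirin-study situation): significant
p_small_effect = two_sample_p(d=0.06, n=11000)

# Large effect, tiny sample: not significant -- treat cautiously
p_large_effect = two_sample_p(d=0.8, n=8)

print(p_small_effect < .05, p_large_effect > .05)
```

A d of 0.06 sails past p < .05 with thousands of participants, while a d of 
0.8 with eight per group does not; neither p value alone settles importance.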

4.  It is common in psychology (I don't know about other disciplines) to 
report exact p values, rather than a crude significant/nonsignificant verdict 
(p < .05), allowing one to determine just how unlikely the outcome is given 
some null hypothesis.  Is it really the case that we should treat all 
differences between groups the same irrespective of whether the p for the 
difference is .4, .1, .05, .0004, ...?  Does that tell us nothing, theoretical 
or applied, depending on the study?

5.  Although one cannot assign a number to the likelihood of replication, is 
it really the case that the p value is irrelevant to replication?  An outcome 
that is less likely to have occurred purely by chance appears more likely to 
replicate than one that is more likely to have occurred by chance, does it 
not?  Would we have the same expectation for a replication of a difference if 
its p value were .4, .1, .05, or .0004?
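
One crude way to make that intuition concrete: take the observed effect at 
face value as the true effect and ask how often an exact replication would 
reach p < .05.  This is only a sketch (it ignores uncertainty in the original 
estimate), but it shows the ordering:

```python
import math

def norm_cdf(x):
    return 0.5 * math.erfc(-x / math.sqrt(2))

def z_from_p(p):
    """Two-sided p -> |z|, found by bisection on the normal tail."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def replication_power(p, alpha=.05):
    """Chance an exact replication reaches p < alpha, taking the observed
    z at face value as the true effect."""
    z_obs, z_crit = z_from_p(p), z_from_p(alpha)
    return 1 - norm_cdf(z_crit - z_obs)

for p in (.4, .1, .05, .0004):
    print(p, round(replication_power(p), 2))
```

Under this (oversimplified) assumption, a study at p = .0004 is far more 
likely to replicate than one at p = .05, which in turn beats p = .4.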

6.  Replacing hypothesis testing with confidence intervals or whatever is 
quite a challenge when one considers such things as: higher-order 
interactions, partitioning effects and interactions into specific contrasts, 
looking for linear or other patterns in the data, ....  And as someone else 
mentioned, you still need to choose a confidence level (in effect, an alpha) 
for your confidence interval.
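
The equivalence is easy to show with made-up numbers: a 95% CI excludes zero 
exactly when the two-sided p is below .05, so the cutoff choice has not gone 
away, it has merely been relabeled:

```python
import math

# Hypothetical: mean difference 1.2, standard error 0.5
diff, se = 1.2, 0.5
z = diff / se

# A 95% CI uses z* = 1.96 -- the same cutoff as alpha = .05
ci = (diff - 1.96 * se, diff + 1.96 * se)
p  = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p

# The CI excludes 0 exactly when p < .05
print(ci, p < .05, ci[0] > 0)
```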

7. There are already "Bayes-like" elements to hypothesis testing.  Directional 
tests, planned contrasts, ... require less statistical evidence from the 
current study to conclude a pattern exists than do tests not guided by prior 
findings or theory.  What is this other than playing with prior probabilities?

8.  The criticism of "statistics" across science in general would appear to 
be undermined by the huge advances made in those sciences during the very 
period being criticized (not to say that one cannot cherry-pick bad 
examples).

9.  I would like to see either (a) a simulation or (b) actual analyses across 
a broad range of studies, with each of the recommended approaches to data 
analysis applied to all the simulated or actual studies, to document how much 
researchers' conclusions would differ under the different statistical 
approaches.  I predict the statistics are less important to research 
conclusions than this article proposes.
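
A toy version of (a), with all parameters invented for the sketch: simulate 
two-group studies and compare the conclusion under a p < .05 rule with the 
conclusion under a "95% CI excludes zero" rule.  Because the two rules are 
mathematically equivalent, they agree on every simulated study:

```python
import math
import random

random.seed(1)

def study_z(d, n):
    """Simulate one two-group study (unit variances); return the z for
    the mean difference."""
    g1 = [random.gauss(d, 1) for _ in range(n)]
    g2 = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(g1) / n - sum(g2) / n
    return diff / math.sqrt(2 / n)  # known SE, for simplicity

trials = 2000
agree = 0
for _ in range(trials):
    z = study_z(d=0.5, n=30)
    nhst    = abs(z) > 1.96                    # reject H0 at alpha = .05
    ci      = (z - 1.96, z + 1.96)             # 95% CI in z units
    ci_rule = not (ci[0] <= 0 <= ci[1])        # CI excludes zero
    agree += (nhst == ci_rule)
print(agree / trials)
```

At least for this pair of approaches, the choice of statistical framework 
changes no conclusions at all, consistent with the prediction above.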

Probably more comments if I were to spend more time with the article.  Perhaps 
it is just a coincidence, for example, but the medical examples given appear to 
be undermining studies that have been critical of certain drugs.

Take care
Jim






James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[email protected]

>>> <[email protected]> 21-Mar-10 10:22:51 AM >>>
Alerted by a colleague, I  recommend an instructive if 
depressing essay on the problematic use of statistics in science.

http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_w 
or http://tinyurl.com/yh7sk7r 

Teaser:

"Supposedly, the proper use of statistics makes relying on 
scientific results a safe bet. But in practice, widespread misuse 
of statistical methods makes science more like a crapshoot."

Stephen
--------------------------------------------
Stephen L. Black, Ph.D.          
Professor of Psychology, Emeritus   
Bishop's University               
e-mail:  sblack at ubishops.ca
2600 College St.
Sherbrooke QC  J1M 1Z7
Canada
-----------------------------------------------------------------------
