Excellent paper, and probably comprehensible for many of our undergraduate 
students. 

It is, of course, the foundational Bayesian argument against simplistic NHST, 
but it is one of the best expositions of the fundamental issues that I've 
seen. 

My only problem with the Bayesian approach, described elegantly in the article, 
is that the posterior probabilities are so dependent on the prior 
probabilities. Just look at the lovely diagram. There is a genuine problem in 
determining with any accuracy the prior probabilities of many of our findings. 
How do we go about getting our priors? By the way, this is also one of my 
problems with power analysis: 'estimating' an effect size is so difficult. 

I recently collected data for a project. The phenomenon has been shown before 
by others, but one early study didn't find the effect. If I'm doing my study 
only on that phenomenon, there's no new information and the paper is going 
to be very difficult to get published because it is not adding meaningfully to 
what is known (low respect for replication being a problem in our business). 
But what is my prior probability? I've got 3 papers (and one poster of my own 
from an earlier election) that say the effect is there and 1 paper that says 
the effect is not there. Is my prior probability 75%? Do I take into account 
the p-values from their studies? Do I conduct a Bayesian series of analyses to 
determine the current posterior probability after the sequence of papers has 
been published, and use that as my prior? 
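Just to make the question concrete: a crude sketch of that sequential updating, 
under the (admittedly naive) assumption that each published result is an 
independent yes/no observation of "the effect is real," ignoring the studies' 
power and p-values entirely. The `beta_update` function and the flat Beta(1, 1) 
starting prior are my own illustrative choices, not anything from the article:

```python
# Hypothetical sketch: treat each published result as a Bernoulli observation
# of "the effect is real". This ignores study quality, power, and p-values,
# which is exactly the difficulty raised above.

def beta_update(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: returns the updated (alpha, beta)."""
    return alpha + successes, beta + failures

# Start from a flat Beta(1, 1) prior on the probability the effect exists,
# then fold in 3 supportive papers and 1 null result.
alpha, beta = beta_update(1, 1, successes=3, failures=1)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.667 -- not the naive 3/4 = 0.75
```

Even this toy version shows why the answer isn't simply 75%: the flat prior 
pulls the estimate toward 50%, and any different choice of starting prior, or 
any weighting of the studies, would move it again.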

Furthermore, my goal in doing the study was to show why 3 papers show the 
effect and one does not, because I see a critical difference between them. The 
3 that show the effect all share one common characteristic, while the one 
failure to show the effect has a contrasting characteristic. There is a 
theoretical rationale for that characteristic to be a moderator of the effect. 
What is my prior probability for that moderating effect, particularly given 
that it has not been examined in this domain? 

In a sense I recognize that I'm arguing for us to somehow continue our general 
approach to research in psychology in much the same way as in the past: Hey! 
I've got an idea; what do we know about it? Let's design a study, let's analyze 
and publish. The Bayesian approach would suggest a much more systematic and 
careful approach, slowly building up knowledge in steps small enough that the 
prior probabilities are narrowly determinable… that is probably a good thing. 
But it would require a huge cultural shift that I am not sure we are willing 
to make. Probably some kind of transitional period is needed, in which 
researchers are expected to provide both kinds of analysis for top-level 
journals, with that expectation filtering down to lower-level journals over 
time, and then the old-style analysis no longer being accepted in top-level 
journals. 

Paul

On Feb 12, 2014, at 5:07 PM, Christopher Green wrote:

> An interesting article about the problems of p-values that might even be 
> understandable to undergraduates. 
> http://www.nature.com/news/scientific-method-statistical-errors-1.14700
> 
> Chris
> .......
> Christopher D Green
> Department of Psychology
> York University
> Toronto, ON M6C 1G4
> 
> chri...@yorku.ca
> http://www.yorku.ca/christo
> ---
> You are currently subscribed to tips as: pcbernha...@frostburg.edu.
> To unsubscribe click here: 
> http://fsulist.frostburg.edu/u?id=13441.4e79e96ebb5671bdb50111f18f263003&n=T&l=tips&o=34162
> or send a blank email to 
> leave-34162-13441.4e79e96ebb5671bdb50111f18f263...@fsulist.frostburg.edu
> 


