Let me make a few points:
(1) I have respect for Karl W., Pat S. and Jack C. I just
don't agree with everything they say. I hope that I am
not too disagreeable in my disagreements with them
(well, maybe not with Pat but that's another story).
(2) My first reaction to John Kulig's post (down below)
was to decide whether I should laugh or cry. Let's be
clear about what the editorial for the journal "Basic
and Applied Social Psychology" says because to say
that it bans NHST is an oversimplification.
(a) NO INFERENTIAL TESTS WILL BE ALLOWED.
The editorial is structured into Question-Answer format
and I quote the first question-answer here:
|Question 1. Will manuscripts with p-values be desk
|rejected automatically?
|Answer to Question 1. No. If manuscripts pass the
|preliminary inspection, they will be sent out for review.
|But prior to publication, authors will have to remove all
|vestiges of the NHSTP (p-values, t-values, F-values,
|statements about ''significant'' differences or lack
|thereof, and so on).
I don't know if this makes Karl happy or not because it is
the reverse of the "discrimination" that he experienced, but one
has to ask "Is it really the case that all inferential statistical
tests are invalid or bad?" What about permutation tests?
How will we know if a difference is due to systematic
effects or sampling error? What would Frederic Lord's
statistician do, the one who performed a statistical test
on the numbers of college football jerseys in order to
determine whether freshmen had systematically lower numbers
than the seniors? He is now in violation of Stevens and
the editors Trafimow and Marks! What to do?
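The permutation-test question can be made concrete. Below is a minimal sketch of a two-sample permutation test in Python, using made-up "jersey numbers" in the spirit of Lord's example (the data and function are hypothetical, not from the editorial or from Lord):

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the absolute difference in means.

    Returns the proportion of label shufflings whose mean difference
    is at least as large as the observed one -- an inferential answer
    to "systematic effect or sampling error?" that makes no normality
    assumption at all.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign the group labels
        new_a, new_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical jersey numbers: freshmen vs. seniors.
freshmen = [12, 15, 9, 21, 14, 11]
seniors = [44, 38, 52, 47, 41, 50]
p = permutation_test(freshmen, seniors)
```

Under the editorial's rule, even this distribution-free p-value would have to be stripped out before publication.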
(b) NO CONFIDENCE INTERVALS OR BAYESIAN ANALYSIS.
It should be obvious that confidence intervals suffer from whatever
problems one wants to claim about NHST (see the argument
made in the editorial as well as elsewhere) and Bayesian
analysis should be prohibited for reasons that only Bayesians
care about (trust me on this point; one historical point that
few people make is that both Fisher and Neyman rejected
Bayesian methods -- a popular position during the 19th century
and first half of the 20th century -- and attempts to resurrect it
may succeed only in limited situations where the assumptions
are met, a position that applies also to the Fisherian and Neyman
approaches).
(c) NO INFERENTIAL STATISTICAL PROCEDURES WILL BE
REQUIRED.
This point will probably be cheered most by researchers who
couldn't do inferential statistics if their lives depended upon it.
Now the editors, David Trafimow and Michael Marks, say that
this should NOT make it easy to publish in "Basic and Applied
Social Psychology" but they really do not indicate how
rigor will be assessed. Will researchers be required to
specify the probability distributions that their measurements
come from, given that this would determine which descriptive
statistics are most important to report (as well as to explain
why those distributions are appropriate)? I have no doubt
that many researchers will attempt to publish the worst kind of
crap research in this journal because the word will be out that
"the journal doesn't require statistical analysis!!!" This would
be comparable to the belief of a woman I knew back when I
was an undergraduate who said that she was applying only
to one psychology graduate program because they only taught
Bayesian statistics in the grad stat course and "Bayesian was
all subjective and not mathematical". Boy, was she surprised.
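The point about distributions determining which descriptives matter is easy to illustrate: for a skewed distribution the mean and median tell different stories. A hypothetical sketch (lognormal data standing in for something like reaction times):

```python
import random
import statistics

# Hypothetical skewed measurements: lognormal(0, 1), e.g. reaction-time-like data.
rng = random.Random(42)
skewed = [rng.lognormvariate(0.0, 1.0) for _ in range(10_000)]

mean_val = statistics.mean(skewed)      # pulled upward by the long right tail
median_val = statistics.median(skewed)  # robust indicator of the "typical" value

# For a lognormal(0, 1) population the mean is exp(0.5), about 1.65,
# while the median is exp(0) = 1.0 -- reporting only the mean would
# overstate the typical observation by roughly 65%.
```

A reviewer who doesn't know (or isn't told) what distribution the measurements come from has no principled way to say which of these numbers should be reported.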
I have no problem with purely descriptive or observational
research as long as it is planned as such -- Jane Goodall's
work with chimpanzees serves as a prototypical example.
However, manipulating variables in experimental or quasi-experimental
designs and then reporting only descriptive statistics is just
plain stupid. It is as though the editors don't want researchers
to distinguish between systematic differences and differences
due to random factors or sampling error.
(3) I consider this policy as dumb as Geoff Loftus' policy to enforce
only the use of confidence intervals while he was editor of the
journal "Memory and Cognition". The policy was eventually
abandoned and I'm willing to start a pool to predict what year
the above policy is significantly modified or abandoned.
I leave it to someone who cares to explain the comparable
stupidity of the APS journals requiring P-rep in results sections,
that is, until it was realized that P-rep didn't really do what people
thought it did.
(4) There are a few people around today who consider themselves
"Neo-Fisherians" because they base their statistical analysis on
a form of the original Fisherian inferential framework instead of
the Neyman-Pearson framework which, when it comes right down
to it, makes some pretty unreasonable assumptions (e.g., the
basis for using confidence intervals is the belief that if one, say,
replicated an experiment 100 times and one used a 95% CI,
then 95% of the intervals would contain the population parameter;
no researcher is ever going to do 100 replications to determine
if they are consistent [simulations don't count, live data does]
but this kind of thinking may be appropriate for manufacturing
where large numbers of samples of widgets are supposed to
have a mean width/circumference/whatever -- this is a classic
objection to the use of confidence intervals). Perhaps we will
experience a "Great Awakening" and have researchers convert
to neo-Fisherian practices and so on. Then again, some people
may not be willing to give up their hard-learned beliefs and will
continue to engage in the rituals they find most comfortable
with, be it t-tests or confidence intervals or effect size estimation
(without explaining what probability distributions they are using
and why they are using them) and so on. Instead of trying to
understand the nature of the phenomenon or situation that
they are describing (yes, I'm looking at you, experimental designs),
some people will engage in holy wars over which statistical
practices are God's truth and which are heresy. Y'know,
acting like schmucks.
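The repeated-sampling interpretation described in (4) can at least be demonstrated in simulation (with the caveat, noted above, that simulations aren't live data). A hypothetical sketch: draw many samples from a known normal population and count how often a 95% interval for the mean covers the true value.

```python
import math
import random
import statistics

def ci_coverage(mu=100.0, sigma=15.0, n=25, reps=2000, z=1.96, seed=1):
    """Estimate coverage of a z-based 95% CI for the mean.

    Draws `reps` samples of size `n` from Normal(mu, sigma) and returns
    the fraction of intervals containing mu. Under the Neyman-Pearson
    reading this should come out near 0.95 -- but no live researcher
    actually runs the replications, which is the classic objection.
    """
    rng = random.Random(seed)
    covered = 0
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(sample)
        half = z * sigma / math.sqrt(n)  # known-sigma interval, for simplicity
        if m - half <= mu <= m + half:
            covered += 1
    return covered / reps

coverage = ci_coverage()
```

Note that the 95% is a property of the interval-generating procedure over hypothetical replications, not of any single interval -- which is exactly the widget-manufacturing framing described above.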
On Tue, 24 Feb 2015 11:33:59 -0800, Karl L Wuensch wrote:
|Every time I have tried to publish a research manuscript without
|p values (but with effect size confidence intervals) I have been
|instructed to provide p values. It is as if it is not real science
|if there are not p values.
There's history on this point. Some editors require this, some
require that, and though at the time it may have seemed like
a good idea to apply a particular criterion (i.e., give me a p-value
or don't give me a p-value) only history will reveal which was
best (and in what sense). Prediction: both are partially right
and partially wrong, and future researchers will reanalyze the
data according to the then-current fashion.
NHST (or, as Cohen quipped, SHIT) was banned by the American
Journal of Public Health way back in the 80's. There have been
previous attempts to do the same in Psychology, with little effect
[that is, the confidence interval may exclude zero but both ends
are pretty close to zero :-) ].
Remember, Jack was a Fisherian before he converted to Neyman's
religion which was similar to a Catholic becoming an Episcopalian.
Remember: the distinctions that psychologists make about statistics
may not be similar to the distinctions that statisticians make. See
the quote by Dave Krantz in my book review of Cumming's book:
https://www.researchgate.net/publication/236866116_New_statistical_rituals_for_old
Background reading:
Shrout, P. E. (1997). Should significance tests be banned?
Introduction to a special section exploring the pros and cons.
Psychological Science, 8, 1-2.
Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J.
(2004). Editors can lead researchers to confidence intervals,
but can't make them think. Psychological Science, 15, 119-126.
Let me make a suggestion that goes beyond what Pat Shrout
and Geoff Cumming et al suggest:
Use the best or most appropriate statistical analysis you can for
the data you have, taking into consideration all of the factors that
may affect internal validity, external validity, and statistical conclusion
validity (fans of construct validity and other types of validity can
chime in but a guy has to draw a line somewhere). Remember that
any result you have is tentative and requires further support through
replication -- failure to replicate is a problem that is not discussed in
the editorial by Trafimow & Marks (a charitable attribution is that
they are bold enough in advocating the criterion they will use; an
uncharitable attribution is that this is one way to draw attention to
the publication and get more submissions).
And quoting David Bakan: "Don't be a schmuck."
-Mike Palij
New York University
[email protected]
On Tuesday, February 24, 2015 11:04 AM, John Kulig wrote:
FYI:
http://www.tandfonline.com/doi/full/10.1080/01973533.2015.1012991#.VOxksXZ=