Concerning the request, which has resulted in a number of
somewhat-relevant responses --
==============
Date: 04/30/2000
Author: Uplandcrow <[EMAIL PROTECTED]>
< snip >
"I am looking for examples of articles that use a stat procedure
incorrectly. For example, I have one article from a business journal
that conducts OLS but does not present any F or t tests or even
standard errors. Yet the authors make inferences about their subject
based on their results (essentially on R^2).
"In short, if you know of assessable articles which (in your view)
misuse a particular method (especially descriptive stats, ANOVA, OLS,
logit, and probit) I'd be interested in the reference. Perhaps there
is a web site you know of that deals with this? I am not out to
denigrate anyone's research, merely to point out (common?) mistakes as
a way to teach my students to be careful in their research."
================
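To make the quoted complaint concrete: R^2 by itself supports no
inference. Here is a minimal sketch of what a complete OLS write-up
reports, in Python with the statsmodels package; the data are
invented purely for illustration.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n = 50
  x = rng.normal(size=n)
  y = 2.0 + 0.5 * x + rng.normal(scale=2.0, size=n)

  X = sm.add_constant(x)            # design matrix with an intercept
  fit = sm.OLS(y, X).fit()

  print(fit.rsquared)               # R^2 alone: no basis for inference
  print(fit.bse)                    # standard errors of the coefficients
  print(fit.tvalues, fit.pvalues)   # t-tests on each coefficient
  print(fit.fvalue, fit.f_pvalue)   # the overall F-test

The point: the standard errors, t-tests, and F-test come free with
the fit; omitting them is a choice, not a limitation.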
- Most of what fits, "stat procedure used incorrectly", is caught
before publication. Further, I don't think you can point to an actual
"misuse" of ANOVA, etc., without denigrating both the author and the
journal. If they make a *really obvious* error, it *should* ruin
the reputation of both, for a long time.
Or, there are bad news reports that don't really convey what the
study said. Or, there are claims that (despite the ANOVA, or
whatever) the basic *inference* is not legitimate.
The examples that I think of that really come close to being
"incorrect", in recent years, are "meta-analyses". Half are
incompetent -- if you consider it incompetent to average numbers that
are heterogeneous, and that test as heterogeneous, and to report the
average as useful and meaningful. Oh, the authors probably don't know
to test for the heterogeneity. However, the mistake is about the same
as showing you any other mean that includes an extreme outlier.
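The check in question is not hard. A minimal sketch of Cochran's Q,
the usual heterogeneity test, in Python with scipy; all the numbers
are made up for illustration.

  import numpy as np
  from scipy.stats import chi2

  effects = np.array([0.10, 0.15, 0.80, 0.12])  # one wild study of four
  se      = np.array([0.05, 0.06, 0.05, 0.07])  # their standard errors

  w = 1.0 / se**2                        # inverse-variance weights
  pooled = np.sum(w * effects) / np.sum(w)

  Q = np.sum(w * (effects - pooled)**2)  # Cochran's Q statistic
  df = len(effects) - 1
  p = chi2.sf(Q, df)                     # refer Q to chi-square, k-1 df

  print(pooled, Q, p)   # a small p says the studies disagree, so the
                        # pooled mean is the mean-with-an-outlier again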
Another "weak" procedure: A few years ago, there was the use of
step-wise regression as if it could test variables and demonstrate
hypotheses. But that is related to rules of inference; it is also
"bad description", but what is incorrect is mostly the *conclusion*
rather than the procedure.
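To see why that fails, simulate it: screen many pure-noise
predictors, keep the best one, and test it as if it had been
specified in advance. A sketch in Python; everything below is noise.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  n, k = 30, 50                   # 30 cases, 50 candidate predictors
  X = rng.normal(size=(n, k))     # pure noise
  y = rng.normal(size=n)          # unrelated to every column of X

  # "Step 1" of a stepwise search: keep the best-correlated predictor.
  r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(k)])
  best = int(np.argmax(np.abs(r)))

  # Then test it as if it had been specified in advance.
  t = r[best] * np.sqrt((n - 2) / (1 - r[best] ** 2))
  p = 2 * stats.t.sf(abs(t), df=n - 2)
  print(best, r[best], p)         # often p < .05, though y is noise

The selection step invalidates the nominal p-value; the procedure
runs fine, but the conclusion drawn from it is what goes wrong.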
A recent renunciation: After the value of bran (in preventing colon
disease) was discounted a couple of weeks ago, I saw quotes in the
newspaper from a leader at the National Institutes of Health, and also from
a journal editor. They both said that the dietary value of bran was
never well-established a few years ago; their own journals/agencies
did try to keep the story straight, and keep the claims tentative;
some original authors tend to become enthusiasts, and inevitably,
unsupportable claims get bandied around, even though the basic
scientists keep their doubts.
So: Here is another aspect of error -- what is reported in a journal,
as opposed to what is claimed in a newspaper.
For the classroom: if you are asking students to consider what is
allowed in inference, perhaps you should allow them to face off, and
set up a debate on some disputed cases. "The Bell Curve" has
inspired a book or two of criticism, directed towards the inference
making and also towards its statistics. (This is the book where
Murray uses entirely new statistics on the schools, in order to reach
exactly the same conclusions about black people that he had reached a
decade earlier.)
Gary Kleck (I think that's the name) has produced extensive survey
statistics about handgun usage, and I think he has gathered together
other arguments -- or someone has. There has also been some published
discussion of the arguments, in both magazines and journals. I read a
fine article a year ago, in the Sunday magazine of the Washington
Post.
In either of these cases, the authors can claim that their procedures
"are conventional"; and their opponents can point out that the
conventional precedents (a) never made any difference to anybody, and
(b) had to be different in important ways, and (c) probably *were*
criticized, somewhere, for exactly the same reasons. As you read, you
ask, "How true is THAT? - concerning the original, or the critique."
Anyway, "misused stats" like a bad F-test are not very interesting. I
have seen an illegal transformation; I have seen a t-test of less than
1.0 that was mistaken for "significant". These were never published.
No one should get excited. But errors of description, and errors of
inference? - yes.
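For the record on that t-test: a t below 1.0 cannot approach
significance at any conventional level or sample size. A quick
check, assuming scipy for the tail areas:

  from scipy import stats

  for df in (5, 30, 1000):
      p = 2 * stats.t.sf(1.0, df)   # two-sided p for t = 1.0
      print(df, round(p, 3))        # about .36, .33, .32 - never near .05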
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html