Re: misusing stats: examples

2000-05-16 Thread Rich Ulrich

On 15 May 2000 07:31:17 -0700, [EMAIL PROTECTED] (Michael Granaas)
wrote:

 snip 
 The misinterpretation of results by the popular press has become a core
 topic for me in recent years.  Some of the misinterpretations may be
 harmless (I doubt that eating extra fiber would hurt you unless it lulls
 you into a false sense of security about your health); on the other
 hand, some misinterpretations lead to all kinds of mischief.  In recent
 weeks the press locally has jumped on the report that women earn about
 $.73 for every $1.00 that a man earns.  This is being reported locally as
 the pay difference FOR THE SAME JOB!  But the data are talking about
 the large aggregate (on average, if you will), not about folks within
 the same jobs.

 - well, where did you see this?  The $.73 is a bit dated, but I am
afraid that, from the original reporting that I have seen, they are
right and you are wrong as to the intentions.  I thought it was more
like $.79 or $.83, across all industries and occupations, nowadays,
but there is still a gap in the U.S., which is less than in many
countries.  It's nearly vanished in a few occupations, if narrow ones
-- for instance, all U.S. Senators get paid the same.

There has been more than one such report.  The statistical matching
and control has often been done pretty well, and with imagination.
There's been a gap.  In about 1970, when I first entered the
workforce, it was true that male college professors, after 10 years
of tenure in an English department, could expect a distinct and
definite income edge over their similarly qualified female
counterparts.  There would have been, even in academia -- then one of
the best workplaces for equality -- at least a 15% difference, so far
as I recollect.

What gets harder to figure out is whether the tenured man should be
compared to a *tenured* female, or to the female who was denied
tenure solely because she is female?  My own sister got extremely
pissed off, around 1972, when the insurance agency where she worked
automatically recruited a *male* as "management trainee" -- younger,
stupider, higher paid, with no better background -- instead of
considering, at all, the females who were underemployed as
secretaries in their own offices.

The more extreme comparisons today do try to "control for" the
unfairness in the background; and that can be controversial, too.
How much penalty should there be for dropping out to have a child?

There is additional difficulty in trying to compare and contrast jobs
that are "traditionally male" versus "traditionally female", and
which (among the "male" jobs) may still have high barriers to entry.

The last time that I saw a dollar-earned comparison, it was from a
scoffer who hyperbolized, mis-cited, invented arguments, and generally
insulted the statistics profession -- as if none of the studies, ever
done by anybody, had ever controlled for anything.  This was in the
local newspaper.  I keep my expectations for the local newspaper
pretty flexible.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===



Re: misusing stats: examples

2000-05-15 Thread Michael Granaas

On Fri, 12 May 2000, Rich Ulrich wrote:

   snip 
 
 Or, there are bad news reports that don't really say what the study
 said.
more snipping 
 So: Here is another aspect of error -- what is reported in a journal,
 as opposed to what is claimed in a newspaper.
 
 

The misinterpretation of results by the popular press has become a core
topic for me in recent years.  Some of the misinterpretations may be
harmless (I doubt that eating extra fiber would hurt you unless it lulls
you into a false sense of security about your health); on the other
hand, some misinterpretations lead to all kinds of mischief.  In recent
weeks the press locally has jumped on the report that women earn about
$.73 for every $1.00 that a man earns.  This is being reported locally as
the pay difference FOR THE SAME JOB!  But the data are talking about
the large aggregate (on average, if you will), not about folks within
the same jobs.
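
A minimal numerical sketch (invented figures, in Python -- not real
wage data) of how a large aggregate gap can coexist with identical
pay within every job:

# Hypothetical illustration: within each job, women and men earn
# exactly the same, yet an aggregate gap appears because the sexes
# are distributed differently across job categories.
jobs = [
    # (annual pay, n women, n men) -- invented numbers
    (25000, 800, 200),   # clerical
    (60000, 200, 800),   # managerial
]

women_mean = sum(pay * nw for pay, nw, nm in jobs) / sum(nw for _, nw, _ in jobs)
men_mean = sum(pay * nm for pay, nw, nm in jobs) / sum(nm for _, _, nm in jobs)

# Prints about 0.60: a large aggregate gap, with equal pay in both
# jobs -- a pure composition effect.
print("women earn %.2f per $1.00 men earn" % (women_mean / men_mean))

Whether real wage data behave this way is exactly what the matched,
within-job comparisons are supposed to settle; the point is only that
the aggregate ratio by itself cannot.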

The public concern about this discrepancy can lead to the passage of
unnecessary legislation and a fair amount of public acrimony.

I would agree that the misinterpretation of otherwise legitimate results
is a major topic for discussion.  Much more so than incorrect use of
statistical procedures.

MG

***
Michael M. Granaas
Associate Professor            [EMAIL PROTECTED]
Department of Psychology
University of South Dakota Phone: (605) 677-5295
Vermillion, SD  57069  FAX:   (605) 677-6604
***
All views expressed are those of the author and do not necessarily
reflect those of the University of South Dakota, or the South
Dakota Board of Regents.






RE: misusing stats: examples

2000-05-15 Thread dennis roberts

At 10:30 AM 5/15/00 -0500, Simon, Steve, PhD wrote:
There have been a lot of interesting comments in this thread. Let me just
add my two cents.

Anyway, what I tell them is that nine times out of ten, the mistake was not
in how the data was analyzed, but in how it was collected. After all, if you
collect the wrong data, it doesn't matter how sophisticated the analysis is,
does it?

many MANY years ago ... early in the 80s ... i presented a little paper at 
a little conference at a little school titled: THE OVERRATED IMPORTANCE OF 
STATISTICS IN RESEARCH ... where i took a little 2 group 
experimental/control group design ... and listed some steps in this process 
such as:

defining TARGET population
taking a sample FROM that population
subdivision OF sample into exp and cont groups
IMPLEMENTATION of the treatment
use of RELIABLE measures
using APPROPRIATE analyses
engaging in REASONABLE interpretations of the data
and other things ...

and tried to show that, in the entire scheme of things, the process 
of using appropriate statistical analysis was the LEAST important of 
the batch ... and in fact, rather paled in importance when compared 
to the mess one can easily get into when one or more of the OTHER 
'steps' has some flaw ... sometimes a potentially fatal one

recovery from some INappropriate analysis is accomplished rather 
easily ... but recovery from mistakes in the other areas is usually 
almost impossible ...

i sometimes have a relook at this paper ... especially when we tend to get 
so hung up in trivial details of various analysis methods ... and 
especially when we try to make a big deal out of something like a p value 
... for some specific test ... when i bet a quarter to a penny that there 
are so many other problems within the context of 'typical' studies as to 
make such p value squabbles rather silly ...
Dennis Roberts, EdPsy, Penn State University
208 Cedar Bldg., University Park PA 16802
Email: [EMAIL PROTECTED], AC 814-863-2401, FAX 814-863-1002
WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
FRAMES: http://roberts.ed.psu.edu/users/droberts/drframe.htm






Re: misusing stats: examples

2000-05-15 Thread Rich Ulrich

On 15 May 2000 08:58:41 -0700, [EMAIL PROTECTED] (Simon, Steve, PhD)
wrote:
  ...  "Here's a draft of what I have written." (review of article
for Steve's Web site).  On-line reference given for article.  
 
 Thornley, Ben, and Adams, Clive "Content and quality of 2000 controlled
 trials in schizophrenia over 50 years" British Medical Journal 1998; 317:
 1181-1184.
 
 Overview of research studies 
 - Studies published between 1948 and 1997. 
 - Patients with schizophrenia and other non-affective psychoses. 
 
 Variety of interventions 
 - Drugs (e.g., anti-psychotics and anti-depressants) 
 - Therapy (e.g., individual, group, and family) 
 - Miscellaneous (e.g., electroconvulsive treatments) 
 
 Four difficulties
 
 1. Types of patients 
 - The ideal study would be community based. 
 - Only 14% of actual studies were community based. 
 
 2. Number of patients 
 - The ideal study should include at least 300 patients. 
 - The average number was only 65 patients. 
 - Only 3% of studies met the target of 300 or more patients. 
 
 3. Length of the studies 
 - The ideal study should last at least six months. 
 - More than half of the studies lasted six weeks or less. 
 - Only 19% of the studies met the target of six months or more duration. 
 
 4. Measurement 
 - The ideal studies should concentrate on a small number of standard
 measures. 
 - These 2000 studies employed 640 different measures. 
 - There were 369 measures that were used once and never used again. 
 
 Conclusions
 
 Much of the work in schizophrenia failed to meet appropriate research
 standards. Too many of the studies... 
 - examined the wrong patients, 
 - studied too few patients, 
 - ended too soon, 
 - used fragmentary measurements. 
 
 Research in schizophrenia leaves much room for improvement.
My reaction to the article:

Okay, I have been involved in research with schizophrenic patients
since 1970.  And I have been scornful of just about every
meta-analysis I have read that used soft criteria, and this one
deserves scorn, too.  And it does not even try to average an outcome
measure; it displays how badly one can draw conclusions just based on
"lumping."

A big problem is always the selection of studies.  Here is a
"meta-analysis" that reviews studies, over a 50-year period, which
hardly ever even tried to be "controlled studies."  Most had small N,
followed patients for under 6 weeks (instead of over 6 months), and
were not "blind" or double-blind.  And the big conclusion and
criticism is that these studies had small N, short followup, and were
not blind, etc.

hmmm.

This is, approximately, "all studies"?  Why do they think that long,
expensive studies should predominate?  (Would that not be an inversion
of nature?)  What, pray tell, determines the fitting mix of large
studies and small studies?  There is never, ever, *any* virtue in the
smaller study, or the shorter study?  There is only one kind of
allowable study?

It would be more useful, I think, to take the set of studies that did
*pretend* to be controlled studies.  How big were they?  What were
their questions, and outcome measures?  How many achieved useful
results?  I think that what the BMJ published was a poor imitation of
science.

And, what *should* they say about the 95% that did not pretend to be
controlled studies?  -- especially if the N is small, the time is
short, etc., these must be totally, wholly worthless studies which
have no justification for being published? -- unless these authors
are overlooking some alternate ends.

I have worked on big studies.  The study that I started working on in
1970 was drug versus placebo (plus a factor for social treatment), two
years of followup with 374 outpatients, across 3 clinics.  Note, this
is the midpoint of the time period of these authors.  But before we
published a few years later, no one knew that drug would beat placebo!
And it would keep on beating it, even after 6 months, and after a
year!  The N, by the way, was *far*  larger than we needed for the
original question, but it was large enough that we were able to spin
off an important extra study:  now that drug *did* (amazingly,
unexpectedly) appear to be useful for two years, what would happen if
we followed patients longer, when we took away their meds, after those
couple of successful years?

This article in the British Medical Journal is (IMHO) what Americans
sometimes call "a hatchet job."  Now I see that it may have helped to
inspire and justify the non-funding of studies "because they don't
have enough power, not having 300 subjects."  I have read a hint of
that before, and I thought that it was just malicious, bureaucratic
double-talk from someone opposed to spending money.  I did not
realize that the committee might consider themselves on the
scientific cutting edge, having read the BMJ.  Of course, the
non-funding of studies with N *over* 300 is ever-justified because
the studies would be too difficult, and/or would cost too much.
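
As a footnote on the "300 subjects" figure: a back-of-envelope power
sketch (my own arithmetic, assuming a two-arm trial with a continuous
outcome and the usual normal approximation; none of this is from the
BMJ paper) shows roughly what totals of 65 versus 300 patients can
detect:

from math import sqrt
from scipy.stats import norm

def detectable_effect(total_n, alpha=0.05, power=0.80):
    # Smallest standardized difference d detectable at the given power;
    # two equal arms, two-sided test, normal approximation.
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return sqrt(2 * (z_a + z_b) ** 2 / (total_n / 2))

print(detectable_effect(65))    # about 0.7 -- only large effects
print(detectable_effect(300))   # about 0.32 -- moderate effects

So 300 patients buys sensitivity to moderate effects; whether that
makes every smaller or shorter study worthless is, of course, exactly
the question.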

Re: misusing stats: examples

2000-05-11 Thread Thom Baguley

Gene Gallagher wrote:
 I have recently seen examples of the thrip fallacy in the op-ed
 pages of the Boston Globe.  Massachusetts has implemented
 state-wide standardized testing and has increased state funding
 for school districts with low test scores.  Statistical analysis
reveals that five or six socioeconomic factors
(parents' educational level, annual salary, % two-parent households,
etc.) account for over 90% of the variance in town-to-town K-12
standardized test scores.  The implication is that only 10% of
the variance in mean test scores COULD be due to differences in
curriculum, teacher quality, or financing for the school (take
that, Teachers' Unions!).  Some might conclude that spending
money on schools & teachers is wasted, since only 10% of the
town-to-town variance in these scores could be due to factors
outside the home.
  This fallacy fails to consider that a high median income and
other socioeconomic factors often are strongly associated with
a better tax base, lower class sizes, better-trained teachers,
a more innovative curriculum, etc.
  This fallacy should have a name, but I don't know it.  I point
my students to Wright's path analysis and structural
modeling approaches (LISREL and AMOS) to show alternatives
to the misleading inference based on an R^2 in a multiple
regression equation.

There is an additional fallacy here (I think). As I understand it, they used
town-to-town means to infer a small effect of other factors on children's
education. This is an example of the ecological fallacy. The town mean scores
allow no firm inference about the effect of any factor on individual children
(the individual-level effects could be similar in magnitude, different in
magnitude, or even run in the opposite direction).
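
A small simulation (entirely invented data, in Python) of the point:
a factor can help every individual child while the town-level means
slope the other way:

import numpy as np

rng = np.random.default_rng(0)
towns = []
for baseline, shift in [(80, 0), (60, 4), (40, 8)]:   # invented towns
    x = rng.uniform(0, 10, 200) + shift               # individual-level factor
    y = baseline + 2 * x + rng.normal(0, 5, 200)      # +2 per unit in EVERY town
    towns.append((x, y))

# Within-town (individual-level) slopes: all close to +2.
print([round(np.polyfit(x, y, 1)[0], 1) for x, y in towns])

# Slope fitted to the three (town mean x, town mean y) points: about -3.
mx = [x.mean() for x, _ in towns]
my = [y.mean() for _, y in towns]
print(round(np.polyfit(mx, my, 1)[0], 1))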

Thom





Re: misusing stats: examples

2000-05-11 Thread Robert Dawson


- Original Message -
From: Gene Gallagher

 Here is an error that is subtle, but very common.  The statistical
 test (multiple regression) was applied perfectly, but the
 statistical inference was wrong.
 My first reference to this type of error is in the classic,
 but highly controversial, ecology treatise by Andrewartha & Birch
 (1954): The distribution and abundance of animals, p. 580.

 These Australian ecologists wanted to show that animal
 populations aren't controlled by density-dependent factors like
 competition or predation.  They regress 14 years of thrip (an
 insect) abundance vs weather variables.  They considered weather a
 density-independent factor (mortality from a storm or a hot day
 isn't directly related to animal density).
   They conclude, "...altogether, 78 per cent of the variance
 in thrip maximal abundance was explained by four quantities which
 were calculated entirely from meteorological records.  This left
 virtually no chance of finding any other systematic cause for
 variation, because 22 per cent is a rather small residuum to
 be left as due to random sampling errors.
 All the variation in maximal numbers from year to year may therefore
 be attributed to causes that are not related to density:  not only
 did we not find a "density-dependent factor," but we also showed
 that there was no room for one."

Also, as both weather and population data tend to be autocorrelated,
simple regression would tend to overestimate the significance of the
correlation (though not the correlation itself), by imputing more
degrees of freedom than are genuinely present.
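
A quick simulation (my own illustration, not the thrip data) of that
inflation: two independent random walks of length 14 -- the length of
the thrip series -- tested at the nominal 5% level:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, n, rejections = 2000, 14, 0
for _ in range(trials):
    weather = np.cumsum(rng.normal(size=n))     # autocorrelated series
    abundance = np.cumsum(rng.normal(size=n))   # independent of "weather"
    r, p = stats.pearsonr(weather, abundance)
    rejections += p < 0.05

# Nominal level is 0.05; the observed rate is several times that.
print(rejections / trials)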

-Robert Dawson







Re: misusing stats: examples

2000-05-10 Thread Gene Gallagher

In article [EMAIL PROTECTED],
Uplandcrow wrote:
 I teach research methods for social science at a small liberal arts
 college.
 The level of math in the class is low, I use Richard Black's "Doing
 Quantitative Research in the Soc. Sci." and excerpts from
 Gujarati's "Basic Econometrics."
 SNIP
 I am looking for examples of articles that use a stat procedure
 incorrectly.

Here is an error that is subtle, but very common.  The statistical
test (multiple regression) was applied perfectly, but the
statistical inference was wrong.
My first reference to this type of error is in the classic,
but highly controversial, ecology treatise by Andrewartha & Birch
(1954): The distribution and abundance of animals, p. 580.

These Australian ecologists wanted to show that animal
populations aren't controlled by density-dependent factors like
competition or predation.  They regress 14 years of thrip (an
insect) abundance vs weather variables.  They considered weather a
density-independent factor (mortality from a storm or a hot day
isn't directly related to animal density).
  They conclude, "...altogether, 78 per cent of the variance
in thrip maximal abundance was explained by four quantities which
were calculated entirely from meteorological records.  This left
virtually no chance of finding any other systematic cause for
variation, because 22 per cent is a rather small residuum to
be left as due to random sampling errors.
All the variation in maximal numbers from year to year may therefore
be attributed to causes that are not related to density:  not only
did we not find a "density-dependent factor," but we also showed that
there was no room for one."

  The logical/statistical flaw in the Australian thrip story was
published in Smith, F.E. (1961) Density dependence in the Australian
thrips. Ecology 42: 403-407.  Since weather accounted for such a
high proportion of the variance in the data (78%), A&B assumed other
factors could not be important.  This is a fallacy.  Smith argues
that some density-dependent factors, unmeasured but probably correlated
with weather, must be acting to control abundances.

I have recently seen examples of the thrip fallacy in the op-ed
pages of the Boston Globe.  Massachusetts has implemented
state-wide standardized testing and has increased state funding
for school districts with low test scores.  Statistical analysis
reveals that five or six socioeconomic factors
(parents' educational level, annual salary, % two-parent households,
etc.) account for over 90% of the variance in town-to-town K-12
standardized test scores.  The implication is that only 10% of
the variance in mean test scores COULD be due to differences in
curriculum, teacher quality, or financing for the school (take
that, Teachers' Unions!).  Some might conclude that spending
money on schools & teachers is wasted, since only 10% of the
town-to-town variance in these scores could be due to factors
outside the home.
  This fallacy fails to consider that a high median income and
other socioeconomic factors often are strongly associated with
a better tax base, lower class sizes, better-trained teachers,
a more innovative curriculum, etc.
  This fallacy should have a name, but I don't know it.  I point
my students to Wright's path analysis and structural
modeling approaches (LISREL and AMOS) to show alternatives
to the misleading inference based on an R^2 in a multiple
regression equation.
  One could experimentally demonstrate this fallacy by
transferring students from affluent communities to communities whose
schools have dismal standardized test scores.  Somehow, I don't
think the parents would accept the statistical argument that their
children's mean scores could decline by at most 10%, since 90% of
the variance was due to socio-economic variables.
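
A minimal simulation (invented data, in Python) of the fallacy:
below, a density-dependent factor is the true driver of abundance,
yet weather alone still "explains" about 78% of the variance -- the
A&B figure -- simply because the two are correlated:

import numpy as np

rng = np.random.default_rng(2)
n = 1000
weather = rng.normal(size=n)
# Density-dependent factor, correlated about 0.9 with weather.
density = 0.9 * weather + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)
# Abundance is driven by the density factor, NOT directly by weather.
abundance = density + 0.2 * rng.normal(size=n)

r = np.corrcoef(weather, abundance)[0, 1]
print(r ** 2)   # about 0.78: high R^2, yet weather is not the cause

The 78 per cent "explained" leaves all the room in the world for a
density-dependent factor that travels with the weather, which is just
Smith's argument.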

--
Eugene D. Gallagher
ECOS, UMASS/Boston







Re: misusing stats: examples

2000-05-10 Thread Jill Binker

Ralph Johnson and J. Anthony Blair are the authors of _Logical
Self-Defense_. (I know, I did a year of grad study in Informal Logic with
them.)

At 8:13 PM -0400 5/9/00, Donald F. Burrill wrote:
On Tue, 9 May 2000, Jerry Winegarden wrote, in reply to Uplandcrow's
request:

   Uplandcrow wrote:
 SNIP
I am looking for examples of articles that use a stat procedure
incorrectly.

 A "MUST READ" for this class:  "How To Lie with Statistics".  Best
 little book in the world!  Many wonderful practical examples of the
 misuse of statistics.

 With a heavy political season upon us and all the wonderful ads with
 the very compelling graphs, every citizen should be required to read
 this little book! :-)

Also very good for every citizen:  "Logical Self-Defense", by a couple of
faculty members at the University of Windsor (Ontario, Canada, across the
river from Detroit) whose names elude me.  Uses lots of examples culled
from the public press of errors in logical thinking.  Some of the errors
are essentially statistical;  but even the ones that aren't ought to be
in every teaching statistician's armamentarium of bad examples.
   -- Don.
 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128






Jill Binker
Fathom Dynamic Statistics Software
KCP Technologies, an affiliate of Key College Publishing and
Key Curriculum Press
1150 65th St
Emeryville, CA  94608
1-800-995-MATH (6284)
[EMAIL PROTECTED]
http://www.keypress.com
__





Re: misusing stats: examples

2000-05-09 Thread Donald F. Burrill

On Tue, 9 May 2000, Jerry Winegarden wrote, in reply to Uplandcrow's 
request: 

   Uplandcrow wrote:
 SNIP
I am looking for examples of articles that use a stat procedure 
incorrectly.
 
 A "MUST READ" for this class:  "How To Lie with Statistics".  Best 
 little book in the world!  Many wonderful practical examples of the 
 misuse of statistics.

 With a heavy political season upon us and all the wonderful ads with 
 the very compelling graphs, every citizen should be required to read 
 this little book! :-)

Also very good for every citizen:  "Logical Self-Defense", by a couple of 
faculty members at the University of Windsor (Ontario, Canada, across the 
river from Detroit) whose names elude me.  Uses lots of examples culled 
from the public press of errors in logical thinking.  Some of the errors 
are essentially statistical;  but even the ones that aren't ought to be 
in every teaching statistician's armamentarium of bad examples.
-- Don. 
 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128  






Re: misusing stats: examples

2000-05-08 Thread V. Partridge

Found references to two of Hurlbert's papers:

Hurlbert, S. H. 1984. Pseudoreplication and the design of ecological
field experiments. Ecological Monographs 54:187-211. 

Hurlbert, S. H. 1990. Spatial distribution of the montane unicorn.
Oikos 58: 257-271.
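
For anyone who hasn't met the term: a minimal sketch (invented
numbers, in Python) of the pseudoreplication error the 1984 paper
describes -- subsamples from a single experimental unit analyzed as
if they were independent replicates:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials, rejections = 2000, 0
for _ in range(trials):
    # One "treated" pond and one "control" pond; NO true treatment
    # effect, but each pond gets its own random offset.
    pond_a = rng.normal(0, 1) + rng.normal(0, 0.5, 30)   # 30 subsamples
    pond_b = rng.normal(0, 1) + rng.normal(0, 0.5, 30)
    # The error: testing 30 subsamples as if they were 30 replicates.
    t, p = stats.ttest_ind(pond_a, pond_b)
    rejections += p < 0.05

print(rejections / trials)   # far above 0.05

The honest analysis treats the pond, not the subsample, as the
experimental unit -- here n = 1 per treatment, i.e., no test at all.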

"V. Partridge" wrote:
 
 Look up papers by Stuart Hurlbert, who points up commonly-made errors in
 ecological research.  Two of note are his paper on pseudoreplication (in
 Ecological Monographs, circa 1989, I think) and "The Spatial
 Distribution of the Montane Unicorn" (I don't recall the journal).
 
 V. Partridge
 
 Uplandcrow wrote:
 
  I teach research methods for social science at a small liberal arts college.
  The level of math in the class is low; I use Richard Black's "Doing
  Quantitative Research in the Soc. Sci." and excerpts from Gujarati's "Basic
  Econometrics."
 
  (FYI, if you have not seen Black's text yet, take a look. It is a wonderful
  teaching textbook, best I've seen)
 
  I am looking for examples of articles that use a stat procedure incorrectly.
  For example, I have one article from a business journal that conducts OLS but
  does not present any F or t tests or even standard errors. Yet the authors make
  inferences about their subject based on their results (essentially on R^2).
 
  In short, if you know of assessable articles which (in your view) misuse a
  particular method (especially descriptive stats, ANOVA, OLS, logit, and
  probit) I'd be interested in the reference. Perhaps there is a web site you
  know of that deals with this? I am not out to denigrate anyone's research,
  merely to point out (common?) mistakes as a way to teach my students to be
  careful in their research.
 
  Thanks





Re: misusing stats: examples

2000-05-05 Thread V. Partridge

Look up papers by Stuart Hurlbert, who points up commonly-made errors in
ecological research.  Two of note are his paper on pseudoreplication (in
Ecological Monographs, circa 1989, I think) and "The Spatial
Distribution of the Montane Unicorn" (I don't recall the journal).

V. Partridge

Uplandcrow wrote:
 
 I teach research methods for social science at a small liberal arts college.
 The level of math in the class is low; I use Richard Black's "Doing
 Quantitative Research in the Soc. Sci." and excerpts from Gujarati's "Basic
 Econometrics."
 
 (FYI, if you have not seen Black's text yet, take a look. It is a wonderful
 teaching textbook, best I've seen)
 
 I am looking for examples of articles that use a stat procedure incorrectly.
 For example, I have one article from a business journal that conducts OLS but
 does not present any F or t tests or even standard errors. Yet the authors make
 inferences about their subject based on their results (essentially on R^2).
 
 In short, if you know of assessable articles which (in your view) misuse a
 particular method (especially descriptive stats, ANOVA, OLS, logit, and
 probit) I'd be interested in the reference. Perhaps there is a web site you
 know of that deals with this? I am not out to denigrate anyone's research,
 merely to point out (common?) mistakes as a way to teach my students to be
 careful in their research.
 
 Thanks





Re: misusing stats: examples

2000-05-01 Thread Art Kendall

would you please compile responses and re-post them?

Uplandcrow wrote:

 I teach research methods for social science at a small liberal arts college.
 The level of math in the class is low; I use Richard Black's "Doing
 Quantitative Research in the Soc. Sci." and excerpts from Gujarati's "Basic
 Econometrics."

 (FYI, if you have not seen Black's text yet, take a look. It is a wonderful
 teaching textbook, best I've seen)

 I am looking for examples of articles that use a stat procedure incorrectly.
 For example, I have one article from a business journal that conducts OLS but
 does not present any F or t tests or even standard errors. Yet the authors make
 inferences about their subject based on their results (essentially on R^2).

 In short, if you know of assessable articles which (in your view) misuse a
 particular method (especially descriptive stats, ANOVA, OLS, logit, and
 probit) I'd be interested in the reference. Perhaps there is a web site you
 know of that deals with this? I am not out to denigrate anyone's research,
 merely to point out (common?) mistakes as a way to teach my students to be
 careful in their research.

 Thanks






Re: misusing stats: examples

2000-05-01 Thread Uplandcrow

would you please compile responses and re-post them?

Yes, I plan to. I've gotten 4 or 5 good suggestions and I want to look at the
articles. Then I will post the citations and a summary.

Cheers,

Jon





Re: misusing stats: examples

2000-04-30 Thread Michael F.

Uplandcrow wrote:
  
 I am looking for examples of articles that use a stat procedure incorrectly.

A literature search of important journals in the subject area in which your 
students major might show that common problems have been addressed.  For example, 
the articles below address problems that can arise with the application of 
multivariate models to clinical data.

Concato J et al, Ann Intern Med 1993;118:201-10.
Simon R & Altman DG, Br J Cancer 1994;69:979-85.

 For example, I have one article from a business journal that conducts OLS but
 does not present any F or t tests or even standard errors. Yet the authors make
 inferences about their subject based on their results (essentially on R^2).

Of course, there are situations when the above approach could be appropriate.
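
For what it's worth, on the quoted OLS example: any standard package
reports the missing quantities alongside R^2.  A sketch with made-up
data (Python, statsmodels; the variables here are hypothetical):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))                  # hypothetical predictors
y = 1 + X @ np.array([0.5, -0.3]) + rng.normal(size=100)

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.summary())   # R^2 together with the F test, t tests, and
                       # standard errors the quoted article omitted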
 
-- 
Michael

