On 15 May 2000 08:58:41 -0700, [EMAIL PROTECTED] (Simon, Steve, PhD)
wrote:
 < ...  "Here's a draft of what I have written." (review of article
for Steve's Web site).  On-line reference given for article.  >
> 
> Thornley, Ben, and Adams, Clive.  "Content and quality of 2000
> controlled trials in schizophrenia over 50 years."  British Medical
> Journal 1998; 317: 1181-1184.
> 
> Overview of research studies 
> - Studies published between 1948 and 1997. 
> - Patients with schizophrenia and other non-affective psychoses. 
> 
> Variety of interventions 
> - Drugs (e.g., anti-psychotics and anti-depressants) 
> - Therapy (e.g., individual, group, and family) 
> - Miscellaneous (e.g., electroconvulsive treatments) 
> 
> Four difficulties
> 
> 1. Types of patients 
> - The ideal study would be community based. 
> - Only 14% of actual studies were community based. 
> 
> 2. Number of patients 
> - The ideal study should include at least 300 patients. 
> - The average number was only 65 patients. 
> - Only 3% of studies met the target of 300 or more patients. 
> 
> 3. Length of the studies 
> - The ideal study should last at least six months. 
> - More than half of the studies lasted six weeks or less. 
> - Only 19% of the studies met the target of six months or more duration. 
> 
> 4. Measurement 
> - The ideal study should concentrate on a small number of standard
> measures. 
> - These 2000 studies employed 640 different measures. 
> - There were 369 measures that were used once and never used again. 
> 
> Conclusions
> 
> Much of the work in schizophrenia failed to meet appropriate research
> standards. Too many of the studies... 
> - examined the wrong patients, 
> - studied too few patients, 
> - ended too soon, 
> - used fragmentary measurements. 
> 
> Research in schizophrenia leaves much room for improvement.
================ my reaction to the article ================
Okay, I have been involved in research with schizophrenic patients
since 1970.  And I have been scornful of meta-analyses in nearly all
the studies I have read that used soft criteria, and this one deserves
scorn, too.  It does not even try to average an outcome measure; it
shows how badly one can draw conclusions based on mere "lumping."

A big problem is always the selection of studies.  Here is a
"meta-analysis" that reviews studies, over a 50-year period, which
hardly ever tried to be "controlled studies."  Most had small N,
followed patients for under 6 weeks (instead of over 6 months), and
were not "blind" or double-blind.  The big conclusion and criticism
is that these studies had small N, short followup, were not blind,
and so on.

hmmm.

This is, approximately, "all studies"?  Why does he think that long,
expensive studies should predominate?  (Would that not be an inversion
of nature?)  What, pray tell, determines the fitting mix of large
studies and small studies?  Is there never, ever, *any* virtue in a
smaller study, or a shorter study?  Is there only one kind of
allowable study?

It would be more useful, I think, to take the set of studies that did
*pretend* to be controlled studies.  How big were they?  What were
their questions, and their outcome measures?  How many achieved useful
results?  I think that what the BMJ published was a poor imitation of
science.
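
To be concrete: a first pass at that sort of re-analysis might look
like the Python sketch below.  The data frame and its columns are
invented for illustration; they are not figures from the BMJ article.

    import pandas as pd

    # Invented toy data: one row per trial, with the attributes at issue.
    trials = pd.DataFrame({
        "n":          [40, 65, 120, 374, 55, 300],
        "weeks":      [4, 6, 26, 104, 3, 52],
        "controlled": [False, True, True, True, False, True],
        "blinded":    [False, True, True, True, False, True],
    })

    # Keep only the trials that at least claimed a control condition,
    # then ask the questions above: how big, how long, how often blind?
    claimed = trials[trials["controlled"]]
    print(claimed[["n", "weeks"]].describe())
    print("blinded fraction:", claimed["blinded"].mean())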

And what *should* they say about the 95% that did not pretend to be
controlled studies?  Especially if the N is small, the time is short,
and so on, must these be totally, wholly worthless studies with no
justification for being published?  Unless, that is, these authors are
overlooking some alternate ends....

I have worked on big studies.  The study that I started working on in
1970 was drug versus placebo (plus a factor for social treatment),
with two years of followup for 374 outpatients across 3 clinics.
Note that this falls at the midpoint of the period these authors
reviewed.  But before we published, a few years later, no one knew
that drug would beat placebo!  And it would keep on beating it, even
after 6 months, and after a year!  The N, by the way, was *far* larger
than we needed for the original question, but it was large enough that
we were able to spin off an important extra study: now that drug *did*
(amazingly, unexpectedly) appear to be useful for two years, what
would happen if we followed patients longer and took away their meds,
after those couple of successful years?

This article in the British Medical Journal is (IMHO) what Americans
sometimes call "a hatchet job."  Now I see that it may have helped to
inspire and justify the non-funding of studies "because they don't
have enough power, not having 300 subjects."  I have read a hint of
that before, and I thought that it was just malicious, bureaucratic
double-talk from someone opposed to spending money.  I did not realize
that the committee might consider themselves on the scientific cutting
edge, having read the BMJ.  Of course, the non-funding of studies with
N *over* 300 is always justified because the studies would be too
difficult and/or would cost too much.
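
For what it's worth, the arithmetic behind a "300 subjects" target is
easy enough to reproduce.  Here is a minimal power sketch in Python;
the inputs (a two-arm comparison, a standardized effect near 0.33,
alpha = 0.05, 80% power) are my assumptions, not anything the authors
state.

    from statsmodels.stats.power import TTestIndPower

    # All inputs below are assumptions, not figures from the article.
    effect_size = 0.33   # standardized difference (Cohen's d), assumed
    alpha = 0.05         # conventional two-sided significance level
    power = 0.80         # conventional target power

    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=alpha, power=power,
        alternative="two-sided")
    print(f"per group: {n_per_group:.0f}, total: {2 * n_per_group:.0f}")

That comes to roughly 145 per group, about 290 in total -- which is
presumably where a committee's "300" comes from.  Change the assumed
effect size and the "required" N changes drastically.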

Okay, Steve, that was my first response...

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html

