Comments on some of the interesting points raised in response to my
proposed double-blind test of psychotherapy:
On Wed, 11 Aug 1999, Don Allen wrote:
>
> Not quite double blind, but very very close. You would also have to
> ensure that the therapists were equally convinced of the effectiveness of
> their treatments otherwise differential experimenter enthusiasm could
> affect the Ss responses. That quibble aside, I agree that it would be a
> good test of cognitive (or any other) therapy.
Good quibble. But the graduate student suggestion (sorry, forgot who
made it) is one way around this. My understanding of this suggestion
is that graduate students (or hired paraprofessionals, if grad
students are too well informed) could be instructed in both therapies,
and would be suitably motivated to believe in each.
Alternatively, it could just be argued that any lack of
therapist enthusiasm would impair the credibility of the
therapy. That is, if the clients perceived that the therapist didn't
believe in the therapy, they would find it less credible. So ensuring
that the therapies are equally credible would guard against this.
BTW, conventional talk therapists would probably object to the use of
paraprofessionals, on the grounds that no one but a highly qualified
professional is competent to give treatment. But behaviour modifiers
believe otherwise (and the evidence is on their side).
Rick Adams objected:
> except that, since the experimenters themselves would know if
> they were administering the placebo therapy or not, their
> interactions with the subjects could be compromised by the knowledge
> and the results skewed. :(
> In a medical double-blind experiment neither the subject _nor_
> the administering physician know whether they are in the
> experimental group or the control group.
Same response as above. The placebo nature of one of the treatments
could be made fully double-blind by using hired paraprofessionals who
have been led to believe that the treatment is real.
and Rick continued...
> Try using a non-existant form of therapy with a client and see how
> quickly the client begins to demonstrate lack of confidence in your
> capability as a therapist (a critical component in the therapeutic
> process). Thus, even if the observers were unaware of the nature of
> the therapy, a true double-blind protocol wouldn't exist.
...which would show up in the clients' ratings as lower credibility
for the placebo treatment. Remember, the design requires
equally credible treatments. But also remember that any new therapy
must initially be non-existent, and that never stopped anyone from
believing in it. And people believe in some pretty wild therapies.
and Rick said in a later post:
> Good thoughts, but I'll stick to my premise that in some cases
> double-blind research is not practical, and alternate ways of
> assessing the relative merits of a therapeutic approach (i.e., case
> study, longitudinal study, etc.) must be accepted.
I'd have to disagree. Even a placebo-controlled randomized study
flawed in the ways discussed above is preferable to case or
longitudinal studies (same thing, actually), which can never provide
trustworthy data, only suggestions to be confirmed by more adequate
means.
Finally, Paul Brandon commented:
> There is also some question as to whether there are really any
> perfectly double blind drug studies, since subjects can often
> discriminate placebo from drug based on the side effects of drugs
> (even pigeons can be taught to do this).
This is a great point, reminding us that even the best double-blind
placebo-controlled drug study (than which none is more pure) is not
itself without sin. This is more than academic. Kirsch and Sapirstein
(1998), in a highly controversial review (I know because the editors
added a warning which said so), claimed that anti-depressant
medication (including the famous Prozac) may be no more effective than placebo.
Their argument was that subjects could distinguish between the
medication and the placebo on the basis of side effects, and could
therefore tell which group they were in. In particular, they cited a
meta-analysis of Prozac in which there was a correlation of .85
between therapeutic effectiveness and reported side effects. That is,
the more side effects reported, the more effective the drug was found to be.
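For concreteness, here is a minimal sketch (in Python, with entirely
invented numbers) of the kind of calculation behind such a figure:
correlate, across trials, the proportion of patients reporting side
effects with the observed drug-placebo effect size. The trial count,
values, and variable names are hypothetical, for illustration only.

# Minimal sketch only: all values are invented to illustrate the idea of
# correlating side-effect reports with apparent drug effectiveness.
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-trial proportion of patients reporting side effects
side_effect_rate = [0.30, 0.45, 0.50, 0.62, 0.70, 0.80]
# Hypothetical per-trial drug-placebo effect sizes
effect_size = [0.10, 0.25, 0.30, 0.45, 0.55, 0.65]

r = correlation(side_effect_rate, effect_size)
print(f"r = {r:.2f}")  # a high r means: more side effects, bigger apparent drug effect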
But, like democracy, the double-blind study is the worst form
of experimentation, except for all those other forms that have been
tried from time to time.
So here's my recipe, revised:
- take two therapies, one the experimental, the other attention-placebo
- ensure that they are equally credible to the clients; confirm afterwards
- randomly assign clients to one or the other of the treatments
- administer therapy by specially-trained paraprofessionals who have been
  led to believe equally in the efficacy of the two treatments
- assess by independent judges who do not know which client has received
  which therapy
Bingo (again): double-blind as good as it gets, every bit as good
as in a drug study (maybe better). Quod erat demonstrandum.
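To make the recipe concrete, here is a toy simulation of the design in
Python. The group size, outcome scale, and credibility ratings are
invented purely for illustration; the point is only to show where random
assignment, blinded rating, and the after-the-fact credibility check fit in.

import random
from statistics import mean

def simulate_trial(n_clients=40, seed=1):
    # Toy sketch only: numbers are made up; the structure mirrors the recipe above.
    rng = random.Random(seed)
    # Balanced random assignment to the two (equally credible) treatments
    assignments = ["experimental", "attention-placebo"] * (n_clients // 2)
    rng.shuffle(assignments)

    groups = {"experimental": [], "attention-placebo": []}
    for therapy in assignments:
        # Therapy delivered by a paraprofessional led to believe in both
        # treatments; here its "effect" is just an invented number.
        true_effect = 1.0 if therapy == "experimental" else 0.0
        # Outcome rated by an independent judge blind to the assignment
        outcome = rng.gauss(true_effect, 2.0)
        # Client's credibility rating of the therapy (should not differ by group)
        credibility = rng.gauss(5.0, 1.0)
        groups[therapy].append((outcome, credibility))

    for therapy, data in groups.items():
        outcomes = [o for o, _ in data]
        creds = [c for _, c in data]
        print(f"{therapy:18s} n={len(data):3d}  "
              f"mean outcome = {mean(outcomes):5.2f}  "
              f"mean credibility = {mean(creds):4.2f}")

simulate_trial()

A real analysis would then test the group difference in outcomes and
confirm that the credibility ratings do not differ between groups.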
-Stephen
Reference
Kirsch, I., & Sapirstein, G. (1998). Listening to Prozac but
hearing placebo: A meta-analysis of anti-depressant medication.
Prevention & Treatment, 1, Article 0002a.
[Electronic journal available at:
journals.apa.org/prevention/volume1/pre0010002a.html]
------------------------------------------------------------------------
Stephen Black, Ph.D. tel: (819) 822-9600 ext 2470
Department of Psychology fax: (819) 822-9661
Bishop's University e-mail: [EMAIL PROTECTED]
Lennoxville, QC
J1M 1Z7
Canada Department web page at http://www.ubishops.ca/ccc/div/soc/psy
------------------------------------------------------------------------