Marie Helweg wrote:

> Here is the institute's website. It looks like they are still going
> strong. They have PhD students and publish their research in their own
> publication. When challenged (I contacted them a couple of years ago) they
> refer to a few individual case studies where FC did work (but no
> large scale studies with lots of participants).
> It typically generates heated discussion among my students that the
> institute is still working when FC does not work (at least for most
> children). It really is one of the best videos for making it clear why we
> need research.

    I shouldn't be taking the time right now to do this, but...

    I think that people who push these "revolutionary miracle cures" tend
not to understand exactly what it is that makes "large scale studies with
lots of participants" different from "individual case studies", and I think
we tend to give up too much ground to that misunderstanding. I took a quick
look at that site, and did see one of those case studies in one of the past
issues of their newsletter. Suppose I believe (as I do) that the facilitated
communication idea is dead wrong, and that FC does not "work" (that is, that
it is nothing but a way to comfort parents, and does not actually do
anything for the autistic child). At the same time, let's suppose (as I do)
that the case studies in their newsletters are relatively honest accounts of
actual events (that is, that they suffer from no dishonesty more serious
than the normal selective picking and choosing of events to write about).
    The case study I saw there described a child who had been labeled
"autistic", who received FC treatments, and now apparently communicates
relatively normally. Since we're psychology teachers here, I'm sure that I
don't have to lay out a list of alternative explanations (besides the one
intended: that FC treatment cured a genuinely autistic child) for the events
described there. I didn't read it carefully enough to be able to confidently
say that they did nothing to eliminate any of those alternatives, but I'll
bet they did very little (for example, I would assume that this was nothing
more than an initial misdiagnosis).

    I think that people not well informed about research methods look at
case studies like that, and then hear calls for large scale studies, and
think "Well, so what if we can't demonstrate that it works in general? Look
at the case study - we KNOW it worked for that child, because it says so
right there". That is, I think that the lay public takes calls for large
scale studies with experimental controls and statistical analyses as
admissions that "it worked for that one child", but as dismissals of the
importance of working for just one child (and of course there would be
something pathetic about a science that responded to a dramatic instance of
a cure by saying "So what? If it doesn't work for people in general, who
cares if it cured one child?").

** But that's not it at all: those large scale studies are the method of
confirming that it worked _for even just the one child_. The case study does
NOT allow you to infer that it worked for that child, because of all of
those unaccounted for alternative explanations. The large scale studies DO
account for at least some of those alternatives, including the most
important ones. **

    When I did my dissertation work on students' understandings of research,
time and again I heard students respond to negative findings of large scale
studies of their pet beliefs by saying "Well, I KNOW it worked for me". It
seems like a way of dismissing counterintuitive findings, but I think it's
based on a very basic misunderstanding of why we do "large scale studies" -
an assumption that the value of those larger samples is ONLY that they
increase the external validity of a study (that is, our ability to
generalize the findings). That assumption is dead wrong. Larger samples with
statistical analyses and the kinds of experimental controls not possible in
case studies increase the INTERNAL validity of the study as well (and in
fact, that is MORE important than the effect on the external validity).

    Imagine I had a deck of cards sitting in front of me, face down, and I
told you that I was going to "intuit" the color of the top card. I close my
eyes, grunt and groan and pretend to be receiving signals, and then announce
that it's a black card. I turn it over, and sure enough, it's black, a 10 of
Clubs. I claim the power to intuit the color of cards. Then James Randi
takes me aside with another deck, and asks to run a "large scale study",
with a sample size of 52 observations. One at a time I grunt and groan and
intuit the cards' colors, and at the end of the study, I've gotten 26 of 'em
right (and 26 of 'em wrong). The problem with my "case study" (the intuiting
of the color of the single card at the top) is not just that it fails to
demonstrate a generalizable ability. The problem is that it fails to
demonstrate any ability at all. Similarly, after the "large scale study", I
don't get to say "well, it doesn't work for MOST cards, but I still have the
ability to intuit some of them, like the 10 of Clubs - after all, my first
study demonstrated that".
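
    To put rough numbers on that card example (a quick sketch of my own, not
something from the original study of anything - just the standard binomial
arithmetic for fair guessing):

```python
from math import comb

# A single "case study": calling one card's color. Pure chance succeeds
# half the time, so the one dramatic hit demonstrates nothing.
p_one_card = 0.5

# The "large scale study": 52 calls. Under a fair-guessing model,
# correct calls X ~ Binomial(n=52, p=0.5). How surprising is getting
# 26 or more right by chance alone?
n = 52
p_at_least_26 = sum(comb(n, k) for k in range(26, n + 1)) / 2 ** n

print(f"P(one card correct by chance)     = {p_one_card}")
print(f"P(26+ of 52 correct by chance)    = {p_at_least_26:.3f}")  # about 0.555
```

    That is, 26 of 52 is better than a coin flip would do more than half the
time - the larger sample is what lets you see that the original single hit
was never evidence of anything.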

    Back to the FC issue. I worry about what we suggest to students when we
say things like what Marie said here (sorry, Marie - I'm picking on you. This
must be what it's like to be a presidential candidate, hey?):

> It typically generates heated discussion among my students that the
> institute is still working when FC does not work (at least for most
> children).

    The problem is that the parenthetical disclaimer "at least for most
children" suggests to our students that the FC case studies DO demonstrate
that FC worked for those individual children who were actually the subjects
of the case studies. But they do no such thing, any more than my getting 26
of 52 cards right demonstrates that I have a wonderful intuitive power that
works roughly half the time. Nope.

Paul Smith
Alverno College
Milwaukee

