Stanovich is another great source of examples of "we can't measure it but we know that it works." I just showed Prisoners of Silence (the video about facilitated communication) to my research methods course (always riveting). I reread an article in the American Psychologist (Jacobson et al., 1995) discussing the antiscience and pseudoscience of the FC movement. In one of the replies to this article, Allen and Allen (1996) ask, "Can the scientific method be applied to human interaction?" They argue, "...it seemed, at least to us, that they [Jacobson et al] missed the larger and perhaps more important point of the debate concerning this obviously controversial topic. That is, is human interaction able to be examined or studied using the scientific method?" (p. 986). Allen and Allen do not argue that FC works but that it can't really be studied at all (because it involves a human interaction). I guess the fields of social psychology, clinical psychology, and developmental psychology (and a few others) are out too, then.

Paul, I'd love to see what type of teaching materials you develop to get at/teach this point.

Marie

Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist, 50, 750-765.

Allen, B., & Allen, S. (1996). Can the scientific method be applied to human interaction? American Psychologist, 51, 986.

Paul Smith wrote:
I just read those responses (today's NYTimes), and was not at all surprised to find a certain kind of strange reasoning that I think deserves attention in our research classes. Two of the letter writers offered versions of "The effectiveness of psychotherapy can't be measured, but we know that it works because...". The illogic of that statement is typically hidden with some flowery language about the mystical complexity of the human experience while "mere grubby evaluation" is denigrated with mechanical, technical sounding words. I think it signals some kind of odd understanding of the notion of measurement, an understanding that simultaneously tries to include and to exclude forms of measurement that don't involve machines and numbers. I am very tempted to try to get at this with an assessment in my research methods class this semester.

I was also a bit bothered to see that one of the letter writers (and apparently the editor who chose the letters to print) seemed to be overly impressed by some study that showed changes in brain structure in response to some therapy. I see no reason why structural changes revealed by brain scans would be considered better evidence for effectiveness of therapy than, say, simple behavioral observations (Didja get out of the house today? Great!), or self-reported measures of mood.

Paul Smith
Alverno College
Milwaukee

On 3/2/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
An op-ed piece was published recently in the New York Times, in which
a psychoanalyst declared that his profession was beyond mere grubby
evaluation of effectiveness. The colleague who drew this to my
attention also sent me a collection of responses to it, among them a
great one from our very own Scott Lilienfeld. I'm sure he won't mind
if I reproduce it here (please excuse me if the formatting gets
messed up in the re-posting; it looks ok at the moment).


--- You are currently subscribed to tips as: [EMAIL PROTECTED] To unsubscribe send a blank email to [EMAIL PROTECTED]

-- 
*********************************************
Marie Helweg-Larsen, Ph.D.
Associate Professor of Psychology
Dickinson College, P.O. Box 1773
Carlisle, PA 17013
Office: (717) 245-1562, Fax: (717) 245-1971
Webpage: www.dickinson.edu/~helwegm
*********************************************
