Rick,
But we can't test every wacky or implausible idea just because it might one day become popular and damage lives. I think that for most research one should design a study that will show a relationship between the variables examined. I bet the author of the dissertation was not attempting to debunk goat watching as therapy but rather expected to find an effect. I'm sure we all guide our students to develop hypotheses that seem likely (based on the literature), and we don't allow them to predict null effects. My point is that in the whole universe of hypotheses, most are NOT meant to advance knowledge by debunking popular but unsupported therapies. There is clearly a specific and very important place for such research (as Scott argues), but I think that predicting a null result needs special justification of some sort (perhaps the widespread popularity of the dubious practice, or perhaps the possibility of serious harm).

With respect to FC, the key failure was not that we (non-FC supporters) failed to test it and thereby show it to be false (before it damaged lives and wasted money). The key failure was that Biklen advanced his ideas (and was supported and funded by Syracuse University) before obtaining scientific evidence that it works. He is still advancing his ideas (and is still supported by Syracuse University, last I checked) even though there is now an incredible amount of evidence showing that it does not work. My point is simply that it is the people advancing the ideas (or wacky therapies) who must show them to be true.
Marie

Rick Froman wrote:

Scott:
I am aware of much of your work in the field, which is why I expressed my
thoughts as I did (to appeal directly to what I knew you believed, not
to try to persuade you of something I thought you didn't believe).
You say that there is a difference between putting a therapy that is
being widely used clinically (like FC or rebirthing or Therapeutic
Touch) to the test and testing more questionable hypotheses with less
widespread use.
Regarding the widespread use criterion: I recently viewed again the
Prisoners of Silence video on FC. FC started in Australia, and when
Douglas Biklen visited and brought it back to the US, there would have
been a window of opportunity to put it to an empirical test before
releasing it on a hopeful and credulous public. As one person who was
victimized by false allegations of abuse pointed out, shouldn't there be
as much testing of such practices before they are released as there is
when a new drug comes on the market? But, instead, there was little or
no empirical testing until many hopes had been raised and lives destroyed.

As to the silliness of thinking that one experience with an animal will
have a life-changing effect on attachment, how exactly does that differ
from the concept of rebirthing, which you suggest should be put to the
test? Rebirthing also involved brief sessions that were supposed to have
an effect on attachment. Before it became controversial because of actual
physical harm, I don't know how widespread it was, but it was probably no
more so than the programs today promoting contact with animals and
nature as a way to overcome various psychological problems.

I agree that the research design was flawed but I disagree that the
hypothesis was silly on its face. Much sillier hypotheses have gone
untested and have eventually become popular, raised false hopes, wasted
money, and hurt people. I applaud attempts to test such questionable
ideas before they get to that point.

Rick

Dr. Rick Froman
Professor of Psychology
John Brown University
2000 W. University
Siloam Springs, AR  72761
[EMAIL PROTECTED]
(479) 524-7295
http://www.jbu.edu/academics/sbs/faculty/rfroman.asp


-----Original Message-----
From: Scott Lilienfeld [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 19, 2006 10:08 AM
To: Teaching in the Psychological Sciences
Subject: Re: astonishing Psy.D. dissertation

Rick: I agree with much of what you say, especially in your concluding sentence. In fact, the journal I edit (Scientific Review of Mental Health Practice) is devoted explicitly to distinguishing scientific from nonscientific methods, and we routinely publish literature reviews and empirical investigations of novel, still unsubstantiated, and controversial methods, even those that seem bizarre or implausible on their face. So we are in full agreement there.

But there's a world of difference between (a) putting a therapy that is being widely used clinically, such as facilitated communication, rebirthing, or Therapeutic Touch, to an empirical test (which I firmly and strongly support) and (b) testing a questionable hypothesis (namely, that witnessing the birth of an animal during a single videotaped interaction can ameliorate the symptoms of children with severe and lasting attachment problems) using a research design that is flawed (e.g., an independent variable that almost surely exerts multiple and perhaps even opposing effects within and between subjects, a group of children diagnosed with a condition that is arguably among the most poorly validated in the DSM and that studies indicate is highly heterogeneous in its symptom picture, the selection of a sample that is probably among the least likely to show the predicted effects, the confound between witnessing an animal's birth and witnessing that animal's direct interaction with its offspring, and so on).

You are most certainly correct that many people are using animal-assisted therapy programs, and I enthusiastically support efforts to test these programs empirically (indeed, I've written on the topic of animal-assisted therapy myself). The study in question was not designed to test such a program. If hundreds of therapists around the country were running around treating children's severe attachment problems by showing them the births of goats and other animals, then your point would apply, and we would be in full agreement. My view then would be "Although I personally find this treatment to be rather dubious, lots of therapists are using it. Therefore, for both social and scientific reasons, it's important for us to keep an open mind and subject it to empirical examination." Instead, this study was designed to test the author's hypothesis, derived from his own conjectures, that a single brief videotaped presentation of an animal's birth can exert lasting effects on adjustment among children who have severe and lasting adjustment problems. It was not designed to test a treatment that is currently being used by anyone.

Reasonable people can hold differing points of view as to whether this is an interesting, important, or reasonable research hypothesis to test. As you can tell, my view is in the negative.

Either way, I believe that a central part of training our students to be good researchers lies not merely in the art and science of executing a study correctly. It also lies in choosing good and important research questions. I have long believed that this is one of the areas in which our field perpetually falls short in training students. And I also believe that it is one of the major factors distinguishing good from bad researchers. Good researchers, I maintain, have a knack for "sniffing out" research questions that are likely to bear fruit.

...Scott

Rick Froman wrote:

Scott Lilienfeld wrote:


"(I also don't agree in principle that one can't judge at least some
of
the merits of a research project by reading an Abstract, as a silly
research question is a silly research question regardless of how well
or
carefully the study is executed, but that's another matter)."
I agree that this study had methodological flaws and was too limited to
fulfill the requirements of a doctoral dissertation. On the other hand,
many Psy.D. programs do not have a dissertation requirement at all, and
many that do consider a detailed literature review to be a dissertation.
Without reading the dissertation, it is difficult to say how detailed
and developed the literature review may have been (or if, in this case,
there was even a "literature" to review). I applaud the attempt to test
an idea that has currency in the field without accompanying empirical
support. I wish I could persuade all of my undergrads and grads who seek
a career in counseling to have such a mindset of putting widely held
assumptions and unquestioned therapeutic approaches to an empirical test.


I disagree that this is a silly research question on its face. Just
because goats are used doesn't mean it earns the Golden Fleece (although
I am sure Senator Proxmire would have criticized the research had he
still been giving the award and had it received federal funds). Is it a
silly research question to put Healing Touch therapy to an empirical
test? What about Facilitated Communication? There are many programs
today using outdoor experiences and interactions with animals in a
therapeutic way. Should these therapies remain untested because it would
seem silly to test them? Remember, the results showed no significant
effect. How likely is it that a study like this would be published if it
weren't a dissertation? Anecdotal evidence of positive effects is all
over the place, but because such questions are seen as beneath serious
research, no one takes a chance on getting negative findings and the
resulting difficulty of finding an outlet for the research. If research
articles such as this are seen as silly on their face because they use
goats, it is hypocritical of us to criticize therapists for not
empirically validating questionable treatments. Serious scientists may
have better things to do than test the silly (and in some cases
dangerous, wasteful, and misguided) ideas that are passing for therapy
in some circles, but I hope those we are training to be counselors will
develop the attitude of putting even their most widely held and
cherished beliefs to the test.

Rick

Dr. Rick Froman
Professor of Psychology
John Brown University
2000 W. University
Siloam Springs, AR  72761
[EMAIL PROTECTED]
(479) 524-7295
http://www.jbu.edu/academics/sbs/faculty/rfroman.asp




--
*********************************************
Marie Helweg-Larsen, Ph.D.
Associate Professor of Psychology
Dickinson College, P.O. Box 1773
Carlisle, PA 17013
Office: (717) 245-1562, Fax: (717) 245-1971
Webpage: www.dickinson.edu/~helwegm
*********************************************

