Re: IRB's Gone Wild?

2004-05-07 Thread Annette Taylor, Ph. D.
I am probably past my quota but now I am getting offended. Clearly you took my 
statement completely out of context and used it to add more fuel to a fire that 
did not need it. Your question is absurd when placed in the entire context of 
this discussion.

Annette

Quoting Bill Scott [EMAIL PROTECTED]:

 Annette Taylor wrote:
 
   even in a minimum risk study you are abusing your participants if you
 are asking them to give up their time and energy on a useless task.
 --
 
 Does this mean an IRB should not approve a replication of Martin Orne's
 classic demonstration of experimental demand characteristics where he asked
 participants to add up columns of numbers and then tear up their work over
 and over for hours on end? He meant it to be as useless a task as possible,
 and he wanted to see how long they would do it just because he asked them to
 do it for an experiment. Was he abusing his participants?
 
 see http://journals.apa.org/prevention/volume5/pre0050035a.html
 
 Bill Scott
 
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]



RE: IRB's Gone Wild?

2004-05-07 Thread Annette Taylor, Ph. D.
As a blanket statement without any context, I will say this: for many published 
instruments there are reliability data available. In my experience, when an IRB 
asks for that kind of information, that is what it is asking for; if there 
aren't any such data, then there aren't any, and a brief statement of why that's 
not a problem should generally be sufficient. I don't see this at all as 
micromanagement, nor as politically motivated. I think a prudent IRB would 
request that kind of information if it will help them make a decision. 

We know next to nothing about the proposal that started all of this, and the 
context is very important. We don't know if the person or committee reviewing this 
study felt uncomfortable about something within the overall proposal and felt 
that some tangible bit of information might settle his/her/their minds about 
the potential risks involved. We don't know if this is a committee request or 
an individual request.

We only know that this is a student with a self-constructed questionnaire about 
music preferences. We know nothing of how the student presented the study to 
the reviewers, or about the purpose of the study, or of the content of the 
items. Basically, we have minimal information here, yet we are trying to draw 
some grand conclusions about how IRBs operate.

In addition, the letters IRB tend to raise people's hackles for whatever 
reason, and objectivity quickly drains out of the discussion. This is a shame, 
because in 20 years and several different institutions I have never been 
involved with a 'bad' IRB. So maybe the fault is mine: I lack that negative 
experience and therefore cannot understand why some people are so negative.

Annette

ps Patricia, where are you?

Quoting Hetzel, Rod [EMAIL PROTECTED]:

 IRBs should focus on assessing the potential risk for harm to
 participants and should not address psychometric issues of the study. I
 believe this for a few reasons.
 
 First, an IRB cannot make an educated decision on psychometric issues if
 it does not have expertise in that particular content area. I may decide
 to use a measure that doesn't yield very reliable scores but that may be
 the best measure available. Plus, as the primary investigator for a
 research study, it is my responsibility to choose the measures. If I
 choose measures that are poorly constructed or do not produce
 reliable/valid scores, then the editors reviewing my paper for
 publication will (hopefully) catch it.
 
 Second, if the IRB is going to evaluate score reliability, then at what
 cut-off point are they going to decide that an instrument poses a risk?
 Are they going to go by the .70 criterion? Higher? Lower? This is a
 slippery slope that is best avoided.
 
 Third, technically, reliability is a property of scores, not of the
 tests themselves. When tests are developed, they do not have a
 reliability coefficient stamped upon them by the almighty publisher.
 Researchers should ALWAYS calculate score reliability and validity with
 their current samples and not rely on previous estimates from other
 samples. In fact, many journal editors are now requiring researchers to
 do this prior to submitting articles for publication.
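 
 A minimal sketch of what that per-sample calculation might look like
 (Python with NumPy; the respondents-by-items data layout and the
 function name are illustrative assumptions, not a prescribed method):
 
     import numpy as np
 
     def cronbach_alpha(scores):
         # scores: 2-D array, rows = respondents, columns = items
         scores = np.asarray(scores, dtype=float)
         k = scores.shape[1]                         # number of items
         item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
         total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
         return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
 
     # Hypothetical usage with this study's own sample:
     # responses = np.loadtxt("survey_scores.csv", delimiter=",")
     # print(cronbach_alpha(responses))
 
 Any value such a check returns runs straight into the cut-off question
 above: is .70 good enough? Higher? Lower?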
 
 The argument that participants need to be protected from the potential
 risk of wasting their time completing surveys that do not provide
 reliable scores is weak. Maybe we should not let students complete
 paper surveys because they will run the risk of getting paper cuts. But
 then we couldn't have computer surveys either, because participants may
 be at risk of developing carpal tunnel syndrome. Maybe we should require
 researchers to write their surveys backwards so left-handed participants
 won't run the risk of smearing their answers and getting ink on their
 hands. Okay, so I know I'm being ridiculous here (it's the day before
 grades are due!). This whole situation smacks of micromanagement, and I
 wonder what kind of political factors are playing into the decision.
 
 Rod
 
 __
 Roderick D. Hetzel, Ph.D.
 Department of Psychology
 LeTourneau University
 Post Office Box 7001
 2100 South Mobberly Avenue
 Longview, Texas  75607-7001
  
 Office:   Education Center 218
 Phone:903-233-3893
 Fax:  903-233-3851
 Email:[EMAIL PROTECTED]
 
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]



Re: IRB's Gone Wild?

2004-05-07 Thread Bill Scott
I took your words out of context? In what context is the word "abusing"
meant in the following quote?

  even in a minimum risk study you are abusing your participants if you
  are asking them to give up their time and energy on a useless task.

Certainly you do not mean abuse such as that experienced by recent Iraqi
prisoners. It seems to me that IRB-speak throws around terms such as
"abuse," "risk," and "ethics" in such a shoot-from-the-hip manner as to
make them meaningless.

Bill Scott

----- Original Message -----
From: Annette Taylor, Ph. D.
 I am probably past my quota but now I am getting offended. Clearly you
 took my statement completely out of context and used it to add more fuel
 to a fire that did not need it. Your question is absurd when placed in
 the entire context of this discussion.

 Annette

Quoting Bill Scott [EMAIL PROTECTED]:
 Annette Taylor wrote:

   even in a minimum risk study you are abusing your participants if you
   are asking them to give up their time and energy on a useless task.
 --

 Does this mean an IRB should not approve a replication of Martin Orne's
 classic demonstration of experimental demand characteristics where he
 asked participants to add up columns of numbers and then tear up their
 work over and over for hours on end? He meant it to be as useless a task
 as possible, and he wanted to see how long they would do it just because
 he asked them to do it for an experiment. Was he abusing his
 participants?

 see http://journals.apa.org/prevention/volume5/pre0050035a.html





Re: IRB's gone wild

2004-05-07 Thread DAVID KREINER
I may be too late to contribute anything helpful to this thread (I get
the digest version, so I'm always a day or so behind). There is
obviously a concern about the extent to which the IRB should be
evaluating the quality of the research, and intelligent people can have
different opinions on this. On the practical side, I would suggest that
one or two faculty members from the department ask to meet with the IRB
to discuss the issue in general (not just the specific protocol that was
submitted). The IRB members are most likely reasonable people who would
appreciate a dialogue about how best to fulfill their responsibilities.
So my suggestion (as the IRB administrator at my institution) is to
listen to their concerns, explain your concerns, and help them come up
with a good policy that fits your institution. 



David Kreiner
Professor of Psychology and 
Assistant Dean of The Graduate School
Central Missouri State University
[EMAIL PROTECTED]



Re: IRB's Gone Wild?

2004-05-06 Thread Miguel Roig

I suspect that the operating assumption boils down to the notion that
even in minimal risk research there is always some risk. Thus, if
there is a possibility that the instrument is flawed, why waste Ss' time
and expose them to any degree of risk?
Miguel
At 09:10 AM 5/6/2004 -0400, you wrote:
Our relatively new IRB has sent back a proposal from a colleague. The
IRB refuses to evaluate the proposal without the author addressing issues
of RELIABILITY and VALIDITY of measures. I find this to be a bit scary.
While I feel that the IRB is properly charged with evaluating the risk to
participants using a given method, I do not feel that the IRB has any
place evaluating the appropriateness of the method beyond the evaluation
of risk...especially in cases with minimum risk. My contention is that
the reliability and validity of measures should be outside the purview of
the IRB unless risk levels exceed minimum and a cost/benefit decision
must be discussed.

Thoughts? Can anyone help me out here?

___

Miguel Roig, Ph.D.
Associate Professor of Psychology
Notre Dame Division of St. John's College
St. John's University
300 Howard Avenue
Staten Island, New York 10301
Voice: (718) 390-4513
Fax: (718) 390-4347
E-mail: [EMAIL PROTECTED]
http://facpub.stjohns.edu/~roigm
On plagiarism and ethical writing:
http://facpub.stjohns.edu/~roigm/plagiarism/
___






RE: IRB's Gone Wild?

2004-05-06 Thread Robert Herdegen



Not that I necessarily agree with this particular IRB, but I think that it
*is* a cost-benefit matter. Indeed, this is an issue that I discuss with my
Research Methods students when we cover research ethics. The argument would
be that if the measures are lacking reliability and validity, then there is
nothing that we can gain by using them in the research. And if there is
nothing to be gained by doing the research, then even "minimal risk" to the
participants (note that it isn't "no risk") cannot be justified. There is a
potential cost with absolutely no scientific benefit. Of course, what this
ignores is the *educational* benefit that may accrue to students conducting
research. The counter-argument to that is that the students will gain
little educational benefit by conducting research that has no validity.
(When getting into this discussion with students--both those in the
Research Methods class and later when we discuss research ethics in our
senior seminar--we follow it through all of these arguments. Frequently the
students leave class very frustrated because at the end of the discussion
they "don't know what the right answer is" and are still wrestling with the
issues. At that point I know I've done my job right!) Our IRB rarely
questions the particulars of the instruments in the proposals our students
send up to them, but we *try* to be pretty careful about what gets sent to
them in the first place.


Robert T. Herdegen III
Elliott Professor of Psychology and Chairman
Department of Psychology
Hampden-Sydney College
Hampden-Sydney, VA 23943
434-223-6166
[EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 06, 2004 9:10 AM
To: Teaching in the Psychological Sciences
Subject: IRB's Gone Wild?
  Our relatively new IRB has sent back a proposal from a colleague. The
  IRB refuses to evaluate the proposal without the author addressing
  issues of RELIABILITY and VALIDITY of measures. I find this to be a bit
  scary. While I feel that the IRB is properly charged with evaluating
  the risk to participants using a given method, I do not feel that the
  IRB has any place evaluating the appropriateness of the method beyond
  the evaluation of risk...especially in cases with minimum risk. My
  contention is that the reliability and validity of measures should be
  outside the purview of the IRB unless risk levels exceed minimum and a
  cost/benefit decision must be discussed.

  Thoughts? Can anyone help me out here?




Re: IRB's Gone Wild?

2004-05-06 Thread Claudia Stanny
I've seen cases where this has happened. I agree that IRBs should not be
in the business of evaluating research methods for minimal risk research.
These IRBs try to justify this micromanaging by appealing to the risk of
wasting participants' time with a study that is unlikely to provide a
meaningful answer to a question. This is a slippery slope. I've also seen
cases in which an IRB member wanted to change which variables the
researcher included because the IRB member thought those variables were
more important than the ones the researcher chose. My experience has been
that this has been less of a problem with university-wide IRBs than with
departmental IRBs. 

What experiences have others had? 

Claudia Stanny

At 09:10 AM 5/6/2004 -0400, you wrote: 

 Our relatively new IRB has sent back a proposal from a colleague.  The IRB
 refuses to evaluate the proposal without the author addressing issues of
 RELIABILITY and VALIDITY of measures.  I find this to be a bit scary.  While
 I feel that the IRB is properly charged with evaluating the risk to
 participants using a given method, I do not feel that the IRB has any place
 evaluating the appropriateness of the method beyond the evaluation of
 risk...especially in cases with minimum risk.  My contention is that the
 reliability and validity of measures should be outside the purview of the
 IRB unless risk levels exceed minimum and a cost/benefit decision must be
 discussed.  

 Thoughts?  Can anyone help me out here? 


Claudia J. Stanny, Ph.D.
Associate Professor
Department of Psychology
University of West Florida
Pensacola, FL 32514-5751
Phone: (850) 474-3163
FAX: (850) 857-6060
Web Site: http://uwf.edu/cstanny/




Re: IRB's Gone Wild?

2004-05-06 Thread Annette Taylor, Ph. D.
As chair of our IRB I have sometimes done the same thing, especially if the 
measures send up a red flag somehow. If the measures are reliable and valid 
then this is an extremely easy task. If they are not, then even in a minimum 
risk study you are abusing your participants if you are asking them to give up 
their time and energy on a useless task.

As a psychologist I find I am more mindful of such issues than my colleagues 
from other disciplines. Some of them--especially from the 'hard' sciences--seem 
clueless about tests even having reliability and validity. I don't think the 
IRB has gone wild at all. It is doing its job. This should be easily 
accomplished and easily remedied.

If that was the only thing problematic with the proposal, I would have marked it 
as 'approved pending modifications'; then, usually within a day of getting the 
requested information, I would have gotten back to the researcher and told them it 
was approved. At least at our school it is not a big hassle. Over the years of 
doing such things I find most researchers end up grateful for the heads-up on a 
problem with their studies--it boils down to 'it's not what you say but how you 
say it.'

Annette

Quoting [EMAIL PROTECTED]:

 Our relatively new IRB has sent back a proposal from a colleague. The IRB
 refuses to evaluate the proposal without the author addressing issues of
 RELIABILITY and VALIDITY of measures. I find this to be a bit scary.
 While I feel that the IRB is properly charged with evaluating the risk to
 participants using a given method, I do not feel that the IRB has any
 place evaluating the appropriateness of the method beyond the evaluation
 of risk...especially in cases with minimum risk. My contention is that
 the reliability and validity of measures should be outside the purview of
 the IRB unless risk levels exceed minimum and a cost/benefit decision
 must be discussed.

 Thoughts? Can anyone help me out here?
 
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]



Re: IRB's Gone Wild?

2004-05-06 Thread RJRersb
Ok. I see your points.

Allow me to expand: This project involves a student-developed survey on music preference and simple correlations with demographic info. The focus of the IRB is on the reliability and validity of this instrument. I see little risk in asking someone their music preference, and little opportunity or utility in addressing the validity and reliability of a homegrown and simplistic instrument. Even if it is arguably appropriate, how can validity and reliability issues be responsibly dealt with in this case?




Re: IRB's Gone Wild?

2004-05-06 Thread Christopher D. Green
Annette Taylor, Ph. D. wrote:
As chair of our IRB I have sometimes done the same thing, especially 
if the measures send up a red flag somehow. If the measures are 
reliable and valid then this is an extremely easy task. If they are 
not, then even in a minimum risk study you are abusing your 
participants if you are asking them to give up their time and energy 
on a useless task.
Who 'gives up' their time and energy? Participants are usually 
compensated for their time and energy. Participants don't give up time 
and energy any more than other employees do (and surely, as employees 
ourselves, we know how much useless work employees are asked to do).  
It isn't (or, rather, shouldn't be, given the absurd amount of power 
that IRBs have been given of late) for the IRB to pre-empt the 
editorial process by attempting to pass judgment on the quality of 
research methodology.

Regards,
--
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario, Canada
M3J 1P3
e-mail: [EMAIL PROTECTED]
phone: 416-736-5115 ext. 66164
fax: 416-736-5814
http://www.yorku.ca/christo/




Re: IRB's Gone Wild?

2004-05-06 Thread Marie Helweg-Larsen




I have to disagree with Annette (and others) on this one (I almost
always agree with Annette :-) ).
I was the chair of the IRB where I taught before, and now I serve on the
college-wide IRB.
I think this is exactly an example of an IRB gone wild. I think this is
an example that contributes to the perception that IRBs are overstepping
their bounds as the "ethics police" and unduly interfering with
research (especially minimal-risk research). I know Tricia
Keith-Spiegel is doing research on this as we speak - perhaps she can
chime in.
I am aware of the fact that IRBs can choose to set the bar higher than
the federal regulations (and this is certainly an instance of that).
However, I think this is dangerous for several reasons (in no
particular order):
-many measures used by researchers (incl. me) do not have reliability
and validity data. There would be no way to provide evidence of
reliability and validity because it does not exist (beyond "these are
the measures that I've used before or that other people use").
-often you ask new questions that you simply have to write yourself.
-in order to collect data on reliability and validity you often have to
ask questions that are "bad."
-a very important educational function is for students to write items
themselves and do the best they can. They then realize (just like we
do) that the items are probably not that great. Having the IRB serve a
policing function for such research is not only unrealistic - I do not
think that is the job of the IRB.
-IRBs are rarely experts in the research "you" do. How can they
reasonably make a judgment about the quality of the measures?
-according to the federal guidelines, anonymous, minimal-risk research is
EXEMPT from review. Unless the IRB has chosen to set a higher bar than
the federal guidelines, most research that is done by students is never
reviewed (again, if one follows the regs).
Just some thoughts before teaching my last class of the semester (yeah!)
Marie


Annette Taylor, Ph. D. wrote:

  As chair of our IRB I have sometimes done the same thing, especially if the 
measures send up a red flag somehow. If the measures are reliable and valid 
then this is an extremely easy task. If they are not, then even in a minimum 
risk study you are abusing your participants if you are asking them to give up 
their time and energy on a useless task.

As a psychologist I find I am more mindful of such issues than my colleagues 
from other disciplines. Some of them--especially from the 'hard' sciences--seem 
clueless about tests even having reliability and validity. I don't think the 
IRB has gone wild at all. It is doing its job. This should be easily 
accomplished and easily remedied.

If that was the only thing problematic with the proposal, I would have marked it 
as 'approved pending modifications'; then, usually within a day of getting the 
requested information, I would have gotten back to the researcher and told them it 
was approved. At least at our school it is not a big hassle. Over the years of 
doing such things I find most researchers end up grateful for the heads-up on a 
problem with their studies--it boils down to "it's not what you say but how you 
say it."

Annette

Quoting [EMAIL PROTECTED]:

  
  
Our relatively new IRB has sent back a proposal from a colleague. The IRB 
refuses to evaluate the proposal without the author addressing issues of 
RELIABILITY and VALIDITY of measures. I find this to be a bit scary. While I 
feel that the IRB is properly charged with evaluating the risk to participants 
using a given method, I do not feel that the IRB has any place evaluating the 
appropriateness of the method beyond the evaluation of risk...especially in 
cases with minimum risk. My contention is that the reliability and validity of 
measures should be outside the purview of the IRB unless risk levels exceed 
minimum and a cost/benefit decision must be discussed.

Thoughts? Can anyone help me out here?




  
  

Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]

  


-- 
*
Marie Helweg-Larsen, Ph.D.
Associate Professor of Psychology
Dickinson College, P.O. Box 1773
Carlisle, PA 17013
Office: (717) 245-1562, Fax: (717) 245-1971
Webpage: www.dickinson.edu/~helwegm
*







Re: IRB's Gone Wild?

2004-05-06 Thread Annette Taylor, Ph. D.
Yes, and now I see your point. I think that the student can respond to this 
readily. First of all, obviously there are no reliability/validity 'data'. So 
all the student has to note is that the instrument has face validity--the 
student can go over the items individually, if need be, to satisfy the IRB, and 
then justify their inclusion, along with the demographics, based on the 
literature. I'd say this can be done within 30-60 minutes and might make the 
student re-think his/her items. If there are no 'sensitive' items I don't 
foresee that there should be any problems--the student may want to mention 
that. Pedagogically it's not a bad idea, and it would satisfy the cost-benefit 
aspect of IRB review.

Maybe the student's mistake was in not submitting this for an 'exempt' status 
review and taking the time to make a case for the 'exempt' status. At least 
for most IRBs that would mean only one person--the chair or administrator--
looks to see that it satisfies the requirements for exemption from full and 
detailed review and can quickly decide to agree, or to ask for expedited 
review.

Annette

Quoting [EMAIL PROTECTED]:

 Ok. I see your points.

 Allow me to expand: This project involves a student-developed survey on
 music preference and simple correlations with demographic info. The focus
 of the IRB is on the reliability and validity of this instrument. I see
 little risk in asking someone their music preference, and little
 opportunity or utility in addressing the validity and reliability of a
 homegrown and simplistic instrument. Even if it is arguably appropriate,
 how can validity and reliability issues be responsibly dealt with in this
 case?
 
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]



Re: IRB's Gone Wild?

2004-05-06 Thread Annette Taylor, Ph. D.
We will have to disagree here completely. It is the job of the IRB to decide on 
the quality of research if the quality shifts the balance of cost/benefit to 
cost. Participants do give up their time and energy and are not often 
compensated. Most subject pools use a genteel form of coercion that we turn a 
blind eye to--do this 3 of 5 times per semester, or do a much more onerous 
task, or don't pass the course. Let's be real. Most students do not want to 
participate in research, but most intro psych students have to, or they have to 
do article reviews or some such nonsense.

I don't think IRBs are too powerful at all. You need to sit on an IRB for a 
couple of years, and see what comes before committees, to get a real sense of 
the confounded garbage that often makes its way to us. As chair I am often the 
only one reading the vast majority of studies, and I have to say I have seen 
some truly terrible proposals. It has changed my perspective completely. 

I think unless you have had this experience you might not understand the 
perspective of those who have seen truly horribly confounded studies come 
before them. There is also a real danger to the understanding of science that 
comes from people participating in bad studies.

Annette

Quoting Christopher D. Green [EMAIL PROTECTED]:

 Annette Taylor, Ph. D. wrote:
 
  As chair of our IRB I have sometimes done the same thing, especially 
  if the measures send up a red flag somehow. If the measures are 
  reliable and valid then this is an extremely easy task. If they are 
  not, then even in a minimum risk study you are abusing your 
  participants if you are asking them to give up their time and energy 
  on a useless task.
 
 Who 'gives up' their time and energy? Participants are usually 
 compensated for their time and energy. Participants don't give up time 
 and energy any more than other employees do (and surely, as employees 
 ourselves, we know how much useless work employees are asked to do).  
 It isn't (or, rather, shouldn't be, given the absurd amount of power 
 that IRBs have been given of late) for the IRB to pre-empt the 
 editorial process by attempting to pass judgment on the quality of 
 research methodology.
 
 Regards,
 -- 
 Christopher D. Green
 Department of Psychology
 York University
 Toronto, Ontario, Canada
 M3J 1P3
 e-mail: [EMAIL PROTECTED]
 phone: 416-736-5115 ext. 66164
 fax: 416-736-5814
 http://www.yorku.ca/christo/
 
 
 
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]



Re: IRB's Gone Wild?

2004-05-06 Thread Christopher D. Green
Annette Taylor, Ph. D. wrote:
We will have to disagree here completely. It is the job of the IRB to 
decide on the quality of research if the quality shifts the balance of 
cost/benefit to cost. Participants do give up their time and energy 
and are not often compensated. Most subject pools use a genteel form 
of coercion that we turn a blind eye to--do this 3 of 5 times per 
semester, or do a much more onerous task, or don't pass the course. 
Let's be real. Most students do not want to participate in research, 
but most intro psych students have to, or they have to do article 
reviews or some such nonsense.
Annette,
If one is going to be real, then one should admit that the risk of 
minimal risk research is probably less than that of going to the class 
itself. The rhetoric of cost in a not-very-well-controlled study is, 
I'm afraid, the main form of nonsense here. Has ANYONE EVER been injured 
by, e.g., memorizing a list of words and then spitting them back out? There 
is no cost worthy of the name here at all. There are no ethical 
considerations that don't serve more to denigrate the term "ethics" than 
to protect anyone at risk. It is a power play, pure and simple. 
Being a participant in research is part of the education itself and is 
no more coercive than reading assignments and tests.

And just for the record (since you assumed otherwise) I have sat on my 
department's ethics review committee.

Regards,
--
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario, Canada
M3J 1P3
e-mail: [EMAIL PROTECTED]
phone: 416-736-5115 ext. 66164
fax: 416-736-5814
http://www.yorku.ca/christo/




Re: IRB's Gone Wild?

2004-05-06 Thread Jeff Bartel
On 6 May 2004 at 8:23, Annette Taylor wrote:

 On Thu, 6 May 2004, jim clark wrote:

  With respect
  to the last point, it would be interesting to see if
  participating in bad studies harms or helps students'
  understanding of science.
 
 Good study idea!
 

I agree, but getting that study through the IRB might be difficult. :)

Jeff

--
Jeffrey Bartel
Assistant Professor
Department of Psychology
Shippensburg University
[EMAIL PROTECTED] / 717.477.1324



Re: IRB's Gone Wild?

2004-05-06 Thread Lenore Frigo
I'm stumped. How do you know if a new measure is reliable or valid before actually testing it by collecting data from participants? 
Lenore Frigo
[EMAIL PROTECTED]



RE: IRB's Gone Wild?

2004-05-06 Thread Hetzel, Rod




I'm stumped. How do you know if a new measure is reliable or valid 
before actually testing it by collecting data from participants? 
Lenore Frigo
[EMAIL PROTECTED]



Or alternatively, how do you know that an existing measure is going to 
produce reliable/valid scores with your particular sample?


__
Roderick D. Hetzel, Ph.D.
Department of Psychology
LeTourneau University
Post Office Box 7001
2100 South Mobberly Avenue
Longview, Texas 75607-7001

Office: Education Center 218
Phone: 903-233-3893
Fax: 903-233-3851
Email: [EMAIL PROTECTED]
Homepage: http://www.letu.edu/people/rodhetzel




Re: IRB's Gone Wild?

2004-05-06 Thread Bill Scott
Annette Taylor wrote:

  even in a minimum risk study you are abusing your participants if you
are asking them to give up their time and energy on a useless task.
--

Does this mean an IRB should not approve a replication of Martin Orne's
classic demonstration of experimental demand characteristics where he asked
participants to add up columns of numbers and then tear up their work over
and over for hours on end? He meant it to be as useless a task as possible,
and he wanted to see how long they would do it just because he asked them to
do it for an experiment. Was he abusing his participants?

see http://journals.apa.org/prevention/volume5/pre0050035a.html

Bill Scott

