Re: Test item difficulty

2003-02-16 Thread John W. Nichols, M.A.
OpScan ( http://www.ncspearson.com/scanners/index.htm ) is one of the
two major suppliers of answer sheets that I know of.  ScanTron is the
other.  Both companies provide optical scanner systems for their answer
sheets.  OpScan's are free if you use their forms, or at least they were
when we began using them.  ScanTron's is not free even if you use their
answer sheets.

Years ago, in the days of DOS, our computer people developed a program
for analyzing tests completed on the OpScan sheets.  It allows for
difficulty and discrimination measures and also reports student scores
in a variety of formats.  Primitive by today's standards, but it works.
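
For anyone curious what such a program computes, the difficulty side is
simple: difficulty is just the proportion of students who answered the item
correctly, and the conventional "optimum" is the midpoint between chance and
a perfect score. A minimal Python sketch (my own illustration with made-up
data, not the actual DOS program):

```python
def item_difficulty(responses, key):
    """Difficulty = proportion of students who chose the keyed answer."""
    return sum(r == key for r in responses) / len(responses)

def optimum_difficulty(n_options):
    """Midpoint between the chance rate and 1.0."""
    chance = 1 / n_options
    return (1 + chance) / 2

# Four-option items: chance = .25, so the optimum is (1 + .25) / 2 = .625,
# the figure Rod uses below.
print(optimum_difficulty(4))            # 0.625

# Hypothetical responses from twelve students to one item keyed "B":
print(item_difficulty(list("BBABCBDAABBB"), "B"))
```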

NCS Pearson now has Windows-based software
(http://www.ncspearson.com/scantools/index.htm ) for analyzing the
answer sheets.  It looks good.  I lust after it, but to date TCC has not
been willing to pop for it.

ScanTron also has analysis software, but at the time we looked into it,
it was far too expensive to consider seriously.



Hetzel, Rod wrote:
 
 Thanks for all your responses on the item difficulty post.  John, can you tell me a 
little more about the OpScan program?  Sounds interesting...
 
 __
 Roderick D. Hetzel, Ph.D.
 Department of Psychology
 LeTourneau University
 Post Office Box 7001
 2100 South Mobberly Avenue
 Longview, Texas  75607-7001
 
 Office:   Education Center 218
 Phone:903-233-3893
 Fax:  903-233-3851
 Email:[EMAIL PROTECTED]
 Homepage: http://www.letu.edu/people/rodhetzel
 
  -Original Message-
  From: John W. Nichols, M.A. [mailto:[EMAIL PROTECTED]]
  Sent: Friday, February 14, 2003 12:41 PM
  To: Teaching in the Psychological Sciences
  Subject: Re: Test item difficulty
 
 
  It is certainly a difficult item for the students in the
  class.  Your difficulty measure clearly shows that to be the
  case.  That however, does not necessarily mean that there is
  a test construction problem or that the item should be
  eliminated.  It could simply be a difficult item that few
  students studied well enough to do more than guess at.
 
  Without a discrimination measure, it cannot be determined who
  answered the question correctly.  Was it the best prepared
  student(s) who answered it correctly?  Was it the poorly
  prepared student(s) who knew that one thing, or just guessed
  correctly?
 
  In my judgment, at least some high difficulty/high
  discrimination items should make up part of the exam or quiz.
   If it is a high difficulty/low discrimination item, I would
  try to rework it or toss it.  Lucky me! I use an OpScan
  program that makes it very easy to measure both.
 
  I doubt that there are any statistical measures that will
  discriminate between inadequate instruction and inadequate
  preparation, but my years of experience have provided a lot
  more cases of inadequate preparation by the student than
  inadequate instruction by the prof.
 
  I used a series of similar questions on my exams until most
  Intro authors quit covering more than one or two types of
  validity and reliability.  My own Intro students usually
  wound up with around a .45 or .50 difficulty value and
  discrimination level of around .70 or better.  In other
  words, those who knew the rest of the material very well
  usually knew that item, too.  Those who did not, did not.
 
 
  Hetzel, Rod wrote:
  
   Hi everyone:
  
   Here's a scenario for your consideration.
  
    I gave a multiple-choice quiz today with ten items.  Each item has
    four response options, so the optimum difficulty level for any item
    would be about .625.  For one question, most of the class got the
    question wrong and the actual item difficulty was .08.  Does this
    mean that the item itself was a difficult item (which would be a
    test construction issue and suggest that the item should be
    discarded from the test), or does it mean that the students were
    not prepared to answer the question (which would suggest either
    inadequate instruction by the professor or inadequate preparation
    by the students)?  I'm looking at this because the question, in my
    estimation, was a simple question.  Here it is:
   
    A student confronts his psychology professor and says, "You
    assigned Chapters 7 through 10, but nearly all of the items came
    from Chapter 7.  How can you evaluate whether we know anything
    about the other material we were supposed to read?"  The student
    is challenging the test on the basis of:
   
    A.  Face validity
    B.  Content validity
    C.  Criterion validity
    D.  Construct validity
   
    This to me seems like a straightforward question.  Students chose
    equally from the three distractors.  The topic was covered
    substantially in class through lecture and activities.  The book
    also provides very easy coverage of this topic.  I'm trying to
    decide why this question posed such a challenge to the students.
   
    Rod

Re: Test item difficulty

2003-02-15 Thread Annette Taylor, Ph. D.
This is one of those times when my 'teaching for mastery' approach comes into 
play to help me, as well as the students!

I allow students to redo, for half credit, those items which a majority of the 
class missed. I set a criterion based on class size, number of items, overall 
grades, number of items missed, etc. So, for example, a student who just plain 
and simple blew off the test can't benefit much from this system.

Anyway, to make a long story short, for multiple-choice items they have to 
write me one sentence about why the answer I think is correct is best (they 
can refer to/cite notes or text), and one sentence about what they were 
thinking when they picked their answer, which I did not think was correct or 
best. 

The first sentence is to make sure they now 'know' the answer. The second 
sentence is to give them an insight into their test taking strategies and 
skills--they often find that there emerges a pattern that they can work on for 
future tests across disciplines.

But it can also show me where a particular weakness lies!

Annette

ps: usually just the D and A students take me up on this re-write offer! 
  
 At 11:15 AM 2/14/2003 -0600, you wrote:
 Hi everyone:
 
 Here's a scenario for your consideration.
 
 I gave a multiple-choice quiz today with ten items.  Each item has four
 response options, so the optimum difficulty level for any item would be
 about .625.  For one question, most of the class got the question wrong
 and the actual item difficulty was .08.  Does this mean that the item itself
 was a difficult item (which would be a test construction issue and
 suggest that the item should be discarded from the test), or does it
 mean that the students were not prepared to answer the question (which
 would suggest either inadequate instruction by the professor or
 inadequate preparation by the students)?  I'm looking at this because
 the question, in my estimation, was a simple question.  Here it is:
 
 A student confronts his psychology professor and says, "You assigned
 Chapters 7 through 10, but nearly all of the items came from Chapter 7.
 How can you evaluate whether we know anything about the other material
 we were supposed to read?"  The student is challenging the test on the
 basis of:
 
 A.  Face validity
 B.  Content validity
 C.  Criterion validity
 D.  Construct validity
 
 This to me seems like a straightforward question.  Students chose
 equally from the three distractors.  The topic was covered substantially
 in class through lecture and activities.  The book also provides very
 easy coverage of this topic.  I'm trying to decide why this question
 posed such a challenge to the students.
 
 Rod
 
 
 __
 Roderick D. Hetzel, Ph.D.
 Department of Psychology
 LeTourneau University
 Post Office Box 7001
 2100 South Mobberly Avenue
 Longview, Texas  75607-7001
 
 Office:   Education Center 218
 Phone:903-233-3893
 Fax:  903-233-3851
 Email:[EMAIL PROTECTED]
 Homepage: http://www.letu.edu/people/rodhetzel
 
 


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]

---
You are currently subscribed to tips as: [EMAIL PROTECTED]
To unsubscribe send a blank email to [EMAIL PROTECTED]



Re: Test item difficulty

2003-02-15 Thread David Campbell
If you want to know why the students missed an easy test item that deals
with content covered in class, why not bring it up with the students?
They should be able to shed light on where the problem lies.
  --Dave

--
David E. Campbell, Ph.D.[EMAIL PROTECTED]
Department of PsychologyPhone: 707-826-3721
Humboldt State University   FAX:   707-826-4993
Arcata, CA  95521   www.humboldt.edu/~campbell/psyc.htm


On Fri, 14 Feb 2003, Hetzel, Rod wrote:

 Hi everyone:

 Here's a scenario for your consideration.

 I gave a multiple-choice quiz today with ten items.  Each item has four
 response options, so the optimum difficulty level for any item would be
 about .625.  For one question, most of the class got the question wrong
 and the actual item difficulty was .08.  Does this mean that the item itself
 was a difficult item (which would be a test construction issue and
 suggest that the item should be discarded from the test), or does it
 mean that the students were not prepared to answer the question (which
 would suggest either inadequate instruction by the professor or
 inadequate preparation by the students)?  I'm looking at this because
 the question, in my estimation, was a simple question.  Here it is:

 A student confronts his psychology professor and says, "You assigned
 Chapters 7 through 10, but nearly all of the items came from Chapter 7.
 How can you evaluate whether we know anything about the other material
 we were supposed to read?"  The student is challenging the test on the
 basis of:

 A.  Face validity
 B.  Content validity
 C.  Criterion validity
 D.  Construct validity

 This to me seems like a straightforward question.  Students chose
 equally from the three distractors.  The topic was covered substantially
 in class through lecture and activities.  The book also provides very
 easy coverage of this topic.  I'm trying to decide why this question
 posed such a challenge to the students.

 Rod


 __
 Roderick D. Hetzel, Ph.D.
 Department of Psychology
 LeTourneau University
 Post Office Box 7001
 2100 South Mobberly Avenue
 Longview, Texas  75607-7001

 Office:   Education Center 218
 Phone:903-233-3893
 Fax:  903-233-3851
 Email:[EMAIL PROTECTED]
 Homepage: http://www.letu.edu/people/rodhetzel






RE: Test item difficulty

2003-02-14 Thread John Kulig
Rod:
Only 8% of the students got it correct? Was the answer key
filled out correctly? Who figured the item difficulty level - you or the
software? I agree the question is straightforward. 



John W. Kulig
Professor of Psychology
Plymouth State College
Plymouth NH 03264

"Eat bread and salt and speak the truth" 
-- Russian saying.

-Original Message-
From: Hetzel, Rod [mailto:[EMAIL PROTECTED]] 
Sent: Friday, February 14, 2003 12:16 PM
To: Teaching in the Psychological Sciences
Subject: Test item difficulty

Hi everyone:

Here's a scenario for your consideration.

I gave a multiple-choice quiz today with ten items.  Each item has four
response options, so the optimum difficulty level for any item would be
about .625.  For one question, most of the class got the question wrong
and the actual item difficulty was .08.  Does this mean that the item itself
was a difficult item (which would be a test construction issue and
suggest that the item should be discarded from the test), or does it
mean that the students were not prepared to answer the question (which
would suggest either inadequate instruction by the professor or
inadequate preparation by the students)?  I'm looking at this because
the question, in my estimation, was a simple question.  Here it is:  

A student confronts his psychology professor and says, "You assigned
Chapters 7 through 10, but nearly all of the items came from Chapter 7.
How can you evaluate whether we know anything about the other material
we were supposed to read?"  The student is challenging the test on the
basis of:

A.  Face validity
B.  Content validity
C.  Criterion validity
D.  Construct validity

This to me seems like a straightforward question.  Students chose
equally from the three distractors.  The topic was covered substantially
in class through lecture and activities.  The book also provides very
easy coverage of this topic.  I'm trying to decide why this question
posed such a challenge to the students.

Rod


__
Roderick D. Hetzel, Ph.D.
Department of Psychology
LeTourneau University
Post Office Box 7001
2100 South Mobberly Avenue
Longview, Texas  75607-7001
 
Office:   Education Center 218
Phone:903-233-3893
Fax:  903-233-3851
Email:[EMAIL PROTECTED]
Homepage: http://www.letu.edu/people/rodhetzel






RE: Test item difficulty

2003-02-14 Thread Horton, Joseph J.
The item looks fine to me. I would want to know how the top students in the
class did. If the top students got it right, it is just a hard question,
which is acceptable. If the top students tended to get it wrong, I would be
confused and wonder what went wrong.

Joe

Joseph J. Horton Ph. D.
Faculty Box 2694
Grove City College
Grove City, PA  16127
 
(724) 458-2004
 
In God we trust, all others must bring data.

-Original Message-
From: Hetzel, Rod [mailto:[EMAIL PROTECTED]] 
Sent: Friday, February 14, 2003 12:16 PM
To: Teaching in the Psychological Sciences
Subject: Test item difficulty

Hi everyone:

Here's a scenario for your consideration.

I gave a multiple-choice quiz today with ten items.  Each item has four
response options, so the optimum difficulty level for any item would be
about .625.  For one question, most of the class got the question wrong
and the actual item difficulty was .08.  Does this mean that the item itself
was a difficult item (which would be a test construction issue and
suggest that the item should be discarded from the test), or does it
mean that the students were not prepared to answer the question (which
would suggest either inadequate instruction by the professor or
inadequate preparation by the students)?  I'm looking at this because
the question, in my estimation, was a simple question.  Here it is:  

A student confronts his psychology professor and says, "You assigned
Chapters 7 through 10, but nearly all of the items came from Chapter 7.
How can you evaluate whether we know anything about the other material
we were supposed to read?"  The student is challenging the test on the
basis of:

A.  Face validity
B.  Content validity
C.  Criterion validity
D.  Construct validity

This to me seems like a straightforward question.  Students chose
equally from the three distractors.  The topic was covered substantially
in class through lecture and activities.  The book also provides very
easy coverage of this topic.  I'm trying to decide why this question
posed such a challenge to the students.

Rod


__
Roderick D. Hetzel, Ph.D.
Department of Psychology
LeTourneau University
Post Office Box 7001
2100 South Mobberly Avenue
Longview, Texas  75607-7001
 
Office:   Education Center 218
Phone:903-233-3893
Fax:  903-233-3851
Email:[EMAIL PROTECTED]
Homepage: http://www.letu.edu/people/rodhetzel





Re: Test item difficulty

2003-02-14 Thread John W. Nichols, M.A.
It is certainly a difficult item for the students in the class.  Your
difficulty measure clearly shows that to be the case.  That, however,
does not necessarily mean that there is a test construction problem or
that the item should be eliminated.  It could simply be a difficult item
that few students studied well enough to do more than guess at.

Without a discrimination measure, it cannot be determined who answered
the question correctly.  Was it the best prepared student(s) who
answered it correctly?  Was it the poorly prepared student(s) who knew
that one thing, or just guessed correctly?  

In my judgment, at least some high difficulty/high discrimination items
should make up part of the exam or quiz.  If it is a high difficulty/low
discrimination item, I would try to rework it or toss it.  Lucky me! I
use an OpScan program that makes it very easy to measure both.

I doubt that there are any statistical measures that will discriminate
between inadequate instruction and inadequate preparation, but my years
of experience have provided a lot more cases of inadequate preparation
by the student than inadequate instruction by the prof.

I used a series of similar questions on my exams until most Intro
authors quit covering more than one or two types of validity and
reliability.  My own Intro students usually wound up with around a .45
or .50 difficulty value and discrimination level of around .70 or
better.  In other words, those who knew the rest of the material very
well usually knew that item, too.  Those who did not, did not.
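
For readers without such software, the two statistics John keeps pairing can
be computed by hand: difficulty is the proportion correct, and one common
discrimination index is the point-biserial correlation between the 0/1 item
score and the total test score. A sketch with invented data (my own code,
not the OpScan program's actual method):

```python
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """Discrimination: correlation of a 0/1 item score with total score."""
    n = len(item_correct)
    m_all = mean(total_scores)
    s = pstdev(total_scores)                    # population SD of totals
    p = sum(item_correct) / n                   # item difficulty
    q = 1 - p
    # Mean total score of the students who got the item right:
    m_correct = mean(t for c, t in zip(item_correct, total_scores) if c)
    return (m_correct - m_all) / s * (p / q) ** 0.5

# Hypothetical 10-item quiz: the five best-prepared students get the
# item right and the five weakest miss it -- a high-discrimination item.
item = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
totals = [9, 9, 8, 8, 7, 5, 5, 4, 3, 3]
print(round(point_biserial(item, totals), 2))   # 0.93
```

By this index, even Rod's .08 item could be worth keeping if the few who
answered it correctly were the strongest students overall.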


Hetzel, Rod wrote:
 
 Hi everyone:
 
 Here's a scenario for your consideration.
 
 I gave a multiple-choice quiz today with ten items.  Each item has four
 response options, so the optimum difficulty level for any item would be
 about .625.  For one question, most of the class got the question wrong
 and the actual item difficulty was .08.  Does this mean that the item itself
 was a difficult item (which would be a test construction issue and
 suggest that the item should be discarded from the test), or does it
 mean that the students were not prepared to answer the question (which
 would suggest either inadequate instruction by the professor or
 inadequate preparation by the students)?  I'm looking at this because
 the question, in my estimation, was a simple question.  Here it is:
 
 A student confronts his psychology professor and says, "You assigned
 Chapters 7 through 10, but nearly all of the items came from Chapter 7.
 How can you evaluate whether we know anything about the other material
 we were supposed to read?"  The student is challenging the test on the
 basis of:
 
 A.  Face validity
 B.  Content validity
 C.  Criterion validity
 D.  Construct validity
 
 This to me seems like a straightforward question.  Students chose
 equally from the three distractors.  The topic was covered substantially
 in class through lecture and activities.  The book also provides very
 easy coverage of this topic.  I'm trying to decide why this question
 posed such a challenge to the students.
 
 Rod
 
 __
 Roderick D. Hetzel, Ph.D.
 Department of Psychology
 LeTourneau University
 Post Office Box 7001
 2100 South Mobberly Avenue
 Longview, Texas  75607-7001
 
 Office:   Education Center 218
 Phone:903-233-3893
 Fax:  903-233-3851
 Email:[EMAIL PROTECTED]
 Homepage: http://www.letu.edu/people/rodhetzel
 

-- 

--== ô¿ô ==-- 
Sometimes you just have to try something, and see what happens.

John W. Nichols, M.A.
Assistant Professor of Psychology
Tulsa Community College
909 S. Boston Ave., Tulsa, OK  74119
(918) 595-7134

Home: http://www.tulsa.oklahoma.net/~jnichols
MegaPsych: http://www.tulsa.oklahoma.net/~jnichols/megapsych.html




Re: Test item difficulty

2003-02-14 Thread Jan Kottke
I agree with the responders regarding Hetzel's question. Sometimes, 
however, the solution is not to be found in the item analysis. When you 
covered validity in your class, what did you tell the students? The answer 
very likely lies there with a classroom test. When you return the test, ask 
the students why they selected what they did on this, or any other item 
with which you have concern. You may discover that you said something like 
"all forms of validity can be construed as construct validity," for 
example, and the students attempted to demonstrate that concept with this 
question. (Of course, the item analyses can typically help you explore this 
possibility, especially the distractor analysis.)


At 11:15 AM 2/14/2003 -0600, you wrote:
Hi everyone:

Here's a scenario for your consideration.

I gave a multiple-choice quiz today with ten items.  Each item has four
response options, so the optimum difficulty level for any item would be
about .625.  For one question, most of the class got the question wrong
and the actual item difficulty was .08.  Does this mean that the item itself
was a difficult item (which would be a test construction issue and
suggest that the item should be discarded from the test), or does it
mean that the students were not prepared to answer the question (which
would suggest either inadequate instruction by the professor or
inadequate preparation by the students)?  I'm looking at this because
the question, in my estimation, was a simple question.  Here it is:

A student confronts his psychology professor and says, "You assigned
Chapters 7 through 10, but nearly all of the items came from Chapter 7.
How can you evaluate whether we know anything about the other material
we were supposed to read?"  The student is challenging the test on the
basis of:

A.  Face validity
B.  Content validity
C.  Criterion validity
D.  Construct validity

This to me seems like a straightforward question.  Students chose
equally from the three distractors.  The topic was covered substantially
in class through lecture and activities.  The book also provides very
easy coverage of this topic.  I'm trying to decide why this question
posed such a challenge to the students.

Rod


__
Roderick D. Hetzel, Ph.D.
Department of Psychology
LeTourneau University
Post Office Box 7001
2100 South Mobberly Avenue
Longview, Texas  75607-7001

Office:   Education Center 218
Phone:903-233-3893
Fax:  903-233-3851
Email:[EMAIL PROTECTED]
Homepage: http://www.letu.edu/people/rodhetzel



Janet L. Kottke, Ph.D.
Professor
Department of Psychology
California State University, San Bernardino
5500 University Parkway
San Bernardino, CA  92407
909-880-5585 (voice)
909-880-7003 (fax)
[EMAIL PROTECTED] (internet)
WWW: http://psychology.csusb.edu/io/index.htm





RE: Test item difficulty

2003-02-14 Thread Hetzel, Rod
Thanks for all your responses on the item difficulty post.  John, can you tell me a 
little more about the OpScan program?  Sounds interesting...

__
Roderick D. Hetzel, Ph.D.
Department of Psychology
LeTourneau University
Post Office Box 7001
2100 South Mobberly Avenue
Longview, Texas  75607-7001
 
Office:   Education Center 218
Phone:903-233-3893
Fax:  903-233-3851
Email:[EMAIL PROTECTED]
Homepage: http://www.letu.edu/people/rodhetzel


 -Original Message-
 From: John W. Nichols, M.A. [mailto:[EMAIL PROTECTED]] 
 Sent: Friday, February 14, 2003 12:41 PM
 To: Teaching in the Psychological Sciences
 Subject: Re: Test item difficulty
 
 
 It is certainly a difficult item for the students in the 
 class.  Your difficulty measure clearly shows that to be the 
 case.  That however, does not necessarily mean that there is 
 a test construction problem or that the item should be 
 eliminated.  It could simply be a difficult item that few 
 students studied well enough to do more than guess at.
 
 Without a discrimination measure, it cannot be determined who 
 answered the question correctly.  Was it the best prepared 
 student(s) who answered it correctly?  Was it the poorly 
 prepared student(s) who knew that one thing, or just guessed 
 correctly?  
 
 In my judgment, at least some high difficulty/high 
 discrimination items should make up part of the exam or quiz. 
  If it is a high difficulty/low discrimination item, I would 
 try to rework it or toss it.  Lucky me! I use an OpScan 
 program that makes it very easy to measure both.
 
 I doubt that there are any statistical measures that will 
 discriminate between inadequate instruction and inadequate 
 preparation, but my years of experience have provided a lot 
 more cases of inadequate preparation by the student than 
 inadequate instruction by the prof.
 
 I used a series of similar questions on my exams until most 
 Intro authors quit covering more than one or two types of 
 validity and reliability.  My own Intro students usually 
 wound up with around a .45 or .50 difficulty value and 
 discrimination level of around .70 or better.  In other 
 words, those who knew the rest of the material very well 
 usually knew that item, too.  Those who did not, did not.
 
 
 Hetzel, Rod wrote:
  
  Hi everyone:
  
  Here's a scenario for your consideration.
  
  I gave a multiple-choice quiz today with ten items.  Each item has
  four response options, so the optimum difficulty level for any item
  would be about .625.  For one question, most of the class got the
  question wrong and the actual item difficulty was .08.  Does this
  mean that the item itself was a difficult item (which would be a
  test construction issue and suggest that the item should be
  discarded from the test), or does it mean that the students were
  not prepared to answer the question (which would suggest either
  inadequate instruction by the professor or inadequate preparation
  by the students)?  I'm looking at this because the question, in my
  estimation, was a simple question.  Here it is:
  
  A student confronts his psychology professor and says, "You
  assigned Chapters 7 through 10, but nearly all of the items came
  from Chapter 7.  How can you evaluate whether we know anything
  about the other material we were supposed to read?"  The student
  is challenging the test on the basis of:
  
  A.  Face validity
  B.  Content validity
  C.  Criterion validity
  D.  Construct validity
  
  This to me seems like a straightforward question.  Students chose
  equally from the three distractors.  The topic was covered
  substantially in class through lecture and activities.  The book
  also provides very easy coverage of this topic.  I'm trying to
  decide why this question posed such a challenge to the students.
  
  Rod
  
  __
  Roderick D. Hetzel, Ph.D.
  Department of Psychology
  LeTourneau University
  Post Office Box 7001
  2100 South Mobberly Avenue
  Longview, Texas  75607-7001
  
  Office:   Education Center 218
  Phone:903-233-3893
  Fax:  903-233-3851
  Email:[EMAIL PROTECTED]
  Homepage: http://www.letu.edu/people/rodhetzel
  
 
 -- 
 
 --== ô¿ô ==-- 
 Sometimes you just have to try something, and see what happens.
 
 John W. Nichols, M.A.
 Assistant Professor of Psychology
 Tulsa Community College
 909 S. Boston Ave., Tulsa, OK  74119
 (918) 595-7134
 
 Home: http://www.tulsa.oklahoma.net/~jnichols
 MegaPsych: http://www.tulsa.oklahoma.net/~jnichols/megapsych.html
 
 
