On 21 Dec 1999 15:33:56 -0800, [EMAIL PROTECTED] (EAKIN MARK E)
wrote:

[ ...]
> Which doesn't even consider the fact that few instructors (no one I
> know of) attempt to validate their testing instruments using the
> concepts of measurement theory (reliability and validity assessment).
> This is another thread I would like to see discussed. Shouldn't we
> teach our Ph.D. students how to use measurement theory in the area of
> measurement that they will practice most often: measuring student
> performance?

And immediately, I remember an undergrad experience I had, in
Experimental Design in Psychology, c. 1967.  (University of Texas at
Austin had a pretty good department, I think, but I don't have
standards for comparison.)

The second examination was a test on the content of various Readings,
which were published reports of important studies -- I spent a long
time studying and then scored a B.  My buddy from various calculus
courses, who did all his studying during breakfast on the morning of
the exam, reading just the study abstracts, aced it.

This ticked me off.  When I noticed that the grades from the first
exam were still posted alongside the new ones, I did a bit of informal
comparison (I had not yet taken any statistics) and noted that the
persons with the highest 90s on the first exam, which was a tougher
one, tended to score *much* lower on the second.  I wrote a bunch of
those down, and made my complaint known in some fashion that I don't
recall.

I thought I was observing a negative correlation, though today I have
to wonder how much of it was regression to the mean.  On the other
hand, maybe it wasn't regression to the mean; otherwise they might
have made it an object lesson for us all.  Instead, the only response
I saw, which made me think I was right, was that the list of grades
from the first test was immediately taken off the bulletin board.
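
Today I'd check that hunch with a quick simulation.  Here is a minimal
sketch in Python -- all the numbers are invented, not taken from that
class -- showing that regression to the mean alone can make the top
scorers on one exam drop noticeably on the next, even when both exams
measure the same underlying ability:

    import random

    random.seed(1)

    N = 200                                       # number of students
    students = []
    for _ in range(N):
        ability = random.gauss(75, 8)             # "true" standing on a 0-100 scale
        exam1 = ability + random.gauss(0, 6)      # first exam: ability plus noise
        exam2 = ability + random.gauss(0, 6)      # second exam: same ability, fresh noise
        students.append((exam1, exam2))

    # Take the students who scored highest on the first exam.
    students.sort(key=lambda s: s[0], reverse=True)
    top = students[:20]

    mean1 = sum(s[0] for s in top) / len(top)
    mean2 = sum(s[1] for s in top) / len(top)
    print("Top 20 on exam 1: mean exam-1 score %.1f, mean exam-2 score %.1f"
          % (mean1, mean2))

    # The exam-2 mean for that group typically comes out several points
    # lower: the top scorers were partly lucky on exam 1, and the luck
    # does not repeat, even though the two exams are positively
    # correlated overall.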

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
