Hi

I find it ironic to be supporting in one thread concerns about a
procedure that is designed to control grade inflation (review of
grades) and in another a procedure that some think contributes to
grade inflation (course evaluations).  I only have time to
respond to a few of the points in the article forwarded by John
Damron, but believe that course evaluations have numerous
justifications, including:

(1) people who have observed someone teaching for 3-7 months
should be able to produce reasonably accurate judgements about
such concrete behaviours as being organized;

(2) students' judgements about such behaviours correlate highly
with the judgements of expert and neutral observers (e.g.,
graduate students trained as observers);

(3) such behaviours as being organized can be identified as
important from far more than just their relation to direct
measures of learning in a class (e.g., cognitive theories of
learning and memory);

(4) in most uses of evaluations with which I am familiar, faculty
have an opportunity to comment on ratings, and faculty reviewers
recognize that numerous factors (e.g., difficulty) contribute to
ratings; and

(5) I have not heard anyone suggest a better, practical way to
assess the quality of instruction.

On Sat, 20 Nov 1999, JC Damron FORWARDED: [note article not
written by John]

>         are failing can inflate average grades. So can a policy such as
>         permitting students to replace failing grades when they repeat a
>         course (Geisinger, 1979).

This one surprised me.  I thought it might reduce inflation since
faculty would know that a serious student could take the course
again and improve their marks.  Sort of like a large-scale Keller
(PSI) system.  What was the evidence that it contributed to
inflation?

>         decisions remains a controversial matter (Zirkel, 1995). Many
>         faculty suspect that institutional reliance on student ratings
>         of instruction is a prime cause of lowered standards and grade
>         inflation (Cohen, 1984; Goldman, 1985; Mieczkowski, 1995;
>         Renner, 1981).

What is the evidence, first, of inflation and, second, of a
connection between inflation and student ratings?  An increase
in grades does not by itself indicate inflation.  Is it not the
case that scores on certain standardized tests have been
increasing over past decades?  Is that too due to inflation?

>         The apparent inconsistency between research reporting validity
>         for student ratings and their failure to detect declining
>         achievement and inflated grades may be due to the way in which
>         they are used.

I must have missed the part that showed that student ratings
failed to detect declining achievement and, indeed, the
rationale for such an expectation.  Admittedly, I read the
preceding quickly.  I also missed the evidence for declining
achievement.

>         A statistically established validity of r=0.4
>         means that 16 percent of the differences among the ratings of
>         instructors are attributable to differences in measured student
>         achievement. However, the remaining 84 percent of the
>         variability in ratings is attributable to factors that must be
>         controlled for in their use and interpretation with respect to
>         individual instructors.

Only if you believe that a measure must be perfect (or made
perfect) before it can be used.  That pretty well does away with
all psychometrics, not to mention psychological research in
general.
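
As an aside, the 16 percent figure is simply the squared
correlation; a minimal restatement of the quoted arithmetic,
using the article's own validity coefficient of r = 0.4:

\[
r^2 = (0.4)^2 = 0.16
\]

That is, 16 percent of the variance in ratings is shared with
measured achievement.  By the benchmarks conventional in
psychology (roughly r = .1 small, .3 medium, .5 large), a
validity coefficient of .4 is a substantial effect, not evidence
that the measure is unusable.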

>         The most sophisticated and convincing
>         student rating validation studies carefully control student
>         achievement, reported grades, and other possible sources of bias
>         (Benton, 1982). Typical institutional applications of student
>         ratings, however, rarely employ the monitoring and controls
>         found in validation studies. Thus, rating instruments with a
>         modest degree of validity may be producing invalid faculty
>         evaluations because they are used without appropriate
>         precautions.

"May" be producing invalid evaluations is a pretty weak
statement.  Is there direct evidence of widespread invalidity
(e.g., faculty judged poor teachers who were shown by other,
stronger criteria [whatever those would be] to be a good or
excellent teacher)?

>         Although it is understood that student-opinion-based assessments
>         of individual instructors should be corrected for watered-down
>         objectives, inflated grades, and other potential sources of bias
>         (Aleamoni, 1981; Seldin, 1993), in practice, ratings are
>         typically presumed valid, sometimes despite indications to the
>         contrary. For example, although instructors who receive the high
>         ratings often report higher grades, administrative interpreters
>         of such data most often simply assume that such higher grades
>         are a function of superior student learning. Their assumption is
>         rarely checked.

The "For example" implied to me an example of "indications to the
contrary," as I asked about above.  But that is not what the
example shows.  In fact is it that common for instructors to
justify course evaluations on the basis of higher-than-average
grades in their courses?

>         competing with their higher rated peers. In contrast to those
>         who receive less favorable ratings, instructors with ratings
>         that may be biased by low expectations or lax grading are
>         comfortably insulated from skeptical inquiries and otherwise
>         given little cause to be concerned about the student
>         rating/evaluation process.

In another thread I mentioned one practice that addresses lax
grading (i.e., comparing students' performance across courses).

>         (Seldin, 1993). If they are entered into the normative data
>         base, the benchmarks against which faculty are compared become
>         biased by default and to an unknown extent.

But wouldn't the bias be such as to raise the standard against
which people are being compared?  Or is the argument that this is
biased against the person with rigorous standards?

>         The longer term impact of teaching evaluated by student ratings
>         now seems to be evidencing itself. It is as a former American
>         Association of University Professors president, Fritz Machlup,
>         anticipated: We now have "Poor Learning from Good Teachers"
>         (Machlup, 1979). The fact that learning has declined and
>         stagnated during the twenty-five or so years that higher
>         education has relied on student opinion as a measure of "good"
>         teaching speaks for itself.

Is this claim about a decline consistent with the Flynn effect
and the like?  Again, evidence for a decline would be nice.
There is one sign of inadequate learning here ... the writer
appears not to have learned that correlation does not imply
causation.

>         Economists Gordon Tullock and Richard McKenzie (1985) argued
>         that economic theory predicts that professors will ease that
>         which is expected of their student customers to buy higher
>         ratings. Are typical institutional procedures for interpreting
>         student ratings sufficiently stringent to prevent such
>         transactions?

Pardon me if I do not find "economic theory" all that compelling
an account of human behaviour.

>         Proponents of student ratings must agree that even
>         if student opinion can, under carefully circumscribed
>         conditions, serve as a sound basis for evaluating teaching, the
>         task of insuring correctly interpreted ratings under real-world
>         conditions may be beyond the practical limits of institutional
>         ability.

Well, how about suggesting a better, practical way of ensuring
the quality of instruction in higher education?  Or is the
alternative not to measure it at all?

>         observed: "The credential is for most students more important
>         than the course."

I agree entirely that this development is unfortunate, but fail
to see the connection to evaluations.  In fact, don't evaluations
tell students that we care whether faculty are acting in ways to
strengthen learning?

>         Higher education makes a very great mistake if
>         it permits its primary mission to become one of serving student
>         "customers."

Again, I agree with this, and largely with what followed (up to
the comment), but do not see the connection to course
evaluations.  It also appears that there are far greater threats
than what happens in legitimate universities.  The comment on
"education as production" was pretty empty in my view, given the
nature of the benefits from proper higher education.

Best wishes
Jim

============================================================================
James M. Clark                          (204) 786-9757
Department of Psychology                (204) 774-4134 Fax
University of Winnipeg                  4L05D
Winnipeg, Manitoba  R3B 2E9             [EMAIL PROTECTED]
CANADA                                  http://www.uwinnipeg.ca/~clark
============================================================================
