On Mon, 09 Jan 2012 05:50:34 -0800, Christopher D. Green wrote:
>So it may turn out that student course evaluations are not merely
>invalid, but actually contraindicative of instructor quality. That is,
>the less students learn (i.e., are made to learn), the higher they rate
>the professor, on average.

As always, one should not rely upon mass media presentations
of research, which may present a distorted view of it.  It is
always a good idea to locate the original reference and
examine it.  In this case, the 2008 version of
the paper by Carrell and West can be obtained here:
http://www.nber.org/papers/w14081

I note that the paper was in some sort of final form in 2008
but not published until last year (the news article Green
links to was published in June 2010).

It should be noted that there are two main results that are reported:

(1) In introductory courses in math and the sciences, there is
a NEGATIVE correlation between faculty rank and experience on
the one hand and student grades on the other; that is, students
of intro teachers with higher rank, more experience, etc., tend
to earn lower grades than students of teachers with lower rank
(whether this is an indicator of teaching quality is another
argument).

(2) There is a positive correlation between faculty rank, etc.,
and performance in courses beyond the introductory course.  That
is, taking the introductory course with more experienced faculty
predicts higher grades in "follow-on" or subsequent courses in
the sequence.

Curiously, the relationship of course evaluations to course
performance is not addressed until page 19 of the 38-page
manuscript (only 20 pages of which are text -- the rest
consists of tables).  There are two main results here as well:

(1) Course evaluations are predictive of grade in the current
course.

(2) Course evaluations are NOT predictive of grades in future
courses.

For the sake of redundancy ;-), I quote the final paragraph
of the manuscript:

|We also examine the relationship between the student
|evaluations of professors and student academic achievement
|corrected for endogeneity and common shocks. We found that
|student evaluations positively predict student achievement
|in contemporaneous courses, but are very poor predictors
|of follow-on student achievement. This latter finding draws
|into question how one should measure professor quality as
|professor-teaching quality is primarily evaluated at most
|U.S. colleges and universities by scores on subjective
|student evaluations.

It is not clear to me where the notion that "lower quality"
teachers are rewarded while "higher quality" teachers are
not comes from.  This notion appears to be based on
the following conclusion by Carrell and West:

|For math and science courses we found that academic
|rank, teaching experience, and terminal degree status
|are negatively correlated with contemporaneous student
|achievement, but positively related to follow-on
|course achievement. *****That is, the less experienced
|instructors who do not possess terminal degrees
|produce students who perform better in the
|contemporaneous course being taught, but perform
|worse in the follow-on related courses.*****
|These results are consistent with recent evidence
|by Bettinger and Long (2006) and Ehrenberg and Zhang
|(2005) who, respectively, found that the use of adjunct
|professors have a positive effect on follow-on course
|interest, but a negative effect on student graduation.
|That is, our results support the notion that less
|academically qualified instructors may spur (potentially
|erroneous) interest in a particular subject through
|higher grades, but these students perform significantly
|worse in follow-on related courses that rely on the
|initial course for content. (p19-20)

Note:  emphasis added.

>http://voices.washingtonpost.com/college-inc/2010/06/study_high-rated_professors_ar.html
>Although Air Force Academy may not be typical of colleges and
>universities, and calculus may not be typical of all courses, I am
>impressed by the "natural experiment" Air Force affords by randomly
>assigning students to course sections, and by having a common syllabus
>among them.

Carrell and West make the following observation:

|For humanities courses, we found almost no relationship
|between professor observable attributes and student
|achievement.

One explanation they provide is that the science and math
courses use more "objective" tests and measures and the
humanities courses (i.e., English and history) do not.
Draw your own conclusion.

-Mike Palij
New York University
[email protected]
