On Tue, 18 Sep 2012 07:56:10 -0700, Christopher Green wrote:
>Nice summary of a good empirical study of student course evaluations (with a
>link to the full study).
>http://www.nytimes.com/roomfordebate/2012/09/17/professors-and-the-students-who-grade-them/students-confuse-grades-with-long-term-learning
>
>Short version: Especially with introductory courses, young teachers tend to
>teach more superficially, to the test, and get higher evaluations as a result.
>More experienced teachers tend to teach to long-term, "deeper" learning, which
>doesn't necessarily show up in immediate tests or grades, and get lower
>evaluations as a result. As courses advance in level, however, the gap begins
>to close.

I'm somewhat skeptical that this is a "good empirical study": it is a
correlational study that does not use direct measures of whether
a professor is "teaching to the test".  Instead, the authors, Carrell
and West, assume that if the students of a professor in an introductory
course, say, Calculus I (NOTE: really relevant to psychology), get better
grades than another professor's students, the higher-grading prof is
"teaching to the test".  It would be nice if Carrell & West actually had
something like observations of teacher performance in class
to substantiate the claim, but they don't.  Instead, they try to
identify variables that may explain why students who do well
in Calculus I do less well in Calculus II while students who do
poorly in Calculus I appear to do better in Calculus II (and there is
not even a mention of the "John Henry effect", that is, the possibility
that, after grades for the intro course are known, students who did less
well may engage in compensatory work in subsequent courses).

I'll leave the issue of "higher grading" professors getting
higher student evaluations while "lower grading" professors
get lower student evaluations for another day.  Given the
seriousness of the lack of an independent measure of "depth
of teaching", one should be cautious about making statements
about "depth of teaching" and its relationship to any other variables.

It is a slog, but I strongly recommend reading the Carrell & West
study and not being intimidated by their "value-added analysis"
(this type of analysis is being used in K-12 schools to measure
teacher "value-added" or effect sizes; typically, the assumptions
for value-added analyses are violated, as pointed out by teachers'
unions and by researchers like Rothstein, who is cited in the article),
but here the analysis seems justified by the random assignment of
students to courses in a standardized curriculum; see:
http://www.economics.harvard.edu/faculty/staiger/files/carrell%2Bwest%2Bprofessor%2Bqualty%2Bjpe.pdf
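
For those who want to see the bare bones of such an analysis, here is
a minimal sketch in Python (the simulated data, variable names, and
model are entirely my own illustration of the general idea, not
Carrell & West's actual specification):

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  # Toy version of the design: students randomly assigned to intro
  # sections taught by different professors, then observed again in
  # a follow-on course.
  rng = np.random.default_rng(0)
  n_students, n_profs = 600, 12
  df = pd.DataFrame({
      "prof": rng.integers(0, n_profs, n_students),   # random assignment
      "sat_math": rng.normal(600, 60, n_students),    # ability control
  })
  prof_effect = rng.normal(0, 0.3, n_profs)           # hypothetical "value added"
  df["calc2_score"] = (0.005 * df["sat_math"]
                       + prof_effect[df["prof"].to_numpy()]
                       + rng.normal(0, 1, n_students))

  # Value-added model: regress follow-on performance on intro-professor
  # dummies plus controls; the professor coefficients are the estimated
  # "value added" carried into the follow-on course.
  fit = smf.ols("calc2_score ~ C(prof) + sat_math", data=df).fit()
  print(fit.params.filter(like="C(prof)"))

The professor dummies only deserve a causal reading because assignment
is random; when that assumption fails, as Rothstein argues it does in
K-12 settings, the same coefficients can reflect student sorting rather
than teaching.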

Carrell & West want to say that older teachers with terminal degrees
and higher rank -- the group whose students get lower grades when they
teach Calculus I -- are really not teaching to the test but are fostering
some sort of "deep learning", even though they provide no independent
evidence for this, for example, evidence that their coverage of the
material requires some deeper cognitive processing than that of the
alleged teachers "to the test" (NOTE:  there is no explanation of what
such "deeper processing" might be -- one wonders if it includes the
self-reference effect).  Since this is a correlational study, some
reflection should turn up a variety of rival hypotheses as to why the
results were obtained, the "John Henry effect" being just one; see
the toy simulation below.
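
To make the John Henry alternative concrete, here is a toy simulation
(entirely my own construction, not from the paper) in which professors
differ only in grading leniency and students adjust follow-on effort
to compensate for their intro grade; the Carrell & West pattern falls
out with no "depth of teaching" anywhere in the data-generating process:

  import numpy as np

  rng = np.random.default_rng(1)
  n = 5000
  ability = rng.normal(0, 1, n)        # stable student ability
  leniency = rng.normal(0, 0.5, n)     # grading leniency of each student's
                                       # (randomly assigned) intro professor
  calc1_grade = ability + leniency + rng.normal(0, 0.5, n)

  # John Henry effect: students compensate for a disappointing intro
  # grade by working harder in the follow-on course (and coast after
  # an inflated one).
  effort = -0.4 * calc1_grade
  calc2_score = ability + effort + rng.normal(0, 0.5, n)

  # Students of lenient intro profs now do worse in Calc II, even though
  # no professor in this simulation taught more or less "deeply".
  print(np.corrcoef(leniency, calc2_score)[0, 1])   # reliably negative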

One more thing:  one should guard against the error of assuming that
the results of this study apply to all other college teachers and colleges.
Indeed, the fundamental question of "to whom do these results
generalize?" needs to be asked regardless of questions about the
analysis and interpretation of results.  The research was done
at the U.S. Air Force Academy, and here is a description from the
U.S. News & World Report ranking of colleges; see:
http://colleges.usnews.rankingsandreviews.com/best-colleges/united-states-air-force-academy-1369

Do these results apply to:
(a)  other military colleges/academies
(b)  other schools that select only 10.8% of applicants
(c)  other schools where males are 78% of the students
(d)  other schools where 70% of classes have 20 or fewer students
(e)  other public coed suburban colleges
(f)  all of the above
(g)  none of the above

Finally, quoting from Carrell & West's conclusions (NOTE: an apparent
confusion of "breadth" with "depth"):

|One potential explanation for our results is that the less-experienced
|professors may teach more strictly to the regimented curriculum being
|tested, while the more experienced professors broaden the curriculum
|and produce students with a deeper understanding of the material. This
|deeper understanding results in better achievement in the follow-on
|courses. Another potential mechanism is that students may learn
|(good or bad) study habits depending on the manner in which their
|introductory course is taught. For example, introductory professors
|who "teach to the test" may induce students to exert less study effort
|in follow-on related courses. This may occur due to a false signal
|of one's own ability or from an erroneous expectation of how follow-on
|courses will be taught by other professors. A final, more cynical,
|explanation could also relate to student effort. Students of low value
|added professors in the introductory course may increase effort in
|follow-on courses to help "erase" their lower than expected grade
|in the introductory course.

Wait, the final explanation is equivalent to a John Henry effect!
I wonder why they refer to it as a "cynical" explanation.  Oh well,
Carrell & West are economists and not psychologists, so one probably
should not expect them to be very familiar with psychological theory,
research, and results.

-Mike Palij
New York University
[email protected]
