Hi

I remember seeing somewhere that grades (grade inflation?) are higher in courses taught by stipend instructors than by regular faculty. That would be consistent with Chris's observations as well.
Take care
Jim

James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[email protected]

>>> "Christopher D. Green" <[email protected]> 15-Jun-10 12:55:41 PM >>>
Thanks to Jim for sussing this all out.

The reason this study (the course evaluation part) interested me is that I teach statistics. I am one of very few tenured faculty who teach it where I am. Most of the sections are taught by contract faculty. Although I have never seen any of these other people teach (a matter that should concern us... why do we so rarely see what our colleagues do in the classroom? What a great source of ideas that would be.), I have pretty good evidence from the reports of students and from their performance in other courses (e.g., when writing an honors paper under my supervision) that most of these people teach a somewhat less rigorous course than I do.

Apart from honest differences in teaching philosophy, the basic reason for this is simple enough: they have to get a new contract every year, and one set of bad evaluations could jeopardize that. I have the luxury of teaching what I think should be learned without having to worry about short-term fluctuations in my course evaluations (another good argument for not letting universities continue to replace retiring tenured faculty with cheaper, more vulnerable contract faculty).

What I do in my course, given this flexibility, is to teach them, over the course of a full year, enough that they could conduct the kind of honors research that I would like to supervise (and that I think would prepare them well for a graduate program). That means, roughly, that by the end of the year they should be able to calculate a 2-way, mixed-factor ANOVA by hand, and to pick apart the interaction with simple-effects tests, if necessary. I also have them do a 3-way factorial ANOVA, so that I can talk about how to interpret higher-order interactions, but that's actually simpler than the smaller mixed-factor design.
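For readers curious what "by hand" amounts to here, a minimal sketch of the sums-of-squares arithmetic, using the simpler independent-groups version of a 2x2 factorial rather than the mixed-factor design Chris describes (the scores, cell sizes, and factor labels below are all made up for illustration):

```python
# Hypothetical balanced 2x2 independent-groups factorial ANOVA,
# computed "by hand" from sums of squares (made-up data, n = 3 per cell).

def mean(xs):
    return sum(xs) / len(xs)

# cells[a][b] = scores for factor-A level a, factor-B level b
cells = {
    "A1": {"B1": [3, 4, 5], "B2": [5, 6, 7]},
    "A2": {"B1": [6, 7, 8], "B2": [4, 5, 6]},
}
b_levels = ["B1", "B2"]
n = 3                                     # per-cell sample size (balanced)

all_scores = [x for a in cells for b in cells[a] for x in cells[a][b]]
grand = mean(all_scores)
N = len(all_scores)                       # 12 observations total

# Marginal and cell means
a_means = {a: mean([x for b in cells[a] for x in cells[a][b]]) for a in cells}
b_means = {b: mean([x for a in cells for x in cells[a][b]]) for b in b_levels}
cell_means = {(a, b): mean(cells[a][b]) for a in cells for b in b_levels}

# Sums of squares: each A (or B) marginal mean is based on 2 * n scores
ss_a = sum(2 * n * (m - grand) ** 2 for m in a_means.values())
ss_b = sum(2 * n * (m - grand) ** 2 for m in b_means.values())
ss_cells = sum(n * (m - grand) ** 2 for m in cell_means.values())
ss_ab = ss_cells - ss_a - ss_b            # interaction SS by subtraction
ss_within = sum((x - cell_means[(a, b)]) ** 2
                for a in cells for b in b_levels for x in cells[a][b])

df_within = N - 4                         # N minus number of cells
ms_within = ss_within / df_within         # each effect has df = 1 in a 2x2

f_a, f_b, f_ab = ss_a / ms_within, ss_b / ms_within, ss_ab / ms_within
print(f"F(A) = {f_a:.1f}, F(B) = {f_b:.1f}, F(AxB) = {f_ab:.1f}")
# -> F(A) = 3.0, F(B) = 0.0, F(AxB) = 12.0   (a pure crossover interaction)

# Follow up the interaction with simple effects of B at each level of A
for a in cells:
    ss_simple = sum(n * (cell_means[(a, b)] - a_means[a]) ** 2 for b in b_levels)
    print(f"Simple effect of B at {a}: F = {ss_simple / ms_within:.1f}")
```

The made-up data give a textbook crossover: no B main effect at all, yet large simple effects of B at each level of A, which is exactly the situation where the simple-effects follow-up earns its keep. (The mixed-factor version Chris teaches additionally partitions the error term by subject, but the sums-of-squares logic is the same.)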
(I also do a whack of bivariate correlation, regression, non-parametrics, etc.)

As I recall, the "minimum requirements" for this course in my dept. mandate only a 1-way independent-groups ANOVA, and a "discussion of concepts" (or some such phrase) for repeated-measures or 2-way factorial ANOVAs. I personally think (a) that's too little for a full year (I'd be done three weeks into the winter term), and (b) that it doesn't get students where they need to be if they are going to do anything but the simplest of honors projects. (Also, there is nothing like being a little ahead of the curve on stats to get one's MA off to a fast start. I've seen many MA students get dragged down quickly because of troubles in their grad stats course.)

Now, my average course evaluations are a little behind (about two-tenths of a point on a five-point scale) where I would ideally like them to be, but I do not feel required to "dumb down" my course in order to raise them. (To be entirely honest here, over the years I have come to test at a somewhat lower level than I teach at, in order to get the "right" sort of grade distribution. But still, students who want to learn have all the information they need to succeed. I don't want to punish them because not all students are as eager and/or talented as they are.)

Thus, I found the idea heartening that lower course evaluations might just mean that you are teaching more material, not necessarily that you're not teaching as well.

Chris
(full disclosure)
Green, York U., Toronto

=======
Jim Clark wrote:
> Hi
>
> The paper can be read at
>
> http://www.journals.uchicago.edu/doi/pdf/10.1086/653808
>
> Although Chris has focused on the course evaluation part, the bulk of
> the paper concerns learning in current and future related courses. Some
> faculty promote learning better in current courses and some in future
> courses, and the two are negatively related (I think ... complex
> analyses by economists!).
> The final section on course evaluations concerns the fact that
> evaluations correlate positively with current value added and
> negatively with later value added. Experience and rank tend to
> correlate in the opposite direction, that is, negatively with current
> and positively with later.
>
> A couple of caveats ... these are math courses and the students score
> high on math SATs, so we are talking about good students here. Also
> probably highly motivated, given they are in the Air Force Academy. I
> could not find the range of course evaluation scores, but my guess
> would be that they too would be quite high. Finally, the "value added"
> measures for current and future courses presumably (I did not take the
> time to work through their logic / analysis) are some sort of residual
> scores. And residual scores can demonstrate "unusual" relationships
> with one another and with predictors (e.g., being negatively related
> under certain circumstances).
>
> All in all though, an interesting study, especially as students are
> randomly assigned to sections of courses. The authors suggest some
> possible mechanisms for the negative association between current and
> future learning (assuming it is real and not a statistical artifact).
> Here's the penultimate paragraph:
>
> "One potential explanation for our results is that the less experienced
> professors may adhere more strictly to the regimented curriculum being
> tested, whereas the more experienced professors broaden the curriculum
> and produce students with a deeper understanding of the material. This
> deeper understanding results in better achievement in the follow-on
> courses. Another potential mechanism is that students may learn (good
> or bad) study habits depending on the manner in which their
> introductory course is taught. For example, introductory professors who
> 'teach to the test' may induce students to exert less study effort in
> follow-on related courses.
> This may occur because of a false signal of one's own ability or an
> erroneous expectation of how follow-on courses will be taught by other
> professors. A final, more cynical, explanation could also relate to
> student effort. Students of low-value-added professors in the
> introductory course may increase effort in follow-on courses to help
> 'erase' their lower than expected grade in the introductory course."
>
> The study is also clearly relevant to this list and raises the
> interesting point that we can do certain things to promote immediate
> learning and certain things to promote long-term learning, but the two
> "things" might not be the same!
>
> Take care
> Jim
>
> James M. Clark
> Professor of Psychology
> 204-786-9757
> 204-774-4134 Fax
> [email protected]
>
> Department of Psychology
> University of Winnipeg
> Winnipeg, Manitoba
> R3B 2E9
> CANADA
>
> >>> "Christopher D. Green" <[email protected]> 15-Jun-10 8:02 AM >>>
> Getting good evaluations from your students? Perhaps you're not
> teaching them enough. According to a recent study conducted at the US
> Air Force Academy: "professors who rate highly among students tend to
> teach students less. Professors who teach students more tend to get bad
> ratings from their students."
>
> Here's a Washington Post report:
> http://voices.washingtonpost.com/college-inc/2010/06/study_high-rated_professors_ar.html
>
> Chris

---
You are currently subscribed to tips as: [email protected]. To unsubscribe click here: http://fsulist.frostburg.edu/u?id=13251.645f86b5cec4da0a56ffea7a891720c9&n=T&l=tips&o=3106 or send a blank email to leave-3106-13251.645f86b5cec4da0a56ffea7a89172...@fsulist.frostburg.edu
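Jim's caveat that residual ("value-added") scores can show unusual relationships is easy to illustrate. Below is a small sketch with made-up numbers, not the study's actual value-added model: two course scores that share a common ability component correlate positively in raw form, yet their residuals after regressing out ability correlate perfectly negatively, because the construction gives the two courses exactly offsetting section effects.

```python
# Sketch of how residual scores can behave "unusually" (made-up data).
# Two scores sharing an ability component correlate positively, but their
# residuals after regressing out ability correlate -1.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) ** 0.5 * cov(ys, ys) ** 0.5)

def residuals(ys, xs):
    # Residuals from the least-squares regression of ys on xs.
    b = cov(ys, xs) / cov(xs, xs)
    a = mean(ys) - b * mean(xs)
    return [y - (a + b * x) for y, x in zip(ys, xs)]

ability = [1, 2, 3, 4, 5, 6]          # hypothetical common predictor
noise   = [1, -1, 0, 0, -1, 1]        # section effect, orthogonal to ability
current = [z + e for z, e in zip(ability, noise)]   # intro-course score
future  = [z - e for z, e in zip(ability, noise)]   # follow-on-course score

print(f"corr(current, future)           = {corr(current, future):+.2f}")
r_cur = residuals(current, ability)
r_fut = residuals(future, ability)
print(f"corr of residuals (value added) = {corr(r_cur, r_fut):+.2f}")
```

With these numbers the raw scores correlate about +0.63 while the residuals correlate -1.00. The study's actual analysis is far more elaborate, but the sketch shows why Jim flags the possibility that a negative association between value-added measures could be partly a property of residualization rather than of teaching.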
