Thanks for your insights, Del. Another "problem" with many of the student evaluation forms is that they do not ask what the students' *idea* of "good teaching" is.
I use the "Socratic method" a lot, so if a student believes that "good" teaching is when the Professor lectures non-stop for the whole period w/o engaging students or getting them actively involved, then that student would rate a Prof. who uses the Socratic approach low. I call on students by name; if a student doesn't like that, then he/she would rate me low, etc.
From: Del Thomas, Ph.D. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 24, 2006 11:19 AM
To: Michael Klausner
Cc: [email protected]
Subject: Re: TEACHSOC: Re: ESPECIALLY FOR JAY AND KATHLEEN..
Hi,
We know that numerical scores will regress toward the mean. That is not possible on the qualitative side. The latter, while a more accurate measure, cannot be op-scanned and takes longer to read. Students who do not take the evals seriously also bias the scores.
One year the college made a major error and included me on the eval committee. I added a set of measures such as prep time and expected grade, and removed the lecture item. Why should faculty be forced to lecture? I added group work.
It turns out that the more prep time and the higher the expected grade (the two were related), the better the professor was. Surprise! Also, the same textbook was loved by one section and hated by another... and they had the same professor. Well, it figures. None of the other instructional materials have been tested, so why should the evals be? I once subbed for the perennial teacher of the year. A few of the students had to duck to get into the classroom. The class told me that everyone got an A or B. I read the blue books from an earlier exam. They were mostly gibberish. At another school, the award-winning faculty member read to the class from the text.
And so it goes... no one complains publicly. I get lots of private e-mails.
Del
Michael Klausner wrote:
Gerry,
They receive BOTH forms at the same time. Most fill in *both* forms. BTW, I have heard the same from many of my colleagues. After I read the comments on my qualitative form, I was expecting many 4's and 5's on the quantitative ratings, but not so. I scored just above the "school mean" on most of the questions. Since promotion, merit raises, etc. are heavily determined by student evaluations, I'm thinking of sending the qualitative responses to the Dean and Chair. However, they give them much less weight than the numbers.
Could it be that students are "numbed" somewhat after doing so many quantitative forms that there is an "automatic" averaging effect?
At 09:24 AM 1/24/2006, Michael Klausner wrote:
Greetings:
Anyone should feel free to respond to the following but *especially* Professors Howard and McKinney.
We have both quantitative and qualitative student evaluation forms for students to fill out. The qualitative ones ask such things as:
- What did you especially like about the course and the Professor?
- What were his/her strengths/weaknesses?
- How do you think the course could be improved?
The thing is, the qualitative responses are usually much more positive than the quantitative ones. For example, I first read the qualitative responses from students in my Intro class and, based on them, thought that the numerical ratings would reflect them. However, while the numerical ratings were okay, they were not as good as what I expected after reading the qualitative responses.
Administrators here only see the numerical ratings. Thus people who get better
qualitative ones are at a disadvantage.
Have any of you had similar experiences?
Thanks.
Michael
One quick question--are students REQUIRED to fill in anything on the
qualitative forms?
Gerry Grzyb