To directly answer Michael's main question (see his message below), I don't know that my experience has been the same (i.e., qualitative responses that are more positive than the quantitative ones). We have both types in our department, and students are not required to complete either, though most do (I am not sure it is ethical to require students to complete course/instructor evaluations). I think it might be a bit difficult to make a systematic comparison between the two, though certainly we have our gut reactions. I generally find qualitative responses to have more variance, in a sense. Quantitative ones tend to fall in a limited numerical range and to be fairly similar, with, perhaps, one exception from a dissatisfied student. Qualitative ones, by definition, are more specific and tend to be about many things. But I have not found them to be that different from the quantitative. If Michael and many of his colleagues often have this experience, it is likely due to measurement and/or climate/context (norms) reasons.
Some other thoughts based on my reading of relevant literature...
Some argue that for summative, personnel decisions, a small number of global, highly reliable and valid quantitative items is best, while for formative purposes, a larger number of more specific items and qualitative questions is best. Researchers have also argued that it is unreliable to make decisions based on quantitative evaluations from fewer than 15 students per class, or from only 1-2 classes or 1-2 semesters... but, of course, we do this all the time. In other words, we should have larger Ns, several classes, and look at trends over time.
When developing questions, we should base the measures on the goals of the evaluation and the objectives of the teaching and learning process. The measures should undergo all sorts of testing, validity checks, reliability checks, etc. before use. Most departments and schools do not do this, of course.
Many of the questions we ask are problematic. Students can provide reliable and valid information in response to questions that ask about things they can know or about their opinions. Questions such as "How knowledgeable is the instructor in the content area?" are not reasonable to ask students, especially undergraduates, as they don't have the expertise to know. In addition, we ask questions, for example, "What did you especially like about the course and the Professor?", that probably don't get us the best information for improving student learning (our ultimate goal!) and/or yield a lot of random error. Many places are trying to use items that focus more on learning, such as "Compared to other sociology courses of a similar level, how much did you learn in this course?" or "What factors in the course or by the instructor helped you to learn the skills or material in this course?"
How the numbers look, of course, depends also on their use, interpretation, and comparisons... At some institutions the numbers are interpreted by comparing them to the average in the dept. or college for courses of similar size and level. Differences are not seen as "meaningful" unless they are at least one standard deviation apart... or some such standard. Unfortunately, I have also seen departments where faculty are ranked on their quantitative scores using the mean to the hundredth decimal place, and people actually think differences of a tenth or one-hundredth are meaningful! Michael might try to find out how interpretation or comparison of quantitative scores is done and whether that needs "fixing."
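To make that one-standard-deviation rule concrete, here is a minimal sketch (in Python, with made-up numbers; the function name and the one-SD cutoff are just my assumptions for illustration, not any institution's actual policy):

    # Illustrative only: treat an instructor's mean as "meaningfully" different
    # only if it is at least one standard deviation from the departmental mean
    # for courses of similar size and level.
    def compare_to_department(instructor_mean, dept_mean, dept_sd, cutoff_sds=1.0):
        """Return 'above', 'below', or 'not meaningful' under a simple SD rule."""
        difference = instructor_mean - dept_mean
        if abs(difference) >= cutoff_sds * dept_sd:
            return "above" if difference > 0 else "below"
        return "not meaningful"

    # Hypothetical example: a 4.3 mean against a department mean of 4.1 (SD 0.5)
    # differs by less than one SD, so it would not be treated as meaningful.
    print(compare_to_department(4.3, 4.1, 0.5))   # -> not meaningful

Under a rule like that, the tenth-of-a-point differences people sometimes agonize over simply would not register.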
Administrators, I believe, should look at both types -- qualitative and quantitative... multiple measures are better... especially if the goal (explicit or implicit) is formative and summative. They probably don't want to take the time to read the qualitative responses or are hesitant to draw conclusions from/analyze them. I would go ahead and send the qualitative ones but, perhaps, they could be "analyzed" and "summarized" by a colleague or your Chair first, making it easier and, perhaps, seeming more legitimate to the other administrators. They should also, of course, be looking at data other than end-of-semester student evaluations. I would also do CATs and midterm evaluations and gather direct evidence of student learning to send, especially at times of critical personnel decisions. Colleagues (internal or external) can also evaluate syllabi and materials. Evidence of being a scholarly teacher (e.g., you read teaching soc, participate on this list, innovate in your teaching, go to teaching workshops, etc.) and/or of doing SoTL (you investigate teaching-learning in your discipline and make that public) should also matter in terms of being a "good teacher."
Whew, I may have gotten a little carried away... enough for now...
Kathleen
At 09:24 AM 1/24/2006, Michael Klausner wrote:
Greetings:
Anyone should feel free to respond to the following but *especially* Professors Howard and McKinney.
We have both quantitative and qualitative student evaluation forms for students to complete. The qualitative ones ask such things as:
- What did you especially like about the course and the Professor?
- What were his/her strengths/weaknesses?
- How do you think the course could be improved?
Why is it that the qualitative responses are usually much more positive than the quantitative ones? For example, I first read the qualitative responses from students in my Intro class and, based on them, thought that the numerical ratings would reflect them. However, while the numerical ratings, in response to the questions, were okay, they were not as good as what I expected after reading the qualitative responses. Administrators here only see the numerical ratings. Thus people who get better qualitative responses than quantitative ones are at a disadvantage.
Have any of you had similar experiences?
Thanks.
Michael
Cross Endowed Chair in the Scholarship of Teaching and Learning
Professor, Sociology
Carnegie Scholar
Box 6370
Illinois State University
Normal, Il 61790-6370
off 309-438-7706
fax 309-438-8788
[EMAIL PROTECTED]
http://www.ilstu.edu/~kmckinne/
