In article <004601bf5027$42a32d00$450aad86@bergerd>, Dale Berger
<[EMAIL PROTECTED]> writes
>The problem in a nutshell -
>
>(1) practice with statistical applications outside of class, as with
>homework exercises, is essential;
>(2) we can't be sure who did work that was done outside of class;
>(3) If we don't grade homework, many students won't do it, despite good
>intentions.
>
>There may be no perfect solution.  My approach is to allow, and even
>encourage, students to work together on homework in teams of two or three.
>Include issues for discussion.  Promise that exams will include some
>questions based on homework problems and discussion issues.  Grade homework
>for feedback, but don't give it much weight.
>
>Collaboration on homework can work very well.  As we all know, teaching is a
>great way to solidify one's own learning.
>
>Despite the appeal of take-home exams, I no longer use them.  I find them
>difficult to write and grade, there is the possibility of collaboration at
>some level, and some students put in tremendous amounts of time that
>distorts their lives and goes way beyond what is possible for many other
>students.  That doesn't seem fair to me.
>
All the threads that have contributed to this discussion have made it a
really interesting rope to follow (too thick to be a 'thread' any
longer!). At the risk of telling everyone what they already know, here's
a pennyworth from the other side of the pond: a non-statistician's view,
but one from a person who has spent the last 20+ years working on issues
connected with assessing learning.

In the UK we tend to 'evaluate' courses and 'assess' learning; I mention
this only because terminology is not the same everywhere. I find it a
useful distinction because the term 'assess' has a root that means to
'sit alongside and observe, value and seek to understand'. 'Evaluate'
has a root that means to reckon up and work out the value of something,
and 'examination' has a root, the examen or pointer of a balance, that
has to do with the weighing of evidence in a court of law. In that
setting, if the examen moves even slightly under the weight of evidence
then I am convicted; or, if a student, I pass. One of the big issues in
all this is *purpose*: if we are assessing when we should be evaluating,
or examining when we say we are assessing, then clarity of purpose and
result is lost and our ability to communicate with students and each
other is impaired.

A big issue in all this is the *repeatability* of judgements. Does a 'B
grade' or a 67% mean the same thing each time a teacher or examiner
gives it? Do the same standards and values hold good across a whole
faculty? Last year? This year? Much more problematic than the extent of
variation between assessors or raters is the question of what causes
such variation and what it means. Of course it is possible to publish
descriptions of standards and expectations, and one might even have the
proverbial 'gold standard' to measure against. But how many of these
term grades and assessments are considered 'important' enough to probe
how fair, repeatable and credible they really are?
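
At the risk of over-literalising, here is a toy sketch in Python of the
simplest such probe (the grades are invented, and the calculation is
mine, not anything proposed in this thread): how often do two assessors
award the same grade to the same scripts, and where do they diverge?

    from collections import Counter

    # Hypothetical grades awarded by two assessors to the same 8 scripts.
    rater_a = ["B", "B", "A", "C", "B", "A", "C", "B"]
    rater_b = ["B", "C", "A", "C", "A", "A", "C", "C"]

    pairs = list(zip(rater_a, rater_b))
    agreement = sum(a == b for a, b in pairs) / len(pairs)
    print(f"exact agreement: {agreement:.0%}")   # 5 of 8, about 62%

    # The disagreements, not the headline rate, hold the interesting
    # question: are the causes systematic (severity, criteria) or noise?
    print(Counter((a, b) for a, b in pairs if a != b))

The divergences matter more than the agreement rate itself, because they
are where the causes of variation can start to be understood.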

It seems to me that many of the points raised in these postings echo the
debates about portfolio and authentic assessment that are current in the
USA and, in different forms, here as well. The 'homework grading' issue
is an example, and it probably has as much to do with educational
philosophies and individual beliefs as anything else. Purpose is also at
issue here. For instance, if (a big if) a faculty has a clearly
thought-through (thru!?) Vygotskian approach to teaching and learning,
then homework assignments might be used to assess the extent to which
metacognitive strategies are understood and employed by students.
Alternatively, if students are encouraged to collaborate (as in the post
by Dale Berger above), then assessing the ability to solve problems by
becoming part of, and working within, a small team may be what matters.
If a teacher needs to check whether first-stage learning (e.g.
vocabulary, skills, techniques) is in place, then neither of these is
appropriate. Other theories of learning generate different assessment
rationales. Simply teaching without being aware of the how and why of
the procedures adopted is a form of the 'cookbook' approach so often
criticised in this group where the use of statistics is concerned.
Choices about how to teach and assess (whether formatively or
summatively) need to be informed and considered, not just 'grabbed'.

As other posters have noted, providing formative or diagnostic feedback
to students is essential. Some of this needs to be norm-referenced and
some needs to be related to performance indicators or criteria, if
students and faculty members are to have a way to 'see' what is
happening and why. This is quite different to 'grading on the curve',
which (at least to me) seems lazy at best. I have come to believe that
clearly stated *thresholds of performance* are the most effective form
of cut-off points, and that letter grades or percentages should be
related to these and not the other way around.
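
To make the 'thresholds first, grades second' idea concrete, here is a
minimal sketch in Python; the cut-offs and criterion descriptions are
entirely hypothetical placeholders, not recommendations.

    # Letter grades derived from stated thresholds of performance,
    # not fitted to a curve after the fact. Numbers are placeholders.
    THRESHOLDS = [
        (90, "A", "meets all criteria, including the extension problems"),
        (75, "B", "meets all core criteria reliably"),
        (60, "C", "meets core criteria with some gaps"),
        (0,  "F", "core criteria not yet demonstrated"),
    ]

    def grade(score):
        """Return the highest stated threshold the score clears."""
        for cutoff, letter, criterion in THRESHOLDS:
            if score >= cutoff:
                return f"{letter}: {criterion}"

    for s in (92, 68, 40):
        print(s, "->", grade(s))

The point is only the direction of derivation: the published criteria
come first, and the letters are shorthand for them, never the reverse.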

What I find really difficult about assessments and grades is the use of
terms like A or B+, or good, bad and average, to describe achievement or
performance, not least because they can carry so many meanings as to be
misleading.

*Fewer and better measures* would be a good resolution for many
faculties to adopt for the New Year. 

My thanks to the regular contributors to this group from whom I have
learned a great deal in the last 18 months or so. Enjoy the next century
and greetings from Dorset, England.
--
Jonathan Robbins 
www.talent-centre.com
