Hi Dennis.

In an ideal world good assignments would be appropriate for both
learning and assessment. They work reasonably well for graduate students
- we usually call them 'theses' there. 

But the world is certainly not ideal, and in the practical world we live
in, GOOD assignments do not work for assessment. A good assignment
requires a very large amount of work to prepare - much more if you want
individual assignments - and an enormous amount of work to mark. (I am
defining a 'good' assignment as one that assesses the thinking ability
of the students, not one that simply checks whether they can follow a
specified recipe or have memorised some piece of theory.) In our real
world we simply do not have the time - or, often, the interest - to put
in this effort.

Students will inevitably seek help with assignments. That help may come
from other students, other staff members or even you. If the main purpose
of the
assignment is to encourage students to learn, this is fine, but it
certainly stuffs it up as an assessment tool! (One of the reasons I
eventually gave up on assignments was that I was sick of marking my own
work - and of giving lower marks to students who had not obtained help.)

In our sausage-machine educational institutions, where students equate
'learning' with 'photocopying', the effect of most assignments is simply
to provide marks that are restricted in range and generally reasonably
high.
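
If you do decide to level marks across graders rather than abandon the
exercise, the mean/sd matching transformation Glen describes downthread
takes only a few lines. A rough sketch in Python - the marks and the
function name here are invented for illustration, and it assumes scripts
were allocated to the graders at random:

    from statistics import mean, stdev

    def level_marks(marks_a, marks_b):
        """Rescale marks_b so its mean and sd match those of marks_a."""
        r = stdev(marks_a) / stdev(marks_b)    # ratio of standard deviations
        d = mean(marks_a) - r * mean(marks_b)  # shift so the means coincide
        return [r * m + d for m in marks_b]

    # Invented marks: grader B is harsher and less spread out than grader A.
    grader_a = [72, 65, 80, 58, 77]
    grader_b = [55, 60, 50, 62, 53]
    print([round(m, 1) for m in level_marks(grader_a, grader_b)])

The transformed marks take on grader A's mean and sd exactly - though, as
I found, the arithmetic cannot tell you whether it was the students or
the marker you have just 'corrected'.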

Regards,
Alan




Dennis Roberts wrote:
> 
> perhaps what alan is opining about is the problem of trying to make
> satisfactory distinctions amongst all these turned-in assignments ... but,
> that does not DEvalue the activity ... it does, however, DEvalue these
> scores as a means of adding more spread to the distribution
> 
> we can address this by giving the SAME assignment to all ... variations are
> more evident that way BUT then you run into the problem of "cheating" more
> ...
> 
> if we give different assignments to all, then differences in quality of
> content become hard to ascertain since all content is different ... then we
> have to rely on presentation, structure, good writing, and the like
> 
> however, i still contend that the fact that we have an activity that we
> seem to find hard to handle from a spread-em-out point of view is not
> sufficient reason to abandon the activity but, rather, should make us more
> vigilant in finding better ways to grade these things (using predetermined
> checklists, keys, points to look for, etc.)
> 
> At 09:25 AM 4/8/02 +1000, Alan McLean wrote:
> >A number (>10!) of years ago I directed a subject with assignments and
> >of course had this problem. (I say of course because there is always
> >some variation among markers!) I went through an incredible amount of
>heartburn, trying to do the best thing for the students. I started by
> >taking into account both class means and class standard deviations,
> >using the linear transformation referred to before. Then I decided that
> >the SD had very little variation, and simply leveled the class means.
> >
>None of this resolved my heartburn, because I was never sure which
> >group of students I was 'helping'. Did a class get a low average mark
> >because their performance was low or because the marker was more
> >demanding? (Extensive remarking to try and find this out was out of the
>question - this was why we spread the marking load in the first place.)
> >
> >One solution of course was to create assignments for which the solution
> >was very cut and dried, so that variation in marking was not a problem.
> >I eventually decided that this amounted to a class test (except that it
> >was a 'take home' test, with all the disadvantages of that) so wound up
> >turning it into a genuine class test.
> >
> >Eventually I realised that assignments are simply inappropriate as
> >assessment tools for large classes. They work fine as learning tools -
> >if you can get the students to do them as such! - and they work fine for
> >a small class (under 10). For large classes they are simply nonrandom
> >number generators.
> >
> >(You might pick up a certain emotional tone to this email......)
> >
> >Regards,
> >Alan
> >
> >
> >Tristan Miller wrote:
> > >
> > > Greetings.
> > >
> > > On Sun, 7 Apr 2002, Glen Barnett wrote:
> > > > Assuming you *can* take average student abilities across classes as equal
> > >
> > > Who said that we are sampling across classes?  I was thinking of the case
> > > where the assignments from a single large class are randomly divided among
> > > several graders for marking, and one of the graders is an outlier.
> > >
> > > > there are a variety of ways you might match mean and s.d.,
> > > > but the obvious one is the linear transformation you get by
> > > > multiplying the B group's marks by the ratio of standard deviations
> > > > (r = s_A/s_B, making the new sd equal to s_A), and then adding the
> > > > difference d = xbar_A - r*xbar_B, where xbar denotes a group mean.
> > >
> > > Thanks, this is exactly what I'm looking for. :)
> > >
> > > On 6 Apr 2002, Jay Warner wrote:
> > > > I would be more concerned that the graders can interpret the answers
> > > > in such blatantly different ways.  Perhaps the students do the same,
> > > > which begs the question of the precision & usefulness of the
> > > > questions.  Reviewing the questions with your graders might tighten up
> > > > your (instructor's) part of the process.
> > >
> > > I am aware that the best solution in this case is preventative rather than
> > > corrective, but unfortunately situations do arise where the damage has
> > > already been done, and redesigning or remarking the assignment is not
> > > practical.  In such cases the regulations of my university mandate a
> > > linear scaling of the affected grades, hence my query.  I hope that my
> > > assignments will be so clearly specified and my markers so clearly
> > > instructed that I will never have need of such a scaling, but I wish to be
> > > prepared for all possibilities.
> > >
> > > --
> > > \\\  Tristan Miller
> > >  \\\  Department of Computer Science, University of Toronto
> > >   \\\  http://www.cs.toronto.edu/~psy/
> > >
> >
> >--
> >Alan McLean ([EMAIL PROTECTED])
> >Department of Econometrics and Business Statistics
> >Monash University, Caulfield Campus, Melbourne
> >Tel:  +61 03 9903 2102    Fax: +61 03 9903 2007
> >

-- 
Alan McLean ([EMAIL PROTECTED])
Department of Econometrics and Business Statistics
Monash University, Caulfield Campus, Melbourne
Tel:  +61 03 9903 2102    Fax: +61 03 9903 2007

