I wrote:
 >> ... keeping people at 80% correct is great rule-of-thumb goal ...

To elaborate on the statement above a bit, we did drill-and-practice
teaching (and had students loving it).  The value of the 80% target is
maximal learning.  Something like 50% is best for measurement theory
(but discourages the student drastically).  In graduate school I had
one instructor who tried to target his tests to get 50% as the average
mark.  It was incredibly discouraging for most of the students (I
eventually came to be OK with it, but it took half the course).

The hardest part to create is the courseware (including questions), the
second-hardest effort is scoring the questions (rating the difficulty in
all applicable strands).  The software to deliver the questions was, in
many senses, a less labor-intensive task (especially when amortized over
a number of courses).  I think we came up with at least a ten-to-one
ratio (it may have been higher, but definitely not lower) in effort
compared to an instructor's prep for a new course.

I am (and was) a programming, rather than an education, guy.  I do not
know the education theory behind our research well, but I know how a
lot of the code worked (and know where some of our papers went).  We
kept an exponentially decaying model of the student's ability in each
"strand" and used that to help the estimate of his score in the coming
question "cloud."  A simplified version of the same approach would be
to have strand-specific questions, randomly pick a strand, and pick the
"best" question for that student in that strand.  Or, you could bias the
choices between strands to give more balanced progress (increasing the
probability of work where the student is weakest).
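The simplified version above could be sketched roughly as follows in
Python.  (All of this is my illustrative guess at the shape of such a
system: the names, the 0.8 decay constant, and the difficulty-matching
rule are assumptions for the sketch, not details of the original code.)

```python
import random
from collections import namedtuple

# Hypothetical question record: rated difficulty is on the same 0..1
# scale as the ability estimate.
Question = namedtuple("Question", "text difficulty")

DECAY = 0.8  # illustrative: weight kept on the old estimate each update


class StudentModel:
    """Exponentially decaying estimate of ability (0..1) per strand."""

    def __init__(self, strands):
        # Start at 0.5: no information either way.
        self.ability = {s: 0.5 for s in strands}

    def record(self, strand, correct):
        """Fold the latest result into the decayed estimate."""
        old = self.ability[strand]
        new = 1.0 if correct else 0.0
        self.ability[strand] = DECAY * old + (1 - DECAY) * new


def pick_strand(model):
    """Bias the random strand choice toward the student's weak spots
    (weight = 1 - ability), giving more balanced progress."""
    strands = list(model.ability)
    weights = [1.0 - model.ability[s] for s in strands]
    return random.choices(strands, weights=weights)[0]


def pick_question(model, strand, questions):
    """Pick the question in the strand whose rated difficulty is
    closest to a target a bit below the student's ability, aiming at
    roughly 80% expected success (the offset is an assumption)."""
    target = model.ability[strand] - 0.2
    return min(questions[strand], key=lambda q: abs(q.difficulty - target))
```

A driver loop would then alternate `pick_strand`, `pick_question`, and
`record` as the student answers, so the estimates (and hence the
question choices) track recent performance most heavily.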


--Scott David Daniels
[EMAIL PROTECTED]

_______________________________________________
Edu-sig mailing list
Edu-sig@python.org
http://mail.python.org/mailman/listinfo/edu-sig
