So far we have not seen a problem with this. We have a few problems set up this way across two very high-enrollment lab courses. In those courses, the deciding factor between full and half credit on a question is the accuracy of the student's lab data: full credit if the calculations are correct and the data are accurate enough to lead to the right conclusion, half credit if the data are sloppy but the calculations are at least done correctly, and no credit if the answer is not supported by the data at all. These courses enroll between 1,000 and 2,400 students a semester, and in general anything that can go wrong does go wrong for some percentage of them, so it *seems* to be working reliably. Some of these we've done as customresponse problems, but I know some of them are other response types as well.
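To give a flavor of that, one of those customresponse checks is roughly the following shape. This is a minimal sketch rather than our production code: the variable names, values, and tolerances are made up, and how the half-credit branch actually awards points depends on how partial credit is configured in your course, so that part is only marked with a comment.

  <problem>
    <script type="loncapa/perl">
      # Made-up stand-ins: the accepted value, and the value the student
      # would get by calculating correctly from their own reported lab data.
      $true_density  = 0.997;
      $their_density = 1.05;   # in practice recomputed from their stored data
      $tol           = 0.01;
    </script>
    <customresponse answerdisplay="density computed from your data">
      <answer type="loncapa/perl">
        if (!defined($submission) || $submission !~ /\S/) { return 'NO_RESPONSE'; }
        # Full credit: calculation correct and data accurate enough
        # to lead to the right conclusion.
        if (abs($submission - $true_density) < $tol) { return 'EXACT_ANS'; }
        # Half credit: consistent with their own (sloppy) data -- the actual
        # award here depends on your partial-credit setup, so this line is
        # only a placeholder.
        if (abs($submission - $their_density) < $tol) { return 'INCORRECT'; }
        # No credit: answer not supported by their data at all.
        return 'INCORRECT';
      </answer>
      <textline />
    </customresponse>
  </problem>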
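And the weight-by-tries setup Gerd is replying to below is roughly this shape; again a simplified sketch, with the weight schedule invented, and the exact attributes on the weight parameter tag are something to check against your own install rather than copy verbatim:

  <problem>
    <script type="loncapa/perl">
      # Number of tries the student has already used on part 0 of this problem.
      $tries = &EXT('user.resource.resource.0.tries');
      # Invented schedule: full weight on the first try, less on later tries.
      if    ($tries < 1) { $wt = 10; }
      elsif ($tries < 3) { $wt = 7;  }
      else               { $wt = 4;  }
    </script>
    <parameter name="weight" type="int_pos" default="$wt" description="Weight" />
    <!-- response parts go here -->
  </problem>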
Doug

Douglas Mills
Director of Instructional Technologies
Department of Chemistry
University of Illinois
dmi...@illinois.edu
(217) 244-5739

On 2/3/14, 10:27 AM, "Gerd Kortemeyer" <ko...@lite.msu.edu> wrote:

> Hi,
>
> On Feb 3, 2014, at 11:16 AM, Mills, Douglas G <dmi...@illinois.edu> wrote:
>
>> We have done something similar to your first challenge below by setting
>> the weight of the problem equal to a variable and setting the variable in
>> the Perl script at the head of the document based on how many tries have
>> been taken on the problem using the relevant EXT functions. I can provide
>> code examples if you'd like.
>
> I am sorry, but I don't think that is a safe solution. This cannot
> reliably feed back into the grade book, I would expect "surprises."
>
> - Gerd.