On Feb 20, 2019, at 10:40, 'John Clements' via Racket Users 
<racket-users@googlegroups.com> wrote:
> One solution would be to use the command-line version of Racket’s 
> handin-server, which is language-agnostic to the degree that it can just 
> submit arbitrary files to the server, and what happens on the back end is up 
> to you....
> 
> I look forward to hearing about anything awesome that others use.

I’ve been working on making the handin server fit the way I think about things, 
and I’m hoping to get my code to the point where “awesome” would be a 
reasonable descriptor.

Here’s a sketch of the core ideas.

1) I want to be able to code up a solution for each assignment I write, and use 
the solution file in assessing student work. My students’ work is in *SL, so I 
want to write my solutions in that language as well. I have a library that 
provides macros check-expect*, check-within*, etc.; they expand to check-expect
& co for use on my own code, but they also get collected as tests to run on 
student submissions. The library makes my solution file provide a handle on 
these tests, so I can just require my solution file from the checker.
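To make the idea concrete, here's a minimal sketch of how such a library could work — all names and the representation of collected tests are illustrative, not my actual code:

```racket
#lang racket
;; Hypothetical sketch: check-expect* runs like an ordinary assertion
;; on my own solution code, and additionally records the quoted test
;; forms so a checker can replay them later (e.g. by eval-ing them in
;; the student's namespace).
(provide check-expect* collected-tests)

;; Each entry is (list actual-expr expected-expr), as quoted forms.
(define collected-tests '())

(define-syntax-rule (check-expect* actual expected)
  (begin
    ;; Run immediately, like check-expect on the solution itself.
    (unless (equal? actual expected)
      (error 'check-expect* "~s did not evaluate to ~s"
             'actual 'expected))
    ;; Also collect the source forms for replay on student work.
    (set! collected-tests
          (cons (list 'actual 'expected) collected-tests))))
```

The real library targets *SL rather than #lang racket, so the details differ, but the shape — run locally, collect for replay — is the point.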

2) I want automated checking that the student has defined all the 
functions/constants that I’ve explicitly named in the assignment, and I do this 
via a students-must-define macro that the solution library provides; again, my 
resulting assignment solution file then provides a handle on the required names.
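A sketch of that macro plus the corresponding check, again with hypothetical names (namespace-variable-value is the real Racket API for looking up a binding in a namespace):

```racket
#lang racket
;; Hypothetical sketch: students-must-define records required names;
;; a checker can later look each one up in the student's namespace.
(provide students-must-define required-names missing-names)

(define required-names '())

(define-syntax-rule (students-must-define name ...)
  (set! required-names (append '(name ...) required-names)))

;; Returns the subset of required-names with no binding in student-ns.
;; A private sentinel distinguishes "unbound" from "bound to #f".
(define unbound (gensym 'unbound))

(define (missing-names student-ns)
  (for/list ([n (in-list required-names)]
             #:when (eq? unbound
                         (namespace-variable-value
                          n #t (lambda () unbound) student-ns)))
    n))
```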

I'm satisfied that the above two points are clear wins, at least for me; the 
interface is comfortable to work with.

3) I use the web-server/templates library to generate a partially-filled rubric 
based on the solution file. For each assignment I include a rubric (.txt file) 
in the assignment directory for this purpose.
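The template mechanism itself is just the stock include-template from web-server/templates; the variable and file names below are illustrative. The rubric .txt file references template variables with @-syntax (e.g. @assignment-name), and include-template fills them from lexical scope:

```racket
#lang racket
;; Sketch: render a partially-filled rubric from a text template.
;; rubric.txt can contain @assignment-name and @missing wherever the
;; generated rubric should mention them.
(require web-server/templates)

(define (render-rubric assignment-name missing)
  (include-template "rubric.txt"))
```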

4) I want the code that actually resides in checker.rkt to be about how to 
respond to student code passing or failing the desiderata I’ve set out in the 
solution file, rather than about what the student should define and what tests
should run.
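So the checker's job reduces to something like the following sketch — the handin-server glue is omitted and the helper names (log!, and the handles the solution file provides) are hypothetical:

```racket
#lang racket
;; Sketch of the division of labor: checker.rkt requires the solution
;; file for its tests and required names, and only decides how to
;; react -- here, logging shortcomings rather than rejecting.
(require "solution.rkt")  ; provides collected-tests, missing-names

(define (assess student-ns log!)
  ;; Missing definitions: note them instead of failing the handin.
  (for ([n (in-list (missing-names student-ns))])
    (log! (format "did not define: ~a" n)))
  ;; Replay each collected test (quoted actual/expected pair) in the
  ;; student's namespace and log any mismatches.
  (for ([t (in-list collected-tests)])
    (define actual   (eval (first t) student-ns))
    (define expected (eval (second t) student-ns))
    (unless (equal? actual expected)
      (log! (format "failed: ~s, expected ~s" (first t) expected)))))
```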

I also want to incorporate some cosmetic checks (like indentation) but haven’t 
gotten around to it yet.

My motivation in all this is that the convenience of submitting from DrR is 
great, but I’ve found it hard to write checkers that yield good UX for students 
whose work is imperfect. The typical failure mode is that errors are raised and 
handin fails; I generally would rather accept the handin and log its 
shortcomings in the rubric, and sometimes also give the student a notification 
(like, “hey, please break your long lines and re-submit”).

Longer term I’d like to come up with something to serve as an instructor 
dashboard, with an aggregate display of features like who left what function 
undefined, and what tests were failed by multiple students’ code.

If I get to a point where I’m satisfied that the end product is useful enough 
to warrant sharing, I’ll write it all up and share the code. If anybody wants 
to collaborate I’d be happy to discuss and share off-list.

Best,
J
