I've just finished a lecture that reviews the Hackystat telemetry data from an assignment in my software engineering class, completed by 8 two-person groups. This might be interesting for some of you to look at:
<http://csdl.ics.hawaii.edu/~johnson/613s05/17.GradesterFeedback.pdf>
A few things to note:
* There did appear to be a correlation between groups using a "better" (i.e., more consistent and egalitarian) process and their overall outcome in terms of the quality and completeness of the system. (There's not enough data, nor is my assessment of quality/completeness rigorous enough, for this 'correlation' to hold up under scrutiny, but the initial results look plausible.)
* In preparing this lecture, I made some fixes to the Telemetry Reports on the public server. Several of them had stopped working after the redesign of the Reduction Functions, and I repaired them whenever I came across a broken one.
* I also redesigned the Telemetry Reports so that none of them require any parameters. I think this makes it easier for new users to "jump in" and get some charts out without first having to figure out what to type into the parameter field. (There's a small sketch of the idea after this list.)
* In the classroom setting, I find that the initial use of telemetry data is to quantitatively assess "work habits" and to provide evidence that bad work habits result in bad products.
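Roughly speaking, the parameterless reports just mean that every report parameter now has a documented default that kicks in when the field is left blank. Here's a toy sketch of that idea in Java; the class and parameter names are made up for illustration and are not the actual Telemetry Report code:

  import java.util.HashMap;
  import java.util.Map;

  public class ReportDefaults {

    // Returns the user-supplied value when present, otherwise the documented default.
    static String paramOrDefault(Map<String, String> userParams, String name, String defaultValue) {
      String value = userParams.get(name);
      return (value == null || value.length() == 0) ? defaultValue : value;
    }

    public static void main(String[] args) {
      // A new user leaves the parameter field completely blank.
      Map<String, String> userParams = new HashMap<String, String>();
      // Hypothetical parameters for a DevTime-style chart; the defaults cover everything.
      String filterPattern = paramOrDefault(userParams, "filterPattern", "**");
      String cumulative = paramOrDefault(userParams, "cumulative", "false");
      System.out.println("filterPattern=" + filterPattern + ", cumulative=" + cumulative);
    }
  }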
Cheers, Philip
