Not being a statistics expert, I wondered if someone here could
comment on the suitability of parametric tests in this setting:
has there been prior work demonstrating that developer-performance
measures are normally distributed?
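
For concreteness, here's the sort of check I have in mind -- a
minimal sketch in Python, assuming SciPy is available, with made-up
completion times:

    from scipy import stats

    # Hypothetical task-completion times in minutes (invented for
    # illustration only).
    times = [12.4, 15.1, 9.8, 22.3, 11.0, 14.7, 30.2, 13.5]

    w, p = stats.shapiro(times)   # Shapiro-Wilk test of normality
    print("W = %.3f, p = %.3f" % (w, p))
    # A small p suggests non-normality; but with samples this small
    # the test has little power, so a non-significant result is only
    # weak evidence *for* normality.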

I recently came across a study of the impact of a tool on software
developers that used two parametric significance tests to show
statistically significant effects on task-completion times and the
number of tasks completed.  The study compared two groups of
developers (one with 4 developers, the other with 5) as they
completed 6 tasks.  The authors found significant effects with two
tests: a repeated-measures ANOVA on the completion times, and a
t test comparing the number of tasks successfully completed.  The
surrounding description and the actual numbers convinced me of a
practical effect, but the statistical results seemed a little
sketchy to me.
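
The analysis was roughly of this shape -- again just a sketch with
invented numbers, using SciPy for the t test and statsmodels'
AnovaRM for the repeated-measures ANOVA:

    import pandas as pd
    from scipy import stats
    from statsmodels.stats.anova import AnovaRM

    # t test on number of tasks completed (invented counts).
    tool    = [5, 6, 4, 5]        # n = 4
    control = [3, 4, 4, 5, 3]     # n = 5
    t, p = stats.ttest_ind(tool, control)
    print("t = %.2f, p = %.3f" % (t, p))

    # Repeated-measures ANOVA on completion times: long format,
    # one row per (developer, task) observation, fully balanced.
    df = pd.DataFrame({
        'developer': ['d1']*3 + ['d2']*3 + ['d3']*3,
        'task':      ['t1', 't2', 't3'] * 3,
        'time':      [12.0, 14.5, 10.2, 9.8, 13.1,
                      11.7, 15.3, 16.0, 12.4],
    })
    print(AnovaRM(df, depvar='time', subject='developer',
                  within=['task']).fit())

(AnovaRM handles only within-subject factors, so this sketch leaves
out the between-groups comparison of the original design.)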

I'm currently designing an experiment to assess the impact of a
tool on developer performance.  Ideally I'd like to be able to
claim statistical significance, but I don't think I can rely on
parametric methods.
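
My current thinking is to fall back on something distribution-free,
e.g. a Mann-Whitney U test between the two groups.  A sketch with
invented data (with groups this small and no ties, recent SciPy
computes the exact p-value rather than a normal approximation):

    from scipy import stats

    # Hypothetical completion times, one value per developer.
    tool    = [11.2,  9.8, 13.4, 10.1]         # n = 4
    control = [14.5, 12.9, 16.2, 13.8, 15.0]   # n = 5

    u, p = stats.mannwhitneyu(tool, control,
                              alternative='two-sided')
    print("U = %.1f, p = %.4f" % (u, p))

(A permutation test over all possible group assignments would be
another option at these sample sizes.)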

Brian.

-- 
  Brian de Alwis | Software Practices Lab | UBC | http://www.cs.ubc.ca/~bsd/
      "Amusement to an observing mind is study." - Benjamin Disraeli
 