Over on the libresoft mailing list we're having a conversation about the interpretation of complexity metrics (e.g. McCabe cyclomatic complexity). The studies we know of demonstrate that the metric is useful for predicting bugs, but I often hear the further interpretation that complexity actually causes more bugs (or inhibits their fixes) because the code is harder to understand.

That interpretation seems to need stronger validation than the correlational studies provide. I thought this forum might know of some studies that approach this. For example, has anyone tried to measure the impact of (e.g.) higher cyclomatic complexity on the speed of fixing a bug? I'm thinking of some comparison of the effort to fix the "same bug" under different coding styles (one with high cyclomatic complexity, one with lots of function calls, something else)?
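To make the comparison concrete, here is a hypothetical sketch (entirely my own illustration, not drawn from any study) of what "the same logic in two styles" might look like: one version as a single function with many decision points (higher cyclomatic complexity), the other split into a lookup table and small helpers (each unit with lower complexity). The function names and the shipping-cost domain are invented for the example.

```python
# Hypothetical illustration: the "same" logic written in two styles.

# Style 1: one function, many decision points (higher cyclomatic complexity).
def shipping_cost_branchy(weight_kg, express):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 1:
        base = 5
    elif weight_kg < 5:
        base = 10
    else:
        base = 20
    if express:
        base *= 2
    return base

# Style 2: a lookup table plus small helper functions
# (each unit has few decision points).
BANDS = [(1, 5), (5, 10), (float("inf"), 20)]

def base_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Return the cost of the first band whose upper limit exceeds the weight.
    return next(cost for limit, cost in BANDS if weight_kg < limit)

def shipping_cost_tabular(weight_kg, express):
    return base_cost(weight_kg) * (2 if express else 1)
```

The experiment I'm imagining would seed "the same bug" (say, a wrong band boundary) into both versions and time how long participants take to find and fix it.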


An alternative explanation of the correlation might be that complexity metrics measure the difficulty of the work (i.e. the difficulty of the work drives both the complexity and the bugs at the same time).

Thanks,
James
http://james.howison.name