metric is useful to predict bugs, but I often hear the further interpretation that complexity actually causes more bugs (or inhibits their fixes) because the code is harder to understand.

That interpretation seems to need stronger validation than the correlational studies provide.

The problem with many of these correlational studies is that the metrics
themselves correlate strongly with lines of code.
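To see why that matters, here is a minimal sketch with invented numbers (not data from any study): if bugs and complexity are both driven by size, the raw complexity/bug correlation can be high while the partial correlation, controlling for lines of code, is near zero.

```python
import random
random.seed(0)

# Synthetic data: bugs and complexity are BOTH driven by LOC,
# so complexity correlates with bugs "for free".
n = 200
loc = [random.randint(50, 500) for _ in range(n)]
complexity = [0.05 * l + random.gauss(0, 3) for l in loc]
bugs = [0.01 * l + random.gauss(0, 1) for l in loc]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def partial(xs, ys, zs):
    # Correlation of x and y after controlling for z.
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / ((1 - rxz**2) ** 0.5 * (1 - ryz**2) ** 0.5)

print(pearson(complexity, bugs))       # raw correlation: substantial
print(partial(complexity, bugs, loc))  # controlling for LOC: near zero
```

A study that does not partial out size cannot distinguish "complexity causes bugs" from "big files have both".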

I thought this forum might know of some studies that approach this. For example, has anyone tried to measure the impact of (e.g.) higher cyclomatic complexity on the speed of fixing a bug in the code?
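For concreteness, the metric itself is cheap to approximate (1 plus the number of decision points), so the measurement burden in such a study would be on the bug-fix timing, not the complexity. A minimal sketch using Python's stdlib `ast` module (the example function `classify` is contrived):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe's cyclomatic complexity:
    1 + number of decision points (branches, loops,
    exception handlers, boolean operators)."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b and c" contributes len(values) - 1 decisions
            decisions += len(node.values) - 1
    return 1 + decisions

src = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            pass
    return "ok"
"""
print(cyclomatic_complexity(src))  # two ifs + one for + one "and" -> 5
```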

No studies that I know of, and of course it would depend on the kind
of bug.

I wonder how cyclomatic complexity affects the time for a genetic
algorithm to fix faults:
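As a toy sketch of what such an experiment searches over (everything here is contrived: the "program" is a list of comparison operators, the fault is one wrong operator, and fitness is the number of passing tests), one could imagine the repair loop as:

```python
import random
random.seed(1)

# Each slot of the "program" holds one comparison operator.
OPS = {"<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
       ">": lambda a, b: a > b, ">=": lambda a, b: a >= b}

def program(ops, x):
    # Apply each operator slot against a fixed threshold.
    return [OPS[op](x, t) for op, t in zip(ops, (0, 10, 100))]

# Intended behaviour is x < 0, x < 10, x < 100 (all slots "<").
TESTS = [(-5, [True, True, True]), (5, [False, True, True]),
         (50, [False, False, True]), (500, [False, False, False])]

def fitness(ops):
    return sum(program(ops, x) == want for x, want in TESTS)

def mutate(ops):
    i = random.randrange(len(ops))
    return ops[:i] + [random.choice(list(OPS))] + ops[i + 1:]

buggy = ["<", ">=", "<"]                 # seeded fault in slot 1
pop = [mutate(buggy) for _ in range(20)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)  # elitist selection
    if fitness(pop[0]) == len(TESTS):
        break                            # all tests pass: repaired
    pop = pop[:10] + [mutate(p) for p in pop[:10]]

best = max(pop, key=fitness)
print(best, fitness(best))
```

The interesting question above is how the number of generations needed grows as the subject program's complexity grows.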

An alternative explanation of the correlation might be that complexity metrics measure the difficulty of the work (i.e., the difficulty of the work drives both the complexity and the bugs at the same time).

There has been some interesting work done by John Sweller on what
he calls cognitive load:

Derek M. Jones                         tel: +44 (0) 1252 520 667
Knowledge Software Ltd                 mailto:de...@knosof.co.uk
Source code analysis                   http://www.knosof.co.uk
