Ruven,

> No one seems to have yet mentioned the work on quantifying software
> complexity such as the McCabe and Halstead metrics.


The Halstead metric is based on token counts, and given that
two pieces of code containing the same number of tokens can be
easy or hard for somebody to comprehend, I don't feel there is
a strong correlation with complexity from the cognitive point
of view.

Of course the more tokens there are in a program the more effort
will be needed to comprehend it.  But the semantic associations
between tokens introduce an additional factor that is not part of
the Halstead metric.
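
To make that concrete, here is a rough Python sketch (it simply
counts tokens with the tokenize module, rather than using Halstead's
exact operator/operand classification).  The two one-line snippets
below get identical length, vocabulary and volume figures, even
though the second needs knowledge of the precedence and
associativity of ** to comprehend:

import math
import tokenize
from io import BytesIO

IGNORE = (tokenize.ENCODING, tokenize.NEWLINE, tokenize.NL,
          tokenize.ENDMARKER)

def token_counts(source):
    # Total tokens (Halstead's length N) and distinct tokens (vocabulary n).
    toks = [t.string
            for t in tokenize.tokenize(BytesIO((source + "\n").encode()).readline)
            if t.type not in IGNORE]
    return len(toks), len(set(toks))

def volume(source):
    # Halstead volume: N * log2(n).
    N, n = token_counts(source)
    return N * math.log2(n)

easy = "x = a + b + c + d"   # reads left to right
hard = "x = -a ** -b ** c"   # same token counts, but precedence traps

print(token_counts(easy), volume(easy))   # (9, 7) ~25.3
print(token_counts(hard), volume(hard))   # (9, 7) ~25.3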

> There is evidence that these measures have behavioral correlates -
> e.g., modules with higher cyclomatic complexity tend to have larger
> numbers of defects - so there is clearly a cognitive aspect to
> software complexity.

Both metrics correlate highly with lines of code, and there is a
strong correlation between lines of code and the number of faults.
There is no need to invoke a cognitive aspect.
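
The McCabe measure is not doing anything deep either.  A rough
Python sketch (1 plus the number of branching constructs found by
the ast module, which only approximates the original control flow
graph definition) shows why: decision points tend to accumulate
roughly in step with the amount of code, so the metric ends up
tracking lines of code.

import ast

# Constructs treated as decision points in this sketch.
DECISIONS = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def mccabe(source):
    # 1 + number of decision points: a common approximation of
    # McCabe's cyclomatic number.
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

straight_line = "a = 1\nb = a + 1\nc = b * 2\n"
branchy = "if a > 0:\n    b = 1\nelse:\n    b = 2\n"

print(mccabe(straight_line))   # 1 - no decisions, however long it grows
print(mccabe(branchy))         # 2 - one decision point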

Most code is stunningly mundane, as are most faults, which helps explain
how such simple-minded metrics turn out to be more useful than might
be expected.

> However, I can't find any work which provides an explicit
> psychological model for software complexity which could be used to
> relate it to more general ideas of cognitive complexity.

I have no good idea of how to define software complexity or
cognitive complexity in practice.

If we had a method of measuring the cognitive effort needed to
perform some task, we might say that small tasks requiring lots
of such effort were more complicated than large tasks requiring
little effort.

--
Derek M. Jones                         tel: +44 (0) 1252 520 667
Knowledge Software Ltd                 mailto:[EMAIL PROTECTED]
Source code analysis                   http://www.knosof.co.uk
