> I agree, opcodes will have optimizations in them which will affect the
> count so they can be flawed there. And also things like assignment
> statements which are one line of perl code are a lot more opcodes than
> one might initially think. ...

It's not a matter of optimization like -O2 or whatever. It's a matter
of optimizing the written code for maintainability. A better example
would be a loop vs. unrolling that loop. A standard for-loop would be,
say, one decision point plus 5 statements. Unrolled, that loop may be
5 statements * 100 items for 500 statements. Which one is easier to
maintain? Optimizing for maintainability is, imho, much more
important, in general, than optimizing for opcode efficiency.
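To make the trade-off concrete, here's a small sketch (mine, in Python rather than Perl, purely for illustration) of the same summation written rolled and unrolled. Both compute the same result; the difference is how much text a maintainer has to read:

```python
items = [1, 2, 3, 4, 5]

# Rolled: one decision point (the loop test) plus a one-statement body.
total = 0
for x in items:
    total += x

# Unrolled: zero decision points, but one statement per item.
# At 100 items this becomes 100 statements to read and keep in sync.
total_unrolled = 0
total_unrolled += items[0]
total_unrolled += items[1]
total_unrolled += items[2]
total_unrolled += items[3]
total_unrolled += items[4]

# Identical behavior, very different maintenance cost.
assert total == total_unrolled
```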

> ... I like your idea of measuring the decision points, and in
> particular their density. And Matt had a good suggestion with tokens
> too.

Unfortunately, it's not my idea. It's one of the more recent crazes in
software analysis. Of course, I can't remember the name of the theory,
but it's out there. Google for "software decision point" and you'll
hit a bunch of articles.

Personally, I think that we should be looking at minimizing the number
of tokens in a block, as that increases readability of that block.
(The less to read, the easier to read it all.) This would imply that
creating a set of one-use functions for a complex set of operations is
a good thing.
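As a hedged illustration of that point (again in Python, and the names here are entirely my own invention): pulling a complex condition out into a one-use helper shrinks the token count of the calling block, so the block reads as a single idea instead of a pile of operators.

```python
# Before: the calling block carries every token of the condition itself.
def process_inline(order):
    if order["total"] > 100 and order["status"] == "open" and not order["flagged"]:
        return "expedite"
    return "normal"

# After: a one-use helper. The calling block now has far fewer tokens,
# and the helper's name documents what the condition means.
def is_priority(order):
    return order["total"] > 100 and order["status"] == "open" and not order["flagged"]

def process(order):
    if is_priority(order):
        return "expedite"
    return "normal"
```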

Additionally, minimizing the number of tokens in a block also implies
keeping functions to a minimal size, which maps well to most people's
feelings that functions should be no more than 50 lines long,
including comments.

Decision point minimizing ... that's a tough one. There is a minimum
number of decision points that any given problem must have in order to
correctly model that problem's complexity. If you dip below that
number, you have incorrectly modeled the problem. Going over that
number means you have (potentially) modeled the problem correctly, but
with additional complexity. That said, we can probably come up with a
useful metric regarding the number of decision points within a given
block.

*looks over what he has been babbling*

It seems to me that the block is rapidly becoming, at least for me,
the actual unit we want to measure stuff about. In Perl (and other
C-derived languages), the block is very neatly delimited by { ... }.
Coincidentally, that comes back to the line-folding idea that Matt
initially brought up as a starting point for analysis.

So, maybe we should look at a block and count the following:
1) Number of tokens
2) Number of decision points
3) Number of child-blocks
4) Number of function calls (core and otherwise)
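To sketch what counting those four things might look like, here's a rough prototype. It's mine, not anything from the thread, and it uses Python's `tokenize` and `ast` modules as a stand-in for a Perl block analyzer (a Perl version would presumably walk the op tree or use a parser like PPI). The choice of which node types count as "decision points" and "child blocks" is an assumption on my part:

```python
import ast
import io
import tokenize

def block_metrics(source):
    """Count tokens, decision points, child blocks, and function calls
    in a chunk of Python source (a stand-in for a Perl block)."""
    # Tokens, ignoring pure-layout tokens (newlines, indentation, etc.).
    skip = (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT)
    tokens = [t for t in tokenize.generate_tokens(io.StringIO(source).readline)
              if t.type not in skip]

    tree = ast.parse(source)
    # Assumption: these node types are what we'd call decision points.
    decisions = sum(isinstance(n, (ast.If, ast.For, ast.While,
                                   ast.BoolOp, ast.IfExp))
                    for n in ast.walk(tree))
    # Assumption: constructs that open a nested block count as child blocks.
    child_blocks = sum(isinstance(n, (ast.If, ast.For, ast.While,
                                      ast.FunctionDef, ast.With))
                       for n in ast.walk(tree))
    calls = sum(isinstance(n, ast.Call) for n in ast.walk(tree))

    return {"tokens": len(tokens), "decision_points": decisions,
            "child_blocks": child_blocks, "function_calls": calls}

src = """
for x in range(10):
    if x % 2 == 0:
        print(x)
"""
m = block_metrics(src)
```

Even this toy version shows the interplay: the two function calls (`range`, `print`) keep the block's local token count down, but each one drags in outside knowledge, which is exactly the #4 weighting question.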

The reason for counting function calls is that they require additional
knowledge that isn't specified within the block itself. A call increases
global complexity (more outside knowledge is needed) while decreasing
local complexity (less is happening in this block). It's not immediately
clear exactly how we would want to weight #4.

Rob

_______________________________________________
sw-design mailing list
[EMAIL PROTECTED]
http://metaperl.com/cgi-bin/mailman/listinfo/sw-design