On 28/07/07, chromatic <[EMAIL PROTECTED]> wrote:
> Let me preface this by saying that I know that our static analysis tests
> represent a tremendous amount of work by several people (especially Paul,
> with an enormous amount of respect also to everyone who's contributed to
> Perl::Critic and PPI), and that they have helped us reach, and continue to
> help us maintain, an important level of quality in our code.
> Maintainability is the second most important goal for our code, after
> correctness.
>
> With that in mind, I wonder if it's time to reconsider our strategy for using
> these tests effectively.
>
> I ran make test.  It took almost six minutes:
>
> Files=307, Tests=7413, 345 wallclock secs (187.91 cusr + 34.26 csys = 222.17
> CPU)
>
> I renamed DEVELOPING to DEV and ran make test again.  It took under four
> minutes:
> Files=296, Tests=7392, 220 wallclock secs (121.98 cusr + 26.64 csys = 148.62
> CPU)
>
> The first run failed several coding standards tests, which suggests to me
> that people don't run them before every commit.  We can't prevent people
> from accidentally forgetting, but I wonder if making the coding standards
> tests faster would make them less painful and make it more likely that
> people would run them before committing.
>
> Most of our commits touch fewer than a dozen files.  Are we getting enough
> benefit from performing static (non-functional) analysis of all of the
> several thousand files in our tree on every make test run to justify adding
> another 50% to our test run times?  (Not all assertions are equal in value,
> but cutting 21 of 7400 tests drops the length of a test run by a third.)
>
> Again, our code has improved in no small part due to the tests and the
> diligence of all committers in running them and correcting minor accidents as
> they occur.  I only mention this to bring up the possibility of brainstorming
> alternate ways to use the tests to their full advantage.  If we're using them
> to their full potential now, that's fine.

As a short-term improvement of the situation, how would a "make
test-cage" target sound?  This would be the same as "make test", but
with the coding standards tests (and any other cage-cleanliness
checks) shifted into the new suite and not run by default with "make
test".  At present the perlcritic tests aren't run by default with
"make test"; however, I do run them as part of my standard test
suite.  This proposal shifts the load of coding standards tidyups
onto people such as myself (a load I don't mind bearing for a little
while), which isn't a very efficient way of running things, but it
has the advantage of making the standard "make test" suite run a lot
faster, and hence helps the majority of developers on the project.
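To make the proposal concrete, here's a minimal sketch of a runner
that such a target could invoke (the t/codingstd/ path and the
standalone script are assumptions for illustration, not existing
Parrot code):

    #!/usr/bin/perl
    # Minimal sketch of what "make test-cage" might run: only the
    # cage-cleanliness tests, via the standard harness.
    use strict;
    use warnings;
    use Test::Harness;

    # Hypothetical location for the coding standards tests.
    my @cage_tests = glob 't/codingstd/*.t';
    die "No cage tests found\n" unless @cage_tests;

    runtests(@cage_tests);

A "make test-cage" rule would then just call this script (or pass the
same glob to whatever harness invocation "make test" already uses).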

A longer-term strategy would be to check for files changed since the
developer's last "svn up" and to run the coding standards tests only
over those files.  We could do this by storing checksums of each
file, but that has the disadvantage of slowing down the base "make
test" process (by how much, though, is a good question).  Another way
would be to locally store the modification time of each file listed
in MANIFEST, compare that against each file's current modification
time, and run any newly changed files through the coding standards
tests.  These are just the first ideas that came to mind (the mtime
variant is sketched below); does anyone else have a more
elegant/efficient solution?
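By way of illustration, here's a rough sketch of the mtime
bookkeeping (nothing here is existing Parrot tooling; the cache file
name is made up for the example):

    #!/usr/bin/perl
    # Rough sketch: remember the mtime of each file in MANIFEST, and
    # print the files that changed since the last run, so only those
    # need to go through the coding standards tests.
    use strict;
    use warnings;

    my $cache = '.codingstd_mtimes';    # hypothetical cache file

    # Load the mtimes recorded on the previous run, if any.
    my %old;
    if (open my $in, '<', $cache) {
        while (my $line = <$in>) {
            chomp $line;
            my ($file, $mtime) = split /\t/, $line, 2;
            $old{$file} = $mtime;
        }
    }

    # MANIFEST lists one tracked file per line (first whitespace-
    # separated field); skip comments and blank lines.
    open my $manifest, '<', 'MANIFEST'
        or die "Can't read MANIFEST: $!";
    my @files = map  { (split ' ')[0] }
                grep { /\S/ && !/^\s*#/ } <$manifest>;

    # Record fresh mtimes and collect files that are new or changed.
    my @changed;
    open my $out, '>', $cache or die "Can't write $cache: $!";
    for my $file (@files) {
        next unless -f $file;
        my $mtime = (stat _)[9];    # reuse the stat buffer from -f
        push @changed, $file
            if !defined $old{$file} || $old{$file} != $mtime;
        print {$out} "$file\t$mtime\n";
    }

    # These are the files to feed to the coding standards tests.
    print "$_\n" for @changed;

The same loop could compute a checksum (say, via Digest::MD5) instead
of reading the mtime, trading speed for robustness against tools that
preserve timestamps.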

I'm all for reducing the runtime of the base test suite, and for
having proper coverage of standards tests (run by more people than
just me :-) ); we just need to find a good solution.

Paul
