> Mandatory, Managed Code Reviews is basically where each individual
> change is reviewed by another member of the team before being committed.  
> We used it at Blackstar (AFAIK they still do, Tony?)  

I believe so, but haven't been there for over 6 months now, so I'm not
sure ...

> So if the management of the review system gets lax, the whole system
> comes apart.  Therefore it fails the "hard to screw up" requirement.
> (You guys solve any of this, Tony?)

I think we learnt many things from doing this. I've learnt more things
doing it in other places since. I know there are many more things still
to learn.

The first big lesson was:

  - automate whatever review standards you can

When we first introduced this, most of the reviews were for very
basic things: you forgot strict or warnings. You didn't untaint that
variable. You're not following our coding standards there.

This wasted a lot of time in to-ing and fro-ing between developer and
reviewer, and also meant that most reviewers, once those initial
problems were fixed, found it much more difficult to re-review the
change and catch the _real_ problems.

So we built as much of that as possible into our build script, so it
could be caught automatically. This hugely improved the feedback loop:
a developer could get instant feedback on the sort of things that
would otherwise get an immediate 'fail' from a reviewer.
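
To give a flavour of the sort of check I mean, here's a sketch (not
our actual script - the lib/ layout and the exact rules here are
assumptions) that refuses a build where a module has skipped strict
or warnings:

    #!/usr/bin/perl
    # Build-time check: fail if any module under lib/ is missing
    # 'use strict' or 'use warnings'.
    use strict;
    use warnings;
    use File::Find;

    my @failures;

    find(sub {
        return unless /\.pm$/;
        open my $fh, '<', $_ or die "Can't read $File::Find::name: $!";
        my $source = do { local $/; <$fh> };
        close $fh;
        push @failures, "$File::Find::name: missing 'use strict'"
            unless $source =~ /^\s*use\s+strict\b/m;
        push @failures, "$File::Find::name: missing 'use warnings'"
            unless $source =~ /^\s*use\s+warnings\b/m;
    }, 'lib');

    if (@failures) {
        print "$_\n" for @failures;
        exit 1;    # non-zero exit fails the build
    }

Run that as part of the build (or from a commit hook) and the
developer finds out in seconds, rather than when the review comes
back.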

Over time the build script got more and more elaborate as it tried to
catch more and more of these. (Mike, did you ever get your stuff finished 
to catch implicit dependencies, where Module Foo didn't do a 'use Bar'
but everything worked ... for now ... because the script calling it
did...?)
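
The rough idea, for anyone who hasn't seen it, is to compare the
package names a file calls against the ones it loads itself. A crude
sketch - illustration only, a real check needs to be much smarter
about inheritance, exported functions, string eval and the like:

    #!/usr/bin/perl
    # Crude sketch: report packages that are called as classes in a
    # file but never use'd or require'd by that file.
    use strict;
    use warnings;

    my $file = shift or die "usage: $0 Module.pm\n";
    open my $fh, '<', $file or die "Can't read $file: $!";
    my $source = do { local $/; <$fh> };

    # Packages this file loads explicitly
    my %loaded = map { $_ => 1 }
        $source =~ /^\s*(?:use|require)\s+([\w:]+)/mg;

    # Packages this file calls as classes (Foo::Bar->method)
    my %called = map { $_ => 1 }
        $source =~ /(?<![\$\@\w:])([A-Z][\w:]*)\s*->\s*\w+/g;

    for my $pkg (sort keys %called) {
        next if $loaded{$pkg};
        next if $source =~ /^\s*package\s+\Q$pkg\E\b/m;  # defined here
        print "$file calls $pkg but never loads it\n";
    }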


The biggest problem with this is that 

The next big lesson was:

  - test as much as possible

One idea that I like from XP is that your code works when it passes
its tests. Accepting that greatly simplifies review processes as it
removes the 'does this code do the right thing' part (which is the part
that most developers are afraid of - they don't want to give their seal
of approval to something that might turn out to have been wrong!)

If you've also got (mostly) automated checking that code meets
standards, all you're left with is:
  - are the tests valid?    and
  - is the code as good as possible?

The first, for whatever reason, doesn't seem to cause developers as
much of a mental block as checking whether the code is valid (assuming
they're used to writing tests and you've got a good house style for
these). The 'customer' can also help with this in terms of acceptance
testing etc. [At BlackStar we never really cracked the testing of web
pages, leaving most of that for modules, but at Kasei we've developed
a pretty good system for doing that, which I keep meaning to write up
("useful uses of 'local' in testing")].
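
The flavour of it, though, is that 'local' lets a test temporarily
swap out a subroutine - say, the one that talks to the database - and
have the original quietly restored at the end of the block. A made-up
example (this isn't our code; the My::Page package is invented purely
for illustration):

    # t/local.t
    use strict;
    use warnings;
    use Test::More tests => 2;

    {
        package My::Page;
        sub products { die "would talk to the database here\n" }
        sub render {
            my $class = shift;
            my $html  = '';
            $html .= "<li>$_->{name}</li>" for $class->products;
            return $html;
        }
    }

    {
        # Replace products() with a canned version; the original
        # comes back automatically when this block ends.
        local *My::Page::products = sub { ({ name => 'Widget' }) };

        my $html = My::Page->render;
        like($html, qr/Widget/, 'page rendered with stubbed products');
    }

    # Outside the block the real, database-hungry version is back.
    eval { My::Page->render };
    like($@, qr/database/, 'original products() restored afterwards');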

The second is in the refactoring sense of "Is there a better way?". Is
this method in the correct class? Do I need to rename any of my variables
or methods or classes to express more clearly what this is about? etc.
It assumes that the code is correct, and looks solely at how to make it
better. 

The other significant thing this buys is that it's much easier to make
trade-offs on where to stop. If you're hitting a deadline it's easier to
skip a few refactorings if you believe the code to be otherwise correct.

None of this really helps with the "if people don't actually do it,
it's not going to work" problem, but an automated build check helps
enough to minimise the worst of this problem - which is about as good
as you can get. I assume the question is much more "what can we put in
place to improve our chances of having a useful process when we
actually do follow it?". If so I'd suggest:

  - automate checking of whatever is possible (must have real-time
    feedback)
    - don't allow any code through without decent test coverage
      (there are much better ways of doing that now...)
    - don't allow any code through without decent documentation
      (Pod::Coverage is wonderful for this - see the sketch after
      this list)
    - test as much site policy as possible. It's well worth investing
      in writing this. (Of course it would be even better if we had a
      nice suite of modules on CPAN for this sort of thing with lots of
      switches that can be flicked to set what local policy is ...)
  - break the review process into distinct concepts. Have a separate
    review (even if by the same person at the same time!) for each
    thing you're testing: tests, documentation, naming, clarity etc.
    Look at each in turn, and give feedback on each.
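
For the documentation check, something along these lines drops
straight into the build script (a sketch only - the module list is a
placeholder, and in practice you'd walk lib/ and turn file names into
package names):

    use strict;
    use warnings;
    use Pod::Coverage;

    my $failed = 0;
    for my $module (qw(My::Module My::Other::Module)) {
        my $pc       = Pod::Coverage->new(package => $module);
        my $coverage = $pc->coverage;

        if (!defined $coverage) {
            warn "$module: unrated (", $pc->why_unrated, ")\n";
            $failed++;
        }
        elsif ($coverage < 1) {
            warn "$module: undocumented: ", join(', ', $pc->naked), "\n";
            $failed++;
        }
    }
    exit($failed ? 1 : 0);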

In short, maximise your Kwalitee checking to free time for the Quality
checking.

Tony

    
