Way Back When, somewhere in the vicinity of
http:[EMAIL PROTECTED]/msg00003.html and
http:[EMAIL PROTECTED]/msg00005.html we had
some ideas about developing a sane patching process.  I have some very
definite ideas now.

Currently, it seems to go something like this:

    - Someone has an idea, or maybe looks at the bug list.
    - Patch is produced and sent to p5p.
    - Maybe a new test is added
    - Maybe the docs are added/changed
    - Maybe the tests are run (if so, usually only on one architecture)
    - Pumpking decides if it's applied (possibly after running tests)
    - Stuff breaks in the next release, people complain

The size of the maybes varies wildly according to patcher and
pumpking.  This is how most software projects work, and it sucks.

    - Bugs and interesting feature ideas are forgotten
    - Low quality patches slip through
    - The Pumpking is responsible for review *and* integration
    - Docs are not kept up to date
    - Tests are not kept up to date
    - Quality on obscure architectures degrades

It can be fixed.  The key is automation and enforcement.  Automation
of the process and enforcement of review.  It goes something like
this:

    - An open change list is maintained.  This includes bugs AND
      feature ideas.  If someone has a neat idea on p5p and the pumpking 
      likes it, it goes into the list.
    - Someone claims responsibility for a feature/bug (or a set of
      them if they're related).  Several people can claim the same
      things at the same time.  Claims time out if there's no progress.
    - The patch is presented.
    - The patch is scanned for doc and test patches.  Bug fixes
      *require* a test patch if one isn't already failing (this will
      be noted as part of the change).  New features *require* a
      doc patch.
    - The test suite is run on a server farm (SourceForge, Perl labs,
      etc.).  Bug fixes are run before (fail) and after (pass) the
      patch.  Test coverage is checked to make sure the new tests
      cover the code patch.
    - A reviewer (or reviewers) takes responsibility for the patch.
      Reviewers are appointed by the Pumpking.  A reviewer may not
      review their own code.  The reviewer decides if it's in good
      enough shape to go in, based on a coding standard.  Rejections
      are sent back to the patcher and to p5p.  Any exceptions to the
      coding standard which are allowed through are also posted to p5p.
    - The Pumpking performs final review and integration into the dist.
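To make the claim step concrete, here's a minimal sketch of the
bookkeeping it implies.  The 14-day timeout and all the names here
are made up for illustration; nothing is decided policy:

```perl
#!/usr/bin/perl -w
use strict;

my $CLAIM_TIMEOUT = 14 * 24 * 60 * 60;   # assumed: claims expire after 14 days

my %claims;   # change id => { claimant => timestamp of last progress }

# Several people may claim the same change at the same time.
sub claim {
    my ($change, $who) = @_;
    $claims{$change}{$who} = time;
}

# Any progress (a posted patch, a status note) refreshes the claim.
sub progress {
    my ($change, $who) = @_;
    $claims{$change}{$who} = time if exists $claims{$change}{$who};
}

# Drop claims that have shown no progress inside the timeout window.
sub expire_claims {
    my $now = shift || time;
    for my $change (keys %claims) {
        for my $who (keys %{ $claims{$change} }) {
            delete $claims{$change}{$who}
                if $now - $claims{$change}{$who} > $CLAIM_TIMEOUT;
        }
        delete $claims{$change} unless keys %{ $claims{$change} };
    }
}
```

The point is just that expiry can be mechanical; the tool nags or
releases the claim, no human has to chase people.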

That's just a rough overview; if you got lost in there, it's:  Patch,
Test, Review, Integrate.  Please comment/add/take away.


This is based on the Aegis CASE tool which I've had to use lately.
While Aegis is very good, it has some fundamental issues when it comes
to being used with perl and will have to be reimplemented (I need a
name).

It has its own version control system (fhist, but it can use RCS).  It
*might* be able to work with a CVS repository, but probably not with
Perforce and after seeing the fallout of the last "what version
control should we use" discussion, I don't want to pick that scab again.

For distributed work, it relies on NFS which is right out.  It has
some tools for allowing remote work, but they are clumsy.  The whole
thing is architected around the assumption of programmers working in
an office.

There are a few other annoyances, but those are the fundamental ones.
However, my experience is short, so if anyone else knows better,
please say so.  I'd *really* rather not have to reimplement it.

http://www.canb.auug.org.au/~millerp/aegis/aegis.html


Some potential problems/rough spots:

- Experience shows that the review process can get tedious if not kept
  lean.  Developers can become discouraged from performing small changes
  because they don't want to have to go through a review and get
  rejected on some small point.  

  This can be alleviated by keeping the coding standard flexible (to
  discourage review lawyering) and by allowing reviewers to make minor
  changes to the patch (subject to approval by the developer) to avoid
  having to go through another cycle just to fix a typo.  More ideas are
  welcome.

- New developers will get scared off by the process, older ones will be
  annoyed by it.

  First off, the thing's got to be really, really simple to use.  It
  might even make sense to distribute a perlpatch utility similar
  to perlbug.
  Patches posted directly to p5p will be strongly discouraged, but will 
  happen.  Those will be picked up by a champion and run through the 
  process, similar to the mentor system already in place.  The champion
  will also teach the patcher the system and/or convince them to use it.
  Repeat offenders will eventually be denied their patches.  You might
  be a brilliant programmer, but in a project of any scope it's just as
  important to Play Well With Others.
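
To make the perlpatch idea concrete, here's a rough sketch of the
client-side scan such a utility could run before submitting anything.
The file-layout heuristics (t/ means tests, *.pod means docs) are my
assumptions about the source tree, not a spec:

```perl
#!/usr/bin/perl -w
use strict;

# Report which required pieces a unified diff appears to carry,
# judging purely by the filenames it touches.
sub scan_patch {
    my $diff = shift;
    my %has;
    $has{tests} = 1 if $diff =~ m{^\+\+\+ \S*\bt/}m;              # touches t/
    $has{docs}  = 1 if $diff =~ m{^\+\+\+ \S*\.pod\b}m;           # touches pod
    $has{code}  = 1 if $diff =~ m{^\+\+\+ \S*\.(?:c|h|pm|pl)\b}m; # touches code
    return \%has;
}

# A bug fix with no test, or a new feature with no docs, gets bounced
# before it ever reaches a reviewer.  Returns '' if the patch is fine,
# otherwise the reason for rejection.
sub check_patch {
    my ($kind, $diff) = @_;
    my $has = scan_patch($diff);
    return "bug fixes require a test patch"
        if $kind eq 'bug' and not $has->{tests};
    return "new features require a doc patch"
        if $kind eq 'feature' and not $has->{docs};
    return '';
}
```

Catching the missing-test case on the patcher's own machine, before
the mail is even sent, is what keeps the review step from turning
into nagging.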


Something probably already does all this.  I mentioned that Aegis
does, but it has other problems.  Does anyone know of something (or of
pieces we can apply to this)?
