On 11.11.10 08.28, Mcdonald Jason (Nokia-MS-Qt/Brisbane) wrote:
> Hi Tor Arne,
>
> That's an excellent summary.  I have questions/comments on a few of the
> early-warning tests (see below), but the philosophy and the high-level
> model both look good to me.

Thanks.

>>    - If the patch contains a new test, run the test before applying the
>>    code change, to verify that the test actually fails without the patch
>
> Depending on the nature of the patch, the test may fail to compile (or
> worse, loop infinitely or crash the test environment) when run without
> the rest of the patch.  The test runner will need to be a little bit
> clever to handle these cases.

Good point. I guess we need to set up some of these bots in a sandboxed 
way that allows them to run arbitrary code without any risk of messing 
up anything beyond that one run.
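As a rough sketch of that step (assuming a POSIX environment and a test 
invocable as a plain command; the function and result names here are made 
up), the bot could run the new test against the unpatched tree under a 
timeout, treating a hang as its own result instead of letting it take 
down the runner:

```python
import subprocess

def run_new_test_without_patch(test_cmd, timeout_sec=300):
    """Run a patch's new test against the unpatched tree.

    The useful early-warning result is a failure: it shows the test
    exercises the bug the patch claims to fix. A pass is suspicious
    (the test doesn't actually test the fix), and a hang or crash must
    not take the bot down, hence the timeout.
    """
    try:
        result = subprocess.run(test_cmd, capture_output=True,
                                timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        return "hang"              # infinite loop: kill it, flag a human
    if result.returncode == 0:
        return "unexpected-pass"   # test should fail without the patch
    return "expected-fail"         # good: the test demonstrates the bug

# Example: a command that always fails stands in for the new test.
print(run_new_test_without_patch(["false"]))   # -> expected-fail
```

A compile failure of the test itself would surface the same way, as a 
non-zero exit from the build step, so it fits the same three-way verdict.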

>>    - Check that the commit message conforms to the template
>
> The current legal checks (for prototype name leakage and names of
> employees who wish to remain anonymous) will need to be retained for
> changes that are pushed from inside Nokia.  I think those particular
> checks need to be done earlier though, before external people can see
> the patch.
>
> In a way, we'll need to apply some stricter checks to stuff pushed from
> within the company than to stuff pushed from outside, due to various
> legal obligations the company must uphold (e.g. the company is legally
> forbidden from publishing employee names without their consent).

Agreed. This is something we need to take care of on our side as part of 
the "how our developers push changes out into the open".

>>    - Check that there's only one author in the patch-set
>
> That restriction seems a little odd to me.  Can you explain the reason
> behind it?

I was not very clear on that one. It's less about actually limiting the 
number of authors in a set and more about making sure that we have a 
signed contributor agreement for every contributor in a patch-set.

Currently that's handled by having people sign the contribution 
agreement before allowing them to create a merge-request, and we only 
allow merge-requests that contain contributions from the same person who 
submitted the merge-request, hence the "one author"-rule.

But, in the future we might have a completely stand-alone review system, 
where we could ensure that people have signed the CA by enforcing it at 
sign-up. Or we could have a bot that scans every author in a patch-set, 
verifies the name/email against the list of signed CAs, and complains 
otherwise.

Either way, it's something we need to deal with.
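A sketch of that CA-scanning bot (the registry of signed CAs and all the 
names here are hypothetical; in practice the list would come from 
wherever the signed agreements are stored):

```python
import subprocess

def patch_set_authors(revision_range):
    """Collect the distinct author emails in a patch-set.

    Equivalent to: git log --format=%ae <range>
    """
    out = subprocess.check_output(
        ["git", "log", "--format=%ae", revision_range], text=True)
    return set(filter(None, out.splitlines()))

def unsigned_authors(authors, signed_cas):
    """The authors the bot should complain about: no CA on file."""
    return sorted(set(authors) - set(signed_cas))

# Example with a made-up CA registry:
signed = {"alice@example.com", "bob@example.com"}
print(unsigned_authors({"alice@example.com", "mallory@example.net"},
                       signed))
# -> ['mallory@example.net']
```

The bot would run `unsigned_authors(patch_set_authors(...), signed)` and 
r- the review (or just comment) if the result is non-empty.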

> I would see the EWS as a means to catch errors that are obvious and/or
> are easy/cheap to find automatically -- the kind of things that a human
> reviewer would likely see as wasting their time:
>
> - check that there are no new compiler warnings,
> - check that new classes/functions have docs,
> - check that there are no new qdoc warnings,
> - run static checking tools, e.g. Coverity, if it runs fast enough.

Absolutely agree. The idea of the EWS is to catch things so that the 
human reviewers don't have to waste precious cycles on it.
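For instance, the "no new compiler warnings" check could diff the warning 
sets of a baseline build log and a patched build log. A minimal sketch, 
assuming gcc/clang-style diagnostic lines (the regex would need tuning 
for real logs):

```python
import re

# Matches a typical gcc/clang diagnostic line; real logs vary in their
# file:line:column prefixes between compilers and versions.
WARNING_RE = re.compile(r"^.*warning:.*$", re.MULTILINE)

def new_warnings(baseline_log, patched_log):
    """Warnings in the patched build that were not in the baseline.

    Comparing sets rather than counts avoids flagging a patch that
    merely moves an existing warning around in the log.
    """
    before = set(WARNING_RE.findall(baseline_log))
    after = set(WARNING_RE.findall(patched_log))
    return sorted(after - before)

baseline = "foo.cpp:10: warning: unused variable 'x'\n"
patched = baseline + "bar.cpp:3: warning: comparison is always true\n"
print(new_warnings(baseline, patched))
# -> ['bar.cpp:3: warning: comparison is always true']
```

The qdoc-warning check would be the same diff with a different regex.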

But, if you apply that principle to the integrators as well, you might 
want to "summon" a bot to do static analysis, or a full build, or a 
build on a slow platform, so that you (as a contributor) can find and 
fix any errors _before_ the patch gets through to a 
maintainer/integrator, possibly wasting his or her time by blowing up 
in the final quality gate.

In that sense the EWS is more like a network of try-bots.

And if you think about it, the final quality gate is basically nothing 
more than a "review step" that summons all the slow, full-build, all or 
nothing-bots. I think we should keep this in mind when designing the 
solution, so that we don't end up with two completely separate systems.

A bot should be able to do its job, regardless of how it's initiated, 
either by being summoned from a review (manually or automatically), or 
by being triggered from the final quality gate.
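One way to keep that property is to make the trigger pure bookkeeping 
that never reaches the bot's logic. A sketch, with a toy style check 
standing in for a real one (all names here are invented):

```python
class Bot:
    """A check that behaves identically however it is started."""
    name = "base"

    def run(self, patch):
        raise NotImplementedError

class StyleBot(Bot):
    name = "style"

    def run(self, patch):
        # Toy check: flag trailing whitespace (stand-in for a real tool).
        bad = [i for i, line in enumerate(patch.splitlines(), 1)
               if line != line.rstrip()]
        return ("fail", bad) if bad else ("pass", [])

def dispatch(bot, patch, trigger):
    """Run a bot. The trigger ('review' or 'quality-gate') is recorded
    for reporting but is never an input to the bot itself."""
    verdict, details = bot.run(patch)
    return {"bot": bot.name, "trigger": trigger,
            "verdict": verdict, "details": details}

bot = StyleBot()
# Same patch, same verdict, regardless of who summoned the bot:
print(dispatch(bot, "int x;\n", "review")["verdict"])        # -> pass
print(dispatch(bot, "int x;\n", "quality-gate")["verdict"])  # -> pass
```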

> Finally, I think there needs to be a way for a human arbiter to override
> the decision of the EWS in the (hopefully rare) cases where the EWS will
> reject a valid change.

Right. I think the bots should be mostly advisory, with no real power 
to accept patches (definitely not) or to reject them (perhaps only if 
the apply fails). This is how things work in the WebKit project:

https://bugs.webkit.org/show_bug.cgi?id=49200

Next to each patch you'll see the status bubbles, showing green, red or 
purple depending on the state of each bot. The "review patch" interface 
has the same bubbles at the bottom, next to where you set the review 
flag:

https://bugs.webkit.org/attachment.cgi?id=73375&action=review

So, as a reviewer you're free to review a patch while ignoring the bots, 
but the social contract says you really should wait for them.

If you imagine a review system where each review has a number of 
identified reviewers, and each reviewer gives his or her opinion by 
checking off or r-/r+'ing the patch (see [1] and [2] for examples) 
before it can be integrated, then the bots are really just another 
reviewer.

Some bots will CC themselves to every single review (or be CC'ed by the 
"apply-bot" if the patch applies), while other bots will have to be 
explicitly CC'ed (the slow ones, in the try-bot scenario from above). 
Some human reviewers will be CC'ed by the contributor because he or she 
knows the reviewer should look at the code; some will CC themselves 
because they keep track of reviews and notice one they want to give 
feedback on; some will be "summoned" by other reviewers because they 
have expert knowledge in that area; and so on. In the end, the sum of 
all these (possibly weighted) reviewers is what decides whether a patch 
is r+ or r-.
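That last idea could be sketched as a simple weighted vote, where bots 
and humans are indistinguishable entries in the same tally (the 
threshold, weights and names here are invented for illustration):

```python
def review_outcome(votes, weights, threshold=2):
    """Combine (possibly weighted) reviewer votes into a decision.

    votes: {reviewer: +1 or -1} -- bots are just reviewers here.
    Reviewers without an explicit weight count as 1. Hard vetoes or
    mandatory reviewers could be layered on top of this sketch.
    """
    score = sum(weights.get(r, 1) * v for r, v in votes.items())
    return "r+" if score >= threshold else "r-"

votes = {"maintainer": +1, "ews-bot": +1, "drive-by": -1}
weights = {"maintainer": 2}          # maintainer's opinion counts double
print(review_outcome(votes, weights))  # -> r+  (score 2+1-1 = 2)
```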


Tor Arne


[1] 
http://www.atlassian.com/software/crucible/images/tour/full/code-review-time-tracking.png

[2] 
http://gerrit.googlecode.com/svn/documentation/2.1.5/access-control.html#category_CVRW
_______________________________________________
Opengov mailing list
Opengov@qt-labs.org
http://lists.qt-labs.org/listinfo/opengov