On Fri, Apr 23, 2021 at 9:40 AM Daniel Shahaf <d...@daniel.shahaf.name> wrote:
>
> Nathan Hartman wrote on Thu, 22 Apr 2021 21:41 +00:00:
> > Not knowing whether / how many people have reviewed a particular
> > commit is, as was said elsewhere, a silent failure mode of the CTR
> > (commit-then-review) convention.
> >
> > Do we want to try switching to a RTC (review-then-commit) convention?
>
> If we do switch to RTC, we might want to also retroactively ensure all
> commits post 1.14.x's branching have been reviewed by at least two pairs
> of eyes each.
>
> However, I wonder whether there's a smaller change we can do first,
> rather than a full-blown s/CTR/RTC/ flag day.  For instance, how
> about we agree, for the next N weeks, to make our commit reviews
> explicit?  I.e., to explicitly say "I've reviewed this commit and
> found no issues"?  (Call this Commit-Then-Explicit-Review.)

I'm +1 to this idea. It sounds to me like a reasonable middle ground
that would let us gain some insight before making bigger
workflow-altering changes.

There is, of course, the risk of the experiment altering its own
results; the Heisenberg uncertainty principle comes to mind, as does
the slightly less well-known (but perhaps more relevant) Hawthorne
Effect.

But I'm willing to accept that caveat.

> At the end of the N weeks, we can look at the experience and data we'll
> have then, and decide how to continue: whether to revert to CTR, switch
> to RTC, keep the new system and implement supporting tooling, etc.

Regarding supporting tooling, it would be A Good Thing if we could
have static analysis or, perhaps better yet, an instrumented build to
catch certain common classes of errors during automated testing.
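
As a strictly hypothetical sketch (the file and function names below
are made up, not from our tree), here is the kind of off-by-one that
tends to slip past review, but that an AddressSanitizer-instrumented
build (compiling with -fsanitize=address) would flag the first time
the test suite exercises it:

    /* asan_demo.c -- hypothetical example, not from the Subversion tree. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Copy SRC into a freshly allocated buffer.  The off-by-one here
       (no room for the terminating NUL) is easy to miss in review. */
    static char *
    dup_string(const char *src)
    {
      size_t len = strlen(src);
      char *dst = malloc(len);      /* BUG: should be len + 1 */

      memcpy(dst, src, len + 1);    /* writes one byte past the buffer */
      return dst;
    }

    int
    main(void)
    {
      char *copy = dup_string("hello, world");

      printf("%s\n", copy);
      free(copy);
      return 0;
    }

Compiled normally (cc -O2 asan_demo.c), this usually runs and prints
the string; compiled with cc -g -fsanitize=address, the very same run
aborts with a heap-buffer-overflow report pointing at the memcpy.
That is the kind of automated catch I have in mind.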

Nathan
