Aled,
> *Light-weight Review*
> I agree with you - where PRs look sensible, low-risk and unit tested we should take more risk and merge them sooner (even if there's not been time for a thorough review by the community).
I'm saying something a little different: we should _try_ for a thorough review of *all* PRs, which I think is uncontroversial.
> What should we do with a PR when we aren't able to review things in as much depth as we'd like?
This is the question I'm asking, to ensure we handle PRs in a good time frame. To summarise, I'm suggesting we make more of an effort, and that we fall back to an "eyeball test" after a certain period of time (7 days max, less if it's simple?), triaging the review to look at:
* clearly helpful & not obviously wrong
* low-risk / doesn't break compatibility
* good test coverage (and passing)
* likely to be maintained

If these can't be confirmed, the reviewer should say what they have doubts about, maybe suggest what the contributor could do to help, or appeal to other committers more familiar with the area. In any case, get a discussion going.
If these do seem confirmed, I still suggest we don't merge immediately in the absence of a thorough review, but ping specific committers likely to be interested. If no thorough review after a few more days, _then_ merge.
I'm not suggesting any heavyweight process, but just enough to put healthy forces on us as reviewers.
This is not a theoretical question, nor is it restricted to the YOML PR. We're pretty good with most of our PRs and reviews, but there are plenty of examples where we've dropped the ball. Look at [1], which is tiny and tests-only and took nine days to get a review. Or [2], which yes combines a few related-but-different things, but is by no means a hard thing to review. It would take far more time to split that up into 3 branches, test those locally, then babysit each of those PRs than it would take for a reviewer to just get on with a review. It's been sitting there for 2 months and doesn't even have a comment.
This is not a good state of affairs. Turning this discussion into guidelines about PRs misses the point. If any change to our docs/process comes out of this discussion, I'd like to see the eyeball test added to our review process.
Finally, re YOML: there is an ML thread, started when the issue was raised. There was chatter beforehand, but it wasn't an easy thing to discuss until there was prototype code. The point is that for 7 months there have been no comments in any of these places, even after I ran a public session explaining it (and private sessions), and even though the PR itself says how it can be tested and how it is insulated from the rest of the code (Thomas, I think you missed that point). There is an ML thread and an open issue, either of which would be a fine place to comment, but no one is commenting -- so the suggestion of a new, separate ML thread to solve the problem is bizarre. I say this is _exactly_ the situation where we need guidelines for how we handle PRs that are not being reviewed in a timely way.
Best
Alex

[1] https://github.com/apache/brooklyn-server/pull/600
[2] https://github.com/apache/brooklyn-server/pull/575

On 02/05/2017 19:21, Aled Sage wrote:
Hi Alex,

Interesting question. A few initial thoughts:

*YOML*
YOML (PR #363) is an exceptional case - we should not use that as an example when discussing this meta-question. The PR is 12,000 lines (including comments/notes), and was not discussed on the mailing list before it was submitted. I suggest we have a separate email thread specifically about merging that PR, as there are certainly very useful things we'd get from YOML.

*Small PRs*
We should strongly encourage small, focused PRs on a single thing, wherever possible. That will make review faster, easier and lower risk. For such PRs, we should strive for review+merge within days (7 days being an upper bound in normal circumstances, hopefully). We can add some brief guidelines to this effect at http://brooklyn.apache.org/developers/how-to-contribute.html

*Changing low-level Brooklyn*
PRs that change low-level things in Brooklyn (e.g. changes to config-inheritance etc) deserve thorough review. They are high-risk, as the unforeseen consequences of the changes can be very subtle, and can break downstream blueprints that rely on old ways of doing things.

*Light-weight Review*
I agree with you - where PRs look sensible, low-risk and unit tested we should take more risk and merge them sooner (even if there's not been time for a thorough review by the community).

Aled

On 02/05/2017 15:50, Duncan Johnston Watt wrote:

Hi Alex

This is probably covered already but I guess there needs to be an impact assessment (by submitter?) before something is waved through by default.

Best
Duncan

On 2 May 2017 at 06:52, Alex Heneveld <[email protected]> wrote:

Hi Brooklyners-

As many of you know, my YOML PR #363 [1] has been open for a while. This sets up a foundation for giving better documentation and feedback and hugely simplifying how we do our parsing. However it's a very big PR.
I'm eager to have people spend some time using it and ideally extending it -- but here I wanted to raise a meta-question:

*What should we do with a PR when we aren't able to review things in as much depth as we'd like?*

One option -- call it (A) -- is to say that if we can't review things thoroughly in a reasonable timeframe, we do a lighter review and, if the PR looks promising and safe, we merge it. The other option -- call it (B) -- is to leave PRs open for as long as it takes for us to do the complete review.

I think most people have been approaching this with a mindset of (B), and while that's great for code quality and shared code understanding, if we can't deliver on that quickly, it's frankly anti-social. The contributor has to deal with merge conflicts (and the rudeness of his or her contribution being ignored), and Brooklyn loses velocity. My PR is an extreme example, but many have been affected by slow reviews, and I think the expectation that reviews have to be so thorough is part of the problem: it even discourages reviewers, as if you're not an expert in an area you probably don't feel qualified to review.

We have good test coverage so the product risk of (A) is small, and we have great coders so I've no worry about us being able to solve problems that (A) might introduce. We should be encouraging reviewers to look at any area, and we need to solve the problem of slow reviews.

*I propose that the standard we apply is that we quickly either merge PRs or identify what the contributor needs to resolve.*

I'm all for thorough reviews and shared understanding, but if we can't do this quickly I suggest we are better to sacrifice those things rather than block contributions, stifle innovation, and discourage reviews by insisting on standards that we struggle to sustain.
As a general rule of thumb, maybe something like:

(1) After 7 days of no activity on a PR we go with an "eyeball test"; unless the following statement is untrue we say:

/I haven't done as much review as I'd like, but the code is clearly helpful, not risky or obviously wrong or breaking compatibility, it has good test coverage, and we can reasonably expect the contributor or committers to maintain it. Leaving open a bit longer in case someone else wants to review more, but if nothing further in the next few days, let's merge./

(If there are committers who are likely to be specifically interested, call them out as CC.)

(2) After 3 more days, if no activity, merge it.

And we encourage _anyone_ to review anything. If the above response is the baseline, everyone in our community is qualified to do it or better, and we'll be grateful!

Best
Alex

[1] https://github.com/apache/brooklyn-server/pull/363
