+1 to eyeballing

Some other random thoughts:

- But let's put some emphasis on that third point about good test coverage.
It is a lot less unnerving to accept a non-trivial PR with good test
coverage than one without, and just reading the tests is often the best way
to figure out what a PR is actually doing.
- "master" _is_ pretty stable, and it would be a shame to get away from
that.
- I think some guidelines would help. At the end of the day each PR is a
judgement call, but it's about setting expectations - e.g. reviewers don't
necessarily have to figure out all the implications and possible problems a
PR might introduce, just make an "honest effort" to review what they see.




On Wed, 3 May 2017 at 23:52 Richard Downer <[email protected]> wrote:

> All,
>
> A lot of very good points going on here - thank you to everyone who has
> contributed to this conversation. It's clear that we are doing a disservice
> to our contributors and things need to change. We need to be merging (or,
> rarely, rejecting) PRs consistently and promptly. Obviously, the
> difficulty comes in how we do this while keeping quality up.
>
> After reading the debate so far, I think Alex has come up with a simple and
> sensible suggestion:
>
> On 03/05/2017 02:13, Alex Heneveld wrote:
>
> > I'm suggesting we make more of an effort, and we fall back to an
> > "eyeball test" after a certain period of time (7 days max, less if it's
> > simple?), triaging the review to look at:
> >
> > * clearly helpful & not obviously wrong
> > * low-risk / doesn't break compatibility
> > * good test coverage (and passing)
> > * likely to be maintained
> >
> > If these can't be confirmed, the reviewer should say what they have
> > doubts about, maybe suggest what the contributor could do to help, or
> > appeal to
> > other committers more familiar with an area. In any case get a discussion
> > going.
> >
> > If these do seem confirmed, I still suggest we don't merge immediately in
> > the absence of a thorough review, but ping specific committers likely to
> > be interested.  If no thorough review after a few more days, _then_
> > merge.
> >
> > I'm not suggesting any heavyweight process, but just enough to put
> > healthy forces on us as reviewers.
> >
>
> And Aled makes some sensible observations around this subject:
>
> On 03/05/2017 09:41, Aled Sage wrote:
>
> > 1. Some PRs don't get enough attention from reviewers.
> >
> > 2. Not as much time as we'd like is spent by the community reviewing PRs.
> >
> > 3. The bar for being merged is too high (for at least some PRs).
> >
> > 4. Some PRs are very hard to review.
> >
> > There are clearly examples of (1) where it's a no-brainer we should have
> > given it more attention, commented and merged. Your proposal for the
> > "eye-ball test" will help for some.
> >
> > However, there are also examples of (1) that are caused by (4) - commonly
> > due to the complexity of the code being changed (e.g.
> > config-inheritance),
> > sometimes made worse by it changing many things (so it's more daunting to
> > review).
> >
> > Given (2) and (3), it suggests we should spread that time across more PRs
> > (i.e. some PRs that are getting a very thorough review could get less,
> > and
> > folk try to use that "saved" time on some other PRs). I'm not convinced
> > that would actually happen in practice though!
> >
> >
> > *Solutions*
> >
> > 1. Add the "eye-ball test" to our reviewer guidelines (as described by
> > Alex) - and adjust our perception of "low-risk" over time, as we see how
> > it works.
> >
> > 2. Guidelines for PRs - what will make the reviewers' job considerably
> > easier, so we can get things merged faster? For example, small and
> > focused
> > PRs, with good test coverage, linking to a jira issue where appropriate.
> >
> > 3. Invest more time simplifying Brooklyn (see below).
> >
> >
> > *Complexity of Brooklyn*
> > I've heard from quite a few people that certain areas of Brooklyn are far
> > too hard to understand. Some people avoid reviewing PRs that touch it,
> > because it's so hard to understand the implications - they focus their
> > time
> > on PRs that they feel more confident to review.
> >
> > This is a symptom of an overly complex project. It would be great to find
> > more time to simplify things - e.g. to delete things from Brooklyn, to
> > make
> > things more consistent, to refactor or even rewrite some sections, and to
> > add more javadoc.
> >
> >
> > *Accepted Limitations to Timely Review*
> > PRs that make far reaching changes to low-level details of Brooklyn will
> > always require a thorough review. Clearly we should try to find the time
> > for that promptly, but should always view them as high-risk.
>
>
> Thomas also makes a good observation which is that `master` would have to
> be considered unstable. This is true, although the committers are still
> acting as gatekeepers, so it won't be a wild west out there - but it does
> mean that taking the bleeding-edge code would be somewhat higher risk. This
> does complicate things when we come near to release time. Some projects use
> Git branching models such as [1]; other projects only accept "new feature"
> PRs at certain points in the release cycle and have strict stabilisation
> phases near the end of a cycle. This is something that we will need to
> discuss.
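>
> To make that concrete, a rough sketch of the release flow under a model
> like [1] (illustrative commands only; the branch names are hypothetical):
>
>     # day-to-day PRs merge to a long-lived integration branch
>     git checkout -b develop master
>
>     # at stabilisation time, fork a release branch; only fixes land there
>     git checkout -b release-x.y develop
>
>     # when stable, merge to master and tag, so master stays release-quality
>     git checkout master
>     git merge --no-ff release-x.y
>     git tag -a x.y
>
> whereas today every PR goes straight to master.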
>
> Alex and Aled have both made some suggestions about how to mitigate that
> risk, and I'd add that we should also push for bug fix PRs to boost test
> coverage to reduce the chance of the problem happening again.
>
> I'd also explicitly make the point that it's generally accepted in open
> source communities that a contributor is expected to make some effort to
> make their contribution easy to review and accept, and can expect pushback
> if their contribution is poorly structured or neglects tests, for example
> (Alex alludes to this as "maybe suggest what the contributor could do to
> help" and Aled states it explicitly as "guidelines for PRs"). Setting up a
> list of rules could be seen as strict/inflexible and possibly daunting for
> a new committer, but there is an advantage in setting down some best
> practices (an expansion of what we already have at [2]) and being flexible
> in their interpretation. Where we are currently failing here is that when
> faced with a difficult contribution, we don't push back to the contributor
> and make it clear what we would like to see - instead the contribution is
> just ignored.
>
> To add some of my own opinions: this is just software, it's supposed to be
> malleable and changing, and we have SCM and tests as a safety net. Let's
> take some more risks. Along the way we'll inevitably make mistakes and
> things will break, sometimes at the worst possible time - accept that this
> is normal, and learn from the experience. We're just making a release now
> so we've got a couple of months to experiment, during which time we can
> tweak our interpretations of risk etc. and converge on a release-quality
> build.
>
> While some of the surrounding details need to be worked out (such as how
> we do a release if master is unstable):
>
> *do we feel like we are converging on an acceptable position based around
> Alex's eyeball review concept?*
>
> Thanks
> Richard.
>
>
> [1] http://nvie.com/posts/a-successful-git-branching-model/
> [2]
> https://brooklyn.apache.org/developers/how-to-contribute.html#pull-request-at-github
>
>
> On 3 May 2017 at 17:38, Sam Corbett <[email protected]> wrote:
>
> > I used GitHub to check the number of pull requests we've merged to a few
> > of our projects over the last year in two month-chunks [1]. Each list
> > begins today and works backwards:
> >
> > brooklyn-server: 86, 79, 85, 68, 95, 78.
> > brooklyn-client: 7, 4, 3, 8, 2, 4.
> > brooklyn-library: 11, 12, 9, 7, 17, 8.
> > total: 104, 95, 97, 83, 114, 90.
> >
> > It's my opinion that these numbers show a consistency in the rate at
> > which
> > we merge pull requests. Since a few committers have recently joined I
> > expect the rate to increase over the next few months. I would like to
> > know
> > other figures like the average length of time between opening a pull
> > request and a committer merging it and the average length of time before
> > the first review comment. I'm sure there are many other interesting
> > metrics
> > that could inform this discussion.
> >
> > Maybe there's just one or two cases that we have managed badly.
> > Identifying those pull requests and understanding the cause of the delay
> > to
> > each one would be valuable.
> >
> > Regarding the complexity of Brooklyn and speaking as someone that has
> > used
> > it in anger (sometimes literally) I would posit that many of us have been
> > caught out several times by the corner cases and unexpected interactions
> > between components that complexity so often implies. This leads to
> > cautious reviews and slow turnaround times. Quick-scan reviews address
> > the speed at which we get code into the product. We need to think harder
> > about how we
> > reduce its overall complexity.
> >
> > Sam
> >
> > 1. I used GitHub's interface to search for "is:pr
> > merged:2017-03-04..2017-05-03", etc.
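> >
> > For anyone wanting to reproduce or extend these figures, the same search
> > is available via GitHub's API - a rough sketch (unauthenticated requests
> > are rate-limited; for real use, authenticate):
> >
> >     # merged-PR count for one two-month window, as above
> >     curl -s "https://api.github.com/search/issues?q=repo:apache/brooklyn-server+is:pr+merged:2017-03-04..2017-05-03" \
> >       | grep '"total_count"'
> >
> > Time-to-merge and time-to-first-comment would instead need the
> > created_at / merged_at fields from the pulls and comments endpoints.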
> >
> >
> >
> > On 03/05/2017 09:41, Aled Sage wrote:
> >
> >> Hi Alex,
> >>
> >> I agree with you that we have a problem with the reviewing of some of
> >> our PRs - it's a bad situation for all concerned when these PRs stay
> >> open for as long as they do.
> >>
> >> I agree with your "eye-ball test" for a certain class of PR. I think
> >> where we probably disagree is where the line is for "low-risk" PRs.
> >> There are some examples where the "eye-ball test" would certainly have
> >> helped.
> >>
> >> I deliberately expanded the scope of this discussion, because there's
> >> not one single solution - I'm not "turning this discussion into
> >> guidelines about PRs". I'm adding to the discussion.
> >>
> >>
> >> *Contributing Factors*
> >> Breaking this down into the contributing factors, I believe those are:
> >>
> >> 1. Some PRs don't get enough attention from reviewers.
> >>
> >> 2. Not as much time as we'd like is spent by the community reviewing
> >> PRs.
> >>
> >> 3. The bar for being merged is too high (for at least some PRs).
> >>
> >> 4. Some PRs are very hard to review.
> >>
> >> There are clearly examples of (1) where it's a no-brainer we should have
> >> given it more attention, commented and merged. Your proposal for the
> >> "eye-ball test" will help for some.
> >>
> >> However, there are also examples of (1) that are caused by (4) -
> >> commonly due to the complexity of the code being changed (e.g.
> >> config-inheritance), sometimes made worse by it changing many things
> >> (so it's more daunting to review).
> >>
> >> Given (2) and (3), it suggests we should spread that time across more
> >> PRs
> >> (i.e. some PRs that are getting a very thorough review could get less,
> >> and
> >> folk try to use that "saved" time on some other PRs). I'm not convinced
> >> that would actually happen in practice though!
> >>
> >>
> >> *Solutions*
> >>
> >> 1. Add the "eye-ball test" to our reviewer guidelines (as described by
> >> Alex) - and adjust our perception of "low-risk" over time, as we see
> >> how it works.
> >>
> >> 2. Guidelines for PRs - what will make the reviewers' job considerably
> >> easier, so we can get things merged faster? For example, small and
> >> focused
> >> PRs, with good test coverage, linking to a jira issue where appropriate.
> >>
> >> 3. Invest more time simplifying Brooklyn (see below).
> >>
> >>
> >> *Complexity of Brooklyn*
> >> I've heard from quite a few people that certain areas of Brooklyn are
> >> far
> >> too hard to understand. Some people avoid reviewing PRs that touch it,
> >> because it's so hard to understand the implications - they focus their
> >> time
> >> on PRs that they feel more confident to review.
> >>
> >> This is a symptom of an overly complex project. It would be great to
> >> find more time to simplify things - e.g. to delete things from
> >> Brooklyn, to make things more consistent, to refactor or even rewrite
> >> some sections, and to add more javadoc.
> >>
> >>
> >> *Accepted Limitations to Timely Review*
> >> PRs that make far reaching changes to low-level details of Brooklyn will
> >> always require a thorough review. Clearly we should try to find the time
> >> for that promptly, but should always view them as high-risk.
> >>
> >>
> >> *YOML*
> >> If you insist on generalising YOML here, rather than a separate email
> >> thread specifically about it, then: we should have commented very
> >> quickly
> >> and discussed it promptly on the mailing list - at the level of whether
> >> we
> >> want it (ignoring much of its technical details). If it was pretty much
> >> anyone but Alex, then we should have commented saying:
> >>
> >>    "Very interesting, but this is a huge amount of code to add and
> >>    maintain in Brooklyn. Can you instead create a new github project
> >>    for this library, so that it can be worked on and maintained
> >>    separately? We'd then be interested to see how it can be used within
> >>    Brooklyn. Can you close this PR and let us know when/where you
> >>    create that library."
> >>
> >> Like I said, that's for pretty much anyone but Alex. The difference is
> >> that Alex wrote the first version of our yaml/camp parsing and knows it
> >> better than anyone else. That original code definitely deserves a
> >> re-write:
> >> it's become increasingly complicated as the supported yaml has evolved.
> >> Alex has investigated different approaches and has come up with a way
> >> that
> >> could greatly improve that code, and be used in other places as well.
> >> Doing
> >> that in Brooklyn is simpler for him, because it can evolve in tandem to
> >> satisfy requirements of Brooklyn.
> >>
> >> I therefore suggest we discuss YOML separately, rather than
> >> generalising.
> >>
> >> Aled
> >>
> >>
> >> On 03/05/2017 02:13, Alex Heneveld wrote:
> >>
> >>>
> >>> Aled,
> >>>
> >>> > *Light-weight Review*
> >>> > I agree with you - where PRs look sensible, low-risk and unit tested
> >>> > we should take more risk and merge them sooner (even if there's not
> >>> > been time for a thorough review by the community).
> >>>
> >>> I'm saying something a little different:  we should _try_ for a
> >>> thorough
> >>> review of *all* PRs.  Which I think is uncontroversial.
> >>>
> >>> > What should we do with a PR when we aren't able to review things
> >>> > in as much depth as we'd like?
> >>>
> >>> This is the question I'm asking, to ensure we handle PRs in a good
> >>> time frame.  To summarise, I'm suggesting we make more of an effort,
> >>> and we fall back to an "eyeball test" after a certain period of time
> >>> (7 days max, less if it's simple?), triaging the review to look at:
> >>>
> >>> * clearly helpful & not obviously wrong
> >>> * low-risk / doesn't break compatibility
> >>> * good test coverage (and passing)
> >>> * likely to be maintained
> >>>
> >>> If these can't be confirmed, the reviewer should say what they have
> >>> doubts about, maybe suggest what the contributor could do to help, or
> >>> appeal to other committers more familiar with an area. In any case get
> >>> a
> >>> discussion going.
> >>>
> >>> If these do seem confirmed, I still suggest we don't merge immediately
> >>> in the absence of a thorough review, but ping specific committers
> >>> likely to
> >>> be interested.  If no thorough review after a few more days, _then_
> >>> merge.
> >>>
> >>> I'm not suggesting any heavyweight process, but just enough to put
> >>> healthy forces on us as reviewers.
> >>>
> >>> This is not a theoretical question, nor is it restricted to the YOML
> >>> PR.  We're pretty good with most of our PRs and reviews but there are
> >>> plenty of examples where we've dropped the ball.  Look at [1] which is
> >>> tiny
> >>> and tests-only and took nine days to get a review.  Or [2] which yes
> >>> combines a few related-but-different things but is by no means a hard
> >>> thing
> >>> to review.  It would take far more time to split that up into 3
> >>> branches,
> >>> test those locally, then babysit each of those PRs than it would take
> >>> for a reviewer to just get on with a review.  It's been sitting there
> >>> for 2 months and doesn't even have a comment.
> >>>
> >>> This is not a good state of affairs.  Turning this discussion into
> >>> guidelines about PRs misses the point.  If there's any change to our
> >>> docs/process made as a result of this discussion I'd like to see the
> >>> eyeball test added to a review process discussion.
> >>>
> >>> Finally re YOML, there is an ML thread started when the issue was
> >>> raised.  There was chatter beforehand but it wasn't an easy thing to
> >>> discuss until there was prototype code.  The point is that for 7
> >>> months there have been no comments in any of these places, even after
> >>> I've run a public session explaining it and private sessions, and the
> >>> PR itself says how it can be tested and how it is insulated from the
> >>> rest of the code (Thomas I think you missed that point).  There is an
> >>> ML thread and an open issue, either of which would be a fine place to
> >>> comment, but no one is commenting -- so the suggestion of a new
> >>> separate ML thread to solve the problem is bizarre.  I say this is
> >>> _exactly_ the situation when we need guidelines for how we handle PRs
> >>> that are not being reviewed in a timely way.
> >>>
> >>> Best
> >>> Alex
> >>>
> >>>
> >>> [1] https://github.com/apache/brooklyn-server/pull/600
> >>> [2] https://github.com/apache/brooklyn-server/pull/575
> >>>
> >>>
> >>>
> >>> On 02/05/2017 19:21, Aled Sage wrote:
> >>>
> >>>> Hi Alex,
> >>>>
> >>>> Interesting question. A few initial thoughts:
> >>>>
> >>>> *YOML*
> >>>> YOML (PR #363) is an exceptional case - we should not use that as an
> >>>> example when discussing this meta-question. The PR is 12,000 lines
> >>>> (including comments/notes), and was not discussed on the mailing list
> >>>> before it was submitted. I suggest we have a separate email thread
> >>>> specifically about merging that PR, as there are certainly very useful
> >>>> things we'd get from YOML.
> >>>>
> >>>> *Small PRs*
> >>>> We should strongly encourage small focused PRs on a single thing,
> >>>> wherever possible. That will make review faster, easier and lower
> >>>> risk. For
> >>>> such PRs, we should strive for review+merge within days (7 days being
> >>>> an
> >>>> upper bound in normal circumstances, hopefully).
> >>>>
> >>>> We can add some brief guidelines to this effect at
> >>>> http://brooklyn.apache.org/developers/how-to-contribute.html
> >>>>
> >>>> *Changing low-level Brooklyn*
> >>>> PRs that change low-level things in Brooklyn (e.g. changes to
> >>>> config-inheritance etc) deserve thorough review. They are high-risk
> >>>> as the
> >>>> unforeseen consequences of the changes can be very subtle, and break
> >>>> downstream blueprints that rely on old ways of doing things.
> >>>>
> >>>> *Light-weight Review*
> >>>> I agree with you - where PRs look sensible, low-risk and unit tested
> >>>> we
> >>>> should take more risk and merge them sooner (even if there's not been
> >>>> time
> >>>> for a thorough review by the community).
> >>>>
> >>>> Aled
> >>>>
> >>>>
> >>>> On 02/05/2017 15:50, Duncan Johnston Watt wrote:
> >>>>
> >>>>> Hi Alex
> >>>>>
> >>>>> This is probably covered already but I guess there needs to be an
> >>>>> impact
> >>>>> assessment (by submitter?) before something is waved through by
> >>>>> default.
> >>>>>
> >>>>> Best
> >>>>>
> >>>>> Duncan
> >>>>>
> >>>>> On 2 May 2017 at 06:52, Alex Heneveld <[email protected]>
> >>>>> wrote:
> >>>>>
> >>>>>> Hi Brooklyners-
> >>>>>>
> >>>>>> As many of you know, my YOML PR #363 [1] has been open for a
> >>>>>> while.  This sets up a foundation for giving better documentation
> >>>>>> and feedback and hugely simplifying how we do our parsing.
> >>>>>> However it's a very big PR.  I'm eager to have people spend some
> >>>>>> time using it and ideally extending it -- but here I wanted to
> >>>>>> raise a meta-question:
> >>>>>>
> >>>>>>
> >>>>>> *What should we do with a PR when we aren't able to review things
> >>>>>> in as much depth as we'd like?*
> >>>>>>
> >>>>>>
> >>>>>> One option -- call it (A) -- is to say if we can't review things
> >>>>>> thoroughly in a reasonable timeframe, we do a lighter review and
> >>>>>> if the PR looks promising and safe we merge it.
> >>>>>>
> >>>>>> The other option -- call it (B) -- is to leave PRs open for as
> >>>>>> long as it takes for us to do the complete review.
> >>>>>>
> >>>>>> I think most people have been approaching this with a mindset of
> >>>>>> (B), and while that's great for code quality and shared code
> >>>>>> understanding, if we can't deliver on that quickly, it's frankly
> >>>>>> anti-social.  The contributor has to deal with merge conflicts
> >>>>>> (and the rudeness of his or her contribution being ignored), and
> >>>>>> Brooklyn loses velocity.  My PR is an extreme example but many
> >>>>>> have been affected by slow reviews, and I think the expectation
> >>>>>> that reviews have to be so thorough is part of the problem:  it
> >>>>>> even discourages reviewers, as if you're not an expert in an area
> >>>>>> you probably don't feel qualified to review.
> >>>>>>
> >>>>>> We have good test coverage so product risk of (A) is small, and
> >>>>>> we have great coders so I've no worry about us being able to solve
> >>>>>> problems that (A) might introduce.  We should be encouraging
> >>>>>> reviewers to look at any area, and we need to solve the problem of
> >>>>>> slow reviews.
> >>>>>>
> >>>>>>
> >>>>>> *I propose that the standard we apply is that we quickly either
> >>>>>> merge PRs or identify what the contributor needs to resolve.*
> >>>>>>
> >>>>>>
> >>>>>> I'm all for thorough reviews and shared understanding, but if we
> >>>>>> can't do this quickly I suggest we are better to sacrifice those
> >>>>>> things rather than block contributions, stifle innovation, and
> >>>>>> discourage reviews by insisting on standards that we struggle to
> >>>>>> sustain.
> >>>>>>
> >>>>>> As a general rule of thumb, maybe something like:
> >>>>>>
> >>>>>> (1) After 7 days of no activity on a PR we go with an "eyeball
> >>>>>> test"; unless the following statement is untrue we say:
> >>>>>>
> >>>>>> /I haven't done as much review as I'd like, but the code is
> >>>>>> clearly helpful, not risky or obviously wrong or breaking
> >>>>>> compatibility, it has good test coverage, and we can reasonably
> >>>>>> expect the contributor or committers to maintain it.  Leaving open
> >>>>>> a bit longer in case someone else wants to review more but if
> >>>>>> nothing further in the next few days, let's merge./
> >>>>>>
> >>>>>> (If there are committers who are likely to be specifically
> >>>>>> interested, call them out as CC.)
> >>>>>>
> >>>>>> (2) After 3 more days, if no activity, merge it.
> >>>>>>
> >>>>>> And we encourage _anyone_ to review anything.  If the above
> >>>>>> response is the baseline, everyone in our community is qualified
> >>>>>> to do it or better and we'll be grateful!
> >>>>>>
> >>>>>> Best
> >>>>>> Alex
> >>>>>>
> >>>>>>
> >>>>>> [1]  https://github.com/apache/brooklyn-server/pull/363
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
> >>
> >
>
