+1 I like the idea of the votes

 - I'd keep them separate for clarity, but make sure the vote on the YOML
merge references the general PR one

On Mon, 8 May 2017 at 10:45 Richard Downer <[email protected]> wrote:

> Geoff, all,
>
> Thanks for moving this forward. I think those that have participated in
> this thread are generally in agreement over changing our PR reviewing
> standards, although I'm slightly concerned that relatively few people have
> contributed to the thread.
>
> I'd like to move the PR reviewing question on to a vote, based on a short,
> clear statement about the new standards. I will start a vote shortly (the
> hope being that only having to read a short email and give a yes/no
> response will attract more responses).
>
> On the YOML question, fewer people have participated in that part of the
> thread. Several voices are of the opinion that a PR would not be acceptable
> in this form (even under the new reviewing standards), *were it submitted
> by anyone other than Alex*, but an exception is perhaps appropriate in this
> case. I have reservations about this in principle, but agree that it may be
> appropriate. However, I want to ensure cases like this *remain* exceptional
> and are not normalised. I would therefore want that to be the subject of a
> vote too. To avoid confusion the votes should not overlap, IMO. Any comments
> on this paragraph?
>
> Thanks
> Richard.
>
>
> On 8 May 2017 at 09:47, Geoff Macartney <[email protected]> wrote:
>
> > So have we reached a consensus on this?
> >
> > 1. Let's take some more risks, and adjust our perception of "low-risk"
> > over time as we see how it works.
> > 2. Add guidelines to
> > http://brooklyn.apache.org/developers/how-to-contribute.html
> > - strongly encourage small focused PRs on a single thing
> > - add the "eyeball test" (description earlier in this email)
> > 3. Invest more time simplifying Brooklyn.
> >
> > As for YOML, it is an exceptional case; let's treat it exceptionally.
> > Aled makes the point that Alex's familiarity with the parser code means
> > this is safer to merge than would normally be the case with a PR this
> > size, and Alex has pointed out that he's designed it to avoid deep impact
> > on the existing code.  I'll +1 Alex's proposal that we merge this and
> > then review/test as we start to use it.  What say you?
> >
> > On Thu, 4 May 2017 at 11:58 Graeme Miller <[email protected]> wrote:
> >
> > > Thanks for raising this topic, Alex. I think it is certainly worthwhile
> > > to try to reduce the amount of time PRs spend waiting for a review.
> > >
> > > I know I am a bit late to the party here, as I see Richard is trying to
> > > get a consensus. However, I have spent a bit of time over the last couple
> > > of days doing some analysis of git repos and don't want the observations
> > > to go to waste.
> > >
> > > There are two things here I want to mention: the meta-question about the
> > > eyeball test, and the YOML PR.
> > >
> > > ------------------------------------------------------------
> > > ------------------------------
> > > The Metaquestion/Eyeball test
> > >
> > > > I haven't done as much review as I'd like, but the code is clearly
> > > > helpful, not risky or obviously wrong  or breaking compatibility, it
> > has
> > > > good test coverage, and we can reasonably expect the contributor or
> > > > committers to maintain it.  Leaving open a bit longer in case someone
> > > else
> > > > wants to review more but if nothing further in the next few days,
> let's
> > > > merge.
> > >
> > >
> > > Reading it through, I don't see why this is an eyeball test. Your
> checks
> > > are:
> > > 1) helpful
> > > 2) nothing risky/obviously wrong/breaking compatibility
> > > 3) good test coverage
> > > 4) we can reasonably expect the contributor or committers to maintain
> it
> > >
> > > In my opinion, if a PR passes these checks, then we have reviewed it.
> > > Essentially, I think our standards for a review are too high. For
> > > instance, I don't think there is necessarily a need to check out the code
> > > and test it, which we often do. I'd maybe add "follows best practices" to
> > > that list as well. So I am all in favour of an eyeball test, or reducing
> > > the standard for review.
> > >
> > > ------------------------------------------------------------
> > > ------------------------------
> > > YOML PR
> > >
> > > To address this separately: yes, we do sometimes have PRs sitting around
> > > for a while. In my opinion these are often either large, or WIP PRs. To
> > > understand why they take a while to review, we have to look at those PRs.
> > > In my opinion, these PRs are too big to actually review. I've tried a
> > > couple of times to understand YOML, but I just can't wrap my head around
> > > it. If I were applying the eyeball criteria to it, I wouldn't be able to
> > > say that it ticks off (1) and (2) without spending a lot more time.
> > >
> > > I wanted to double-check whether or not my statement about this PR being
> > > too big was accurate, so I used the GitHub API to do some research. I
> > > picked brooklyn-server and 4 other open source repos, some quite mature
> > > like Spark, and some less so like Kubernetes. I looked at the last 500
> > > PRs that were closed, sorted them by the number of additions, and removed
> > > any mistakes (there were quite a few broken PRs).
> > >
> > > Top Ten biggest PRs (by additions):
> > >
> > >    K8s   JClouds   Kafka   Brooklyn Server   Spark
> > >   5303     28932    7417              2670    1049
> > >   4064     11622    5271              2600     749
> > >   2901      4302    4452              2186     710
> > >   2896      3665    4208              1956     682
> > >   2079      3004    3837              1952     677
> > >   2043      1660    3681              1870     642
> > >   1939      1523    3549              1620     616
> > >   1763      1419    3124              1619     571
> > >   1647      1363    2515              1619     569
> > >   1606      1294    2165              1543     547
> > > It's also worth looking at the average and standard deviation for these
> > > repos (again, last 500 closed PRs with obvious mistakes removed)
> > >
> > > Avg + Std dev:
> > >
> > >               K8s    JClouds    Kafka   Brooklyn Server    Spark
> > >   Average  104.85     206.41   204.65            199.63    62.52
> > >   StdDev   418.90   1,428.16   640.85            358.36   121.54
> > >
> > > I have put some notes on my working at [1].
> > >
> > > Looking at this data, it is clear that the YOML PR, which clocks in at
> > > 12,758 additions, is an outlier. It's a record for Brooklyn and outstrips
> > > the other repos that I was looking at. There was only one with more lines
> > > changed (and one other that was close):
> > > 1) "JCLOUDS-172: Promote Google Compute Engine", 28932,
> > > https://github.com/jclouds/jclouds/pull/787
> > > 2) "Promoted Docker", 11622,
> > > https://github.com/jclouds/jclouds/pull/1002
> > >
> > > Both of these are adding standalone extensions to the project that were
> > > developed in JClouds labs first and then merged into master later.  So
> > > there would have been multiple PRs/reviews before that got into master.
> > In
> > > fact, a number of the large PRs in JClouds are like this (hence the
> > rather
> > > large standard deviation).
> > >
> > > Looking at the standard deviation as well, it seems that PRs that go into
> > > the thousands are rare, and into the tens of thousands exceedingly rare.
> > >
> > > So, there are two points I'd like to make.
> > >
> > > Firstly, it may be worthwhile looking into how JClouds approaches these
> > > large code additions and maybe following a similar process for YOML. I
> > > wonder if we can separate it out, develop it outside of brooklyn-server
> > > master and then merge it back in later. Maybe Andrea, Svet or one of the
> > > other JClouds committers could shed some light on the JClouds labs process
> > > and whether it could apply here.
> > >
> > > Secondly, you're right to be annoyed that it has taken 7 months. I think
> > > you should have received a response much sooner, but that response should
> > > have been to say it is too big, and that we need a plan for handling it
> > > differently. IMHO, it seems that no-one felt comfortable commenting on the
> > > PR and making this point, and that is part of the issue here. It was just
> > > sort of left in limbo, which isn't great for the PR raiser or the
> > > community. So, I favour any suggestions on how to deal with these kinds of
> > > situations that further empower Brooklyn reviewers to comment on PRs like
> > > this to raise concerns and/or ask for simplification/re-work/a labs
> > > approach.
> > >
> > > Lastly, I don't want to just single out your YOML PR here, as this
> > > certainly isn't a unique situation. Any PR which runs into multiple
> > > thousands of lines added is, in my opinion, getting too big to review -
> > > and you're not the only one to create a PR like that!
> > >
> > > Regards,
> > > Graeme Miller
> > >
> > > [1] In the interests of transparency, full data set here:
> > > https://docs.google.com/spreadsheets/d/1zDXyhs9AOW1s3f0Zhd3oZlAW4nKdn7hLPd-118zwPhw/edit?usp=sharing
> > > Some notes:
> > > *) I had to manually remove PRs from the data set that were obviously
> > > broken (e.g. Spark 17547 <https://github.com/apache/spark/pull/17547>)
> > or
> > > were part of some greater release process (Kubernetes 44824
> > > <https://github.com/kubernetes/kubernetes/pull/44824>). I have tried to
> > > do this fairly; however, there is a chance that things were removed that
> > > shouldn't have been, or left in that shouldn't have been.
> > > *) I tried to stick to Apache repos. Apache doesn't host the repos on
> > > GitHub; it mirrors them there. Because of this it is not clear what was
> > > closed and what was merged. The data should be thought of as containing
> > > closed PRs, not merged PRs.
> > > *) I used the pygithub library to get this data. Code here:
> > >
> > > > from github import Github
> > > >
> > > > g = Github(login_or_token="user", password="password")
> > > > repo = g.get_organization("apache").get_repo("brooklyn-server")
> > > > first500PRs = repo.get_pulls(state="closed")[:500]
> > > > for pr in first500PRs:
> > > >     print(pr.title, "**", pr.additions, "**", pr.deletions)
> > >
> > > *) I only looked at additions. There may be different results for
> > > additions minus deletions (to get rid of any moved files that git
> > > couldn't process correctly) or even additions plus deletions (if you are
> > > reviewing a PR, you have to look at the deletions as well); a sketch of
> > > these calculations follows below.
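> > >
> > > As a minimal sketch (pstdev, i.e. the population standard deviation, is
> > > an assumption here, since the variant used for the figures above isn't
> > > stated), the averages and the additions-plus-deletions variant could be
> > > computed like this:
> > >
> > > > import statistics
> > > > from github import Github
> > > >
> > > > g = Github(login_or_token="user", password="password")
> > > > repo = g.get_organization("apache").get_repo("brooklyn-server")
> > > > # materialise the last 500 closed PRs so we can iterate twice
> > > > prs = list(repo.get_pulls(state="closed")[:500])
> > > >
> > > > additions = [pr.additions for pr in prs]
> > > > churn = [pr.additions + pr.deletions for pr in prs]
> > > >
> > > > print("Average %.2f" % statistics.mean(additions))
> > > > # population std dev; statistics.stdev (sample) would differ slightly
> > > > print("StdDev %.2f" % statistics.pstdev(additions))
> > > > print("Average churn %.2f" % statistics.mean(churn))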
> > >
> > > On 4 May 2017 at 09:38, Geoff Macartney <[email protected]> wrote:
> > >
> > > > +1 to eyeballing
> > > >
> > > > Some other random thoughts:
> > > >
> > > > - but let's put some emphasis on that 3rd point about good test
> > coverage.
> > > > It is a lot less unnerving to accept a non-trivial PR with good test
> > > > coverage than one without, and just reading the tests is often the
> best
> > > way
> > > > to figure out what a PR is actually doing
> > > > - "master"  _is_  pretty stable, and it would be a shame to get away
> > from
> > > > that;
> > > > - I think some guidelines would help. At the end of the day each PR
> is
> > a
> > > > judgement call, but it's about setting expectations - e.g. reviewers
> > > don't
> > > > necessarily have to figure out all the implications and possible
> > > problems a
> > > > PR might introduce, just make an "honest effort" to review what they
> > see.
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, 3 May 2017 at 23:52 Richard Downer <[email protected]> wrote:
> > > >
> > > > > All,
> > > > >
> > > > > A lot of very good points being made here - thank you to everyone who
> > > > > has contributed to this conversation. It's clear that we are doing a
> > > > > disservice to our contributors and things need to change. We need to
> > > > > be merging (or, rarely, rejecting) PRs consistently and promptly.
> > > > > Obviously, the difficulty comes in how we do this while keeping
> > > > > quality up.
> > > > >
> > > > > After reading the debate so far, I think Alex has come up with a
> > simple
> > > > and
> > > > > sensible suggestion:
> > > > >
> > > > > On 03/05/2017 02:13, Alex Heneveld wrote:
> > > > >
> > > > > > I'm suggesting we make more of an effort, and we fall back to an
> > > > > > "eyeball test" after a certain period of time (7 days max, less if
> > > > > > it's simple?), triaging the review to look at:
> > > > > >
> > > > > > * clearly helpful & not obviously wrong
> > > > > > * low-risk / doesn't break compatibility
> > > > > > * good test coverage (and passing)
> > > > > > * likely to be maintained
> > > > > >
> > > > > > If these can't be confirmed, the reviewer should say what they
> have
> > > > > doubts
> > > > > > about, maybe suggest what the contributor could do to help, or
> > appeal
> > > > to
> > > > > > other committers more familiar with an area. In any case get a
> > > > discussion
> > > > > > going.
> > > > > >
> > > > > > If these do seem confirmed, I still suggest we don't merge
> > > immediately
> > > > in
> > > > > > the absence of a thorough review, but ping specific committers
> > likely
> > > > to
> > > > > be
> > > > > > interested.  If no thorough review after a few more days, _then_
> > > merge.
> > > > > >
> > > > > > I'm not suggesting any heavyweight process, but just enough to
> put
> > > > > healthy
> > > > > > forces on us as reviewers.
> > > > > >
> > > > >
> > > > > And Aled makes some sensible observations around this subject:
> > > > >
> > > > > On 03/05/2017 09:41, Aled Sage wrote:
> > > > >
> > > > > > 1. Some PRs don't get enough attention from reviewers.
> > > > > >
> > > > > > 2. Not as much time as we'd like is spent by the community
> > reviewing
> > > > PRs.
> > > > > >
> > > > > > 3. The bar for being merged is too high (for at least some PRs).
> > > > > >
> > > > > > 4. Some PRs are very hard to review.
> > > > > >
> > > > > > There are clearly examples of (1) where it's a no-brainer we
> should
> > > > have
> > > > > > given it more attention, commented and merged. Your proposal for
> > the
> > > > > > "eye-ball test" will help for some.
> > > > > >
> > > > > > However, there are also examples of (1) that are caused by (4) -
> > > > commonly
> > > > > > due to the complexity of the code being changed (e.g.
> > > > > config-inheritance),
> > > > > > sometimes made worse by it changing many things (so it's more
> > > daunting
> > > > to
> > > > > > review).
> > > > > >
> > > > > > Given (2) and (3), it suggests we should spread that time across
> > more
> > > > PRs
> > > > > > (i.e. some PRs that are getting a very thorough review could get
> > > less,
> > > > > and
> > > > > > folk try to use that "saved" time on some other PRs). I'm not
> > > convinced
> > > > > > that would actually happen in practice though!
> > > > > >
> > > > > >
> > > > > > *Solutions*
> > > > > >
> > > > > > 1. Add the "eye-ball test" to our reviewer guidelines (as
> described
> > > by
> > > > > > Alex) - and adjust our perception of "low-risk" over time, as we
> > see
> > > > how
> > > > > it
> > > > > > works.
> > > > > >
> > > > > > 2. Guidelines for PRs - what will make the reviewers' job
> > > considerably
> > > > > > easier, so we can get things merged faster? For example, small
> and
> > > > > focused
> > > > > > PRs, with good test coverage, linking to a jira issue where
> > > > appropriate.
> > > > > >
> > > > > > 3. Invest more time simplifying Brooklyn (see below).
> > > > > >
> > > > > >
> > > > > > *Complexity of Brooklyn*
> > > > > > I've heard from quite a few people that certain areas of Brooklyn
> > are
> > > > far
> > > > > > too hard to understand. Some people avoid reviewing PRs that
> touch
> > > it,
> > > > > > because it's so hard to understand the implications - they focus
> > > their
> > > > > time
> > > > > > on PRs that they feel more confident to review.
> > > > > >
> > > > > > This is a symptom of an overly complex project. It would be great
> > to
> > > > find
> > > > > > more time to simplify things - e.g. to delete things from
> Brooklyn,
> > > to
> > > > > make
> > > > > > things more consistent, to refactor or even rewrite some
> sections,
> > > and
> > > > to
> > > > > > add more javadoc.
> > > > > >
> > > > > >
> > > > > > *Accepted Limitations to Timely Review*
> > > > > > PRs that make far reaching changes to low-level details of
> Brooklyn
> > > > will
> > > > > > always require a thorough review. Clearly we should try to find
> the
> > > > time
> > > > > > for that promptly, but should always view them as high-risk.
> > > > >
> > > > >
> > > > > Thomas also makes a good observation which is that `master` would
> > have
> > > to
> > > > > be considered unstable. This is true, although the committers are
> > still
> > > > > acting as gatekeepers, so it won't be a wild west out there - but
> it
> > > does
> > > > > mean that taking the bleeding-edge code would be somewhat higher
> > risk.
> > > > This
> > > > > does complicate things when we come near to release time. Some
> > projects
> > > > use
> > > > > Git branching models such as [1]; other projects only accept "new
> > > > feature"
> > > > > PRs at certain points in the release cycle and have strict
> > > stabilisation
> > > > > phases near the end of a cycle. This is something that we will need
> > to
> > > > > discuss.
> > > > >
> > > > > Alex and Aled have both made some suggestions about how to mitigate
> > > that
> > > > > risk, and I'd add that we should also push for bug fix PRs to boost
> > > test
> > > > > coverage to reduce the chance of the problem happening again.
> > > > >
> > > > > I'd also explicitly make the point that it's generally accepted in
> > open
> > > > > source communities that a contributor is expected to make some
> effort
> > > to
> > > > > make their contribution easy to review and accept, and can expect
> > > > pushback
> > > > > if their contribution is poorly structured or neglects tests, for
> > > example
> > > > > (Alex alludes to this as "maybe suggest what the contributor could
> do
> > > to
> > > > > help" and Aled states it explicitly as "guidelines for PRs").
> Setting
> > > up
> > > > a
> > > > > list of rules could be seen as strict/inflexible and possibly
> > daunting
> > > > for
> > > > > a new committer, but there is an advantage in setting down some
> best
> > > > > practices (an expansion of what we already have at [2]) and being
> > > > flexible
> > > > > in their interpretation. Where we are currently failing here is
> that
> > > when
> > > > > faced with a difficult contribution, we don't push back to the
> > > > contributor
> > > > > and make it clear what we would like to see - instead the
> > contribution
> > > is
> > > > > just ignored.
> > > > >
> > > > > To add some of my own opinions: this is just software, it's
> supposed
> > to
> > > > be
> > > > > malleable and changing, and we have SCM and tests as a safety net.
> > > Let's
> > > > > take some more risks. Along the way we'll inevitably make mistakes
> > and
> > > > > things will break, sometimes at the worst possible time - accept
> that
> > > > this
> > > > > is normal, and learn from the experience. We're just making a
> release
> > > now
> > > > > so we've got a couple of months to experiment, during which time we
> > can
> > > > > tweak our interpretations of risk etc. and converge on a
> > > release-quality
> > > > > build.
> > > > >
> > > > > While some of the surrounding details need to be worked out (such as
> > > > > how we do a release if master is unstable):
> > > > >
> > > > > *do we feel like we are converging on an acceptable position based
> > > around
> > > > > Alex's eyeball review concept?*
> > > > >
> > > > > Thanks
> > > > > Richard.
> > > > >
> > > > >
> > > > > [1] http://nvie.com/posts/a-successful-git-branching-model/
> > > > > [2] https://brooklyn.apache.org/developers/how-to-contribute.html#pull-request-at-github
> > > > >
> > > > >
> > > > > On 3 May 2017 at 17:38, Sam Corbett <[email protected]> wrote:
> > > > >
> > > > > > I used GitHub to check the number of pull requests we've merged to a
> > > > > > few of our projects over the last year, in two-month chunks [1].
> > > > > > Each list begins today and works backwards:
> > > > > >
> > > > > > brooklyn-server: 86, 79, 85, 68, 95, 78.
> > > > > > brooklyn-client: 7, 4, 3, 8, 2, 4.
> > > > > > brooklyn-library: 11, 12, 9, 7, 17, 8.
> > > > > > total: 104, 95, 97, 83, 114, 90.
> > > > > >
> > > > > > It's my opinion that these numbers show consistency in the rate at
> > > > > > which we merge pull requests. Since a few committers have recently
> > > > > > joined, I expect the rate to increase over the next few months. I
> > > > > > would like to know other figures, like the average length of time
> > > > > > between opening a pull request and a committer merging it, and the
> > > > > > average length of time before the first review comment. I'm sure
> > > > > > there are many other interesting metrics that could inform this
> > > > > > discussion.
> > > > > >
> > > > > > Maybe there are just one or two cases that we have managed badly.
> > > > > > Identifying those pull requests and understanding the cause of the
> > > > > > delay to each one would be valuable.
> > > > > >
> > > > > > Regarding the complexity of Brooklyn, and speaking as someone that
> > > > > > has used it in anger (sometimes literally), I would posit that many
> > > > > > of us have been caught out several times by the corner cases and
> > > > > > unexpected interactions between components that complexity so often
> > > > > > implies. This leads to cautious reviews and slow turnaround times.
> > > > > > Quick-scan reviews address the speed at which we get code into the
> > > > > > product. We need to think harder about how we reduce its overall
> > > > > > complexity.
> > > > > >
> > > > > > Sam
> > > > > >
> > > > > > 1. I used GitHub's interface to search for
> > > > > > "is:pr merged:2017-03-04..2017-05-03", etc.
> > > > > >
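> > > > > > A minimal sketch (the token is a placeholder, and the fixed 60-day
> > > > > > windows are only an approximation of the two-month chunks above) of
> > > > > > how these counts could be reproduced with pygithub rather than the
> > > > > > web UI:
> > > > > >
> > > > > > > from datetime import date, timedelta
> > > > > > > from github import Github
> > > > > > >
> > > > > > > g = Github("token")
> > > > > > > end = date(2017, 5, 3)
> > > > > > > for _ in range(6):  # six two-month windows, newest first
> > > > > > >     start = end - timedelta(days=60)
> > > > > > >     q = "repo:apache/brooklyn-server is:pr merged:%s..%s" % (start, end)
> > > > > > >     print(q, g.search_issues(q).totalCount)
> > > > > > >     end = start - timedelta(days=1)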
> > > > > >
> > > > > >
> > > > > > On 03/05/2017 09:41, Aled Sage wrote:
> > > > > >
> > > > > >> Hi Alex,
> > > > > >>
> > > > > >> I agree with you that we have a problem with the reviewing of
> some
> > > of
> > > > > our
> > > > > >> PRs - it's a bad situation for all concerned when these PRs stay
> > > open
> > > > > for
> > > > > >> as long as they do.
> > > > > >>
> > > > > >> I agree with your "eye-ball test" for a certain class of PR. I
> > > > > >> think where we probably disagree is where the line is for
> > > > > >> "low-risk" PRs. There are some examples where the "eye-ball test"
> > > > > >> would certainly have helped.
> > > > > >>
> > > > > >> I deliberately expanded the scope of this discussion, because
> > > there's
> > > > > not
> > > > > >> one single solution - I'm not "turning this discussion in to
> > > > guidelines
> > > > > >> about PR". I'm adding to the discussion.
> > > > > >>
> > > > > >>
> > > > > >> *Contributing Factors*
> > > > > >> Breaking this down into the contributing factors, I believe
> those
> > > are:
> > > > > >>
> > > > > >> 1. Some PRs don't get enough attention from reviewers.
> > > > > >>
> > > > > >> 2. Not as much time as we'd like is spent by the community
> > reviewing
> > > > > PRs.
> > > > > >>
> > > > > >> 3. The bar for being merged is too high (for at least some PRs).
> > > > > >>
> > > > > >> 4. Some PRs are very hard to review.
> > > > > >>
> > > > > >> There are clearly examples of (1) where it's a no-brainer we
> > should
> > > > have
> > > > > >> given it more attention, commented and merged. Your proposal for
> > the
> > > > > >> "eye-ball test" will help for some.
> > > > > >>
> > > > > >> However, there are also examples of (1) that are caused by (4) -
> > > > > commonly
> > > > > >> due to the complexity of the code being changed (e.g.
> > > > > config-inheritance),
> > > > > >> sometimes made worse by it changing many things (so it's more
> > > daunting
> > > > > to
> > > > > >> review).
> > > > > >>
> > > > > >> Given (2) and (3), it suggests we should spread that time across
> > > more
> > > > > PRs
> > > > > >> (i.e. some PRs that are getting a very thorough review could get
> > > less,
> > > > > and
> > > > > >> folk try to use that "saved" time on some other PRs). I'm not
> > > > convinced
> > > > > >> that would actually happen in practice though!
> > > > > >>
> > > > > >>
> > > > > >> *Solutions*
> > > > > >>
> > > > > >> 1. Add the "eye-ball test" to our reviewer guidelines (as
> > described
> > > by
> > > > > >> Alex) - and adjust our perception of "low-risk" over time, as we
> > see
> > > > > how it
> > > > > >> works.
> > > > > >>
> > > > > >> 2. Guidelines for PRs - what will make the reviewers' job
> > > considerably
> > > > > >> easier, so we can get things merged faster? For example, small
> and
> > > > > focused
> > > > > >> PRs, with good test coverage, linking to a jira issue where
> > > > appropriate.
> > > > > >>
> > > > > >> 3. Invest more time simplifying Brooklyn (see below).
> > > > > >>
> > > > > >>
> > > > > >> *Complexity of Brooklyn*
> > > > > >> I've heard from quite a few people that certain areas of
> Brooklyn
> > > are
> > > > > far
> > > > > >> too hard to understand. Some people avoid reviewing PRs that
> touch
> > > it,
> > > > > >> because it's so hard to understand the implications - they focus
> > > their
> > > > > time
> > > > > >> on PRs that they feel more confident to review.
> > > > > >>
> > > > > >> This is a symptom of an overly complex project. It would be
> great
> > to
> > > > > find
> > > > > >> more time to simplify things - e.g. to delete things from
> > Brooklyn,
> > > to
> > > > > make
> > > > > >> things more consistent, to refactor or even rewrite some
> sections,
> > > and
> > > > > to
> > > > > >> add more javadoc.
> > > > > >>
> > > > > >>
> > > > > >> *Accepted Limitations to Timely Review*
> > > > > >> PRs that make far reaching changes to low-level details of
> > Brooklyn
> > > > will
> > > > > >> always require a thorough review. Clearly we should try to find
> > the
> > > > time
> > > > > >> for that promptly, but should always view them as high-risk.
> > > > > >>
> > > > > >>
> > > > > >> *YOML*
> > > > > >> If you insist on generalising YOML here, rather than a separate
> > > email
> > > > > >> thread specifically about it, then: we should have commented
> very
> > > > > quickly
> > > > > >> and discussed it promptly on the mailing list - at the level of
> > > > whether
> > > > > we
> > > > > >> want it (ignoring much of its technical details). If it was
> pretty
> > > > much
> > > > > >> anyone but Alex, then we should have commented saying:
> > > > > >>
> > > > > >>    "Very interesting, but this is a huge amount of code to add
> and
> > > > > >>    maintain in Brooklyn. Can you instead create a new github
> > project
> > > > > >>    for this library, so that it can be worked on and maintained
> > > > > >>    separately? We'd then be interested to see how it can be used
> > > > within
> > > > > >>    Brooklyn. Can you close this PR and let us know when/where
> you
> > > > > >>    create that library."
> > > > > >>
> > > > > >> Like I said, that's for pretty much anyone but Alex. The
> > difference
> > > is
> > > > > >> that Alex wrote the first version of our yaml/camp parsing and
> > knows
> > > > it
> > > > > >> better than anyone else. That original code definitely deserves
> a
> > > > > re-write:
> > > > > >> it's become increasingly complicated as the supported yaml has
> > > > evolved.
> > > > > >> Alex has investigated different approaches and has come up with
> a
> > > way
> > > > > that
> > > > > >> could greatly improve that code, and be used in other places as
> > > > > >> well. Doing that in Brooklyn is simpler for him, because it can
> > > > > >> evolve in tandem with Brooklyn's requirements.
> > > > > >>
> > > > > >> I therefore suggest we discuss YOML separately, rather than
> > > > > generalising.
> > > > > >>
> > > > > >> Aled
> > > > > >>
> > > > > >>
> > > > > >> On 03/05/2017 02:13, Alex Heneveld wrote:
> > > > > >>
> > > > > >>>
> > > > > >>> Aled,
> > > > > >>>
> > > > > >>> > *Light-weight Review*
> > > > > >>> > I agree with you - where PRs look sensible, low-risk and unit
> > > > tested
> > > > > >>> we should take more risk and
> > > > > >>> > merge them sooner (even if there's not been time for a
> thorough
> > > > > review
> > > > > >>> by the community).
> > > > > >>>
> > > > > >>> I'm saying something a little different:  we should _try_ for a
> > > > > thorough
> > > > > >>> review of *all* PRs.  Which I think is uncontroversial.
> > > > > >>>
> > > > > >>> > What should we do with a PR when we aren't able to review
> > things
> > > in
> > > > > as
> > > > > >>> much depth as we'd like?
> > > > > >>>
> > > > > >>> This is the question I'm asking, to ensure we handle PRs in a
> > > > > >>> good time frame.  To summarise, I'm suggesting we make more of an
> > > > > >>> effort, and we fall back to an "eyeball test" after a certain
> > > > > >>> period of time (7 days max, less if it's simple?), triaging the
> > > > > >>> review to look at:
> > > > > >>>
> > > > > >>> * clearly helpful & not obviously wrong
> > > > > >>> * low-risk / doesn't break compatibility
> > > > > >>> * good test coverage (and passing)
> > > > > >>> * likely to be maintained
> > > > > >>>
> > > > > >>> If these can't be confirmed, the reviewer should say what they
> > have
> > > > > >>> doubts about, maybe suggest what the contributor could do to
> > help,
> > > or
> > > > > >>> appeal to other committers more familiar with an area. In any
> > case
> > > > get
> > > > > a
> > > > > >>> discussion going.
> > > > > >>>
> > > > > >>> If these do seem confirmed, I still suggest we don't merge
> > > > immediately
> > > > > >>> in the absence of a thorough review, but ping specific
> committers
> > > > > likely to
> > > > > >>> be interested.  If no thorough review after a few more days,
> > _then_
> > > > > merge.
> > > > > >>>
> > > > > >>> I'm not suggesting any heavyweight process, but just enough to
> > put
> > > > > >>> healthy forces on us as reviewers.
> > > > > >>>
> > > > > >>> This is not a theoretical question, nor is it restricted to the
> > > YOML
> > > > > >>> PR.  We're pretty good with most of our PRs and reviews but
> there
> > > are
> > > > > >>> plenty of examples where we've dropped the ball.  Look at [1]
> > which
> > > > is
> > > > > tiny
> > > > > >>> and tests-only and took nine days to get a review.  Or [2]
> which
> > > yes
> > > > > >>> combines a few related-but-different things but is by no means
> a
> > > hard
> > > > > thing
> > > > > >>> to review.  It would take far more time to split that up into 3
> > > > > >>> branches, test those locally, then babysit each of those PRs than
> > > > > >>> it would take for a reviewer to just get on with a review.  It's
> > > > > >>> been sitting there for 2 months and doesn't even have a comment.
> > > > > >>>
> > > > > >>> This is not a good state of affairs.  Turning this discussion into
> > > > > >>> guidelines about PRs misses the point.  If there's any change to
> > > > > >>> our docs/process made as a result of this discussion I'd like to
> > > > > >>> see the eyeball test added to a review process discussion.
> > > > > >>>
> > > > > >>> Finally, re YOML, there is an ML thread started when the issue was
> > > > > >>> raised.  There was chatter beforehand but it wasn't an easy thing
> > > > > >>> to discuss until there was prototype code.  The point is that for
> > > > > >>> 7 months there have been no comments in any of these places, even
> > > > > >>> after I've run a public session explaining it and private
> > > > > >>> sessions, and the PR itself says how it can be tested and how it
> > > > > >>> is insulated from the rest of the code (Thomas, I think you missed
> > > > > >>> that point).  There is an ML thread and an open issue, either of
> > > > > >>> which would be a fine place to comment, but no one is -- so the
> > > > > >>> suggestion of a new separate ML thread to solve the problem is
> > > > > >>> bizarre.  I say this is _exactly_ the situation when we need
> > > > > >>> guidelines for how we handle PRs that are not being reviewed in a
> > > > > >>> timely way.
> > > > > >>>
> > > > > >>> Best
> > > > > >>> Alex
> > > > > >>>
> > > > > >>>
> > > > > >>> [1] https://github.com/apache/brooklyn-server/pull/600
> > > > > >>> [2] https://github.com/apache/brooklyn-server/pull/575
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> On 02/05/2017 19:21, Aled Sage wrote:
> > > > > >>>
> > > > > >>>> Hi Alex,
> > > > > >>>>
> > > > > >>>> Interesting question. A few initial thoughts:
> > > > > >>>>
> > > > > >>>> *YOML*
> > > > > >>>> YOML (PR #363) is an exceptional case - we should not use that
> > as
> > > an
> > > > > >>>> example when discussing this meta-question. The PR is 12,000
> > lines
> > > > > >>>> (including comments/notes), and was not discussed on the
> mailing
> > > > list
> > > > > >>>> before it was submitted. I suggest we have a separate email
> > thread
> > > > > >>>> specifically about merging that PR, as there are certainly
> very
> > > > useful
> > > > > >>>> things we'd get from YOML.
> > > > > >>>>
> > > > > >>>> *Small PRs*
> > > > > >>>> We should strongly encourage small focused PRs on a single
> > thing,
> > > > > >>>> wherever possible. That will make review faster, easier and
> > lower
> > > > > risk. For
> > > > > >>>> such PRs, we should strive for review+merge within days (7
> days
> > > > being
> > > > > an
> > > > > >>>> upper bound in normal circumstances, hopefully).
> > > > > >>>>
> > > > > >>>> We can add some brief guidelines to this effect at
> > > > > >>>> http://brooklyn.apache.org/developers/how-to-contribute.html
> > > > > >>>>
> > > > > >>>> *Changing low-level Brooklyn*
> > > > > >>>> PRs that change low-level things in Brooklyn (e.g. changes to
> > > > > >>>> config-inheritance etc) deserve thorough review. They are
> > > high-risk
> > > > > as the
> > > > > >>>> unforeseen consequences of the changes can be very subtle, and
> > > break
> > > > > >>>> downstream blueprints that rely on old ways of doing things.
> > > > > >>>>
> > > > > >>>> *Light-weight Review*
> > > > > >>>> I agree with you - where PRs look sensible, low-risk and unit
> > > tested
> > > > > we
> > > > > >>>> should take more risk and merge them sooner (even if there's
> not
> > > > been
> > > > > time
> > > > > >>>> for a thorough review by the community).
> > > > > >>>>
> > > > > >>>> Aled
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> On 02/05/2017 15:50, Duncan Johnston Watt wrote:
> > > > > >>>>
> > > > > >>>>> Hi Alex
> > > > > >>>>>
> > > > > >>>>> This is probably covered already but I guess there needs to
> be
> > an
> > > > > >>>>> impact
> > > > > >>>>> assessment (by submitter?) before something is waved through
> by
> > > > > >>>>> default.
> > > > > >>>>>
> > > > > >>>>> Best
> > > > > >>>>>
> > > > > >>>>> Duncan
> > > > > >>>>>
> > > > > >>>>> On 2 May 2017 at 06:52, Alex Heneveld <[email protected]> wrote:
> > > > > >>>>>
> > > > > >>>>> Hi Brooklyners-
> > > > > >>>>>>
> > > > > >>>>>> As many of you know, my YOML PR #363 [1] has been open for a
> > > > while.
> > > > > >>>>>> This
> > > > > >>>>>> sets up a foundation for giving better documentation and
> > > feedback
> > > > > and
> > > > > >>>>>> hugely simplifying how we do our parsing.  However, it's a very
> > > > > >>>>>> big PR.  I'm
> > > > > >>>>>> eager to have people spend some time using it and ideally
> > > > extending
> > > > > >>>>>> it --
> > > > > >>>>>> but here I wanted to raise a meta-question:
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> *What should we do with a PR when we aren't able to review
> > > > > >>>>>> things in as much depth as we'd like?*
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> One option -- call it (A) -- is to say if we can't review
> > things
> > > > > >>>>>> thoroughly in a reasonable timeframe, we do a lighter review
> > and
> > > > if
> > > > > >>>>>> the PR
> > > > > >>>>>> looks promising and safe we merge it.
> > > > > >>>>>>
> > > > > >>>>>> The other option -- call it (B) -- is to leave PRs open for
> as
> > > > long
> > > > > >>>>>> as it
> > > > > >>>>>> takes for us to do the complete review.
> > > > > >>>>>>
> > > > > >>>>>> I think most people have been approaching this with a
> mindset
> > of
> > > > > (B),
> > > > > >>>>>> and
> > > > > >>>>>> while that's great for code quality and shared code
> > > understanding,
> > > > > if
> > > > > >>>>>> we
> > > > > >>>>>> can't deliver on that quickly, it's frankly anti-social. The
> > > > > >>>>>> contributor
> > > > > >>>>>> has to deal with merge conflicts (and the rudeness of his or
> > her
> > > > > >>>>>> contribution being ignored), and Brooklyn loses velocity. My
> > PR
> > > is
> > > > > an
> > > > > >>>>>> extreme example but many have been affected by slow reviews,
> > > and I
> > > > > >>>>>> think
> > > > > >>>>>> the expectation that reviews have to be so thorough is part
> of
> > > the
> > > > > >>>>>> problem:  it even discourages reviewers, as if you're not an
> > > > expert
> > > > > >>>>>> in an
> > > > > >>>>>> area you probably don't feel qualified to review.
> > > > > >>>>>>
> > > > > >>>>>> We have good test coverage so the product risk of (A) is small,
> > > > > >>>>>> and we
> > > > > >>>>>> have
> > > > > >>>>>> great coders so I've no worry about us being able to solve
> > > > problems
> > > > > >>>>>> that
> > > > > >>>>>> (A) might introduce.  We should be encouraging reviewers to
> > look
> > > > at
> > > > > >>>>>> any
> > > > > >>>>>> area, and we need to solve the problem of slow reviews.
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> *I propose that the standard we apply is that we quickly either
> > > > > >>>>>> merge PRs or identify what the contributor needs to resolve.*
> > > > > >>>>>>
> > > > > >>>>>> I'm all for thorough reviews and shared understanding, but if we
> > > > > >>>>>> can't do this quickly I suggest we are better to sacrifice those
> > > > > >>>>>> things rather than block contributions, stifle innovation, and
> > > > > >>>>>> discourage reviews by insisting on standards that we struggle to
> > > > > >>>>>> sustain.
> > > > > >>>>>>
> > > > > >>>>>> As a general rule of thumb, maybe something like:
> > > > > >>>>>>
> > > > > >>>>>> (1) After 7 days of no activity on a PR we go with an "eyeball
> > > > > >>>>>> test"; unless the following statement is untrue we say:
> > > > > >>>>>>
> > > > > >>>>>> /I haven't done as much review as I'd like, but the code is
> > > > > >>>>>> clearly helpful, not risky or obviously wrong or breaking
> > > > > >>>>>> compatibility, it has good test coverage, and we can reasonably
> > > > > >>>>>> expect the contributor or committers to maintain it.  Leaving
> > > > > >>>>>> open a bit longer in case someone else wants to review more but
> > > > > >>>>>> if nothing further in the next few days, let's merge./
> > > > > >>>>>>
> > > > > >>>>>> (If there are committers who are likely to be specifically
> > > > > >>>>>> interested, call them out as CC.)
> > > > > >>>>>>
> > > > > >>>>>> (2) After 3 more days, if no activity, merge it.
> > > > > >>>>>>
> > > > > >>>>>> And we encourage _anyone_ to review anything.  If the above
> > > > response
> > > > > >>>>>> is
> > > > > >>>>>> the baseline, everyone in our community is qualified to do
> it
> > or
> > > > > >>>>>> better and
> > > > > >>>>>> we'll be grateful!
> > > > > >>>>>>
> > > > > >>>>>> Best
> > > > > >>>>>> Alex
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> [1]  https://github.com/apache/brooklyn-server/pull/363
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>
> > > > > >>>>
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >
> > > > >
> > > >
> > >
> >
>
