On 5 August 2016 at 15:06, James Greenhalgh <james.greenha...@arm.com> wrote:
> On Thu, Aug 04, 2016 at 09:12:36PM +0100, Manuel López-Ibáñez wrote:
>> This is a problem throughout GCC. We have a single C++ maintainer, a
>> single part-time C maintainer, none? for libiberty, no regular
>> maintainer for build machinery, and so on and so forth.
>
> I'd object to this being "throughout" GCC. There are a number of very good,
> very responsive, very helpful GCC reviewers in our community. However, as
> you'd expect, the most active areas for contribution are the most active
> areas for review.

The two statements are not in conflict. It may be true at the same time
that reviewer X is very good, very responsive, very helpful, skilled,
and experienced, and also true that patches for the areas maintained
by reviewer X are still not reviewed in a timely manner. And, in case it needs
repeating, the solution is not for X to stop reviewing or maintaining.

>> This has been a problem for a very long time, but it seems to be
>> getting worse now that fewer contributors are doing most of the
>> work.
>
> Rumours of GCC's death have been greatly exaggerated!
>
> At times it might feel this way, but the data
> (BlackDuck at www.openhub.net and my own analysis on the Git Mirror)
> suggests that actually the GCC community is near enough as strong as it
> has been for most of the past 20 years, both in terms of volume of
> contributions, number of contributors, and average number of contributions
> per contributor. Some rudimentary stats gathering on the git mirror suggests
> no substantial change in the makeup of the community over the years. If
> anything there are more commits from more people and the average number of
> commits per committer is decreasing.

I think those conclusions are debatable:

* GCC has also grown over the years: there is a lot more code and
there are more areas, especially more targets, which attract their own
transient developers who do not contribute to the rest of the compiler
(much less review patches for the rest of the compiler).

* Your analysis includes Ada, Go and Fortran. I'd suggest excluding
them, since in terms of developers and reviewing they seem to be
doing fine. They also tend to organise themselves mostly independently
of the rest of the compiler. This is also mostly true for targets.

* 100 commits is less than 2% of contributors, quite a low threshold.
Perhaps 1%, 25%, 50%, 75% and 90% would be more informative.

* https://www.openhub.net/p/taezaza/contributors/summary shows that
more than 25% of the commits in the last 12 months were made by 6
people. Note that those people are also the most active reviewers.

* If I adjust the numbers by the total number of contributors, then we
get a different picture:

                   X1998      X2001      X2004      X2007      X2010      X2013      X2015      X2016
0-19 commits   36.363636  47.413793  46.405229  54.491018  60.233918  62.500000  60.526316  66.447368
20-39 commits  18.181818  16.379310  16.993464  15.568862  16.374269  15.909091  16.842105  14.473684
40-59 commits   9.090909   7.758621   6.535948  11.377246   7.602339   9.090909   6.315789   8.552632
60-79 commits   6.818182   6.034483   6.535948   5.389222   1.754386   2.272727   2.631579   1.973684
80-99 commits   4.545455   5.172414   6.535948   2.994012   2.339181   2.272727   2.105263   3.289474
100+ commits   25.000000  18.103448  17.647059  10.778443  12.280702   8.522727  12.105263   5.921053

that is, most of the commits are done by an ever smaller fraction of
the total. If that smaller fraction is also responsible for doing most
of the review work for the rest...

* Numbers for other years might shed more light. 2010, 2013 and 2015
might have been unusual in one sense or another.
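For what it's worth, the bucketing above is easy to reproduce. Here is a
minimal sketch in Python, assuming a hypothetical mapping from author to
commit count extracted from the git mirror for a given year (the bucket
edges match the table):

```python
from collections import Counter

def bucket_percentages(commits_per_author):
    """Return the percentage of contributors in each commit-count bucket.

    commits_per_author is a hypothetical dict {author: number_of_commits}
    for one year; the buckets match the table in the mail.
    """
    buckets = ["0-19", "20-39", "40-59", "60-79", "80-99", "100+"]
    counts = Counter()
    for n in commits_per_author.values():
        if n >= 100:
            counts["100+"] += 1
        else:
            counts[buckets[n // 20]] += 1  # 20-wide buckets below 100
    total = len(commits_per_author)
    return {b: 100.0 * counts[b] / total for b in buckets}
```

The point of normalising by the total number of contributors, as done here,
is that raw commit counts alone cannot show whether the load is
concentrating on fewer people.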

>> >Global Reviewers are welcome to review OpenACC/OpenMP/offloading patches.
>> >But that doesn't help if that's then not happening in reality.  (With the
>> >exception of Bernd, who then did review such patches for a while, but
>> >also seems to have stopped with that again.)
>>
>> At least half of the global reviewers in MAINTAINERS never review
>> any patches. Most of them are not active any more and presumably do
>> not read GCC emails.
>
> That's just false.

Sorry, this was poorly phrased. I meant most of the half that do not
do global reviews. Still, I should have made clear that this was my
own biased perception: I tend to only pay attention to FE
and diagnostic patches and...

> I've seen all of these active on the lists during 2016 to a lesser or
> greater extent.

Actively reviewing patches outside their maintainership area? For
example, reviewing OpenACC/OpenMP/offloading patches? Again, this is
not a criticism of people. My point is that we cannot extrapolate that
there are enough reviewers by claiming that there are N global
reviewers in MAINTAINERS. Or as Thomas more eloquently puts it: "that
doesn't help if that's then not happening in reality."

That said, I'm OK with conceding that I flunked my numbers and
dropping this point. It doesn't seem it will lead us anywhere
useful.


>> a) Follow LLVM's model and allow changes to be reviewed after commit.
>
> If we were to take anything from LLVM's model it should be the use of a
> review tool that makes prioritising patches to review by age easier.
> I love email, but it is a stone-age tool for patch review and makes it
> very easy for patches to fall through the cracks.

Fully agreed. But such a tool will only be accepted if it does not make
life harder for the people actually reviewing, and makes life easy enough
that they are motivated to use it, and we have not found such a tool.
The old patch tracker had patches languishing there for ages. There is
no reason to think that a different tool will result in a different
outcome.

> The commit/revert/commit/revert/commit/revert/commit model doesn't look
> particularly helpful (though it does help your blackduck numbers I guess)
> and *requires* effective continuous integration testing infrastructure across
> targets. Only by being able to quickly identify and revert a broken commit
> do you remove the pain. Otherwise you've shifted trouble from one person
> waiting for a review to N maintainers manually bisecting to find out why
> their target has regressed. That's not a good way for any of us reviewers
> and maintainers to spend our time.

We do not require most patches to be built or tested on several
targets. And a maintainer could still ask for more testing before a
commit without spending time on a full review. Thus, I don't think the
situation that you describe is more likely than the usual churn that
already happens. I might be wrong; to be honest, I don't pay much
attention to back-end patches, and those may get revised by target
maintainers before being committed more than patches in other areas.

> If someone wants to build some good scripts such that we could use to
> build continuous integration testing infrastructure, that would probably
> be very valuable to the community. Things like robots that repeatedly build
> and then bisect bootstrap failures on the Compile Farm machines would be
> handy. Better yet if you can automatically pick up testsuite regressions.

This seems to do at least half of what you want:
http://toolchain.lug-owl.de/buildbot/
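On automatically picking up testsuite regressions: the comparison step is
the easy part. Here is a minimal sketch, assuming DejaGnu-style .sum output
(lines like "PASS: gcc.dg/foo.c"); the helper names are hypothetical, not
an existing tool:

```python
def parse_results(sum_text):
    """Map test name -> outcome from DejaGnu-style .sum text."""
    results = {}
    for line in sum_text.splitlines():
        for status in ("PASS", "FAIL", "XFAIL", "XPASS"):
            prefix = status + ": "
            if line.startswith(prefix):
                results[line[len(prefix):].strip()] = status
                break
    return results

def find_regressions(old_sum_text, new_sum_text):
    """Tests that went from PASS in the old run to FAIL in the new run."""
    old = parse_results(old_sum_text)
    new = parse_results(new_sum_text)
    return sorted(t for t, s in new.items()
                  if s == "FAIL" and old.get(t) == "PASS")
```

A robot that runs this after each build, and bisects (e.g. with
`git bisect run`) when the list is non-empty, would cover the rest of what
you describe.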

>> d) Delegate hierarchically. Module owners should seek and delegate
>> to people with svn-write powers and ask for reviews in exchange of
>> reviews.
>
> This is current policy, see the section "Reviewers" section in MAINTAINERS.

Perhaps I was not clear here. What I mean is that maintainers would
act like editors at scientific journals: they would reject obviously
wrong patches and, whenever they do not have the time or interest to
review a non-obvious patch themselves, find a reviewer of their
choosing who agrees to review it.

Thanks, your comments were insightful. If more people give feedback
and there is further interest, I will summarize the Pros/Cons/comments
in the wiki.

Cheers,

Manuel.
