+Nigel

On Fri, Jul 8, 2016 at 7:42 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
> What gets measured gets managed. It is good that you started this thread.
> The problem is two-fold. First, we need a way to find the people who are
> reviewing a lot and give them more karma points in the community, to
> encourage that behaviour (making these stats public, say in a monthly
> newsletter, is one way). Reviewing patches is just as important as sending
> them. What I have seen is that it is easy to find out who sent patches,
> how many patches someone sent in a month, and so on; there is no easy way
> to get the same numbers for reviews. The 'Reviewed-by' tag in a commit
> only includes the people who gave +1/+2 on the final revision of the
> patch, which is bad. So I feel that is the first problem to solve if we
> want to get better at this. Once I know how I am doing on a regular
> basis, I am sure I will change my ways and contribute better in this
> area. I would love to know what others think about this too.
>
> Would it be possible for you to get this data using a script, maybe? I
> think we do have APIs.
>
> On Fri, Jul 8, 2016 at 2:02 AM, Jeff Darcy <jda...@redhat.com> wrote:
>
>> I'm sure a lot of you are pretty frustrated with how long it can take
>> to get even a trivial patch through our Gerrit/Jenkins pipeline. I know
>> I am. Slow tests, spurious failures, and bikeshedding over style issues
>> are all contributing factors, but I'm not here to talk about those
>> today. What I am here to talk about is the difficulty of getting
>> somebody - anybody - to look at a patch and (possibly) give it the
>> votes it needs to be merged. To put it bluntly, laziness here is
>> *killing* us. The more patches we have in flight, the more merge
>> conflicts and rebases we have to endure for each one. It's a quadratic
>> effect.
>> That's why I personally have been trying really hard to get patches
>> that have passed all regression tests and haven't gotten any other
>> review attention "across the finish line", so they can be merged and
>> removed from conflict with every other patch still in flight. The
>> search I use for this, every day, is as follows:
>>
>> http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+label:CentOS-regression%253E0+label:NetBSD-regression%253E0+-label:Code-Review%253C0
>>
>> That is:
>>
>>   * open patches on glusterfs master (change project/branch as
>>     appropriate to your role)
>>   * CentOS and NetBSD regression tests complete
>>   * no -1 or -2 votes which might represent legitimate cause for delay
>>
>> If other people - especially team leads and release managers - could
>> make a similar habit of checking the queue and helping to get such
>> "low-hanging fruit" out of the way, we might see an appreciable
>> increase in our overall pace of development. If not, we might have to
>> start talking about mandatory reviews with deadlines and penalties for
>> non-compliance. I'm sure nobody wants to see their own patches blocked
>> and their own deadlines missed because they weren't doing their part to
>> review peers' work, but that's a distinct possibility. Let's all try to
>> get this train unstuck and back on track before extreme measures become
>> necessary.
>
> --
> Pranith
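On Pranith's question about getting review counts via a script: Gerrit does expose a REST API, and its `/changes/` endpoint accepts the same query operators as the dashboard (`reviewer:`, `owner:`, `status:`). A minimal sketch of the parsing side is below; the reviewer address and change numbers are made up, and a canned response stands in for the network call, since the exact payload depends on the server.

```python
import json

GERRIT_MAGIC = ")]}'"  # Gerrit prefixes JSON responses with this to defeat XSSI

def parse_gerrit_json(body):
    """Strip the anti-XSSI prefix and decode the JSON payload."""
    if body.startswith(GERRIT_MAGIC):
        body = body[len(GERRIT_MAGIC):]
    return json.loads(body)

def review_count(changes, email):
    """Count changes this person reviewed but did not own, so people
    are not credited for activity on their own patches."""
    return sum(1 for c in changes
               if c.get("owner", {}).get("email") != email)

# In practice you would GET something like (hypothetical reviewer address):
#   https://review.gluster.org/changes/?q=status:merged+reviewer:dev@example.com
# Here a canned response in Gerrit's shape stands in for the HTTP call.
sample_body = GERRIT_MAGIC + json.dumps([
    {"_number": 14001, "owner": {"email": "author@example.com"}},
    {"_number": 14002, "owner": {"email": "dev@example.com"}},
])
changes = parse_gerrit_json(sample_body)
print(review_count(changes, "dev@example.com"))  # prints 1: own patch excluded
```

Unlike the Reviewed-by tag in the commit, a `reviewer:` query matches anyone who voted or commented on any revision, which is closer to the number Pranith is after.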
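For anyone who wants to check Jeff's queue from a script rather than the browser, the same query string that drives the `#/q/` dashboard URL can be sent to the REST endpoint. A sketch, assuming review.gluster.org serves the standard Gerrit API (the `limit` default is arbitrary):

```python
from urllib.parse import quote

# The raw query behind the dashboard URL above, before URL encoding:
QUERY = ("status:open project:glusterfs branch:master "
         "label:CentOS-regression>0 label:NetBSD-regression>0 "
         "-label:Code-Review<0")

def rest_url(base, query, limit=50):
    """Build the REST endpoint equivalent of a Gerrit dashboard search."""
    return "%s/changes/?q=%s&n=%d" % (base, quote(query, safe=""), limit)

url = rest_url("https://review.gluster.org", QUERY)
print(url)
```

Fetching that URL returns one JSON object per mergeable-looking change, which makes it easy to post the day's "low-hanging fruit" list to the mailing list automatically.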
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel