Re: [openstack-dev] Bad review patterns

2013-11-12 Thread Michael Davies
On Fri, Nov 8, 2013 at 4:07 AM, Pedro Roque Marques 
pedro.r.marq...@gmail.com wrote:

 Radomir,
 An extra issue that i don't believe you've covered so far is about comment
 ownership. I've just read an email on the list that follows a pattern that
 i've heard many complaints about:
 -1 with a reasonable comment, submitter addresses the comment,
 reviewer never comes back.

 Reviewers do need to allocate time to come back and follow up on the
 answers to their comments.


This is true, but it's not necessarily easy to find those reviews that you
-1'd.  I don't think anyone nefariously -1's and then goes away.  Gerrit
could be improved in this space to assist reviewers.
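(For what it's worth, newer Gerrit releases can answer "which open changes still carry my -1?" with a change query over the REST API. A hedged sketch follows — the `label:Code-Review=-1,self` operator and the endpoint shape are from later Gerrit versions, and the 2013 OpenStack instance may not support them:)

```python
# Illustrative sketch: list open changes that still carry your own -1,
# via Gerrit's REST /changes/ endpoint. The label:Code-Review=-1,self
# search operator exists in newer Gerrit releases; treat it as an
# assumption, not a description of the instance discussed here.
import json
import urllib.parse


def minus_one_query_url(base="https://review.openstack.org"):
    """Build the REST URL for open changes with my own -1 still attached."""
    query = "status:open label:Code-Review=-1,self"
    return base + "/changes/?q=" + urllib.parse.quote(query)


def parse_gerrit_json(text):
    """Gerrit prefixes every JSON response with )]}' to defeat XSSI."""
    if text.startswith(")]}'"):
        text = text[4:]
    return json.loads(text)
```

(Fetching `minus_one_query_url()` and feeding the body through `parse_gerrit_json` would yield the list of changes awaiting your follow-up.)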
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Joe Gordon
On Sun, Nov 10, 2013 at 3:50 PM, Sean Dague s...@dague.net wrote:

 Not that I know of. I've considered writing my own gerrit front end
 mail service to do just that, because I agree, the current mail volume
 and granularity is not very good. If I manage to carve time on it,
 I'll do it on stackforge. Joe Gordon took a different approach and
 wrote a front end client to mark review threads read that are past.


https://github.com/jogo/gerrit-gmail
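(From the description, the heart of such a tool is deciding, from the `Gerrit-MessageType` trailer Gerrit embeds in each notification body, whether a review thread is finished. A hedged sketch of that decision — not the actual gerrit-gmail code:)

```python
# Illustrative sketch, not the real gerrit-gmail implementation: decide
# whether a Gerrit notification belongs to a finished review thread,
# based on the Gerrit-MessageType trailer embedded in mail bodies.

# Assumed set of "change is past" events; the real tool may track more.
CLOSED_TYPES = {"merged", "abandoned"}


def message_type(body):
    """Return the Gerrit-MessageType trailer from a mail body, if any."""
    for line in body.splitlines():
        if line.startswith("Gerrit-MessageType:"):
            return line.split(":", 1)[1].strip()
    return None


def thread_is_past(bodies):
    """A thread is past if any message in it reports a closing event."""
    return any(message_type(b) in CLOSED_TYPES for b in bodies)
```

(A mail client or IMAP script would then mark every thread for which `thread_is_past` returns True as read.)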



 On Thu, Nov 7, 2013 at 8:40 PM, David Ripton drip...@redhat.com wrote:
  On 11/07/2013 07:54 PM, Sean Dague wrote:
 
  On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:
 
  Radomir,
  An extra issue that i don't believe you've covered so far is about
  comment ownership. I've just read an email on the list that follows a
  pattern that i've heard many complaints about:
  -1 with a reasonable comment, submitter addresses the comment,
  reviewer never comes back.
 
  Reviewers do need to allocate time to come back and follow up on the
  answers to their comments.
 
  Perhaps there is an issue with the incentive system. You can earn karma
  by doing a code review... certainly you want to incentivise developers that
  help the project by improving the code quality. But if the incentive system
  allows for "drive by shooting" code reviews, that can be a problem.
 
 
  It's not really an incentive system problem, this is some place where
  there are some gerrit limitations (especially when your list of reviewed
  code is long). Hopefully once we get a gerrit upgrade we can dashboard
  out some new items like that via the new rest API.
 
  I agree that reviewers could be doing better. But definitely also
  realize that part of this is just that there is *so* much code to review.
 
  Realize that most core reviewers aren't ignoring or failing to come back
  on patches intentionally. There is just *so* much of it. I feel guilty
  all the time by how big a review queue I have, but I also need a few
  hours a day not doing OpenStack (incredible to believe). This is where
  non core reviewers can really help in addressing the first couple of
  rounds of review to prune and improve the easy stuff.
 
  We're all in this together,
 
 
  Is there a way for Gerrit to only send email when action is required,
  rather than on any change to any review you've touched?  If Gerrit sent
  less mail, it would be easier to treat its mails as a critical call to
  action to re-review.  (There's probably a way to use fancy message
  filtering to accomplish this, but that would only work for people
  willing/able to set up such filtering.)
 
  --
  David Ripton   Red Hat   drip...@redhat.com
 
 



 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Michael Basnight
 On Nov 11, 2013, at 11:27 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 On Sun, Nov 10, 2013 at 3:50 PM, Sean Dague s...@dague.net wrote:
 Not that I know of. I've considered writing my own gerrit front end
 mail service to do just that, because I agree, the current mail volume
 and granularity is not very good. If I manage to carve time on it,
 I'll do it on stackforge. Joe Gordon took a different approach and
 wrote a front end client to mark review threads read that are past.
 
 https://github.com/jogo/gerrit-gmail
  

Joe described this to me, and it sounded hawt. Thanks for sending this out. 

From what I gather it auto marks email as read based on merges and abandons. 
Joe, are there any dependencies, workflows, insights you can share wrt this?


Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Mark McLoughlin
On Fri, 2013-11-08 at 09:32 +1300, Robert Collins wrote:
 On 7 November 2013 13:15, Day, Phil philip@hp.com wrote:
 
 
 
 
  Core reviewers look for the /comments/ from people, not just the votes. A
  +1 from someone that isn't core is meaningless unless they are known to be
  a thoughtful code reviewer. A -1 with no comment is also bad, because it
  doesn't help the reviewee get whatever the issue is fixed.
 
  It's very much not OK to spend an hour reviewing something and then +1
  with no comment: if I, and I think any +2er across the project, see a patch
  that needs an hour of review with a commentless +1, we'll likely discount
  the +1 as being meaningful.
 
 
  I don't really get what you're saying here Rob.   It seems to me an almost
  inevitable part of the review process that useful comments are going to be
  mostly negative. If someone has invested that amount of effort because the
  patch is complex, or it took them a while to work their way back into that
  part of the system, etc., but having given the code careful consideration
  they decide it's good - do you want comments in there saying "I really like
  your code", "Well done on fixing such a complex problem" or some such?
 
 Negative comments are fine: I was saying that statistically, having an
 hour-long patch (which BTW puts it waaay past the diminishing returns
 on patch size tradeoff) to review and then having nothing at all to
 say about it is super suspect. Sure, having nothing to say on a 10
 line patch - hit a +1, move on. But something that takes an hour to
 review? I'd expect at minimum the thing to prompt some suggestions
 *even while it gets a +1*.
 
  I just don't see how you can use a lack or presence of positive feedback
  in a +1 as any sort of indication on the quality of that +1.   Unless you
  start asking reviewers to précis the change in their own words to show that
  they understood it, I don't really see how additional positive comments
  help in most cases.   Perhaps if you have some specific examples of this it
  would help to clarify.
 
 It might even be negative feedback. Like: "this patch is correct but
 the fact it had to be done as one patch shows our code structure here
 is wonky. If you could follow up with something to let us avoid future
 mammoth patches, that would be awesome."
 
 Again, I'm talking about hour long reviews, which is the example Radomir gave.
 
 Another way of presenting what I'm saying is this: Code reviews are a
 dialogue. How often in a discussion with a live human would you have
 no feedback at all, if they were reading a speech to you? You might go
 'that was a great speech' (+2) and *still* have something to add. As
 an observer given a very long speech, and an audience, I'd suspect
 folk went to sleep if they didn't have commentary :)

Totally agree with you here. I almost completely discount +1s with no
comments on a large patch.

I make a habit of leaving comments in reviews - positive, negative,
neutral, whatever. If I have something to say which might be useful to
the author, other reviewers, my future self, whatever ... then I'll say
it.

e.g. if I spend 10 minutes looking at one part of a patch, ultimately
convincing myself that there really is no better approach and the author
has made the right tradeoffs ... then I'll say it. I'll briefly describe
the tradeoffs, the other options that I guess the author considered and
discounted.

I sometimes feel guilty about this because I know patch authors just
want their +2 and often don't want to read through my verbiage ... but,
as you say, this is a dialogue and the dialogue can yield some
interesting thoughts and ideas.

Mark.




Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Mark McLoughlin
On Thu, 2013-11-07 at 20:40 -0500, David Ripton wrote:
 On 11/07/2013 07:54 PM, Sean Dague wrote:
  On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:
  Radomir,
  An extra issue that i don't believe you've covered so far is about comment 
  ownership. I've just read an email on the list that follows a pattern that 
  i've heard many complaints about:
 -1 with a reasonable comment, submitter addresses the comment, reviewer 
  never comes back.
 
  Reviewers do need to allocate time to come back and follow up on the 
  answers to their comments.
 
  Perhaps there is an issue with the incentive system. You can earn karma by 
  doing a code review... certainly you want to incentivise developers that 
  help the project by improving the code quality. But if the incentive 
  system allows for "drive by shooting" code reviews, that can be a problem.
 
  It's not really an incentive system problem, this is some place where
  there are some gerrit limitations (especially when your list of reviewed
  code is long). Hopefully once we get a gerrit upgrade we can dashboard
  out some new items like that via the new rest API.
 
  I agree that reviewers could be doing better. But definitely also
  realize that part of this is just that there is *so* much code to review.
 
  Realize that most core reviewers aren't ignoring or failing to come back
  on patches intentionally. There is just *so* much of it. I feel guilty
  all the time by how big a review queue I have, but I also need a few
  hours a day not doing OpenStack (incredible to believe). This is where
  non core reviewers can really help in addressing the first couple of
  rounds of review to prune and improve the easy stuff.
 
  We're all in this together,
 
 Is there a way for Gerrit to only send email when action is required, 
 rather than on any change to any review you've touched?  If Gerrit sent 
 less mail, it would be easier to treat its mails as a critical call to 
 action to re-review.  (There's probably a way to use fancy message 
 filtering to accomplish this, but that would only work for people 
 willing/able to set up such filtering.)

I know you're discounting filtering here, but FWIW I filter on the email
body containing:

  Gerrit-Reviewer: Mark McLoughlin

so that I have all email related to reviews I'm subscribed to in a
single folder. I try hard to stay on top of this folder to avoid being a
drive-by-reviewer.

I group the mails by thread, so I don't mind all the email gerrit sends
out but if e.g. I wanted to only see notifications of new patch sets I'd
filter on:

  Gerrit-MessageType: newpatchset
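(Putting those two trailer lines together, the filtering amounts to a couple of string matches on the mail body. A rough sketch — only the `Gerrit-Reviewer` and `Gerrit-MessageType` trailer names come from the mail format quoted above; the function shape is my own illustration:)

```python
# Rough sketch of the filtering described above: keep a Gerrit mail when
# it announces a new patch set on a review the given person is
# subscribed to. Illustrative only; a real mail filter would live in
# the mail client or an IMAP script.


def gerrit_trailers(body):
    """Collect Gerrit-* trailer lines from a notification body."""
    trailers = {}
    for line in body.splitlines():
        if line.startswith("Gerrit-") and ":" in line:
            key, value = line.split(":", 1)
            trailers.setdefault(key, []).append(value.strip())
    return trailers


def is_new_patchset_for(body, reviewer):
    """True for 'newpatchset' mail on a change the reviewer follows."""
    t = gerrit_trailers(body)
    return ("newpatchset" in t.get("Gerrit-MessageType", [])
            and any(reviewer in r for r in t.get("Gerrit-Reviewer", [])))
```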

Mark.




Re: [openstack-dev] Bad review patterns

2013-11-09 Thread Sean Dague
Not that I know of. I've considered writing my own gerrit front end
mail service to do just that, because I agree, the current mail volume
and granularity is not very good. If I manage to carve time on it,
I'll do it on stackforge. Joe Gordon took a different approach and
wrote a front end client to mark review threads read that are past.

On Thu, Nov 7, 2013 at 8:40 PM, David Ripton drip...@redhat.com wrote:
 On 11/07/2013 07:54 PM, Sean Dague wrote:

 On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:

 Radomir,
 An extra issue that i don't believe you've covered so far is about
 comment ownership. I've just read an email on the list that follows a
 pattern that i've heard many complaints about:
 -1 with a reasonable comment, submitter addresses the comment,
 reviewer never comes back.

 Reviewers do need to allocate time to come back and follow up on the
 answers to their comments.

 Perhaps there is an issue with the incentive system. You can earn karma
 by doing a code review... certainly you want to incentivise developers that
 help the project by improving the code quality. But if the incentive system
 allows for "drive by shooting" code reviews, that can be a problem.


 It's not really an incentive system problem, this is some place where
 there are some gerrit limitations (especially when your list of reviewed
 code is long). Hopefully once we get a gerrit upgrade we can dashboard
 out some new items like that via the new rest API.

 I agree that reviewers could be doing better. But definitely also
 realize that part of this is just that there is *so* much code to review.

 Realize that most core reviewers aren't ignoring or failing to come back
 on patches intentionally. There is just *so* much of it. I feel guilty
 all the time by how big a review queue I have, but I also need a few
 hours a day not doing OpenStack (incredible to believe). This is where
 non core reviewers can really help in addressing the first couple of
 rounds of review to prune and improve the easy stuff.

 We're all in this together,


 Is there a way for Gerrit to only send email when action is required, rather
 than on any change to any review you've touched?  If Gerrit sent less mail,
 it would be easier to treat its mails as a critical call to action to
 re-review.  (There's probably a way to use fancy message filtering to
 accomplish this, but that would only work for people willing/able to set up
 such filtering.)

 --
 David Ripton   Red Hat   drip...@redhat.com





-- 
Sean Dague
http://dague.net



Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Radomir Dopieralski
On 06/11/13 23:07, Robert Collins wrote:
 On 6 November 2013 21:34, Radomir Dopieralski openst...@sheep.art.pl wrote:

 [...] Firstly, things
 like code duplication are a sliding scale, and I think it's ok for a
 reviewer to say 'these look similar, give it a go please'. If the
 reviewee tries, and it's no good - that's fine. But saying 'hey, I
 won't even try' - that's not so good. Many times we get drive-by
 patches and no follow up, so there is a learnt response to require
 full satisfaction with each patch that comes in. Regular contributors
 get more leeway here I think.

This is of course a gut feeling thing, and different people have
different thresholds for when to deduplicate code. I came up with that
pattern for myself, because my feelings about the code often didn't
match the feelings of the rest of the team. Please note that the fact
that I notice a bad pattern in my review doesn't immediately mean I
shouldn't do it -- just that it is suspect and I should rethink it. So,
if it's a blatant copy-paste of code, of course I will point it out. If
the patch includes code that could be easily replaced by an existing
utility function, that the author might not know about, of course I will
point it out. But if the code is similar to some other code in some
other part of the project, I will try to treat it the same way as when I
spot a bug that is not related to the patch during the review -- either
follow up with a patch of my own, or report a bug if I don't have time
for this (this has a nice side effect of producing the low-hanging-fruit
bugs that teach refactoring to newcomers). I think it's important to
remember that you can always commit to that project, so not all changes
need to be made by that single author (especially in OpenStack, where you
have the marvelous option of amending someone else's patch).

[...]
 This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing something, and then leaving no comments on it, because it's
  simply fine, or because we had no means to test something (see the first
 pattern).
 
 Core reviewers look for the /comments/ from people, not just the
 votes. A +1 from someone that isn't core is meaningless unless they
 are known to be a thoughtful code reviewer. A -1 with no comment is
 also bad, because it doesn't help the reviewee get whatever the issue
 is fixed.

 It's very much not OK to spend an hour reviewing something and then +1
 with no comment: if I, and I think any +2er across the project see a
 patch that needs an hour of review, with a commentless +1, we'll
 likely discount the +1 as being meaningful.

That is a very good point! We want reviews that have useful comments in
them, so it is indeed a very good rule of thumb to always ask yourself
"Is this comment that I am posting helpful?". What I meant here is that
if I do a review, and even after spending a lot of time on it, can't
come up with anything that would actually help the author make it a
better patch, then it is fine to not post anything at all, including no
score. I suppose it would be nice to +1 it if you find it basically
fine, but we didn't have separate +1s and +2s in my previous project.

-- 
Radomir Dopieralski



Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Jiri Tomasek

On 11/07/2013 08:25 AM, Daniel P. Berrange wrote:

On Thu, Nov 07, 2013 at 12:21:38AM +, Day, Phil wrote:

Leaving a mark.
===

You review a change and see that it is mostly fine, but you feel that since you
did so much work reviewing it, you should at least find
*something* wrong. So you find some nitpick and -1 the change just so that
they know you reviewed it.

This is quite obvious. Just don't do it. It's OK to spend an hour reviewing
something, and then leaving no comments on it, because it's simply fine, or
because we had no means to test something (see the first pattern).



Another one that comes into this category is adding a -1 which just says "I
agree with the other -1's in here."   If you have some additional perspective
and can expand on it then that's fine - otherwise it adds very little and is
just review count chasing.

I don't think that it is valueless as you describe. If multiple people
add a '-1' with the same comments, then as a core reviewer I will
consider that initial -1 to be a much stronger nack than if only one person
had added the -1. So I welcome people adding "I agree with blah" to any
review.


It's an unfortunate consequence of counting and publishing review stats that
having such a measure will inevitably also drive behaviour.

IMHO what this shows is not people trying to game the stats, but rather the
inadequacy of gerrit. We don't have any way to distinguish a -1 "minor nice
to have nitpick" from a -1 "serious code issue that is a must-fix". Adding
a -2 is really too heavyweight because it is sticky, and implies "do not
ever merge this".

It would be nice to be able to use '-2' for a serious must-fix issue without
it being sticky, and have a separate way for core reviewers to put a review
into a "block from being merged indefinitely" state - perhaps a new state
would be more useful, e.g. a "Blocked" state, to add to New, Open, Merged,
Abandoned.

Daniel
The comment describing the -1 should be enough to distinguish between 
minor nitpick and serious code issue IMHO. If it is a serious issue, 
other reviewers also giving -1 confirming the issue is probably a good 
thing. (Not with the minor nit though...)


Jirka



Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Robert Collins
On 7 November 2013 13:15, Day, Phil philip@hp.com wrote:




 Core reviewers look for the /comments/ from people, not just the votes. A
 +1 from someone that isn't core is meaningless unless they are known to be
 a thoughtful code reviewer. A -1 with no comment is also bad, because it
 doesn't help the reviewee get whatever the issue is fixed.

 It's very much not OK to spend an hour reviewing something and then +1
 with no comment: if I, and I think any +2er across the project, see a patch
 that needs an hour of review with a commentless +1, we'll likely discount
 the +1 as being meaningful.


 I don't really get what you're saying here Rob.   It seems to me an almost
 inevitable part of the review process that useful comments are going to be
 mostly negative. If someone has invested that amount of effort because the
 patch is complex, or it took them a while to work their way back into that
 part of the system, etc., but having given the code careful consideration
 they decide it's good - do you want comments in there saying "I really like
 your code", "Well done on fixing such a complex problem" or some such?

Negative comments are fine: I was saying that statistically, having an
hour-long patch (which BTW puts it waaay past the diminishing returns
on patch size tradeoff) to review and then having nothing at all to
say about it is super suspect. Sure, having nothing to say on a 10
line patch - hit a +1, move on. But something that takes an hour to
review? I'd expect at minimum the thing to prompt some suggestions
*even while it gets a +1*.

 I just don't see how you can use a lack or presence of positive feedback
 in a +1 as any sort of indication on the quality of that +1.   Unless you
 start asking reviewers to précis the change in their own words to show that
 they understood it, I don't really see how additional positive comments
 help in most cases.   Perhaps if you have some specific examples of this it
 would help to clarify.

It might even be negative feedback. Like: "this patch is correct but
the fact it had to be done as one patch shows our code structure here
is wonky. If you could follow up with something to let us avoid future
mammoth patches, that would be awesome."

Again, I'm talking about hour long reviews, which is the example Radomir gave.

Another way of presenting what I'm saying is this: Code reviews are a
dialogue. How often in a discussion with a live human would you have
no feedback at all, if they were reading a speech to you? You might go
'that was a great speech' (+2) and *still* have something to add. As
an observer given a very long speech, and an audience, I'd suspect
folk went to sleep if they didn't have commentary :)


-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Robert Collins
On 8 November 2013 00:02, Radomir Dopieralski openst...@sheep.art.pl wrote:

 I created a page on the wiki,
 https://wiki.openstack.org/wiki/CodeReviewGuidelines

 I put some initial content there, based on the discussion in this
 thread. Please feel free to discuss those points further here, and to
 amend that page and add to it.

Thank you. I thought we might have a similar page already, but I only found:
https://wiki.openstack.org/wiki/ReviewChecklist
[what to look for]
and
https://wiki.openstack.org/wiki/ReviewWorkflowTips
[what reviewees should do]

I suspect we probably need to tie all three together somehow.

 Any ideas of where we could put a link to it? I'm thinking about the
 Gerrit Workflow page, maybe also some pages specific to particular teams
 (I've seen there is a page with review tips for Nova).

So https://wiki.openstack.org/wiki/Gerrit_Workflow and the
ReviewWorkflowTips page above overlap a lot.

I think we should merge ReviewWorkflowTips into Gerrit_Workflow, and
link to the review checklist and guidelines pages from Gerrit_Workflow.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Sean Dague
On 11/07/2013 01:56 PM, James Bottomley wrote:
 On Thu, 2013-11-07 at 00:21 +, Day, Phil wrote:

 Leaving a mark.
 ===

 You review a change and see that it is mostly fine, but you feel that since
 you
 did so much work reviewing it, you should at least find
 *something* wrong. So you find some nitpick and -1 the change just so that
 they know you reviewed it.

 This is quite obvious. Just don't do it. It's OK to spend an hour reviewing
 something, and then leaving no comments on it, because it's simply fine, or
 because we had no means to test something (see the first pattern).



 Another one that comes into this category is adding a -1 which just says "I
 agree with the other -1's in here."   If you have some additional
 perspective and can expand on it then that's fine - otherwise it adds very
 little and is just review count chasing.

 It's an unfortunate consequence of counting and publishing review stats
 that having such a measure will inevitably also drive behaviour.
 
 Perhaps a source of the problem is early voting.  Feeling pressure to
 add a +/-1 (or even 2) without fully asking what's going on in the code
 leads to premature adjudication.
 
 For instance, I have to do a lot of reviews in SCSI; I'm a subject
 matter expert, so I do recognise some bad coding patterns and ask for
 them to be changed unconditionally, but a lot of the time I don't
 necessarily understand why the code was done in the way it was, so I
 ask.  Before I get the answer I'm not really qualified to judge the
 merits.

+1 to this.

Realistically if I have questions in the code and am unsure how I'd
score it I'll often leave a 0 review with questions inline to help me
understand. I consider this generally good form, and it's helpful to
everyone in the learning process about the code.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Sean Dague
On 11/07/2013 05:35 PM, Jiri Tomasek wrote:
 On 11/07/2013 08:25 AM, Daniel P. Berrange wrote:
 On Thu, Nov 07, 2013 at 12:21:38AM +, Day, Phil wrote:
 Leaving a mark.
 ===

 You review a change and see that it is mostly fine, but you feel
 that since you
 did so much work reviewing it, you should at least find
 *something* wrong. So you find some nitpick and -1 the change just
 so that
 they know you reviewed it.

 This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing
 something, and then leaving no comments on it, because it's simply
 fine, or
  because we had no means to test something (see the first pattern).


  Another one that comes into this category is adding a -1 which just
  says "I agree with the other -1's in here."   If you have some
  additional perspective and can expand on it then that's fine -
  otherwise it adds very little and is just review count chasing.
  I don't think that it is valueless as you describe. If multiple people
  add a '-1' with the same comments, then as a core reviewer I will
  consider that initial -1 to be a much stronger nack than if only one
  person had added the -1. So I welcome people adding "I agree with blah"
  to any review.

  It's an unfortunate consequence of counting and publishing review
  stats that having such a measure will inevitably also drive behaviour.
  IMHO what this shows is not people trying to game the stats, but
  rather the inadequacy of gerrit. We don't have any way to distinguish
  a -1 "minor nice to have nitpick" from a -1 "serious code issue that
  is a must-fix". Adding a -2 is really too heavyweight because it is
  sticky, and implies "do not ever merge this".

  It would be nice to be able to use '-2' for a serious must-fix issue
  without it being sticky, and have a separate way for core reviewers to
  put a review into a "block from being merged indefinitely" state -
  perhaps a new state would be more useful, e.g. a "Blocked" state, to
  add to New, Open, Merged, Abandoned.

 Daniel
 The comment describing the -1 should be enough to distinguish between
 minor nitpick and serious code issue IMHO. If it is a serious issue,
 other reviewers also giving -1 confirming the issue is probably a good
 thing. (Not with the minor nit though...)

Agreed, the comments are there for a reason. It's also handy to provide
forward looking statements to other reviewers in case you don't get back
to the review queue right away.

For instance when I know that might happen I tend to leave comments like
"I'm +2 after the following  is addressed." That's also extremely
helpful to provide incentive to the submitter to make those kinds of
changes.

Not all submissions fit into this kind of category; some really need to
iterate through sets of issues to get ready for the code base. But when
they are close, defining the end state is helpful to the contributor and
to other reviewers, who then won't feel like they need to wait for your
comments on the updated review (because you clearly explained what you
thought needed to change).

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Sean Dague
On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:
 Radomir,
 An extra issue that i don't believe you've covered so far is about comment 
 ownership. I've just read an email on the list that follows a pattern that 
 i've heard many complaints about:
   -1 with a reasonable comment, submitter addresses the comment, reviewer 
 never comes back.
 
 Reviewers do need to allocate time to come back and follow up on the answers 
 to their comments.
 
 Perhaps there is an issue with the incentive system. You can earn karma by 
 doing a code review... certainly you want to incentivise developers that help 
 the project by improving the code quality. But if the incentive system allows 
 for "drive by shooting" code reviews, that can be a problem.

It's not really an incentive system problem, this is some place where
there are some gerrit limitations (especially when your list of reviewed
code is long). Hopefully once we get a gerrit upgrade we can dashboard
out some new items like that via the new rest API.

I agree that reviewers could be doing better. But definitely also
realize that part of this is just that there is *so* much code to review.

Realize that most core reviewers aren't ignoring or failing to come back
on patches intentionally. There is just *so* much of it. I feel guilty
all the time by how big a review queue I have, but I also need a few
hours a day not doing OpenStack (incredible to believe). This is where
non core reviewers can really help in addressing the first couple of
rounds of review to prune and improve the easy stuff.

We're all in this together,

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Bad review patterns

2013-11-07 Thread David Ripton

On 11/07/2013 07:54 PM, Sean Dague wrote:

On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:

Radomir,
An extra issue that i don't believe you've covered so far is about comment 
ownership. I've just read an email on the list that follows a pattern that i've 
heard many complaints about:
-1 with a reasonable comment, submitter addresses the comment, reviewer 
never comes back.

Reviewers do need to allocate time to come back and follow up on the answers to 
their comments.

Perhaps there is an issue with the incentive system. You can earn karma by doing a code 
review... certainly you want to incentivise developers that help the project by improving 
the code quality. But if the incentive system allows for 'drive-by shooting' 
code reviews, that can be a problem.


It's not really an incentive system problem; this is some place where
there are gerrit limitations (especially when your list of reviewed
code is long). Hopefully once we get a gerrit upgrade we can dashboard
out some new items like that via the new REST API.

I agree that reviewers could be doing better. But definitely also
realize that part of this is just that there is *so* much code to review.

Realize that most core reviewers aren't ignoring or failing to come back
on patches intentionally. There is just *so* much of it. I feel guilty
all the time about how big a review queue I have, but I also need a few
hours a day not doing OpenStack (incredible to believe). This is where
non core reviewers can really help in addressing the first couple of
rounds of review to prune and improve the easy stuff.

We're all in this together,


Is there a way for Gerrit to only send email when action is required, 
rather than on any change to any review you've touched?  If Gerrit sent 
less mail, it would be easier to treat its mails as a critical call to 
action to re-review.  (There's probably a way to use fancy message 
filtering to accomplish this, but that would only work for people 
willing/able to set up such filtering.)
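One sketch of such filtering (with the caveat that the exact header values are an assumption to verify against your server: Gerrit tags its outgoing notification mail with an X-Gerrit-MessageType header, but the value set varies by version) is to classify mail by that header rather than by subject:

```python
import email

# Values observed in Gerrit notification mail; treat this set as an
# assumption to be checked against your server's actual output.
ACTIONABLE = {"newchange", "newpatchset", "comment"}

def needs_attention(raw_message):
    """Return True if a Gerrit mail likely calls for (re-)review."""
    msg = email.message_from_string(raw_message)
    kind = msg.get("X-Gerrit-MessageType", "").lower()
    return kind in ACTIONABLE

# A merge notification can be archived unread; a new patch set cannot.
merged = "X-Gerrit-MessageType: merged\nSubject: Change merged\n\n..."
new_ps = "X-Gerrit-MessageType: newpatchset\nSubject: Patch set 2\n\n..."
```

Run behind IMAP or procmail, something like this would let only the actionable subset reach the inbox, making each remaining mail a genuine call to action.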


--
David Ripton   Red Hat   drip...@redhat.com



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Harshad Nakil
+1

Regards
-Harshad


 On Nov 6, 2013, at 12:36 AM, Radomir Dopieralski openst...@sheep.art.pl 
 wrote:

 Hello,

 I'm quite new in the OpenStack project, but I love it already. What is
 especially nifty is the automated review system -- I'm really impressed.
 I'm coming from a project in which we also did reviews of every change
 -- although it was mostly manual, and just one review was enough to
 merge -- and at some point in that project I noticed that it is very
 easy to give reviews that are unhelpful, frustrating and just get in the
 way of the actual work. I started paying attention to how I am reviewing
 code, and I managed to come up with several patterns that are bad. Once
 I know the patterns, it's easier to recognize when I'm doing something
 wrong and rethink the review. I would like to share the patterns that I
 noticed.


 Not sure if that works...
 =

 You see some suspicious piece of code, and you are not sure if it is
 correct or not. So you point it out in the review and -1 it, demanding
 that the author rechecks it and/or prove that it indeed works.

 It's your job as a reviewer to check such things, don't put all the
 effort on the author. They probably already checked that this code
 works, and possibly have even written tests for it. If you are not
 familiar with the technology enough to tell whether it's good or not,
 and have no means of testing it yourself, you shouldn't be reviewing it.
 If you do have the means to test it or can find the piece of
 documentation that says that it shouldn't be done -- do it as a part of
 the review.


 You ain't gonna need it.
 

 The code contains some parts that are potentially reusable later or in
 different areas, so you -1 it and tell the author to move them into
 reusable functions.

 The fact that you think something is reusable doesn't necessarily mean
 it is, and overly general code is harder to maintain. Let something
 repeat a couple of times just to be sure it actually is reusable. Once
 you find a repeating pattern, you can refactor the code to use a
 generalized function in its place -- in a separate change.


 There is an old bug here.
 =

 While you review the submitted code, you notice something wrong in the
 code not immediately related to the change you are reviewing. You -1 the
 change and tell the author to fix that bug, or formatting issue, or
 typo, or whatever.

 That fix has nothing to do with the change you are reviewing. The
 correct thing to do is to make a mental note of it, and fix it in a
 separate change -- possibly even finding more instances of it or coming
 up with a much better fix after seeing more code.


 Guess what I mean.
 ==

 You notice a pep8 violation, or pep257 violation, or awkward wording of
 a comment or docstring, or a clumsy piece of code. You -1 the change and
 just tell author to fix it.

 It's not so easy to guess what you mean, and in the case of a clumsy piece
 of code, it's not even certain that better code can be used instead. So
 always provide an example of what you would rather want to see. So
 instead of pointing to indentation rules, just show properly indented
 code. Instead of talking about grammar or spelling, just type the
 corrected comment or docstring. Finally, instead of saying "use list
 comprehension here" or "don't use has_key", just type your proposal of
 how the code should look. Then we have something concrete to talk
 about. Of course, you can also say why you think this is better, but an
 example is very important. If you are not sure how the improved code
 would look, just let it go; chances are it would look even worse.


 Not a complete fix.
 ===

 The code fixes some problems (for example, fixes formatting and enables
 some flake8 checks), but leaves some other, related problems still not
 fixed. You -1 it and demand that the author adds fixes for the related
 problem.

 Don't live your coding career through the authors. If their fix is good
 and correct and improves the code, let it in, even if you see more
 opportunities to improve it. You can add those additional fixes yourself
 in a separate change. Or, if you don't have time or skill to do that,
 report a bug about the remaining problems and someone else will do it,
 but don't hold the author hostage with your review because you think he
 didn't do enough.


 Leaving a mark.
 ===

 You review a change and see that it is mostly fine, but you feel that
 since you did so much work reviewing it, you should at least find
 *something* wrong. So you find some nitpick and -1 the change just so
 that they know you reviewed it.

 This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing something, and then leaving no comments on it, because it's
 simply fine, or because we had no means to test something (see the first
 pattern).



 Those are the things I'm careful about myself. I'm 

Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Christopher Armstrong
On Wed, Nov 6, 2013 at 2:34 AM, Radomir Dopieralski
openst...@sheep.art.plwrote:

 Hello,

 I'm quite new in the OpenStack project, but I love it already. What is
 especially nifty is the automated review system -- I'm really impressed.
 I'm coming from a project in which we also did reviews of every change
 -- although it was mostly manual, and just one review was enough to
 merge -- and at some point in that project I noticed that it is very
 easy to give reviews that are unhelpful, frustrating and just get in the
 way of the actual work. I started paying attention to how I am reviewing
 code, and I managed to come up with several patterns that are bad. Once
 I know the patterns, it's easier to recognize when I'm doing something
 wrong and rethink the review. I would like to share the patterns that I
 noticed.



Agreed on all points. I think Gerrit is nice in that it automates a lot of
stuff, but unfortunately the workflow has not encouraged the best behavior
for reviewers. This is a good list to follow -- but how can we be sure
people will? This mailing list thread will only be seen by a small number
of reviewers over the life of the project, I'm sure.


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Miguel Angel
All the points sound quite reasonable. I agree with Chris, the more
reviewers read this, the better will be our review quality.

Do we have some kind of reviewing guide? If we don't, this could be a
start.

--
irc: ajo  / mangelajo
Miguel Angel Ajo Pelayo
+34 636 52 25 69
skype: ajoajoajo


2013/11/6 Christopher Armstrong chris.armstr...@rackspace.com

 On Wed, Nov 6, 2013 at 2:34 AM, Radomir Dopieralski 
 openst...@sheep.art.pl wrote:

 Hello,

 I'm quite new in the OpenStack project, but I love it already. What is
 especially nifty is the automated review system -- I'm really impressed.
 I'm coming from a project in which we also did reviews of every change
 -- although it was mostly manual, and just one review was enough to
 merge -- and at some point in that project I noticed that it is very
 easy to give reviews that are unhelpful, frustrating and just get in the
 way of the actual work. I started paying attention to how I am reviewing
 code, and I managed to come up with several patterns that are bad. Once
 I know the patterns, it's easier to recognize when I'm doing something
 wrong and rethink the review. I would like to share the patterns that I
 noticed.



 Agreed on all points. I think Gerrit is nice in that it automates a lot of
 stuff, but unfortunately the workflow has not encouraged the best behavior
 for reviewers. This is a good list to follow -- but how can we be sure
 people will? This mailing list thread will only be seen by a small number
 of reviewers over the life of the project, I'm sure.


 --
 IRC: radix
 Christopher Armstrong
 Rackspace





Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Sergey Skripnick


This definitely should be somewhere in wiki or blog and in the bookmarks.



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Pitucha, Stanislaw Izaak
How about putting it on a wiki and linking it from the top bar of gerrit 
itself? (call it review guidelines for example)
That way, even people who didn't look for it explicitly could find it.

Regards,
Stanisław Pitucha
Cloud Services
Hewlett Packard

-Original Message-
From: Sergey Skripnick [mailto:sskripn...@mirantis.com]
Sent: Wednesday, November 06, 2013 6:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Bad review patterns


This definitely should be somewhere in wiki or blog and in the bookmarks.



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Andrew Woodward
Great thread and very insightful. Yes; please make this more
accessible to other reviewers. Adding this to the wiki is a great
start.

Andrew
Mirantis

On Wed, Nov 6, 2013 at 11:05 AM, Pitucha, Stanislaw Izaak
stanislaw.pitu...@hp.com wrote:
 How about putting it on a wiki and linking it from the top bar of gerrit 
 itself? (call it review guidelines for example)
 That way, even people who didn't look for it explicitly could find it.

 Regards,
 Stanisław Pitucha
 Cloud Services
 Hewlett Packard

 -Original Message-
 From: Sergey Skripnick [mailto:sskripn...@mirantis.com]
 Sent: Wednesday, November 06, 2013 6:50 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Bad review patterns


 This definitely should be somewhere in wiki or blog and in the bookmarks.




-- 
If google has done it, Google did it right!



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Ravi Chunduru
Well said. This will encourage new developers.


On Wed, Nov 6, 2013 at 11:26 AM, Andrew Woodward xar...@gmail.com wrote:

 Great thread and very insightful. Yes; please make this more
 accessible to other reviewers. Adding this to the wiki is a great
 start.

 Andrew
 Mirantis

 On Wed, Nov 6, 2013 at 11:05 AM, Pitucha, Stanislaw Izaak
 stanislaw.pitu...@hp.com wrote:
  How about putting it on a wiki and linking it from the top bar of gerrit
 itself? (call it review guidelines for example)
  That way, even people who didn't look for it explicitly could find it.
 
  Regards,
  Stanisław Pitucha
  Cloud Services
  Hewlett Packard
 
  -Original Message-
  From: Sergey Skripnick [mailto:sskripn...@mirantis.com]
  Sent: Wednesday, November 06, 2013 6:50 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Bad review patterns
 
 
  This definitely should be somewhere in wiki or blog and in the bookmarks.
 



 --
 If google has done it, Google did it right!





-- 
Ravi


Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Eugene Nikanorov
Hi!

I would like to disagree with some of the points.
First of all, a '-1' mark may be perceived wrongly, especially among new
contributors.
-1 doesn't mean reviewers don't want your code (which is -2); it means they
are either not sure it is good enough, or they see how you could make it better.

 Not sure if that works
If you as a reviewer have a gut feeling ('not sure') that it may not work -
put a -1.
Let the submitter explain it to you. You can change your score later without
requiring a change to the patch.
Also, there are plenty of cases when the code cannot be tested because
it requires a specific environment to run.
I think it's ok to put -1 in case of uncertainty. The only thing you should
care about is that such a -1 is not 'finished': you don't require the
submitter to change anything, so you need to check whether your concerns
were addressed and potentially change the score.

On point #2:
  Let something repeat a couple of times just to be sure it actually is
reusable.
I think code repetition is not desirable in most cases, and only
occasionally does repeating code get merged. That happens especially when
someone didn't put reusable code in a common place, so others are unaware
of its existence. Instead of doing refactoring later on (who will do
this?), just rely on the reviewer's opinion and check whether
generalization could be done. And this can grow as a snowball (oh, he has
copy-pasted his stuff, why shouldn't I do the same?), so the earlier it is
caught - the better.

On other points: the patch is required to be self-consistent. It needs to
solve something that is stated in the bug/commit message and no more (and
hopefully not less) than that. So the scope really matters. I haven't seen
many comments from reviewers requesting occasional fixes. Although
sometimes it makes sense to ask for them, especially if they are related,
reasonable, and the patch is going to be improved anyway (due to some other
issues). It also may happen that the submitter has missed certain things
that also fall in the scope of the work he is doing.

Having said that, I believe that it's not very difficult to get through a
review where you get -1 for code style issues.
When you have your code, IDE, git and gerrit at hand, it is a matter of
minutes (rarely hours) to resolve those and resubmit.

The real issue with the review process is that reviewers should follow up
on a mark once they have put it, and they often don't, for various reasons.
The worst thing to do is to put -1 and go away without leaving the
submitter a chance to explain what he meant and wanted.
So once a reviewer has started a review (put a mark), it is his
responsibility to work further on this review.
A persistent -1 should indicate that reviewer and submitter could not agree
on fundamentals of the patch (design, workflow, etc.), which, in turn,
means that a discussion should be held within the community before work on
the patch is continued.

Thanks,
Eugene.




On Wed, Nov 6, 2013 at 11:26 PM, Andrew Woodward xar...@gmail.com wrote:

 Great thread and very insightful. Yes; please make this more
 accessible to other reviewers. Adding this to the wiki is a great
 start.

 Andrew
 Mirantis

 On Wed, Nov 6, 2013 at 11:05 AM, Pitucha, Stanislaw Izaak
 stanislaw.pitu...@hp.com wrote:
  How about putting it on a wiki and linking it from the top bar of gerrit
 itself? (call it review guidelines for example)
  That way, even people who didn't look for it explicitly could find it.
 
  Regards,
  Stanisław Pitucha
  Cloud Services
  Hewlett Packard
 
  -Original Message-
  From: Sergey Skripnick [mailto:sskripn...@mirantis.com]
  Sent: Wednesday, November 06, 2013 6:50 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Bad review patterns
 
 
  This definitely should be somewhere in wiki or blog and in the bookmarks.
 



 --
 If google has done it, Google did it right!




Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Robert Collins
On 6 November 2013 21:34, Radomir Dopieralski openst...@sheep.art.pl wrote:
 Hello,

 I'm quite new in the OpenStack project, but I love it already. What is
 especially nifty is the automated review system -- I'm really impressed.
 I'm coming from a project in which we also did reviews of every change
 -- although it was mostly manual, and just one review was enough to
 merge -- and at some point in that project I noticed that it is very
 easy to give reviews that are unhelpful, frustrating and just get in the
 way of the actual work. I started paying attention to how I am reviewing
 code, and I managed to come up with several patterns that are bad. Once
 I know the patterns, it's easier to recognize when I'm doing something
 wrong and rethink the review. I would like to share the patterns that I
 noticed.

Thank you for this, very cool. I have two nits ;). Firstly, things
like code duplication are a sliding scale, and I think it's ok for a
reviewer to say 'these look similar, give it a go please'. If the
reviewee tries and it's no good - that's fine. But saying 'hey, I
won't even try' - that's not so good. Many times we get drive-by
patches and no follow-up, so there is a learnt response to require
full satisfaction with each patch that comes in. Regular contributors
get more leeway here, I think.

Secondly, this:


 Leaving a mark.
 ===

 You review a change and see that it is mostly fine, but you feel that
 since you did so much work reviewing it, you should at least find
 *something* wrong. So you find some nitpick and -1 the change just so
 that they know you reviewed it.

That's indeed not cool. Perhaps a 0 with the nitpick. On the other
hand, perhaps the nitpick actually matters. If it's a nitpick it's
going to be what - a couple of minutes to fix and push?

... I think the real concern is that by pushing it up again you go to
the back of the queue for reviews, so maybe we should talk about that
instead. We don't want backpressure on folk polishing a patch on
request because of review latency.

 This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing something, and then leaving no comments on it, because it's
 simply fine, or because we had no means to test something (see the first
 pattern).

Core reviewers look for the /comments/ from people, not just the
votes. A +1 from someone that isn't core is meaningless unless they
are known to be a thoughtful code reviewer. A -1 with no comment is
also bad, because it doesn't help the reviewee get whatever the issue
is fixed.

It's very much not OK to spend an hour reviewing something and then +1
with no comment: if I, and I think any +2er across the project, see a
patch that needed an hour of review carrying a commentless +1, we'll
likely not count that +1 as meaningful.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Day, Phil


 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: 06 November 2013 22:08
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Bad review patterns
 
 On 6 November 2013 21:34, Radomir Dopieralski openst...@sheep.art.pl
 wrote:
  Hello,
 
 Secondly, this:
 
 
  Leaving a mark.
  ===
 
  You review a change and see that it is mostly fine, but you feel that
  since you did so much work reviewing it, you should at least find
  *something* wrong. So you find some nitpick and -1 the change just so
  that they know you reviewed it.
 
 That's indeed not cool. Perhaps a 0 with the nitpick. On the other hand,
 perhaps the nitpick actually matters. If it's a nitpick it's going to be
 what - a couple of minutes to fix and push?
 
 ... I think the real concern is that by pushing it up again you go to the 
 back of
 the queue for reviews, so maybe we should talk about that instead. We
 don't want backpressure on folk polishing a patch on request because of
 review latency.
 
  This is quite obvious. Just don't do it. It's OK to spend an hour
  reviewing something, and then leaving no comments on it, because it's
  simply fine, or because we had no means to test something (see the
  first pattern).
 
 Core reviewers look for the /comments/ from people, not just the votes. A
 +1 from someone that isn't core is meaningless unless they are known to be
 a thoughtful code reviewer. A -1 with no comment is also bad, because it
 doesn't help the reviewee get whatever the issue is fixed.
 
 It's very much not OK to spend an hour reviewing something and then +1
 with no comment: if I, and I think any +2er across the project, see a
 patch that needed an hour of review carrying a commentless +1, we'll
 likely not count that +1 as meaningful.
 

I don't really get what you're saying here Rob.   It seems to me an almost
inevitable part of the review process that useful comments are going to be
mostly negative. If someone has invested that amount of effort because the
patch is complex, or it took them a while to work their way back into that
part of the system, etc., but having given the code careful consideration
they decide it's good - do you want comments in there saying "I really like
your code", "Well done on fixing such a complex problem", or some such?

I just don't see how you can use the lack or presence of positive feedback
in a +1 as any sort of indication of the quality of that +1.   Unless you
start asking reviewers to précis the change in their own words to show that
they understood it, I don't really see how additional positive comments
help in most cases.   Perhaps some specific examples of this would help to
clarify.

Phil


 



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread David Stanek
On Wed, Nov 6, 2013 at 7:21 PM, Day, Phil philip@hp.com wrote:

 
  Leaving a mark.
  ===
 
  You review a change and see that it is mostly fine, but you feel that
 since you
  did so much work reviewing it, you should at least find
  *something* wrong. So you find some nitpick and -1 the change just so
 that
  they know you reviewed it.
 
  This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing
  something, and then leaving no comments on it, because it's simply fine,
 or
  because we had no means to test something (see the first pattern).
 
 

 Another one that comes into this category is adding a -1 which just says
 I agree with
 the other -1's in here.   If you have some additional perspective and can
 expand on
 it then that's fine - otherwise it adds very little and is just review
 count chasing.

 It's an unfortunate consequence of counting and publishing review stats
 that having
 such a measure will inevitable also drive behavour.


I agree that having no comment with a +1 is OK.  If I think the code looks
good I'll +1 it.  If I think the code could be better I'll -1 it and leave
comments.

I don't think that leaving a -1 with a 'what they said' comment is a bad
thing.  As the developer writing the patch it's helpful to know there is a
consensus and it's not just one person requesting a change.

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


Re: [openstack-dev] Bad review patterns

2013-11-06 Thread James Bottomley
On Thu, 2013-11-07 at 00:21 +, Day, Phil wrote:
  
  Leaving a mark.
  ===
  
  You review a change and see that it is mostly fine, but you feel that since 
  you
  did so much work reviewing it, you should at least find
  *something* wrong. So you find some nitpick and -1 the change just so that
  they know you reviewed it.
  
  This is quite obvious. Just don't do it. It's OK to spend an hour reviewing
  something, and then leaving no comments on it, because it's simply fine, or
  because we had no means to test something (see the first pattern).
  
  
 
 Another one that comes into this category is adding a -1 which just says I 
 agree with
 the other -1's in here.   If you have some additional perspective and can 
 expand on
 it then that's fine - otherwise it adds very little and is just review count 
 chasing.
 
 It's an unfortunate consequence of counting and publishing review stats that 
 having
 such a measure will inevitable also drive behavour.

Perhaps a source of the problem is early voting.  Feeling pressure to
add a +/-1 (or even 2) without fully asking what's going on in the code
leads to premature adjudication.

For instance, I have to do a lot of reviews in SCSI; I'm a subject
matter expert, so I do recognise some bad coding patterns and ask for
them to be changed unconditionally, but a lot of the time I don't
necessarily understand why the code was done in the way it was, so I
ask.  Before I get the answer I'm not really qualified to judge the
merits.

James





Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Daniel P. Berrange
On Thu, Nov 07, 2013 at 12:21:38AM +, Day, Phil wrote:
  
  Leaving a mark.
  ===
  
  You review a change and see that it is mostly fine, but you feel that since 
  you
  did so much work reviewing it, you should at least find
  *something* wrong. So you find some nitpick and -1 the change just so that
  they know you reviewed it.
  
  This is quite obvious. Just don't do it. It's OK to spend an hour reviewing
  something, and then leaving no comments on it, because it's simply fine, or
  because we had no means to test something (see the first pattern).
  
  
 
 Another one that comes into this category is adding a -1 which just says I 
 agree with
 the other -1's in here.   If you have some additional perspective and can 
 expand on
 it then that's fine - otherwise it adds very little and is just review count 
 chasing.

I don't think that it is as valueless as you describe. If multiple people
add '-1' with the same comment as mine, then as a core reviewer I will
consider that initial -1 to be a much stronger nack than if only one person
had added it. So I welcome people adding 'I agree with blah' to any
review.

 It's an unfortunate consequence of counting and publishing review stats that 
 having
 such a measure will inevitable also drive behavour.

IMHO what this shows is not people trying to game the stats, but rather the
inadequacy of gerrit. We don't have any way to distinguish a -1 for a
minor, nice-to-have nitpick from a -1 for a serious code issue that is a
must-fix. Adding a -2 is really too heavyweight because it is sticky, and
implies "do not ever merge this".

It would be nice to be able to use '-2' for a serious must-fix issue
without it being sticky, and to have a separate way for core reviewers to
put a review into a "blocked from being merged indefinitely" state -
perhaps a new state would be more useful, e.g. a Blocked state, to add to
New, Open, Merged, Abandoned.
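For what it's worth, newer Gerrit releases let a project define extra review labels in the project.config file on its refs/meta/config branch, so the non-sticky blocking state Daniel describes is at least expressible there. A sketch only: the "Blocked" label name is invented, and the function and option names come from later Gerrit documentation, not necessarily the version this thread was written against:

```ini
[label "Blocked"]
    # AnyWithBlock: any -1 on this label blocks submission, but the
    # vote is an ordinary per-patch-set score, not a sticky -2.
    function = AnyWithBlock
    value = -1 Blocked until the raised concerns are addressed
    value =  0 Not blocked
    copyAllScoresOnTrivialRebase = false
```

Whether that beats making -2 non-sticky is a policy question, but it suggests the blocked state may not need new change states in Gerrit, just a label.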

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
