Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-10 Thread Thierry Carrez
Robert Collins wrote:
 I think you have a very different definition of -core to the rest of
 OpenStack. I was actually somewhat concerned about the '+2 guard the
 gate' stuff at the summit because it's so easily misinterpreted - and
 there is a meme going around (I don't know if it's true or not) that
 some people are assessed - performance review stuff within vendor
 organisations - on becoming core reviewers.
 
 Core reviewer is not intended to be a gateway to getting features or
 ideas into OpenStack projects. It is solely a volunteered contribution
 to the project: helping the project accept patches with confidence
 about their long term integrity: providing explanation and guidance to
 people that want to contribute patches so that their patch can be
 accepted.

I +1ed some of Steven's original email so I guess I should also +1 this,
because it is spot-on.

From the very beginning in OpenStack we tried hard not to create a caste
of elite developers (or committers) with extra rights over lowly
developers. You shouldn't have to be a core reviewer to be influential
on a project. Core reviewers are a group of people who sign up to do an
important share of quality code reviews. It's a duty, not a right. Some
people prove to be consistently good at it, and our process requires
that at least two of those people approve a patch before it can finally
make it, but that's a process detail. In particular, -1 reviews should
not be blatantly ignored and +2ed.

Personally I'm not trying to be a core reviewer because I don't have
enough time for that, and I found other ways of making myself useful to
OpenStack that are, I hope, at least as valid. That's why I thought
creating VIP parties for +2 reviewers (or giving them special badges or
T-shirts) sends the wrong message, and encourages people to hang
on to the extra rights associated with the duty.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-10 Thread Chmouel Boudjnah
On Tue, Dec 10, 2013 at 11:26 AM, Thierry Carrez thie...@openstack.org wrote:

 That's why I thought
 creating VIP parties for +2 reviewers (or giving them special badges or
 T-shirts) sends the wrong message, and encourages people to hang
 on to the extra rights associated with the duty.



+1 million

Chmouel.


Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Zane Bitter

On 09/12/13 06:31, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/name & shame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.


+1

Fun fact: due to the quirks of how Gerrit produces the JSON data dump, 
it's not actually possible for the reviewstats tools to count +0 
reviews. So, for example, one can juice one's review stats by actively 
obstructing someone else's work (voting -1) when a friendly comment 
would have sufficed. This is one of many ways in which metrics offer 
perverse incentives.
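
(To make that concrete - a minimal sketch, not the actual reviewstats
code, and the JSON shape here is only an approximation of Gerrit's query
output. The point is that a zero-value vote never produces an approval
record, so a comment-only review is invisible to any tally like this:)

    import json
    from collections import Counter

    # Rough approximation of one change from a Gerrit JSON dump; the
    # exact field layout is illustrative only.
    sample = '''{"project": "openstack/heat", "patchSets": [{"number": 1,
      "approvals": [
        {"type": "Code-Review", "value": "-1", "by": {"username": "reviewer_a"}},
        {"type": "Code-Review", "value": "2", "by": {"username": "reviewer_b"}}
      ]}]}'''

    change = json.loads(sample)
    tally = Counter()
    for patchset in change["patchSets"]:
        # A "+0, friendly comment" review leaves no approval record at
        # all, so it can never be counted here, whatever the script does.
        for approval in patchset.get("approvals", []):
            if approval["type"] == "Code-Review":
                tally[(approval["by"]["username"], approval["value"])] += 1

    print(tally)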


Statistics can be useful. They can be particularly useful *in the 
aggregate*. But as soon as you add a closed feedback loop you're no 
longer measuring what you originally thought - mostly you're just 
measuring the gain of the feedback loop.
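
(A toy model of that last point, with invented numbers: hold the true
quality constant and let behaviour chase last period's published number;
the reading settles at quality / (1 - gain), so what you read off is
mostly the loop gain, not the quality:)

    # Toy model, not data: true quality is constant, but each period the
    # published metric also reflects behaviour chasing last period's value.
    quality = 1.0   # the thing we wanted to measure (held constant)
    gain = 0.5      # how strongly people respond to the published number

    metric = quality
    for month in range(1, 9):
        metric = quality + gain * metric   # the closed feedback loop
        print(f"month {month}: metric = {metric:.3f}, true quality = {quality}")
    # metric -> quality / (1 - gain) = 2.0: the reading is set by the gain.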



So if you're aiming for heat-core nomination, here's my personal wish-list,
but hopefully others can provide their input and we can update the wiki with
the resulting requirements/guidelines:

- Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.

- Send patches.  Some folks argue that -core membership is only about
   reviews; I disagree - there are many aspects of reviews which require
   deep knowledge of the code, e.g. spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our stylistic conventions.


I agree, though I have also heard a lot of folks say it should be just 
about the reviews. Of course the job of core is reviewing - to make sure 
good changes get in and bad changes get turned into good changes. But 
there are few better ways to acquire and demonstrate the familiarity 
with the codebase and conventions of the project necessary to be an 
effective reviewer than to submit patches. It makes no sense to blind 
ourselves to code contributions when considering whom to add to core - a 
single patch contains far more information than a thousand "+1, no 
comment" or "-1, typo in commit message" reviews.



- Fix bugs.  Related to the above, help us fix real problems by testing,
   reporting bugs, and fixing them, or take an existing bug and post a patch
   fixing it.  Ask an existing team member to direct you if you're not sure
   which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
   sometimes worthwhile, but make sure that's not all you do, as we need
   -core folk who can find, report, fix and review real user-impacting
   problems (not just new features).  This is also a great way to build
   trust and knowledge if you're aiming to contribute features to Heat.

- Engage in discussions related to the project (here on the ML, helping
   users on the general list, in #heat on Freenode, attend our weekly
   meeting if it's not an anti-social time in your TZ)

Anyone have any more thoughts to add here?


+1

The way to be recognised in an Open Source project should be to 
consistently add value to the community. Concentrate on that and the 
stats will look after themselves.


cheers,
Zane.



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2013-12-09 03:31:36 -0800:
 Hi all,
 
 So I've been getting concerned about $subject recently, and based on some
 recent discussions so have some other heat-core folks, so I wanted to start
 a discussion where we can agree and communicate our expectations related to
 nomination for heat-core membership (because we do need more core
 reviewers):
 
 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/name & shame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.
 

This is quite misleading, so please do put the TripleO reference in
context:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016186.html
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016232.html

Reading the text of those two I think you can see that while the stats
are a tool Robert is using to find the good reviewers, they are not the
principal metric.

I also find it quite frustrating that you are laying accusations of
stats-seeking without proof. That is just spreading FUD. I'm sure that
is not what you want to do, so I'd like to suggest that we not accuse our
community of any kind of cheating or gaming of the system without
actual proof. I would also suggest that these accusations be made in
private and dealt with directly rather than as broad passive-aggressive
notes on the mailing list.

 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.


Not sure I agree that it absolutely can't, but it certainly isn't
something these stats are even meant to do.

We, the other reviewers, must keep tabs on our aspiring core reviewers and
try to rate them ourselves based on whether or not they're spotting the
problems we would spot, and whether or not they're also upholding the
culture we want to foster in our community. We express our rating of
these people when voting on a nomination in the mailing list.

So what you're saying is, there is more to our votes than the mechanical
number. I'd agree 100%. However, I think the numbers _do_ let people
know where they stand in one very limited aspect versus the rest of the
community.

I would actually find it interesting if we had a meta-gerrit that asked
us to review the reviews. This type of system works fantastically for
Stack Exchange. That would give us a decent mechanical number as well.
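
(A hypothetical sketch of what that mechanical number could look like -
the ratings, names and 1-5 scale are all invented, since no such
meta-gerrit exists:)

    from collections import defaultdict
    from statistics import mean

    # Hypothetical peer ratings: (reviewer, 1-5 score a peer gave one of
    # their reviews), the way Stack Exchange lets users rate answers.
    ratings = [("reviewer_a", 5), ("reviewer_a", 4), ("reviewer_a", 5),
               ("reviewer_b", 2), ("reviewer_b", 3)]

    by_reviewer = defaultdict(list)
    for reviewer, score in ratings:
        by_reviewer[reviewer].append(score)

    # A quality-weighted standing rather than a raw review count.
    for reviewer, scores in sorted(by_reviewer.items()):
        print(f"{reviewer}: {mean(scores):.2f} average over {len(scores)} rated reviews")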

 So if you're aiming for heat-core nomination, here's my personal wish-list,
 but hopefully others can provide their input and we can update the wiki with
 the resulting requirements/guidelines:
 
 - Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.
 

Disagree. I am totally fine having somebody in core who is really good
at finding all of the trivial cosmetic issues. Their reviews should mean
that the second +2'er is looking at code free of trivial and cosmetic
issues that distract from the bigger ones.

 - Send patches.  Some folks argue that -core membership is only about
   reviews; I disagree - there are many aspects of reviews which require
   deep knowledge of the code, e.g. spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our stylistic conventions.
 

The higher the bar goes, the fewer reviewers we will have. There are
plenty of people that will find _tons_ of real issues but won't submit
very many patches, if any. However, I think there isn't any value in
arguing over this point as most of our reviewers are also submitting
patches already.

 - Fix bugs.  Related to the above, help us fix real problems by testing,
   reporting bugs, and fixing them, or take an existing bug and post a patch
   fixing it.  Ask an existing team member to direct you if you're not sure
   which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
   sometimes worthwhile, but make sure that's not all you do, as we need
   -core folk who can find, report, fix and review real user-impacting
   problems (not just new features).  This is also a great way to build
   trust and knowledge if you're aiming to contribute features to Heat.


There's a theme running through 

Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2013-12-09 09:52:25 -0800:
 On 09/12/13 06:31, Steven Hardy wrote:
  Hi all,
 
  So I've been getting concerned about $subject recently, and based on some
  recent discussions so have some other heat-core folks, so I wanted to start
  a discussion where we can agree and communicate our expectations related to
  nomination for heat-core membership (because we do need more core
  reviewers):
 
  The issues I have are:
  - Russell's stats (while very useful) are being used by some projects as
 the principal metric related to -core membership (ref TripleO's monthly
 cull/name & shame, which I am opposed to btw).  This is in some cases
 encouraging some stats-seeking in our review process, IMO.
 
  - Review quality can't be measured mechanically - we have some folks who
 contribute fewer, but very high quality reviews, and are also very active
 contributors (so knowledge of the codebase is not stale).  I'd like to
 see these people do more reviews, but removing people from core just
 because they drop below some arbitrary threshold makes no sense to me.
 
 +1
 
 Fun fact: due to the quirks of how Gerrit produces the JSON data dump, 
 it's not actually possible for the reviewstats tools to count +0 
 reviews. So, for example, one can juice one's review stats by actively 
 obstructing someone else's work (voting -1) when a friendly comment 
 would have sufficed. This is one of many ways in which metrics offer 
 perverse incentives.
 
 Statistics can be useful. They can be particularly useful *in the 
 aggregate*. But as soon as you add a closed feedback loop you're no 
 longer measuring what you originally thought - mostly you're just 
 measuring the gain of the feedback loop.
 

I think I understand the psychology of stats and incentives, and I know
that this _may_ happen.

However, can we please be more careful about how this is referenced?
Your message above is suggesting the absolute _worst_ behavior from our
community. That is not what I expect, and I think anybody who was doing
that would be dealt with _swiftly_.



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Zane Bitter

On 09/12/13 14:03, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2013-12-09 09:52:25 -0800:

On 09/12/13 06:31, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
the principal metric related to -core membership (ref TripleO's monthly
cull/name & shame, which I am opposed to btw).  This is in some cases
encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
contribute fewer, but very high quality reviews, and are also very active
contributors (so knowledge of the codebase is not stale).  I'd like to
see these people do more reviews, but removing people from core just
because they drop below some arbitrary threshold makes no sense to me.


+1

Fun fact: due to the quirks of how Gerrit produces the JSON data dump,
it's not actually possible for the reviewstats tools to count +0
reviews. So, for example, one can juice one's review stats by actively
obstructing someone else's work (voting -1) when a friendly comment
would have sufficed. This is one of many ways in which metrics offer
perverse incentives.

Statistics can be useful. They can be particularly useful *in the
aggregate*. But as soon as you add a closed feedback loop you're no
longer measuring what you originally thought - mostly you're just
measuring the gain of the feedback loop.



I think I understand the psychology of stats and incentives, and I know
that this _may_ happen.

However, can we please be more careful about how this is referenced?
Your message above is suggesting the absolute _worst_ behavior from our
community. That is not what I expect, and I think anybody who was doing
that would be dealt with _swiftly_.


Sorry for the confusion, I wasn't trying to suggest that at all. FWIW I 
haven't noticed anyone gaming the stats (maybe I haven't been looking at 
enough reviews ;). What I have noticed is that every time I leave a +0 
comment on a patch, I catch myself thinking "this won't look good on the 
stats" - and then I continue on regardless. If somebody who wasn't core 
but wanted to be were to -1 the patch instead in similar circumstances, 
then I wouldn't blame them in the least for responding to that incentive.


My point, and I think Steve's, is that we should be careful how we *use* 
the stats, so that folks won't feel this pressure. It's not at all about 
calling anybody out, but I apologise for not making that clearer.


cheers,
Zane.



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Robert Collins
On 10 December 2013 00:31, Steven Hardy sha...@redhat.com wrote:
 Hi all,

 So I've been getting concerned about $subject recently, and based on some
 recent discussions so have some other heat-core folks, so I wanted to start
 a discussion where we can agree and communicate our expectations related to
 nomination for heat-core membership (because we do need more core
 reviewers):

Great! (Because I think you do too :).

 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/name & shame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.

With all due respect - you are entirely wrong, and I am now worried
about the clarity of my emails vis-a-vis review team makeup. I presume
you've read them in detail before referencing them of course - have
you? What can I do to make the actual process clearer?

IMO the primary metric for inclusion is being perceived by a bunch of
existing -cores as a substantial contributor of effective, useful
reviews. The reviewstats stats are a useful aide-memoire, nothing
more. Yes, if you don't do enough reviews for an extended period -
several months - then you're likely to no longer be perceived as being
a substantial contributor of effective, useful reviews - and no longer
aware enough of the current codebase and design to just step back into
-core shoes.

So it's a gross mischaracterisation to imply that a democratic process
aided by some [crude] stats has been reduced to name & shame, and a
rather offensive one.

Anyone can propose members for inclusion in TripleO-core, and we all
vote - likewise removal. The fact I do a regular summary and propose
some folk and some cleanups is my way of ensuring that we don't get
complacent - that we recognise folk who are stepping up, and give them
guidance if they aren't stepping up in an effective manner - or if
they are no longer being effective enough to be recognised - in my
opinion. If the rest of the TripleO core team agrees with my opinion,
we get changes to -core, if not, we don't. If someone else wants to
propose a -core member, they are welcome to! Hopefully with my taking
point on this, that effort isn't needed - but it's still possible.

 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.

In principle, review quality *can* be measured mechanically, but the
stats we have do not do that - and cannot. We'd need mechanical
follow-through to root-cause analysis for reported defects (including both
crashes and softer defects like 'not as fast as I want' or 'feature X
is taking too long to develop') and impact on review and contribution
rates to be able to assess the impact of a reviewer over time - what
bugs they prevented entering the code base, how their reviews kept the
code base maintainable and flexible, and how their interactions with
patch submitters helped grow the community. NONE of the stats we have
today even vaguely approximate that.
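
(To give a sense of the plumbing that would require, here is a
deliberately crude sketch of just one of those joins - pre-merge review
findings matched against later root-cause analyses. Every name and the
issue taxonomy are invented; no such pipeline exists in the stats
discussed here:)

    from dataclasses import dataclass

    @dataclass
    class ReviewFinding:          # an issue a reviewer caught pre-merge
        reviewer: str
        issue_class: str          # e.g. "race-condition"; taxonomy invented

    @dataclass
    class EscapedDefect:          # root-cause analysis of a shipped bug
        issue_class: str

    findings = [ReviewFinding("reviewer_a", "race-condition"),
                ReviewFinding("reviewer_b", "typo")]
    escapes = [EscapedDefect("race-condition")]

    # Credit a reviewer when they caught, pre-merge, a class of defect
    # that demonstrably escapes review elsewhere.
    escaped_classes = {d.issue_class for d in escapes}
    for finding in findings:
        if finding.issue_class in escaped_classes:
            print(f"{finding.reviewer} prevented a {finding.issue_class} escape")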

 So if you're aiming for heat-core nomination, here's my personal wish-list,
 but hopefully others can provide their input and we can update the wiki with
 the resulting requirements/guidelines:

I'm not aiming for heat-core, so this is just kibitzing - take it for
what it's worth:

 - Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.

There is a tension here. I agree that many trivial/cosmetic issues are
not a big deal in themselves. But in aggregate I would argue that a
codebase with lots of typos, poor idioms, overly large classes and
other shallow-at-the-start issues is one that will become
progressively harder to contribute to.

 - Send patches.  Some folks argue that -core membership is only about
   reviews; I disagree - there are many aspects of reviews which require
   deep knowledge of the code, e.g. spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our stylistic conventions.

I think seeing patches from people is a great way to see whether they
are able to write code that fits with the codebase. I'm not sure it's
a good indicator that folk will be effective, positive reviewers.
Personally, I 

Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Steven Hardy
On Tue, Dec 10, 2013 at 08:34:04AM +1300, Robert Collins wrote:
 On 10 December 2013 00:31, Steven Hardy sha...@redhat.com wrote:
  Hi all,
 
  So I've been getting concerned about $subject recently, and based on some
  recent discussions so have some other heat-core folks, so I wanted to start
  a discussion where we can agree and communicate our expectations related to
  nomination for heat-core membership (because we do need more core
  reviewers):
 
 Great! (Because I think you do too :).
 
  The issues I have are:
  - Russell's stats (while very useful) are being used by some projects as
the principal metric related to -core membership (ref TripleO's monthly
cull/name & shame, which I am opposed to btw).  This is in some cases
encouraging some stats-seeking in our review process, IMO.
 
 With all due respect - you are entirely wrong, and I am now worried
 about the clarity of my emails vis-a-vis review team makeup. I presume
 you've read them in detail before referencing them of course - have
 you? What can I do to make the actual process clearer?
 
 IMO the primary metric for inclusion is being perceived by a bunch of
 existing -cores as a substantial contributor of effective, useful
 reviews. The reviewstats stats are a useful aide-memoire, nothing
 more. Yes, if you don't do enough reviews for an extended period -
 several months - then you're likely to no longer be perceived as being
 a substantial contributor of effective, useful reviews - and no longer
 aware enough of the current codebase and design to just step back into
 -core shoes.
 
 So it's a gross mischaracterisation to imply that a democratic process
 aided by some [crude] stats has been reduced to name & shame, and a
 rather offensive one.

Yes I have read your monthly core reviewer update emails[1] and I humbly
apologize if you feel my characterization of your process is offensive; it
certainly wasn't intended to be.

All I was trying to articulate is that your message *is* heavily reliant on
stats: you quote them repeatedly, and from my perspective the repeated
references to volume of reviews, along with so frequently naming those to
be removed from core, *could* encourage the wrong behavior.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-December/021101.html

 Anyone can propose members for inclusion in TripleO-core, and we all
 vote - likewise removal. The fact I do a regular summary and propose
 some folk and some cleanups is my way of ensuring that we don't get
 complacent - that we recognise folk who are stepping up, and give them
 guidance if they aren't stepping up in an effective manner - or if
 they are no longer being effective enough to be recognised - in my
 opinion. If the rest of the TripleO core team agrees with my opinion,
 we get changes to -core, if not, we don't. If someone else wants to
 propose a -core member, they are welcome to! Hopefully with my taking
 point on this, that effort isn't needed - but it's still possible.
 
  - Review quality can't be measured mechanically - we have some folks who
contribute fewer, but very high quality reviews, and are also very active
contributors (so knowledge of the codebase is not stale).  I'd like to
see these people do more reviews, but removing people from core just
because they drop below some arbitrary threshold makes no sense to me.
 
 In principle, review quality *can* be measured mechanically, but the
 stats we have do not do that - and cannot. We'd need mechanical
 follow-through to root-cause analysis for reported defects (including both
 crashes and softer defects like 'not as fast as I want' or 'feature X
 is taking too long to develop') and impact on review and contribution
 rates to be able to assess the impact of a reviewer over time - what
 bugs they prevented entering the code base, how their reviews kept the
 code base maintainable and flexible, and how their interactions with
 patch submitters helped grow the community. NONE of the stats we have
 today even vaguely approximate that.

Right, so right now, review quality can't be measured mechanically, and the
chances that we'll be able to in any meaningful way anytime in the future
are very small.

  So if you're aiming for heat-core nomination, here's my personal wish-list,
  but hopefully others can provide their input and we can update the wiki with
  the resulting requirements/guidelines:
 
 I'm not aiming for heat-core, so this is just kibitzing - take it for
 what it's worth:
 
  - Make your reviews high-quality.  Focus on spotting logical errors,
reducing duplication, consistency with existing interfaces, opportunities
for reuse/simplification etc.  If every review you do is +1, or -1 for a
trivial/cosmetic issue, you are not making a strong case for -core IMHO.
 
 There is a tension here. I agree that many trivial/cosmetic issues are
 not a big deal in themselves. But in aggregate I would argue that a
 codebase with lots of typos, poor idioms, 

Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Robert Collins
On 10 December 2013 11:04, Steven Hardy sha...@redhat.com wrote:

 So it's a gross mischaracterisation to imply that a democratic process
 aided by some [crude] stats has been reduced to name & shame, and a
 rather offensive one.

 Yes I have read your monthly core reviewer update emails[1] and I humbly
 apologize if you feel my characterization of your process is offensive, it
 certainly wasn't intended to be.

Thanks; enough said here - let's move on :)


 I think you have a very different definition of -core to the rest of
 OpenStack. I was actually somewhat concerned about the '+2 guard the
 gate' stuff at the summit because it's so easily misinterpreted - and
 there is a meme going around (I don't know if it's true or not) that
 some people are assessed - performance review stuff within vendor
 organisations - on becoming core reviewers.

 Core reviewer is not intended to be a gateway to getting features or
 ideas into OpenStack projects. It is solely a volunteered contribution
 to the project: helping the project accept patches with confidence
 about their long term integrity: providing explanation and guidance to
 people that want to contribute patches so that their patch can be
 accepted.

 We need core reviewers who:
 1. Have deep knowledge of the codebase (to identify non-cosmetic structural
 issues)

mmm, core review is a place to identify really significant structural
issues, but it's not ideal. Because - you do a lot of work before you
push code for review, particularly if it's one's first contribution to
a codebase, and that means a lot of waste when the approach is wrong.
Agree that having -core that can spot this is good, but not convinced
that it's a must.

 2. Have used and debugged the codebase (to identify usability, interface
 correctness or other stuff which isn't obvious unless you're using the
 code)

If I may: 2a) Have deployed and used the codebase in production, at
scale. This may conflict with 1) in terms of individual expertise.

 3. Have demonstrated a commitment to the project (so we know they
 understand the mid-term goals and won't approve stuff which is misaligned
 with those goals)

I don't understand this. Are you saying you'd turn down contributions
that are aligned with the long term Heat vision because they don't
advance some short term goal? Or are you saying you'd turn down
contributions because they actively harm short term goals?

Seems to me that that commitment to the project is really orthogonal
to either of those things - folk may have different interpretations
about what the project needs while still being entirely committed to
it! Perhaps you mean 'shared understanding of current goals and
constraints' ? Or something like that? I am niggling on this point
because I wouldn't want someone who is committed to TripleO but
focused on the big picture to be accused of not being committed to
TripleO.

 All of those are aided and demonstrated by helping out doing a few
 bugfixes, along with reviews.

1) might be - depends on the bug. 2) really isn't, IME anyhow, and
3) seems entirely disconnected from both reviews and bugfixes, and
more about communication.



 All I wanted to do was give folks a heads-up that IMHO the stats aren't the
 be-all-and-end-all, and here are a few things which you might want to
 consider doing, if you want to become a core reviewer in due course.

Ok, that's a much better message than the one you appeared to start
with, which sounded like 'stats are bad, here's what it really takes'.

Some food for thought though: being -core is a burden, not a
privilege. I think we should focus more on a broad community of
effective reviewers, rather than on what it takes to be in -core.
Being in -core should simply reflect that someone is currently active,
aware of all the review needs of the project, capable (informed,
familiar with the code and design etc) of assessing +2 and APRV
status, and responsible enough to trust with that. Folk that review
and never become -core are still making a significant contribution to
code quality, if they are doing it right - by which I mean:

 - following up and learning what the norms are for that project (they
differ project to project in OpenStack)
 - following up and learning about the architectural issues and design
issues they need to watch out for

Someone doing that quickly becomes able to act as a valuable first-pass
filter for the two +2 votes that actually land a patch; speeding up
the +2 review time [fewer round trips to +2 land == more +2 bandwidth]
and helping the project as a whole.
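
(Back-of-envelope arithmetic for the bracketed claim, with invented
numbers:)

    # If first-pass reviewers cut the revisions a core reviewer must look
    # at per patch, the same core review hours land more patches.
    core_reviews_per_week = 60          # total +2-capable review slots
    rounds_without_filter = 3           # core sees every revision
    rounds_with_filter = 1              # non-core filtered earlier rounds
    print(core_reviews_per_week / rounds_without_filter)  # 20 patches landed
    print(core_reviews_per_week / rounds_with_filter)     # 60 patches landed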

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Steven Hardy
On Tue, Dec 10, 2013 at 11:25:49AM +1300, Robert Collins wrote:
 On 10 December 2013 11:04, Steven Hardy sha...@redhat.com wrote:
 
  So it's a gross mischaracterisation to imply that a democratic process
  aided by some [crude] stats has been reduced to name & shame, and a
  rather offensive one.
 
  Yes I have read your monthly core reviewer update emails[1] and I humbly
  apologize if you feel my characterization of your process is offensive, it
  certainly wasn't intended to be.
 
 Thanks; enough said here - let's move on :)
 
 
  I think you have a very different definition of -core to the rest of
  OpenStack. I was actually somewhat concerned about the '+2 guard the
  gate' stuff at the summit because it's so easily misinterpreted - and
  there is a meme going around (I don't know if it's true or not) that
  some people are assessed - performance review stuff within vendor
  organisations - on becoming core reviewers.
 
  Core reviewer is not intended to be a gateway to getting features or
  ideas into OpenStack projects. It is solely a volunteered contribution
  to the project: helping the project accept patches with confidence
  about their long term integrity: providing explanation and guidance to
  people that want to contribute patches so that their patch can be
  accepted.
 
  We need core reviewers who:
  1. Have deep knowledge of the codebase (to identify non-cosmetic structural
  issues)
 
 mmm, core review is a place to identify really significant structural
 issues, but it's not ideal. Because - you do a lot of work before you
 push code for review, particularly if it's one's first contribution to
 a codebase, and that means a lot of waste when the approach is wrong.
 Agree that having -core that can spot this is good, but not convinced
 that it's a must.

So you're saying you would give approve rights to someone without
sufficient knowledge to recognise the implications of a patch in the
context of the whole tree?  Of course it's a must.

  2. Have used and debugged the codebase (to identify usability, interface
  correctness or other stuff which isn't obvious unless you're using the
  code)
 
 If I may: 2a) Have deployed and used the codebase in production, at
 scale. This may conflict with 1) in terms of individual expertise.

Having folks involved with experience of running stuff in production is
invaluable, I agree; I just meant people should have some practical
experience outside of the specific feature they may be working on.

  3. Have demonstrated a commitment to the project (so we know they
  understand the mid-term goals and won't approve stuff which is misaligned
  with those goals)
 
 I don't understand this. Are you saying you'd turn down contributions
 that are aligned with the long term Heat vision because they don't
 advance some short term goal? Or are you saying you'd turn down
 contributions because they actively harm short term goals?

I'm saying people in a position to approve patches should be able to
decouple the short term requirements of e.g. their employer, or the feature
they are interested in, from the long-term goals (e.g. maintainability) of
the project.

We can, and have, turned down contributions because they offered short-term
solutions to problems which didn't make sense long term from an upstream
perspective (normally with discussion of suitable alternative approaches).

 Seems to me that that commitment to the project is really orthogonal
 to either of those things - folk may have different interpretations
 about what the project needs while still being entirely committed to
 it! Perhaps you mean 'shared understanding of current goals and
 constraints' ? Or something like that? I am niggling on this point
 because I wouldn't want someone who is committed to TripleO but
 focused on the big picture to be accused of not being committed to
 TripleO.

Yeah, shared understanding; I'm saying all -core reviewers should have, and
have demonstrated, some grasp of the big picture for the project they
control the gate for.

Steve



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Angus Salkeld

On 09/12/13 11:31 +0000, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
 the principal metric related to -core membership (ref TripleO's monthly
 cull/name & shame, which I am opposed to btw).  This is in some cases
 encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
 contribute fewer, but very high quality reviews, and are also very active
 contributors (so knowledge of the codebase is not stale).  I'd like to
 see these people do more reviews, but removing people from core just
 because they drop below some arbitrary threshold makes no sense to me.

So if you're aiming for heat-core nomination, here's my personal wish-list,
but hopefully others can provide their input and we can update the wiki with
the resulting requirements/guidelines:

- Make your reviews high-quality.  Focus on spotting logical errors,
 reducing duplication, consistency with existing interfaces, opportunities
 for reuse/simplification etc.  If every review you do is +1, or -1 for a
 trivial/cosmetic issue, you are not making a strong case for -core IMHO.

- Send patches.  Some folks argue that -core membership is only about
  reviews; I disagree - there are many aspects of reviews which require
  deep knowledge of the code, e.g. spotting structural issues, logical
  errors caused by interaction with code not modified by the patch,
  effective use of test infrastructure, etc etc.  This deep knowledge comes
  from writing code, not only reviewing it.  This also gives us a way to
  verify your understanding and alignment with our stylistic conventions.

- Fix bugs.  Related to the above, help us fix real problems by testing,
 reporting bugs, and fixing them, or take an existing bug and post a patch
 fixing it.  Ask an existing team member to direct you if you're not sure
 which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
 sometimes worthwhile, but make sure that's not all you do, as we need
 -core folk who can find, report, fix and review real user-impacting
 problems (not just new features).  This is also a great way to build
 trust and knowledge if you're aiming to contribute features to Heat.

- Engage in discussions related to the project (here on the ML, helping
 users on the general list, in #heat on Freenode, attend our weekly
 meeting if it's not an anti-social time in your TZ)

Anyone have any more thoughts to add here?


Setting aside the mechanism for choosing team-core, I think we should
be re-evaluating more often (some regular interval - maybe every 2
months).

- Personally I'd not be stressed at all about being taken off core one
  period and re-added later (if I was busy with something else and
  didn't have time for reviews).
- I think this sends a good message that core is not set in stone.
  With some hard work you too can get into core (if you aspire to).


-Angus



Steve





[openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Steven Hardy
Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
  the principal metric related to -core membership (ref TripleO's monthly
  cull/name & shame, which I am opposed to btw).  This is in some cases
  encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
  contribute fewer, but very high quality reviews, and are also very active
  contributors (so knowledge of the codebase is not stale).  I'd like to
  see these people do more reviews, but removing people from core just
  because they drop below some arbitrary threshold makes no sense to me.

So if you're aiming for heat-core nomination, here's my personal wish-list,
but hopefully others can provide their input and we can update the wiki with
the resulting requirements/guidelines:

- Make your reviews high-quality.  Focus on spotting logical errors,
  reducing duplication, consistency with existing interfaces, opportunities
  for reuse/simplification etc.  If every review you do is +1, or -1 for a
  trivial/cosmetic issue, you are not making a strong case for -core IMHO.

- Send patches.  Some folks argue that -core membership is only about
  reviews; I disagree - there are many aspects of reviews which require
  deep knowledge of the code, e.g. spotting structural issues, logical
  errors caused by interaction with code not modified by the patch,
  effective use of test infrastructure, etc etc.  This deep knowledge comes
  from writing code, not only reviewing it.  This also gives us a way to
  verify your understanding and alignment with our stylistic conventions.

- Fix bugs.  Related to the above, help us fix real problems by testing,
  reporting bugs, and fixing them, or take an existing bug and post a patch
  fixing it.  Ask an existing team member to direct you if you're not sure
  which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
  sometimes worthwhile, but make sure that's not all you do, as we need
  -core folk who can find, report, fix and review real user-impacting
  problems (not just new features).  This is also a great way to build
  trust and knowledge if you're aiming to contribute features to Heat.

- Engage in discussions related to the project (here on the ML, helping
  users on the general list, in #heat on Freenode, attend our weekly
  meeting if it's not an anti-social time in your TZ)

Anyone have any more thoughts to add here?

Steve



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Thierry Carrez
Steven Hardy wrote:
 [...]
 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/name & shame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.
 
 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.
 [...]

In our governance each program's PTL is free to choose his preferred
method of selecting core reviewers, but FWIW I fully agree with you here.

Using the review stats as the principal metric in deciding -core
membership is IMHO destructive in the long run. You can use them to check
for minimum volume, but then review quality should be the main factor,
and the only way to assess that is to see consistent quality reviews
made by others and then suggest them for -core membership.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Russell Bryant
On 12/09/2013 07:43 AM, Thierry Carrez wrote:
 Steven Hardy wrote:
 [...]
 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
  cull/name & shame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.

 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.
 [...]
 
 In our governance each program's PTL is free to choose his preferred
 method of selecting core reviewers, but FWIW I fully agree with you here.
 
 Using the review stats as the principal metric in deciding -core
 membership is IMHO destructive in the long run. You can use it to check
 for minimum volume, but then review quality should be the main factor,
 and the only way to assess that is to see consistent quality reviews
 made by others and then suggest them for -core membership.
 

Agreed here, as well.  While the stats provide some important insight,
they're far from the whole picture.

-- 
Russell Bryant
