Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-09 Thread Jaromir Coufal

On 2013/08/10 23:53, Robert Collins wrote:

On 9 October 2013 07:24, Jiří Stránský ji...@redhat.com wrote:

Clint and Monty,

thank you for such good responses. I am indeed new to the TripleO team and I was
mostly concerned by the line in the sand. Your responses shed some more
light on the issue for me and I hope we'll be heading the right way :)

Sorry for getting folk concerned! I'm really glad some folk jumped in
to clarify. Let me offer some more thoughts on top of this..
I was taking some concepts as a given - they are part of the OpenStack
culture - when I wrote my mail about TripleO reviewer status:

* That what we need is a bunch of folk actively engaged in thinking
about the structure, performance and features of the component
projects in TripleO, who *apply* that knowledge to every code review.
And we need to grow that collection of reviewers to keep up with a
growing contributor base.

* That the more reviewers we have, the less burden any one reviewer
has to carry: I'd be very happy if we normalised on everyone in -core
doing just one careful and effective review a day, *if* that's
sufficient to carry the load. I doubt it will be, because developers
can produce way more than one patch a day each, which implies 2 x
(developer count) reviews per day *at minimum*, and even if every ATC
was a -core reviewer, we'd still need two reviews per -core per day.

* How much knowledge is needed to be a -core? And how many reviews?
There isn't a magic number of reviews IMO: we need 'lots' of reviews
and 'over a substantial period of time' : it's very hard to review
effectively in a new project, but after 3 months, if someone has been
regularly reviewing they will have had lots of mentoring taking place,
and we (-core membership is voted on by -core members) are likely to
be reasonably happy that they will do a good job.

* And finally that the job of -core is to sacrifice their own
productivity in exchange for team productivity: while there are
limits to this - reviewer fatigue, personal/company goals, etc. - at
the heart of it it's a volunteer role which is crucial for keeping
velocity up: every time a patch lingers without feedback, the developer
writing it is stalled, which is a waste (in the Lean sense).



So with those 'givens' in place, I was trying to just report in that
context: the metric of reviews being done is a *lower bound* - it is
necessary, but not sufficient, to be -core. Dropping below it for an
extended period of time - and I've set a pretty arbitrary initial
value of approximately one per day - is a solid sign that the person
is not keeping up with the evolution of the code base.

Being -core means being on top of the evolution of the program and the
state of the code, and being a regular, effective reviewer is the one
sure-fire way to do that. I'm certainly open to folk who want to focus
on just the CLI doing so, but that isn't enough to keep up to date on
the overall structure/needs - the client is part of the overall
story! So the big thing for me is: if someone no longer has time to
offer doing reviews, that's fine, we should recognise that and release
them from the burden of -core: their reviews will still be valued and
thought deeply about, and if they contribute more time for a while
then we can ask them to shoulder -core again.

HTH,
-Rob


Hey Rob, Clint and Monty,

thanks for the clarification, I was not aware of these details before. I
hope that it will work well.


Thanks
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-09 Thread Petr Blaho
On Tue, Oct 08, 2013 at 02:31:34PM +0200, Jaromir Coufal wrote:
 Hi Chris,
 
 On 2013/08/10 13:13, Chris Jones wrote:
 
 Hi
 
 On 8 October 2013 11:59, Jaromir Coufal jcou...@redhat.com wrote:
 
     * Example: It doesn't make sense that someone who is a core reviewer
     based on image-builder is able to give +2 on UI or CLI code, and
     vice versa.
 
 
 I'm not sure this is a technical problem as much as a social problem - if
 someone isn't able to give a good review (be it -1/+1 or +2) on a
 particular change, they should just not review it, regardless of which
 part of the project it relates to.
 
 I completely agree on this point. It depends on people's judgement.
 
 The question is whether we will depend only on this judgement, or whether we
 help it by splitting reviewers based on projects. I believe that the split can
 help us. Anyway, it is just a proposal; it depends on what others think about it.
 
 
 I'm a tripleo core reviewer, but I have been ignoring the tuskar reviews
 until I have had some time to play with it and get a feel for the code.
 You can argue that I therefore shouldn't even have the power to give a +2 on
 tuskar code, but I would note that before Robert added me to core he wasn't
 simply watching the quantity of my reviews, he was also giving me feedback
 on areas I was going wrong. I would imagine that if I was wildly throwing
 around inappropriate reviews on code I wasn't qualified to review, he would
 give me feedback on that too and ultimately remove me as a reviewer.
 
 Well, it depends on whether we take the first or the second approach. I might
 argue that you shouldn't have the +2 power for Tuskar until you have a bigger
 contribution to Tuskar code (reviews or patches or ...). To me that sounds
 logical, because you are not that close to it and you are not familiar with all
 the background there.
 
 If somebody contributes regularly there, they can become a core reviewer
 on that project as well.
 
 If you did bad code reviews on Tuskar and had your 'core' status removed,
 you could still do an excellent job on the other TripleO projects, so why lose
 it on all of them?
 
 Let me give one example:
 There is tuskar-client, which is a very important project but does not have as
 much activity as the others. There are people who actually wrote the whole
 code, but based on the amount of work (reviews), they might not make it among
 the core reviewers. In the future, if they need to move forward or quickly
 fix something, they would need to ask some core reviewer who is not familiar
 with that code just to approve it.
 
 You see where I am going with this?
 
 
 Perhaps this is something that won't scale well, but I have a great deal of
 faith in Robert's judgement on who is or isn't reviewing effectively.
 
 I have no experience with Rob's distribution of core members, and I believe
 that he does it in good faith.
 
 I am just suggesting a more project-based approach, since the whole program
 has expanded into more projects. It doesn't have to be a strictly project-based
 metric; it can be combined with 'across-projects contribution', so we ensure
 that people are aware of the whole effort. But I believe that project focus
 should stay the primary metric.
 
 
 
 --
 Cheers,
 
 Chris
 
 
 Thanks
 -- Jarda


I generally agree with Jarda w/r/t a more project-based approach.


I am concerned about the case where core reviewers become overloaded with
review demands.

Of course, if this happens we can just add another core
reviewer to the group, but I would suggest doing it the other way around: let's
have a broader core group at first and gradually lower the number of core
members (using metrics, discussion, need, common agreement from
contributors...) by X every Y weeks or so.

This way the core reviewer group will shrink until its members feel that
they have just enough reviews on their agenda that it does not hinder the
quality of their work.

This will not eliminate competition for core membership, but it
will eliminate the immediate impact on projects' review process and on
reviewers' workload, and it will help us gradually decide whether a project
needs a core member even if that person is not that active a reviewer but can
ensure that patches will not grow old for that project.

That is my 2 cents.

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Jiří Stránský

Clint and Monty,

thank you for such good responses. I am indeed new to the TripleO team and I
was mostly concerned by the line in the sand. Your responses shed some
more light on the issue for me and I hope we'll be heading the right way :)


Thanks

Jiri



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
On 9 October 2013 07:24, Jiří Stránský ji...@redhat.com wrote:
 Clint and Monty,

 thank you for such good responses. I am indeed new to the TripleO team and I was
 mostly concerned by the line in the sand. Your responses shed some more
 light on the issue for me and I hope we'll be heading the right way :)

Sorry for getting folk concerned! I'm really glad some folk jumped in
to clarify. Let me offer some more thoughts on top of this..
I was taking some concepts as a given - they are part of the OpenStack
culture - when I wrote my mail about TripleO reviewer status:

* That what we need is a bunch of folk actively engaged in thinking
about the structure, performance and features of the component
projects in TripleO, who *apply* that knowledge to every code review.
And we need to grow that collection of reviewers to keep up with a
growing contributor base.

* That the more reviewers we have, the less burden any one reviewer
has to carry: I'd be very happy if we normalised on everyone in -core
doing just one careful and effective review a day, *if* that's
sufficient to carry the load. I doubt it will be, because developers
can produce way more than one patch a day each, which implies 2 x
(developer count) reviews per day *at minimum*, and even if every ATC
was a -core reviewer, we'd still need two reviews per -core per day.
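[Editor's note: the capacity argument above reduces to a back-of-envelope
calculation. The sketch below is illustrative only; the team sizes and patch
rates are made-up numbers, and the only rule taken from the thread is that a
patch needs two core reviews to land.]

```python
# Back-of-envelope review load: each patch needs two core reviews
# to land, so the team must produce at least
# 2 * (patches written per day) reviews per day.

def reviews_per_core_per_day(developers, patches_per_dev, cores):
    """Minimum daily reviews each -core must do to keep up."""
    required_reviews = 2 * developers * patches_per_dev
    return required_reviews / cores

# Hypothetical team: 20 contributors landing one patch a day each,
# with 10 -core reviewers sharing the load.
print(reviews_per_core_per_day(20, 1.0, 10))  # -> 4.0

# Even if every contributor were -core, the floor is two reviews
# per -core per day, as noted above.
print(reviews_per_core_per_day(20, 1.0, 20))  # -> 2.0
```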

* How much knowledge is needed to be a -core? And how many reviews?
There isn't a magic number of reviews IMO: we need 'lots' of reviews
and 'over a substantial period of time' : it's very hard to review
effectively in a new project, but after 3 months, if someone has been
regularly reviewing they will have had lots of mentoring taking place,
and we (-core membership is voted on by -core members) are likely to
be reasonably happy that they will do a good job.

* And finally that the job of -core is to sacrifice their own
productivity in exchange for team productivity: while there are
limits to this - reviewer fatigue, personal/company goals, etc. - at
the heart of it it's a volunteer role which is crucial for keeping
velocity up: every time a patch lingers without feedback, the developer
writing it is stalled, which is a waste (in the Lean sense).



So with those 'givens' in place, I was trying to just report in that
context: the metric of reviews being done is a *lower bound* - it is
necessary, but not sufficient, to be -core. Dropping below it for an
extended period of time - and I've set a pretty arbitrary initial
value of approximately one per day - is a solid sign that the person
is not keeping up with the evolution of the code base.
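[Editor's note: the 'lower bound' described above - averaging roughly one
review a day over the stats window - can be expressed as a small check. This
is an illustrative sketch, not the actual tooling; the names and totals below
are assumed sample data, and the real process, as stated in the thread, is
not purely mechanical.]

```python
# Sketch of the review-rate lower bound: necessary, not sufficient.

def meets_lower_bound(total_reviews, window_days=90, required_per_day=1.0):
    """True if the reviewer averaged at least one review per day."""
    return total_reviews / window_days >= required_per_day

# Assumed sample 90-day totals for hypothetical reviewers.
sample_totals = {"alice": 349, "bob": 88}

for reviewer, total in sample_totals.items():
    print(reviewer, meets_lower_bound(total))
# -> alice True
# -> bob False
```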

Being -core means being on top of the evolution of the program and the
state of the code, and being a regular, effective reviewer is the one
sure-fire way to do that. I'm certainly open to folk who want to focus
on just the CLI doing so, but that isn't enough to keep up to date on
the overall structure/needs - the client is part of the overall
story! So the big thing for me is: if someone no longer has time to
offer doing reviews, that's fine, we should recognise that and release
them from the burden of -core: their reviews will still be valued and
thought deeply about, and if they contribute more time for a while
then we can ask them to shoulder -core again.

HTH,
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Tzu-Mainn Chen
 Hi, like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 Please see Russell's excellent stats:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
 For joining and retaining core I look at the 90 day statistics; folk
 who are particularly low in the 30 day stats get a heads up: it's not
 a purely mechanical process :).
 
 As we've just merged review teams with the Tuskar devs, we need to allow
 some time for everyone to get up to speed; folk who are core as
 a result of the merge will be retained as core, but by November I expect
 the stats will have normalised somewhat and that special handling
 won't be needed.
 
 IMO these are the reviewers doing enough over 90 days to meet the
 requirements for core:
 
 |   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
 | clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
 | cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
 |   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |
 
 They are already core, so that's easy.
 
 If you are core, and not on that list, that may be because you're
 coming from tuskar, which doesn't have 90 days of history, or you need
 to get stuck into some more reviews :).
 
 Now, 30 day history - this is the heads up for folk:
 
 | clint-fewbar **  | 179  2  27   0 150   83.8% |  6 (  4.0%) |
 | cmsj **          | 179  1  15   0 163   91.1% | 11 (  6.7%) |
 |   lifeless **    | 129  3  39   2  85   67.4% |  2 (  2.3%) |
 |   derekh **      |  41  0  11   0  30   73.2% |  0 (  0.0%) |
 |  slagle          |  37  0  11  26   0   70.3% |  3 ( 11.5%) |
 |   ghe.rivero     |  28  0   4  24   0   85.7% |  2 (  8.3%) |
 
 
 I'm using the fairly simple metric of 'average at least one review a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. James and Ghe, good stuff -
 you're well on your way to core. If you're not in that list, please
 treat this as a heads-up that you need to do more reviews to keep on
 top of what's going on, whether to become core or to keep it.
 
 In next month's update I'll review whether to remove some folk that
 aren't keeping on top of things, as it won't be a surprise :).
 
 Cheers,
 Rob
 
 
 
 
 
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 

Hi,

I feel like I should point out that before tuskar merged with tripleo, we had
some distinction between the team working on the tuskar api and the team
working on the UI, with each team focusing reviews on its particular expertise.
The latter team works quite closely with horizon, to the extent of spending a
lot of time involved with horizon development and blueprints. This is done so
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make
sense...? tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a
certain level of horizon and UI expertise is definitely helpful in reviewing
the UI patches.

Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Tomas Sedovic
Just an FYI, tomas-8c8 on the reviewers list is yours truly. That's
the name I got assigned when I registered with Gerrit and apparently it
can't be changed.


Thanks for the heads-up, will be doing more reviews.

T.

On 07/10/13 21:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with the Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
| clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
| cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
|   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |

They are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179  2  27   0 150   83.8% |  6 (  4.0%) |
| cmsj **          | 179  1  15   0 163   91.1% | 11 (  6.7%) |
|   lifeless **    | 129  3  39   2  85   67.4% |  2 (  2.3%) |
|   derekh **      |  41  0  11   0  30   73.2% |  0 (  0.0%) |
|  slagle          |  37  0  11  26   0   70.3% |  3 ( 11.5%) |
|   ghe.rivero     |  28  0   4  24   0   85.7% |  2 (  8.3%) |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether to become core or to keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob











Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
Thanks, will submit a patch to reviewerstats updating it for you.

-Rob

On 8 October 2013 21:14, Tomas Sedovic tsedo...@redhat.com wrote:
 Just an fyi, tomas-8c8 on the reviewers list is yours truly. That's the
 name I got assigned when I registered to Gerrit and apparently, it can't be
 changed.

 Thanks for the heads-up, will be doing more reviews.

 T.


 On 07/10/13 21:03, Robert Collins wrote:

 Hi, like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.

 Please see Russell's excellent stats:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

 For joining and retaining core I look at the 90 day statistics; folk
 who are particularly low in the 30 day stats get a heads up: it's not
 a purely mechanical process :).

 As we've just merged review teams with the Tuskar devs, we need to allow
 some time for everyone to get up to speed; folk who are core as
 a result of the merge will be retained as core, but by November I expect
 the stats will have normalised somewhat and that special handling
 won't be needed.

 IMO these are the reviewers doing enough over 90 days to meet the
 requirements for core:

 |   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
 | clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
 | cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
 |   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |

 They are already core, so that's easy.

 If you are core, and not on that list, that may be because you're
 coming from tuskar, which doesn't have 90 days of history, or you need
 to get stuck into some more reviews :).

 Now, 30 day history - this is the heads up for folk:

 | clint-fewbar **  | 179  2  27   0 150   83.8% |  6 (  4.0%) |
 | cmsj **          | 179  1  15   0 163   91.1% | 11 (  6.7%) |
 |   lifeless **    | 129  3  39   2  85   67.4% |  2 (  2.3%) |
 |   derekh **      |  41  0  11   0  30   73.2% |  0 (  0.0%) |
 |  slagle          |  37  0  11  26   0   70.3% |  3 ( 11.5%) |
 |   ghe.rivero     |  28  0   4  24   0   85.7% |  2 (  8.3%) |


 I'm using the fairly simple metric of 'average at least one review a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. James and Ghe, good stuff -
 you're well on your way to core. If you're not in that list, please
 treat this as a heads-up that you need to do more reviews to keep on
 top of what's going on, whether to become core or to keep it.

 In next month's update I'll review whether to remove some folk that
 aren't keeping on top of things, as it won't be a surprise :).

 Cheers,
 Rob











-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Martyn Taylor

On 07/10/13 20:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with the Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
| clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
| cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
|   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |

They are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179  2  27   0 150   83.8% |  6 (  4.0%) |
| cmsj **          | 179  1  15   0 163   91.1% | 11 (  6.7%) |
|   lifeless **    | 129  3  39   2  85   67.4% |  2 (  2.3%) |
|   derekh **      |  41  0  11   0  30   73.2% |  0 (  0.0%) |
|  slagle          |  37  0  11  26   0   70.3% |  3 ( 11.5%) |
|   ghe.rivero     |  28  0   4  24   0   85.7% |  2 (  8.3%) |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether to become core or to keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






Whilst I can see that deciding on who is core is a difficult task, I do
feel that creating a competitive environment based on the number of reviews
will be detrimental to the project.


I do feel this is going to result in quantity over quality. Personally,
I'd like to see every commit properly reviewed and tested before getting
a vote, and I don't think these stats promote that.


Regards
Martyn



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
On 8 October 2013 22:44, Martyn Taylor mtay...@redhat.com wrote:
 On 07/10/13 20:03, Robert Collins wrote:


 Whilst I can see that deciding on who is core is a difficult task, I do feel
 that creating a competitive environment based on the number of reviews will be
 detrimental to the project.

I'm not sure how it's competitive: I'd be delighted if every
contributor was also a -core reviewer. I'm not setting, nor do I think
we need to think about setting (at this point anyhow), a cap on the
number of reviewers.

 I do feel this is going to result in quantity over quality. Personally, I'd
 like to see every commit properly reviewed and tested before getting a vote,
 and I don't think these stats promote that.

I think that's a valid concern. However, Nova has been running a (very
slightly less mechanical) form of this for well over a year, and they
are not drowning in -core reviewers. Yes, reviewing is hard, and folk
should take it seriously.

Do you have an alternative mechanism to propose? The key things for me are:
 - folk who are idling are recognised as such and gc'd around about
the time their growing staleness would become an issue for review
correctness
 - folk who have been putting in consistent reading of code + changes
are given the additional responsibility of -core around about the time
that they know enough about what's going on to review effectively.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

On 10/08/2013 10:27 AM, Robert Collins wrote:

Perhaps the best thing to do here is to get tuskar-ui to be part of
the horizon program, and utilise its review team?


This is planned. But it won't happen soon.



On 8 October 2013 19:31, Tzu-Mainn Chen tzuma...@redhat.com wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with the Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
| clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
| cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
|   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |

They are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179  2  27   0 150   83.8% |  6 (  4.0%) |
| cmsj **          | 179  1  15   0 163   91.1% | 11 (  6.7%) |
|   lifeless **    | 129  3  39   2  85   67.4% |  2 (  2.3%) |
|   derekh **      |  41  0  11   0  30   73.2% |  0 (  0.0%) |
|  slagle          |  37  0  11  26   0   70.3% |  3 ( 11.5%) |
|   ghe.rivero     |  28  0   4  24   0   85.7% |  2 (  8.3%) |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether to become core or to keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Hi,

I feel like I should point out that before tuskar merged with tripleo, we had
some distinction between the team working on the tuskar api and the team
working on the UI, with each team focusing reviews on its particular expertise.
The latter team works quite closely with horizon, to the extent of spending a
lot of time involved with horizon development and blueprints. This is done so
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make
sense...? tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a
certain level of horizon and UI expertise is definitely helpful in reviewing
the UI patches.

Thanks,
Tzu-Mainn Chen









Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

Hi,

it seems that not all people agree on what the 'metric' for a core
reviewer should be.

Also, what justifies giving a +1 or a +2.

Could it be a topic at today's meeting?

Ladislav


On 10/07/2013 09:03 PM, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with the Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349  8 140   2 199   57.6% |  2 (  1.0%) |
| clint-fewbar **  | 329  2  54   1 272   83.0% |  7 (  2.6%) |
| cmsj **          | 248  1  25   1 221   89.5% | 13 (  5.9%) |
|   derekh **      |  88  0  28  23  37   68.2% |  6 ( 10.0%) |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 1792  27   0 15083.8% |6 (  4.0%)  |
| cmsj **  | 1791  15   0 16391.1% |   11 (  6.7%)  |
|   lifeless **| 1293  39   2  8567.4% |2 (  2.3%)  |
|derekh ** |  410  11   0  3073.2% |0 (  0.0%)  |
|  slagle  |  370  11  26   070.3% |3 ( 11.5%)  |
|ghe.rivero|  280   4  24   085.7% |2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Jaromir Coufal

Hi Robert,

I have a few concerns regarding metrics and the core team. To sum up, I 
think there need to be more metrics than review counts alone, and core 
reviewers per particular project (not one group). More details follow:


Measures:

* The number of reviews shouldn't be the only indicator - you can get 
into a situation where people sit at a computer and start giving +1s to 
code regardless of quality, just to get quantity.
* Delivery of solutions (code, and other things) should be counted as 
well. It is not the responsibility of a core member just to review code 
but also to deliver.
* Also very important is the person's general activity on IRC, mailing 
lists, etc.


With multiple metrics, we can really be sure that the person is a core 
member on that project. It can be delivering architectural solutions, it 
can be delivering code, it can be reviewing work or discussing 
problems. But reviews alone are not a very strong metric and we can run 
into problems.



Review Process:
---------------
* +1... People should give +1 to something that looks good (they might 
not have tested it, but they indicate that they are fine with it).
* +2... Should be given only if the person tested it and is sure that 
the solution works (meaning running tests, testing functionality, etc.).
* Approved... The same goes for approvals - they are the final step, 
when a person is saying 'merge it'. There needs to be clear certainty 
that what I am merging works and will not break the app.


Quality of code is very important. We shouldn't get into a state where 
core reviewers start giving +2 to code that merely looks OK. They need 
to be sure that it works and solves the problem, and only core people 
on the particular project can assure this.



Core Reviewers:
---------------
* Tzu-Mainn pointed out that there are big differences between 
projects. I think that splitting core membership based on the projects 
where people contribute makes more sense.
* Example: it doesn't make sense that someone who is a core reviewer 
based on image-builder work can give +2 on UI or CLI code, and vice versa.
* To me it makes more sense to have separate core members for each 
project than one big group - then we can assure higher quality of the 
code.
* If there is no way to split core reviewers across projects and we 
have one big group for the whole of TripleO, then we need to make sure 
that all projects are reflected appropriately.


I think the example speaks for itself. It is really crucial to 
consider all the projects within TripleO and to try to assure their 
quality. That's what core members are here for, and that's why I see 
them as experts in a particular project.


I believe that we all want TripleO to succeed, so let's find some 
solutions for how to achieve that.


Thanks
-- Jarda




Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ben Nemec

On 2013-10-08 05:03, Robert Collins wrote:

On 8 October 2013 22:44, Martyn Taylor mtay...@redhat.com wrote:

On 07/10/13 20:03, Robert Collins wrote:



Whilst I can see that deciding on who is Core is a difficult task, I do
feel that creating a competitive environment based on no. reviews will
be detrimental to the project.


I'm not sure how it's competitive : I'd be delighted if every
contributor was also a -core reviewer: I'm not setting, nor do I think
we need to think about setting (at this point anyhow), a cap on the
number of reviewers.

I do feel this is going to result in quantity over quality. Personally,
I'd like to see every commit properly reviewed and tested before getting
a vote, and I don't think these stats are promoting that.


I think that's a valid concern. However Nova has been running a (very
slightly less mechanical) form of this for well over a year, and they
are not drowning in -core reviewers. Yes, reviewing is hard, and folk
should take it seriously.

Do you have an alternative mechanism to propose? The key things for me 
are:

 - folk who are idling are recognised as such and gc'd around about
the time their growing staleness will become an issue with review
correctness
 - folk who have been putting in consistent reading of code + changes
get given the additional responsibility of -core around about the time
that they will know enough about what's going on to review effectively.


This is a discussion that has come up in the other projects (not 
surprisingly), and I thought I would mention some of the criteria that 
are being used in those projects.  The first, and simplest, is from 
Dolph Mathews:


'Ultimately, core contributor to me simply means that this person's 
downvotes on code reviews are consistently well thought out and 
meaningful, such that an upvote by the same person shows a lot of 
confidence in the patch.'


I personally like this definition because it requires a certain volume 
of review work (which benefits the project), but it also takes into 
account the quality of those reviews.  Obviously both are important.  
Note that the +/- and disagreements columns in Russell's stats are 
intended to help with determining review quality.  Nothing can replace 
the judgment of the current cores of course, but if someone has been 
+1'ing in 95% of their reviews it's probably a sign that they aren't 
doing quality reviews.  Likewise if they're -1'ing everything but are 
constantly disagreeing with cores.
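
As a rough illustration of those signals - this is hypothetical code, not
part of Russell's actual stats scripts, and the 95% / 10% thresholds are
made up for the example - one could compute a reviewer's +1 ratio and
disagreement rate like this:

```python
def review_quality_signals(votes, disagreements):
    """Rough quality signals from a reviewer's voting history.

    votes: the reviewer's past votes, each in {-2, -1, +1, +2}.
    disagreements: count of reviews where a core later voted the
    opposite way (the 'disagreements' column in the stats).
    The thresholds below are illustrative only, not project policy.
    """
    total = len(votes)
    plus_ratio = sum(1 for v in votes if v > 0) / total
    disagree_ratio = disagreements / total
    flags = []
    if plus_ratio > 0.95:
        # Almost everything upvoted: possibly shallow rubber-stamping.
        flags.append("mostly rubber-stamp +1s")
    if plus_ratio < 0.5 and disagree_ratio > 0.10:
        # Downvotes frequently overruled by cores.
        flags.append("negative votes often overruled by cores")
    return plus_ratio, disagree_ratio, flags
```

Either flag alone proves nothing, of course - the point is that the ratio
columns give cores a starting place for the human judgment Dolph describes.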


An expanded version of that can be found in this post to the list: 
http://lists.openstack.org/pipermail/openstack-dev/2013-June/009876.html


To me, that is along the same lines as what Dolph said, just a bit more 
specific as to how quality should be demonstrated and measured.


Hope this is helpful.

-Ben



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Clint Byrum
I don't mean to pick on you personally, Jiří, but I have singled this
message out because I feel you have captured the objections to Robert's
initial email well.

Excerpts from Jiří Stránský's message of 2013-10-08 04:30:29 -0700:
 On 8.10.2013 11:44, Martyn Taylor wrote:
  Whilst I can see that deciding on who is Core is a difficult task, I do
  feel that creating a competitive environment based on no. reviews will
  be detrimental to the project.
 
  I do feel this is going to result in quantity over quality. Personally,
  I'd like to see every commit properly reviewed and tested before getting
  a vote and I don't think these stats are promoting that.
 
 +1. I feel that such a metric favors shallow "i like this code" reviews 
 as opposed to deep "i verified that it actually does what it should" 
 reviews. E.g. i hit one such example just this morning on 
 tuskarclient. If i just looked at the code as the other reviewer did, 
 we'd let in code that doesn't do what it should. There's nothing wrong 
 with making a mistake, but I wouldn't like to foster an environment of 
 quick shallow reviews by having such metrics for the core team.
 


I think you may not have worked long enough with Robert Collins to
understand what Robert is doing with the stats. While it may seem that
Robert has simply drawn a line in the sand and is going to sit back and
wait for everyone to cross it before nominating them, nothing could be
further from the truth.

As one gets involved and starts -1'ing and +1'ing, one can expect feedback
from all of us as core reviewers. It is part of the responsibility of
being a core reviewer to communicate not just with the submitter of
patches, but also with the other reviewers. If I see shallow +1's from
people consistently, I'm going to reach out to those people and ask them
to elaborate on their reviews, and I'm going to be especially critical
of their -1's.

 I think it's also important who actually *writes* the code, not just who 
 does reviews. I find it odd that none of the people who most contributed 
 to any of the Tuskar projects in the last 3 months would make it onto 
 the core list [1], [2], [3].
 

I think having written a lot of code in a project is indeed a good way
to get familiar with the code. However, it is actually quite valuable
to have reviewers on a project who did not write _any_ of the code,
as their investment in the code itself is not as deep. They will look
at each change with fresh eyes and bring fewer assumptions.

Reviewing is a different skill than coding, and thus I think it is OK
to measure it differently than coding.

 This might also suggest that we should be looking at contributions to 
 the particular projects, not just the whole program in general. We're 
 such a big program that one's staleness towards some of the components 
 (or being short on global review count) doesn't necessarily mean the 
 person is not important contributor/reviewer on some of the other 
 projects, and i'd also argue this doesn't affect the quality of his work 
 (e.g. there's no relationship between tuskarclient and say, t-i-e, 
 whatsoever).
 

Indeed, I don't think we would nominate or approve a reviewer if they
just did reviews, and never came in the IRC channel, participated in
mailing list discussions, or tried to write patches. It would be pretty
difficult to hold a dialog in reviews with somebody who is not involved
with the program as a whole.

 So i'd say we should get on with having a greater base of core folks and 
 count on people using their own good judgement on where will they 
 exercise their +/-2 powers (i think it's been working very well so far), 
 or alternatively split tripleo-core into some subteams.
 

If we see the review queue get backed up and response times rising, I
could see a push to grow the core review team early. But we're talking
about a 30 day sustained review contribution. That means for 30 days
you're +1'ing instead of +2'ing, and then maybe another 30 days while we
figure out who wants core powers and hold a vote.

If this is causing anyone stress, we should definitely address that and
make a change. However, I feel the opposite. Knowing what is expected
and being able to track where I sit on some of those expectations is
extremely comforting. Of course, easy to say up here with my +2/-2. ;)



Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Monty Taylor


On 10/08/2013 11:22 AM, Clint Byrum wrote:
 I don't mean to pick on you personally, Jiří, but I have singled this
 message out because I feel you have captured the objections to Robert's
 initial email well.

Darn. My pop-up window showed "I don't mean to pick on you personally"
so I rushed to go read the message, and it turns out to be reasonable
and not ranty.

 Excerpts from Jiří Stránský's message of 2013-10-08 04:30:29 -0700:
 On 8.10.2013 11:44, Martyn Taylor wrote:
 Whilst I can see that deciding on who is Core is a difficult task, I do
 feel that creating a competitive environment based on no. reviews will
 be detrimental to the project.

 I do feel this is going to result in quantity over quality. Personally,
 I'd like to see every commit properly reviewed and tested before getting
 a vote and I don't think these stats are promoting that.

 +1. I feel that such a metric favors shallow "i like this code" reviews 
 as opposed to deep "i verified that it actually does what it should" 
 reviews. E.g. i hit one such example just this morning on 
 tuskarclient. If i just looked at the code as the other reviewer did, 
 we'd let in code that doesn't do what it should. There's nothing wrong 
 with making a mistake, but I wouldn't like to foster an environment of 
 quick shallow reviews by having such metrics for the core team.

 
 
 I think you may not have worked long enough with Robert Collins to
 understand what Robert is doing with the stats. While it may seem that
 Robert has simply drawn a line in the sand and is going to sit back and
 wait for everyone to cross it before nominating them, nothing could be
 further from the truth.
 
 As one gets involved and starts -1'ing and +1'ing, one can expect feedback
 from all of us as core reviewers. It is part of the responsibility of
 being a core reviewer to communicate not just with the submitter of
 patches, but also with the other reviewers. If I see shallow +1's from
 people consistently, I'm going to reach out to those people and ask them
 to elaborate on their reviews, and I'm going to be especially critical
 of their -1's.
 
 I think it's also important who actually *writes* the code, not just who 
 does reviews. I find it odd that none of the people who most contributed 
 to any of the Tuskar projects in the last 3 months would make it onto 
 the core list [1], [2], [3].

I believe this is consistent with every other OpenStack project. -core
is not a status badge, nor is it a value judgement on relative coding
skills. -core is PURELY a reviewing job. The only thing it grants
is more weight to the reviews you write in the future, so it makes
perfect sense that it should be judged on the basis of your review work.

It's a mind-shift to make, because in other projects you get 'committer'
access by writing good code. We don't do that in OpenStack. Here, you
get reviewer access by writing good reviews. (this lets the good coders
code and the good reviewers review)

 I think having written a lot of code in a project is indeed a good way
 to get familiar with the code. However, it is actually quite valuable
 to have reviewers on a project who did not write _any_ of the code,
 as their investment in the code itself is not as deep. They will look
 at each change with fresh eyes and bring fewer assumptions.
 
 Reviewing is a different skill than coding, and thus I think it is OK
 to measure it differently than coding.
 
 This might also suggest that we should be looking at contributions to 
 the particular projects, not just the whole program in general. We're 
 such a big program that one's staleness towards some of the components 
 (or being short on global review count) doesn't necessarily mean the 
 person is not important contributor/reviewer on some of the other 
 projects, and i'd also argue this doesn't affect the quality of his work 
 (e.g. there's no relationship between tuskarclient and say, t-i-e, 
 whatsoever).

 
 Indeed, I don't think we would nominate or approve a reviewer if they
 just did reviews, and never came in the IRC channel, participated in
 mailing list discussions, or tried to write patches. It would be pretty
 difficult to hold a dialog in reviews with somebody who is not involved
 with the program as a whole.

For projects inside of openstack-infra (where we have like 30 of them or
something) we've added additional core teams that include infra-core but
have space for additional reviewers. jenkins-job-builder is the first
one we did like this, as it has a fantastically active set of devs and
reviewers who solely focus on that. However, that's the only one we've
done that for so far.

 So i'd say we should get on with having a greater base of core folks and 
 count on people using their own good judgement on where will they 
 exercise their +/-2 powers (i think it's been working very well so far), 
 or alternatively split tripleo-core into some subteams.

 
 If we see the review queue get backed up and response times rising, I
 could see a push to grow the core review team early.

[openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-07 Thread Robert Collins
Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with the Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

| Reviewer         | Reviews  -2  -1  +1  +2   +/- % | Disagreements |
|   lifeless **    |     349   8 140   2 199   57.6% |  2 (  1.0%)   |
| clint-fewbar **  |     329   2  54   1 272   83.0% |  7 (  2.6%)   |
| cmsj **          |     248   1  25   1 221   89.5% | 13 (  5.9%)   |
|    derekh **     |      88   0  28  23  37   68.2% |  6 ( 10.0%)   |

All of whom are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| Reviewer         | Reviews  -2  -1  +1  +2   +/- % | Disagreements |
| clint-fewbar **  |     179   2  27   0 150   83.8% |  6 (  4.0%)   |
| cmsj **          |     179   1  15   0 163   91.1% | 11 (  6.7%)   |
|   lifeless **    |     129   3  39   2  85   67.4% |  2 (  2.3%)   |
|    derekh **     |      41   0  11   0  30   73.2% |  0 (  0.0%)   |
| slagle           |      37   0  11  26   0   70.3% |  3 ( 11.5%)   |
| ghe.rivero       |      28   0   4  24   0   85.7% |  2 (  8.3%)   |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on - whether that's so you become core, or so you
keep it.
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
