Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-21 Thread Gary Kotton
Thanks to everyone for the reviews over the last 24 hours. I really
appreciate the time and effort from all in the review process. I hope that
once we have the tempest results posted automatically for each
patch, it will give the core reviewers more confidence in the VMware
fixes.
Thanks and have a good weekend.
Gary

On 9/21/13 3:10 AM, Russell Bryant rbry...@redhat.com wrote:

On 09/20/2013 07:00 PM, Dan Wendlandt wrote:
 btw, thanks to the core devs who went and took a look at several of the
 vmware reviews today.  It was like Christmas for the team today :)

And for the record, I did all of my reviews before this thread even
started, just as a part of my normal workflow.  I was working from the
havana-rc1 bug list.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Russell Bryant
On 09/20/2013 04:11 PM, Dan Wendlandt wrote:
 Hi Russell, 
 
 Thanks for the detailed thoughts.  Comments below,
 
 Dan
 
 
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The commit
 message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just
 keeps
  having to be rebased as it sits waiting for core reviewer attention.
 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first
 patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
 encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.
 
 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:
 
 1) develop metrics
 2) set goals
 3) track progress against those goals
 
 The numbers I've been using are here:
 
 http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 
  It's great that you have dashboards like this, very cool.  The
 interesting thing here is that the patches I am talking about are not
 waiting on reviews in general, but rather core review.  They have plenty
 of reviews from non-core folks who provide feedback (and they keep
 getting +1'd again as they are rebased every few days).  Perhaps a good
  additional metric to track would be items that have spent a lot of
 time without a negative review, but have not gotten any core reviews.  I
 think that is the root of the issue in the case of the reviews I'm
 talking about.  

The numbers I track do not reset the timer on any +1 (or +2, actually).
 It only resets when it gets a -1 or -2.  At that point, the review is
waiting for an update from a submitter.  Point is, getting a bunch of
+1s does not make it show up lower on the list.  Also, the 3rd list
(time since the last -1) does not reset on a rebase, so that's covered
in this tracking, too.
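
Roughly, the idea is something like the sketch below (not the actual code
behind those numbers; the Gerrit JSON field names here -- createdOn,
patchSets, approvals, grantedOn -- are assumptions about what a
gerrit query --format JSON run would return):

import time

def seconds_waiting(change, now=None):
    # Seconds since the last -1/-2 Code-Review vote, or since creation.
    # +1/+2 votes and new patch sets (rebases) do not reset the clock;
    # only a negative review hands the change back to the submitter.
    now = now or time.time()
    last_negative = change['createdOn']          # epoch seconds
    for patch_set in change.get('patchSets', []):
        for approval in patch_set.get('approvals', []):
            if (approval.get('type') == 'Code-Review'
                    and int(approval.get('value', 0)) < 0):
                last_negative = max(last_negative, approval['grantedOn'])
    return int(now - last_negative)

def longest_waiting(changes, limit=10):
    # Open changes that have waited longest for reviewer attention.
    return sorted(changes, key=seconds_waiting, reverse=True)[:limit]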

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
Hi Russell,

Thanks for the detailed thoughts.  Comments below,

Dan


On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.
 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html


It's great that you have dashboards like this, very cool.  The interesting
thing here is that the patches I am talking about are not waiting on
reviews in general, but rather core review.  They have plenty of reviews
from non-core folks who provide feedback (and they keep getting +1'd again
as they are rebased every few days).  Perhaps a good additional metric to
track would be items that have spent a lot of time without a negative
review, but have not gotten any core reviews.  I think that is the root of
the issue in the case of the reviews I'm talking about.




 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which honestly, I haven't seen much of from this group.  I
 think that would go a long way.


I agree with you on the dynamics of review karma here (having dealt with
this same issue as a PTL of Quantum/Neutron).  My sense of what is going on
here is a bootstrapping issue.  If you are a developer who is brand new to
OpenStack, it would make sense that your first patch or two would be in an
area where you feel most comfortable (e.g., because you understand the
VMware APIs + constructs).  For new developers who aren't pushing a big
new feature but are instead just fixing an existing bug, I would not have
guessed that a huge amount of karma is needed for a review.  People like
garyk and arosen who have more Nova experience are already doing Nova work
outside of the VMware driver (instance groups, neutron / security groups
code, many reviews throughout Nova, not to mention their work in Neutron)
and I expect that to be the path others follow as well.  Nonetheless, I
like the suggestion of having these new developers also try to gain
experience outside of the vmware driver and provide value to the wider
community... is
https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit still the
best place for them to look?  There doesn't seem to be much there that isn't
already in progress.




 I think if you review the history of vmware patches, you'll see my name
 as a reviewer on many (or perhaps most) of them, so I hope nobody thinks
 that I personally am trying to stall things here.  This is just based on my
 experience across all contributions.


Indeed, and in fact, when I first wrote the email, I had called out you,
Dan Smith, and Michael Still as having been very helpful with VMware API
review, but then I was worried that I had left people off the list that had
also been helpful but weren't immediately coming to mind.  I guess I was
damned if I did and damned if I didn't :)

My goal in sending the email was not to 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Joe Gordon
On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.


Personally I tend not to review many vmwareapi patches because without
seeing any public functional tests or being able to run the patch myself, I
am uncomfortable saying it 'looks good to me'. All I can do is make sure
the code looks pythonic; I can make no assessment of whether the patch works or
not. With no shortage of patches to review I tend to review other patches
instead.

A while back Russell announced that we would like all virt drivers to have a public
functional testing system by the release of Icehouse (
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
Public functional testing would allow me to review vmwareapi patches with
almost the same level of confidence as with a driver that we gate on and
that I can trivially try out, such as libvirt.  Until then, if we put an
explicit comment in the release notes saying that vmwareapi is a
group C virt driver (https://wiki.openstack.org/wiki/HypervisorSupportMatrix --
These drivers have minimal testing and may or may not work at any given
time. Use them at your own risk) that would address my concerns and I
would be happy to +2 vmwareapi patches based just on whether the code looks
correct and not on how well the patch works.



  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which honestly, I haven't seen much of from this group.  I
 think that would go a long way.

 I think if you review the history of vmware patches, you'll see my name
 as a reviewer on many (or perhaps most) of them, so I hope nobody thinks
 that I personally am trying to stall things here.  This is just based on my
 experience across all contributions.

 I already put a session on the design summit schedule to discuss the
 future of drivers.  I'm open to alternative approaches for driver
 maintenance, including moving some of them (such as the vmware driver)
 into another tree where the developers focused on it can merge their
 code without waiting for nova-core review.

 http://summit.openstack.org/cfp/details/4

 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Monty Taylor


On 09/20/2013 01:24 PM, Dan Wendlandt wrote:
 
 
 
 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug
 fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ ,
 which fixes
  an important snapshot bug for the vmwareapi driver.  This was
 posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The
 commit message
  clearly shows which tempest tests it fixes.  It has been
 reviewed by
  many vmware reviewers with +1s for a long time, but the patch
 just keeps
  having to be rebased as it sits waiting for core reviewer
 attention.
 
 
 Personally I tend not to review many vmwareapi patches because
 without seeing any public functional tests or being able to run the
 patch myself, I am uncomfortable saying it 'looks good to me'. All I
 can do is make sure the code looks pythonic and make no assessment
  of whether the patch works or not. With no shortage of patches to review
 I tend to review other patches instead.
 
  A while back Russell announced we would like all virt drivers to have a
 public functional testing system by the release of Icehouse
 
 (http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches
 with almost the same level of confidence as with a driver that we
 gate on and that I can trivially try out, such as libvirt.  Until
  then, if we put an explicit comment in the release notes
 saying that vmwareapi is a group C virt driver
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These
 drivers have minimal testing and may or may not work at any given
 time. Use them at your own risk) that would address my concerns and
  I would be happy to +2 vmwareapi patches based just on whether the code
 looks correct and not on how well the patch works.
 
 
 
 Hi Joe,
 
 I couldn't agree more.  In fact, the VMware team has been working hard
  to get a fully automated CI infrastructure set up and integrated with
 upstream Gerrit.  We already run the tempest tests internally and have
 been manually posting tempest results for some patches.  I wouldn't want
 to speak for the dev owner, but I think within a very short time (before
 Havana) you will begin seeing automated reports for tempest tests on top
 of vSphere showing up on Gerrit.  I agree that this will really help
 core reviewers gain confidence that not only does the code look OK,
 but that it works well too.

WOOT!

  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their
 well-written and
  well-targeted bug fixes just sit there, getting no feedback
 and not
  moving closer to merging.  The bug above was the developer's
 first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
 encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer,
 I think
  this is something we need a community strategy to address and
 I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.
 
 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:
 
 1) develop metrics
 2) set goals
 3) track progress against those goals
 
 The numbers I've been using are here:
 
 http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 Right now we're running a bit behind the set goal of keeping the
 average
 under 5 days for the latest revision (1st set of numbers), and 7
 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking
 this) below
 the average for all OpenStack projects:
 

 http://russellbryant.net/openstack-stats/all-openreviews.html
 
 Review prioritization is not something that I or anyone 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:

  On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.


 Personally I tend not to review many vmwareapi patches because without
 seeing any public functional tests or being able to run the patch myself, I
 am uncomfortable saying it 'looks good to me'. All I can do is make sure
  the code looks pythonic; I can make no assessment of whether the patch works or
 not. With no shortage of patches to review I tend to review other patches
 instead.

 A while back Russell announced we would like all virt drivers to have a
 public functional testing system by the release of Icehouse (
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches with
 almost the same level of confidence as with a driver that we gate on and
 that I can trivially try out, such as libvirt.  Until then, if we put an
 explicit comment in the release notes saying that vmwareapi is a
 group C virt driver (
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These drivers
 have minimal testing and may or may not work at any given time. Use them at
 your own risk) that would address my concerns and I would be happy to +2
 vmwareapi patches based just on whether the code looks correct and not on how
 well the patch works.



Hi Joe,

I couldn't agree more.  In fact, the VMware team has been working hard to
get a fully automated CI infrastructure set up and integrated with upstream
Gerrit.  We already run the tempest tests internally and have been manually
posting tempest results for some patches.  I wouldn't want to speak for the
dev owner, but I think within a very short time (before Havana) you will
begin seeing automated reports for tempest tests on top of vSphere showing
up on Gerrit.  I agree that this will really help core reviewers gain
confidence that not only does the code look OK, but that it works well
too.

Dan









 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:37 PM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 04:11 PM, Dan Wendlandt wrote:
  Hi Russell,
 
  Thanks for the detailed thoughts.  Comments below,
 
  Dan
 
 
  On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
  On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
   I think the real problem here is that in Nova there are bug fixes
 that
   are tiny and very important to a particular subset of the user
   population and yet have been around for well over a month without
   getting a single core review.
  
   Take for example https://review.openstack.org/#/c/40298/ , which
 fixes
   an important snapshot bug for the vmwareapi driver.  This was
 posted
   well over a month ago, on August 5th.  It is a solid patch of only 54
   new/changed lines, including unit test enhancements.  The commit
  message
   clearly shows which tempest tests it fixes.  It has been reviewed
 by
   many vmware reviewers with +1s for a long time, but the patch just
  keeps
   having to be rebased as it sits waiting for core reviewer
 attention.
  
   To me, the high-level take away is that it is hard to get new
   contributors excited about working on Nova when their well-written
 and
   well-targeted bug fixes just sit there, getting no feedback and not
   moving closer to merging.  The bug above was the developer's first
  patch
   to OpenStack and while he hasn't complained a bit, I think the
   experience is far from the community behavior that we need to
  encourage
   new, high-quality contributors from diverse sources.  For Nova to
   succeed in its goals of being a platform agnostic cloud layer, I
 think
   this is something we need a community strategy to address and I'd
 love
   to see it as part of the discussion put forward by those people
   nominating themselves as PTL.
 
  I've discussed this topic quite a bit in the past.  In short, my
  approach has been:
 
  1) develop metrics
  2) set goals
  3) track progress against those goals
 
  The numbers I've been using are here:
 
  http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 
  It's great that you have dashboards like this, very cool.  The
  interesting thing here is that the patches I am talking about are not
  waiting on reviews in general, but rather core review.  They have plenty
  of reviews from non-core folks who provide feedback (and they keep
  getting +1'd again as they are rebased every few days).  Perhaps a good
  additional metric to track would be items that have spent a lot of
  time without a negative review, but have not gotten any core reviews.  I
  think that is the root of the issue in the case of the reviews I'm
  talking about.

 The numbers I track do not reset the timer on any +1 (or +2, actually).
  It only resets when it gets a -1 or -2.  At that point, the review is
 waiting for an update from a submitter.  Point is, getting a bunch of
 +1s does not make it show up lower on the list.  Also, the 3rd list
 (time since the last -1) does not reset on a rebase, so that's covered
 in this tracking, too.


I see, I misunderstood the labels.  One thing to consider adding would be
something measuring the patches that have gone the longest without any core
review, which (by my current understanding) isn't currently measured.
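
Something along these lines, perhaps (just a sketch, not code from the real
dashboard -- the Gerrit JSON field names like currentPatchSet, approvals and
createdOn are assumptions about what a gerrit query would return):

import time

def has_core_review(change):
    # True if any Code-Review vote on the current patch set is a +2 or -2,
    # i.e. a core reviewer has looked at it at all.
    approvals = change.get('currentPatchSet', {}).get('approvals', [])
    return any(a.get('type') == 'Code-Review'
               and abs(int(a.get('value', 0))) == 2
               for a in approvals)

def waiting_on_core(changes):
    # Open changes with no core review yet, oldest first.
    pending = [c for c in changes if not has_core_review(c)]
    return sorted(pending, key=lambda c: c['createdOn'])

def report(changes):
    now = time.time()
    for change in waiting_on_core(changes):
        days = (now - change['createdOn']) / 86400.0
        print('%6.1f days without a core review: %s' % (days, change['url']))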

Again, I think it's great that you have these charts, and I'm quite sure
that it's your use of charts like this that helped you be so effective at
spotting reviews that are stalled in the vmware driver and elsewhere.
Thanks again for your help on that front.

Dan






 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Smith
 What criteria would be used to determine which drivers stay in-tree
 vs. maintained as forks? E.g. libvirt driver in, everyone else out?
 Open-platform drivers (libvirt, xen) in, closed-platform drivers
 (vmware, hyperv) out? Drivers for platforms with large (possibly
 non-OpenStack) production deployments (libvirt, xen, vmware, hyperv)
 in, drivers without (e.g. docker), out?

I think this is in response to demand, not necessarily desire by the
nova-core folks. IMHO, maintaining any sort of stable virt driver API
for out-of-tree drivers is something we should try to avoid if at all
possible. I think the potential reason for moving a driver out of
tree would be that the maintainers of that driver would prefer the
freedom of merging anything they want without waiting for reviews.

As was mentioned earlier in the thread, however, there is a goal to get
every driver that is in-tree to have functional testing by Icehouse.
This is unrelated to a move for maintenance reasons, and is something I
fully support. If we don't have functional testing on a driver, we
should consider it broken (and not supported) IMHO.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Michael Still
On Sat, Sep 21, 2013 at 6:24 AM, Dan Wendlandt d...@nicira.com wrote:
 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
 wrote:
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:

 I couldn't agree more.  In fact, the VMware team has been working hard to
 get a fully automated CI infrastructure set up and integrated with upstream
 Gerrit.  We already run the tempest tests internally and have been manually
 posting tempest results for some patches.  I wouldn't want to speak for the
 dev owner, but I think within a very short time (before Havana) you will
 begin seeing automated reports for tempest tests on top of vSphere showing
 up on Gerrit.  I agree that this will really help core reviewers gain
 confidence that not only does the code look OK, but that it works well
 too.

How are you doing this? Joshua Hesketh has been working on integrating
our internal DB CI tests into upstream zuul, so I wonder if there are
synergies that can be harnessed here.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Joe Gordon
On Sep 20, 2013 1:27 PM, Dan Wendlandt d...@nicira.com wrote:




 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago, on August 5th.  It is a solid patch of only 54
  new/changed lines, including unit test enhancements.  The commit
message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just
keeps
  having to be rebased as it sits waiting for core reviewer attention.


 Personally I tend not to review many vmwareapi patches because without
seeing any public functional tests or being able to run the patch myself, I
am uncomfortable saying it 'looks good to me'. All I can do is make sure
the code looks pythonic; I can make no assessment of whether the patch works or
not. With no shortage of patches to review I tend to review other patches
instead.

 A while back Russell announced we would like all virt drivers to have a
public functional testing system by the release of Icehouse (
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
Public functional testing would allow me to review vmwareapi patches with
almost the same level of confidence as with a driver that we gate on and
that I can trivially try out, such as libvirt.  Until then, if we put an
explicit comment in the release notes saying that vmwareapi is a
group C virt driver (https://wiki.openstack.org/wiki/HypervisorSupportMatrix --
These drivers have minimal testing and may or may not work at any given
time. Use them at your own risk) that would address my concerns and I
would be happy to +2 vmwareapi patches based just on whether the code looks
correct and not on how well the patch works.



 Hi Joe,

 I couldn't agree more.  In fact, the VMware team has been working hard to
get a fully automated CI infrastructure set up and integrated with upstream
Gerrit.  We already run the tempest tests internally and have been manually
posting tempest results for some patches.  I wouldn't want to speak for the
dev owner, but I think within a very short time (before Havana) you will
begin seeing automated reports for tempest tests on top of vSphere showing
up on Gerrit.  I agree that this will really help core reviewers gain
confidence that not only does the code look OK, but that it works well
too.

 Dan

Awesome; when that happens I hope to review more vmwareapi patches.  Part
of the trick will be in how I can see that, after a nova patch is merged, the
vmwareapi system will cover that case going forward.









 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first
patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
 encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread David Ripton

On 09/20/2013 04:11 PM, Dan Wendlandt wrote:


It's great that you have dashboards like this, very cool.  The
interesting thing here is that the patches I am talking about are not
waiting on reviews in general, but rather core review.  They have plenty
of reviews from non-core folks who provide feedback (and they keep
getting +1'd again as they are rebased every few days).  Perhaps a good
additional metric to track would be items that have spent a lot of
time without a negative review, but have not gotten any core reviews.  I
think that is the root of the issue in the case of the reviews I'm
talking about.


I feel the pain.  Especially when you have a +2 but need to rebase 
before the second +2 comes in, and lose it.


I just sent a pull request to next-review to add --onlyplusone and 
--onlyplustwo options, to give core reviewers an easy way to 
focus on already-somewhat-vetted reviews and leave the new reviews to 
non-core reviewers.
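
The idea behind those flags is roughly the following (a hypothetical sketch,
not the actual next-review code; the vote layout assumes Gerrit's JSON query
output):

def max_code_review_vote(change):
    # Highest Code-Review vote on the current patch set (0 if none).
    approvals = change.get('currentPatchSet', {}).get('approvals', [])
    votes = [int(a['value']) for a in approvals
             if a.get('type') == 'Code-Review']
    return max(votes) if votes else 0

def filter_changes(changes, only_plus_one=False, only_plus_two=False):
    # Drop changes that have not yet been vetted to the requested level,
    # so cores can spend their time on already-reviewed patches.
    threshold = 2 if only_plus_two else (1 if only_plus_one else 0)
    return [c for c in changes if max_code_review_vote(c) >= threshold]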


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:58 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Sep 20, 2013 1:27 PM, Dan Wendlandt d...@nicira.com wrote:
 
 
 
 
  On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
 wrote:
 
  On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
   I think the real problem here is that in Nova there are bug fixes
 that
   are tiny and very important to a particular subset of the user
   population and yet have been around for well over a month without
   getting a single core review.
  
   Take for example https://review.openstack.org/#/c/40298/ , which
 fixes
   an important snapshot bug for the vmwareapi driver.  This was posted
   well over a month ago, on August 5th.  It is a solid patch of only 54
   new/changed lines, including unit test enhancements.  The commit
 message
   clearly shows which tempest tests it fixes.  It has been reviewed by
   many vmware reviewers with +1s for a long time, but the patch just
 keeps
   having to be rebased as it sits waiting for core reviewer attention.
 
 
  Personally I tend not to review many vmwareapi patches because without
 seeing any public functional tests or being able to run the patch myself, I
 am uncomfortable saying it 'looks good to me'. All I can do is make sure
 the code looks pythonic; I can make no assessment of whether the patch works or
 not. With no shortage of patches to review I tend to review other patches
 instead.
 
  A while back Russell announced we would like all virt drivers to have a
 public functional testing system by the release of Icehouse (
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches with
 almost the same level of confidence as with a driver that we gate on and
 that I can trivially try out, such as libvirt.  Until then, if we put an
 explicit comment in the release notes saying that vmwareapi is a
 group C virt driver (
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These drivers
 have minimal testing and may or may not work at any given time. Use them at
 your own risk) that would address my concerns and I would be happy to +2
 vmwareapi patches based just on whether the code looks correct and not on how
 well the patch works.
 
 
 
  Hi Joe,
 
  I couldn't agree more.  In fact, the VMware team has been working hard
 to get a fully automated CI infrastructure set up and integrated with
 upstream Gerrit.  We already run the tempest tests internally and have been
 manually posting tempest results for some patches.  I wouldn't want to
 speak for the dev owner, but I think within a very short time (before
 Havana) you will begin seeing automated reports for tempest tests on top of
 vSphere showing up on Gerrit.  I agree that this will really help core
 reviewers gain confidence that not only does the code look OK, but that
 it works well too.
 
  Dan

 Awesome, when that happens I hope to review more vmwareapi patches.  Part
 of the trick will be in how I can see that after a nova patch is merged the
 vmwareapi system will cover that case going forward.


Great.  By the way, all of this automated tempest testing is running nested
on top of a physical OpenStack-on-vSphere cloud we run internally.  That
cloud has the ability to host labs that are externally accessible.  So if
you're a core Nova reviewer (or other person who does a lot of Nova
reviews), I could definitely look into how we can get you access to a
devstack + vSphere environment for your personal use in reviewing and testing
patches.  Anything I can do to make a core reviewer's life easier here, I'm
all for.  Feel free to reach out to me off-list.

Dan




 
 
 
 
 
 
 
 
 
  
   To me, the high-level take away is that it is hard to get new
   contributors excited about working on Nova when their well-written
 and
   well-targeted bug fixes just sit there, getting no feedback and not
   moving closer to merging.  The bug above was the developer's first
 patch
   to OpenStack and while he hasn't complained a bit, I think the
   experience is far from the community behavior that we need to
  encourage
   new, high-quality contributors from diverse sources.  For Nova to
   succeed in its goals of being a platform agnostic cloud layer, I
 think
   this is something we need a community strategy to address and I'd
 love
   to see it as part of the discussion put forward by those people
   nominating themselves as PTL.
 
  I've discussed this topic quite a bit in the past.  In short, my
  approach has been:
 
  1) develop metrics
  2) set goals
  3) track progress against those goals
 
  The numbers I've been using are here:
 
  http://russellbryant.net/openstack-stats/nova-openreviews.html
 
  Right now we're running a bit behind the set goal of keeping the
 average
  under 5 days for the latest revision (1st set of numbers), and 7 days
  for the oldest revision since 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
btw, thanks to the core devs who went and took a look at several of the
vmware reviews today.  It was like Christmas for the team today :)


On Fri, Sep 20, 2013 at 2:25 PM, Michael Still mi...@stillhq.com wrote:

 On Sat, Sep 21, 2013 at 7:12 AM, Dan Wendlandt d...@nicira.com wrote:
  On Fri, Sep 20, 2013 at 2:05 PM, Michael Still mi...@stillhq.com
 wrote:

  How are you doing this? Joshua Hesketh has been working on integrating
  our internal DB CI tests into upstream zuul, so I wonder if there are
  synergies that can be harnessed here.
 
  We're just using the standard stuff built by the OpenStack CI team for
  third-party testing: http://ci.openstack.org/third_party.html
 
  Is that what you were asking, or am I misunderstanding?

 Ahhh, so that's how our initial prototype was built as well, but the
 new zuul way is much nicer (he says in a handwavey way). I didn't do
 the work though, so I can't be too specific apart from saying you
 don't need to do any of the talking-to-gerrit bits any more -- it's
 possible to just run up a zuul instance which hooks into the upstream
 one and that runs your tests. zuul handles detecting new reviews and
 writing results to gerrit for you.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Russell Bryant
On 09/20/2013 07:00 PM, Dan Wendlandt wrote:
 btw, thanks to the core devs who went and took a look at several of the
 vmware reviews today.  It was like Christmas for the team today :)

And for the record, I did all of my reviews before this thread even
started, just as a part of my normal workflow.  I was working from the
havana-rc1 bug list.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev