Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 11 October 2013 15:41, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
 Current reviews require:

 +1 de facto driver X maintainer(s)
 +2  core reviewer
 +2A  core reviewer

 While with the proposed scenario we'd get to a way faster route:

 +2  driver X maintainer
 +2A another driver X maintainer or a core reviewer

 This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important or obvious to somebody who focuses on one driver are totally
different from, and often far more limited than, the concerns of somebody
who reviews many drivers and core code changes.

-- 
Duncan Thomas



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti

On Oct 15, 2013, at 18:14, Duncan Thomas duncan.tho...@gmail.com wrote:

On 11 October 2013 15:41, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
Current reviews require:

+1 de facto driver X maintainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X maintainer
+2A another driver X maintainer or a core reviewer

This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important / obvious to somebody who focuses on one driver are totally
different, and often far more limited, than the concerns of somebody
who reviews many drivers and core code changes.

Although the eyes of somebody who comes from a different domain usually bring 
additional points of view and benefits, this has not particularly been the case 
where our driver is concerned. As I already wrote, almost all the reviews so far 
have been related to unit tests or minor formal corrections.

I disagree on the "far more limited" point: driver devs (at least in our case) 
have to work on a wider range of projects besides Nova (e.g. Neutron, Cinder, 
Ceilometer and, outside OpenStack proper, Open vSwitch and Crowbar, to name the 
most relevant cases).





--
Duncan Thomas



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 11 October 2013 20:51, Rochelle.Grober rochelle.gro...@huawei.com wrote:
 Proposed solution:

 There have been a couple of solutions proposed.  I’m presenting a
 merged/hybrid solution that may work

 · Create a new repository for the extra drivers:

 o   Keep kvm and Xenapi in the Nova project as “reference” drivers

 o   openstack/nova-extra-drivers (proposed by rbryant)

 o   Have all drivers other than reference drivers in the extra-drivers
 project until they meet the maturity of the ones in Nova

 o   The core reviewers for nova-extra-drivers will come from its developer
 pool.  As Alessandro pointed out, all the driver developers have more in
 common with each other than core Nova, so they should be able to do a better
 job of reviewing these patches than Nova core.  Plus, this might create some
 synergy between different drivers that will result in more commonalities
 across drivers and better stability.  This also reduces the workloads on
 both Nova Core reviewers and the driver developers/core reviewers.

 o   If you don’t feel comfortable with the last bullet, have the Nova core
 reviewers do the final approval, but only for the obvious “does this code
 meet our standards?”



 The proposed solution focuses the strengths of the different developers in
 their strong areas.  Everyone will still have to stretch to do reviews and
 now there is a possibility that the developers that best understand the
 drivers might be able to advance the state of the drivers by sharing their
 expertise amongst each other.

The problem here is that you need to keep the Nova core and drivers
trees in sync... if a core change causes CI to break because it
requires a change to the driver code (this has happened a couple of
times in Cinder; when they're in the same tree you can just fix them
all up in one go), there's a nasty dance to get the patches in: the
drivers need updating to work with both the old and the new core code,
then the core code is updated, and then the support for the old core
code is removed... yuck.
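
To make that dance concrete, here is a minimal Python sketch (hypothetical
method and argument names, not actual Nova or Cinder code) of the
compatibility shim an out-of-tree driver would have to carry while the
core-side calling convention changes underneath it:

    # Step 1: the out-of-tree driver is patched to tolerate BOTH the old and
    # the new calling conventions, so its CI keeps passing against current core.
    class ExampleDriver(object):
        def spawn(self, context, instance, image_meta, *args, **kwargs):
            # Newer core passes block_device_info as a keyword argument;
            # older core passed it positionally. Accept either form.
            block_device_info = kwargs.get('block_device_info')
            if block_device_info is None and args:
                block_device_info = args[0]
            return self._do_spawn(context, instance, image_meta,
                                  block_device_info)

        def _do_spawn(self, context, instance, image_meta, block_device_info):
            pass  # the actual work goes here

    # Step 2: once the shim has merged, the core-side change that switches to
    # the new convention can land without breaking the driver's gate.
    # Step 3: a third, driver-only patch removes the shim, leaving just the
    # new signature -- three coordinated changes instead of one.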



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 13 October 2013 00:19, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:

 If you don't like any of the options that this already long thread is
 providing, I'm absolutely open to discuss any constructive idea. But please,
 let's get out of this awful mess.

 OpenStack is still a young project. Let's make sure that we can hand it on
 to the next devs generations by getting rid of these management bottlenecks
 now!

Get a hyper-v person trained up to the point they are a nova core
reviewer, where they can not only prioritise hyper-v related reviews
but also reduce the general review backlog in nova (which affects
everybody... there are a few cinder features that required nova merges
that didn't get in before feature freeze either and had to be disabled
in cinder).

There's a 'tax' to contributing to openstack, which is dedicating some
time to reviewing other people's work. The more people do that, the
faster things go for everybody. The higher rate tax is being a core
reviewer, but that comes with certain advantages too.

-- 
Duncan Thomas



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti


On Oct 15, 2013, at 19:18, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 13 October 2013 00:19, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 
 If you don't like any of the options that this already long thread is
 providing, I'm absolutely open to discuss any constructive idea. But please,
 let's get out of this awful mess.
 
 OpenStack is still a young project. Let's make sure that we can hand it on
 to the next devs generations by getting rid of these management bottlenecks
 now!
 
 Get a hyper-v person trained up to the point they are a nova core
 reviewer, where they can not only prioritise hyper-v related reviews
 but also reduce the general review backlog in nova (which affects
 everybody... there are a few cinder features that required nova merges
 that didn't get in before feature freeze either and had to be disabled
 in cinder).
 

About getting a Nova core reviewer, here is what I wrote in a previous email on this thread:

 …
 Our domain is the area in which my sub-team and I can add the biggest value. 
 Being also an independent startup, we have now reached the stage in which we can 
 sponsor some devs to do reviews on an ongoing basis outside of our core domain, 
 but this will take a few months, spanning one or two releases, as acquiring the 
 necessary understanding of a project like Nova cannot be done overnight.
 …

Anyway, although this will help, it won't be a solution by itself. Central 
control without delegation is going to fail anyway as the project size increases.


 There's a 'tax' to contributing to openstack, which is dedicating some
 time to reviewing other people's work. The more people do that, the
 faster things go for everybody. The higher rate tax is being a core
 reviewer, but that comes with certain advantages too.
 

Here's the point: a driver dev pays this tax across multiple projects, e.g. by 
reviewing Hyper-V code in Nova, Neutron, Cinder, Ceilometer, Cloudbase-Init, 
Crowbar, Open vSwitch and so on. 
As a consequence, the amount of review work in any single project will never be 
enough to earn core status.


 -- 
 Duncan Thomas
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti

On Oct 15, 2013, at 19:03, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 11 October 2013 20:51, Rochelle.Grober rochelle.gro...@huawei.com wrote:
 Proposed solution:
 
 There have been a couple of solutions proposed.  I’m presenting a
 merged/hybrid solution that may work
 
 · Create a new repository for the extra drivers:
 
 o   Keep kvm and Xenapi in the Nova project as “reference” drivers
 
 o   openstack/nova-extra-drivers (proposed by rbryant)
 
 o   Have all drivers other than reference drivers in the extra-drivers
 project until they meet the maturity of the ones in Nova
 
 o   The core reviewers for nova-extra-drivers will come from its developer
 pool.  As Alessandro pointed out, all the driver developers have more in
 common with each other than core Nova, so they should be able to do a better
 job of reviewing these patches than Nova core.  Plus, this might create some
 synergy between different drivers that will result in more commonalities
 across drivers and better stability.  This also reduces the workloads on
 both Nova Core reviewers and the driver developers/core reviewers.
 
 o   If you don’t feel comfortable with the last bullet, have the Nova core
 reviewers do the final approval, but only for the obvious “does this code
 meet our standards?”
 
 
 
 The proposed solution focuses the strengths of the different developers in
 their strong areas.  Everyone will still have to stretch to do reviews and
 now there is a possibility that the developers that best understand the
 drivers might be able to advance the state of the drivers by sharing their
 expertise amongst each other.
 
 The problem here is that you need to keep nova-core and the drivers
 tree in sync... if a core change causes CI to break because it
 requires a change to the driver code (this has happened a couple of
 times in cinder... when they're in the same tree you can just fix em
 all up, easy), there's a nasty dance to get the patches in since the
 drivers need updating to work with both the old and the new core code,
 then the core code updating, then the support for the old core code
 removing… yuck
 

We have been discussing this for a while now and it's IMO almost a nonexistent 
issue in Nova considering how seldom those changes happen in the driver 
interface.
It would obviously be helpful if the Nova team decided to make the driver 
interface stable (e.g. versioned).
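
As a rough illustration of what "versioned" could mean here (purely a sketch
with made-up names and numbers, not an actual Nova proposal):

    # Hypothetical sketch of a versioned virt driver interface.
    CORE_DRIVER_API = (1, 2)   # (major, minor) published by the core tree


    class BaseDriver(object):
        # Each driver declares the interface version it was written against.
        DRIVER_API = (1, 0)

        @classmethod
        def check_compatible(cls):
            core_major, core_minor = CORE_DRIVER_API
            drv_major, drv_minor = cls.DRIVER_API
            # Within a major version the core promises backwards compatibility,
            # so an older driver keeps working; a major bump requires a port.
            if drv_major != core_major:
                raise RuntimeError("driver targets API %s, core provides %s"
                                   % (cls.DRIVER_API, CORE_DRIVER_API))
            return drv_minor <= core_minor

With something like this, an out-of-tree driver could at least fail fast and
explicitly instead of breaking at an arbitrary call site.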

That said, if having to cope with occasional breakage during a dev cycle is 
the price to pay to get out of the current management mess, well, that'd be by 
far the lesser of the two evils. :-)




Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti


On Oct 15, 2013, at 18:59, Matt Riedemann mrie...@us.ibm.com wrote:

Sorry to pile on, but:

this was not particularly the case for what our driver is concerned. As I 
already wrote, almost all the reviews so far have been related to unit tests or 
minor formal corrections.

As I pointed out in patch set 1 here: 
https://review.openstack.org/#/c/43592/

There was no unit test coverage for an entire module 
(nova.virt.hyperv.volumeops) before that patch.

So while I agree that driver maintainers know their code best and how it all 
works in the dirty details, they are also going to be the ones to cut corners 
to get things fixed, which usually shows up as a lack of test coverage - 
and that's a good reason to have external reviewers on everything, to keep us 
all honest.


Let me add, as an example, this patch with a large number of additional unit 
tests that we decided to provide, without external input, to improve our 
(already good) test coverage:
https://review.openstack.org/#/c/48940/

I agree that peer review is a fundamental part of the development process and, 
as you were saying, a way to keep each other on track. But this is something 
that we can do within the driver team with or without the help of the Nova 
team, especially now that the Hyper-V community is growing at a fast pace.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Thierry Carrez
Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.
 
 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]

I'm with Joe here. Additionally, I don't see how the proposed solution
would solve anything for the original issue.

You propose letting a subteam approve incremental patches in a specific
branch, and propose a big blob every milestone to merge in Nova proper,
so that the result can be considered golden and maintained by the Nova
team. So nova-core would still have to review it, since they put their
name on it. I don't see how reviewing the big blob is a lot easier than
reviewing incremental patches. Doing it under the time pressure of the
upcoming milestone won't drive better results.

Furthermore, the issue you raised was with havana release candidates,
for which we'd definitely not take the big blob approach anyway, and
go incremental all the way.


The subsystem mechanism works for the Linux kernel due to a trust model.
Linus doesn't review all patches coming from a subsystem maintainer, he
developed a trust in the work coming from that person over the years.

The equivalent in the OpenStack world would be to demonstrate that you
understand enough of the rest of the Nova code to avoid breaking it (and
to follow new conventions and features added there). This is done by
participating in reviews on the rest of the code. Then your +1s can be
considered as +2s, since you're the domain expert and you are trusted to
know enough of the rest of the code. That model works quite well with
oslo-incubator, without needing a separate branch for every incubated API.

Finally, the best way to not run into those priority hurdles is by
anticipating them. Since hyper-V is not tested at the gate, reviewers
will always be more reluctant to accept late features and RC fixes
that affect hyper-V code. Landing features at the beginning of a cycle
and working on bugfixes well before we enter the release candidate
phases... that's the best way to make sure your work gets in before release.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Alessandro Pilotti


 On 14.10.2013, at 11:18, Thierry Carrez thie...@openstack.org wrote:
 
 Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.
 
 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]
 
 I'm with Joe here. Additionally, I don't see how the proposed solution
 would solve anything for the original issue.
 
 You propose letting a subteam approve incremental patches in a specific
 branch, and propose a big blob every milestone to merge in Nova proper,
 so that the result can be considered golden and maintained by the Nova
 team. So nova-core would still have to review it, since they put their
 name on it. I don't see how reviewing the big blob is a lot easier than
 reviewing incremental patches. Doing it under the time pressure of the
 upcoming milestone won't drive better results.
 
I already replied to this in the following emails: it was a proposal based on 
all the feedback, trying to find common ground, and surely not the best option.

 Furthermore, the issue you raised was with havana release candidates,
 for which we'd definitely not take the big blob approach anyway, and
 go incremental all the way.
 
Ditto

 
 The subsystem mechanism works for the Linux kernel due to a trust model.
 Linus doesn't review all patches coming from a subsystem maintainer, he
 developed a trust in the work coming from that person over the years.
 
That's the only way to go IMO as the project gets bigger.

 The equivalent in the OpenStack world would be to demonstrate that you
 understand enough of the rest of the Nova code to avoid breaking it (and
 to follow new conventions and features added there). This is done by
 participating to reviews on the rest of the code. Then your +1s can be
 considered as +2s, since you're the domain expert and you are trusted to
 know enough of the rest of the code. That model works quite well with
 oslo-incubator, without needing a separate branch for every incubated API.
 

How would a driver break Nova? It's 100% decoupled. The only contact area is 
the driver interface, which we simply consume without changing it.

In the very rare cases in which we propose changes to Nova code (only the RDP 
patch so far in three releases), that would of course be part of the Nova 
project, not the driver.

Oslo-incubator is definitely not a good example here as its code gets consumed 
by the other projects.

A separate project would raise no concerns about breaking anything; only users 
of that specific driver would install it, e.g.:

pip install nova-driver-hyperv
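
For context, the mechanism would simply be the existing compute_driver
configuration option pointing at a class shipped by the separate package.
A simplified sketch of that kind of dynamic loading (not Nova's actual loader
code, and the dotted path below is hypothetical):

    # nova.conf would carry something like (hypothetical package/class path):
    #   [DEFAULT]
    #   compute_driver = nova_driver_hyperv.driver.HyperVDriver
    import importlib


    def load_compute_driver(dotted_path):
        """Import and return the driver class named by the config option."""
        module_name, _, class_name = dotted_path.rpartition('.')
        module = importlib.import_module(module_name)
        return getattr(module, class_name)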

For the moment, our Windows code is included in every Linux distribution that 
ships OpenStack (Ubuntu, RH/CentOS+RDO, etc.). I find it quite funny, to be 
honest.

 Finally, the best way to not run into those priority hurdles is by
 anticipating them. Since hyper-V is not tested at the gate, reviewers
 will always be more reluctant in accepting late features and RC fixes
 that affect hyper-V code. Landing features at the beginning of a cycle
 and working on bugfixes well before we enter the release candidate
 phases... that's the best way to make sure your work gets in before release.
 

What if a bug gets reported during the RC phase, as just happened? How can 
we work on it before it gets reported? Should I look for a crystal ball? :-)

Landing all features at the beginning of the cycle and then spending the next 
three months begging for reviews that add almost nothing to the patches, 
without any guarantee that they will even happen? That would simply mean being 
constantly one entire release late in the development cycle, without any 
advantage.

Besides the blueprints, the big problem is with bug fixes. Once you have a fix, 
why wait weeks before releasing it and leave users unhappy? 

As an example, we have a couple of critical bugs for Havana with their fixes 
already under review that nobody even cared to triage, let alone review.

Considering that we are not singled out here, the only explanation is that the 
Nova team is simply no longer able to keep up with the increasing amount of bugs 
and new features, with an obvious negative impact on users.

Let's face it: the Nova team cannot scale fast enough as the project size 
increases at this pace.

Delegation of responsibility over partitioned and decoupled areas is the only 
proven way out, as the Linux kernel project clearly shows.

Alessandro

 Regards,
 
 -- 
 Thierry Carrez (ttx)
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Christopher Yeoh
On Mon, 14 Oct 2013 11:58:22 +
Alessandro Pilotti apilo...@cloudbasesolutions.com wrote:
 
 As an example, we have a couple of critical bugs for Havana with
 their fix already under review that nobody cared even to triage, let
 alone review.
 

Anyone can join the nova-bugs team on launchpad and help triage the
incoming bugs. You don't need to be a core to do that.

Regards,

Chris



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-14 Thread Russell Bryant
On 10/14/2013 04:10 AM, Thierry Carrez wrote:
 Joe Gordon wrote:
 [...]
 This sounds like a very myopic solution to the issue you originally
 raised, and I don't think it will solve the underlying issues.

 Taking a step back, you originally raised a concern about how we
 prioritize reviews with the havana-rc-potential tag.
 [...]
 
 I'm with Joe here. Additionally, I don't see how the proposed solution
 would solve anything for the original issue.
 
 You propose letting a subteam approve incremental patches in a specific
 branch, and propose a big blob every milestone to merge in Nova proper,
 so that the result can be considered golden and maintained by the Nova
 team. So nova-core would still have to review it, since they put their
 name on it. I don't see how reviewing the big blob is a lot easier than
 reviewing incremental patches. Doing it under the time pressure of the
 upcoming milestone won't drive better results.
 
 Furthermore, the issue you raised was with havana release candidates,
 for which we'd definitely not take the big blob approach anyway, and
 go incremental all the way.

Regarding the original issue, I actually try very hard to stay on top
of how Nova is doing with the review queue. I wrote about this in
detail in my PTL candidacy (see the "Code Review Process" section of [1]).

I still maintain that the problem with review times is not quite as bad
as some people make it out to be now and then.  If there's an angle I'm
not tracking, I would love help adding more to these stats.

Of course, there's probably also quite a bit of variety in expectations.
 Perhaps we could do a better job of communicating what is a reasonable
expectation when posting reviews.

Note that the times look worse than usual right now, but that's
explained by a bunch of patches that were blocked by the feature freeze
being restored, and it looks like they've been waiting for review the
whole time, even though they were abandoned for a while.

http://russellbryant.net/openstack-stats/nova-openreviews.html

 The subsystem mechanism works for the Linux kernel due to a trust model.
 Linus doesn't review all patches coming from a subsystem maintainer, he
 developed a trust in the work coming from that person over the years.
 
 The equivalent in the OpenStack world would be to demonstrate that you
 understand enough of the rest of the Nova code to avoid breaking it (and
 to follow new conventions and features added there). This is done by
 participating to reviews on the rest of the code. Then your +1s can be
 considered as +2s, since you're the domain expert and you are trusted to
 know enough of the rest of the code. That model works quite well with
 oslo-incubator, without needing a separate branch for every incubated API.

While we don't have a MAINTAINERS file, I feel that we do this for Nova
today.  I do not expect everyone on nova-core to be an expert across the
whole tree.  Part of being on the core team is a trust in your reviews
that you would only +2 stuff that you are comfortable with.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015370.html

-- 
Russell Bryant



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-13 Thread Christopher Yeoh
On Sat, 12 Oct 2013 23:12:26 +1300
Robert Collins robe...@robertcollins.net wrote:
 On 12 October 2013 21:35, Christopher Yeoh cbky...@gmail.com wrote:
  On Fri, 11 Oct 2013 08:27:54 -0700
  Dan Smith d...@danplanet.com wrote:
 
 
 A fairly fundamental thing in SOA architectures - which we have here -
 is to make all changes backwards compatibly, it's pretty easy if
 you're in the habit of it - there's only a handful of basic primitives
 around evolving APIs gracefully - and it results in a much smoother
 deployment story - and ultimately thats what we're aiming at.

I think that approach is fine for external APIs, where we want
stability. But for internal APIs, where we don't seek to provide
that sort of guarantee, there is a benefit to being able to do
major reworking of code without having to worry about backwards
compatibility and the cruft that comes with providing it.
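
For illustration, a minimal Python sketch of the kind of primitives Robert
alludes to (the names are made up, not an actual driver signature): new
parameters arrive with safe defaults and unknown ones are tolerated, so old
and new callers keep working across a transition:

    def attach_volume(connection_info, instance, mountpoint,
                      encryption=None, **kwargs):
        # 1. A new capability arrives as a keyword with a safe default, so
        #    callers that predate it are unaffected.
        if encryption is not None:
            pass  # handle the new case here

        # 2. Arguments this version does not know about yet are tolerated
        #    rather than raising TypeError, so the callee can be upgraded
        #    before (or after) its callers.
        if kwargs:
            print("ignoring unrecognised arguments: %s" % sorted(kwargs))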

Chris



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-13 Thread Christopher Yeoh
On Sat, 12 Oct 2013 09:30:30 -0700
Dan Smith d...@danplanet.com wrote:

  If the idea is to gate with nova-extra-drivers this could lead to a
  rather painful process to change the virt driver API. When all the
  drivers are in the same tree all of them can be updated at the same
  time as the infrastructure. 
 
 Right, and I think if we split those drivers out, then we do *not*
 gate on them for the main tree. It's asymmetric, which means
 potentially more trouble for the maintainers of the extra drivers.
 However, as has been said, we *want* the drivers in the tree as we
 have them now. Being moved out would be something the owners of a
 driver would choose in order to achieve a faster pace of development,
 with the consequence of having to play catch-up if and when we
 change the driver API.

If that's what the owners of the driver want to do then I've no problem
with supporting that approach. But I very much think that we should aim
to have drivers integrated into the Nova tree as they mature so we can
gate on them. Or if not in the tree then at least have a system that
supports developing in a way that makes gating on them possible without
the downside pains of not being able to change internal APIs easily. 

Chris.



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-13 Thread Christopher Yeoh
On Sat, 12 Oct 2013 15:20:44 -0700
Joe Gordon joe.gord...@gmail.com wrote:
 
 Once again you raise the issue of bug triage and prioritization of
 reviews (and blueprints), so help us fix that!  This isn't a virt
 driver only issue though.
 
 The issues you originally raise are only incidentally related to virt
 drivers and hyper-v.  The same issues can be brought up as you point
 out by any sub-project (scheduling, APIs, DB, etc).  So a fix for
 only virt drivers hardly sounds like an appropriate solution.

+1

I see there is a session submitted for the summit which is meant to
specifically cover the future of compute drivers:
http://summit.openstack.org/cfp/details/4

But I'm wondering if it would be better to generalise this to one where we
can discuss more broadly the issues around bug triage, review
prioritisation and feature planning (e.g. the risk of aiming to merge
major features in H3/I3) - what people can do to help the situation and
what tools might help make reviewers more effective.

Chris



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-13 Thread Alessandro Pilotti


On Oct 13, 2013, at 14:54, Christopher Yeoh cbky...@gmail.com wrote:

 On Sat, 12 Oct 2013 09:30:30 -0700
 Dan Smith d...@danplanet.com wrote:
 
 If the idea is to gate with nova-extra-drivers this could lead to a
 rather painful process to change the virt driver API. When all the
 drivers are in the same tree all of them can be updated at the same
 time as the infrastructure. 
 
 Right, and I think if we split those drivers out, then we do *not*
 gate on them for the main tree. It's asymmetric, which means
 potentially more trouble for the maintainers of the extra drivers.
 However, as has been said, we *want* the drivers in the tree as we
 have them now. Being moved out would be something the owners of a
 driver would choose in order to achieve a faster pace of development,
 with the consequence of having to play catch-up if and when we
 change the driver API.
 
 If that's what the owners of the driver want to do then I've no problem
 with supporting that approach. But I very much think that we should aim
 to have drivers integrated into the Nova tree as they mature so we can
 gate on them. Or if not in the tree then at least have a system that
 supports developing in a way that makes gating on them possible without
 the downside pains of not being able to change internal APIs easily. 
 

As far as the driver interface's stability is concerned, I don't see it as a 
major issue as long as Nova and the driver devs coordinate the effort.
Besides that, having a versioned, stable driver interface wouldn't IMHO be such 
a hassle, but as I wrote, this is the least of our problems.


 Chris.
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Christopher Yeoh
On Fri, 11 Oct 2013 08:27:54 -0700
Dan Smith d...@danplanet.com wrote:
 
 Agreed, a stable virt driver API is not feasible or healthy at this
 point, IMHO. However, it doesn't change that much as it is. I know
 I'll be making changes to virt drivers in the coming cycle due to
 objects and I have no problem submitting the corresponding changes to
 the nova-extra-drivers tree for those drivers alongside any that go
 for the main one.

If the idea is to gate with nova-extra-drivers this could lead to a
rather painful process to change the virt driver API. When all the
drivers are in the same tree all of them can be updated at the same
time as the infrastructure. 

If they are in separate trees and Nova gates on nova-extra-drivers then
at least temporarily a backwards-compatible API would have to remain so
that the nova-extra-drivers tests still pass. The changes would then be
applied to nova-extra-drivers, and finally a third changeset would remove
the backwards-compatible code. 

We see this in tempest/nova or tempest/cinder occasionally (not often
as the APIs are stable) and it's not very pretty. Ideally we'd be able to
link two changesets for different projects so they can be processed as
one. But without that ability I think splitting any drivers out and
continuing to gate on them would be bad.

Chris



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
 If the idea is to gate with nova-extra-drivers this could lead to a
 rather painful process to change the virt driver API. When all the
 drivers are in the same tree all of them can be updated at the same
 time as the infrastructure. 

Right, and I think if we split those drivers out, then we do *not* gate
on them for the main tree. It's asymmetric, which means potentially more
trouble for the maintainers of the extra drivers. However, as has been
said, we *want* the drivers in the tree as we have them now. Being moved
out would be something the owners of a driver would choose in order to
achieve a faster pace of development, with the consequence of having to
play catch-up if and when we change the driver API.

Like I said, I'll be glad to submit patches to the extra tree in unison
with patches to the main tree to make some of the virt API changes that
will be coming soon, which should minimize the troubles.

I believe Alex has already said that he'd prefer the occasional catch-up
activities over what he's currently experiencing.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Tim Bell

From the user perspective, splitting off the projects seems to be focussing on 
the ease of commit compared to the final user experience. An 'extras' project 
without *strong* testing co-ordination with packagers such as SUSE and RedHat 
would end up with the consumers of the product facing the integration problems 
rather than having them resolved where they should be, within the OpenStack 
project itself.

I am sympathetic to the 'extra' drivers problem such as Hyper-V and powervm, 
but I do not feel the right solution is to split.

As CERN uses the Hyper-V driver (we have a dual KVM/Hyper-V approach), we want 
this configuration to be certified before it reaches us.

Assuming there is a summit session on how to address this, I can arrange for 
user representation in that session.

Tim



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
 From the user perspective, splitting off the projects seems to be 
 focussing on the ease of commit compared to the final user 
 experience.

I think what you describe is specifically the desire that originally
spawned the thread: making the merging of changes to the hyper-v driver
faster by having them not reviewed by the rest of the Nova team. It
seems to be what the hyper-v developers want, not necessarily what the
Nova team as a whole wants.

 An 'extras' project without *strong* testing co-ordination with
 packagers such as SUSE and RedHat would end up with the consumers of
 the product facing the integration problems rather than resolving
 where they should be, within the OpenStack project itself.

I don't think splitting out to -extras means that it loses strong
testing coordination (note that strong testing coordination does not
exist with the hyper-v driver at this point in time). Every patch to the
-extras tree could still be unit (and soon, integration) tested against
the current nova tree, using the proposed patch applied to the -extras
tree. It just means that a change against nova wouldn't trigger the
same, which is why the potential for catch up behavior would be required.

 I am sympathetic to the 'extra' drivers problem such as Hyper-V and 
 powervm, but I do not feel the right solution is to split.
 
 Assuming there is a summit session on how to address this, I can 
 arrange a user representation in that session.

Cool, I really think we're at the point where we know the advantages and
disadvantages of the various options and further face-to-face discussion
at the summit is what is going to move us to the next stage.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


 On 12.10.2013, at 20:04, Tim Bell tim.b...@cern.ch wrote:
 
 
 From the user perspective, splitting off the projects seems to be focussing 
 on the ease of commit compared to the final user experience. An 'extras' 
 project without *strong* testing co-ordination with packagers such as SUSE 
 and RedHat would end up with the consumers of the product facing the 
 integration problems rather than resolving where they should be, within the 
 OpenStack project itself.
 
 I am sympathetic to the 'extra' drivers problem such as Hyper-V and powervm, 
 but I do not feel the right solution is to split.
 
 As CERN uses the Hyper-V driver (we have a dual KVM/Hyper-V approach), we 
 want that this configuration is certified before it reaches us.
 
I don't see your point here. From any practical perspective, most of the Nova 
core review work in the sub-project areas consists of formal validation of the 
patches (beyond the basic pep8 / pylint checks done by Jenkins) or unit test 
requests, while 99% of the authoritative work on the patches is done by the 
de facto sub-project maintainers, simply because those are the people who know 
the domain. This wouldn't change with a separate project. It would actually 
improve. 

Informal certification, to call it that, ultimately comes from the users 
(including CERN, of course), not from the reviewers: in the end you (the users) 
are the ones running this stuff in production environments, filing bugs and 
asking for new features.

On the other hand, if by "extras" you mean a repo outside of OpenStack (the 
vendor repo suggested in previous replies in this thread), I totally agree, as 
in most cases it would move the project outside the focus of the largest part 
of the community.

 Assuming there is a summit session on how to address this, I can arrange a 
 user representation in that session.
 
 Tim
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 12.10.2013, at 20:22, Dan Smith d...@danplanet.com wrote:

 From the user perspective, splitting off the projects seems to be 
 focussing on the ease of commit compared to the final user 
 experience.
 
 I think what you describe is specifically the desire that originally
 spawned the thread: making the merging of changes to the hyper-v driver
 faster by having them not reviewed by the rest of the Nova team. It
 seems to be what the hyper-v developers want, not necessarily what the
 Nova team as a whole wants.
 
 An 'extras' project without *strong* testing co-ordination with
 packagers such as SUSE and RedHat would end up with the consumers of
 the product facing the integration problems rather than resolving
 where they should be, within the OpenStack project itself.
 
 I don't think splitting out to -extras means that it loses strong
 testing coordination (note that strong testing coordination does not
 exist with the hyper-v driver at this point in time). Every patch to the
 -extras tree could still be unit (and soon, integration) tested against
 the current nova tree, using the proposed patch applied to the -extras
 tree. It just means that a change against nova wouldn't trigger the
 same, which is why the potential for catch up behavior would be required.
 
 I am sympathetic to the 'extra' drivers problem such as Hyper-V and 
 powervm, but I do not feel the right solution is to split.
 
 Assuming there is a summit session on how to address this, I can 
 arrange a user representation in that session.
 
 Cool, I really think we're at the point where we know the advantages and
 disadvantages of the various options and further face-to-face discussion
 at the summit is what is going to move us to the next stage.
 

I agree. It looks like we are converging towards common ground. I'm summing it 
up here, including a few additional details, for the benefit of those who will 
not join us in HK (sorry, we'll party for you as well :-)):

1) All the drivers will still be part of Nova.

2) One official project (nova-drivers-incubator?) or more than one will be 
created for the purpose of supporting a leaner and faster development pace of 
the drivers.

3) Current driver sub-project teams will informally elect their maintainer(s), 
who will have +2a rights on the new project or on specific subtrees.

4) Periodically, code from the new project(s) must be merged into Nova. 
Obviously, only Nova core reviewers will have +2a rights here.
I propose to do it on scheduled days before every milestone, differentiated per 
driver to distribute the review effort (what about also having Nova core 
reviewers assigned to each driver? Dan was suggesting something similar some 
time ago).

5) All drivers will be treated equally and new features and bug fixes for 
master (except security ones) should land in the new project before moving to 
Nova.

6) CI gates for all drivers, once available, will be added to the new project 
as well. Only driver code with a CI gate will be merged in Nova (starting with 
the Icehouse release as we already discussed).

7) Active communication should be maintained between the Nova core team and the 
driver maintainers. This means something more than: "I wrote it on the ML, 
didn't you see it?" :-)

A couple of questions: will we keep version branches on the new project, or 
just master?

Will bug fixes for older releases be proposed to the incubator for the current 
release in development, and to Nova for past version branches?

Please correct me if I missed something!

Thanks,

Alessandro

 --Dan
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Bob Ball

From: Alessandro Pilotti [apilo...@cloudbasesolutions.com]
Sent: 12 October 2013 20:21
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Hyper-V] Havana status

 1) All the drivers will still be part of Nova.
 
 2) One official project (nova-drivers-incubator?) or more than one will be 
 created for
 the purpose of supporting a leaner and faster development pace of the drivers.

I still think that all drivers should be treated equally; if we are to create a 
separate repository for drivers then I think we should officially split the 
driver repository out, including KVM and XenAPI drivers.  Certainly the XenAPI 
team have experienced a very similar issue with the time it takes to get 
reviews in - although I fully accept it may be to a lesser degree than Hyper-V.

 3) Current driver sub-project teams will informally elect their maintainer(s) 
 which will
 have +2a rights on the new project or specific subtrees.

The more I've thought about it, the more I think we need common +2a rights 
across all drivers, to identify commonality before a one-big-drop, and not 
per-driver +2a's.  Perhaps if there were dedicated Nova driver core folk then 
the pace of driver development would be increased without sacrificing the good 
things we get by having people familiar with the expectations of the API, with 
how other drivers implement things, or with identifying code that should not be 
written in drivers but moved to Oslo or the main Nova repository for the good of 
everyone rather than the specific driver.

 4) Periodically, code from the new project(s) must be merged into Nova.
 Only Nova core reviewers will have obviously +2a rights here.
 I propose to do it on scheduled days before every milestone, differentiated 
 per
 driver to distribute the review effort (what about also having Nova core 
 reviewers
 assigned to each driver? Dan was suggesting something similar some time ago).

I don't think this is maintainable.  Assuming there is a high rate of change in 
the drivers, the number of changes that would likely need to be reviewed before 
each milestone could be huge and completely impossible to review - which could 
cause an even bigger issue.  I worry that if the Nova core reviewers aren't 
convinced by the code coming from this separate repository their choice would 
either be to reject the lot or just accept it without review.

 5) All drivers will be treated equally and new features and bug fixes for 
 master
 (except security ones) should land in the new project before moving to Nova.

Perhaps I don't understand this in relation to nova-drivers-incubator - but 
are you suggesting that new APIs are added to Nova, but their implementation is 
only added to nova-drivers-incubator until the scheduled day before the 
milestone, when the functionality can be moved into Nova?  If so, I'm not sure 
what the benefit of having any drivers in Nova at all is, since the expectation 
would be that you must always deploy the matching nova-drivers to get API 
compatibility.  Or are you suggesting that it is the developer's choice whether 
to push the new code to both repositories at the same time, or to wait for the 
big merge pre-milestone?

 6) CI gates for all drivers, once available, will be added to the new project 
 as
 well. Only drivers code with a CI gate will be merged in Nova (starting with 
 the
 Icehouse release as we already discussed).

I think we can all agree on this one - although I thought the Icehouse 
expectation was not a CI gate, but a unit test gate plus automated tests 
(possibly through an external system) posting review comments.  Having said 
that, I would be very happy with enforcing a CI gate for all drivers.

 7) Active communication should be maintained between the Nova core team
 and the drivers maintainers. This means something more than: I wrote it on 
 the
 ML didn't you see it? :-)

Definitely.  I'd suggest an IRC meeting - they are fun.

Bob


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
 4) Periodically, code from the new project(s) must be merged into Nova. 
 Only Nova core reviewers will have obviously +2a rights here.
 I propose to do it on scheduled days before every milestone, differentiated 
 per driver to distribute the review effort (what about also having Nova core 
 reviewers assigned to each driver? Dan was suggesting something similar some 
 time ago).

FWIW, this is not what I had intended. I think that if you want (or
need) to be in the extras tree, then that's where you are. Periodic
syncs generate extra work and add the previously mentioned confusion of
"which driver is the official/best one?"

I think that any driver that gets put into -extra gets removed from the
mainline nova tree. If that driver has full CI testing and wants to be
moved into the main tree, then that happens once.

Having commit rights to the extras tree and periodic nearly-unattended
or too-large-to-reasonably-review sync patches just sidesteps the
process. That gains you the recognition of being in the tree, without
having to undergo the aggressive review and participate in the planning
and coordination of the process that goes with it. That is NOT okay, IMHO.

Sorry if that was unclear with the previous discussion. I'm not sure who
else was thinking that those drivers would exist in both places, but I
definitely was not.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Joe Gordon
On Sat, Oct 12, 2013 at 12:21 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:



 On 12.10.2013, at 20:22, Dan Smith d...@danplanet.com wrote:

  From the user perspective, splitting off the projects seems to be
  focussing on the ease of commit compared to the final user
  experience.
 
  I think what you describe is specifically the desire that originally
  spawned the thread: making the merging of changes to the hyper-v driver
  faster by having them not reviewed by the rest of the Nova team. It
  seems to be what the hyper-v developers want, not necessarily what the
  Nova team as a whole wants.
 
  An 'extras' project without *strong* testing co-ordination with
  packagers such as SUSE and RedHat would end up with the consumers of
  the product facing the integration problems rather than resolving
  where they should be, within the OpenStack project itself.
 
  I don't think splitting out to -extras means that it loses strong
  testing coordination (note that strong testing coordination does not
  exist with the hyper-v driver at this point in time). Every patch to the
  -extras tree could still be unit (and soon, integration) tested against
  the current nova tree, using the proposed patch applied to the -extras
  tree. It just means that a change against nova wouldn't trigger the
  same, which is why the potential for catch up behavior would be
 required.
 
  I am sympathetic to the 'extra' drivers problem such as Hyper-V and
  powervm, but I do not feel the right solution is to split.
 
  Assuming there is a summit session on how to address this, I can
  arrange a user representation in that session.
 
  Cool, I really think we're at the point where we know the advantages and
  disadvantages of the various options and further face-to-face discussion
  at the summit is what is going to move us to the next stage.
 

 I agree. Looks like we are converging towards a common ground. I'm summing
 it up here, including a few additional details, for the benefit of who will
 not join us in HK (sorry, we'll party for you as well :-)):



This sounds like a very myopic solution to the issue you originally raised,
and I don't think it will solve the underlying issues.



Taking a step back, you originally raised a concern about how we prioritize
reviews with the havana-rc-potential tag.

In the past weeks we diligently marked bugs that are related to Havana
features with the havana-rc-potential tag, which at least for what Nova
is concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged for
a release or prioritised, there's no guarantee that anybody will take a
look at the patches in time for the release. Needless to say, this starts
to feel like a Kafka novel. :-) [1]

If the issue is just better bug triage and prioritizing reviews, help us do
that!

[2] shows the current status of your hyper-v havana-rc-potential bugs.
Currently there are only 7 bugs that have both tags.  Of those 7, 3 have no
pending patches to trunk, and one doesn't sound like it warrants a backport 
(https://bugs.launchpad.net/nova/+bug/1220256).

Looking at the remaining 4, one is marked as a WIP by you (
https://bugs.launchpad.net/nova/+bug/1231911
https://review.openstack.org/#/c/48645/), which leaves three patches for the
Nova team to review.  Three reviews open for a week doesn't sound like an
issue that warrants a whole new repository.

You went on to clarify your position.

I'm not putting into discussion how much and well you guys are working (I
actually firmly believe that you DO work very well), I'm just discussing
about the way in which blueprints and bugs get prioritised.

snip

On the other hand, to get our code reviewed and merged we are always
dependent on the good will and best effort of core reviewers that don't
necessarily know or care about specific driver, plugin or agent internals.
This leads to even longer review cycles, even considering that reviewers
are clearly doing their best to understand the patches, and we couldn't
be more thankful.

Best effort also has a very specific meaning: in Nova all the Havana
Hyper-V blueprints were marked as low priority (which can be translated
as: the only way to get them merged is to beg for reviews or maybe commit
them on day 1 of the release cycle and pray), while most of the Hyper-V
bugs had no priority at all (which can be translated as: make some noise on
the ML and IRC or nobody will care). :-)

This reality unfortunately applies to most of the sub-projects (not only
Hyper-V) and can IMHO be solved only by delegating more autonomy to the
sub-project teams on their specific area of competence across OpenStack as
a whole. Hopefully we'll manage to find a solution during the design summit,
as we are definitely not the only ones feeling this way, judging by
various threads on this ML. [3]


Once again you raise the issue of bug triage and prioritization of reviews
(and blueprints), so help us fix that!  This isn't 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 13.10.2013, at 01:26, Joe Gordon joe.gord...@gmail.com wrote:




On Sat, Oct 12, 2013 at 12:21 PM, Alessandro Pilotti apilo...@cloudbasesolutions.com wrote:


On 12.10.2013, at 20:22, Dan Smith d...@danplanet.com wrote:

 From the user perspective, splitting off the projects seems to be
 focussing on the ease of commit compared to the final user
 experience.

 I think what you describe is specifically the desire that originally
 spawned the thread: making the merging of changes to the hyper-v driver
 faster by having them not reviewed by the rest of the Nova team. It
 seems to be what the hyper-v developers want, not necessarily what the
 Nova team as a whole wants.

 An 'extras' project without *strong* testing co-ordination with
 packagers such as SUSE and RedHat would end up with the consumers of
 the product facing the integration problems rather than resolving
 where they should be, within the OpenStack project itself.

 I don't think splitting out to -extras means that it loses strong
 testing coordination (note that strong testing coordination does not
 exist with the hyper-v driver at this point in time). Every patch to the
 -extras tree could still be unit (and soon, integration) tested against
 the current nova tree, using the proposed patch applied to the -extras
 tree. It just means that a change against nova wouldn't trigger the
 same, which is why the potential for catch up behavior would be required.

 I am sympathetic to the 'extra' drivers problem such as Hyper-V and
 powervm, but I do not feel the right solution is to split.

 Assuming there is a summit session on how to address this, I can
 arrange a user representation in that session.

 Cool, I really think we're at the point where we know the advantages and
 disadvantages of the various options and further face-to-face discussion
 at the summit is what is going to move us to the next stage.


I agree. Looks like we are converging towards a common ground. I'm summing it 
up here, including a few additional details, for the benefit of those who will 
not join us in HK (sorry, we'll party for you as well :-)):


This sounds like a very myopic solution to the issue you originally raised, and 
I don't think it will solve the underlying issues.

The solution I just proposed was based on the feedback received on this thread, 
trying to make everybody happy, so if you find it myopic please be my guest 
and find a better one that suits all the different positions. :-)



Taking a step back, you originally raised a concern about how we prioritize 
reviews with the havana-rc-potential tag.

In the past weeks we diligently marked bugs that are related to Havana 
features with the havana-rc-potential tag, which, at least as far as Nova is 
concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged for a 
release or prioritised, there's no guarantee that anybody will take a look at 
the patches in time for the release. Needless to say, this starts to feel like 
a Kafka novel. :-) [1]

If the issue is just better bug triage and prioritizing reviews, help us do 
that!

[2] shows the current status of your hyper-v havana-rc-potential bugs. 
Currently there are only 7 bugs that have both tags.  Of those 7, 3 have no 
pending patches to trunk, and one doesn't sound like it warrants a back port 
(https://bugs.launchpad.net/nova/+bug/1220256).

Looking at the remaining 4, one is marked as a WIP by you 
(https://bugs.launchpad.net/nova/+bug/1231911 
https://review.openstack.org/#/c/48645/) which leaves three patches for the nova 
team to review.  Three reviews open for a week doesn't sound like an issue that 
warrants a whole new repository.


Sure, the volume of reviews is not the subject here. This is just the icing 
on the cake of something that has been going on for a while (see the Havana feature freeze).

You went on to clarify your position.

I'm not putting into discussion how much and well you guys are working (I 
actually firmly believe that you DO work very well), I'm just discussing about 
the way in which blueprints and bugs get prioritised.

snip

On the other hand, to get our code reviewed and merged we are always dependent 
on the good will and best effort of core reviewers that don't necessarily know 
or care about specific driver, plugin or agent internals. This leads to even 
longer review cycles, even considering that reviewers are clearly doing their 
best to understand the patches, and we couldn't be more thankful.

Best effort also has a very specific meaning: in Nova all the Havana Hyper-V 
blueprints were marked as low priority (which can be translated as: the only 
way to get them merged is to beg for reviews or maybe commit them on day 1 of 
the release cycle and pray), while most of the Hyper-V bugs had no priority at 
all (which can be translated as: make some noise on the ML and IRC or nobody 
will care). :-)

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 13.10.2013, at 01:09, Dan Smith d...@danplanet.com wrote:

 4) Periodically, code from the new project(s) must be merged into Nova. 
 Obviously, only Nova core reviewers will have +2a rights here.
 I propose to do it on scheduled days before every milestone, differentiated 
 per driver to distribute the review effort (what about also having Nova core 
 reviewers assigned to each driver? Dan was suggesting something similar some 
 time ago).
 
 FWIW, this is not what I had intended. I think that if you want (or
 need) to be in the extras tree, then that's where you are. Periodic
 syncs generate extra work and add the previously-mentioned confusion of
 which driver is the official/best one?
 
 I think that any driver that gets put into -extra gets removed from the
 mainline nova tree. If that driver has full CI testing and wants to be
 moved into the main tree, then that happens once.
 

extra sounds to me like a ghetto for drivers which are not good enough to 
stay in Nova. No thanks.

My suggestion in the previous email was just to also make happy those who wanted to 
keep the drivers in Nova.
At this point, based on your reply, why not a clear and simple 
nova-driver-hyperv project as Russell was initially suggesting? What's the 
practical difference from extra?

It'd be an official project, we wouldn't have to beg you for reviews, you wouldn't 
need to understand the Hyper-V internals, the community would still support it 
(definitely more than now), users would have TIMELY bug fixes and new features 
instead of this mess, the sun would shine, etc. etc.

As a side note, the stability of the driver's interface is IMO an irrelevant 
issue here compared to all the opposite drawbacks. 

 Having commit rights to the extras tree and periodic nearly-unattended
 or too-large-to-reasonably-review sync patches just sidesteps the
 process. That gains you the recognition of being in the tree, without
 having to undergo the aggressive review and participate in the planning
 and coordination of the process that goes with it. That is NOT okay, IMHO.
 
 Sorry if that was unclear with the previous discussion. I'm not sure who
 else was thinking that those drivers would exist in both places, but I
 definitely was not.
 
 --Dan
 



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Sean Dague

On 10/10/2013 08:43 PM, Tim Smith wrote:
snip

Again, I don't have any vested interest in this discussion, except that
I believe the concept of reviewer karma to be counter to both software
quality and openness. In this particular case it would seem that the
simplest solution to this problem would be to give one of the hyper-v
team members core reviewer status, but perhaps there are consequences to
that that elude me.


There are very deep consequences to that. The core team model, where you 
have 15 - 20 reviewers, but it only takes 2 to land code, only works 
when the core teams share a culture. This means they know, or are 
willing to learn, code outside their comfort zone. Will they catch all 
the bugs in that? nope. But code blindness hits everyone, and there are 
real implications for the overall quality and maintainability of a 
project as complicated as Nova if everyone only stays in their 
comfortable corner.


Also, from my experience in Nova, code contributions written by people 
that aren't regularly reviewing outside of their corner of the world are 
demonstrably lower quality than those who are. Reviewing code outside 
your specific area is also educational, gets you familiar with norms and 
idioms beyond what simple style checking handles, and makes you a better 
developer.


We need to all be caring about the whole. That culture is what makes 
OpenStack long term sustainable, and there is a reason that it is 
behavior that's rewarded with more folks looking at your proposed 
patches. When people only care about their corner world, and don't put 
in hours on keeping things whole, they balkanize and fragment.


Review bandwidth, and people working on core issues, are our most 
constrained resources. If teams feel they don't need to contribute 
there, because it doesn't directly affect their code, we end up with 
this - http://en.wikipedia.org/wiki/Tragedy_of_the_commons


So it's really crazy to call OpenStack less open by having a culture 
that encourages people to actually work and help on the common parts. 
It's good for the project, as it keeps us whole; it's good for everyone 
working on the project, because they learn about more parts of 
OpenStack, and how their part fits in with the overall system; and it 
makes everyone better developers from learning from each other, on both 
sides of the review line.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti

On Oct 11, 2013, at 14:15 , Sean Dague s...@dague.net
 wrote:

 On 10/10/2013 08:43 PM, Tim Smith wrote:
 snip
 Again, I don't have any vested interest in this discussion, except that
 I believe the concept of reviewer karma to be counter to both software
 quality and openness. In this particular case it would seem that the
 simplest solution to this problem would be to give one of the hyper-v
 team members core reviewer status, but perhaps there are consequences to
 that that elude me.
 
 There are very deep consequences to that. The core team model, where you have 
 15 - 20 reviewers, but it only takes 2 to land code, only works when the core 
 teams share a culture. This means they know, or are willing to learn, code 
 outside their comfort zone. Will they catch all the bugs in that? nope. But 
 code blindness hits everyone, and there are real implications for the overall 
 quality and maintainability of a project as complicated as Nova if everyone 
 only stays in their comfortable corner.
 
 Also, from my experience in Nova, code contributions written by people that 
 aren't regularly reviewing outside of their corner of the world are 
 demonstrably lower quality than those who are. Reviewing code outside your 
 specific area is also educational, gets you familiar with norms and idioms 
 beyond what simple style checking handles, and makes you a better developer.


There's IMO a practical contradiction here: most people contribute code and do 
reviews on partitioned areas of OpenStack only. For example, Nova devs rarely 
commit to Neutron, so you can say that for a Nova dev the comfort zone is 
Nova, but by your description, a fair amount of time should be spent in 
reviewing and learning all the OpenStack projects' code, unless you want to 
limit the scope of this discussion to Nova, which does not make much sense when 
you work on a whole technology layer like in our case.

On the contrary, as an example, our job as driver/plugin/agent maintainers 
brings us into contact with all the major projects' codebases, with the result 
that we are learning a lot from each of them. Besides that, obviously a 
driver/plugin/agent dev normally spends time learning how similar solutions are 
implemented for other technologies already in the tree, which leads to further 
improvement in the code due to the same knowledge sharing that you are 
referring to.

 
 We need to all be caring about the whole. That culture is what makes 
 OpenStack long term sustainable, and there is a reason that it is behavior 
 that's rewarded with more folks looking at your proposed patches. When people 
 only care about their corner world, and don't put in hours on keeping things 
 whole, they balkanize and fragment.
 
 Review bandwidth, and people working on core issues, are our most constrained 
 resources. If teams feel they don't need to contribute there, because it 
 doesn't directly affect their code, we end up with this - 
 http://en.wikipedia.org/wiki/Tragedy_of_the_commons
 

This reminds me of how peer-to-peer sharing technologies work. Why don't we 
introduce some ratios, for example: for each commit that a dev does, at least 2-3 
reviews of other people's code are required? Enforcing it wouldn't be that 
complicated. The negative part is that it might lead to low-quality or fake 
reviews, but at least those would be easy to spot in the stats.
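To make that concrete, a minimal sketch of such a ratio check could look like the following; it assumes Gerrit's standard REST API on review.openstack.org, and the 90-day window, the result limit and the 2-reviews-per-commit threshold are arbitrary illustration values, not an existing tool or policy:

    # Hypothetical sketch: compare a contributor's commit and review counts
    # over the last N days using Gerrit's REST API (pagination is ignored
    # for brevity).
    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def _query(q):
        resp = requests.get("%s/changes/" % GERRIT, params={"q": q, "n": 500})
        resp.raise_for_status()
        # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI.
        return json.loads(resp.text.split("\n", 1)[1])

    def review_commit_counts(email, days=90):
        commits = len(_query("owner:%s -age:%dd" % (email, days)))
        reviews = len(_query("reviewer:%s -owner:%s -age:%dd" % (email, email, days)))
        return reviews, commits

    if __name__ == "__main__":
        reviews, commits = review_commit_counts("dev@example.com")  # placeholder address
        print("reviews=%d commits=%d" % (reviews, commits))
        if commits and reviews < 2 * commits:
            print("below the suggested 2-3 reviews per commit")

Something like this could run as a periodic job and simply publish the numbers, leaving the fake-review concern above to human judgement.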

One thing is sure: review bandwidth is the obvious bottleneck in today's 
OpenStack status. If we don't find a reasonably quick solution, the more 
OpenStack grows, the more complicated it will become, leading to even worse 
response times in merging bug fixes and limiting the new features that each new 
version can bring, which is IMO the negation of what a vital and dynamic 
project should be.

From what I see on the Linux kernel project, which can be considered a good 
source of inspiration when it comes to review bandwidth optimization in a 
large project, they have a pyramidal structure in the way the git 
repo origins are interconnected. This looks pretty similar to what we are 
proposing: teams work on specific areas with a topic maintainer and somebody 
merges their work at a higher level, with Linus ultimately managing the root 
repo. 

OpenStack is organized differently: there are lots of separate projects (Nova, 
Neutron, Glance, etc.) instead of a single one (which is a good thing), but I 
believe that a similar approach can be applied. Specific contributors can be 
nominated core reviewers on specific directories in the tree only, and that 
would immediately scale the core review bandwidth. 

As a practical example for Nova: in our case that would simply include the 
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
projects haven't hit the review bandwidth limits as heavily as Nova has yet, but 
the same concept could be applied everywhere. 
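Just to illustrate what a directory-scoped check could look like mechanically (purely a sketch; nothing like this exists in Gerrit or Nova today, and the allowed paths are simply the Hyper-V example above):

    # Hypothetical sketch: decide whether a commit only touches the subtrees
    # a driver team would be allowed to approve under this proposal.
    import subprocess

    ALLOWED_PREFIXES = ("nova/virt/hyperv/", "nova/tests/virt/hyperv/")

    def touched_files(commit="HEAD"):
        # List the files changed by the given commit.
        out = subprocess.check_output(
            ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", commit])
        return [line for line in out.decode().splitlines() if line]

    def within_driver_scope(commit="HEAD"):
        files = touched_files(commit)
        return bool(files) and all(f.startswith(ALLOWED_PREFIXES) for f in files)

    if __name__ == "__main__":
        if within_driver_scope():
            print("change stays inside the driver subtrees; driver cores could +2")
        else:
            print("change touches shared code; normal Nova core review applies")

A change that stays inside those paths could then be fast-tracked by the driver maintainers, while anything touching shared code would keep the normal review path.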

Alessandro




 So it's really crazy to call OpenStack less open by having a culture that 
 encourages people to actually work and help on the common parts. 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate projects 
 (Nova, Neutron, Glance, etc.) instead of a single one (which is a good thing), 
 but I believe that a similar approach can be applied. Specific contributors 
 can be nominated core reviewers on specific directories in the tree only, 
 and that would immediately scale the core review bandwidth. 
 
 As a practical example for Nova: in our case that would simply include the 
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
 projects didn't hit the review bandwidth limits yet as heavily as Nova did, 
 but the same concept could be applied everywhere. 

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm completely
open to going that route on a per-driver basis.  Thoughts?

For the main tree, I think we already do something like this in
practice.  Core reviewers look for feedback (+1/-1) from experts of that
code and take it heavily into account when doing the review.

-- 
Russell Bryant



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 17:17 , Russell Bryant rbry...@redhat.com wrote:

On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
OpenStack is organized differently: there are lots of separate projects (Nova, 
Neutron, Glance, etc.) instead of a single one (which is a good thing), but I 
believe that a similar approach can be applied. Specific contributors can be 
nominated core reviewers on specific directories in the tree only, and that 
would immediately scale the core review bandwidth.

As a practical example for Nova: in our case that would simply include the 
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv. Other 
projects didn't hit the review bandwidth limits yet as heavily as Nova did, but 
the same concept could be applied everywhere.

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm completely
open to going that route on a per-driver basis.  Thoughts?

Well, as long as it is an official project this would definitely make sense, at 
least for Hyper-V.
Stability of the driver's interface has never been a particular issue 
preventing this from happening, IMO.
We should think about how to handle the testing, considering that we are 
getting ready with the CI gate.

For the main tree, I think we already do something like this in
practice.  Core reviewers look for feedback (+1/-1) from experts of that
code and take it heavily into account when doing the review.


There's only one small issue with the current approach.

Current reviews require:

+1 de facto driver X maintainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X maintainer
+2A another driver X maintainer or a core reviewer

This would make a big difference in terms of review time.

Thanks,

Alessandro


--
Russell Bryant



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 10:41 AM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 17:17 , Russell Bryant rbry...@redhat.com wrote:
 
 On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate
 projects (Nova, Neutron, Glance, etc.) instead of a single one (which
 is a good thing), but I believe that a similar approach can be
 applied. Specific contributors can be nominated core reviewers on
 specific directories in the tree only, and that would
 immediately scale the core review bandwidth.

 As a practical example for Nova: in our case that would simply
 include the following subtrees: nova/virt/hyperv and
 nova/tests/virt/hyperv. Other projects didn't hit the review
 bandwidth limits yet as heavily as Nova did, but the same concept
 could be applied everywhere.

 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?
 
 Well, as long as it is an official project this would definitely make
 sense, at least for Hyper-V.

What I envision here would be another repository/project under the
OpenStack Compute program.  You can sort of look at it as similar to
python-novaclient, even though that project uses the same review team
right now.

So, that means it would also be a separate release deliverable.  It
wouldn't be integrated into the main nova release.  They could be
released at the same time, though.

We could either have a single repo:

openstack/nova-extra-drivers

or a repo per driver that wants to split:

openstack/nova-driver-hyperv

The latter is a bit more to keep track of, but might make the most sense
so that we can have a review team per driver.

 Stability of the driver's interface has never been a particular issue
 preventing this from happening, IMO.

Note that I would actually *not* want to necessarily guarantee a stable
API here for master.  We should be able to mitigate sync issues with CI.

 We should think about how to handle the testing, considering that we are
 getting ready with the CI gate.

Hopefully the testing isn't too much different.  It's just grabbing the
bits from another repo.
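As a very rough sketch of what that could mean for a check job in a split driver repository (the repository layout, the nova URL and the test path below are assumptions for illustration, nothing that exists today):

    # Hypothetical check-job sketch for an out-of-tree driver repo: fetch
    # current nova master, install it, then run the driver's own unit tests
    # against it, so every driver patch is tested against the nova tip.
    import subprocess

    NOVA_GIT = "https://git.openstack.org/openstack/nova"  # placeholder URL

    def run(*cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    def main():
        run("git", "clone", "--depth", "1", NOVA_GIT, "nova-master")
        run("pip", "install", "-e", "./nova-master")
        # Install the proposed driver change (the checkout this job runs in).
        run("pip", "install", "-e", ".")
        # Unit tests first; an integration/tempest run would follow the same
        # pattern once the CI gate mentioned above is in place.
        run("python", "-m", "unittest", "discover", "tests")

    if __name__ == "__main__":
        main()

Combined with the "catch up" runs mentioned earlier in the thread for changes landing in nova itself, that would keep the two trees reasonably in sync.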

Also note that I have a session for the summit that is intended to talk
about all of this, as well:

http://summit.openstack.org/cfp/details/4

-- 
Russell Bryant



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Bob Ball
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 11 October 2013 15:18
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
  As a practical example for Nova: in our case that would simply include the
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?

I think that all drivers that are officially supported must be treated in the 
same way.

If we are going to split out drivers into a separate but still official 
repository then we should do so for all drivers.  This would allow Nova core 
developers to focus on the architectural side rather than how each individual 
driver implements the API that is presented.

Of course, with the current system it is much easier for a Nova core to 
identify and request a refactor or generalisation of code written in one or 
multiple drivers so they work for all of the drivers - we've had a few of those 
with XenAPI where code we have written has been pushed up into Nova core rather 
than the XenAPI tree.

Perhaps one approach would be to re-use the incubation approach we have; if 
drivers want to have the fast-development cycles uncoupled from core reviewers 
then they can be moved into an incubation project.  When there is a suitable 
level of integration (and automated testing to maintain it of course) then they 
can graduate.  I imagine at that point there will be more development of new 
features which affect Nova in general (to expose each hypervisor's strengths), 
so there would be fewer cases of them being restricted just to the virt/* tree.

Bob



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 18:02 , Russell Bryant rbry...@redhat.com
 wrote:

 On 10/11/2013 10:41 AM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 17:17 , Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com
 wrote:
 
 On 10/11/2013 09:02 AM, Alessandro Pilotti wrote:
 OpenStack is organized differently: there are lots of separate
 projects (Nova, Neutron, Glance, etc.) instead of a single one (which
 is a good thing), but I believe that a similar approach can be
 applied. Specific contributors can be nominated core reviewers on
 specific directories in the tree only, and that would
 immediately scale the core review bandwidth.
 
 As a practical example for Nova: in our case that would simply
 include the following subtrees: nova/virt/hyperv and
 nova/tests/virt/hyperv. Other projects didn't hit the review
 bandwidth limits yet as heavily as Nova did, but the same concept
 could be applied everywhere.
 
 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?
 
 Well, as long as it is an official project this would definitely make
 sense, at least for Hyper-V.
 
 What I envision here would be another repository/project under the
 OpenStack Compute program.  You can sort of look at it as similar to
 python-novaclient, even though that project uses the same review team
 right now.
 
 So, that means it would also be a separate release deliverable.  It
 wouldn't be integrated into the main nova release.  They could be
 released at the same time, though.
 
 We could either have a single repo:
 
openstack/nova-extra-drivers
 
 or a repo per driver that wants to split:
 
openstack/nova-driver-hyperv
 

+1 for openstack/nova-driver-hyperv

That would be perfect. Fast bug fixes, independent reviewers and autonomous 
blueprint management.

Our users would cry for joy at such a solution. :-)


 The latter is a bit more to keep track of, but might make the most sense
 so that we can have a review team per driver.
 
 Stability of the driver's interface has never been a particular issue
 preventing this from happening, IMO.
 
 Note that I would actually *not* want to necessarily guarantee a stable
 API here for master.  We should be able to mitigate sync issues with CI.
 
 We should think about how to handle the testing, considering that we are
 getting ready with the CI gate.
 
 Hopefully the testing isn't too much different.  It's just grabbing the
 bits from another repo.
 
 Also note that I have a session for the summit that is intended to talk
 about all of this, as well:
 
http://summit.openstack.org/cfp/details/4

Sure, looking forward to meeting you there!

 
 -- 
 Russell Bryant
 


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 We could either have a single repo:
 
 openstack/nova-extra-drivers

This would be my preference for sure, just from the standpoint of
additional release complexity otherwise. I know it might complicate how
the core team works, but presumably we could get away with just having
driver maintainers with core abilities on the whole project, especially
given that most of them care only about their own driver.

 Note that I would actually *not* want to necessarily guarantee a stable
 API here for master.  We should be able to mitigate sync issues with CI.

Agreed, a stable virt driver API is not feasible or healthy at this
point, IMHO. However, it doesn't change that much as it is. I know I'll
be making changes to virt drivers in the coming cycle due to objects and
I have no problem submitting the corresponding changes to the
nova-extra-drivers tree for those drivers alongside any that go for the
main one.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 I think that all drivers that are officially supported must be
 treated in the same way.

Well, we already have multiple classes of support due to the various
states of testing that the drivers have.

 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This would
 allow Nova core developers to focus on the architectural side rather
 than how each individual driver implements the API that is
 presented.

I really don't want to see KVM and XenAPI pulled out of the main tree,
FWIW. I think we need a critical mass of the (currently) most used
drivers there to be the reference platform. Going the route of kicking
everything out of tree means the virt driver API necessarily needs to
be a stable thing that the others can depend on, and I definitely don't
want to see that happen at this point.

The other thing is, this is driven mostly by the desire of some driver
maintainers to be able to innovate in their driver without the
restrictions of being in the main tree. That doesn't rule out that once they
reach a level of completeness they might want back into the main
tree, while other new drivers continue to be cultivated in the faster-moving
extra drivers tree.

 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.

Yeah, I think this makes sense. New drivers from here on out start in
the extra drivers tree and graduate to the main tree. It sounds like
Hyper-V will move back there to achieve a fast pace of development for a
while, which I think is fine. It will bring with it some additional
review overhead when the time comes to bring it back into the main nova
tree, but hopefully we can plan for that and make it happen swiftly.

Also, we have a looming deadline of required CI integration for the
drivers, so having an extra drivers tree gives us a very good landing
spot and an answer to the question of "what if we can't satisfy the CI
requirements?"

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:

  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply include
 the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm completely
  open to going that route on a per-driver basis.  Thoughts?

 I think that all drivers that are officially supported must be treated in
 the same way.

 If we are going to split out drivers into a separate but still official
 repository then we should do so for all drivers.  This would allow Nova
 core developers to focus on the architectural side rather than how each
 individual driver implements the API that is presented.

 Of course, with the current system it is much easier for a Nova core to
 identify and request a refactor or generalisation of code written in one or
 multiple drivers so they work for all of the drivers - we've had a few of
 those with XenAPI where code we have written has been pushed up into Nova
 core rather than the XenAPI tree.

 Perhaps one approach would be to re-use the incubation approach we have;
 if drivers want to have the fast-development cycles uncoupled from core
 reviewers then they can be moved into an incubation project.  When there is
 a suitable level of integration (and automated testing to maintain it of
 course) then they can graduate.  I imagine at that point there will be more
 development of new features which affect Nova in general (to expose each
 hypervisor's strengths), so there would be fewer cases of them being
 restricted just to the virt/* tree.

 Bob



I've thought about this in the past, but always come back to a couple of
things.

Being a community driven project, if a vendor doesn't want to participate
in the project then why even pretend (i.e. having their own project/repo,
reviewers, etc.)?  Just post your code up in your own github and let people
that want to use it pull it down.  If it's a vendor project, then that's
fine; have it be a vendor project.

In my opinion pulling out and leaving things up to the vendors as is being
described has significant negative impacts.  Not the least of which is
consistency in behaviors.  On the Cinder side, the core team spends the
bulk of their review time looking at things like consistent behaviors,
missing features or paradigms that are introduced that break other
drivers.  For example looking at things like, are all the base features
implemented, do they work the same way, are we all using the same
vocabulary, will it work in a multi-backend environment.  In addition,
it's rare that a vendor implements a new feature in their driver that
doesn't impact/touch the core code somewhere.

Having drivers be a part of the core project is very valuable in my
opinion.  It's also very important in my view that the core team for Nova
actually has some idea and notion of what's being done by the drivers that
it's supporting.  Moving everybody further and further into additional
private silos seems like a very bad direction to me, it makes things like
knowledge transfer, documentation and worst of all bug triaging extremely
difficult.

I could go on and on here, but nobody likes to hear anybody go on a rant.
 I would just like to see if there are alternatives for improving the
situation other than fragmenting the projects.

John


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 18:36 , Dan Smith d...@danplanet.com
 wrote:

 I think that all drivers that are officially supported must be
 treated in the same way.
 
 Well, we already have multiple classes of support due to the various
 states of testing that the drivers have.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This would
 allow Nova core developers to focus on the architectural side rather
 than how each individual driver implements the API that is
 presented.
 
 I really don't want to see KVM and XenAPI pulled out of the main tree,
 FWIW. I think we need a critical mass of the (currently) most used
 drivers there to be the reference platform. Going the route of kicking
 everything out of tree means the virt driver API necessarily needs to
 be a stable thing that the others can depend on, and I definitely don't
 want to see that happen at this point.
 

I see libvirt/KVM treated as the reference driver, so it could make sense to 
leave it in Nova. 

My only request here is that we make sure that new driver features can land 
for other drivers without necessarily having them implemented for libvirt/KVM 
first. 

 The other thing is, this is driven mostly by the desire of some driver
 maintainers to be able to innovate in their driver without the
 restrictions of being in the main tree. That doesn't rule out that once they
 reach a level of completeness they might want back into the main
 tree, while other new drivers continue to be cultivated in the faster-moving
 extra drivers tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
 
 Yeah, I think this makes sense. New drivers from here on out start in
 the extra drivers tree and graduate to the main tree. It sounds like
 Hyper-V will move back there to achieve a fast pace of development for a
 while, which I think is fine. It will bring with it some additional
 review overhead when the time comes to bring it back into the main nova
 tree, but hopefully we can plan for that and make it happen swiftly.
 

I personally don't agree with this option, as it would create an "A class" 
version of the driver, supposedly mature, and a "B class" version, supposedly 
experimental, which would just confuse users.

It's not a matter of stability, as the code is already stable so I really don't 
see a point in the incubator approach. We already have our forks for 
experimental features without needing to complicate things more.
The code that we publish in the OpenStack repos is meant to be production ready.

As Dan was pointing out, merging the code back into Nova would require a review 
at some point in time of a huge patch that would send us straight back into 
review hell. No thanks!

The best option for us is to have a separate project (nova-driver-hyperv would 
be perfect) where we can handle blueprints and commit bug fixes independently, with 
no intention to merge it back into the Nova tree, just as I guess there's no 
reason to merge, say, python-novaclient. 

Any area that would require additional features in Nova (e.g. our notorious RDP 
blueprint :-) ) would anyway go through the Nova review process, while 
blueprints that implement a feature already present in Nova (e.g. 
live-snapshots) can be handled entirely independently.

This approach would save the Nova reviewers quite some precious review 
bandwidth and give us the required headroom to innovate and fix bugs in a 
timely manner, bringing the best OpenStack experience to our users.


 Also, we have a looming deadline of required CI integration for the
 drivers, so having an extra drivers tree gives us a very good landing
 spot and answer to the question of what if we can't satisfy the CI
 requirements?
 

I agree on this point: purgatory -ahem-, I mean, incubation, for the drivers 
that will not have a CI ready in time for Icehouse.

Alessandro


 --Dan
 




Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 12:04 PM, John Griffith wrote:
 
 
 
 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:
 
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?
 
 I think that all drivers that are officially supported must be
 treated in the same way.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural side
 rather than how each individual driver implements the API that is
 presented.
 
 Of course, with the current system it is much easier for a Nova core
 to identify and request a refactor or generalisation of code written
 in one or multiple drivers so they work for all of the drivers -
 we've had a few of those with XenAPI where code we have written has
 been pushed up into Nova core rather than the XenAPI tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
  I imagine at that point there will be more development of new
 features which affect Nova in general (to expose each hypervisor's
 strengths), so there would be fewer cases of them being restricted
 just to the virt/* tree.
 
 Bob
 
 
 
 I've thought about this in the past, but always come back to a couple of
 things.
 
 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own github
 and let people that want to use it pull it down.  If it's a vendor
 project, then that's fine; have it be a vendor project.
 
 In my opinion pulling out and leaving things up to the vendors as is
 being described has significant negative impacts.  Not the least of
 which is consistency in behaviors.  On the Cinder side, the core team
 spends the bulk of their review time looking at things like consistent
 behaviors, missing features or paradigms that are introduced that
 break other drivers.  For example looking at things like, are all the
 base features implemented, do they work the same way, are we all using
 the same vocabulary, will it work in a multi-backend environment.  In
 addition, it's rare that a vendor implements a new feature in their
 driver that doesn't impact/touch the core code somewhere.
 
 Having drivers be a part of the core project is very valuable in my
 opinion.  It's also very important in my view that the core team for
 Nova actually has some idea and notion of what's being done by the
 drivers that it's supporting.  Moving everybody further and further into
 additional private silos seems like a very bad direction to me, it makes
 things like knowledge transfer, documentation and worst of all bug
 triaging extremely difficult.
 
 I could go on and on here, but nobody likes to hear anybody go on a
 rant.  I would just like to see if there are other alternatives to
 improving the situation than fragmenting the projects.

Really good points here.  I'm glad you jumped in, because the underlying
issue here applies well to other projects (especially Cinder and Neutron).

So, the alternative to the split official repos is to either:

1) Stay in tree, participate, and help share the burden of maintenance
of the project

or

2) Truly be a vendor project, and to make that more clear, split out
into your own (not nova) repository.

#2 really isn't so bad if that's what you want, and it honestly sounds
like this may be the case for the Hyper-V team.  You could still be very
close to the OpenStack community by using the same tools.  Use
stackforge for the code (same gerrit, jenkins, etc), and have your own
launchpad project.  If you go that route, you get all of the control you
want, but the project

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Dan Smith
 My only request here is that we make sure that new driver
 features can land for other drivers without necessarily having them
 implemented for libvirt/KVM first.

We've got lots of things supported by the XenAPI drivers that aren't
supported by libvirt, so I don't think this is a problem even today.

 I personally don't agree with this option, as it would create an "A
 class" version of the driver, supposedly mature, and a "B class" version,
 supposedly experimental, which would just confuse users.

If you're expecting that there would be two copies of the driver, one in
the main tree and one in the extra drivers tree, that's not what I was
suggesting.

 The best option for us is to have a separate project
 (nova-driver-hyperv would be perfect) where we can handle blueprints
 and commit bug fixes independently, with no intention to merge it back
 into the Nova tree, just as I guess there's no reason to merge,
 say, python-novaclient.

So if that's really the desire, why not go John's route and just push
your official version of the driver to a github repo and be done with it?

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Joe Gordon
On Fri, Oct 11, 2013 at 6:02 AM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:


 On Oct 11, 2013, at 14:15 , Sean Dague s...@dague.net
  wrote:

  On 10/10/2013 08:43 PM, Tim Smith wrote:
  snip
  Again, I don't have any vested interest in this discussion, except that
  I believe the concept of reviewer karma to be counter to both software
  quality and openness. In this particular case it would seem that the
  simplest solution to this problem would be to give one of the hyper-v
  team members core reviewer status, but perhaps there are consequences to
  that that elude me.
 
  There are very deep consequences to that. The core team model, where you
 have 15 - 20 reviewers, but it only takes 2 to land code, only works when
 the core teams share a culture. This means they know, or are willing to
 learn, code outside their comfort zone. Will they catch all the bugs in
 that? nope. But code blindness hits everyone, and there are real
 implications for the overall quality and maintainability of a project as
 complicated as Nova if everyone only stays in their comfortable corner.
 
  Also, from my experience in Nova, code contributions written by people
 that aren't regularly reviewing outside of their corner of the world are
 demonstrably lower quality than those who are. Reviewing code outside your
 specific area is also educational, gets you familiar with norms and idioms
 beyond what simple style checking handles, and makes you a better developer.


 There's IMO a practical contradiction here: most people contribute code
 and do reviews on partitioned areas of OpenStack only. For example, Nova
 devs rarely commit to Neutron, so you can say that for a Nova dev the
 comfort zone is Nova, but by your description, a fair amount of time
 should be spent in reviewing and learning all the OpenStack projects' code,
 unless you want to limit the scope of this discussion to Nova, which does
 not make much sense when you work on a whole technology layer like in our
 case.

 On the contrary, as an example, our job as driver/plugin/agent maintainers
 brings us into contact with all the major projects' codebases, with the result
 that we are learning a lot from each of them. Besides that, obviously a
 driver/plugin/agent dev normally spends time learning how similar solutions
 are implemented for other technologies already in the tree, which leads to
 further improvement in the code due to the same knowledge sharing that you
 are referring to.

 
  We need to all be caring about the whole. That culture is what makes
 OpenStack long term sustainable, and there is a reason that it is behavior
 that's rewarded with more folks looking at your proposed patches. When
 people only care about their corner world, and don't put in hours on
 keeping things whole, they balkanize and fragment.
 
  Review bandwidth, and people working on core issues, are our most
 constrained resources. If teams feel they don't need to contribute there,
 because it doesn't directly affect their code, we end up with this -
 http://en.wikipedia.org/wiki/Tragedy_of_the_commons
 

 This reminds me of how peer-to-peer sharing technologies work. Why
 don't we introduce some ratios, for example: for each commit that a dev does, at
 least 2-3 reviews of other people's code are required? Enforcing it
 wouldn't be that complicated. The negative part is that it might lead to
 low-quality or fake reviews, but at least those would be easy to spot in
 the stats.

 One thing is sure: review bandwidth is the obvious bottleneck in today's
 OpenStack status. If we don't find a reasonably quick solution, the more
 OpenStack grows, the more complicated it will become, leading to even worse
 response times in merging bug fixes and limiting the new features that each
 new version can bring, which is IMO the negation of what a vital and
 dynamic project should be.


Yes, review bandwidth is a bottleneck, and there are some
organizational changes that may help, at the risk of changing our entire
review process and culture (which perhaps we should consider?).  The
easiest solution, though, is for everyone to do more reviews. With just one review a
day you can make the whole project much stronger.  Complaining about the
review bandwidth issue while only doing 33 reviews in all of OpenStack [1]
in the past 90 days (I don't mean to pick on you here, you are just
an example) doesn't seem right.

[1] http://www.russellbryant.net/openstack-stats/all-reviewers-90.txt



 From what I see on the Linux kernel project, which can be considered as a
 good source of inspiration when it comes to review bandwidth optimization
 in a large project, they have a pyramidal structure in the way in which the
 git repo origins are interconnected. This looks pretty similar to what we
 are proposing: teams work on specific areas with a topic maintainer and
 somebody merges their work at a higher level, with Linus ultimately
 managing the root repo.

 OpenStack is organized differently: there 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Matt Riedemann
I'd like to see the powervm driver fall into that first category.  We 
don't nearly have the rapid development that the hyper-v driver does, but 
we do have some out of tree stuff anyway simply because it hasn't landed 
upstream yet (DB2, config drive support for the powervm driver, etc), and 
maintaining that out of tree code is not fun.  So I definitely don't want 
to move out of tree.

Given that, I think at least I'm trying to contribute overall [1][2] by 
doing reviews outside my comfort zone, bug triage, fixing bugs when I can, 
and because we run tempest in house (with neutron-openvswitch) we find 
issues there that I get to push patches for.

Having said all that, it's moot for the powervm driver if we don't get the 
CI hooked up in Icehouse and I completely understand that so it's a top 
priority.


[1] 
http://stackalytics.com/?release=havana&metric=commits&project_type=openstack&module=&company=&user_id=mriedem
 

[2] 
https://review.openstack.org/#/q/reviewer:6873+project:openstack/nova,n,z 


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant rbry...@redhat.com
To: openstack-dev@lists.openstack.org, 
Date:   10/11/2013 11:33 AM
Subject:Re: [openstack-dev] [Hyper-V] Havana status



On 10/11/2013 12:04 PM, John Griffith wrote:
 
 
 
 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:
 
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and 
nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?
 
 I think that all drivers that are officially supported must be
 treated in the same way.
 
 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural side
 rather than how each individual driver implements the API that is
 presented.
 
 Of course, with the current system it is much easier for a Nova core
 to identify and request a refactor or generalisation of code written
 in one or multiple drivers so they work for all of the drivers -
 we've had a few of those with XenAPI where code we have written has
 been pushed up into Nova core rather than the XenAPI tree.
 
 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles uncoupled
 from core reviewers then they can be moved into an incubation
 project.  When there is a suitable level of integration (and
 automated testing to maintain it of course) then they can graduate.
  I imagine at that point there will be more development of new
 features which affect Nova in general (to expose each hypervisor's
 strengths), so there would be fewer cases of them being restricted
 just to the virt/* tree.
 
 Bob
 
 
 
 I've thought about this in the past, but always come back to a couple of
 things.
 
 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own github
 and let people that want to use it pull it down.  If it's a vendor
 project, then that's fine; have it be a vendor project.
 
 In my opinion pulling out and leaving things up to the vendors as is
 being described has significant negative impacts.  Not the least of
 which is consistency in behaviors.  On the Cinder side, the core team
 spends the bulk of their review time looking at things like consistent
 behaviors, missing features or paradigms that are introduced that
 break other drivers.  For example looking at things like, are all the
 base features implemented, do they work the same way, are we all using
 the same vocabulary, will it work in a multi-backend environment.  In
 addition, it's rare that a vendor implements a new feature in their
 driver that doesn't impact/touch the core code somewhere.
 
 Having

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On Oct 11, 2013, at 19:04 , John Griffith john.griff...@solidfire.com wrote:




On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 11 October 2013 15:18
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Hyper-V] Havana status

  As a practical example for Nova: in our case that would simply include the
 following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.

 If maintainers of a particular driver would prefer this sort of
 autonomy, I'd rather look at creating new repositories.  I'm completely
 open to going that route on a per-driver basis.  Thoughts?

I think that all drivers that are officially supported must be treated in the 
same way.

If we are going to split out drivers into a separate but still official 
repository then we should do so for all drivers.  This would allow Nova core 
developers to focus on the architectural side rather than how each individual 
driver implements the API that is presented.

Of course, with the current system it is much easier for a Nova core to 
identify and request a refactor or generalisation of code written in one or 
multiple drivers so they work for all of the drivers - we've had a few of those 
with XenAPI where code we have written has been pushed up into Nova core rather 
than the XenAPI tree.

Perhaps one approach would be to re-use the incubation approach we have; if 
drivers want to have the fast-development cycles uncoupled from core reviewers 
then they can be moved into an incubation project.  When there is a suitable 
level of integration (and automated testing to maintain it of course) then they 
can graduate.  I imagine at that point there will be more development of new 
features which affect Nova in general (to expose each hypervisor's strengths), 
so there would be fewer cases of them being restricted just to the virt/* tree.

Bob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I've thought about this in the past, but always come back to a couple of things.

Being a community driven project, if a vendor doesn't want to participate in 
the project then why even pretend (ie having their own project/repo, reviewers 
etc).  Just post your code up in your own github and let people that want to 
use it pull it down.  If it's a vendor project, then that's fine; have it be a 
vendor project.


There are quite a few reasons why putting this project somewhere else wouldn't 
make sense:

1) It's not a vendor project; we're getting contributions from community members 
belonging to other companies as well
2) Legitimacy. Users want to know that this code is going to be there with or 
without us
3) Driver interface stability, as everybody is against a stable interface (even 
if it is de facto perfectly stable)
4) It's not a vendor project, did I say it already? :-)

That said, we are constantly on the verge of starting to push code to customers 
from a fork, but we are trying as hard as possible to avoid it, as it is definitely 
bad for the whole community.


In my opinion pulling out and leaving things up to the vendors as is being 
described has significant negative impacts.  Not the least of which is 
consistency in behaviors.  On the Cinder side, the core team spends the bulk of 
their review time looking at things like consistent behaviors, missing features 
or paradigms that are introduced that break other drivers.  For example 
looking at things like, are all the base features implemented, do they work the 
same way, are we all using the same vocabulary, will it work in a
multi-backend environment.  In addition, it's rare that a vendor implements a 
new feature in their driver that doesn't impact/touch the core code somewhere.


The moment you have a separate project for a driver, why should you 
care about whether a driver breaks something or not? IMO it's a job for the driver 
maintainers and for its CI.

Having drivers be a part of the core project is very valuable in my opinion.  
It's also very important in my view that the core team for Nova actually has 
some idea and notion of what's being done by the drivers that it's supporting.  
Moving everybody further and further into additional private silos seems like a 
very bad direction to me, it makes things like knowledge transfer, 
documentation and worst of all bug triaging extremely difficult.


That code is not going to disappear. Nova devs can look into these offspring 
projects and contribute at any time. I also expect driver devs to contribute to 
the Nova project as much as possible, as it is in our common interest.


I could go on and on here, but nobody likes to hear

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 01:18 PM, Alessandro Pilotti wrote:
 
 
 On Oct 11, 2013, at 19:04 , John Griffith john.griff...@solidfire.com
 mailto:john.griff...@solidfire.com
  wrote:
 



 On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com
 mailto:bob.b...@citrix.com wrote:

  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com
 mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply
 include the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm
 completely
  open to going that route on a per-driver basis.  Thoughts?

 I think that all drivers that are officially supported must be
 treated in the same way.

 If we are going to split out drivers into a separate but still
 official repository then we should do so for all drivers.  This
 would allow Nova core developers to focus on the architectural
 side rather than how each individual driver implements the API
 that is presented.

 Of course, with the current system it is much easier for a Nova
 core to identify and request a refactor or generalisation of code
 written in one or multiple drivers so they work for all of the
 drivers - we've had a few of those with XenAPI where code we have
 written has been pushed up into Nova core rather than the XenAPI tree.

 Perhaps one approach would be to re-use the incubation approach we
 have; if drivers want to have the fast-development cycles
 uncoupled from core reviewers then they can be moved into an
 incubation project.  When there is a suitable level of integration
 (and automated testing to maintain it of course) then they can
 graduate.  I imagine at that point there will be more development
 of new features which affect Nova in general (to expose each
 hypervisor's strengths), so there would be fewer cases of them
 being restricted just to the virt/* tree.

 Bob

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I've thought about this in the past, but always come back to a couple
 of things.

 Being a community driven project, if a vendor doesn't want to
 participate in the project then why even pretend (ie having their own
 project/repo, reviewers etc).  Just post your code up in your own
 github and let people that want to use it pull it down.  If it's a
 vendor project, then that's fine; have it be a vendor project.

 
 There are quite a few reasons why putting this project somewhere else
 wouldn't make sense:
 
 1) It's not a vendor project, we're having contributions from community
 members belonging to other companies as well
 2) Legitimation. Users want to know that this code is going to be there
 with or without us
 3) Driver interface stability, as everybody is against a stable
 interface (even if de facto is perfectly stable)
 4) it's not a vendor project, did I say it already? :-)
 
 Said that, we are constantly on the verge of starting pushing code to
 customers from a fork, but we are trying as hard as possible to avoid as
 it is definitely bad for the whole community. 

A vendor project doesn't mean you couldn't accept contributions.  It
means that it would be primarily developed/maintained/managed by someone
other than the OpenStack project, which would in this case be Microsoft
(or its contractor(s)).

I totally agree with the benefits of staying in tree.  The question is
whether you are willing to pay the cost to get those benefits.

Splitting into repos and giving over control is starting to feel like
giving you all of the benefits (primarily being legitimate as you
say), without having to pay the cost (more involvement).

The reason we're at this point and having this conversation about the
fate of hyper-v is that there has been an imbalance.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti




On Oct 11, 2013, at 19:29 , Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com
 wrote:

On 10/11/2013 12:04 PM, John Griffith wrote:



On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball 
bob.b...@citrix.commailto:bob.b...@citrix.com
mailto:bob.b...@citrix.com wrote:

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.comhttp://redhat.com
   mailto:rbry...@redhat.com]
Sent: 11 October 2013 15:18
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
   mailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Hyper-V] Havana status

As a practical example for Nova: in our case that would simply
   include the
following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.

If maintainers of a particular driver would prefer this sort of
autonomy, I'd rather look at creating new repositories.  I'm
   completely
open to going that route on a per-driver basis.  Thoughts?

   I think that all drivers that are officially supported must be
   treated in the same way.

   If we are going to split out drivers into a separate but still
   official repository then we should do so for all drivers.  This
   would allow Nova core developers to focus on the architectural side
   rather than how each individual driver implements the API that is
   presented.

   Of course, with the current system it is much easier for a Nova core
   to identify and request a refactor or generalisation of code written
   in one or multiple drivers so they work for all of the drivers -
   we've had a few of those with XenAPI where code we have written has
   been pushed up into Nova core rather than the XenAPI tree.

   Perhaps one approach would be to re-use the incubation approach we
   have; if drivers want to have the fast-development cycles uncoupled
   from core reviewers then they can be moved into an incubation
   project.  When there is a suitable level of integration (and
   automated testing to maintain it of course) then they can graduate.
I imagine at that point there will be more development of new
   features which affect Nova in general (to expose each hypervisor's
   strengths), so there would be fewer cases of them being restricted
   just to the virt/* tree.

   Bob

   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
   mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I've thought about this in the past, but always come back to a couple of
things.

Being a community driven project, if a vendor doesn't want to
participate in the project then why even pretend (ie having their own
project/repo, reviewers etc).  Just post your code up in your own github
and let people that want to use it pull it down.  If it's a vendor
project, then that's fine; have it be a vendor project.

In my opinion pulling out and leaving things up to the vendors as is
being described has significant negative impacts.  Not the least of
which is consistency in behaviors.  On the Cinder side, the core team
spends the bulk of their review time looking at things like consistent
behaviors, missing features or paradigms that are introduced that
break other drivers.  For example looking at things like, are all the
base features implemented, do they work the same way, are we all using
the same vocabulary, will it work in a multi-backend environment.  In
addition, it's rare that a vendor implements a new feature in their
driver that doesn't impact/touch the core code somewhere.

Having drivers be a part of the core project is very valuable in my
opinion.  It's also very important in my view that the core team for
Nova actually has some idea and notion of what's being done by the
drivers that it's supporting.  Moving everybody further and further into
additional private silos seems like a very bad direction to me, it makes
things like knowledge transfer, documentation and worst of all bug
triaging extremely difficult.

I could go on and on here, but nobody likes to hear anybody go on a
rant.  I would just like to see if there are other alternatives to
improving the situation than fragmenting the projects.

Really good points here.  I'm glad you jumped in, because the underlying
issue here applies well to other projects (especially Cinder and Neutron).

So, the alternative to the split official repos is to either:

1) Stay in tree, participate, and help share the burden of maintenance
of the project


Which means getting back to the status quo with all the problems we had. I hope 
we'll be able to find something better than that.

or

2) Truly be a vendor project, and to make that more clear, split out
into your own (not nova) repository.

I explained in my previous reply some points about why it would be IMO totally 
counterproductive to have a fork outside of OpenStack.
Our goal is to have more and more independent community

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:

 Talking about new community involvements, newcomers are getting very
 frustrated to have to wait for weeks to get a meaningful review and I
 cannot blame them if they don't want to get involved anymore after the
 first patch!
 This makes public bureaucracy here in Eastern Europe look like a lightweight
 process in comparison! :-)

You keep making it sound like the situation is absolutely terrible.  The
stats that I track say otherwise.  That's why I brought them up at the
very beginning of this message.  So:

1) I don't think it's as bad as you make it out to be (based on actual
numbers).

http://russellbryant.net/openstack-stats/nova-openreviews.html
http://russellbryant.net/openstack-stats/all-openreviews.html

2) I don't think you (or hyper-v in general) are a victim (again based on
my stats).  If review times need to improve, it's a much more general
problem.

3) There's only one way to improve review times, which is more people
reviewing.  We could use review help in Nova, as could all projects I'm
sure.  We've also established that your review contribution is rather
small (30 reviews over 3 months across *all* openstack projects) [1], so I
don't think you can really claim to be helping with the problem.  I wouldn't
normally call anyone out like this.  It's not necessarily a *problem*
... until you complain.

So, are you in?  Let's work together to make things better.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016470.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread David Kranz

On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:





On Oct 11, 2013, at 19:29 , Russell Bryant rbry...@redhat.com 
mailto:rbry...@redhat.com

 wrote:


On 10/11/2013 12:04 PM, John Griffith wrote:


[... snip ...]


Talking about new community involvements, newcomers are getting very 
frustrated to have to wait for weeks to get a meaningful review and I 
cannot blame them if they don't want to get involved anymore after the 
first patch!
This makes public bureaucracy here in Eastern Europe look like a 
lightweight process in comparison! :-)


Let me add another practical reason about why a separate OpenStack 
project would be a good idea:


Anytime that we commit a driver-specific patch, a lot of Tempest 
tests are executed on Libvirt and XenServer (for Icehouse those will 
be joined by another pack of CIs, including Hyper-V).
On the jenkins side, we have to wait for regression tests that have 
nothing to do with the code that we are pushing. During the H3 push, 
this meant waiting for hours and hoping not to have to issue the 100th 
recheck / reverify bug xxx.


A separate project would obviously include only the required tests and 
be definitely more lightweight, offloading quite some work from the 
SmokeStack / Jenkins job for everybody's happiness.



I'm glad you brought this up. There are two issues here, both discussed 
by the qe/infra groups and others at the Havana summit and after.


How do you/we know which regression tests have nothing to do with the 
code changed in a particular patch? Or that the answer won't change 
tomorrow? The only way to do that is to assert dependencies and 
non-dependencies between components that will be used to decide which 
tests should be run for each patch. There was a lively discussion (with 
me taking your side initially) at the summit and it was decided that a 
generic wasting resources argument was not sufficient to introduce 
that fragility and so we would run the whole test suite as a gate on all 
projects. That decision was to be revisited if resources became a problem.
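
For what it's worth, here is a minimal sketch of what that kind of
dependency-driven test selection could look like; the path-to-job table and the
job names are purely hypothetical, and keeping a table like this correct by
hand is exactly the fragility I mean:

    # Hypothetical mapping from source subtrees to the test jobs that must run.
    # The most specific prefix is listed first and wins.
    TEST_MAP = [
        ("nova/virt/hyperv/", {"unit", "tempest-hyperv"}),
        ("nova/virt/libvirt/", {"unit", "tempest-full"}),
        ("nova/", {"unit", "tempest-full"}),
    ]


    def jobs_for_change(changed_files):
        """Return the union of test jobs triggered by the files in a patch."""
        jobs = set()
        for path in changed_files:
            for prefix, prefix_jobs in TEST_MAP:
                if path.startswith(prefix):
                    jobs.update(prefix_jobs)
                    break
            else:
                # Unknown path: be safe and run everything.
                jobs.update({"unit", "tempest-full"})
        return jobs


    print(jobs_for_change(["nova/virt/hyperv/vmops.py"]))
    # -> {'unit', 'tempest-hyperv'} (set ordering may vary)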


As for the 100th recheck, that is a result of the recent introduction of 
parallel tempest runs before the Havana rush. It was decided that the 
benefit in throughput from drastically reduced gate job times outweighed 
the pain of potentially doing a lot of rechecks. For the most part the 
bugs being surfaced were real OpenStack bugs that were showing up due to 
the new stress of parallel test execution. This was a good thing, 
though certainly painful to all. With hindsight I'm not sure if that was 
the right decision or not.


This is just an explanation of what has happened and why. There are 
obviously costs and benefits of being tightly bound to the project.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 12:43 PM, David Kranz dkr...@redhat.com wrote:

  On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:





  On Oct 11, 2013, at 19:29 , Russell Bryant rbry...@redhat.com
  wrote:

 On 10/11/2013 12:04 PM, John Griffith wrote:


Umm... just to clarify the section below is NOT from my message.  :)


 [... snip ...]


  Talking about new community involvements, newcomers are getting very
 frustrated to have to wait for weeks to get a meaningful review and I
 cannot blame them if they don't want to get involved anymore after the
 first patch!
  This makes public bureaucracy here in Eastern Europe look like a lightweight
 process in comparison! :-)

  Let me add another practical reason about why a separate OpenStack
 project would be a good idea:

   Anytime that we commit a driver-specific patch, a lot of Tempest tests
 are executed on Libvirt and XenServer (for Icehouse those will be joined by
 another pack of CIs, including Hyper-V).
 On the jenkins side, we have to wait for regression tests that have
 nothing to do with the code that we are pushing. During the H3 push, this
 meant waiting for hours and hoping not to have to issue the 100th recheck
  / reverify bug xxx.

  A separate project would obviously include only the required tests and
 be definitely more lightweight, offloading quite some work from the
 SmokeStack / Jenkins job for everybody's happiness.


  I'm glad you brought this up. There are two issues here, both discussed
 by the qe/infra groups and others at the Havana summit and after.

 How do you/we know which regression tests have nothing to do with the code
 changed in a particular patch? Or that the answer won't change tomorrow?
 The only way to do that is to assert dependencies and non-dependencies
 between components that will be used to decide which tests should be run
 for each patch. There was a lively discussion (with me taking your side
 initially) at the summit and it was decided that a generic wasting
 resources argument was not sufficient to introduce that fragility and so
 we would run the whole test suite as a gate on all projects. That decision
 was to be revisited if resources became a problem.

 As for the 100th recheck, that is a result of the recent introduction of
 parallel tempest runs before the Havana rush. It was decided that the
 benefit in throughput from drastically reduced gate job times outweighed
 the pain of potentially doing a lot of rechecks. For the most part the bugs
 being surfaced were real OpenStack bugs that were showing up due to the new
 stress of parallel test execution. This was a good thing, though
 certainly painful to all. With hindsight I'm not sure if that was the right
 decision or not.

 This is just an explanation of what has happened and why. There are
 obviously costs and benefits of being tightly bound to the project.

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Mathew R Odden
Not to derail the current direction this thread is heading but my 2 cents
on the topic of moving drivers out of tree:

I share a lot of the same concerns that John Griffith pointed out. As one
of the maintainers of the PowerVM driver in nova,
I view the official-ness of having the driver in tree as a huge benefit.

For the hyper-v driver, it might make sense to be out of tree, or have an
out of tree copy for fast iteration. As one of the original
authors of the PowerVM driver, this is how we started. We had an internal
project and were able to iterate fast, fix issues quickly and
efficiently, and release as often or as little as we wanted. The copy of
the driver in Nova today is that same driver, but evolved and has a
different purpose. It is an 'official' shared copy that other teams and
community members can contribute to. It is meant to be
the community's driver, not a vendor driver. There is nothing stopping
anyone from taking that code and making their own version if they
want to go back to a fast iteration model, but there are obvious
consequences to that. Some of those consequences might come to
light during the Icehouse development cycle, so stick around if you want to
see an example of the problems with that approach.

I don't think we should move drivers out of tree, but I did like the idea
of an incubator area for new drivers. As Dan pointed out already,
this gives us a workflow to match the new requirement of CI integration for
each driver as well.

Also, I think someone already pointed this out, but doing code reviews and
helping out elsewhere in the community is important and
would definitely help the hyper-V case. Code reviews are obviously the most
in demand activity needed right now, but the idea is
that contributors should be involved in the entire project and help
optimize the whole. No matter how many bugs you fix in the hyper-V
driver,
the driver itself would be useless if the rest of Nova were so buggy as to be
unusable.
The same applies between Nova and the other OpenStack projects.

Mathew Odden, Software Developer
IBM STG OpenStack Development
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Alessandro Pilotti


On 11.10.2013, at 22:58, Rochelle.Grober 
rochelle.gro...@huawei.commailto:rochelle.gro...@huawei.com wrote:

Pardon me for cutting out most of the discussion.  I’d like to summarize a bit 
here and make a proposal.

Issues:


· Driver and Plugin writers for Nova (and other Core OpenStack 
projects) have a different development focus than core developers, which can 
create both delays in getting submitted code reviewed and tensions between the 
two camps.

· It is in OpenStack’s best interests to have these driver/plugin 
writers participating in OpenStack development as their contributions help make 
OpenStack a more relevant and compelling set of products in the Cloud space

· Delays in reviews are painful to driver writers, causing extra 
branching, lots of duplicated work, etc.

· Nova Core reviewers are overworked and are less versed in the 
driver/plugin code, architecture, and issues, which makes them a little averse to 
performing reviews on these patches

· [developers|reviewers] aren’t appreciated

· Tempers flare

Proposed solution:
There have been a couple of solutions proposed.  I’m presenting a merged/hybrid 
solution that may work

· Create a new repository for the extra drivers:

o   Keep kvm and Xenapi in the Nova project as “reference” drivers

o   openstack/nova-extra-drivers (proposed by rbryant)

o   Have all drivers other than reference drivers in the extra-drivers project 
until they meet the maturity of the ones in Nova

o   The core reviewers for nova-extra-drivers will come from its developer 
pool.  As Alessandro pointed out, all the driver developers have more in common 
with each other than core Nova, so they should be able to do a better job of 
reviewing these patches than Nova core.  Plus, this might create some synergy 
between different drivers that will result in more commonalities across drivers 
and better stability.  This also reduces the workloads on both Nova Core 
reviewers and the driver developers/core reviewers.

The Hyper-V driver is definitely stable, production grade and feature complete 
for our targets since Grizzly; the fact that we push a lot on the blueprint 
development side is simply because we see potential in new features.

So if a nova-extra-drivers project means a ghetto for B-class drivers, my 
answer is no way, unless they are missing a CI gate starting from Icehouse. :-)

Getting back to the initial topic, we have only a small bunch of bug fixes that 
need to be merged for the features that got added in Havana, which are just 
sitting in review limbo and which originated all this discussion 
(incidentally, all in Nova).

I still see our work as completely independent from Nova, but getting along with 
the entire community of course has a value that goes beyond the merits of our 
driver or any other single aspect of OpenStack. My suggestion is to bring this 
discussion to HK, possibly with a few beers in front and sort it out :-)


o   If you don’t feel comfortable with the last bullet, have  Nova core 
reviewers do the final approval, but only for the obvious “does this code meet 
our standards?”

The proposed solution focuses the strengths of the different developers in 
their strong areas.  Everyone will still have to stretch to do reviews and now 
there is a possibility that the developers that best understand the drivers 
might be able to advance the state of the drivers by sharing their expertise 
amongst each other.

The proposal also offloads some of the workload for Nova Core reviewers and 
places it where it is best handled.

And, no more sniping about participation.  The driver developers will 
participate more because their vested interests are communal to the project 
they are now in.  Maybe the integration of tests, etc will even happen faster 
and expand coverage faster.

And by the way,  the statistics on participation are just that: statistics.  If 
you look at rbryant’s numbers, they are different from Stackalytics which are 
different from Launchpad which are different from 
review.openstack.orghttp://review.openstack.org.

And as an FYI: guess what?  Anyone working on a branch, such as stable (which 
promotes the commercial viability of OpenStack), gets ignored for their 
contributions once the branch has happened.  At least on Stackalytics.  I don’t 
know about rbryant’s numbers.

--Rocky Grober

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Russell Bryant
On 10/11/2013 05:09 PM, Alessandro Pilotti wrote:
 My suggestion is to bring this discussion to HK, possibly with a few beers
 in front and sort it out :-)

Sounds like a good plan to me!

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Rochelle.Grober
When you do, have a beer for me.  I'll be looking for what you guys come up 
with.

And I don't think a separate project would be a second class project.  The 
driver guys could be so successful that all the drivers end up there and the 
interfaces between Nova and the drivers get *real* clean and fast.

--Rocky Grober

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Friday, October 11, 2013 3:59 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Hyper-V] Havana status

On 10/11/2013 05:09 PM, Alessandro Pilotti wrote:
 My suggestion is to bring this discussion to HK, possibly with a few beers
 in front and sort it out :-)

Sounds like a good plan to me!

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Alessandro Pilotti
Hi all,

As the Havana release date is approaching fast, I'm sending this email to sum 
up the situation for pending bugs and reviews related to the Hyper-V 
integration in OpenStack.

In the past weeks we diligently marked bugs that are related to Havana features 
with the havana-rc-potential tag, which, at least as far as Nova is concerned, 
had absolutely no effect.
Our code is sitting in the review queue as usual and, since it is not tagged for a 
release or prioritised, there's no guarantee that anybody will take a look at 
the patches in time for the release. Needless to say, this starts to feel like 
a Kafka novel. :-)
The goal for us is to make sure that our efforts are directed to the main 
project tree, avoiding the need to focus on a separate fork with more advanced 
features and updated code, even if this means slowing down our pace a lot. Due 
to the limited review bandwidth available in Nova we had to postpone to 
Icehouse blueprints that were already implemented for Havana, which is fine, 
but we definitely cannot leave bug fixes behind (even if they are just a small 
number, like in this case).

Some of those bugs are critical for Hyper-V support in Havana, while the 
related fixes typically consist of small patches with very few line changes.

Here's the detailed status:


--Nova--

The following bugs have already been fixed and are waiting for review:


VHD format check is not properly performed for fixed disks in the Hyper-V driver

https://bugs.launchpad.net/nova/+bug/1233853
https://review.openstack.org/#/c/49269/
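
(For context on this one: per the published VHD spec, a fixed VHD has no header
at the start of the file, only a 512-byte footer at the very end, while dynamic
VHDs also carry a copy of that footer at offset 0. So a check that only looks at
the beginning of the file misses fixed disks. A rough, purely illustrative
sketch of the idea behind the fix, not the actual Nova code:)

    def looks_like_vhd(path):
        """Check the trailing 512-byte footer, which both fixed and dynamic
        VHD images carry, for the 'conectix' cookie."""
        with open(path, 'rb') as f:
            f.seek(-512, 2)      # whence=2 (end of file); footer is last 512 bytes
            footer = f.read(512)
        return footer[:8] == b'conectix'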


Deploy instances failed on Hyper-V with Chinese locale

https://bugs.launchpad.net/nova/+bug/1229671
https://review.openstack.org/#/c/48267/


Nova Hyper-V driver volumeutils iscsicli ListTargets contains a typo

https://bugs.launchpad.net/nova/+bug/1237432
https://review.openstack.org/#/c/50671/


Hyper-V driver needs tests for WMI WQL instructions

https://bugs.launchpad.net/nova/+bug/1220256
https://review.openstack.org/#/c/48940/


target_iqn is referenced before assignment after exceptions in 
hyperv/volumeop.py attch_volume()

https://bugs.launchpad.net/nova/+bug/1233837
https://review.openstack.org/#/c/49259/
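
(This last one is the classic Python pattern where a name bound inside a try
block is then used in the except handler; if the failure happens before the
assignment, the handler itself raises UnboundLocalError and hides the real
error. A generic illustration of the bug class and of the fix, not the actual
driver code:)

    import logging

    LOG = logging.getLogger(__name__)


    def _login_target(target_iqn):
        # Stand-in for the real iSCSI login call, used only so the example runs.
        if target_iqn is None:
            raise ValueError("missing target_iqn")


    def attach_volume_broken(connection_info):
        try:
            target_iqn = connection_info['data']['target_iqn']
            _login_target(target_iqn)
        except Exception:
            # If the lookup above raised, target_iqn was never assigned and
            # this line raises UnboundLocalError, masking the original error.
            LOG.exception("Failed to attach volume %s", target_iqn)
            raise


    def attach_volume_fixed(connection_info):
        target_iqn = connection_info.get('data', {}).get('target_iqn')
        try:
            _login_target(target_iqn)
        except Exception:
            LOG.exception("Failed to attach volume %s", target_iqn)
            raise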


--Neutron--

Waiting for review

ml2 plugin may let hyperv agents ports to build status

https://bugs.launchpad.net/neutron/+bug/1224991
https://review.openstack.org/#/c/48306/



The following two bugs still require some work, but will be done in the 
next few days.

Hyper-V fails to spawn snapshots

https://bugs.launchpad.net/nova/+bug/1234759
https://review.openstack.org/#/c/50439/

VHDX snapshot from Hyper-V driver is bigger than original instance

https://bugs.launchpad.net/nova/+bug/1231911
https://review.openstack.org/#/c/48645/


As usual, thanks for your help!

Alessandro



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Alessandro Pilotti




On Oct 10, 2013, at 23:50 , Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com
 wrote:

On 10/10/2013 02:20 PM, Alessandro Pilotti wrote:
Hi all,

As the Havana release date is approaching fast, I'm sending this email
to sum up the situation for pending bugs and reviews related to the
Hyper-V integration in OpenStack.

In the past weeks we diligently marked bugs that are related to Havana
features with the havana-rc-potential tag, which at least for what
Nova is concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged
for a release or prioritised, there's no guarantee that anybody will
take a look at the patches in time for the release. Needless to say,
this starts to feel like a Kafka novel. :-)
The goal for us is to make sure that our efforts are directed to the
main project tree, avoiding the need to focus on a separate fork with
more advanced features and updated code, even if this means slowing down
a lot our pace. Due to the limited review bandwidth available in Nova we
had to postpone to Icehouse blueprints which were already implemented
for Havana, which is fine, but we definitely cannot leave bug fixes
behind (even if they are just a small number, like in this case).

Some of those bugs are critical for Hyper-V support in Havana, while the
related fixes typically consist in small patches with very few line changes.

Does the rant make you feel better?  :-)


Hi Russell,

This was definitely not meant to sound like a rant; I apologise if it came across 
that way. :-)

With a more general view of nova review performance, our averages are
very good right now and are meeting our goals for review turnaround times:

http://russellbryant.net/openstack-stats/nova-openreviews.html

-- Total Open Reviews: 230
-- Waiting on Submitter: 105
-- Waiting on Reviewer: 125

-- Stats since the latest revision:
 Average wait time: 3 days, 12 hours, 14 minutes
 Median wait time: 1 days, 12 hours, 31 minutes
 Number waiting more than 7 days: 19

-- Stats since the last revision without -1 or -2 (ignoring jenkins):
 Average wait time: 5 days, 10 hours, 57 minutes
 Median wait time: 2 days, 13 hours, 27 minutes


Usually when this type of discussion comes up, the first answer that I hear is 
some defensive data about how well project X ranks compared to some metric or 
the whole OpenStack average.
I'm not questioning how much and how well you guys are working (I 
actually firmly believe that you DO work very well); I'm just discussing 
the way in which blueprints and bugs get prioritised.

Working on areas like Hyper-V inside of the OpenStack ecosystem is currently 
quite peculiar from a project management perspective due to the fragmentation 
of the commits among a number of larger projects.
Our bits are spread all over between Nova, Neutron, Cinder, Ceilometer, Windows 
Cloud-Init and, let's not forget, Crowbar and OpenVSwitch, although those are 
not strictly part of OpenStack. With the obvious exception of Windows Cloud-Init, 
in none of those projects does our contribution reach the critical mass required 
for the project to depend somehow on what we do, or for us to reach a core status 
that would grant us sufficient autonomy. Furthermore, to complicate things 
further, with every release we are adding features to more projects.

On the other hand, to get our code reviewed and merged we are always dependent 
on the good will and best effort of core reviewers who don't necessarily know 
or care about specific driver, plugin or agent internals. This leads to even 
longer review cycles, even considering that reviewers are clearly doing their 
best to understand the patches, and we couldn't be more thankful.

Best effort also has a very specific meaning: in Nova all the Havana Hyper-V 
blueprints were marked as low priority (which translates to: the only 
way to get them merged is to beg for reviews or maybe commit them on day 1 of 
the release cycle and pray) while most of the Hyper-V bugs had no priority at 
all (which translates to: make some noise on the ML and IRC or nobody 
will care). :-)

This reality unfortunately applies to most of the sub-projects (not only 
Hyper-V) and can IMHO be solved only by delegating more autonomy to the 
sub-project teams on their specific area of competence across OpenStack as a 
whole. Hopefully we'll manage to find a solution during the design summit, as we 
are definitely not the only ones feeling this way, judging by various 
threads in this ML.

I personally consider that in a large project like this one there are multiple 
ways to work towards the greater good. Our calling obviously 
consists in bringing OpenStack to the Microsoft world, which has worked very 
well so far; I'd just prefer to be able to dedicate more resources to adding 
features, fixing bugs and making users happy instead of useless waiting.

Also note that there are no hyper-v patches that are in the top 5 of any
of the 

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Tim Smith
On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant rbry...@redhat.com wrote:


 Please understand that I only want to help here.  Perhaps a good way for
 you to get more review attention is get more karma in the dev community
 by helping review other patches.  It looks like you don't really review
 anything outside of your own stuff, or patches that touch hyper-v.  In
 the absence of significant interest in hyper-v from others, the only way
 to get more attention is by increasing your karma.


NB: I don't have any vested interest in this discussion except that I want
to make sure OpenStack stays Open, i.e. inclusive. I believe the concept
of reviewer karma, while seemingly sensible, is actually subtly counter
to the goals of openness, innovation, and vendor neutrality, and would also
lead to overall lower commit quality.

Brian Kernighan famously wrote: Debugging is twice as hard as writing the
code in the first place. A corollary is that constructing a mental model
of code is hard; perhaps harder than writing the code in the first place.
It follows that reviewing code is not an easy task, especially if one has
not been intimately involved in the original development of the code under
review. In fact, if a reviewer is not intimately familiar with the code
under review, and therefore only able to perform the functions of human
compiler and style-checker (functions which can be and typically are
performed by automatic tools), the rigor of their review is at best
less-than-ideal, and at worst purely symbolic.

It is logical, then, that a reviewer should review changes to code that
he/she is familiar with. Attempts to gamify the implicit review
prioritization system through a karma scheme are sadly doomed to fail, as
contributors hoping to get their patches reviewed will have no option but
to build karma reviewing patches in code they are unfamiliar with,
leading to a higher number of low quality reviews.

So, if a cross-functional karma system won't accomplish the desired
result (high-quality reviews of commits across all functional units), what
will it accomplish (besides overall lower commit quality)?

Because the karma system inherently favors entrenched (read: heavily
deployed) code, it forms a slippery slope leading to a mediocre
one-size-fits-all stack, where contributors of new technologies,
approaches, and hardware/software drivers will see their contributions die
on the vine due to lack of core reviewer attention. If the driver team for
a widely deployed hypervisor (outside of the OpenStack space - they can't
really be expected to have wide OpenStack deployment without a mature
driver) is having difficulty with reviews due to an implicit karma
deficit, imagine the challenges that will be faced by the future
SDN/SDS/SDx innovators of the world hoping to find a platform for their
innovation in OpenStack.

Again, I don't have any vested interest in this discussion, except that I
believe the concept of reviewer karma to be counter to both software
quality and openness. In this particular case it would seem that the
simplest solution to this problem would be to give one of the hyper-v team
members core reviewer status, but perhaps there are consequences to that
that elude me.

Regards,
Tim



 https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Matt Riedemann
Getting integration testing hooked up for the hyper-v driver with tempest 
should go a long way here which is a good reason to have it.  As has been 
mentioned, there is a core team of people that understand the internals of 
the hyper-v driver and the subtleties of when it won't work, and only 
those with a vested interest in using it will really care about it.

My team has the same issue with the powervm driver.  We don't have 
community integration testing hooked up yet.  We run tempest against it 
internally so we know what works and what doesn't, but besides standard 
code review practices that apply throughout everything (strong unit test 
coverage, consistency with other projects, hacking rules, etc), any other 
reviewer has to generally take it on faith that what's in there works as 
it's supposed to.  Sure, there is documentation available on what the 
native commands do and anyone can dig into those to figure it out, but I 
wouldn't expect that low-level of review from anyone that doesn't 
regularly work on the powervm driver.  I think the same is true for 
anything here.  So the equalizer is a rigorously tested and broad set of 
integration tests, which is where we all need to get to with tempest and 
continuous integration.
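
To make the "take it on faith" part a little more concrete, below is a minimal,
purely hypothetical sketch of the kind of unit test any reviewer can still
evaluate without access to the hypervisor: the platform-specific WMI/WQL call is
mocked out, so only the driver-side logic is exercised. The VMUtils class and
the query string are illustrative, not the actual nova code.

    import unittest
    from unittest import mock   # on older Python, the external 'mock' package


    class VMUtils(object):
        """Illustrative helper wrapping a WMI-style connection."""

        def __init__(self, conn):
            self._conn = conn

        def list_instances(self):
            # WQL query against the (mocked) virtualization namespace.
            vms = self._conn.query(
                "SELECT * FROM Msvm_ComputerSystem "
                "WHERE Caption = 'Virtual Machine'")
            return [vm.ElementName for vm in vms]


    class VMUtilsTestCase(unittest.TestCase):
        def test_list_instances(self):
            fake_vm = mock.Mock()
            fake_vm.ElementName = 'instance-00000001'
            conn = mock.Mock()
            conn.query.return_value = [fake_vm]

            self.assertEqual(['instance-00000001'],
                             VMUtils(conn).list_instances())
            conn.query.assert_called_once_with(
                "SELECT * FROM Msvm_ComputerSystem "
                "WHERE Caption = 'Virtual Machine'")


    if __name__ == '__main__':
        unittest.main()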

We've had the same issues as mentioned in the original note about things 
slipping out of releases or taking a long time to get reviewed, and we've 
had to fork code internally because of it which we then have to continue 
to try and get merged upstream - and it's painful, but it is what it is, 
that's the nature of the business.

Personally my experience has been that the more I give the more I get. The 
more I'm involved in what others are doing and the more I review others' 
code, the more I can build a relationship which is mutually beneficial. 
Sometimes I can only say 'hey, you need unit tests for this or this 
doesn't seem right but I'm not sure', but unless you completely automate 
code coverage metrics and build that back into reviews, e.g. does your 
1000 line blueprint have 95% code coverage in the tests, you still need 
human reviewers on everything, regardless of context.  Even then it's not 
going to be enough, there will always be a need for people with a broader 
vision of the project as a whole that can point out where things are going 
in the wrong direction even if it fixes a bug.
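
If somebody did want to wire a coverage gate like that into the check pipeline,
it would not be much more than the following sketch (the pytest invocation, the
test path and the 95% threshold are all made-up placeholders, not anything our
CI actually runs):

    import subprocess
    import sys


    def coverage_gate(test_target="nova/tests/virt/hyperv", minimum=95):
        """Run the tests under coverage and fail if the total is too low."""
        subprocess.check_call(["coverage", "run", "-m", "pytest", test_target])
        # 'coverage report --fail-under=N' exits non-zero when the total
        # reported coverage is below N percent.
        return subprocess.call(
            ["coverage", "report", "--fail-under=%d" % minimum]) == 0


    if __name__ == "__main__":
        sys.exit(0 if coverage_gate() else 1)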

The point is I see both sides of the argument, I'm sure many people do. In 
a large complicated project like this it's inevitable.  But I think the 
quality and adoption of OpenStack speaks for itself and I believe a key 
component of that is the review system and that's only as good as the 
people who are going to uphold the standards across the project.  I've 
been on enough development projects that give plenty of lip service to 
code quality and review standards which are always the first thing to go 
when a deadline looms, and those projects are always ultimately failures.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Tim Smith tsm...@gridcentric.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   10/10/2013 07:48 PM
Subject:Re: [openstack-dev] [Hyper-V] Havana status



On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant rbry...@redhat.com 
wrote:
 
Please understand that I only want to help here.  Perhaps a good way for
you to get more review attention is get more karma in the dev community
by helping review other patches.  It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v.  In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.

NB: I don't have any vested interest in this discussion except that I want 
to make sure OpenStack stays Open, i.e. inclusive. I believe the concept 
of reviewer karma, while seemingly sensible, is actually subtly counter 
to the goals of openness, innovation, and vendor neutrality, and would 
also lead to overall lower commit quality.

Brian Kernighan famously wrote: Debugging is twice as hard as writing the 
code in the first place. A corollary is that constructing a mental model 
of code is hard; perhaps harder than writing the code in the first place. 
It follows that reviewing code is not an easy task, especially if one has 
not been intimately involved in the original development of the code under 
review. In fact, if a reviewer is not intimately familiar with the code 
under review, and therefore only able to perform the functions of human 
compiler and style-checker (functions which can be and typically are 
performed by automatic tools), the rigor of their review is at best 
less-than-ideal, and at worst purely symbolic.

It is logical, then, that a reviewer should review

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 11:20 AM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

  Hi all,

  As the Havana release date is approaching fast, I'm sending this email
 to sum up the situation for pending bugs and reviews related to the Hyper-V
 integration in OpenStack.

  In the past weeks we diligently marked bugs that are related to Havana
 features with the havana-rc-potential tag, which at least for what Nova
 is concerned, had absolutely no effect.
 Our code is sitting in the review queue as usual and, not being tagged for
 a release or prioritised, there's no guarantee that anybody will take a
 look at the patches in time for the release. Needless to say, this starts
 to feel like a Kafka novel. :-)
 The goal for us is to make sure that our efforts are directed to the main
 project tree, avoiding the need to focus on a separate fork with more
 advanced features and updated code, even if this means slowing down a lot
 our pace. Due to the limited review bandwidth available in Nova we had to
 postpone to Icehouse blueprints which were already implemented for Havana,
 which is fine, but we definitely cannot leave bug fixes behind (even if
 they are just a small number, like in this case).

  Some of those bugs are critical for Hyper-V support in Havana, while the
 related fixes typically consist in small patches with very few line changes.

  Here's the detailed status:


  --Nova--

  The following bugs have already been fixed and are waiting for review:


  VHD format check is not properly performed for fixed disks in the
 Hyper-V driver

  https://bugs.launchpad.net/nova/+bug/1233853
 https://review.openstack.org/#/c/49269/


 Deploy instances failed on Hyper-V with Chinese locale

  https://bugs.launchpad.net/nova/+bug/1229671
 https://review.openstack.org/#/c/48267/


  Nova Hyper-V driver volumeutils iscsicli ListTargets contains a typo

  https://bugs.launchpad.net/nova/+bug/1237432
 https://review.openstack.org/#/c/50671/


This link is incorrect; it should read
https://review.openstack.org/#/c/50482/




Hyper-V driver needs tests for WMI WQL instructions

https://bugs.launchpad.net/nova/+bug/1220256
https://review.openstack.org/#/c/48940/


As a core reviewer, if I see no +1 from any Hyper-V people on a patch, I am
inclined to come back to it later. Like this one.




target_iqn is referenced before assignment after exceptions in
 hyperv/volumeop.py attch_volume()
https://bugs.launchpad.net/nova/+bug/1233837
https://review.openstack.org/#/c/49259/

  --Neutron--

  Waiting for review

ml2 plugin may let hyperv agents ports to build status

https://bugs.launchpad.net/neutron/+bug/1224991
https://review.openstack.org/#/c/48306/



  The following two bugs are still requiring some work, but will be done
 in the next days.

Hyper-V fails to spawn snapshots

 https://bugs.launchpad.net/nova/+bug/1234759
https://review.openstack.org/#/c/50439/

VHDX snapshot from Hyper-V driver is bigger than original instance

https://bugs.launchpad.net/nova/+bug/1231911
https://review.openstack.org/#/c/48645/


  As usual, thanks for your help!

  Alessandro




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 6:57 PM, Matt Riedemann mrie...@us.ibm.com wrote:

 Getting integration testing hooked up for the hyper-v driver with tempest
 should go a long way here which is a good reason to have it.  As has been
 mentioned, there is a core team of people that understand the internals of
 the hyper-v driver and the subtleties of when it won't work, and only those
 with a vested interest in using it will really care about it.

 My team has the same issue with the powervm driver.  We don't have
 community integration testing hooked up yet.  We run tempest against it
 internally so we know what works and what doesn't, but besides standard
 code review practices that apply throughout everything (strong unit test
 coverage, consistency with other projects, hacking rules, etc), any other
 reviewer has to generally take it on faith that what's in there works as
 it's supposed to.  Sure, there is documentation available on what the
 native commands do and anyone can dig into those to figure it out, but I
 wouldn't expect that low-level of review from anyone that doesn't regularly
 work on the powervm driver.  I think the same is true for anything here.
  So the equalizer is a rigorously tested and broad set of integration
 tests, which is where we all need to get to with tempest and continuous
 integration.


Well said, I couldn't agree more!



 We've had the same issues as mentioned in the original note about things
 slipping out of releases or taking a long time to get reviewed, and we've
 had to fork code internally because of it which we then have to continue to
 try and get merged upstream - and it's painful, but it is what it is,
 that's the nature of the business.

 Personally my experience has been that the more I give the more I get.
  The more I'm involved in what others are doing and the more I review
 other's code, the more I can build a relationship which is mutually
 beneficial.  Sometimes I can only say 'hey, you need unit tests for this or
 this doesn't seem right but I'm not sure', but unless you completely
 automate code coverage metrics and build that back into reviews, e.g. does
 your 1000 line blueprint have 95% code coverage in the tests, you still
 need human reviewers on everything, regardless of context.  Even then it's
 not going to be enough, there will always be a need for people with a
 broader vision of the project as a whole that can point out where things
 are going in the wrong direction even if it fixes a bug.

 The point is I see both sides of the argument, I'm sure many people do.
  In a large complicated project like this it's inevitable.  But I think the
 quality and adoption of OpenStack speaks for itself and I believe a key
 component of that is the review system and that's only as good as the
 people which are going to uphold the standards across the project.  I've
 been on enough development projects that give plenty of lip service to code
 quality and review standards which are always the first thing to go when a
 deadline looms, and those projects are always ultimately failures.



 Thanks,

 *MATT RIEDEMANN*
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development
 --
  *Phone:* 1-507-253-7622 | *Mobile:* 1-507-990-1889*
 E-mail:* *mrie...@us.ibm.com* mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States





 From:Tim Smith tsm...@gridcentric.com
 To:OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
 Date:10/10/2013 07:48 PM
 Subject:Re: [openstack-dev] [Hyper-V] Havana status
 --



 On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant 
 *rbry...@redhat.com*rbry...@redhat.com
 wrote:

 Please understand that I only want to help here.  Perhaps a good way for
 you to get more review attention is get more karma in the dev community
 by helping review other patches.  It looks like you don't really review
 anything outside of your own stuff, or patches that touch hyper-v.  In
 the absence of significant interest in hyper-v from others, the only way
 to get more attention is by increasing your karma.

 NB: I don't have any vested interest in this discussion except that I want
 to make sure OpenStack stays Open, i.e. inclusive. I believe the concept
 of reviewer karma, while seemingly sensible, is actually subtly counter
 to the goals of openness, innovation, and vendor neutrality, and would also
 lead to overall lower commit quality.

 Brian Kernighan famously wrote: Debugging is twice as hard as writing the
 code in the first place. A corollary is that constructing a mental model
 of code is hard; perhaps harder than writing the code in the first place.
 It follows that reviewing code is not an easy task, especially if one has
 not been intimately involved in the original development of the code under
 review. In fact, if a reviewer is not intimately familiar with the code
 under review, and therefore only

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Joe Gordon
On Thu, Oct 10, 2013 at 5:43 PM, Tim Smith tsm...@gridcentric.com wrote:

 On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant rbry...@redhat.comwrote:


 Please understand that I only want to help here.  Perhaps a good way for
 you to get more review attention is get more karma in the dev community
 by helping review other patches.  It looks like you don't really review
 anything outside of your own stuff, or patches that touch hyper-v.  In
 the absence of significant interest in hyper-v from others, the only way
 to get more attention is by increasing your karma.


 NB: I don't have any vested interest in this discussion except that I want
 to make sure OpenStack stays Open, i.e. inclusive. I believe the concept
 of reviewer karma, while seemingly sensible, is actually subtly counter
 to the goals of openness, innovation, and vendor neutrality, and would also
 lead to overall lower commit quality.


The way I see it there are a few parts to 'karma' including:

* The ratio of reviewers to open patches is way off. In nova there are only
21 reviewers who have done on average two reviews a day for the past 30
days [1], and there are 226 open reviews, 125 of which are waiting for a
reviewer.  So one part of the karma is helping out the team as a whole with
the review work load (and the more insightful the review the better).  If
we have more reviewers, more patches get looked at faster.
* The more I see someone being active, through reviews or through patches,
the more I trust their +1/-1s and patches.


While there are some potentially negative sides to karma, I don't see how
the properties above, which to me are the major elements of karma, can be
considered negative.


[1] http://www.russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] http://www.russellbryant.net/openstack-stats/nova-openreviews.html



 Brian Kernighan famously wrote: Debugging is twice as hard as writing the
 code in the first place. A corollary is that constructing a mental model
 of code is hard; perhaps harder than writing the code in the first place.
 It follows that reviewing code is not an easy task, especially if one has
 not been intimately involved in the original development of the code under
 review. In fact, if a reviewer is not intimately familiar with the code
 under review, and therefore only able to perform the functions of human
 compiler and style-checker (functions which can be and typically are
 performed by automatic tools), the rigor of their review is at best
 less-than-ideal, and at worst purely symbolic.


FWIW, we have automatic style-checking.



 It is logical, then, that a reviewer should review changes to code that
 he/she is familiar with. Attempts to gamify the implicit review
 prioritization system through a karma scheme are sadly doomed to fail, as
 contributors hoping to get their patches reviewed will have no option but
 to build karma reviewing patches in code they are unfamiliar with,
 leading to a higher number of low quality reviews.

 So, if a cross-functional karma system won't accomplish the desired
 result (high-quality reviews of commits across all functional units), what
 will it accomplish (besides overall lower commit quality)?

 Because the karma system inherently favors entrenched (read: heavily
 deployed) code, it forms a slippery slope leading to a mediocre
 one-size-fits-all stack, where contributors of new technologies,
 approaches, and hardware/software drivers will see their contributions die
 on the vine due to lack of core reviewer attention. If the driver team for
 a widely deployed hypervisor (outside of the OpenStack space - they can't
 really be expected to have wide OpenStack deployment without a mature
 driver) is having difficulty with reviews due to an implicit karma
 deficit, imagine the challenges that will be faced by the future
 SDN/SDS/SDx innovators of the world hoping to find a platform for their
 innovation in OpenStack.

 Again, I don't have any vested interest in this discussion, except that I
 believe the concept of reviewer karma to be counter to both software
 quality and openness. In this particular case it would seem that the
 simplest solution to this problem would be to give one of the hyper-v team
 members core reviewer status, but perhaps there are consequences to that
 that elude me.


 Regards,
 Tim



 https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev