Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-05 Thread Prasad Vellanki
Sumit
Thanks for initiating this, and also for the good discussion today on IRC.

My thoughts are that it is important to make this available to potential
users and customers as soon as possible so that we can get the necessary
feedback. Considering that the Neutron cores and community are battling
Nova parity and stability right now, I think it would be tough to get any
time for the incubator or a Neutron feature branch any time soon.
I think it would be better to move GBP into StackForge and then look at
the incubator or a feature branch when they become available.

prasadv


On Wed, Sep 3, 2014 at 9:07 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

 Hi,

 There's been a lot of lively discussion on GBP a few weeks back and we
 wanted to drive forward the discussion on this a bit more. As you
 might imagine, we're excited to move this forward so more people can
 try it out.  Here are the options:

 * Neutron feature branch: This presumably allows the GBP feature to be
 developed independently, and will perhaps help in faster iterations.
 There does seem to be a significant packaging issue [1] with this
 approach that hasn’t been completely addressed.

 * Neutron-incubator: This allows a path to graduate into Neutron, and
 will be managed by the Neutron core team. That said, the proposal is
 under discussion and there are still some open questions [2].

 * Stackforge: This allows the GBP team to make rapid and iterative
 progress, while still leveraging the OpenStack infra. It also provides
 option of immediately exposing the existing implementation to early
 adopters.

 None of the above options precludes moving to another at a later
 time.

 Which option do people think is preferable?

 (We could also discuss this in the weekly GBP IRC meeting on Thursday:
 https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)

 Thanks!

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html



Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-05 Thread Mandeep Dhami
I agree. Also, as this does not preclude using the incubator when it is
ready, this is a good way to start iterating on implementation in parallel
with those issues being addressed by the community.

In my view, the issues raised around the incubator were significant enough
(around packaging, handling of updates needed for horizon/heat/ceilometer,
handling of multiple feature branches, etc.) that we will probably need a
design session in Paris before a consensus emerges around a solution
for the incubator structure/usage. And if you are following the thread on
nova for 'Averting the Nova crisis ...', the final consensus might actually
BE to use separate stackforge projects for plugins anyway, and in that case
we will have a head start ;-)

Regards,
Mandeep
-


On Thu, Sep 4, 2014 at 10:59 PM, Prasad Vellanki 
prasad.vella...@oneconvergence.com wrote:

 Sumit
 Thanks for initiating this, and also for the good discussion today on IRC.

 My thoughts are that it is important to make this available to potential
 users and customers as soon as possible so that we can get the necessary
 feedback. Considering that the Neutron cores and community are battling
 Nova parity and stability right now, I think it would be tough to get any
 time for the incubator or a Neutron feature branch any time soon.
 I think it would be better to move GBP into StackForge and then look at
 the incubator or a feature branch when they become available.

 prasadv


 On Wed, Sep 3, 2014 at 9:07 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
 wrote:

 Hi,

 There's been a lot of lively discussion on GBP a few weeks back and we
 wanted to drive forward the discussion on this a bit more. As you
 might imagine, we're excited to move this forward so more people can
 try it out.  Here are the options:

 * Neutron feature branch: This presumably allows the GBP feature to be
 developed independently, and will perhaps help in faster iterations.
 There does seem to be a significant packaging issue [1] with this
 approach that hasn’t been completely addressed.

 * Neutron-incubator: This allows a path to graduate into Neutron, and
 will be managed by the Neutron core team. That said, the proposal is
 under discussion and there are still some open questions [2].

 * Stackforge: This allows the GBP team to make rapid and iterative
 progress, while still leveraging the OpenStack infra. It also provides
 option of immediately exposing the existing implementation to early
 adopters.

  None of the above options precludes moving to another at a
  later time.

  Which option do people think is preferable?

 (We could also discuss this in the weekly GBP IRC meeting on Thursday:
 https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)

 Thanks!

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html



[openstack-dev] [nova] [feature freeze exception] FFE for libvirt-disk-discard-option

2014-09-05 Thread Bohai (ricky)
Hi,

I'd like to ask for a feature freeze exception for the blueprint
libvirt-disk-discard-option.
https://review.openstack.org/#/c/112977/

approved spec:
https://review.openstack.org/#/c/85556/

The blueprint was approved, but its status was changed to Pending Approval
because of the feature freeze.
https://blueprints.launchpad.net/nova/+spec/libvirt-disk-discard-option

The patch has a +2 from a core reviewer and was pretty close to merging,
but the feature freeze came.

Best regards to you.
Ricky




Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sylvain Bauza


Le 05/09/2014 01:26, Jay Pipes a écrit :

On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

Basically +1 with what Daniel is saying (note that, as mentioned, a
side effect of our effort to split out the scheduler will help but
not solve this problem).


The difference between Dan's proposal and the Gantt split is that 
Dan's proposal features quite prominently the following:


== begin ==

 - The nova/virt/driver.py class would need to be much better
   specified. All parameters / return values which are opaque dicts
   must be replaced with objects + attributes. Completion of the
   objectification work is mandatory, so there is cleaner separation
   between virt driver impls & the rest of Nova.

== end ==
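
For illustration, a minimal stand-in (not actual Nova or
oslo.versionedobjects code; the class name and fields below are made up)
of what replacing an opaque dict with objects + attributes means at the
virt driver boundary:

    # Hypothetical example: a typed object instead of a free-form dict,
    # so the fields crossing the virt driver boundary are explicit and
    # can be validated and versioned.
    class ResourceClaim(object):
        def __init__(self, vcpus, memory_mb, root_gb):
            self.vcpus = vcpus
            self.memory_mb = memory_mb
            self.root_gb = root_gb

    # before: a driver method receives {'vcpus': 2, 'memory_mb': 4096, ...}
    #         and has to guess which keys are present
    # after:  it receives ResourceClaim(2, 4096, 40) with a fixed,
    #         documented set of attributes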

In other words, Dan's proposal above is EXACTLY what I've been saying 
needs to be done to the interfaces between nova-conductor, 
nova-compute, and nova-scheduler *before* any split of the scheduler 
code is even remotely feasible.


Splitting the scheduler out before this is done would not "help but not
solve" this problem -- it would instead make the problem worse, IMO.




Jay, we agreed on a plan to carry on; please be assured we're working on
it. See the Gantt meeting logs for my vision.



That said, I think this concern about clean interfaces also applies to this
thread: if we want to spin the virt drivers out of the Nova git repo,
that does require a cleanup of the interfaces, in particular in the
compute manager and the resource tracker, where a lot of bits are still
tightly coupled and not versioned (thanks to JSON dicts).


So, this effort requires at least one cycle, and as Dan stated, there is
urgency, so I think we need to identify a short-term solution which
doesn't require refactoring. My personal opinion is what Russell and
Thierry expressed, i.e. subteam delegation (to what I call half-cores)
for review iterations, with only approvals left to cores.


-Sylvain



Best,
-jay



Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sylvain Bauza


Le 05/09/2014 01:22, Michael Still a écrit :

On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange berra...@redhat.com wrote:

[Heavy snipping because of length]


The radical (?) solution to the nova core team bottleneck is thus to
follow this lead and split the nova virt drivers out into separate
projects and delegate their maintenance to new dedicated teams.

  - Nova becomes the home for the public APIs, RPC system, database
persistence and the glue that ties all this together with the
virt driver API.

  - Each virt driver project gets its own core team and is responsible
for dealing with review, merge & release of their codebase.

I think this is the crux of the matter. We're not doing a great job of
landing code at the moment, because we can't keep up with the review
workload.

So far we've had two proposals mooted:

  - slots / runways, where we try to rate limit the number of things
we're trying to review at once to maintain focus
  - splitting all the virt drivers out of the nova tree


Ahem, IIRC, there is a third proposal for Kilo:
 - create subteams of half-cores responsible for reviewing patch
iterations and sending approval requests to cores once they consider the
patch stable enough.


As I explained, it would free up reviewing time for cores without
losing control over what is being merged.


-Sylvain


Splitting the drivers out of the nova tree does come at a cost -- we'd
need to stabilise and probably version the hypervisor driver
interface, and that will encourage more out of tree drivers, which
is something we haven't historically wanted to do. If we did this split,
I think we need to acknowledge that we are changing policy there. It
also means that nova-core wouldn't be the ones holding the quality bar
for hypervisor drivers any more, I guess this would open the door for
drivers to more actively compete on the quality of their
implementations, which might be a good thing.

Both of these have interesting aspects, and I agree we need to do
_something_. I do wonder if there is a hybrid approach as well though.
For example, could we implement some sort of more formal lieutenant
system for drivers? We've talked about it in the past but never been
able to express how it would work in practise.

The last few days have been interesting as I watch FFEs come through.
People post explaining their feature, its importance, and the risk
associated with it. Three cores sign on for review. All of the ones
I've looked at have received active review since being posted. Would
it be bonkers to declare nova to be in permanent feature freeze? If
we could maintain the level of focus we see now, then we'd be getting
heaps more done than before.

These issues should very definitely be on the agenda for the design
summit, probably early in the week.

Michael






Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-05 Thread Stephen Wong
I agree with Prasad here.

There remain lots of unknowns about the Neutron incubator and its workflow
at this point, and the idea of a Neutron feature branch is at best in an
embryonic stage. It seems that, among the three options, the most
well-defined one is indeed stackforge.

When and if the Neutron incubator and/or feature branch process and policy
are better defined, we as a community can assess whether it makes sense for
GBP to apply for one of those programs. In the meantime, let's get this
feature in the hands of the early adopters and iterate on our
implementation and further enhancements accordingly.

- Stephen



On Thu, Sep 4, 2014 at 10:59 PM, Prasad Vellanki 
prasad.vella...@oneconvergence.com wrote:

 Sumit
 Thanks for initiating this, and also for the good discussion today on IRC.

 My thoughts are that it is important to make this available to potential
 users and customers as soon as possible so that we can get the necessary
 feedback. Considering that the Neutron cores and community are battling
 Nova parity and stability right now, I think it would be tough to get any
 time for the incubator or a Neutron feature branch any time soon.
 I think it would be better to move GBP into StackForge and then look at
 the incubator or a feature branch when they become available.

 prasadv


 On Wed, Sep 3, 2014 at 9:07 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
 wrote:

 Hi,

 There's been a lot of lively discussion on GBP a few weeks back and we
 wanted to drive forward the discussion on this a bit more. As you
 might imagine, we're excited to move this forward so more people can
 try it out.  Here are the options:

 * Neutron feature branch: This presumably allows the GBP feature to be
 developed independently, and will perhaps help in faster iterations.
 There does seem to be a significant packaging issue [1] with this
 approach that hasn’t been completely addressed.

 * Neutron-incubator: This allows a path to graduate into Neutron, and
 will be managed by the Neutron core team. That said, the proposal is
 under discussion and there are still some open questions [2].

 * Stackforge: This allows the GBP team to make rapid and iterative
 progress, while still leveraging the OpenStack infra. It also provides
 option of immediately exposing the existing implementation to early
 adopters.

 None of the above options precludes moving to another at a
 later time.

 Which option do people think is preferable?

 (We could also discuss this in the weekly GBP IRC meeting on Thursday:
 https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy)

 Thanks!

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html



Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-05 Thread masoom alam
Thanks Ajay

I corrected this earlier, but I am facing another problem. I will forward a
paste in a while.



On Friday, September 5, 2014, Ajay Kalambur (akalambu) akala...@cisco.com
wrote:

  Sorry, there was a typo in the patch: it should be @validation and not
 @(validation.
 Please change that in vm_perf.py.
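
 (For anyone puzzled by the error: a decorator is applied as @name or
 @name(...), so @(validation is a syntax error. A minimal stand-in, not
 Rally's actual code - the decorator and its argument below are made up -
 just to show the corrected form:)

     # hypothetical stand-in, not Rally's real validation module
     def validation(**checks):
         def wrapper(func):
             func._checks = checks   # record the declared checks on the scenario
             return func
         return wrapper

     @validation(required_services=['nova', 'neutron'])   # correct: @name(...)
     def boot_runcommand_delete():
         pass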

 Sent from my iPhone

 On Sep 4, 2014, at 7:51 PM, masoom alam masoom.a...@gmail.com wrote:

   Why is this happening when I apply the patch you sent:

  http://paste.openstack.org/show/106196/


 On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones rick.jon...@hp.com wrote:

 On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:

 Hi
 Looking into the following blueprint which requires that network
 performance tests be done as part of a scenario
 I plan to implement this using iperf and basically a scenario which
 includes a client/server VM pair


  My experience with netperf over the years has taught me that when there
 is just the single stream and pair of systems one won't actually know if
 the performance was limited by inbound, or outbound.  That is why the likes
 of

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_flavor.py

 and

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_quantum.py

 - apart from being poorly written python :) - will launch several instances
 of a given flavor and then run aggregate tests on the Instance Under Test.
 Those aggregate tests will include inbound, outbound, bidirectional,
 aggregate small packet and then a latency test.
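
 As a concrete illustration of driving such a client/server pair, a minimal
 sketch (assuming iperf v2 is installed on both VMs, an 'iperf -s' server is
 already running on each, and the SSH users/addresses below are placeholders)
 that measures both directions so inbound and outbound limits can be told
 apart:

     import subprocess

     def iperf_between(client_ssh, server_ip, seconds=10):
         # ssh into one VM and run an iperf client against the other VM;
         # assumes key-based ssh access and a running 'iperf -s' on the server
         cmd = ['ssh', client_ssh,
                'iperf', '-c', server_ip, '-t', str(seconds), '-f', 'm']
         return subprocess.check_output(cmd, universal_newlines=True)

     # A -> B and then B -> A
     print(iperf_between('cirros@198.51.100.11', '10.0.0.12'))
     print(iperf_between('cirros@198.51.100.12', '10.0.0.11'))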

 happy benchmarking,

 rick jones






-- 
Sent from noir


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-05 Thread Kevin Benton
Tl;dr - Neutron incubator is only a wiki page with many uncertainties. Use
StackForge to make progress and re-evaluate when the incubator exists.


I also agree that starting out in StackForge as a separate repo is a better
first step. In addition to the uncertainty around packaging and other
processes brought up by Mandeep, I really doubt the Neutron incubator is
going to have the review velocity desired by the group policy contributors.
I believe this will be the case based on the Neutron incubator patch
approval policy in conjunction with the nature of the projects it will
attract.

Due to the requirement for two core +2's in the Neutron incubator, moving
group policy there is hardly going to do anything to reduce the load on the
Neutron cores, who are in a similarly overloaded position to the Nova
cores.[1] Consequently, I wouldn't be surprised if patches to the Neutron
incubator receive even less core attention than the main repo simply
because their location outside of openstack/neutron will be a good reason
to treat them with a lower priority.

If you combine that with the fact that the incubator is designed to house
all of the proposed experimental features to Neutron, there will be a very
high volume of patches constantly being proposed to add new features, make
changes to features, and maybe even fix bugs in those features. This new
demand for reviewers will not be met by the existing core reviewers because
they will be busy with refactoring, fixing, and enhancing the core Neutron
code.

Even ignoring the review velocity issues, I see very little benefit to GBP
starting inside of the Neutron incubator. It doesn't guarantee any
packaging with Neutron and Neutron code cannot reference any incubator
code. It's effectively a separate repo without the advantage of being able
to commit code quickly.

There is one potential downside to not immediately using the Neutron
incubator. If the Neutron cores decide that all features must live in the
incubator for at least 2 cycles regardless of quality or usage in
deployments, starting outside in a StackForge project would delay the start
of the timer until GBP makes it into the incubator. However, this can be
considered once the incubator actually exists and starts accepting
submissions.

In summary, I think GBP should move to a StackForge project as soon as
possible so development can progress. A transition to the Neutron incubator
can be evaluated once it actually becomes something more than a wiki page.


1.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044872.html

--
Kevin Benton


On Thu, Sep 4, 2014 at 11:24 PM, Mandeep Dhami dh...@noironetworks.com
wrote:


 I agree. Also, as this does not preclude using the incubator when it is
 ready, this is a good way to start iterating on implementation in parallel
 with those issues being addressed by the community.

 In my view, the issues raised around the incubator were significant enough
 (around packaging, handling of updates needed for horizon/heat/ceilometer,
 handling of multiple feature branches, etc.) that we will probably need a
 design session in Paris before a consensus emerges around a solution
 for the incubator structure/usage. And if you are following the thread on
 nova for 'Averting the Nova crisis ...', the final consensus might actually
 BE to use separate stackforge projects for plugins anyway, and in that case
 we will have a head start ;-)

 Regards,
 Mandeep
 -


 On Thu, Sep 4, 2014 at 10:59 PM, Prasad Vellanki 
 prasad.vella...@oneconvergence.com wrote:

 Sumit
  Thanks for initiating this, and also for the good discussion today on IRC.

  My thoughts are that it is important to make this available to potential
  users and customers as soon as possible so that we can get the necessary
  feedback. Considering that the Neutron cores and community are battling
  Nova parity and stability right now, I think it would be tough to get any
  time for the incubator or a Neutron feature branch any time soon.
  I think it would be better to move GBP into StackForge and then look at
  the incubator or a feature branch when they become available.

 prasadv


 On Wed, Sep 3, 2014 at 9:07 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
  wrote:

 Hi,

 There's been a lot of lively discussion on GBP a few weeks back and we
 wanted to drive forward the discussion on this a bit more. As you
 might imagine, we're excited to move this forward so more people can
 try it out.  Here are the options:

 * Neutron feature branch: This presumably allows the GBP feature to be
 developed independently, and will perhaps help in faster iterations.
 There does seem to be a significant packaging issue [1] with this
 approach that hasn’t been completely addressed.

 * Neutron-incubator: This allows a path to graduate into Neutron, and
 will be managed by the Neutron core team. That said, the proposal is
 under discussion and there are still some open questions [2].

 * Stackforge: This allows the GBP team to make 

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Flavio Percoco
On 09/04/2014 07:08 PM, Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-04 06:01:45 -0700:
 On 09/04/2014 02:14 PM, Sean Dague wrote:
 On 09/04/2014 03:08 AM, Flavio Percoco wrote:
 Greetings,

 Last Tuesday the TC held the first graduation review for Zaqar. During
 the meeting some concerns arose. I've listed those concerns below with
 some comments hoping that it will help starting a discussion before the
 next meeting. In addition, I've added some comments about the project
 stability at the bottom and an etherpad link pointing to a list of use
 cases for Zaqar.

 # Concerns

 - Concern on operational burden of requiring NoSQL deploy expertise to
 the mix of openstack operational skills

 For those of you not familiar with Zaqar, it currently supports two NoSQL
 drivers - MongoDB and Redis - and those are the only drivers it
 supports for now.
 maintain a new (?) NoSQL technology in their system. Before expressing
 our thoughts on this matter, let me say that:

 1. By removing the SQLAlchemy driver, we basically removed the chance
 for operators to use an already-deployed OpenStack technology
 2. Zaqar won't be backed by any AMQP based messaging technology for
 now. Here's[0] a summary of the research the team (mostly done by
 Victoria) did during Juno
 3. We (OpenStack) used to require Redis for the zmq matchmaker
 4. We (OpenStack) also use memcached for caching and as the oslo
 caching lib becomes available - or a wrapper on top of dogpile.cache -
 Redis may be used in place of memcached in more and more deployments.
 5. Ceilometer's recommended storage driver is still MongoDB, although
 Ceilometer now has support for sqlalchemy. (Please correct me if I'm
 wrong).

 That being said, it's obvious we already, to some extent, promote some
 NoSQL technologies. However, for the sake of the discussion, let's assume
 we don't.

 I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
 keep avoiding these technologies. NoSQL technologies have been around
 for years and we should be prepared - including OpenStack operators - to
 support these technologies. Not every tool is good for all tasks - one
 of the reasons we removed the sqlalchemy driver in the first place -
 therefore it's impossible to keep a homogeneous environment for all
 services.

 With this, I'm not suggesting to ignore the risks and the extra burden
 this adds but, instead of attempting to avoid it completely by not
 evolving the stack of services we provide, we should probably work on
 defining a reasonable subset of NoSQL services we are OK with
 supporting. This will help making the burden smaller and it'll give
 operators the option to choose.

 [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/

 I've been one of the consistent voices concerned about a hard
 requirement on adding NoSQL into the mix. So I'll explain that thinking
 a bit more.

 I feel like, when the TC has made integration decisions previously, it
 has been about evaluating the project applying for integration against
 some specific criteria the project was told about at some point in the
 past. I think that's the wrong approach. It's a locally optimized
 approach that fails to ask the more interesting question.

 Is OpenStack better as a whole if this is a mandatory component of
 OpenStack? Better being defined as technically better (more features,
 less janky code work arounds, less unexpected behavior from the stack).
 Better from the sense of easier or harder to run an actual cloud by our
 Operators (taking into account what kinds of moving parts they are now
 expected to manage). Better from the sense of a better user experience
 in interacting with OpenStack as whole. Better from a sense that the
 OpenStack release will experience less bugs, less unexpected cross
 project interactions, an a greater overall feel of consistency so that
 the OpenStack API feels like one thing.

 https://dague.net/2014/08/26/openstack-as-layers/

 One of the interesting qualities of Layers 1 & 2 is they all follow an
 AMQP + RDBMS pattern (excepting swift). You can have a very effective
 IaaS out of that stack. They are the things that you can provide pretty
 solid integration testing on (and if you look at where everything stood
 before the new TC mandates on testing / upgrade that was basically what
 was getting integration tested). (Also note, I'll accept Barbican is
 probably in the wrong layer, and should be a Layer 2 service.)

 While large shops can afford to have a dedicated team to figure out how
 to make mongo or redis HA, provide monitoring, have a DR plan for when a
 hurricane requires them to flip datacenters, that basically means
 OpenStack heads further down the path of "only for the big folks". I
 don't want OpenStack to be only for the big folks, I want OpenStack to
 be for folks of all sizes. I really do want to have all the local small
 colleges around 

Re: [openstack-dev] [all] Design Summit reloaded

2014-09-05 Thread Thierry Carrez
Eoghan Glynn wrote:
 Am I missing some compelling advantage of moving all these emergent
 project-specific meetups to the Friday?

 One is that due to space limitations, we won't have nearly as many
 pods as in Atlanta (more like half or a third of them). Without one
 pod per program, the model breaks a bit.
 
 A-ha, OK.
 
 Will the subset of projects allocated a pod be fixed, or will the
 pod space float between projects as the week progresses?
 
 (for example, it's unlikely that a project will be using its pod
 space when its design session track is in-progress, so the pod could
 be passed on to another project)

We'll have to design some novel pod-switching algorithm, but I kinda
want to know how many pods we can have before we start designing. I'm
visiting the space again on Monday.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Nikola Đipanov
On 09/04/2014 07:42 PM, Murray, Paul (HP Cloud) wrote:
  
 
 
 
 Anyway, not enough to -1 it, but enough to at least say something.

 .. but I do not want to get into the discussion about software testing
 here, not the place really.

 However, I do think it is very harmful to respond to an FFE request with
 such blanket statements and generalizations, if only for the message it
 sends to the contributors (that we really care more about upholding our
 own myths as a community than about users and features).

 I believe you brought this up as one of your justifications for the FFE.
 When I read your statement it does sound as though you want to put
 experimental code in at the final release. I am sure that is not what
 you had in mind, but I am also sure you can understand Sean's point
 of view. His point is clear and pertinent to your request.

 As the person responsible for Nova in HP I will be interested to see how
 it operates in practice. I can assure you we will do extensive testing
 on it before it goes into the wild and we will not put it into practice
 if we are not happy.

 That is awesome and we as a project are lucky to have that! I would not
 want things put into practice that users can't use or see huge flaws with.

 I can't help but read this as you being OK with the feature going ahead,
 though :).

 Actually, let’s say I have no particular objection. Just thought Sean’s
 point is worth noting.

 Now, if this had been done as an extensible resource I could easily
 decouple deploying it from all the bug fixes that come through with the
 release. But that’s another matter…

Quick response as not to hijack the thread:

I think we all agree on the benefits of having resources you can turn
off and on at will.

The current implementation of it, however, has some glaring drawbacks
that made it impossible for me to base my work on it, that have been
discussed in detail on other threads and IRC heavily, hence we need to
rethink how to get there.

 
 Paul
 
 
 
 




Re: [openstack-dev] [Horizon][Keystone] Steps toward Kerberos and Federation

2014-09-05 Thread Marco Fargetta
Hi,

I am wondering whether the solution I was trying to sketch in the spec
https://review.openstack.org/#/c/96867/13 is not easier to implement
and manage than the steps highlighted up to step 2. Maybe the spec is not
quite there yet and should be improved (I will abandon it or move it to
Kilo as Marek suggests), but I think the overall scheme is better than
trying to complicate the communication between Horizon and Keystone, IMHO.

Step 3 is a different story, and it needs more evaluation of the
scenarios it opens up.

Cheers,
Marco

On Thu, Sep 04, 2014 at 05:37:38PM -0400, Adam Young wrote:
 While the Keystone team has made pretty good strides toward
 Federation for getting a Keystone token, we do not yet have a
 complete story for Horizon.  The same is true about Kerberos.  I've
 been working on this, and I want to inform the people that are
 interested in the approach, as well as get some feedback.
 
 My first priority has been Kerberos.  I have a proof of concept of
 this working, but the amount of hacking I had to do to
 Django-OpenStack-Auth (DOA) made me shudder:  it's fairly ugly.  A
 few discussions today have moved things such that I think I can
 clean up the approach.
 
 Phase 1.  DOA should be able to tell whether to use password or
 Kerberos in order to get a token from Keystone based on a variable
 set by the Apache web server;  mod_auth_kerb will set
 
 request.META['KRB5CCNAME']
 
 only in the kerberos case.  If it gets this variable, DOA will only
 do Kerberos.  If it does not, it will only do password.  There will
 be no fallback from Kerberos to password;  this is enforced by
 mod_auth_kerb, not something we can easily hack around in Django.
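
 (A minimal sketch of that Phase 1 decision - illustrative only, not the
 actual django_openstack_auth code, and the function name is made up:)

     def pick_auth_method(request):
         # mod_auth_kerb only sets KRB5CCNAME when Apache authenticated
         # the user via Kerberos; otherwise fall back to the password form
         if request.META.get('KRB5CCNAME'):
             return 'kerberos'
         return 'password'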
 
 That gets us Kerberos, but not Federation. Most of the code changes
 are common with what follows after:
 
 Phase 1.5.  Add an optional field to the password auth page that
 allows a user to log in with a token instead of userid/password.
 This can be a hidden field by default if we really want.  DOA now
 needs to be able to validate a token.  Since Horizon only cares
 about the hashed version of the tokens anyway, we only need to do online
 lookup, not PKI token validation.  In the successful response,
 Keystone will return the properly hashed version (SHA256 for example) of
 the PKI token for Horizon to cache.
 
 Phase 2.  Use AJAX to get a token from Keystone instead of sending
 the credentials to the client.  Then pass that token to Horizon in
 order to log in. This implies that Keystone has set up CORS support.
 This will open the door for Federation.  While it will provide a
 different path to Kerberos than stage 1, I think both are
 valuable approaches and will serve different use cases.
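
 (For illustration, the REST exchange such an AJAX call would perform,
 sketched here with python-requests rather than browser JavaScript; the
 Keystone endpoint and credentials are placeholders:)

     import json
     import requests

     KEYSTONE = 'http://keystone.example.com:5000/v3'
     body = {'auth': {'identity': {'methods': ['password'],
                                   'password': {'user': {
                                       'name': 'demo',
                                       'domain': {'id': 'default'},
                                       'password': 'secret'}}}}}
     resp = requests.post(KEYSTONE + '/auth/tokens',
                          headers={'Content-Type': 'application/json'},
                          data=json.dumps(body))
     token = resp.headers['X-Subject-Token']   # the token Horizon would be handed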
 
 Phase 3.  Never send your token to Horizon.  In this world, the
 browser, with full CORS support, makes AJAX calls direct to Nova,
 Glance, and all other services.
 
 Yep, this should be a cross project session for the summit.
 


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 02:56:04PM -0500, Kyle Mestery wrote:
 On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange berra...@redhat.com 
 wrote:
  Proposal / solution
  ===
 
  In the past Nova has spun out its volume layer to form the cinder
  project. The Neutron project started as an attempt to solve the
  networking space, and ultimately replace the nova-network. It
  is likely that the scheduler will be spun out to a separate project.
 
  Now Neutron itself has grown so large and successful that it is
  considering going one step further and spinning its actual drivers
  out of tree into standalone add-on projects [4]. I've heard on the
  grapevine that Ironic is considering similar steps for hardware
  drivers.
 
 I just wanted to note that this is a huge problem in Neutron, and it
 gets worse with each release as we add on more drivers and plugins
 which carry a maintenance cost without gaining any new reviewers from
 the companies who have the drivers. The rough plan I have for Neutron
 involves moving all non-Open Source drivers out of tree into a
 separate git repository. Your message has made me think that perhaps
 we in Neutron should go one step further and even remove the Open
 Source drivers, leaving the in-tree implementation as the only one
 there. Where we move these is the main issue. Given we have 20+
 drivers/plugins now, one git repository per driver/plugin won't scale,
 as we add 3-5 each cycle. So perhaps a single repository is the best
 idea here, with shared reviews from vendors across each other's code.

While I'll make no secret of my dislike for closed source software,
my feeling is that OpenStack as a project is explicitly welcoming
closed source software & vendors, not least by virtue of using a
more permissive Apache license instead of a strong copyleft license
like GPL. So given the project's stance, I'd not be in favour of
discriminating against drivers for closed source software.

In actual fact though, the premise of my proposal is the idea that
moving a driver out of tree will actually help its development by
giving its team much greater freedom & responsibility. So by only
moving out non-open source drivers, we'd arguably be putting the
in-tree open source drivers at a disadvantage ! I'm also very much
drawn to the idea that having separate repos will let us do more
targeted setup of CI test jobs, so each test job is actually
directly relevant to the code being tested.

I can see your concern about the number of drivers you have in
Neutron and the frequency with which more are added. We don't
have anywhere near this number in Nova and are not likely to
ever grow that much. If you did have 30 separate drivers and
thus 30 separate GIT repos though, the question to consider is
who is ultimately responsible for reviewing those drivers. If
each of those 30 drivers had their own self-organized team of
people the burden of 30 repos is not as bad as it seems, since
any one person would probably only be concerned with a couple
of git repos.  If you still see the single neutron core team
being responsible for each of those repos, then I can see that
having 30 repos would be a big burden. I don't think there is
a single right answer here for all OpenStack projects. It is
entirely conceivable that it might be best for Neutron to have
a single repo for a set of drivers, while being best for Nova
to have a separate repo for each driver.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Brainstorming summit sessions

2014-09-05 Thread Thierry Carrez
Michael Still wrote:
 On Thu, Sep 4, 2014 at 11:40 AM, Steve Gordon sgor...@redhat.com wrote:
 
 Did you have a specific goal/date in mind for when you might start to 
 finalize this list? I am guessing at least after the dust settles on J-3 and 
 possibly even the first RCs but just curious.
 
 Good question. Looking at the release calendar, I think we need this
 done by mid-October, but I'm not sure when ttx wants the schedule for
 the summit done by. So we have at least a few weeks, but I'll be more
 concrete when I know more details of summit scheduling.

We have until the week after release to finalize the summit schedule, so
plenty of time. Also there will be Kilo PTL elections in the middle of
this (end of September), and the Kilo themes are actually a Kilo PTL thing.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-05 Thread masoom alam
http://paste.openstack.org/show/106297/


On Fri, Sep 5, 2014 at 1:12 PM, masoom alam masoom.a...@gmail.com wrote:

 Thanks Ajay

  I corrected this earlier, but I am facing another problem. I will forward
  a paste in a while.



 On Friday, September 5, 2014, Ajay Kalambur (akalambu) akala...@cisco.com
 wrote:

   Sorry, there was a typo in the patch: it should be @validation and not
  @(validation.
  Please change that in vm_perf.py.

 Sent from my iPhone

 On Sep 4, 2014, at 7:51 PM, masoom alam masoom.a...@gmail.com wrote:

    Why is this happening when I apply the patch you sent:

  http://paste.openstack.org/show/106196/


 On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones rick.jon...@hp.com wrote:

 On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:

 Hi
 Looking into the following blueprint which requires that network
 performance tests be done as part of a scenario
 I plan to implement this using iperf and basically a scenario which
 includes a client/server VM pair


  My experience with netperf over the years has taught me that when there
 is just the single stream and pair of systems one won't actually know if
 the performance was limited by inbound, or outbound.  That is why the likes
 of

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_flavor.py

 and

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_quantum.py

  - apart from being poorly written python :) - will launch several instances
 of a given flavor and then run aggregate tests on the Instance Under Test.
 Those aggregate tests will include inbound, outbound, bidirectional,
 aggregate small packet and then a latency test.

 happy benchmarking,

 rick jones






 --
 Sent from noir



Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 10:44:17PM -0600, John Griffith wrote:
 Just some thoughts and observations I've had regarding this topic in Cinder
 the past couple of years.  I realize this is a Nova thread so hopefully
 some of this can be applied in a more general context.
 
 TLDR:
 1. I think moving drivers into their own repo is just shoveling the pile to
 make a new pile (not really solving anything)

I'm not familiar with Cinder, but for Nova it would certainly have clear
benefits and not merely be shoveling the pile. Specifically it would

 - Easily let us double the number of core reviewers on aggregate

 - Reduce the bar for getting into a driver core team thus increasing
   the talent pool we can promote from.

 - Work accepted in a release for one driver would not reduce the
   bandwidth for another driver to accept work, since their review
   teams are separate

 - We can have more targetted testing, which will reduce the amount
   of bogus gate failures people get when submitting reviews and
   allow every driver to have gating CI jobs without impacting the
   other drivers

 2. Removal of drivers other than the reference implementation for each
 project could be the healthiest option
 a. Requires transparent, public, automated 3'rd party CI
 b. Requires a TRUE plugin architecture and mentality
 c. Requires a stable and well defined API

As mentioned in the original mail I don't want to see a situation where
we end up with some drivers in tree and others out of tree as it sets up
bad dynamics within the project. Those out of tree will always have the
impression of being second class citizens and thus there will be constant
pressure to accept drivers back into tree. The so called 'reference'
driver that stayed in tree would also continue to be penalized in the
way it is today, and so its development would be disadvantaged compared
to the out of tree drivers.

 3. While I'm still sort of a fan of the removal of drivers, I do think
 Cinder is making it work, there have been missteps and yes it's a pain
 sometimes but it's working ok and we've got plans to try and improve
 
 4. Adding restrictions like drivers only in first milestone and more
 intense scrutinization of features will go a long way to help resolve the
 issues we do have currently

Not in nova at least. We have a fundamental bottleneck in nova and
simply re-arranging review priorities in this kind of way will never
fix it. We've tried many different approaches to prioritization of
work and the only result is that we've got more aggressive at saying
no to contributors. This is directly resulting in the crisis we have
today.

 I've spent a fair amount of time thinking about the explosive number of
 drivers being added to Cinder over the past year or so.  I've been a pretty
 vocal proponent of the idea of removing all drivers except the LVM
 reference implementation from Cinder.  I'd rather see Vendors drivers
 maintained in their own Github Repo and truly follow a plugin model.
  This of course means that Cinder has to be truly designed and maintained
 with a real plugin architecture kept in mind in every aspect of development
 (experience proves this harder to do than it sounds).  I think with things
 stable and well defined interfaces as well as 3'rd party CI this is
 actually a reasonable approach and could be effective.  I do not see how
 creating a separate repo and in essence yet another set of OpenStack
 Projects really helps with the problem.  The fact is that the biggest issue
 most people see with driver contributions is those that are made by
 organizations that work on their driver only and don't contribute back to
 the core project (whether that be in the form of reviews of core
 contributions).  I'm not sure I understand why that would be any different
 by just putting the code in a separate bucket.  In other words, getting a
 solid and consistent team working on that project seems like you've just
 kicked the can down the road so you don't have to deal with it.

Fundamentally people contributing to a project are doing so voluntarily
to scratch their own itch. The project leadership can help identify areas
that need work and encourage people to take up the challenge, but you
cannot force people to do the work. We've done many things in nova that
are basically inflicting a form of punishment on contributors if they
don't work on things we tell them to work on. This is not having a positive
effect; on the contrary, it is resulting in a lot of demotivated and pissed-off
contributors who are ultimately leaving the project.

I agree that splitting the virt drivers out into their own repositories is
not going to hugely help get more people to work on Nova core - that was
not the primary intention. The big focus is on unblocking development of
the virt drivers so that their contributors actually feel their efforts
are valued by the project. If we make the project a more attractive place
to work in general that will 

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Thierry Carrez
Tim Bell wrote:
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 04 September 2014 16:59
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Zaqar] Comments on the concerns arose during
 the TC meeting

 Sean Dague wrote:
 [...]
 So, honestly, I'll probably remain -1 on the final integration vote,
 not because Zaqar is bad, but because I'm feeling more firmly that for
 OpenStack to not leave the small deployers behind we need to redefine
 the tightly integrated piece of OpenStack to basically the Layer 1 & 2
 parts of my diagram, and consider the rest of the layers exciting
 parts of our ecosystem that more advanced users may choose to deploy
 to meet their needs. Smaller tent, big ecosystem, easier on ramp.

 I realize that largely means Zaqar would be caught up in a definition
 discussion outside of it's control, and that's kind of unfortunate, as
 Flavio and team have been doing a bang up job of late. But we need to
 stop considering integration as the end game of all interesting
 software in the OpenStack ecosystem, and I think it's better to have
 that conversation sooner rather than later.

 I think it's pretty clear at this point that:

 (1) we need to have a discussion about layers (base nucleus, optional extra
 services at the very least) and the level of support we grant to each -- the
 current binary approach is not working very well

 (2) If we accept Zaqar next week, it's pretty clear it would not fall in
 the base nucleus layer but more in an optional extra services layer,
 together with at the very least Trove and Sahara

 There are two ways of doing this: follow Sean's approach and -1 integration
 (and have zaqar apply to that optional layer when we create it), or +1
 integration now (and have zaqar follow whichever other integrated projects we
 place in that layer when we create it).

 I'm still hesitating on the best approach. I think they yield the same end
 result, but the -1 approach seems to be a bit more unfair, since it would
 be purely for reasons we don't (yet) apply to currently-integrated
 projects...

 
 The one concern I have with a small core is that there is not an easy way to 
 assess the maturity of a project on stackforge. The stackforge projects may 
 be missing packaging, Red Hat testing, puppet modules, install/admin 
 documentation etc. Thus, I need to have some indication that a project is 
 deployable before looking at it with my user community to see if it meets a 
 need that is sustainable.
 
 Do you see the optional layer services being blessed / validated in some 
 way and therefore being easy to identify ?

Yes, I think whatever exact shape this takes, it should convey some
assertion of stability to be able to distinguish itself from random
projects. Some way of saying this is good and mature, even if it's not
in the inner circle.

Being in "the integrated release" has been seen as a sign of stability
forever, while it was only ensuring integration with other projects
and OpenStack processes. We are getting better at requiring maturity
there, but if we set up layers, we'll have to get even better at that.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-05 Thread masoom alam
Please forward your vmtasks.py file.

On Friday, September 5, 2014, masoom alam masoom.a...@gmail.com wrote:

 http://paste.openstack.org/show/106297/


  On Fri, Sep 5, 2014 at 1:12 PM, masoom alam masoom.a...@gmail.com wrote:

 Thanks Ajay

  I corrected this earlier, but I am facing another problem. I will forward
  a paste in a while.



  On Friday, September 5, 2014, Ajay Kalambur (akalambu) akala...@cisco.com
  wrote:

   Sorry, there was a typo in the patch: it should be @validation and not
  @(validation.
  Please change that in vm_perf.py.

 Sent from my iPhone

 On Sep 4, 2014, at 7:51 PM, masoom alam masoom.a...@gmail.com wrote:

    Why is this happening when I apply the patch you sent:

  http://paste.openstack.org/show/106196/


 On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones rick.jon...@hp.com wrote:

 On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:

 Hi
 Looking into the following blueprint which requires that network
 performance tests be done as part of a scenario
 I plan to implement this using iperf and basically a scenario which
 includes a client/server VM pair


  My experience with netperf over the years has taught me that when
 there is just the single stream and pair of systems one won't actually
 know if the performance was limited by inbound, or outbound.  That is why
 the likes of

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_flavor.py

 and

 http://www.netperf.org/svn/netperf2/trunk/doc/examples/
 netperf_by_quantum.py

  - apart from being poorly written python :) - will launch several
 instances of a given flavor and then run aggregate tests on the Instance
 Under Test.  Those aggregate tests will include inbound, outbound,
 bidirectional, aggregate small packet and then a latency test.

 happy benchmarking,

 rick jones






 --
 Sent from noir




-- 
Sent from noir


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 12:57:57PM -0700, Joe Gordon wrote:
 On Thu, Sep 4, 2014 at 3:24 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
  Proposal / solution
  ===
 
  In the past Nova has spun out its volume layer to form the cinder
  project. The Neutron project started as an attempt to solve the
  networking space, and ultimately replace the nova-network. It
  is likely that the scheduler will be spun out to a separate project.
 
  Now Neutron itself has grown so large and successful that it is
  considering going one step further and spinning its actual drivers
  out of tree into standalone add-on projects [4]. I've heard on the
  grapevine that Ironic is considering similar steps for hardware
  drivers.
 
  The radical (?) solution to the nova core team bottleneck is thus to
  follow this lead and split the nova virt drivers out into separate
  projects and delegate their maintenance to new dedicated teams.
 
   - Nova becomes the home for the public APIs, RPC system, database
 persistence and the glue that ties all this together with the
 virt driver API.
 
   - Each virt driver project gets its own core team and is responsible
  for dealing with review, merge & release of their codebase.
 
 
 Overall I do think we need to re-think how the review burden is
 distributed. That being said, this is a nice proposal but I am not sure if
 it moves the review burden around enough or is the right approach. Do you
 have any rough numbers on what percent of the review burden goes to virt
 drivers today (how ever you want to define that statement, number of merged
 patches, man hours, lines of code, number of reviews, etc.). If for example
 today the nova review team spends 10% of there review time on virt drivers
 then I don't think this proposal will have a significant impact on the
 review backlog (for nova-common).

I'm a little wary of doing too many stats on things like reviews and
patches, because I fear it does not capture the full picture. Specifically
we're turning away contributors before they ever get to the point of
submitting reviews / patches, by rejecting their blueprints/specs.
Also the difficultly of getting stuff reviewed is discouraging people
even considering doing alot of work in the first place - if I had had the
confidence in getting it reviewed & merged I would easily have submitted
twice as much code to libvirt this cycle, but as it was I didn't even
start work on most things I would have liked to.

That said though, in the past 6 months we had 1385 changes merged.
Of those, 437 touched at least one file in the /virt/ directory
which is approximately 30%.

I agree though, this proposal will not have a dramatic effect on
the review backlog for the nova common code. It would probably be
a small (but noticable) improvement - most of the benefit would
fall on the virt drivers I expect. If we can make Nova a more
productive & enjoyable place to contribute though, this should
ultimately feed through into more people being involved in general
and thus more resource available to nova common too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-05 Thread Xu Han Peng

Carl,

Seems so. I think GARPs for the internal router interface and the external
gateway port are taken care of by keepalived during failover. And if HA is
not enabled, _send_gratuitous_arp is called to send out the GARP.


I think we will need to take care of IPv6 for both cases since keepalived
1.2.0 supports IPv6. This may need a separate BP. For the case where HA is
enabled, we still need an unsolicited neighbor advertisement for external
gateway failover. But for the internal router interface, since Router
Advertisements are automatically sent out by RADVD after failover, we don't
need to send out neighbor advertisements anymore.
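
(A hedged sketch of such an unsolicited neighbor advertisement, the IPv6
analogue of a gratuitous ARP, built with scapy rather than Neutron code;
the interface name and addresses below are placeholders:)

    from scapy.all import (Ether, IPv6, ICMPv6ND_NA,
                           ICMPv6NDOptDstLLAddr, sendp)

    iface = 'qg-xxxx'              # router gateway port (placeholder)
    gw_ip = '2001:db8::1'          # gateway address being taken over
    gw_mac = 'fa:16:3e:00:00:01'   # new master's MAC

    na = (Ether(src=gw_mac, dst='33:33:00:00:00:01') /    # all-nodes mcast MAC
          IPv6(src=gw_ip, dst='ff02::1') /                # all-nodes multicast
          ICMPv6ND_NA(tgt=gw_ip, R=1, S=0, O=1) /         # unsolicited, override
          ICMPv6NDOptDstLLAddr(lladdr=gw_mac))
    sendp(na, iface=iface)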


Xu Han


On 09/05/2014 03:04 AM, Carl Baldwin wrote:

Hi Xu Han,

Since I sent my message yesterday there has been some more discussion
in the review on that patch set.  See [1] again.  I think your
assessment is likely correct.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng pengxu...@gmail.com wrote:

Carl,

Thanks a lot for your reply!

If I understand correctly, in the VRRP case keepalived will be responsible for
sending out GARPs? By checking the code you provided, I can see all the
_send_gratuitous_arp_packet calls are wrapped in an "if not is_ha" condition.

Xu Han



On 09/04/2014 06:06 AM, Carl Baldwin wrote:

It should be noted that send_arp_for_ha is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.
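
For reference, that behaviour boils down to invoking arping in gratuitous
reply mode from inside the router's namespace, roughly along the lines of
the sketch below. This is illustrative rather than the actual l3-agent code,
and the interface name is made up.

    # Sketch only: "arping -A" emits gratuitous ARP *replies* ("I am here"),
    # which is what send_arp_for_ha relies on; the real agent runs this
    # inside the router's network namespace.
    import subprocess

    def send_gratuitous_arp(interface, ip_address, count=3):
        cmd = ['arping', '-A', '-I', interface, '-c', str(count), ip_address]
        subprocess.check_call(cmd)

    # e.g. send_gratuitous_arp('qg-12345678-9a', '203.0.113.10')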

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng pengxu...@gmail.com wrote:

Anthony,

Thanks for your reply.

If an HA method like VRRP is used for an IPv6 router, then according to the
VRRP RFC with IPv6 included, the servers should be auto-configured with the
active router's LLA as the default route before the failover happens and
should still retain that route after the failover. In other words, there
should be no need to use two LLAs for the default route of a subnet unless
load balancing is required.

When the backup router becomes the master router, it should be responsible
for immediately sending out an unsolicited ND neighbor advertisement with the
associated LLA (the previous master's LLA) to update the bridge learning
state, and for sending out router advertisements with the same options as the
previous master to maintain the route and bridge learning.

This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
actions backup router should take after failover is documented here:
http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
messaging sending and periodic message sending is documented here:
http://tools.ietf.org/html/rfc5798#section-2.4

Since the keepalived manager support for L3 HA is merged
(https://review.openstack.org/#/c/68142/43) and keepalived release 1.2.0
supports VRRP IPv6 features (http://www.keepalived.org/changelog.html, see
Release 1.2.0 | VRRP IPv6 Release), I think we can check whether keepalived
can satisfy our requirement here and whether that will cause any conflicts
with RADVD.

Thoughts?

Xu Han


On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



Anthony and Robert,

Thanks for your reply. I don't know if the arping is there for NAT, but I am
pretty sure it's for the HA setup to broadcast the router's own change, since
the arping is controlled by the send_arp_for_ha config. By checking the man
page of arping, you can find that the "arping -A" we use in code sends out an
ARP REPLY instead of an ARP REQUEST. This is like saying "I am here" instead
of "where are you". I didn't realize this either until Brian pointed this out
in my code review below.


That’s what I was trying to say earlier.  Sending out the RA is the same
effect.  RA says “I’m here, oh and I’m also a router” and should supersede
the need for an unsolicited NA.  The only thing to consider here is that RAs
are from LLAs.  If you’re doing IPv6 HA, you’ll need to have two gateway IPs
for the RA of the standby to work.  So far as I know, I think there’s still
a bug out on this since you can only have one gateway per subnet.



http://linux.die.net/man/8/arping

https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py

Thoughts?

Xu Han


On 08/27/2014 10:01 PM, Veiga, Anthony wrote:


Hi Xuhan,

What I saw is that GARP is sent to the gateway port and also to the router
ports, from a neutron router. I’m not sure why it’s sent to the router ports
(internal network). My understanding for arping to the gateway port is that
it is needed for proper NAT operation. Since we are 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 06:48:33PM -0400, Russell Bryant wrote:
 On 09/04/2014 06:24 AM, Daniel P. Berrange wrote:
  Position statement
  ==
  
  Over the past year I've increasingly come to the conclusion that
  Nova is heading for (or probably already at) a major crisis. If
  steps are not taken to avert this, the project is likely to lose
  a non-trivial amount of talent, both regular code contributors and
  core team members. That includes myself. This is not good for
  Nova's long term health and so should be of concern to anyone
  involved in Nova and OpenStack.
  
  For those who don't want to read the whole mail, the executive
  summary is that the nova-core team is an unfixable bottleneck
  in our development process with our current project structure.
  The only way I see to remove the bottleneck is to split the virt
  drivers out of tree and let them all have their own core teams
  in their area of code, leaving current nova core to focus on
  all the common code outside the virt driver impls. I now nonetheless
  urge people to read the whole mail.
 
 Fantastic write-up.  I can't +1 enough the problem statement, which I
 think you've done a nice job of framing.  We've taken steps to try to
 improve this, but none of them have been big enough.  I feel we've
 reached a tipping point.  I think many others do too, and several
 proposals being discussed all seem rooted in this same core issue.
 
 When it comes to the proposed solution, I'm +1 on that too, but part of
 that is that it's hard for me to ignore the limitations placed on us by
 our current review infrastructure (gerrit).
 
 If we ignored gerrit for a moment, is rapid increase in splitting out
 components the ideal workflow?  Would we be better off finding a way to
 finally just implement a model more like the Linux kernel with
 sub-system maintainers and pull requests to a top-level tree?  Maybe.
 I'm not convinced that split of repos is obviously better.
 
 You make some good arguments for why splitting has other benefits.

For a long time I've used the LKML 'subsystem maintainers' model as the
reference point for ideas. In a more LKML-like model, each virt team
(or other subsystem team) would have their own separate GIT repo with
a complete Nova codebase, where they did their day-to-day code submissions,
reviews and merges. Periodically the primary subsystem maintainer would
submit a large pull / merge request to the overall Nova maintainer.
The $1,000,000 question in such a model is what kind of code review
happens during the big pull requests to integrate subsystem trees. 

The closest example I can see is what's happening with the Ironic
driver merge reviews. I'm personally finding review of that to be
quite a burdensome activity, because all comments on the merge
review then get fed back to the original maintainers who do a new
round of patch + review in Ironic tree and then we get a new version
submitted back to nova tree for merge. Rinse, repeat.

So my biggest fear with a model where each team had their own full
Nova tree and did large pull requests, is that we'd suffer major
pain during the merging of large pull requests, especially if any
of the merges touched common code. It could make the pull requests
take a really long time to get accepted into the primary repo.

By contrast, with split-out git repos for the virt driver code, we will
only ever have one stage of code review for each patch. Changes to
common code would go straight to main nova common repo and so get
reviewed by the experts there without delay, avoiding the 2nd stage
of review from merge requests.

The more I think about this, the more attracted I am to the idea
that separate repos will facilitate us doing more targeted testing
and allow 3rd party CI to become gating over their respective virt
driver codebases.

Finally the LKML model would still leave some drivers at a disadvantage
for development, if they're not able to meet the standards we require
in terms of CI testing, to be accepted into the primary repo.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Gordon Sim

On 09/04/2014 09:44 PM, Kurt Griffiths wrote:

Thanks for your comments Gordon. I appreciate where you are coming from
and I think we are actually in agreement on a lot of things.

I just want to make it clear that from the very beginning of the project
the team has tried to communicate (but perhaps could have done a better
job at it) that we aren’t trying to displace other messaging systems that
are clearly delivering a lot of value today.

In fact, I personally have long been a proponent of using the best tool
for the job. The Zaqar project was kicked off at an unconference session
several summits ago because the community saw a need that was not covered
by other messaging systems. Does that mean those other systems are “bad”
or “wrong”? Of course not. It simply means that there are some cases where
those other systems aren’t the best tool for the job, and another tool is
needed (and vice versa).


I think communicating that unmet need, those use-cases not best served 
by other systems, would help a lot in clarifying Zaqar's intended role.



Does that other tool look *exactly* like Zaqar? Probably not. But a lot of
people have told us Zaqar--in its current form--already delivers a lot of
value that they can’t get from other messaging systems that are available
to them. Zaqar, like any open source project, is a manifestation of lots
of peoples' ideas, and will evolve over time to meet the needs of the
community.

Does a Qpid/Rabbit/Kafka provisioning service make sense? Probably. Would
such a service totally overlap in terms of use-cases with Zaqar? Community
feedback suggests otherwise. Will there be some other kind of thing that
comes out of the woodwork? Possibly. (Heck, if something better comes
along I for one have no qualms in shifting resources to the more elegant
solution--again, use the best tool for the job.) This process happens all
the time in the broader open-source world. But this process takes a
healthy amount of time, plus broad exposure and usage, which is something
that you simply don’t get as a non-integrated project in the OpenStack
ecosystem.

 In any case, it’s pretty clear to me that Zaqar graduating should not be
 viewed as making it “the officially blessed messaging service for the
 cloud” and nobody is allowed to have any other ideas, ever.


Indeed, and to be clear, I wasn't really commenting on the graduation at 
all. I was really just responding to the statements on scope and 
differentiation; 'messaging service for the cloud' is a very broad 
problem space and as you rightly point out there may be different tools 
that best serve different parts of that problem space.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Christopher Yeoh
On Thu, 4 Sep 2014 11:24:29 +0100
Daniel P. Berrange berra...@redhat.com wrote:
 
  - A fairly significant amount of nova code would need to be
considered semi-stable API. Certainly everything under nova/virt
and any object which is passed in/out of the virt driver API.
Changes to such APIs would have to be done in a backwards
compatible manner, since it is no longer possible to lock-step
change all the virt driver impls. In some ways I think this would
be a good thing as it will encourage people to put more thought
into the long term maintainability of nova internal code instead
of relying on being able to rip it apart later, at will.
 
  - The nova/virt/driver.py class would need to be much better
specified. All parameters / return values which are opaque dicts
must be replaced with objects + attributes. Completion of the
objectification work is mandatory, so there is cleaner separation
between virt driver impls  the rest of Nova.

I think for this to work well with multiple repositories and drivers
having different priorities over implementing changes in the API it
would not just need to be semi-stable, but stable with versioning built
in from the start to allow for backwards incompatible changes. And
the interface would have to be very well documented including things
such as what exceptions are allowed to be raised through the API.
Hopefully this would be enforced through code as well. But as long as
driver maintainers are willing to commit to this extra overhead I can
see it working. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-09-05 Thread Rao Dingyuan
Hi folks,

Is there anybody working on this?

In most of our cloud environments, business networks are isolated from the
management network. So we are thinking about making *an agent in the guest
machine that sends metrics to the compute node over a virtual serial port*.
The compute node could then send that data to ceilometer. That seems like a
general solution for all kinds of network topologies, and it can send
metrics without knowing any credentials.
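
As a rough illustration of the guest side of that idea, the sketch below
writes samples to a virtio-serial channel. This is hedged: the channel name
is made up, psutil is just an assumed way to gather numbers, and it is not
an existing ceilometer agent.

    # Sketch: periodically write metric samples as JSON lines to a
    # virtio-serial channel; the compute node would read the matching host
    # character device and forward the samples to ceilometer.
    import json
    import time

    import psutil  # assumed to be available inside the guest

    CHANNEL = '/dev/virtio-ports/org.example.guest_metrics.0'  # hypothetical

    def publish_metrics(interval=60):
        with open(CHANNEL, 'w') as chan:
            while True:
                sample = {
                    'timestamp': time.time(),
                    'cpu_percent': psutil.cpu_percent(interval=None),
                    'mem_used_bytes': psutil.virtual_memory().used,
                }
                chan.write(json.dumps(sample) + '\n')
                chan.flush()
                time.sleep(interval)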


BR
Kurt Rao


-Original Message-
From: boden [mailto:bo...@linux.vnet.ibm.com] 
Sent: 1 August 2014 20:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - 
potential enhancement

On 8/1/2014 4:37 AM, Eoghan Glynn wrote:


 Heat cfntools is based on SSH, so I assume it requires TCP/IP
 connectivity between the VM and the central agent (or collector). But in
 the cloud, some networks are isolated from the infrastructure-layer
 network for security reasons. Some of our customers even
 explicitly require such security protection. Does that mean those
 isolated VMs cannot be monitored by this proposed VM agent?

 Yes, that sounds plausible to me.

My understanding is that this VM agent for ceilometer would need connectivity 
to nova API as well as to the AMQP broker. IMHO the infrastructure requirements 
from a network topology POV will differ from provider to provider and based on 
customer reqs / env.


 Cheers,
 Eoghan

 I really wish we could figure out how it could work for all VMs but with
 no security issues.

 I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


 Best regards!
 Kurt

 -Original Message-
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: 1 August 2014 14:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector 
 - potential enhancement



 Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.

 For consumers wanting to leverage ceilometer as a telemetry service 
 atop non-OpenStack Clouds or infrastructure they don't own, some 
 edge cases crop up. Most notably the consumer may not have access to 
 the hypervisor host and therefore cannot leverage the ceilometer 
 compute agent on a per host basis.

 Yes, currently such access to the hypervisor host is required, at least
 in the case of the libvirt-based inspector.

 In such scenarios it's my understanding the main option is to employ 
 the central agent to poll measurements from the monitored resources 
 (VMs, etc.).

 Well, the ceilometer central agent is not generally concerned with
 polling related *directly* to VMs - rather it handles acquiring
 data from RESTful APIs (glance, neutron etc.) that are not otherwise
 available in the form of notifications, and also from host-level interfaces
 such as SNMP.


Thanks for the additional clarity. Perhaps this proposed local VM agent fills
additional use cases where ceilometer is being used without OpenStack
proper (e.g. not a full set of OpenStack-compliant services like neutron,
glance, etc.).

 However this approach requires Cloud APIs (or other mechanisms) 
 which allow the polling impl to obtain the desired measurements (VM 
 memory, CPU, net stats, etc.) and moreover the polling approach has
 its own set of pros / cons from an arch / topology perspective.

 Indeed.

 The other potential option is to setup the ceilometer compute agent 
 within the VM and have each VM publish measurements to the collector
 -- a local VM agent / inspector if you will. With respect to this 
 local VM agent approach:
 (a) I haven't seen this documented to date; is there any desire / 
 reqs to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for 
 this approach?

 So in a sense this is similar to the Heat cfn-push-stats utility[1] 
 and seems to suffer from the same fundamental problem, i.e. the need 
 for injection of credentials (user/passwds, keys, whatever) into the 
 VM in order to allow the metric datapoints be pushed up to the 
 infrastructure layer (e.g. onto the AMQP bus, or to a REST API endpoint).

 How would you propose to solve that credentialing issue?


My initial approximation would be to target use cases where end users do not 
have direct guest access or have limited guest access such that their UID / GID 
cannot access the conf file. For example instances which only provide app 
access provisioned using heat SoftwareDeployments
(http://tinyurl.com/qxmh2of) or trove database instances.

In general I don't see this approach from a security POV as much different
from what's done with the trove guest agent (http://tinyurl.com/ohvtmtz).

Longer term perhaps credentials could be mitigated using Barbican as suggested 
here: https://bugs.launchpad.net/nova/+bug/1158328

 Cheers,
 Eoghan

 [1]
 https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-s
 tats

 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Christopher Yeoh
On Thu, 4 Sep 2014 12:57:57 -0700
Joe Gordon joe.gord...@gmail.com wrote:

 
 Overall I do think we need to re-think how the review burden is
 distributed. That being said, this is a nice proposal but I am not
 sure if it moves the review burden around enough or is the right
 approach. Do you have any rough numbers on what percent of the review
 burden goes to virt drivers today (however you want to define that
 statement, number of merged patches, man hours, lines of code, number
 of reviews, etc.). If for example today the nova review team spends
 10% of their review time on virt drivers then I don't think this
 proposal will have a significant impact on the review backlog (for
 nova-common).

Even if it doesn't have a huge impact on the review backlog for
nova-common (I think it should at least help a bit) it does have the
potential to make life much easier for the virt driver developers. 

I think my main concern is around testing - as soon as we have multiple
repositories involved I think debugging of test failures
(especially races) tends to get more complicated and we have fewer
people who are familiar enough with the two code bases. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-05 Thread Thierry Carrez
Hi everyone,

We just hit feature freeze[1], so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze[2], so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Finally, this is also DepFreeze[3], so you should avoid adding new
dependencies (bumping oslo or openstack client libraries is OK until
RC1). If you have a new dependency to add, raise a thread on
openstack-dev about it.

The juno-3 development milestone was tagged; it contains more than 135
features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
(not even counting the Oslo libraries in the mix). You can find the full
list of new features and fixed bugs, as well as tarball downloads, at:

https://launchpad.net/keystone/juno/juno-3
https://launchpad.net/glance/juno/juno-3
https://launchpad.net/nova/juno/juno-3
https://launchpad.net/horizon/juno/juno-3
https://launchpad.net/neutron/juno/juno-3
https://launchpad.net/cinder/juno/juno-3
https://launchpad.net/ceilometer/juno/juno-3
https://launchpad.net/heat/juno/juno-3
https://launchpad.net/trove/juno/juno-3
https://launchpad.net/sahara/juno/juno-3

Many thanks to all the PTLs and release management liaisons who made us
reach this important milestone in the Juno development cycle. Thanks in
particular to John Garbutt, who keeps on doing an amazing job at the
impossible task of keeping the Nova ship straight in troubled waters
while we head toward the Juno release port.

Regards,

[1] https://wiki.openstack.org/wiki/FeatureFreeze
[2] https://wiki.openstack.org/wiki/StringFreeze
[3] https://wiki.openstack.org/wiki/DepFreeze

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 03:54:28PM -0700, Stefano Maffulli wrote:
 Thanks Daniel for taking the time to write such a deep message. Obviously
 you have thought about this issue for a long time and your opinion comes
 from deep personal understanding. I'm adding tags for neutron and
 cinder, as I know they're having similar conversations.
 
 I don't have a strong opinion on the solution you and Kyle seem to be
 leaning toward, I just have a couple of comments/warnings below
 
 On 09/04/2014 03:24 AM, Daniel P. Berrange wrote:
  saying 'This Is a Large Crisis'. A large crisis requires a large
  plan.
 
 Not necessarily, quite the contrary indeed. To address and solve big
 problems, experience and management literature suggest it's a lot better
 to make *one* small change, measure its effect and make one more change,
 measure its effect, and on and on until perfection. The discussion
 triggered by TripleO about 'what to measure' goes in the right
 direction.[1]

FWIW, don't read too much into that particular paragraph/sentence -
it is a humorous joke/quote from a British TV comedy :-)

 Your proposal seem to require a long term investment before its effects
 can be visible, although some of the things necessary for the split will
 be needed anyway. Do you think there are small changes with high impact
 that we can refine in Paris and put in place for Juno?

If we wanted to do a short-term improvement, we'd probably have to look
at relaxing the way we apply our current "2 x +2 == +A" policy in some
way, e.g. we'd have to look at perhaps identifying core virt driver team
members, and then treating their +1s as equivalent to a +2 if given on
a virt-driver-only change, and so setting +A after only getting one +2.

 The other comment I have is about the risks of splitting teams and
 create new ones whose only expertise is their company's domain. I'm
 concerned of the bad side effect of having teams in Nova Program with
 very limited or no incentive at all to participate in nova-common
 project since all they care about will be their little (proprietary)
 hypervisor or network driver. I fear we may end up with nova-common
 owned by a handful of people from a couple of companies, limping along,
 while nova-drivers devs throw stones or ignore.

 Maybe this worst case scenario of disenfranchised membership is not as
 bad as I think it would be, I'm voicing my concern also to gauge this
 risk better. What are your thoughts on this specific risk? How can we
 mitigate it?

One of the important things I think we need to do is firm up the nova
internal virt driver API to make it better specified, as a way to
prevent some of the sloppy bad practice all the virt drivers engage
in today. I still see a fairly reasonable number of feature requests
that will involve nova common code, so even with a virt driver split,
the virt driver teams are forced to engage with the nova common code
to get some non-trivial portion of their work done. So if virt driver
teams don't help out with nova common code work, they're going to find
life hard for themselves when they do have features that involve nova
common.

In many ways I think we are already suffering quite a lot from the
problem you describe today. A large portion of the
people contributing to all the virt drivers only really focus their
attention on their own area of interest, ignoring nova common. I
cannot entirely blame them for that because learning more of nova
is a significant investment of effort. This is one of the reasons
we struggle to identify enough people with broad enough knowledge
to promote to nova core. I think I can also see parallels in the
relationship between the major projects (nova, neutron, cinder,
etc) and the oslo project. It is hard to get the downstream consumer
projects to take an active interest in work on oslo itself. This
was probably worse when oslo first started out, but it is now a
more established team.

I accept that splitting the drivers out from nova common will probably
re-inforce the separation of work to some degree. The biggest benefits
will come to the virt driver teams themselves by unblocking them from
all competing for the same finite core reviewer resource.  The remaining
nova core team will probably gain a little bit more time (perhaps 10-15%)
by not having to pay attention to the virt driver code changes directly
but overall it wouldn't be a dramatic improvement there. The overall
reduction in repo size might help new contributors get up the on-ramp
to being part of the team, since smaller codebases are easier to learn
in general.

Overall I don't have a knockout answer to your concern though, other
than to say we're already facing that problem to quite a large extent
and modularization as a general concept has proved quite successful
for the growth of openstack projects that have split out from nova
in the past.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 07:31:50PM +0930, Christopher Yeoh wrote:
 On Thu, 4 Sep 2014 11:24:29 +0100
 Daniel P. Berrange berra...@redhat.com wrote:
  
   - A fairly significant amount of nova code would need to be
 considered semi-stable API. Certainly everything under nova/virt
 and any object which is passed in/out of the virt driver API.
 Changes to such APIs would have to be done in a backwards
 compatible manner, since it is no longer possible to lock-step
 change all the virt driver impls. In some ways I think this would
 be a good thing as it will encourage people to put more thought
 into the long term maintainability of nova internal code instead
 of relying on being able to rip it apart later, at will.
  
   - The nova/virt/driver.py class would need to be much better
 specified. All parameters / return values which are opaque dicts
 must be replaced with objects + attributes. Completion of the
 objectification work is mandatory, so there is cleaner separation
 between virt driver impls  the rest of Nova.
 
 I think for this to work well with multiple repositories and drivers
 having different priorities over implementing changes in the API it
 would not just need to be semi-stable, but stable with versioning built
 in from the start to allow for backwards incompatible changes. And
 the interface would have to be very well documented including things
 such as what exceptions are allowed to be raised through the API.
 Hopefully this would be enforced through code as well. But as long as
 driver maintainers are willing to commit to this extra overhead I can
 see it working. 

With our primary REST or RPC APIs we're under quite strict rules about
what we can and can't change - almost impossible to remove an existing
API from the REST API for example. With the internal virt driver API
we would probably have a little more freedom. For example, I think
if we found an existing virt driver API that was insufficient for a
new bit of work, we could add a new API in parallel with it, give the
virt drivers 1 dev cycle to convert, and then permanently delete the
original virt driver API. So a combination of that kind of API
replacement,  versioning for some data structures/objects, and use of
the capabilities flags would probably be sufficient. That's what I mean
by semi-stable here - no need to maintain existing virt driver APIs
indefinitely - we can remove and replace them in reasonably short time
scales as long as we avoid any lock-step updates.
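
To make the "opaque dict vs object" point concrete, here is a minimal
sketch of what replacing a dict parameter with a versioned, attribute-
carrying object might look like. It is not actual Nova code; the class,
attribute and method names are purely illustrative.

    # Sketch: instead of passing an opaque dict into the virt driver API,
    # pass an object with explicit attributes and a version that out-of-tree
    # drivers can check. Names here are illustrative only.
    class BlockDeviceInfo(object):
        VERSION = '1.0'  # bump on backwards-incompatible changes

        def __init__(self, root_device_name, ephemerals=None, swap=None):
            self.root_device_name = root_device_name
            self.ephemerals = ephemerals or []
            self.swap = swap

    # Before: driver.spawn(ctxt, instance, image_meta, block_device_info={...})
    # After:  driver.spawn(ctxt, instance, image_meta,
    #                      block_device_info=BlockDeviceInfo('/dev/vda'))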

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] NNN_additional_attribute_mapping

2014-09-05 Thread Alexander V Makarov
Greetings!

I think I found a problem in the extra attribute handling in the LDAP backend.
Also I'd like to propose a solution :)

There is a bug, https://bugs.launchpad.net/keystone/+bug/1336769
("LDAP additional attribute mappings do not care about model attribute"),
reported by Marcos Lobo (https://launchpad.net/~marcos-fermin-lobo).

It describes a problem where the server handles requests without any warning
about model data mismatches.
First of all, I noticed that this is all about optional attributes, and it
seems to be okay to handle them as I see fit. But in the end Marcos
states:
I have a mistake in the keystone.conf file and everything is working
properly.

This got my attention and I decided to check field mappings, model
validation, CRUD, and anything available in that direction.
I found nothing about model validation. Really, there is no means to be
sure you received valid data from the LDAP backend.

Furthermore, keystone.common.ldap.core.BaseLdap._ldap_res_to_model()
completely ignores the extra attribute mapping while translating data
received from the LDAP server into the model structure.
The tests correctly cover only the create operation, checking the
model-to-LDAP field mapping:
keystone.tests.test_backend_ldap.LDAPIdentity.test_user_extra_attribute_mapping().
But the test for retrieval only covers the case where description is mapped
to description:
keystone.tests.test_backend_ldap.LDAPIdentity.test_user_extra_attribute_mapping_description_is_returned()

That test passes not because the extra mapping works, but due to the default
behaviour of the generic mapping: if a mapping is not found, the field is
passed through as is.
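
For illustration, a mapping-aware translation would look roughly like the
following. This is a hypothetical sketch, not the actual keystone code; it
only shows the extra mapping being consulted instead of unknown fields being
passed through unchanged.

    # Sketch: apply both the standard attribute mapping and the configured
    # extra attribute mapping when turning an LDAP result into a model dict.
    # The data layout follows the python-ldap convention of (dn, attrs) with
    # attribute values as lists, but the function itself is hypothetical.
    def ldap_res_to_model(res, attribute_mapping, extra_attr_mapping):
        dn, attrs = res
        obj = {}
        for model_attr, ldap_attr in attribute_mapping.items():
            if ldap_attr in attrs:
                obj[model_attr] = attrs[ldap_attr][0]
        for ldap_attr, model_attr in extra_attr_mapping.items():
            if ldap_attr in attrs:
                obj[model_attr] = attrs[ldap_attr][0]
        return obj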

I added required-attribute validation to keystone.common.models.Model,
but applying it resulted in many test failures.
Further analysis revealed that there are some algorithms depending on
behaviour such as saving/retrieving model instances without some fields
that are declared in the model as required.
So I had to fall back to a warning instead of raising a validation error.

The patch awaits review, and I'm in doubt: is this a single bug, or does it
have to be split?
https://review.openstack.org/#/c/118590/

Kind Regards,
Alexander Makarov,
Senior Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60
Skype: MAKAPOB.AJIEKCAHDP


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-05 Thread Steven Hardy
On Fri, Sep 05, 2014 at 03:56:34PM +1000, Angus Salkeld wrote:
On Fri, Sep 5, 2014 at 3:29 PM, Gauvain Pocentek
gauvain.pocen...@objectif-libre.com wrote:
 
  Hi,
 
  A bit of background: I'm working on the publication of the HOT resources
  reference on docs.openstack.org. This book is mostly autogenerated from
  the heat source code, using the sphinx XML output. To avoid publishing
  several references (one per released version, as is done for the
  OpenStack config-reference), I'd like to add information about the
  support status of each resource (when they appeared, when they've been
  deprecated, and so on).
 
  So the plan is to use the SupportStatus class and its `version`
  attribute (see https://review.openstack.org/#/c/116443/ ). And the
  question is, what information should the version attribute hold?
  Possibilities include the release code name (Icehouse, Juno), or the
  release version (2014.1, 2014.2). But this wouldn't be useful for users
  of clouds continuously deployed.
 
  From my documenter point of view, using the code name seems the right
  option, because it fits with the rest of the documentation.
 
  What do you think would be the best choice from the heat devs POV?
 
IMHO it should match the releases and tags
(https://github.com/openstack/heat/releases).

+1 this makes sense to me.  Couldn't we have the best of both worlds by
having some logic in the docs generation code which maps the milestone to
the release series, so we can say e.g

Supported since 2014.2.b3 (Juno)

This would provide sufficient detail to be useful to both folks consuming
the stable releases and those trunk-chasing via CD?
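
For what it's worth, the doc-generation side of that could be as simple as
something like the sketch below. It is hedged: the mapping table and the
function are illustrative, not existing Heat code.

    # Sketch: map a SupportStatus version such as '2014.2.b3' to its release
    # series name for the generated docs. The table below is illustrative.
    SERIES_BY_RELEASE = {
        '2014.1': 'Icehouse',
        '2014.2': 'Juno',
    }

    def format_support_status(version):
        release = '.'.join(version.split('.')[:2])   # '2014.2.b3' -> '2014.2'
        series = SERIES_BY_RELEASE.get(release)
        if series:
            return 'Supported since %s (%s)' % (version, series)
        return 'Supported since %s' % version

    # format_support_status('2014.2.b3') -> 'Supported since 2014.2.b3 (Juno)'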

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread John Garbutt
On 4 September 2014 23:48, Russell Bryant rbry...@redhat.com wrote:
 On 09/04/2014 06:24 AM, Daniel P. Berrange wrote:
 Position statement
 ==

 Over the past year I've increasingly come to the conclusion that
 Nova is heading for (or probably already at) a major crisis. If
 steps are not taken to avert this, the project is likely to lose
 a non-trivial amount of talent, both regular code contributors and
 core team members. That includes myself. This is not good for
 Nova's long term health and so should be of concern to anyone
 involved in Nova and OpenStack.

 For those who don't want to read the whole mail, the executive
 summary is that the nova-core team is an unfixable bottleneck
 in our development process with our current project structure.
 The only way I see to remove the bottleneck is to split the virt
 drivers out of tree and let them all have their own core teams
 in their area of code, leaving current nova core to focus on
 all the common code outside the virt driver impls. I now nonetheless
 urge people to read the whole mail.

 Fantastic write-up.  I can't +1 enough the problem statement, which I
 think you've done a nice job of framing.  We've taken steps to try to
 improve this, but none of them have been big enough.  I feel we've
 reached a tipping point.  I think many others do too, and several
 proposals being discussed all seem rooted in this same core issue.

+1

I totally agree we need to split Nova up further; there just didn't
seem to be support for this before now.

Not yet sure the virt drivers are the best split, but we already have
sub-teams ready to take them on, so it will probably work for that
reason.

 If we ignored gerrit for a moment, is rapid increase in splitting out
 components the ideal workflow?  Would we be better off finding a way to
 finally just implement a model more like the Linux kernel with
 sub-system maintainers and pull requests to a top-level tree?  Maybe.
 I'm not convinced that split of repos is obviously better.

I was thinking along similar lines.

Regardless of that, we should try this for Kilo.

If it feels like we are getting too much driver divergence, and
tempest is not keeping everyone in line, the community is fragmenting,
and no one is working on the core of nova, then we might have to think
about an alternative plan for L, including bringing the drivers back
in tree.

At least the separate repos will help us firm up the interfaces, which
I think is a good thing.

I worry about what it means to test a feature in nova common, nova
api, or nova core or whatever we call it, if there are no virt
drivers in tree. To some extent we might want to improve the fake virt
driver for some in-tree functional tests anyways. But thats a separate
discussion.

 I don't think we can afford to wait much longer without drastic change,
 so let's make it happen.

+1

But I do think we should try and go further...

Scheduler: I think we need to split out the scheduler with a similar
level of urgency. We keep blocking features on the split, because we
know we don't have the review bandwidth to deal with them. Right now I
am talking about a compute-related scheduler in the compute program
that might evolve to worry about other services at a later date.

Nova-network: Maybe there isn't a big enough community to support this
right now, but we need to actually delete this, or pull it out of
nova-core.

API: I suspect we might want to also look at splitting out the API
from Nova common too. This one is slightly more drastic, and needs
more pre-split work (and is very related to making cells a first class
concept), but I am still battling with that inside my head.

Oslo: I suspect we may need to do something around the virt utilities,
so they are easy to share, but there are probably other opportunities
too.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 06:22:18PM -0500, Michael Still wrote:
 On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange berra...@redhat.com 
 wrote:
 
 [Heavy snipping because of length]
 
  The radical (?) solution to the nova core team bottleneck is thus to
  follow this lead and split the nova virt drivers out into separate
  projects and delegate their maintainence to new dedicated teams.
 
   - Nova becomes the home for the public APIs, RPC system, database
 persistent and the glue that ties all this together with the
 virt driver API.
 
   - Each virt driver project gets its own core team and is responsible
 for dealing with review, merge  release of their codebase.
 
 I think this is the crux of the matter. We're not doing a great job of
 landing code at the moment, because we can't keep up with the review
 workload.
 
 So far we've had two proposals mooted:
 
  - slots / runways, where we try to rate limit the number of things
 we're trying to review at once to maintain focus

FWIW, I'm not really seeing that as a long term solution. In its
essence it is just a more effective way for us to say 'no' to our
potential contributors. While it could no doubt relieve pressure
on the core team by reducing the flow of the pipe, I don't think
it is helpful for our contributors overall.

  - splitting all the virt drivers out of the nova tree
 
 Splitting the drivers out of the nova tree does come at a cost -- we'd
 need to stabilise and probably version the hypervisor driver
 interface, and that will encourage more out of tree drivers, which
 are things we haven't historically wanted to do. If we did this split,
 I think we need to acknowledge that we are changing policy there. It
 also means that nova-core wouldn't be the ones holding the quality bar
 for hypervisor drivers any more, I guess this would open the door for
 drivers to more actively compete on the quality of their
 implementations, which might be a good thing.

There are already a number of drivers out of tree such as Docker,
Ironic (though soon to be in tree), and IIUC there's something IBM
have done for Power hypervisor, and work Oracle have done for the
Solaris virt/container technologies. Probably the distinction I'd
make is around things that are actively part of the OpenStack
community (eg on our gerrit infrastructure and or stackforge, etc),
vs things that are developed in complete isolation from the OpenStack
community.

I'm unclear what the state of play is wrt discussions on OpenStack
technology compatibility certification and trademark usage, but perhaps
that is a partial counterweight to your concern? I'd certainly like
to see a focus on out of tree drivers remaining a strong part of the
openstack community, and not go off into their own completely isolated
world outside the community.

But yes, I am clearly proposing a change to our integration policy here
and so we need to carefully consider what that means and take
any necessary steps to mitigate risks.

In some respects I think the split repos could allow us to raise the
bar in terms of quality. For example, with a single repo, I don't
see it ever being practical to make the VMware/HyperV/XenAPI CI systems
gating on changes, because it would push up the level of pain from
false job failures in the gate even further than today. With a separate
repo, each virt driver would only need to run the jobs directly related
to it, so the VMware CI could easily be made gating on the VMware driver
git repo.

On testing in general, I think we need to look at the granularity
at which we run tests, in order to let us scale up the number of tests
we run. For example, it is suggested that each feature like disk 
encryption,  disk discard support, each vif driver, and so on, each
requires a new tempest job with appropriate settings. If we look at
the number of possible tunable knobs like, that easily implies 100's
more tempest jobs with varying configs. I don't think it is practical
to consider doing that with our setup today. With separate virt driver
repos we'd have more headroom to add a larger number of jobs since
the volume of changes being tested overall would be smaller.

 Both of these have interesting aspects, and I agree we need to do
 _something_. I do wonder if there is a hybrid approach as well though.
 For example, could we implement some sort of more formal lieutenant
 system for drivers? We've talked about it in the past but never been
 able to express how it would work in practise.

Gerrit makes it hard to express that formally due to the lack of
path-based permissioning. If we do go for the virt driver split,
it would nonetheless be useful if we trialled a lieutenant or
sub-team model during Kilo, as a way to prepare for an eventual
driver split in L. So this is worth talking about regardless,
I reckon.

I still think on balance a virt driver split is beneficial since
it brings benefits beyond just the review team.

 The last few days have been interesting as I watch FFEs come 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread John Garbutt
On 5 September 2014 00:26, Jay Pipes jaypi...@gmail.com wrote:
 On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

 Basically +1 with what Daniel is saying (note that, as mentioned, a
 side effect of our effort to split out the scheduler will help but
 not solve this problem).


 The difference between Dan's proposal and the Gantt split is that Dan's
 proposal features quite prominently the following:

 == begin ==

  - The nova/virt/driver.py class would need to be much better
specified. All parameters / return values which are opaque dicts
must be replaced with objects + attributes. Completion of the
objectification work is mandatory, so there is cleaner separation
between virt driver impls  the rest of Nova.

 == end ==

 In other words, Dan's proposal above is EXACTLY what I've been saying needs
 to be done to the interfaces between nova-conductor, nova-compute, and
 nova-scheduler *before* any split of the scheduler code is even remotely
 feasible.

 Splitting the scheduler out before this is done would actually not help but
 not solve this problem -- it would instead further the problem, IMO.

Given that any changes we make to the scheduler interface need to be
backwards compatible, I am not totally convinced being in a separate
repo makes things a whole lot worse, versus the review bottlenecks we
have. Anyway, I certainly agree that work needs to be done ASAP, and
if we can make that a priority in Nova, it would be much quicker and
easier to do while still inside Nova.

We have similar issues with glance, cinder and neutron right now that
need fixing soon too. I know we have patches up for some improvements
in that area, but it certainly feels like we need to do better there.

The virt driver is a step ahead of the scheduler because we know what
interface we are talking about, and we already have most of a
versioning plan in place.

I think the key work we have with the scheduler is to actually draw
out the interface (in code), so we agree what interface we need to
firm up and version. I think we are starting to get agreement on that
now, which is great.

I still think the scheduler split is as urgent as the virt split, but
the virt split is much closer to being possible right now.

At this point, it feels like all of kilo-1 gets dedicated to splitting
out these interfaces, and completing objects. But let's see what the
summit brings.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 11:29:43AM +0100, John Garbutt wrote:
 On 4 September 2014 23:48, Russell Bryant rbry...@redhat.com wrote:
  On 09/04/2014 06:24 AM, Daniel P. Berrange wrote:
  If we ignored gerrit for a moment, is rapid increase in splitting out
  components the ideal workflow?  Would we be better off finding a way to
  finally just implement a model more like the Linux kernel with
  sub-system maintainers and pull requests to a top-level tree?  Maybe.
  I'm not convinced that split of repos is obviously better.
 
 I was thinking along similar lines.
 
 Regardless of that, we should try this for Kilo.
 
 If it feels like we are getting too much driver divergence, and
 tempest is not keeping everyone inline, the community is fragmenting
 and no one is working on the core of nova, then we might have to think
 about an alternative plan for L, including bringing the drivers back
 in tree.
 
 At least the separate repos will help us firm up the interfaces, which
 I think is a good thing.
 
 I worry about what it means to test a feature in nova common, nova
 api, or nova core or whatever we call it, if there are no virt
 drivers in tree. To some extent we might want to improve the fake virt
 driver for some in-tree functional tests anyways. But thats a separate
 discussion.

I look at what we do with Ironic testing currently as a guide here.
We have a tempest job that runs against Nova and validates that changes
to nova don't break the separate Ironic git repo. So my thought
is that all our current tempest jobs would simply work in that
way. IOW, changes to so-called nova common would run jobs that
validate the change against all the virt driver git repos. I think
this kind of setup is pretty much mandatory for split repos to be
viable, because I don't want to see us lose testing coverage in
this proposed change.

Having a decent in-tree fake virt driver would nonetheless be
a nice idea, because it would allow for more complete functional
testing isolated from the risks of bugs in the virt drivers
themselves.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sean Dague
On 09/05/2014 06:22 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 07:31:50PM +0930, Christopher Yeoh wrote:
 On Thu, 4 Sep 2014 11:24:29 +0100
 Daniel P. Berrange berra...@redhat.com wrote:

  - A fairly significant amount of nova code would need to be
considered semi-stable API. Certainly everything under nova/virt
and any object which is passed in/out of the virt driver API.
Changes to such APIs would have to be done in a backwards
compatible manner, since it is no longer possible to lock-step
change all the virt driver impls. In some ways I think this would
be a good thing as it will encourage people to put more thought
into the long term maintainability of nova internal code instead
of relying on being able to rip it apart later, at will.

  - The nova/virt/driver.py class would need to be much better
specified. All parameters / return values which are opaque dicts
must be replaced with objects + attributes. Completion of the
objectification work is mandatory, so there is cleaner separation
between virt driver impls  the rest of Nova.

 I think for this to work well with multiple repositories and drivers
 having different priorities over implementing changes in the API it
 would not just need to be semi-stable, but stable with versioning built
 in from the start to allow for backwards incompatible changes. And
 the interface would have to be very well documented including things
 such as what exceptions are allowed to be raised through the API.
 Hopefully this would be enforced through code as well. But as long as
 driver maintainers are willing to commit to this extra overhead I can
 see it working. 
 
 With our primary REST or RPC APIs we're under quite strict rules about
 what we can  can't change - almost impossible to remove an existing
 API from the REST API for example. With the internal virt driver API
 we would probably have a little more freedom. For example, I think
 if we found an existing virt driver API that was insufficient for a
 new bit of work, we could add a new API in parallel with it, give the
 virt drivers 1 dev cycle to convert, and then permanently delete the
 original virt driver API. So a combination of that kind of API
 replacement,  versioning for some data structures/objects, and use of
 the capabilties flags would probably be sufficient. That's what I mean
 by semi-stable here - no need to maintain existing virt driver APIs
 indefinitely - we can remove  replace them in reasonably short time
 scales as long as we avoid any lock-step updates.

I have spent a lot of time over the last year working on things that
require coordinated code lands between projects; it's much more
friction than you give it credit for.

Every added git tree adds a non-linear cost to mental overhead, and a
non-linear integration cost. Realistically the reason the gate is in the
state it is has a ton to do with the fact that it's integrating 40 git
trees. Because virt drivers run in the process space of Nova Compute,
they can pretty much do whatever, and the impacts are going to be
somewhat hard to figure out.

Also, if spinning these out seems like the right idea, I think nova-core
needs to retain core rights over the drivers as well, because there does
need to be veto authority over some of the worst craziness.

If the VMWare team stopped trying to build a distributed lock manager
inside their compute driver, or the Hyperv team didn't wait until J2 to
start pushing patches, I think there would be more trust in some of
these teams. But, I am seriously concerned in both those cases, and the
slow review there is a function of a historic lack of trust in judgment.
I also personally went on a moratorium a year ago on reviewing either
driver because entities at both places were complaining to my
management chain through back channels that I was -1ing their code...
when I was one of the few people actually trying to provide constructive
feedback (basically only Russell and I were reviewing that code in
Grizzly, everyone else was ignoring it). Things may have changed since
then, at least I see a ton of good work from tjones in making Nova
overall better, but that was a pretty bitter pill. (Sorry for the
tangent, but honestly if we are going to fix what's broken we probably
have to expose all related brokens.)


If the concern is that we are keeping out too many contributors with the
CI requirements, then let's let Class C back in tree. I believe in the
FreeBSD case you were one of the original opponents of a top-level
driver, arguing that they should go through libvirt instead. But I'm cool
with them just showing up as a Class C.

But I honestly don't think the virt driver split is going to make any of
this easier, when you account for the additional overhead it's going to
create, and the work required to get there.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list

Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-05 Thread Nikola Đipanov
On 09/04/2014 10:25 PM, Solly Ross wrote:
 Anyway, I think it would be useful to have some sort of page where people
 could say "I'm an SME in X, ask me for reviews" and then patch submitters
 could go and say, "oh, I need someone to review my patch about storage
 backends, let me ask sross."
 

This is a good point - I've been thinking along similar lines that we
really could have a huge win in terms of the review experience by
building a tool (maybe a social network looking one :)) that relates
reviews to people being able to do them, visualizes reviewer karma and
other things that can help make the code submissions and reviews more
human friendly.

Dan seems to dismiss the idea of improved tooling as something that can
only get us so far, but I am not convinced. However, this will
require even more manpower and we are already ridiculously short on that,
so...

N.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] instance lock and class lock

2014-09-05 Thread Davanum Srinivas
given the code size, a BP may be an overstretch. I'd just file a review + bug

-- dims

On Fri, Sep 5, 2014 at 1:29 AM, Zang MingJie zealot0...@gmail.com wrote:
 does it require a bp or a bug report to submit an oslo.concurrency patch?


 On Wed, Sep 3, 2014 at 7:15 PM, Davanum Srinivas dava...@gmail.com wrote:

 Zang MingJie,

 Can you please consider submitting a review against oslo.concurrency?


 http://git.openstack.org/cgit/openstack/oslo.concurrency/tree/oslo/concurrency

 That will help everyone who will adopt/use that library.

 thanks,
 dims

 On Wed, Sep 3, 2014 at 1:45 AM, Zang MingJie zealot0...@gmail.com wrote:
  Hi all:
 
  currently oslo provides a lock utility, but unlike in other languages it is
  a class-level lock, which prevents all instances from calling the function
  concurrently. IMO, oslo should provide an instance-level lock that only
  locks the current instance, to gain better concurrency.
 
  I have written a lock in a patch[1], please consider pick it into oslo
 
  [1]
 
  https://review.openstack.org/#/c/114154/4/neutron/openstack/common/lockutils.py
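
For illustration, an instance-level lock along the lines described above
might look roughly like the sketch below. It is a minimal example, not the
proposed patch or an existing oslo API; the decorator and class names are
made up.

    # Sketch: a decorator that serializes calls per *instance* rather than
    # per class, so different instances do not block each other.
    import functools
    import threading

    def synchronized_on_instance(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            lock = getattr(self, '_instance_lock', None)
            if lock is None:
                # Lazy creation is racy; a real implementation would create
                # the lock in __init__ or guard this with a class-level lock.
                lock = self._instance_lock = threading.Lock()
            with lock:
                return func(self, *args, **kwargs)
        return wrapper

    class Worker(object):
        @synchronized_on_instance
        def update(self):
            pass  # serialized per Worker instance only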
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 07:00:44AM -0400, Sean Dague wrote:
 On 09/05/2014 06:22 AM, Daniel P. Berrange wrote:
  On Fri, Sep 05, 2014 at 07:31:50PM +0930, Christopher Yeoh wrote:
  On Thu, 4 Sep 2014 11:24:29 +0100
  Daniel P. Berrange berra...@redhat.com wrote:
 
   - A fairly significant amount of nova code would need to be
 considered semi-stable API. Certainly everything under nova/virt
 and any object which is passed in/out of the virt driver API.
 Changes to such APIs would have to be done in a backwards
 compatible manner, since it is no longer possible to lock-step
 change all the virt driver impls. In some ways I think this would
 be a good thing as it will encourage people to put more thought
 into the long term maintainability of nova internal code instead
 of relying on being able to rip it apart later, at will.
 
   - The nova/virt/driver.py class would need to be much better
 specified. All parameters / return values which are opaque dicts
 must be replaced with objects + attributes. Completion of the
 objectification work is mandatory, so there is cleaner separation
 between virt driver impls & the rest of Nova.
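
To make the "opaque dicts vs. objects + attributes" point concrete, a minimal
sketch (hypothetical names, not the actual Nova virt driver interface):

    # Before: an opaque dict - callers and drivers only agree on keys
    # informally, and nothing stops an incompatible, lock-step change.
    def get_host_stats_as_dict():
        return {'vcpus': 8, 'memory_mb': 16384, 'hypervisor': 'kvm'}

    # After: an explicit object with named attributes - the contract is
    # visible in one place and can be versioned (note the VERSION attribute).
    class HostStats(object):
        VERSION = '1.0'

        def __init__(self, vcpus, memory_mb, hypervisor):
            self.vcpus = vcpus
            self.memory_mb = memory_mb
            self.hypervisor = hypervisor

    def get_host_stats_as_object():
        return HostStats(vcpus=8, memory_mb=16384, hypervisor='kvm')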
 
  I think for this to work well with multiple repositories and drivers
  having different priorities over implementing changes in the API it
  would not just need to be semi-stable, but stable with versioning built
  in from the start to allow for backwards incompatible changes. And
  the interface would have to be very well documented including things
  such as what exceptions are allowed to be raised through the API.
  Hopefully this would be enforced through code as well. But as long as
  driver maintainers are willing to commit to this extra overhead I can
  see it working. 
  
  With our primary REST or RPC APIs we're under quite strict rules about
  what we can & can't change - almost impossible to remove an existing
  API from the REST API for example. With the internal virt driver API
  we would probably have a little more freedom. For example, I think
  if we found an existing virt driver API that was insufficient for a
  new bit of work, we could add a new API in parallel with it, give the
  virt drivers 1 dev cycle to convert, and then permanently delete the
  original virt driver API. So a combination of that kind of API
  replacement, versioning for some data structures/objects, and use of
  the capabilities flags would probably be sufficient. That's what I mean
  by semi-stable here - no need to maintain existing virt driver APIs
  indefinitely - we can remove & replace them in reasonably short time
  scales as long as we avoid any lock-step updates.
 
 I have spent a lot of time over the last year working on things that
 require coordinated code lands between projects & it's much more
 friction than you give it credit for.
 
 Every added git tree adds a non linear cost to mental overhead, and a
 non linear integration cost. Realistically the reason the gate is in the
 state it is has a ton to do with the fact that it's integrating 40 git
 trees. Because virt drivers run in the process space of Nova Compute,
 they can pretty much do whatever, and the impacts are going to be
 somewhat hard to figure out.
 
 Also, if spinning these out seems like the right idea, I think nova-core
 needs to retain core rights over the drivers as well. Because there does
 need to be veto authority on some of the worst craziness.

If they want to do crazy stuff, let them live or die with the
consequences.

 If the VMWare team stopped trying to build a distributed lock manager
 inside their compute driver, or the Hyperv team didn't wait until J2 to
 start pushing patches, I think there would be more trust in some of
 these teams. But, I am seriously concerned in both those cases, and the
 slow review there is a function of a historic lack of trust in judgment.
 I also personally went on a moratorium a year ago in reviewing either
 driver because entities at both places were complaining to my
 management chain through back channels that I was -1ing their code...

I venture to suggest that the reason we care so much about those kind
of things is precisely because of our policy of pulling them in the
tree. Having them in tree means their quality (or not) reflects directly
on the project as a whole. Separate them from Nova as a whole and give
them control of their own destiny and they can deal with the consequences
of their actions and people can judge the results for themselves.

We don't have the time or resources to continue baby-sitting them
ourselves - attempting to do so has just resulted in a scenario where
they end up getting largely ignored as you admit here. This ultimately
makes their quality even worse, because the lack of reviewer availability
means they stand little chance of pushing through the work to fix what
problems they have. We've seen this first hand with the major refactoring
that vmware driver team has been trying to do. Our 

Re: [openstack-dev] [nova] FFE request v2-on-v3-api

2014-09-05 Thread John Garbutt
Patches re-approved.

Thanks,
John

On 5 September 2014 00:24, Michael Still mi...@stillhq.com wrote:
 Approved.

 Michael

 On Thu, Sep 4, 2014 at 8:11 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com 
 wrote:
 2014-09-04 20:34 GMT+09:00 Christopher Yeoh cbky...@gmail.com:
 Hi,

 I'd like to request a FFE for 4 changesets from the v2-on-v3-api
 blueprint:

 https://review.openstack.org/#/c/113814/
 https://review.openstack.org/#/c/115515/
 https://review.openstack.org/#/c/115576/
 https://review.openstack.org/#/c/11/

 They have all already been approved and were in the gate for a while
 but just didn't quite make it through in time. So they shouldn't put any
 load on reviewers.

 Sponsoring cores:
 Kenichi Ohmichi
 John Garbutt
 Me

 Yeah, I am happy to support this work.

 Thanks
 Ken'ichi Ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware][nova][FFE] vmware-spawn-refactor

2014-09-05 Thread John Garbutt
Yeah, I have been reviewing these, so happy to sponsor them too.

Patches have been re-approved.

Thanks,
John

On 5 September 2014 00:23, Michael Still mi...@stillhq.com wrote:
 So, that's your three. This exception is approved.

 Michael

 On Thu, Sep 4, 2014 at 9:05 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 On 09/04/2014 03:46 PM, Daniel P. Berrange wrote:
 On Thu, Sep 04, 2014 at 02:09:26PM +0100, Matthew Booth wrote:
 I'd like to request a FFE for the remaining changes from
 vmware-spawn-refactor. They are:

 https://review.openstack.org/#/c/109754/
 https://review.openstack.org/#/c/109755/
 https://review.openstack.org/#/c/114817/
 https://review.openstack.org/#/c/117467/
 https://review.openstack.org/#/c/117283/

 https://review.openstack.org/#/c/98322/

 All but the last had +A, and were in the gate at the time it was closed.
 The last had not yet been approved, but is ready for core review. It has
 recently had some orthogonal changes split out to simplify it
 considerably. It is largely a code motion patch, and has been given +1
 by VMware CI multiple times.

 They're all internal to the VMWare driver, have multiple ACKs from VMWare
 maintainers as well as core, so don't require extra review time. So I think
 it is a reasonable request.

 ACK, I'll sponsor it.


 +1 here - I've already looked at a number of those.

 N.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Robert Collins
On 5 September 2014 23:33, Sean Dague s...@dague.net wrote:

 I think realistically we need a self certification process that would have
 artifacts in a discoverable place. I was thinking something along the
 lines of a baseball card interface with a short description of the
 project, a list of the requirements to deploy (native and python), a
 link to the API docs, a link to current test coverage, as well as some
 statement on the frequency of testing, stats on open bugs and current
 trending and review backlog, current user testimonials. Basically the
 kind of first stage analysis that a deployer would do before pushing
 this out to their users.

Add into that their deployment support - e.g. do they have TripleO
support // Chef // Fuel // Puppet etc etc etc.
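
Purely as a sketch of what such a "baseball card" record could carry (field
names and values are hypothetical, not an agreed format):

    # Hypothetical self-certification record a project could publish.
    project_card = {
        'name': 'example-project',
        'description': 'Multi-tenant messaging service with an HTTP API',
        'deploy_requirements': {'native': ['a-datastore'],
                                'python': ['a-client-lib']},
        'api_docs': 'http://docs.example.org/developer/example-project/',
        'test_coverage': 'http://ci.example.org/coverage/',
        'testing_frequency': 'unit per-commit, integration nightly',
        'open_bugs': {'count': 42, 'trend': 'decreasing'},
        'review_backlog_days': 7,
        'user_testimonials': ['...'],
        'deployment_support': ['TripleO', 'Chef', 'Fuel', 'Puppet'],
    }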

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 07:12:37AM -0400, Sean Dague wrote:
 On 09/05/2014 06:40 AM, Nikola Đipanov wrote:
  A handy example of this I can think of is the currently granted FFE for
  serial consoles - consider how much of the code went into the common
  part vs. the libvirt specific part, I would say the ratio is very close
  to 1 if not even in favour of the common part (current 4 outstanding
  patches are all for core, and out of the 5 merged - only one of them was
  purely libvirt specific, assuming virt/ will live in nova-common).
  
  Joe asked a similar question elsewhere on the thread.
  
  Once again - I am not against doing it - what I am saying is that we
  need to look into this closer as it may not be as big of a win from the
  number of changes needed per feature as we may think.
  
  Just some things to think about with regards to the whole idea, by no
  means exhaustive.
 
 So maybe the better question is: what are the top sources of technical
 debt in Nova that we need to address? And if we did, everyone would be
 more sane, and feel less burnt.
 
 Maybe the drivers are the worst debt, and jettisoning them makes them
 someone else's problem, so that helps some. I'm not entirely convinced
 right now.
 
 I think Cells represents a lot of debt right now. It doesn't fully work
 with the rest of Nova, and produces a ton of extra code paths special
 cased for the cells path.
 
 The Scheduler has a ton of debt as has been pointed out by the efforts
 in and around Gantt. The focus has been on the split, but realistically
 I'm with Jay that we should focus on the debt, and exposing a REST
 interface in Nova.
 
 What about the Nova objects transition? That continues to be slow
 because it's basically Dan (with a few other helpers from time to time).
 Would it be helpful if we did an all hands on deck transition of the
 rest of Nova for K1 and just get it done? Would be nice to have the bulk
 of Nova core working on one thing like this and actually be in shared
 context with everyone else for a while.

I think the idea that we can tell everyone in Nova what they should
focus on for a cycle, or more generally, is doomed to failure. This
isn't a closed source, company-controlled project where you can dictate
what everyone's priority must be. We must accept that we rely on all our
contributors' good will in voluntarily giving their time & resource to
the project, to scratch whatever itch they have in the project. We have
to encourage them to want to work on nova and demonstrate that we value
whatever form of contribution they choose to make. If we have technical
debt that we think is important to address we need to illustrate /
show people why they should care about helping. If they none the less
decide that work isn't for them, we can't just cast them aside and/or
ignore their contributions, while we get on with other things. This
is why I think it is important that we split up nova to allow each
area to self-organize around what they consider to be priorities in
their area of interest / motivation. Not enabling that is going
to continue to kill our community.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE flood gates open

2014-09-05 Thread John Garbutt
Hi,

We have now tagged juno-3, so we are good to approve patches for FFE now.

Given how much there is to get through the gate, it would be nice to
concentrate on FFE code, and higher priority bugs, till we break the
back of those merges.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [feature freeze exception] FFE for libvirt-disk-discard-option

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 06:28:55AM +, Bohai (ricky) wrote:
 Hi,
 
 I'd like to ask for a feature freeze exception for blueprint 
 libvirt-disk-discard-option.
 https://review.openstack.org/#/c/112977/
 
 approved spec:
 https://review.openstack.org/#/c/85556/
 
 blueprint was approved, but its status was changed to Pending Approval 
 because of FF.
 https://blueprints.launchpad.net/nova/+spec/libvirt-disk-discard-option
 
 The patch has got a +2 from the core and pretty close to merge, but FF came.

ACK, I'll sponsor this.

It is such a simple & useful patch it is madness to reject it on a process
technicality.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [feature freeze exception] Feature freeze exception for config-drive-image-property

2014-09-05 Thread John Garbutt
Blueprint re-approved, code re-approved.

Thanks,
John

On 4 September 2014 21:11, Michael Still mi...@stillhq.com wrote:
 I'll be the third core here. Approved.

 @John: can you please remove your -2 from this one?

 Michael

 On Thu, Sep 4, 2014 at 2:54 PM, Sean Dague s...@dague.net wrote:
 On 09/04/2014 03:35 PM, Jay Pipes wrote:
 On 09/04/2014 03:07 PM, Jiang, Yunhong wrote:
 Hi,
  I'd like to ask for a feature freeze exception for the
 config-drive-image-property.

  The spec has been approved, and the corresponding patch
 (https://review.openstack.org/#/c/77027/ ) has been +W three times, but
 failed to be merged in the end because of gate issue and conflict on
 nova/exception.py.

 I have previously reviewed this code. I'm happy to sponsor it.

 This patch looks pretty straightforward, I just reviewed it, and am
 happy with it in its current state, so can sponsor.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] requesting an FFE for SRIOV

2014-09-05 Thread John Garbutt
In the nova-meeting we agreed this gets a FFE, based on previous
agreements in nova-meetings.

Blueprint is approved for juno-rc1.

Thanks,
John

On 4 September 2014 16:38, Nikola Đipanov ndipa...@redhat.com wrote:
 On 09/04/2014 05:16 PM, Dan Smith wrote:
 The main sr-iov patches have gone through lots of code reviews, manual
 rebasing, etc. Now we have some critical refactoring work on the
 existing infra to get it ready. All the code for refactoring and sr-iov
 is up for review.

 I've been doing a lot of work on this recently, and plan to see it
 through if possible.

 So, I'll be a sponsor.

 In the meeting russellb said he would as well. I think he's tied up
 today, so I'm proxying him in here :)

 --Dan


 I've already looked at some of this, and some of the work is based on
 the work I did for the NUMA blueprint (that Dan contributed to quite a
 bit as well) so I'd be happy to make sure this lands too.

 N.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sean Dague
On 09/05/2014 07:26 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 07:00:44AM -0400, Sean Dague wrote:
 On 09/05/2014 06:22 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 07:31:50PM +0930, Christopher Yeoh wrote:
 On Thu, 4 Sep 2014 11:24:29 +0100
 Daniel P. Berrange berra...@redhat.com wrote:

  - A fairly significant amount of nova code would need to be
considered semi-stable API. Certainly everything under nova/virt
and any object which is passed in/out of the virt driver API.
Changes to such APIs would have to be done in a backwards
compatible manner, since it is no longer possible to lock-step
change all the virt driver impls. In some ways I think this would
be a good thing as it will encourage people to put more thought
into the long term maintainability of nova internal code instead
of relying on being able to rip it apart later, at will.

  - The nova/virt/driver.py class would need to be much better
specified. All parameters / return values which are opaque dicts
must be replaced with objects + attributes. Completion of the
objectification work is mandatory, so there is cleaner separation
 between virt driver impls & the rest of Nova.

 I think for this to work well with multiple repositories and drivers
 having different priorities over implementing changes in the API it
 would not just need to be semi-stable, but stable with versioning built
 in from the start to allow for backwards incompatible changes. And
 the interface would have to be very well documented including things
 such as what exceptions are allowed to be raised through the API.
 Hopefully this would be enforced through code as well. But as long as
 driver maintainers are willing to commit to this extra overhead I can
 see it working. 

 With our primary REST or RPC APIs we're under quite strict rules about
 what we can & can't change - almost impossible to remove an existing
 API from the REST API for example. With the internal virt driver API
 we would probably have a little more freedom. For example, I think
 if we found an existing virt driver API that was insufficient for a
 new bit of work, we could add a new API in parallel with it, give the
 virt drivers 1 dev cycle to convert, and then permanently delete the
 original virt driver API. So a combination of that kind of API
 replacement, versioning for some data structures/objects, and use of
 the capabilities flags would probably be sufficient. That's what I mean
 by semi-stable here - no need to maintain existing virt driver APIs
 indefinitely - we can remove & replace them in reasonably short time
 scales as long as we avoid any lock-step updates.

 I have spent a lot of time over the last year working on things that
 require coordinated code lands between projects & it's much more
 friction than you give it credit for.

 Every added git tree adds a non linear cost to mental overhead, and a
 non linear integration cost. Realistically the reason the gate is in the
 state it is has a ton to do with the fact that it's integrating 40 git
 trees. Because virt drivers run in the process space of Nova Compute,
 they can pretty much do whatever, and the impacts are going to be
 somewhat hard to figure out.

 Also, if spinning these out seems like the right idea, I think nova-core
 needs to retain core rights over the drivers as well. Because there does
 need to be veto authority on some of the worst craziness.
 
 If they want to do crazy stuff, let them live or die with the
 consequences.
 
 If the VMWare team stopped trying to build a distributed lock manager
 inside their compute driver, or the Hyperv team didn't wait until J2 to
 start pushing patches, I think there would be more trust in some of
 these teams. But, I am seriously concerned in both those cases, and the
 slow review there is a function of a historic lack of trust in judgment.
 I also personally went on a moratorium a year ago in reviewing either
 driver because entities at both places where complaining to my
 management chain through back channels that I was -1ing their code...
 
 I venture to suggest that the reason we care so much about those kind
 of things is precisely because of our policy of pulling them in the
 tree. Having them in tree means their quality (or not) reflects directly
 on the project as a whole. Separate them from Nova as a whole and give
 them control of their own destiny and they can deal with the consequences
 of their actions and people can judge the results for themselves.
 
 We don't have the time or resources to continue baby-sitting them
 ourselves - attempting to do so has just resulted in a scenario where
 they end up getting largely ignored as you admit here. This ultimately
 makes their quality even worse, because the lack of reviewer availability
 means they stand little chance of pushing through the work to fix what
 problems they have. We've seen this first hand with the major refactoring
 that vmware driver team has been 

Re: [openstack-dev] [FFE] [nova] Barbican key manager wrapper

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 05:19:45PM +, Coffman, Joel M. wrote:
 A major concern about several encryption features within Nova [1, 2] has been 
 the lack of secure key management. To address this concern, work has been 
 underway to integrate these features with Barbican [3], which can be used to 
 manage encryption keys across OpenStack.
 
 We request a feature freeze exception be granted to merge this code [3], 
 which is really a shim between the existing key manager interface in Nova and 
 python-barbicanclient, into Nova [4]. The acceptance of this feature will 
 improve the security of cloud users and operators who use the Cinder volume 
 encryption feature [1], which is currently limited to a single, static 
 encryption key for volumes. Cinder has already merged a similar feature [5] 
 following the review of several patch revisions; not accepting the feature in 
 Nova creates a disparity with Cinder in regards to the management of 
 encryption keys.
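 
 As a rough illustration of the shim pattern being described here (the class
 names are hypothetical, and the barbicanclient calls are indicative only
 rather than exact client signatures):
 
    class KeyManager(object):
        """The interface Nova code programs against."""
        def store_key(self, context, key_bytes):
            raise NotImplementedError()

        def get_key(self, context, key_id):
            raise NotImplementedError()

    class BarbicanKeyManager(KeyManager):
        """Shim translating the interface into python-barbicanclient calls."""
        def __init__(self, barbican_client):
            self._client = barbican_client

        def store_key(self, context, key_bytes):
            # Roughly: create a secret, store it, use the returned ref as id.
            secret = self._client.secrets.create(name='volume key',
                                                 payload=key_bytes)
            return secret.store()

        def get_key(self, context, key_id):
            # Roughly: fetch the secret back by its ref and return its payload.
            return self._client.secrets.get(key_id).payload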
 
 As this is an optional feature that introduces very few changes to 
 pre-existing code, the risk of disruption to existing deployments as well as 
 the risk of regression is minimal. The only objection that has very recently 
 been voiced is the implicit dependency on the Barbican service, which does 
 not yet have experimental jobs in Tempest. Other core reviewers, though, 
 believe that the existing unit tests included with the change are sufficient.
 
 Thank you for taking the time to consider this request.

I'll sponsor it as it is effectively part of the LVM encryption blueprint
which I've already sponsored. So we should consider FFE for both those
blueprints together, rather than in isolation.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sean Dague
On 09/05/2014 07:40 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 07:12:37AM -0400, Sean Dague wrote:
 On 09/05/2014 06:40 AM, Nikola Đipanov wrote:
 A handy example of this I can think of is the currently granted FFE for
 serial consoles - consider how much of the code went into the common
 part vs. the libvirt specific part, I would say the ratio is very close
 to 1 if not even in favour of the common part (current 4 outstanding
 patches are all for core, and out of the 5 merged - only one of them was
 purely libvirt specific, assuming virt/ will live in nova-common).

 Joe asked a similar question elsewhere on the thread.

 Once again - I am not against doing it - what I am saying is that we
 need to look into this closer as it may not be as big of a win from the
 number of changes needed per feature as we may think.

 Just some things to think about with regards to the whole idea, by no
 means exhaustive.

 So maybe the better question is: what are the top sources of technical
 debt in Nova that we need to address? And if we did, everyone would be
 more sane, and feel less burnt.

 Maybe the drivers are the worst debt, and jettisoning them makes them
 someone else's problem, so that helps some. I'm not entirely convinced
 right now.

 I think Cells represents a lot of debt right now. It doesn't fully work
 with the rest of Nova, and produces a ton of extra code paths special
 cased for the cells path.

 The Scheduler has a ton of debt as has been pointed out by the efforts
 in and around Gantt. The focus has been on the split, but realistically
 I'm with Jay that we should focus on the debt, and exposing a REST
 interface in Nova.

 What about the Nova objects transition? That continues to be slow
 because it's basically Dan (with a few other helpers from time to time).
 Would it be helpful if we did an all hands on deck transition of the
 rest of Nova for K1 and just get it done? Would be nice to have the bulk
 of Nova core working on one thing like this and actually be in shared
 context with everyone else for a while.
 
  I think the idea that we can tell everyone in Nova what they should
  focus on for a cycle, or more generally, is doomed to failure. This
  isn't a closed source, company-controlled project where you can dictate
  what everyone's priority must be. We must accept that we rely on all our
  contributors' good will in voluntarily giving their time & resource to
  the project, to scratch whatever itch they have in the project. We have
  to encourage them to want to work on nova and demonstrate that we value
  whatever form of contribution they choose to make. If we have technical
  debt that we think is important to address we need to illustrate /
  show people why they should care about helping. If they none the less
  decide that work isn't for them, we can't just cast them aside and/or
  ignore their contributions, while we get on with other things. This
  is why I think it is important that we split up nova to allow each
  area to self-organize around what they consider to be priorities in
  their area of interest / motivation. Not enabling that is going
  to continue to kill our community.

I'm getting tired of the refrain that because we are an Open Source
project declaring priorities is pointless, because it's not. I would say
it's actually the exception that a developer wakes up in the morning and
says "I completely disregard what anyone else thinks is important in
this project, this is what I'm going to do today". Because if that's how
they felt they wouldn't choose to be part of a community, they would
just go do their own thing. Lone wolves by definition don't form
communities.

And the FFE process is a firm demonstration that when we pick a small
number of things to look at, they move a lot more quickly.

People are always free to work on whatever they want. But providing some
focus to debt cleanup (FFE++ effectively) would be really nice.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [feature freeze exception] FFE for libvirt-disk-discard-option

2014-09-05 Thread Sean Dague
On 09/05/2014 07:42 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 06:28:55AM +, Bohai (ricky) wrote:
 Hi,

 I'd like to ask for a feature freeze exception for blueprint 
 libvirt-disk-discard-option.
 https://review.openstack.org/#/c/112977/

 approved spec:
 https://review.openstack.org/#/c/85556/

 blueprint was approved, but its status was changed to Pending Approval 
 because of FF.
 https://blueprints.launchpad.net/nova/+spec/libvirt-disk-discard-option

 The patch has got a +2 from the core and pretty close to merge, but FF came.
 
 ACK, I'll sponsor this.
 
 It is such a simple & useful patch it is madness to reject it on a process
 technicality.
 
 Regards,
 Daniel
 

+1, just reviewed and +2ed it. Seems straight forward. Consider me a
sponsor.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Gordon Sim

On 09/04/2014 09:44 PM, Kurt Griffiths wrote:

Does a Qpid/Rabbit/Kafka provisioning service make sense? Probably.


I think something like that would be valuable, especially in conjunction 
with some application layer proxying and mapping between 'virtual' 
addresses/endpoints and specific queues/virtual hosts/brokers.


That would allow people to use the brokers, protocols and semantics they 
are already familiar with, combine different systems even, and have 
self-provisioning and scalability on top of it, with varying degrees of 
isolation for tenants.
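
A toy sketch of the kind of mapping layer described here (entirely
hypothetical, not an existing service):

    # Map a tenant-visible 'virtual' address to a concrete broker endpoint.
    # A real provisioning service would also create the vhost/queue on the
    # chosen broker and hand back per-tenant credentials.
    class EndpointRegistry(object):
        def __init__(self):
            self._mapping = {}

        def provision(self, tenant_id, virtual_address, broker_url, vhost):
            self._mapping[(tenant_id, virtual_address)] = {
                'broker_url': broker_url,   # e.g. amqp://broker-3.example.org
                'vhost': vhost,             # per-tenant isolation
                'queue': virtual_address,
            }

        def resolve(self, tenant_id, virtual_address):
            return self._mapping[(tenant_id, virtual_address)]

    registry = EndpointRegistry()
    registry.provision('tenant-a', 'billing-events',
                       'amqp://broker-3.example.org', '/tenant-a')
    print(registry.resolve('tenant-a', 'billing-events'))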



Would
such a service totally overlap in terms of use-cases with Zaqar? Community
feedback suggests otherwise.


I'd love to read the feedback if you have any links. While I agree 
overlap is rarely total, I suspect there will be quite a bit and that it 
will grow as each approach evolves. That emphatically does not mean I 
think Zaqar should in any way be held back though, just that I think the 
openstack community should anticipate and indeed encourage other 
approaches also.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 07:49:04AM -0400, Sean Dague wrote:
 On 09/05/2014 07:26 AM, Daniel P. Berrange wrote:
  On Fri, Sep 05, 2014 at 07:00:44AM -0400, Sean Dague wrote:
  On 09/05/2014 06:22 AM, Daniel P. Berrange wrote:
  On Fri, Sep 05, 2014 at 07:31:50PM +0930, Christopher Yeoh wrote:
  On Thu, 4 Sep 2014 11:24:29 +0100
  Daniel P. Berrange berra...@redhat.com wrote:
 
   - A fairly significant amount of nova code would need to be
 considered semi-stable API. Certainly everything under nova/virt
 and any object which is passed in/out of the virt driver API.
 Changes to such APIs would have to be done in a backwards
 compatible manner, since it is no longer possible to lock-step
 change all the virt driver impls. In some ways I think this would
 be a good thing as it will encourage people to put more thought
 into the long term maintainability of nova internal code instead
 of relying on being able to rip it apart later, at will.
 
   - The nova/virt/driver.py class would need to be much better
 specified. All parameters / return values which are opaque dicts
 must be replaced with objects + attributes. Completion of the
 objectification work is mandatory, so there is cleaner separation
  between virt driver impls & the rest of Nova.
 
  I think for this to work well with multiple repositories and drivers
  having different priorities over implementing changes in the API it
  would not just need to be semi-stable, but stable with versioning built
  in from the start to allow for backwards incompatible changes. And
  the interface would have to be very well documented including things
  such as what exceptions are allowed to be raised through the API.
  Hopefully this would be enforced through code as well. But as long as
  driver maintainers are willing to commit to this extra overhead I can
  see it working. 
 
  With our primary REST or RPC APIs we're under quite strict rules about
  what we can & can't change - almost impossible to remove an existing
  API from the REST API for example. With the internal virt driver API
  we would probably have a little more freedom. For example, I think
  if we found an existing virt driver API that was insufficient for a
  new bit of work, we could add a new API in parallel with it, give the
  virt drivers 1 dev cycle to convert, and then permanently delete the
  original virt driver API. So a combination of that kind of API
  replacement, versioning for some data structures/objects, and use of
  the capabilities flags would probably be sufficient. That's what I mean
  by semi-stable here - no need to maintain existing virt driver APIs
  indefinitely - we can remove & replace them in reasonably short time
  scales as long as we avoid any lock-step updates.
 
  I have spent a lot of time over the last year working on things that
  require coordinated code lands between projects & it's much more
  friction than you give it credit for.
 
  Every added git tree adds a non linear cost to mental overhead, and a
  non linear integration cost. Realistically the reason the gate is in the
  state it is has a ton to do with the fact that it's integrating 40 git
  trees. Because virt drivers run in the process space of Nova Compute,
  they can pretty much do whatever, and the impacts are going to be
  somewhat hard to figure out.
 
  Also, if spinning these out seems like the right idea, I think nova-core
  needs to retain core rights over the drivers as well. Because there does
  need to be veto authority on some of the worst craziness.
  
  If they want to do crazy stuff, let them live or die with the
  consequences.
  
  If the VMWare team stopped trying to build a distributed lock manager
  inside their compute driver, or the Hyperv team didn't wait until J2 to
  start pushing patches, I think there would be more trust in some of
  these teams. But, I am seriously concerned in both those cases, and the
  slow review there is a function of a historic lack of trust in judgment.
  I also personally went on a moratorium a year ago in reviewing either
  driver because entities at both places where complaining to my
  management chain through back channels that I was -1ing their code...
  
  I venture to suggest that the reason we care so much about those kind
  of things is precisely because of our policy of pulling them in the
  tree. Having them in tree means their quality (or not) reflects directly
  on the project as a whole. Separate them from Nova as a whole and give
  them control of their own destiny and they can deal with the consequences
  of their actions and people can judge the results for themselves.
  
  We don't have the time or resources to continue baby-sitting them
  ourselves - attempting to do so has just resulted in a scenario where
  they end up getting largely ignored as you admit here. This ultimately
  makes their quality even worse, because the lack of reviewer availability
  means they stand little chance of 

Re: [openstack-dev] [FFE] [nova] Barbican key manager wrapper

2014-09-05 Thread Sean Dague
On 09/05/2014 07:51 AM, Daniel P. Berrange wrote:
 On Thu, Sep 04, 2014 at 05:19:45PM +, Coffman, Joel M. wrote:
 A major concern about several encryption features within Nova [1, 2] has 
 been the lack of secure key management. To address this concern, work has 
 been underway to integrate these features with Barbican [3], which can be 
 used to manage encryption keys across OpenStack.

 We request a feature freeze exception be granted to merge this code [3], 
 which is really a shim between the existing key manager interface in Nova 
 and python-barbicanclient, into Nova [4]. The acceptance of this feature 
 will improve the security of cloud users and operators who use the Cinder 
 volume encryption feature [1], which is currently limited to a single, 
 static encryption key for volumes. Cinder has already merged a similar 
 feature [5] following the review of several patch revisions; not accepting 
 the feature in Nova creates a disparity with Cinder in regards to the 
 management of encryption keys.

 As this is an optional feature that introduces very few changes to 
 pre-existing code, the risk of disruption to existing deployments as well as 
 the risk of regression is minimal. The only objection that has very recently 
 been voiced is the implicit dependency on the Barbican service, which does 
 not yet have experimental jobs in Tempest. Other core reviewers, though, 
 believe that the existing unit tests included with the change are sufficient.

 Thank you for taking the time to consider this request.
 
 I'll sponsor it as it is effectively part of the LVM encryption blueprint
 which I've already sponsored. So we should consider FFE for both those
 blueprints together, rather than in isolation.

Agreed, I kind of assumed we were thinking about them as one thing.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Thierry Carrez
Daniel P. Berrange wrote:
 For a long time I've used the LKML 'subsystem maintainers' model as the
 reference point for ideas. In a more LKML like model, each virt team
 (or other subsystem team) would have their own separate GIT repo with
 a complete Nova codebase, where they did they day to day code submissions,
 reviews and merges. Periodically the primary subsystem maintainer would
 submit a large pull / merge request to the overall Nova maintainer.
 The $1,000,000 question in such a model is what kind of code review
 happens during the big pull requests to integrate subsystem trees. 

Please note that the Kernel subsystem model is actually a trust tree
based on 20 years of trust building. OpenStack is only 4 years old, so
it's difficult to apply the same model as-is.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Docs plan for Juno

2014-09-05 Thread Swartzlander, Ben
Now that the project is incubated, we should be moving our docs from the 
openstack wiki to the openstack-manuals project. Rushil Chugh has volunteered 
to lead this effort so please coordinate any updates to documentation with him 
(and me). Our goal is to have the updates to openstack-manuals upstream by Sept 
22. It will go faster if we can split up the work and do it in parallel.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-05 Thread Thierry Carrez
Michael Still wrote:
 We're soon to hit feature freeze, as discussed in Thierry's recent
 email. I'd like to outline the process for requesting a freeze
 exception:
 
 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be granted
 * exceptions must be granted before midnight, Friday this week
 (September 5) UTC
 * the exception is valid until midnight Friday next week
 (September 12) UTC when all exceptions expire
 
 For reference, our rc1 drops on approximately 25 September, so the
 exception period needs to be short to maximise stabilization time.
 
 John Garbutt and I will both be granting exceptions, to maximise our
 timezone coverage. We will grant exceptions as they come in and gather
 the required number of cores, although I have also carved some time
 out in the nova IRC meeting this week for people to discuss specific
 exception requests.

I'd like to add that every exception approved adds up to create moving
parts at a moment where we want to slow down to let QA and Docs and
other downstream stakeholders catch up.

Obviously, things that are already approved and working their way
through the gate should be in early enough to limit this disruption. But
in general, targeting more than 25% of your juno-3 velocity to -rc1 is a
bit unreasonable. For Nova, that means that more than 7 exceptions is
starting to be a stability issue.

Please keep that in mind every time you go to support a FFE.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Jay Pipes

On 09/05/2014 02:59 AM, Sylvain Bauza wrote:

On 05/09/2014 01:26, Jay Pipes wrote:

On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

Basically +1 with what Daniel is saying (note that, as mentioned, a
side effect of our effort to split out the scheduler will help but
not solve this problem).


The difference between Dan's proposal and the Gantt split is that
Dan's proposal features quite prominently the following:

== begin ==

 - The nova/virt/driver.py class would need to be much better
   specified. All parameters / return values which are opaque dicts
   must be replaced with objects + attributes. Completion of the
   objectification work is mandatory, so there is cleaner separation
    between virt driver impls & the rest of Nova.

== end ==

In other words, Dan's proposal above is EXACTLY what I've been saying
needs to be done to the interfaces between nova-conductor,
nova-compute, and nova-scheduler *before* any split of the scheduler
code is even remotely feasible.

Splitting the scheduler out before this is done would actually not
"help but not solve" this problem -- it would instead further the
problem, IMO.



Jay, we agreed on a plan to carry on, please be sure we're working on
it, see the Gantt meetings logs for what my vision is.


I've attended most of the Gantt meetings, except for a couple recent 
ones due to my house move (finally done, yay!). I believe we are mostly 
aligned on the plan of record, but I see no urgency in splitting out the 
scheduler. I only see urgency on cleaning up the interfaces. But, that 
said, let's not highjack Dan's thread here too much. We can discuss on 
IRC. I was only saying that Don's comment that splitting the scheduler 
out would help solve the bandwidth issues should be predicated on the 
same contingency that Dan placed on splitting out the virt drivers: that 
the internal interfaces be cleaned up, documented and stabilized.


snip


So, this effort requires at least one cycle, and as Dan stated, there is
urgency, so I think we need to identify a short-term solution which
doesn't require refactoring. My personal opinion is what Russell and
Thierry expressed, ie. subteam delegation (to what I call half-cores)
for iterations and only approvals for cores.


Yeah, I don't have much of an issue with the subteam delegation 
proposals. It's just really a technical problem to solve w.r.t. Gerrit 
permissions.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FFE] [nova] Barbican key manager wrapper

2014-09-05 Thread Sean Dague
On 09/05/2014 08:11 AM, Sean Dague wrote:
 On 09/05/2014 07:51 AM, Daniel P. Berrange wrote:
 On Thu, Sep 04, 2014 at 05:19:45PM +, Coffman, Joel M. wrote:
 A major concern about several encryption features within Nova [1, 2] has 
 been the lack of secure key management. To address this concern, work has 
 been underway to integrate these features with Barbican [3], which can be 
 used to manage encryption keys across OpenStack.

 We request a feature freeze exception be granted to merge this code [3], 
 which is really a shim between the existing key manager interface in Nova 
 and python-barbicanclient, into Nova [4]. The acceptance of this feature 
 will improve the security of cloud users and operators who use the Cinder 
 volume encryption feature [1], which is currently limited to a single, 
 static encryption key for volumes. Cinder has already merged a similar 
 feature [5] following the review of several patch revisions; not accepting 
 the feature in Nova creates a disparity with Cinder in regards to the 
 management of encryption keys.

 As this is an optional feature that introduces very few changes to 
 pre-existing code, the risk of disruption to existing deployments as well 
 as the risk of regression is minimal. The only objection that has very 
 recently been voiced is the implicit dependency on the Barbican service, 
 which does not yet have experimental jobs in Tempest. Other core reviewers, 
 though, believe that the existing unit tests included with the change are 
 sufficient.

 Thank you for taking the time to consider this request.

 I'll sponsor it as it is effectively part of the LVM encryption blueprint
 which I've already sponsored. So we should consider FFE for both those
 blueprints together, rather than in isolation.
 
 Agreed, I kind of assumed we were thinking about them as one thing.

There is a real issue in the current patch which I -1ed on Wed around
the way requirements are pulled in.

If you are in FFE there really is an expectation that patches are respun
quickly on feedback. So if this isn't addressed shortly, I'm removing my
sponsorship here.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-05 Thread Sylvain Bauza


On 05/09/2014 13:05, Nikola Đipanov wrote:

On 09/04/2014 10:25 PM, Solly Ross wrote:

Anyway, I think it would be useful to have some sort of page where people
could say "I'm an SME in X, ask me for reviews" and then patch submitters
could go and say, "oh, I need someone to review my patch about storage
backends, let me ask sross."


This is a good point - I've been thinking along similar lines that we
really could have a huge win in terms of the review experience by
building a tool (maybe a social network looking one :)) that relates
reviews to people being able to do them, visualizes reviewer karma and
other things that can help make the code submissions and reviews more
human friendly.

Dan seems to dismiss the idea of improved tooling as something that can
get us only thus far, but I am not convinced. However - this will
require even more manpower and we are already ridiculously short on that
so...


Why can't we just create some nova groups in Gerrit and provide some
docs, in a file, about which group should be pinged?
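
For instance, the mapping could be as small as this (hypothetical content;
no such file or Gerrit groups exist today, and the nicks are examples only):

    # Hypothetical reviewers map: which Gerrit group / people to ping for
    # which part of the tree.
    SUBSYSTEM_CONTACTS = {
        'nova/virt/libvirt/': {'gerrit_group': 'nova-libvirt',
                               'ping': ['alice', 'bob']},
        'nova/scheduler/':    {'gerrit_group': 'nova-scheduler',
                               'ping': ['carol']},
        'nova/api/':          {'gerrit_group': 'nova-api',
                               'ping': ['dave']},
    }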


-Sylvain


N.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][FFE] websocket-proxy-to-host-security

2014-09-05 Thread Daniel P. Berrange
On Thu, Sep 04, 2014 at 12:59:07PM -0400, Solly Ross wrote:
 I would like to request a feature freeze exception for the Websocket Proxy to 
 Host Security.
 The spec [1] was approved for Nova, and the patches [2] are currently sitting 
 there with one
 +2 (courtesy of @danpb), with a +1 from Jenkins.
 
 For a TL;DR on the spec, essentially this patch series implements a framework 
 for authenticating
 and encrypting communication between the VNC proxy and the actual compute 
 nodes, and then
 implements a TLS/x509 driver using that framework.
 
 The patches don't touch much, and are enabled optionally, so they should be 
 relatively safe.
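 
 Not the actual patch series, just a minimal sketch of the general idea
 (mutually authenticated TLS between a proxy and a compute host) using
 Python's standard ssl module; file paths and hostnames are placeholders:
 
    import socket
    import ssl

    # Compute-host side: only accept proxies with a cert signed by our CA.
    def make_server_socket(bind_addr, certfile, keyfile, ca_certs):
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH,
                                         cafile=ca_certs)
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
        ctx.verify_mode = ssl.CERT_REQUIRED   # require a client certificate
        sock = socket.socket()
        sock.bind(bind_addr)
        sock.listen(5)
        return ctx.wrap_socket(sock, server_side=True)

    # Proxy side: verify the compute host's cert and present our own.
    def connect_to_compute(host, port, certfile, keyfile, ca_certs):
        ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                         cafile=ca_certs)
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
        return ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)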

I think this is good work and want to see it in Nova. At the same time
though, the first version of this only got up onto review 2 weeks ago,
and I think other proposed FFEs are higher priority to approve.

So I'd sponsor it only if we have capacity after considering the other
items, specifically serial consoles, SRIOV, Ironic and NUMA work.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Day, Phil
Hi,

I'd like to ask for a FFE for the 3 patchsets that implement quotas for server 
groups.

Server groups (which landed in Icehouse) provide a really useful anti-affinity
filter for scheduling that a lot of customers would like to use, but without
some form of quota control to limit the amount of anti-affinity it's impossible
to enable it as a feature in a public cloud.

The code itself is pretty simple - the number of files touched is a side-effect 
of having three V2 APIs that report quota information and the need to protect 
the change in V2 via yet another extension.

https://review.openstack.org/#/c/104957/
https://review.openstack.org/#/c/116073/
https://review.openstack.org/#/c/116079/

Phil

 -Original Message-
 From: Sahid Orentino Ferdjaoui [mailto:sahid.ferdja...@redhat.com]
 Sent: 04 September 2014 13:42
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] FFE request serial-ports
 
 Hello,
 
 I would like to request a FFE for 4 changesets to complete the blueprint
 serial-ports.
 
 Topic on gerrit:
 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+br
 anch:master+topic:bp/serial-ports,n,z
 
 Blueprint on launchpad.net:
   https://blueprints.launchpad.net/nova/+spec/serial-ports
 
 They have already been approved but didn't get enough time to be merged
 by the gate.
 
 Sponsored by:
 Daniel Berrange
 Nikola Dipanov
 
 s.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [FFE] alternative request for v2-on-v3-api

2014-09-05 Thread Sean Dague
On 09/04/2014 07:54 PM, Christopher Yeoh wrote:
 On Thu, 4 Sep 2014 23:08:09 +0900
 Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:
 
 Hi

 I'd like to request FFE for v2.1 API patches.

 This request is different from Christopher's one.
 His request is for the approved patches, but this is
 for some patches which are not approved yet.

 https://review.openstack.org/#/c/113169/ : flavor-manage API
 https://review.openstack.org/#/c/114979/ : quota-sets API
 https://review.openstack.org/#/c/115197/ : security_groups API

 I think these APIs are used in many cases and are important, so I'd like
 to test the v2.1 API with them together during the RC phase.
 Two of them have gotten one +2 on each PS and the other one
 has gotten one +1.

 
 I'm happy to sponsor these extra changesets - I've reviewed them all
 previously. Risk to the rest of Nova is very low.

I'll also sponsor them, they also have the nice effect of being negative
KLOC patches.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sylvain Bauza


On 05/09/2014 14:48, Jay Pipes wrote:

On 09/05/2014 02:59 AM, Sylvain Bauza wrote:

On 05/09/2014 01:26, Jay Pipes wrote:

On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

Basically +1 with what Daniel is saying (note that, as mentioned, a
side effect of our effort to split out the scheduler will help but
not solve this problem).


The difference between Dan's proposal and the Gantt split is that
Dan's proposal features quite prominently the following:

== begin ==

 - The nova/virt/driver.py class would need to be much better
   specified. All parameters / return values which are opaque dicts
   must be replaced with objects + attributes. Completion of the
   objectification work is mandatory, so there is cleaner separation
    between virt driver impls & the rest of Nova.

== end ==

In other words, Dan's proposal above is EXACTLY what I've been saying
needs to be done to the interfaces between nova-conductor,
nova-compute, and nova-scheduler *before* any split of the scheduler
code is even remotely feasible.

Splitting the scheduler out before this is done would actually not
"help but not solve" this problem -- it would instead further the
problem, IMO.



Jay, we agreed on a plan to carry on, please be sure we're working on
it, see the Gantt meetings logs for what my vision is.


I've attended most of the Gantt meetings, except for a couple recent 
ones due to my house move (finally done, yay!). I believe we are 
mostly aligned on the plan of record, but I see no urgency in 
splitting out the scheduler. I only see urgency on cleaning up the 
interfaces. But, that said, let's not highjack Dan's thread here too 
much. We can discuss on IRC. I was only saying that Don's comment that 
splitting the scheduler out would help solve the bandwidth issues 
should be predicated on the same contingency that Dan placed on 
splitting out the virt drivers: that the internal interfaces be 
cleaned up, documented and stabilized.


snip


So, this effort requires at least one cycle, and as Dan stated, there is
urgency, so I think we need to identify a short-term solution which
doesn't require refactoring. My personal opinion is what Russell and
Thierry expressed, ie. subteam delegation (to what I call half-cores)
for iterations and only approvals for cores.


Yeah, I don't have much of an issue with the subteam delegation 
proposals. It's just really a technical problem to solve w.r.t. Gerrit 
permissions.




Well, that just requires new Gerrit groups and a new label (like 
Subteam-Approved) so that members of this group could just 
+Subteam-Approved if they're OK (here I imagine 2 people from the group 
labelling it)


Of course, all the groups could have permissions to label any file of 
Nova, but here we can just define a gentleman's agreement, like we do 
for having two +2s before approving.


That would mean that cores could just search using Gerrit with 
'label:Subteam-Approved=1'


-Sylvain


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Nikola Đipanov
Since this has not been marked 'Approved' yet, I want to make sure that
this is not because of the number of sponsors. Two core members have already
sponsored it, and as per [1] cores can sponsor their own FFEs, so that's 3.

N.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044669.html

On 09/04/2014 01:58 PM, Nikola Đipanov wrote:
 Hi team,
 
 I am requesting the exception for the feature from the subject (find
 specs at [1] and outstanding changes at [2]).
 
 Some reasons why we may want to grant it:
 
 First of all, all patches were approved in time and just lost the
 gate race.
 
 Rejecting it makes little sense really, as it has been commented on by a
 good chunk of the core team, most of the invasive stuff (db migrations
 for example) has already merged, and the few parts that may seem
 contentious have either been discussed and agreed upon [3], or can
 easily be addressed in subsequent bug fixes.
 
 It would be very beneficial to merge it so that we actually get real
 testing on the feature ASAP (scheduling features are not tested in the
 gate so we need to rely on downstream/3rd party/user testing for those).
 
 Thanks,
 
 Nikola
 
 [1]
 http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-driver-numa-placement.rst
 [2]
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-numa-placement,n,z
 [3] https://review.openstack.org/#/c/111782/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sylvain Bauza


Le 05/09/2014 12:48, Sean Dague a écrit :

On 09/05/2014 03:02 AM, Sylvain Bauza wrote:

Le 05/09/2014 01:22, Michael Still a écrit :

On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange
berra...@redhat.com wrote:

[Heavy snipping because of length]


The radical (?) solution to the nova core team bottleneck is thus to
follow this lead and split the nova virt drivers out into separate
projects and delegate their maintenance to new dedicated teams.

   - Nova becomes the home for the public APIs, RPC system, database
  persistence and the glue that ties all this together with the
 virt driver API.

   - Each virt driver project gets its own core team and is responsible
  for dealing with review, merge & release of their codebase.

I think this is the crux of the matter. We're not doing a great job of
landing code at the moment, because we can't keep up with the review
workload.

So far we've had two proposals mooted:

   - slots / runways, where we try to rate limit the number of things
we're trying to review at once to maintain focus
   - splitting all the virt drivers out of the nova tree

Ahem, IIRC, there is a third proposal for Kilo:
  - create subteam half-cores responsible for reviewing patch
iterations and sending approval requests to cores once they consider the
patch stable enough.

As I explained, it would free up reviewing time for cores
without losing control over what is being merged.

I don't really understand how the half-core idea works outside of a math
equation, because the point of core is to have trust in the
judgement of your fellow core members so that they can land code when
you aren't looking. I'm not sure how I manage to build up half trust in
someone any quicker.


Well, this thread is becoming huge, so it's getting hard to follow all
the discussion, but I explained the idea elsewhere. Let me just provide
it here too:
The idea is *not* for halfcores to land patches. The core team will still
be fully responsible for approving patches. The main problem in Nova is
that cores spend lots of time reviewing each iteration of a patch while
also having to judge whether the patch is good at all.


That's really time-consuming and, most of the time, quite
frustrating, as it requires following the patch's whole life, so there is
a high risk that a core's attention drifts away over the life of
the patch.


Here, the idea is to reduce this time dramatically by having teams
dedicated to specific areas (as the vast majority of reviewers already
specialize anyway) who could take on reviewing
all the iterations themselves. Of course, that doesn't mean cores would lose
the ability to follow a specific patch and bypass the halfcores;
this is just to help them if they're overwhelmed.


On the question of trusting cores or halfcores, I can only say that the
Nova team needs to grow or be divided anyway, so delegating
trust has to become real at some point.


This whole process is IMHO very encouraging for newcomers, because it
creates dedicated teams that can help them improve their changes
rather than waiting two months to get a -1 and a frank reply.



As I said elsewhere, I dislike the slots proposal because it sends
developers the message that the price of contributing to
Nova is increasing. Again, prioritizing work doesn't by itself
increase velocity; those are two distinct subjects.


-Sylvain



-Sean




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Gordon Sim

On 09/04/2014 01:14 PM, Sean Dague wrote:

https://dague.net/2014/08/26/openstack-as-layers/


Just wanted to say that I found this article very useful indeed and 
agree with the points you make in it.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Jay Pipes

On 09/05/2014 08:58 AM, Sylvain Bauza wrote:

Le 05/09/2014 14:48, Jay Pipes a écrit :

On 09/05/2014 02:59 AM, Sylvain Bauza wrote:

Le 05/09/2014 01:26, Jay Pipes a écrit :

On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

Basically +1 with what Daniel is saying (note that, as mentioned, a
side effect of our effort to split out the scheduler will help but
not solve this problem).


The difference between Dan's proposal and the Gantt split is that
Dan's proposal features quite prominently the following:

== begin ==

 - The nova/virt/driver.py class would need to be much better
   specified. All parameters / return values which are opaque dicts
   must be replaced with objects + attributes. Completion of the
   objectification work is mandatory, so there is cleaner separation
   between virt driver impls & the rest of Nova.

== end ==

In other words, Dan's proposal above is EXACTLY what I've been saying
needs to be done to the interfaces between nova-conductor,
nova-compute, and nova-scheduler *before* any split of the scheduler
code is even remotely feasible.

Splitting the scheduler out before this is done would not even "help but
not solve" this problem -- it would instead make the problem worse, IMO.



Jay, we agreed on a plan to carry on, and rest assured we're working on
it -- see the Gantt meeting logs for my vision.


I've attended most of the Gantt meetings, except for a couple recent
ones due to my house move (finally done, yay!). I believe we are
mostly aligned on the plan of record, but I see no urgency in
splitting out the scheduler. I only see urgency on cleaning up the
interfaces. But, that said, let's not hijack Dan's thread here too
much. We can discuss on IRC. I was only saying that Don's comment that
splitting the scheduler out would help solve the bandwidth issues
should be predicated on the same contingency that Dan placed on
splitting out the virt drivers: that the internal interfaces be
cleaned up, documented and stabilized.

snip


So, this effort requires at least one cycle, and as Dan stated, there is
urgency, so I think we need to identify a short-term solution which
doesn't require refactoring. My personal opinion matches what Russell and
Thierry expressed, i.e. subteam delegation (to what I call half-cores)
for iterations, with final approvals left to cores.


Yeah, I don't have much of an issue with the subteam delegation
proposals. It's just really a technical problem to solve w.r.t. Gerrit
permissions.



Well, that just requires new Gerrit groups and a new label (like
Subteam-Approved) so that members of this group could just
+Subteam-Approved if they're OK (here I imagine 2 people from the group
labelling it)


And what about code that crosses module boundaries? Would we need a 
LibvirtSubteamApproved, SchedulerSubteamApproved, etc?



Of course, all the groups could have permissions to label any file of
Nova, but here we can just define a gentleman's agreement, like we do
for having two +2s before approving.


Yes, it would be a gentle-person's agreement. :) Gerrit cannot enforce 
this kind of policy, that's what I was getting at.



That would mean that cores could just search in Gerrit with
'label:Subteam-Approved=1'.


Interesting, yes, that would be useful.

-jay


-Sylvain


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Day, Phil


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 05 September 2014 11:49
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out
 virt drivers
 
 On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
 
 
  Ahem, IIRC, there is a third proposal for Kilo:
   - create subteam half-cores responsible for reviewing patch
  iterations and sending approval requests to cores once they consider the
  patch stable enough.

  As I explained, it would free up reviewing time for cores
  without losing control over what is being merged.

 I don't really understand how the half-core idea works outside of a math
 equation, because the point of core is to have trust in the judgement of
 your fellow core members so that they can land code when you aren't
 looking. I'm not sure how I manage to build up half trust in someone any
 quicker.
 
   -Sean
 
You seem to be looking at a model, Sean, where trust is purely binary - you're
either trusted to know about all of Nova or not trusted at all.

What Sylvain is proposing (I think) is something more akin to having folks that
are trusted in some areas of the system and/or trusted to be right enough of
the time that their reviewing skills take a significant part of the burden off
the core reviewers. That kind of incremental development of trust feels like
a fairly natural model to me. It's somewhere between the full divide-and-rule
approach of splitting out various components (which doesn't feel like a
short-term solution) and the blanket approach of adding more cores.

Making it easier to incrementally grant trust, and having the processes and
will to remove it if it's seen to be misused, feels to me like it has to be part
of the solution to breaking out of the "we need more people we trust, but we
don't feel comfortable trusting more than N people at any one time" bind. Sometimes
you have to give people a chance in small, well-defined and controlled steps.

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-05 Thread Doug Hellmann

On Sep 3, 2014, at 2:03 PM, Ken Giusti kgiu...@gmail.com wrote:

 Hello,
 
 I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
 driver:
 
   https://review.openstack.org/#/c/75815/
 
 Blueprint:
 
   
 https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
 
 I presented this work at the Juno summit [1]. The associated spec has
 been approved and merged [2].
 
 The proposed patch has been in review since before icehouse, with a
 couple of non-binding +1's.  A little more time is necessary to get
 core reviews.
 
 The patch includes a number of functional tests, and I've proposed a
 CI check that will run those tests [3].  This patch is currently
 pending support for bare fedora 20 nodes in CI.  I'm planning to add
 additional test cases and devstack support in the future.
 
 I'm in the process of adding documentation to the RPC section of the
 Openstack manual.
 
 Justification:
 
 I think there's a benefit to have this driver available as an
 _experimental_ feature in Juno, and the risk of inclusion is minimal
 as the driver is optional, disabled by default, and will not have
 impact on any system that does not explicitly enable it.
 
 Unlike previous versions of the protocol, AMQP 1.0 is the official
 standard for AMQP messaging (ISO/IEC 19464).  Support for it is
 arriving from multiple different messaging system vendors [4].
 
 Having access to AMQP 1.0 functionality in openstack sooner rather
 than later gives the developers of AMQP 1.0 messaging systems the
 opportunity to validate their AMQP 1.0 support in the openstack
 environment.  Likewise, easier access to this driver by the openstack
 developer community will help us find and fix any issues in a timely
 manner as adoption of the standard grows.
 
 Please consider this feature to be a part of Juno-3 release.
 
 Thanks,
 
 Ken
 
 
 -- 
 Ken Giusti  (kgiu...@gmail.com)
 

This FFE was approved and the change linked above is currently in the merge 
queue.

Doug



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Docs plan for Juno

2014-09-05 Thread Anne Gentle
Hi Ben,
Since manila just entered incubation, the openstack-manuals repo and common
documentation will not include it until it is integrated. During incubation
we ask you to start documentation in your own repo and identify what will
eventually move into common docs. See
https://wiki.openstack.org/wiki/Documentation/IncubateIntegrate.

You just got incubated, now what?

While you're incubating we like to see documentation in your repo in
doc/source, built with Sphinx. You can start to use the oslosphinx theme, a
common theme for OpenStack projects, at the point your project begins
incubation. You must indicate that you're incubating by setting the
oslosphinx configuration: ensure your Sphinx conf.py has incubating set
to True for the theme:

   html_theme_options = {'incubating': True}
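
For reference, a minimal conf.py sketch along those lines (illustrative only:
the project name is a placeholder, and the assumption that the theme is
enabled by listing oslosphinx as an extension should be checked against the
oslosphinx documentation for your release):

    # doc/source/conf.py -- minimal sketch for an incubating project
    project = u'Manila'
    master_doc = 'index'
    extensions = ['oslosphinx']                  # assumed way of enabling the theme
    html_theme_options = {'incubating': True}    # mark the project as incubating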

You can publish to docs.openstack.org/developer while incubating. Prior to
incubation we suggest publishing to ReadTheDocs.org or your own domain, and
avoiding use of the OpenStack logo or trademark unless you have permission as
outlined in the brand guidelines at http://www.openstack.org/brand/.

While you're incubating and gathering developer contributors, be sure to
also find writer resources. Your developers can probably manage the
contributor developer documentation but be thinking about how you'll write
your REST API reference documentation, your install documentation, your
configuration documentation, your operator and admin documentation, and
end-user documentation for CLI and dashboard interaction. You'll need to
prove you're able to support your users (ops, admins, and end-users) when
graduating to integrated, through docs and answers on ask.openstack.org.

While you're integrating, I'd suggest that your doc/source directory become
the landing place for your documentation. What you'll want to look forward
to is writing in such a way that your pages can be used for these
deliverables in the official OpenStack documentation, found at
https://wiki.openstack.org/wiki/Documentation/ContentSpecs.

Always be aware that the OpenStack Documentation Program prioritizes the
core projects above integrated, but we provide frameworks and reviews and
try to help with orientation as much as possible. We're just not in a
position to provide shared resources such as technical writers assigned to
each project, for example. Rather, we expect the projects to assign a doc
liaison who can interact with the documentation group through the
openstack-docs mailing list and #openstack-doc IRC channel. The docs team
holds weekly meetings and it's great to have doc liaisons attend the
meetings that are in their time zone. The meeting schedule and agenda are
found here: https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting.




On Fri, Sep 5, 2014 at 7:27 AM, Swartzlander, Ben 
ben.swartzlan...@netapp.com wrote:

  Now that the project is incubated, we should be moving our docs from the
 openstack wiki to the openstack-manuals project. Rushil Chugh has
 volunteered to lead this effort so please coordinate any updates to
 documentation with him (and me). Our goal is to have the updates to
 openstack-manuals upstream by Sept 22. It will go faster if we can split up
 the work and do it in parallel.



 -Ben



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Sean Dague
While reviewing this zookeeper service group fix in Nova -
https://review.openstack.org/#/c/102639/ it was exposed that the
zookeeper tests aren't running in infra.

The crux of the issue is that zookeeper python modules are C extensions.
So you have to either install from packages (which we don't do in unit
tests) or install from pip, which means forcing zookeeper dev packages
locally. Realistically this is the same issue we end up with for mysql
and pg, but given their wider usage we just forced that pain on developers.

But it seems like a bad stand off between testing upstream and testing
normal path locally.

Big picture it would be nice to not require a ton of dev libraries
locally for optional components, but still test them upstream. So that
in the base case I'm not running zookeeper locally, but if it fails
upstream because I broke something in zookeeper, it's easy enough to
spin up that dev env that has it.

Which feels like we need some decoupling on our requirements vs. tox
targets to get there. CC to Monty and Clark as our super awesome tox
hackers to help figure out if there is a path forward here that makes sense.
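
One purely illustrative shape this could take is an opt-in tox environment
that only pulls in the extra C-extension bindings when explicitly requested
(the environment name and package below are placeholders, not what infra
actually runs):

    [testenv:py27-zookeeper]
    # placeholder binding for the zookeeper C client; substitute whatever the
    # service group driver actually needs
    deps =
        {[testenv]deps}
        zc-zookeeper-static
    commands = {[testenv]commands}

The default py27 target would then stay free of the extra dev libraries,
while the gate (or anyone chasing a zookeeper-specific failure) could run
the py27-zookeeper flavour on demand.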

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Victoria Martínez de la Cruz
2014-09-05 9:09 GMT-03:00 Sean Dague s...@dague.net:

 On 09/05/2014 07:39 AM, Robert Collins wrote:
  On 5 September 2014 23:33, Sean Dague s...@dague.net wrote:
 
  I think realistically a self certification process that would have
  artifacts in a discoverable place. I was thinking something along the
  lines of a baseball card interface with a short description of the
  project, a list of the requirements to deploy (native and python), a
  link to the API docs, a link to current test coverage, as well as some
  statement on the frequency of testing, stats on open bugs and current
  trending and review backlog, current user testimonials. Basically the
  kind of first stage analysis that a deployer would do before pushing
  this out to their users.
 
  Add into that their deployment support - e.g. do they have TripleO
  support // Chef // Fuel // Puppet etc etc etc.

 ACK, good points. I expect packaging might go on that list as well.

 I won't pretend I've got the whole baseball card self cert thing worked
 out, just trying to sketch an idea that might bear fruit.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi all,

Thanks for bringing up this topic Flavio and everyone for the feedback!

From my humble junior developer point of view I would like to share some
comments about some of the concerns mentioned.

- Added complexity on adding Zaqar to the ecosystem and the NoSQL concern

I won't deny that adding Zaqar, if operators need it, will make OpenStack a
little harder to deploy. But that is true of any extra project that gets
added.

Complexity is part of OpenStack though. We, as OpenStack, are trying to
provide a software solution for a really complex need. And that is what
makes us great.

IMO operators interested in using Zaqar will consider the pros and cons of
adding it and will make their choice based on that.

And it's not about the tools we use. NoSQL was chosen for
Zaqar because it was proven to do a better job. And, within the whole family of
NoSQL solutions, we chose the ones that were considered easier to deploy.

It's a technology that has been in use for a long time now and it fits
Zaqar's requirements perfectly. In this regard, I think there is a
long way to go, and it is something the Zaqar team cares about every day. Zaqar
will keep researching the best solutions for its users and working on
adding support for them, as every other project does.

- Reinventing or not reinventing a messaging system

In the last couple of months I have been working on adding AMQP as a
storage/transport backend for Zaqar. During that period I managed to learn
a lot about other messaging systems, including the ones that have been
discussed so far.

On that basis I can say that Zaqar covers different use
cases that the mentioned technologies are not meant to cover. And best
of all, it's being specialized for the cloud.

Most widely used cloud services already have their own messaging and
notification solutions, and we should be up to the game on that. Right now it
doesn't seem like that need is being filled.

- Integration as a sign of stability

Right now we are in a situation in which Zaqar is a mature project with
many features to provide, but in order to keep growing and getting
better it needs to integrate with other projects and start showing what it is
capable of.

Zaqar is robust, well documented and with a team willing to keep on
enhancing it.

It doesn't matter what place it takes in the stack; what matters is that
it's on the stack.

Hope this all makes sense and please correct me if I'm wrong. I want the
best for both OpenStack and Zaqar, as you do.

All the best,

Victoria
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Sylvain Bauza


Le 05/09/2014 15:11, Jay Pipes a écrit :

On 09/05/2014 08:58 AM, Sylvain Bauza wrote:

Le 05/09/2014 14:48, Jay Pipes a écrit :

On 09/05/2014 02:59 AM, Sylvain Bauza wrote:

Le 05/09/2014 01:26, Jay Pipes a écrit :

On 09/04/2014 10:33 AM, Dugger, Donald D wrote:

Basically +1 with what Daniel is saying (note that, as mentioned, a
side effect of our effort to split out the scheduler will help but
not solve this problem).


The difference between Dan's proposal and the Gantt split is that
Dan's proposal features quite prominently the following:

== begin ==

 - The nova/virt/driver.py class would need to be much better
   specified. All parameters / return values which are opaque dicts
   must be replaced with objects + attributes. Completion of the
   objectification work is mandatory, so there is cleaner separation
   between virt driver impls & the rest of Nova.

== end ==

In other words, Dan's proposal above is EXACTLY what I've been saying
needs to be done to the interfaces between nova-conductor,
nova-compute, and nova-scheduler *before* any split of the scheduler
code is even remotely feasible.

Splitting the scheduler out before this is done would not even "help but
not solve" this problem -- it would instead make the problem worse, IMO.



Jay, we agreed on a plan to carry on, and rest assured we're working on
it -- see the Gantt meeting logs for my vision.


I've attended most of the Gantt meetings, except for a couple recent
ones due to my house move (finally done, yay!). I believe we are
mostly aligned on the plan of record, but I see no urgency in
splitting out the scheduler. I only see urgency on cleaning up the
interfaces. But, that said, let's not hijack Dan's thread here too
much. We can discuss on IRC. I was only saying that Don's comment that
splitting the scheduler out would help solve the bandwidth issues
should be predicated on the same contingency that Dan placed on
splitting out the virt drivers: that the internal interfaces be
cleaned up, documented and stabilized.

snip

So, this effort requires at least one cycle, and as Dan stated, there is
urgency, so I think we need to identify a short-term solution which
doesn't require refactoring. My personal opinion matches what Russell and
Thierry expressed, i.e. subteam delegation (to what I call half-cores)
for iterations, with final approvals left to cores.


Yeah, I don't have much of an issue with the subteam delegation
proposals. It's just really a technical problem to solve w.r.t. Gerrit
permissions.



Well, that just requires new Gerrit groups and a new label (like
Subteam-Approved) so that members of this group could just
+Subteam-Approved if they're OK (here I imagine 2 people from the group
labelling it)


And what about code that crosses module boundaries? Would we need a 
LibvirtSubteamApproved, SchedulerSubteamApproved, etc?




Luckily not. I think we only need one more label (we only have 3 now:
Verified, Code-Review, Approved).


Here the key thing is having a search label that cores can consume,
because they know that this label is worth their attention. If something
crosses module boundaries, then that's probably something a core would
handle anyway.


For example, if I'm an API halfcore, I can subteam-approve all the
changes related to the API itself (which encourages small and readable
patches, btw), but I pass if I'm looking at something I don't
know well enough (or I just provide a +1).


The underlying idea is to encourage reviewing, because the bar is not as
high as becoming core. On the other hand, if a halfcore
becomes trusted enough (because they also provide good +1s in other
areas and are sufficiently involved in the release process), then that
person is a good candidate for becoming core.



As you identified, most of the proposal is based on gentle-person
agreement, because Gerrit is not flexible enough to enforce it (although
since 2.8 you can search for all patches touching a path, like
file:^nova/scheduler/*).


-Sylvain

Of course, all the groups could have permissions to label any file of
Nova, but here we can just define a gentleman's agreement, like we do
for having two +2s before approving.


Yes, it would be a gentle-person's agreement. :) Gerrit cannot enforce 
this kind of policy, that's what I was getting at.



That would mean that cores could just search in Gerrit with
'label:Subteam-Approved=1'.


Interesting, yes, that would be useful.

-jay


-Sylvain


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Eric Windisch


  - Each virt driver project gets its own core team and is responsible
for dealing with review, merge & release of their codebase.

 Note, I really do mean *all* virt drivers should be separate. I do
 not want to see some virt drivers split out and others remain in tree
 because I feel that signifies that the out of tree ones are second
 class citizens.


+1. I made this same proposal to Michael during the mid-cycle. However, I
haven't wanted to conflate this issue with bringing Docker back into Nova.
For the Docker driver in particular, I feel that being able to stay out of
tree and having our own core team would be beneficial, but  I wouldn't want
to do this unless it applied equally to all drivers.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces Patch

2014-09-05 Thread Andrew Melton
Hey Devs,

I'd like to request a feature freeze exception for: 
https://review.openstack.org/#/c/94915/

This feature is the final patch set for the User Namespace BP 
(https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces). This 
is an important feature for libvirt-lxc because in greatly increases the 
security of running libvirt-lxc based containers. The code for this feature has 
been up for a couple months now and has had plenty of time to go through 
review. The code itself is solid by now and functionally it hasn't changed much 
in the last month or so. Lastly, the only thing holding up the patch from 
merging this week was a multitude of bugs in the gate.

Thanks,
Andrew Melton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara][FFE] Requesting exception for Swift trust authentication blueprint

2014-09-05 Thread Michael McCune
hey folks,

I am requesting an exception for the Swift trust authentication blueprint[1]. 
This blueprint addresses a security bug in Sahara and represents a significant 
move towards increased security for Sahara clusters. There are several reviews 
underway [2], with one or two more starting today or Monday.

This feature is initially implemented as optional and as such will have minimal 
impact on current user deployments. By default it is disabled and requires no 
additional configuration or management from the end user.

My feeling is that there has been vigorous debate and discussion surrounding 
the implementation of this blueprint and there is consensus among the team that 
these changes are needed. The code reviews for the bulk of the work have been 
positive thus far and I have confidence these patches will be accepted within 
the next week.

thanks for considering this exception,
mike


[1]: 
https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication
[2]: 
https://review.openstack.org/#/q/status:open+topic:bp/edp-swift-trust-authentication,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces Patch

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 01:49:20PM +, Andrew Melton wrote:
 Hey Devs,
 
 I'd like to request a feature freeze exception for: 
 https://review.openstack.org/#/c/94915/
 
 This feature is the final patch set for the User Namespace BP
 (https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces).
 This is an important feature for libvirt-lxc because it greatly increases
 the security of running libvirt-lxc based containers. The code for this
 feature has been up for a couple months now and has had plenty of time
 to go through review. The code itself is solid by now and functionally
 it hasn't changed much in the last month or so. Lastly, the only thing
 holding up the patch from merging this week was a multitude of bugs in
 the gate.

Since we've already merged 9 out of the 10 total patches for this feature,
it is a no brainer to merge this last one.

I sponsor it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Khanh-Toan Tran
+1 for ServerGroup quotas. It's been a while since this feature was
discussed and approved. As a public cloud provider we really want to get
ServerGroup into production. However, without quotas it does more harm than
good. Since ServerGroup (and even its novaclient command) was merged in
Icehouse, IMO it is reasonable to secure it in Juno. Otherwise it's a
waste. And as Phil said, it is not much of a change.

Toan

 -Original Message-
 From: Day, Phil [mailto:philip@hp.com]
 Sent: Friday, 5 September 2014 14:57
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [nova] FFE server-group-quotas

 Hi,

 I'd like to ask for an FFE for the 3 patchsets that implement quotas for
 server groups.

 Server groups (which landed in Icehouse) provide a really useful
 anti-affinity filter for scheduling that a lot of customers would like to
 use, but without some form of quota control to limit the amount of
 anti-affinity it's impossible to enable it as a feature in a public cloud.

 The code itself is pretty simple - the number of files touched is a
 side-effect of having three V2 APIs that report quota information and the
 need to protect the change in V2 via yet another extension.

 https://review.openstack.org/#/c/104957/
 https://review.openstack.org/#/c/116073/
 https://review.openstack.org/#/c/116079/

 Phil

  -Original Message-
  From: Sahid Orentino Ferdjaoui [mailto:sahid.ferdja...@redhat.com]
  Sent: 04 September 2014 13:42
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [nova] FFE request serial-ports
 
  Hello,
 
  I would like to request a FFE for 4 changesets to complete the
  blueprint serial-ports.
 
  Topic on gerrit:
 
  https://review.openstack.org/#/q/status:open+project:openstack/nova+br
  anch:master+topic:bp/serial-ports,n,z
 
  Blueprint on launchpad.net:
https://blueprints.launchpad.net/nova/+spec/serial-ports
 
  They have already been approved but didn't get enough time to be
  merged by the gate.
 
  Sponsored by:
  Daniel Berrange
  Nikola Dipanov
 
  s.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Jay Pipes

On 09/05/2014 06:29 AM, John Garbutt wrote:

Scheduler: I think we need to split out the scheduler with a similar
level of urgency. We keep blocking features on the split, because we
know we don't have the review bandwidth to deal with them. Right now I
am talking about a compute related scheduler in the compute program,
that might evolve to worry about other services at a later date.


-1

Not without first cleaning up the interfaces around resource tracking, claim
creation and processing, and the communication interfaces between
nova-conductor, nova-scheduler, and nova-compute.


I see no urgency at all in splitting out the scheduler. The cleanup of 
the interfaces around the resource tracker and scheduler has great 
priority, though, IMO.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirt version_cap, a postmortem

2014-09-05 Thread John Garbutt
On 3 September 2014 21:57, Joe Gordon joe.gord...@gmail.com wrote:
 On Sat, Aug 30, 2014 at 9:08 AM, Mark McLoughlin mar...@redhat.com wrote:
 Hey

 The libvirt version_cap debacle continues to come up in conversation and
 one perception of the whole thing appears to be:

   A controversial patch was ninjaed by three Red Hat nova-cores and
   then the same individuals piled on with -2s when a revert was proposed
   to allow further discussion.

 I hope it's clear to everyone why that's a pretty painful thing to hear.
 However, I do see that I didn't behave perfectly here. I apologize for
 that.

 In order to understand where this perception came from, I've gone back
 over the discussions spread across gerrit and the mailing list in order
 to piece together a precise timeline. I've appended that below.

 Some conclusions I draw from that tedious exercise:

 Thank you for going through and doing this.

+1

  - Some people came at this from the perspective that we already have
a firm, unwritten policy that all code must have functional written
tests. Others see "test all the things" as a
worthy aspiration, but only one of a number of nuanced factors
that need to be taken into account when considering the addition of
a new feature.

 Confusion over our testing policy sounds like the crux of one of the issues
 here. Having so many unwritten policies has led to confusion in the past
 which is why I started
 http://docs.openstack.org/developer/nova/devref/policies.html, hopefully by
 writing these things down in the future this sort of confusion will arise
 less often.

 Until this whole debacle I didn't even know there was a dissenting opinion
 on what our testing policy is. In every conversation I have seen up until
 this point, the question was always how to raise the bar on testing.  I
 don't expect us to be able to get to the bottom of this issue in a ML
 thread, but hopefully we can begin the testing policy conversation here so
 that we may be able to make a breakthrough at the summit.

+1

I certainly feel that we need a test policy we are all happy to
enforce. I am sure we can resolve this. I have some ideas, but I feel
like we should meet in person to discuss this one. I am really bad at
trying to discuss this kind of thing in text form.

 While I cannot speak for anyone else, I did grumble a bit at the mid-cycle
 about the behavior on Dan's first devref patch,
 https://review.openstack.org/#/c/103923/. This was the first time I saw 3
 '-2's on a single patch revision. To me 1 or 2 '-2's gives the perception of
 'hold on there, lets discuss this more first,' but 3 '-2's is just piling on
 and is very confrontational in nature. I was taken aback by this behavior
 and still don't know what to say or even if my reaction is justified.

People were angry; this highlighted that disagreement.
That led to us trying to resolve the immediate point of conflict.
It would have been worse if there had been no communication.

 To take an even further step back - successful communities like ours
 require a huge amount of trust between the participants. Trust requires
 communication and empathy. If communication breaks down and the pressure
 we're all under erodes our empathy for each others' positions, then
 situations can easily get horribly out of control.

 This isn't a pleasant situation and we should all strive for better.

+1

I think we have now identified where we don't agree.

Looking forward to resolving this, in person, at the summit.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Keystone] Steps toward Kerberos and Federation

2014-09-05 Thread Adam Young

On 09/05/2014 04:49 AM, Marco Fargetta wrote:

Hi,

I am wondering if the solution I was trying to sketch in the spec
https://review.openstack.org/#/c/96867/13 is not easier to implement
and manage than the steps highlighted up to step 2. Maybe the spec is not
yet there and should be improved (I will abandon it or move it to Kilo as
Marek suggests), but I think the overall scheme is better than trying to
complicate the communication between Horizon and Keystone, IMHO.

That is a very well written, detailed spec.  I'm impressed.

The S4U2Proxy/Step one stuff will be ready to go as soon as I drop off 
the Net for a while and clean up my patches.  But that doesn't address 
the Federation issue.


The Javascript approach is, I think, simpler and better than using
OAUTH2 as you specify, as it is the direction that Horizon is going
anyway: a single-page app, Javascript driven, talking directly to all the
remote services.


I want to limit the number of services that get tokens.  I want to limit 
the scope of those tokens as much as possible.  Keeping passwords out of 
Horizon is just the first step to this goal.



As I see it Keystone tokens and the OAUTH* protocols are both ways of 
doing distributed single sign on and delegation of privileges. However,  
Keystone specifies data that is relevant to OpenStack, and OAUTH is 
necessarily format agnostic.  Using a different mechanism punts on the 
hard decisions and rewrites the easy ones.


Yes, I wish we had started with OAUTH way back a couple years ago, but I 
can't say it is so compelling that we should do it now.




Step 3 is a different story and it needs more evaluation of the
possible scenarios opened.

Cheers,
Marco

On Thu, Sep 04, 2014 at 05:37:38PM -0400, Adam Young wrote:

While the Keystone team has made pretty good strides toward
Federation for getting a Keystone token, we do not yet have a
complete story for Horizon.  The same is true about Kerberos.  I've
been working on this, and I want to inform the people that are
interested in the approach, as well as get some feedback.

My first priority has been Kerberos.  I have a proof of concept of
this working, but the amount of hacking I had to do to
Django-OpenStack-Auth (DOA) made me shudder:  it's fairly ugly.  A
few discussions today have moved things such that I think I can
clean up the approach.

Phase 1.  DOA should be able to tell whether to use password or
Kerberos in order to get a token from Keystone based on a variable
set by the Apache web server;  mod_auth_kerb will set

 request.META['KRB5CCNAME']

only in the Kerberos case.  If it gets this variable, DOA will only
do Kerberos.  If it does not, it will only do password.  There will
be no fallback from Kerberos to password;  this is enforced by
mod_auth_kerb, not something we can easily hack around in Django.
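
For illustration, Phase 1 boils down to something like the following minimal
sketch -- the helper name is made up and this is not the actual
django_openstack_auth code:

    # Hypothetical sketch only: choose the auth path from what mod_auth_kerb exported.
    def _auth_method_for(request):
        if request.META.get('KRB5CCNAME'):   # present only when mod_auth_kerb authenticated us
            return 'kerberos'                # Kerberos only -- no password fallback
        return 'password'                    # no credential cache, so password only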

That gets us Kerberos, but not Federation. Most of the code changes
are common with what follows after:

Phase 1.5.  Add an optional field to the password auth page that
allows a user to log in with a token instead of userid/password.
This can be a hidden field by default if we really want.  DOA now
needs to be able to validate a token.  Since Horizon only cares
about the hashed version of the tokens anyway, we only need online
lookup, not PKI token validation.  In the successful response,
Keystone will return the properly hashed (SHA256, for example) form of
the PKI tokens for Horizon to cache.

Phase 2.  Use AJAX to get a token from Keystone instead of sending
the credentials to the client.  Then pass that token to Horizon in
order to log in.  This implies that Keystone has set up CORS support.
This will open the door for Federation.  While it will provide a
different path to Kerberos than stage 1, I think both are
valuable approaches and will serve different use cases.

Phase 3.  Never send your token to Horizon.  In this world, the
browser, with full CORS support, makes AJAX calls direct to Nova,
Glance, and all other services.

Yep, this should be a cross project session for the summit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






[openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Mykola Grygoriev
Hi,

My name is Mykola Grygoriev and I'm an engineer currently working on
deploying third-party CI for the Coraid Cinder driver.

Following the instructions at

http://ci.openstack.org/third_party.html#requesting-a-service-account

I am asking for the Gerrit CI account (coraid-ci) to be added to the Voting
Third-Party CI Gerrit group: https://review.openstack.org/#/admin/groups/91,members.


We have already added a description of the Coraid CI system to the wiki page -
https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI

We used the openstack-dev/sandbox project to test the current CI infrastructure
against the OpenStack Gerrit system. Please find our history there.

Please have a look at the results of the Coraid CI system; it currently takes
updates from the openstack/cinder project:
http://38.111.159.9:8080/job/Coraid_CI/32/
http://38.111.159.9:8080/job/Coraid_CI/33/

Thank you in advance.

--
Best regards,
Mykola Grygoriev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Status update on the python-openstacksdk project

2014-09-05 Thread Brian Curtin
Hi all,

Between recent IRC meetings and the mid-cycle operators meetup, we've
heard things ranging from "is the SDK project still around" to "I
can't wait for this." I'm Brian Curtin from Rackspace and I'd like to
tell you what the python-openstacksdk [0][1] project has been up to
lately.

After initial discussions, meetings [2], and a coordination session in
Atlanta, a group of us decided to kick off a project to offer a
complete Software Development Kit for those creating and building on
top of OpenStack. This project aims to offer a one-stop-shop to
interact with all of the parts of an OpenStack cloud, either by writing
code against a consistent set of APIs or by using command-line tools
implemented on those APIs [3], with concise documentation and examples
that end-users can leverage.

From a vendor perspective, it doesn't make sense for all of us to have
our own SDKs written against the same APIs. Additionally, every
service having their own client/CLI presents a fragmented view to
consumers and introduces difficulties once users move beyond
involvement with one or two services. Beyond the varying dependencies
and the sheer number of moving parts involved, user experience is not
as welcoming and great as it should be.

We first built transport and session layers based on python-requests
and Jamie Lennox's Keystone client authentication plugins (minus
compatibility cruft). The service resources are represented in a base
resource class, and we've implemented resources for interacting with
Identity, Object-Store, Compute, Image, Database, Network, and
Orchestration APIs. Expanding or adding support for new services is
straightforward, but we're thinking about the rest of the picture
before building out too far.

This resource layer may be slightly raw if you're looking at it as a
consumer, and not likely what you'd want in a full scale application.
Now that we have these resources exposed to work with, we're looking
upward to think about how an end-user would want to interact with a
service. We're also moving downward and looking at what we want to
provide to command line interfaces, such as easier access to the
JSON/dicts (as prodded by Dean :).

Overall, we're moving along nicely. While we're thinking about these
high-level/end-user views, I'd love to know if anyone has any thoughts
there. For example, what would the ideal interface to your favorite
service look like? As things are hacked out, we'll share them and
gather as much input as we can from this community as well as the
users.
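
To make that question concrete, here is one purely hypothetical shape such an
interface could take -- the module path, class and methods below are invented
for illustration and are not the current python-openstacksdk API:

    # Hypothetical end-user sketch, not today's SDK surface.
    from openstack import connection   # invented module path

    conn = connection.Connection(auth_url='https://keystone.example.com:5000/v3',
                                 username='demo', password='secret',
                                 project_name='demo')
    for server in conn.compute.servers():   # invented convenience iterator
        print('%s %s' % (server.name, server.status))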

If you're interested in getting involved or have any questions or
comments, we meet on Tuesdays at 1900 UTC in #openstack-meeting-3, and
all of us hang out in #openstack-sdks on Freenode.

As for who's involved, we're on stackalytics [4], but recently it has
been Terry Howe (HP), Jamie Lennox (Red Hat), Dean Troyer (Nebula),
Steve Lewis (Rackspace), and myself.

Thanks for your time


[0] https://github.com/stackforge/python-openstacksdk
[1] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK
[2] http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/
[3] OpenStackClient is planning to switch to using the Python SDK
after the interfaces have stabilized.
[4] 
http://stackalytics.com/?project_type=stackforge&module=python-openstacksdk&release=all

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Monty Taylor

On 09/05/2014 06:32 AM, Sean Dague wrote:

While reviewing this zookeeper service group fix in Nova -
https://review.openstack.org/#/c/102639/ it was exposed that the
zookeeper tests aren't running in infra.

The crux of the issue is that zookeeper python modules are C extensions.
So you have to either install from packages (which we don't do in unit
tests) or install from pip, which means forcing zookeeper dev packages
locally. Realistically this is the same issue we end up with for mysql
and pg, but given their wider usage we just forced that pain on developers.

But it seems like a bad stand off between testing upstream and testing
normal path locally.

Big picture it would be nice to not require a ton of dev libraries
locally for optional components, but still test them upstream. So that
in the base case I'm not running zookeeper locally, but if it fails
upstream because I broke something in zookeeper, it's easy enough to
spin up that dev env that has it.

Which feels like we need some decoupling on our requirements vs. tox
targets to get there. CC to Monty and Clark as our super awesome tox
hackers to help figure out if there is a path forward here that makes sense.


Funny story - I've come to dislike what we're doing here, so I've been 
slowly working on an idea in this area:


https://github.com/emonty/dox

The tl;dr is it's like tox, except it uses docker instead of 
virtualenv - which means we can express all of our requirements, not 
just pip ones.


It's not quite ready yet - although I'd be happy to move it in to 
stackforge or even openstack-dev and get other people hacking on it with 
me until it is. The main problem that needs solving, I think, is how to 
sanely express multiple target environments (like py26,py27) without 
making a stupidly baroque config file. OTOH, tox's insistence of making 
a new virtualenv for each environment is heavyweight and has led to 
some pretty crazy hacks across the project. Luckily, docker itself does 
an EXCELLENT job at handling caching and reuse - so I think we can have 
a set of containers that something in infra (waves hands) publishes to 
dockerhub, like:


  infra/py27
  infra/py26

And then have things like nova build on those, like:

  infra/nova/py27

Which would have zookeeper as well

The _really_ fun part, again, if we can figure out how to express it in 
config without reimplementing make accidentally, is that we could start 
to have things like:


  infra/mysql
  infra/postgres
  infra/mongodb

And have dox files say things like:

  Nova unittests want a python27 environment, this means we want an 
infra/mysql container, an infra/postgres container and for those to be 
linked to the infra/nova/py27 container where the tests will run.


Since those are all reusable, the speed should be _Excellent_ and we 
should be able to more easily get more things runnable locally without 
full devstack.


Thoughts? Anybody wanna hack on it with me? I think it could wind up 
being a pretty useful tool for folks outside of OpenStack too if we get 
it right.


Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Andreas Jaeger
Hi Mykola,
On 09/05/2014 04:09 PM, Mykola Grygoriev wrote:
 Hi,
 
 My name is Mykola Grygoriev and I'm an engineer currently working on
 deploying third-party CI for the Coraid Cinder driver.

Great, thanks!

 Following instructions on
 
 http://ci.openstack.org/third_party.html#requesting-a-service-account
 
 asking for adding gerrit CI account (coraid-ci) to the Voting
 Third-Party CI Gerrit group
 https://review.openstack.org/#/admin/groups/91,members.

There's a dedicated mailing list for these requests; please reread
http://ci.openstack.org/third_party.html#requesting-a-service-account

and resend your email to the third-party-requests mailing list,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Flavio Percoco
On 09/05/2014 04:21 PM, Monty Taylor wrote:
 On 09/05/2014 06:32 AM, Sean Dague wrote:
 While reviewing this zookeeper service group fix in Nova -
 https://review.openstack.org/#/c/102639/ it was exposed that the
 zookeeper tests aren't running in infra.

 The crux of the issue is that zookeeper python modules are C extensions.
 So you have to either install from packages (which we don't do in unit
 tests) or install from pip, which means forcing zookeeper dev packages
 locally. Realistically this is the same issue we end up with for mysql
 and pg, but given their wider usage we just forced that pain on
 developers.

 But it seems like a bad stand off between testing upstream and testing
 normal path locally.

 Big picture it would be nice to not require a ton of dev libraries
 locally for optional components, but still test them upstream. So that
 in the base case I'm not running zookeeper locally, but if it fails
 upstream because I broke something in zookeeper, it's easy enough to
 spin up that dev env that has it.

 Which feels like we need some decoupling on our requirements vs. tox
 targets to get there. CC to Monty and Clark as our super awesome tox
 hackers to help figure out if there is a path forward here that makes
 sense.
 
 Funny story - I've come to dislike what we're doing here, so I've been
 slowly working on an idea in this area:
 
 https://github.com/emonty/dox
 
 The tl;dr is it's like tox, except it uses docker instead of
 virtualenv - which means we can express all of our requirements, not
 just pip ones.
 
 It's not quite ready yet - although I'd be happy to move it in to
 stackforge or even openstack-dev and get other people hacking on it with
 me until it is. The main problem that needs solving, I think, is how to
 sanely express multiple target environments (like py26,py27) without
 making a stupidly baroque config file. OTOH, tox's insistence of making
 a new virtualenv for each environment is heavyweight and has led to
 some pretty crazy hacks across the project. Luckily, docker itself does
 an EXCELLENT job at handling caching and reuse - so I think we can have
 a set of containers that something in infra (waves hands) publishes to
 dockerhub, like:
 
   infra/py27
   infra/py26
 
 And then have things like nova build on those, like:
 
   infra/nova/py27
 
 Which would have zookeeper as well
 
 The _really_ fun part, again, if we can figure out how to express it in
 config without reimplementing make accidentally, is that we could start
 to have things like:
 
   infra/mysql
   infra/postgres
   infra/mongodb
 
 And have dox files say things like:
 
   Nova unittests want a python27 environment, this means we want an
 infra/mysql container, an infra/postgres container and for those to be
 linked to the infra/nova/py27 container where the tests will run.
 
 Since those are all reusable, the speed should be _Excellent_ and we
 should be able to more easily get more things runnable locally without
 full devstack.
 
 Thoughts? Anybody wanna hack on it with me? I think it could wind up
 being a pretty useful tool for folks outside of OpenStack too if we get
 it right.
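
As a rough illustration of the reuse Monty describes (this is not dox itself,
whose config format is the open question above), the sketch below drives plain
docker commands to run a project's unit tests in a prebuilt image linked to
reusable service containers. The infra/* image names are the hypothetical ones
from the mail and the source path is a placeholder:

    import subprocess

    def docker(*args):
        cmd = ['docker'] + list(args)
        print(' '.join(cmd))
        subprocess.check_call(cmd)

    # Start the reusable service containers once; docker's image cache makes
    # repeat runs cheap.
    docker('run', '-d', '--name', 'dox-mysql', 'infra/mysql')
    docker('run', '-d', '--name', 'dox-postgres', 'infra/postgres')

    # Run the unit tests inside the project image, linked to the services,
    # with the working tree bind-mounted in, instead of building a virtualenv
    # per target environment.
    docker('run', '--rm',
           '--link', 'dox-mysql:mysql',
           '--link', 'dox-postgres:postgres',
           '-v', '/home/dev/nova:/src', '-w', '/src',
           'infra/nova/py27',
           'python', '-m', 'unittest', 'discover', 'nova/tests')

A tool like dox would presumably automate and cache steps like these from its
config rather than having developers type them.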
 

I think it is sexy - I don't describe ideas/software as sexy that often
but this one deserves it. I'm interested in helping out.

I'll clone it and give it a try - or at least take a look at it.

Flavio

 Monty
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-05 Thread Ajay Kalambur (akalambu)
You need to apply the other 2 network context patches from the reviews I sent 
out
Ajay


Sent from my iPhone

On Sep 5, 2014, at 2:25 AM, masoom alam masoom.a...@gmail.com wrote:

http://paste.openstack.org/show/106297/


On Fri, Sep 5, 2014 at 1:12 PM, masoom alam masoom.a...@gmail.com wrote:
Thanks Ajay

I corrected this earlier, but I'm facing another problem. I will forward a
paste in a while.



On Friday, September 5, 2014, Ajay Kalambur (akalambu) akala...@cisco.com wrote:
Sorry, there was a typo in the patch: it should be @validation and not @(validation.
Please change that in vm_perf.py

Sent from my iPhone

On Sep 4, 2014, at 7:51 PM, masoom alam masoom.a...@gmail.com wrote:

Why is this so when I patched with the patch you sent:

http://paste.openstack.org/show/106196/


On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones rick.jon...@hp.com wrote:
On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:
Hi
Looking into the following blueprint, which requires that network
performance tests be done as part of a scenario,
I plan to implement this using iperf and, basically, a scenario which
includes a client/server VM pair.

My experience with netperf over the years has taught me that with just a
single stream and a single pair of systems, one won't actually know whether
the performance was limited by the inbound or the outbound side.  That is why
the likes of

http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_flavor.py

and

http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_quantum.py

will, apart from being poorly written python :), launch several instances of a
given flavor and then run aggregate tests against the Instance Under Test.
Those aggregate tests include inbound, outbound, bidirectional, and aggregate
small-packet runs, followed by a latency test.

happy benchmarking,

rick jones
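
For what it's worth, a minimal sketch of the direction-separated runs Rick
describes, run from the instance under test and assuming iperf 2 on both VMs
with the peer already running "iperf -s"; the peer address and duration are
placeholders, and the netperf scripts above go further by aggregating across
several instances:

    import subprocess

    PEER = '10.0.0.12'   # placeholder address of the server VM
    DURATION = '30'      # seconds per run

    def iperf(*extra):
        cmd = ['iperf', '-c', PEER, '-t', DURATION, '-f', 'm'] + list(extra)
        print('$ ' + ' '.join(cmd))
        print(subprocess.check_output(cmd).decode())

    iperf()                                # outbound only: this VM -> peer
    iperf('-r')                            # tradeoff: outbound, then the reverse
    iperf('-d')                            # dual: both directions at once
    iperf('-u', '-b', '100M', '-l', '64')  # small-datagram UDP run
    # A latency test would need a different tool (ping, netperf TCP_RR, ...).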


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Sent from noir

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Anita Kuno
On 09/05/2014 10:33 AM, Andreas Jaeger wrote:
 Hi Mykola,
 On 09/05/2014 04:09 PM, Mykola Grygoriev wrote:
 Hi,

 My name is Mykola Grygoriev and I'm an engineer currently working on
 deploying third-party CI for the Coraid Cinder driver.
 
 Great, thanks!
 
 Following instructions on

 http://ci.openstack.org/third_party.html#requesting-a-service-account

 I am asking for the gerrit CI account (coraid-ci) to be added to the Voting
 Third-Party CI Gerrit group
 https://review.openstack.org/#/admin/groups/91,members.
 
 There's a dedicated mailing list for these requests; please reread
 http://ci.openstack.org/third_party.html#requesting-a-service-account
 
 and resend your email to the third-party-requests mailing list.
 
 Andreas
 
Actually, Andreas, he is following these instructions:
http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system

He is asking the community for feedback on whether his system is ready to
have voting permissions. It just seems odd because he is the first one to
actually follow the instructions.

Thanks Mykola,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Christopher Yeoh
I'm willing to sponsor this


Chris
—
Sent from Mailbox

On Fri, Sep 5, 2014 at 10:29 PM, Day, Phil philip@hp.com wrote:

 Hi,
 I'd like to ask for a FFE for the 3 patchsets that implement quotas for 
 server groups.
  Server groups (which landed in Icehouse) provide a really useful 
  anti-affinity filter for scheduling that a lot of customers would like to 
  use, but without some form of quota control to limit the amount of 
  anti-affinity it's impossible to enable it as a feature in a public cloud.
 The code itself is pretty simple - the number of files touched is a 
 side-effect of having three V2 APIs that report quota information and the 
 need to protect the change in V2 via yet another extension.
 https://review.openstack.org/#/c/104957/
 https://review.openstack.org/#/c/116073/
 https://review.openstack.org/#/c/116079/
 Phil
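
For context, a sketch of the anti-affinity usage this quota is meant to bound,
assuming python-novaclient's server-groups support (the exact create() keyword
arguments here are an assumption) and the anti-affinity scheduler filter
enabled; credentials and IDs are placeholders:

    from novaclient import client

    AUTH_URL = 'http://keystone.example.com:5000/v2.0'   # placeholder
    nova = client.Client('2', 'demo', 'secret', 'demo-tenant', AUTH_URL)

    # Every member of this group must land on a different compute host.
    group = nova.server_groups.create(name='ha-web', policies=['anti-affinity'])

    # Without a quota, nothing stops a tenant from adding members until each
    # of their instances effectively claims a compute host to itself.
    nova.servers.create(name='web-1', image='IMAGE_UUID', flavor='FLAVOR_ID',
                        scheduler_hints={'group': group.id})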
 -Original Message-
 From: Sahid Orentino Ferdjaoui [mailto:sahid.ferdja...@redhat.com]
 Sent: 04 September 2014 13:42
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] FFE request serial-ports
 
 Hello,
 
 I would like to request a FFE for 4 changesets to complete the blueprint
 serial-ports.
 
 Topic on gerrit:
 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+br
 anch:master+topic:bp/serial-ports,n,z
 
 Blueprint on launchpad.net:
   https://blueprints.launchpad.net/nova/+spec/serial-ports
 
 They have already been approved but didn't get enough time to be merged
 by the gate.
 
 Sponsored by:
 Daniel Berrange
 Nikola Dipanov
 
 s.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Duncan Thomas
+1 from me (Cinder core)

On 5 September 2014 15:09, Mykola Grygoriev mgrygor...@mirantis.com wrote:
 Hi,

 My name is Mykola Grygoriev and I'm an engineer currently working on
 deploying third-party CI for the Coraid Cinder driver.

 Following instructions on

 http://ci.openstack.org/third_party.html#requesting-a-service-account

 I am asking for the gerrit CI account (coraid-ci) to be added to the Voting
 Third-Party CI Gerrit group.



 We have already added a description of the Coraid CI system to the wiki page -
 https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI

 We used the openstack-dev/sandbox project to test the current CI
 infrastructure with the OpenStack Gerrit system. Please find our history there.

 Please have a look at the results of the Coraid CI system. It currently takes
 updates from the openstack/cinder project:
 http://38.111.159.9:8080/job/Coraid_CI/32/
 http://38.111.159.9:8080/job/Coraid_CI/33/

 Thank you in advance.

 --
 Best regards,
 Mykola Grygoriev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces Patch

2014-09-05 Thread Jay Pipes

On 09/05/2014 09:57 AM, Daniel P. Berrange wrote:

On Fri, Sep 05, 2014 at 01:49:20PM +, Andrew Melton wrote:

Hey Devs,

I'd like to request a feature freeze exception for: 
https://review.openstack.org/#/c/94915/

This feature is the final patch set for the User Namespace BP
(https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces).
This is an important feature for libvirt-lxc because it greatly increases
the security of running libvirt-lxc based containers. The code for this
feature has been up for a couple months now and has had plenty of time
to go through review. The code itself is solid by now and functionally
it hasn't changed much in the last month or so. Lastly, the only thing
holding up the patch from merging this week was a multitude of bugs in
the gate.


Since we've already merged 9 out of the 10 total patches for this feature,
it is a no brainer to merge this last one.

I sponsor it.


Me as well. I've already reviewed most of the patches.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Jay Pipes

On 09/05/2014 04:48 AM, Nikola Đipanov wrote:

Quick response so as not to hijack the thread:

I think we all agree on the benefits of having resources you can turn
off and on at will.


I don't agree at all. There's no cost whatsoever in turning on a 
resource. It doesn't need to be extensible. Resources just need to be
properly modelled so that the things they represent can be properly
compared in a quantitative manner. For example, a NUMA cell resource 
needs to be modelled in a Python object that can be compared to a 
request for a certain NUMA topology.


There was and continues to be no need to make resources extensible. They 
just needed to be properly modelled into Python objects, and those 
objects used in scheduling decisions and tracking.
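
To make that concrete, here is a tiny sketch (illustrative names only, not
Nova's actual objects) of a NUMA cell modelled as a plain Python object that
can be compared quantitatively against a request:

    from collections import namedtuple

    NUMACell = namedtuple('NUMACell', ['id', 'cpuset', 'memory_mb'])
    CellRequest = namedtuple('CellRequest', ['vcpus', 'memory_mb'])

    def cell_fits(cell, request, cpus_used=0, memory_used_mb=0):
        # Quantitative comparison: does the request fit in what is left?
        return (len(cell.cpuset) - cpus_used >= request.vcpus and
                cell.memory_mb - memory_used_mb >= request.memory_mb)

    host_cell = NUMACell(id=0, cpuset={0, 1, 2, 3}, memory_mb=16384)
    wanted = CellRequest(vcpus=2, memory_mb=4096)
    print(cell_fits(host_cell, wanted))   # True: this cell can host the request

Nothing here needs to be "turned on" or made extensible; the tracker and the
scheduler just need to agree on the model and do the arithmetic.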



The current implementation of it, however, has some glaring drawbacks,
discussed in detail on other threads and heavily on IRC, that made it
impossible for me to base my work on it; hence we need to rethink how to
get there.


++

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-05 Thread Robert Kukura

Kyle,

Please consider an FFE for 
https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding. 
This was discussed extensively at Wednesday's ML2 meeting, where the 
consensus was that it would be valuable to get this into Juno if 
possible. The patches have had core reviews from Armando, Akihiro, and 
yourself. Updates to the three patches addressing the remaining review 
issues will be posted today, along with an update to the spec to bring 
it in line with the implementation.


-Bob

On 9/3/14, 8:17 AM, Kyle Mestery wrote:

Given how deep the merge queue is (146 currently), we've effectively
reached feature freeze in Neutron now (likely other projects as well).
So this morning I'm going to go through and remove BPs from Juno which
did not make the merge window. I'll also be putting temporary -2s in
the patches to ensure they don't slip in as well. I'm looking at FFEs
for the high priority items which are close but didn't quite make it:

https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide test environments for python things that require C extensions

2014-09-05 Thread David Shrewsbury
I agree with Flavio. This looks really cool, and I had a very similar idea
recently. I'll try to find some time to give this a whirl.

-Dave


On Fri, Sep 5, 2014 at 10:37 AM, Flavio Percoco fla...@redhat.com wrote:

 On 09/05/2014 04:21 PM, Monty Taylor wrote:
  On 09/05/2014 06:32 AM, Sean Dague wrote:
  While reviewing this zookeeper service group fix in Nova -
  https://review.openstack.org/#/c/102639/ it was exposed that the
  zookeeper tests aren't running in infra.
 
  The crux of the issue is that zookeeper python modules are C extensions.
  So you have to either install from packages (which we don't do in unit
  tests) or install from pip, which means forcing zookeeper dev packages
  locally. Realistically this is the same issue we end up with for mysql
  and pg, but given their wider usage we just forced that pain on
  developers.
 
  But it seems like a bad stand off between testing upstream and testing
  normal path locally.
 
  Big picture it would be nice to not require a ton of dev libraries
  locally for optional components, but still test them upstream. So that
  in the base case I'm not running zookeeper locally, but if it fails
  upstream because I broke something in zookeeper, it's easy enough to
  spin up that dev env that has it.
 
  Which feels like we need some decoupling on our requirements vs. tox
  targets to get there. CC to Monty and Clark as our super awesome tox
  hackers to help figure out if there is a path forward here that makes
  sense.
 
  Funny story - I've come to dislike what we're doing here, so I've been
  slowly working on an idea in this area:
 
  https://github.com/emonty/dox
 
  The tl;dr is it's like tox, except it uses docker instead of
  virtualenv - which means we can express all of our requirements, not
  just pip ones.
 
  It's not quite ready yet - although I'd be happy to move it in to
  stackforge or even openstack-dev and get other people hacking on it with
  me until it is. The main problem that needs solving, I think, is how to
  sanely express multiple target environments (like py26,py27) without
  making a stupidly baroque config file. OTOH, tox's insistence of making
  a new virtualenv for each environment is heavyweight and has led to
  some pretty crazy hacks across the project. Luckily, docker itself does
  an EXCELLENT job at handling caching and reuse - so I think we can have
  a set of containers that something in infra (waves hands) publishes to
  dockerhub, like:
 
infra/py27
infra/py26
 
  And then have things like nova build on those, like:
 
infra/nova/py27
 
  Which would have zookeeper as well
 
  The _really_ fun part, again, if we can figure out how to express it in
  config without reimplementing make accidentally, is that we could start
  to have things like:
 
infra/mysql
infra/postgres
infra/mongodb
 
  And have dox files say things like:
 
Nova unittests want a python27 environment, this means we want an
  infra/mysql container, an infra/postgres container and for those to be
  linked to the infra/nova/py27 container where the tests will run.
 
  Since those are all reusable, the speed should be _Excellent_ and we
  should be able to more easily get more things runnable locally without
  full devstack.
 
  Thoughts? Anybody wanna hack on it with me? I think it could wind up
  being a pretty useful tool for folks outside of OpenStack too if we get
  it right.
 

 I think it is sexy - I don't describe ideas/software as sexy that often
 but this one deserves it. I'm interested in helping out.

 I'll clone it and give it a try - or at least take a look at it.

 Flavio

  Monty
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila]

2014-09-05 Thread Jyoti Ranjan
Which file system appliances are supported as of today? We are thinking of
integrating with our cloud.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-05 Thread Mohammad Banikazemi
I can only see the use of a separate project for Group Policy as a tactical
and temporary solution. In my opinion, it does not make sense to have the
Group Policy as a separate project outside Neutron (unless the new project
is aiming to replace Neutron and I do not think anybody is suggesting
that). In this regard, Group Policy is not similar to Advanced Services
such as FW and LB.

So, using StackForge to get things moving again is fine but let us keep in
mind (and see if we can agree on) that we want to have the Group Policy
abstractions as part of OpenStack Networking (when/if it proves to be a
valuable extension to what we currently have). I do not want our decision to
get things moving quickly right now to prevent us from achieving that goal.
That is why I think the other two approaches (from the little I know about
the incubator option, and the even less I know about the feature branch
option) may be better options in the long run.

If I understand correctly, some members of the community are actively
working on these options (that is, the incubator and the Neutron feature
branch). In order to make a better judgement as to how to proceed, it would
be very helpful if we could get a bit more information on these two options
and their status here on this mailing list.

Mohammad





From:   Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   09/05/2014 04:31 AM
Subject:Re: [openstack-dev] [neutron][policy] Group-based Policy next
steps



Tl;dr - Neutron incubator is only a wiki page with many uncertainties. Use
StackForge to make progress and re-evaluate when the incubator exists.


I also agree that starting out in StackForge as a separate repo is a better
first step. In addition to the uncertainty around packaging and other
processes brought up by Mandeep, I really doubt the Neutron incubator is
going to have the review velocity desired by the group policy contributors.
I believe this will be the case based on the Neutron incubator patch
approval policy in conjunction with the nature of the projects it will
attract.

Due to the requirement for two core +2's in the Neutron incubator, moving
group policy there is hardly going to do anything to reduce the load on the
Neutron cores, who are in a similarly overloaded position to the Nova
cores.[1] Consequently, I wouldn't be surprised if patches to the Neutron
incubator receive even less core attention than those in the main repo, simply
because their location outside of openstack/neutron will be a good reason
to treat them with a lower priority.

If you combine that with the fact that the incubator is designed to house
all of the proposed experimental features for Neutron, there will be a very
high volume of patches constantly being proposed to add new features, make
changes to features, and maybe even fix bugs in those features. This new
demand for reviewers will not be met by the existing core reviewers because
they will be busy with refactoring, fixing, and enhancing the core Neutron
code.

Even ignoring the review velocity issues, I see very little benefit to GBP
starting inside of the Neutron incubator. It doesn't guarantee any
packaging with Neutron and Neutron code cannot reference any incubator
code. It's effectively a separate repo without the advantage of being able
to commit code quickly.

There is one potential downside to not immediately using the Neutron
incubator. If the Neutron cores decide that all features must live in the
incubator for at least 2 cycles regardless of quality or usage in
deployments, starting outside in a StackForge project would delay the start
of the timer until GBP makes it into the incubator. However, this can be
considered once the incubator actually exists and starts accepting
submissions.

In summary, I think GBP should move to a StackForge project as soon as
possible so development can progress. A transition to the Neutron incubator
can be evaluated once it actually becomes something more than a wiki page.


1.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044872.html

--
Kevin Benton


On Thu, Sep 4, 2014 at 11:24 PM, Mandeep Dhami dh...@noironetworks.com
wrote:

  I agree. Also, as this does not preclude using the incubator when it is
  ready, this is a good way to start iterating on implementation in
  parallel with those issues being addressed by the community.

  In my view, the issues raised around the incubator were significant
  enough (around packaging, handling of updates needed for
  horizon/heat/ceilometer, handling of multiple feature branches, etc.) that
  we will probably need a design session in Paris before a consensus
  will emerge around a solution for the incubator structure/usage. And if
  you are following the thread on nova for 'Averting the Nova crisis ...',
  the final consensus might actually BE to use separate stackforge project
  for plugins 

Re: [openstack-dev] [neutron] New meeting rotation starting next week

2014-09-05 Thread Kyle Mestery
Just an updated note here. The IPv6 sub-team has moved their meeting
time, so I've claimed #openstack-meeting at 1400 UTC on Tuesdays for
the Neutron team meeting. I'll send a note reminding folks of the
change on Monday.

Thanks,
Kyle

On Mon, Sep 1, 2014 at 8:19 PM, Kyle Mestery mest...@mestery.com wrote:
 Per discussion again today in the Neutron meeting, next week we'll
 start rotating the meeting. This will mean next week we'll meet on
 Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

 I've updated the Neutron meeting page [1] as well as the meeting wiki
 page [2] with the new details on the meeting page.

 Please add any agenda items to the page.

 Looking forward to seeing some new faces who can't normally join us at
 the 2100 UTC slot!

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings
 [2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO]

2014-09-05 Thread Jyoti Ranjan
I deployed a server with three drives using TripleO. How can I tell which
disk is the root disk? I need this information because I have a utility which
formats all drives except the root disk. The utility is used in a solution we
are developing on machines deployed by TripleO.
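
One way to identify the root disk from inside the deployed node itself,
without relying on anything TripleO-specific, is to resolve which block device
backs "/". A sketch, assuming a plain partition-backed root filesystem (LVM or
software RAID would need extra handling):

    import re

    def root_disk():
        # Find the device mounted at "/" and strip the partition suffix to get
        # the parent disk, e.g. /dev/sda2 -> /dev/sda, /dev/nvme0n1p1 -> /dev/nvme0n1.
        with open('/proc/mounts') as mounts:
            for line in mounts:
                device, mountpoint = line.split()[:2]
                if mountpoint == '/' and device.startswith('/dev/'):
                    return re.sub(r'p?\d+$', '', device)
        return None

    print(root_disk())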

Regards,
Jyoti Ranjan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


