Re: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away

2014-08-12 Thread Mohammad Banikazemi

What would be the best practice for those who realize their work will not
make it in Juno? Is it enough to not submit code for review? Would it be
better to also request a change in milestone?

Thanks,

Mohammad




From:   Kyle Mestery mest...@mestery.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/12/2014 09:15 AM
Subject:[openstack-dev] [neutron] Feature Proposal Freeze is 9 days
away



Just a reminder that Neutron observes FPF [1], and it's 9 days away.
We have an incredible amount of BPs which do not have code submitted
yet. My suggestion to those who own one of these BPs would be to think
hard about whether or not you can realistically land this code in Juno
before jamming things up at the last minute.

I hope we as a team can refocus on the remaining Juno tasks for the
rest of Juno now and land items of importance to the community at the
end.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/FeatureProposalFreeze

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request

2014-08-12 Thread Eichberger, German
Hi,

I think we are debating some edge-case. An important part of the flavor 
framework is the ability of me the operator to say failover from Octavia to an 
F5. So as an operator I would ensure to only offer the features in that flavor 
which both support. So in order to arrive at Brandon’s example I would have 
misconfigured my environment and rightfully would get errors at the driver level 
– which might be hard to understand for end users but hopefully pretty clear 
for me the operator…

German

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Monday, August 11, 2014 9:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver 
interface on every API request

Well, that's exactly what we've tried to solve with tags in the flavor.

Considering your example with the whole configuration being sent to the driver - I 
think it will be fine to not apply unsupported parts of the configuration (like 
such an HM) and mark the HM object with an error status/status description.

Thanks,
Eugene.
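Eugene's suggestion - apply the parts of a configuration the driver does support, and mark the unsupported objects with an error status - could be sketched roughly as below. All names here are illustrative, not actual Neutron/LBaaS driver APIs:

```python
# Hypothetical driver-side handling: deploy what is supported, flag the rest.
SUPPORTED_HM_TYPES = {"HTTP", "TCP"}  # this sketch's driver lacks PING

def apply_config(config):
    """Apply a load balancer config; return per-object statuses."""
    statuses = {"loadbalancer": "ACTIVE", "health_monitors": []}
    for hm in config.get("health_monitors", []):
        if hm["type"] in SUPPORTED_HM_TYPES:
            statuses["health_monitors"].append(
                {"id": hm["id"], "status": "ACTIVE"})
        else:
            # Do not fail the whole deploy; record the error on the object.
            statuses["health_monitors"].append(
                {"id": hm["id"], "status": "ERROR",
                 "status_description": "type %s unsupported" % hm["type"]})
    return statuses
```

This keeps the load balancer itself usable while surfacing the misconfiguration to the operator, which is the behavior both German and Eugene describe.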

On Tue, Aug 12, 2014 at 12:33 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Hi Eugene,
An example of the HM issue (and really this can happen with any entity)
is if the driver the API sends the configuration to does not actually
support the value of an attribute.

For example: Provider A supports the PING health monitor type, Provider B
does not.  The API allows the PING health monitor type to go through.  Once
a load balancer has been linked with that health monitor and the
LoadBalancer chose to use Provider B, that entire configuration is then
sent to the driver.  The driver errors out not on the LoadBalancer
create, but on the health monitor create.

I think that's the issue.

Thanks,
Brandon

On Tue, 2014-08-12 at 00:17 +0400, Eugene Nikanorov wrote:
 Hi folks,


 That's actually going in the opposite direction to what the flavor framework is
 trying to do (and for dispatching it's doing the same as providers).
 REST call dispatching should really go via the root object.


 I don't quite get the issue with health monitors. If an HM is incorrectly
 configured prior to association with a pool - the API layer should handle
 that.
 I don't think driver implementations should differ in the constraints
 they place on HM parameters.


 So I'm -1 on adding provider (or flavor) to each entity. After all, it
 looks just like data denormalization which actually will affect lots
 of API aspects in negative way.


 Thanks,
 Eugene.




 On Mon, Aug 11, 2014 at 11:20 PM, Vijay Venkatachalam
 vijay.venkatacha...@citrix.com wrote:

 Yes, the point was to say the plugin need not restrict and should
 let the driver decide what to do with the API.

 Even if the call were made to the driver instantaneously, I
 understand the driver might decide to ignore it
 first and schedule later. But if the call is present, there
 is scope for validation.
 Also, the driver might be scheduling an async API call to the backend,
 in which case a deployment error
 cannot be shown to the user instantaneously.

 W.r.t. identifying a provider/driver, how would it be to make
 the tenant the default root object?
 The tenant id is already associated with each of these entities,
 so there is no additional pain.
 For a tenant who wants to override, let them specify the provider
 in each of the entities.
 If you think of this in terms of the UI, let's say the
 loadbalancer configuration is exposed
 as a single wizard (which has loadbalancer, listener, pool, and
 monitor properties); then the provider
 is chosen only once.

 Curious question: is the flavour framework expected to address
 this problem?

 Thanks,
 Vijay V.

 -Original Message-
 From: Doug Wiegley 
 [mailto:do...@a10networks.com]

 Sent: 11 August 2014 22:02
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on
 Calling driver interface on every API request

 Hi Sam,

 Very true.  I think that Vijay’s objection is that we are
 currently imposing a logical structure on the driver, when it
 should be a driver decision.  Certainly, it goes both ways.

 And I also agree that the mechanism for returning multiple
 errors, and the ability to specify whether those errors are
 fatal or not, individually, is currently weak.

 Doug


 On 8/11/14, 10:21 AM, Samuel Bercovici 
 samu...@radware.com
 wrote:

 Hi Doug,
 
 In some implementations Driver !== Device. I think this is
 also true
 for HA Proxy.
 This might mean that there is a 

Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Joshua Harlow

Do you know if ceilometer is using six.wraps?

If so, that helper adds in the `__wrapped__` attribute to decorated 
methods (which can be used to find the original decorated function).


If just plain functools are used (and python3.x isn't used) then it 
will be pretty hard afaik to find the original decorated function (if 
that's the desire).


six.wraps() is new in six 1.7.x so it might not be used in ceilometer 
yet (although maybe it should start to be used?).


-Josh
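A minimal sketch of what Josh describes: on Python 3, plain `functools.wraps` already records the original function as `__wrapped__` (six.wraps backports this for Python 2). The `secure`-style decorator below is a toy stand-in, not Pecan's real one:

```python
import functools

def secure(check):
    """Toy stand-in for a Pecan-style @secure decorator."""
    def decorator(func):
        # functools.wraps (Python 3.2+) and six.wraps both set
        # __wrapped__ on the wrapper, pointing at the original function.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # The decorator always has the wrapped method's name here,
            # since it closes over the original function.
            wrapper.last_checked = func.__name__
            check()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@secure(lambda: None)
def get_all():
    return ["meter"]
```

So even without any global request state, the decorator itself can see which method it wraps: `get_all.__wrapped__.__name__` is `"get_all"`.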

On Tue, Aug 12, 2014 at 9:08 AM, Pendergrass, Eric 
eric.pendergr...@hp.com wrote:
Hi, I’m trying to use the built in secure decorator in Pecan for 
access control, and I’ld like to get the name of the method that is 
wrapped from within the decorator.
 
For instance, if I’m wrapping MetersController.get_all with an 
@secure decorator, is there a way for the decorator code to know it 
was called by MetersController.get_all?
 
I don’t see any global objects that provide this information.  I 
can get the endpoint, v2/meters, with pecan.request.path, but 
that’s not as elegant.
 
Is there a way to derive the caller or otherwise pass this 
information to the decorator?
 
Thanks

Eric Pendergrass





Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-12 Thread Duncan Thomas
On 11 August 2014 21:03, Dean Troyer dtro...@gmail.com wrote:
 On Mon, Aug 11, 2014 at 5:34 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 Making a previously mandatory parameter optional, at least on the
 command line, doesn't break backward compatibility though, does it?
 Everything that worked before will still work.


 By itself, maybe that is ok.  You're right, nothing _should_ break.  But
 then the following is legal:

 cinder create

 What does that do?

It returns an error. The following becomes legal though:

cinder create --src-volume aaa-bbb-ccc-ddd

cinder create --snapshot aaa-bbb-ccc-ddd

cinder create --image aaa-bbb-ccc-ddd
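Duncan's point - a formerly mandatory positional made optional keeps old invocations working, while bare `create` still errors - can be sketched with argparse. The flag names mirror the examples above but this is not the real cinderclient parser:

```python
import argparse

# 'size' was mandatory; nargs='?' makes it optional so the old
# `create <size>` form still works alongside source-based creates.
parser = argparse.ArgumentParser(prog="cinder-create")
parser.add_argument("size", nargs="?", type=int, default=None)
parser.add_argument("--snapshot")
parser.add_argument("--src-volume", dest="src_volume")
parser.add_argument("--image")

def validate(args):
    # Bare `create` (no size, no source) is still rejected.
    if args.size is None and not (args.snapshot or args.src_volume
                                  or args.image):
        raise SystemExit("error: size is required unless a source is given")
    return args
```

`validate(parser.parse_args(["10"]))` succeeds as before, `--snapshot aaa-bbb-ccc-ddd` now succeeds without a size, and an empty argument list exits with an error.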


-- 
Duncan Thomas



Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Ryan Petrello
This should give you what you need:

from pecan.core import state
state.controller

On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
 Hi, I'm trying to use the built in secure decorator in Pecan for access 
 control, and I'd like to get the name of the method that is wrapped from 
 within the decorator.
 
 For instance, if I'm wrapping MetersController.get_all with an @secure 
 decorator, is there a way for the decorator code to know it was called by 
 MetersController.get_all?
 
 I don't see any global objects that provide this information.  I can get the 
 endpoint, v2/meters, with pecan.request.path, but that's not as elegant.
 
 Is there a way to derive the caller or otherwise pass this information to the 
 decorator?
 
 Thanks
 Eric Pendergrass



-- 
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com



Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-12 Thread Edgar Magana
If this plugin is being deprecated in Juno, the code will still be
there for this release, so I would expect the CI to keep running
until the code is completely removed from the Neutron tree.

Anyway, Infra guys will have the last word here!

Edgar

On 8/11/14, 5:38 PM, Anita Kuno ante...@anteaya.info wrote:

On 08/11/2014 06:31 PM, Henry Gessau wrote:
 On 8/11/2014 7:56 PM, Anita Kuno wrote:
 On 08/11/2014 05:46 PM, Henry Gessau wrote:
 Anita Kuno ante...@anteaya.info wrote:
 On 08/11/2014 05:05 PM, Edgar Magana wrote:
 Cisco Folks,

 I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?

 I don't ever recall seeing that as a name of a third party gerrit
 account in my list[0], Edgar.

 Do you happen to have a link to a patchset that has that name attached
 to a comment?

 The Cisco Neutron CI tests at least five different configurations. By
 NX-OS Edgar is referring to the Cisco Nexus switch configurations. The CI
 used to run both the monolithic_nexus and ml2_nexus configurations, but
 the monolithic cisco plugin for nexus is being deprecated for juno and its
 configuration has already been removed from testing.

 Thanks Henry:

 Do we have a url for patch in gerrit for this or was this an internal
 code change?
 
 This was a change only in the internal 3rd party Jenkins/Zuul settings.
 
 
 
Okay.

Perhaps going forward this could be an item for the third party meeting
under the topic of Deadlines & Deprecations:
https://wiki.openstack.org/wiki/Meetings/ThirdParty Then at the very
least if someone missed the announcement we could have a log of it and
point someone to the conversation.

Thanks Henry,
Anita.





Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Pendergrass, Eric
Thanks Ryan, but for some reason the controller attribute is None:

(Pdb) from pecan.core import state
(Pdb) state.__dict__
{'hooks': [<ceilometer.api.hooks.ConfigHook object at 0x31894d0>, 
<ceilometer.api.hooks.DBHook object at 0x3189650>, 
<ceilometer.api.hooks.PipelineHook object at 0x39871d0>, 
<ceilometer.api.hooks.TranslationHook object at 0x3aa5510>], 'app': 
<pecan.core.Pecan object at 0x2e76390>, 'request': <Request at 0x3ed7390 GET 
http://localhost:8777/v2/meters>, 'controller': None, 'response': <Response at 
0x3ed74d0 200 OK>}
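One possible explanation - purely an assumption here, not verified against Pecan internals - is ordering: if the secure check fires before routing has assigned the resolved controller to the thread-local state object, the attribute is still None at check time. The pitfall can be mimicked without Pecan (all names below are illustrative, not Pecan's real internals):

```python
# Minimal mimic of a module-level request-state object like pecan.core.state.
class State(object):
    controller = None

state = State()
checked = []

def secure_check():
    # Runs during routing; the controller may not be assigned yet.
    checked.append(state.controller)

def route(controller):
    secure_check()               # check happens first -> sees None
    state.controller = controller  # only now is the state populated
    return controller()

def get_all():
    return "ok"
```

In this toy version `route(get_all)` returns "ok", but the check recorded `None` - the same symptom as the pdb output above.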

 -Original Message-
 From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
 Sent: Tuesday, August 12, 2014 10:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's 
 name/class using Pecan secure decorators?

 This should give you what you need:

 from pecan.core import state
 state.controller

 On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
  Hi, I'm trying to use the built in secure decorator in Pecan for access 
  control, and I'd like to get the name of the method that is wrapped from 
  within the decorator.
 
  For instance, if I'm wrapping MetersController.get_all with an @secure 
  decorator, is there a way for the decorator code to know it was called by 
  MetersController.get_all?
 
  I don't see any global objects that provide this information.  I can get 
  the endpoint, v2/meters, with pecan.request.path, but that's not as elegant.
 
  Is there a way to derive the caller or otherwise pass this information to 
  the decorator?
 
  Thanks
  Eric Pendergrass



 --
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com




Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-12 Thread Nikola Đipanov
On 08/12/2014 04:49 PM, Sylvain Bauza wrote:
 (sorry for reposting, missed 2 links...)
 
 Hi Nikola,
 
 Le 12/08/2014 12:21, Nikola Đipanov a écrit :
 Hey Nova-istas,

 While I was hacking on [1] I was considering how to approach the fact
 that we now need to track one more thing (NUMA node utilization) in our
 resources. I went with - I'll add it to compute nodes table thinking
 it's a fundamental enough property of a compute host that it deserves to
 be there, although I was considering  Extensible Resource Tracker at one
 point (ERT from now on - see [2]) but looking at the code - it did not
 seem to provide anything I desperately needed, so I went with keeping it
 simple.

 So fast-forward a few days, and I caught myself solving a problem that I
 kept thinking ERT should have solved - but apparently hasn't, and I
 think it is fundamentally a broken design without it - so I'd really
 like to see it re-visited.

 The problem can be described by the following lemma (if you take 'lemma'
 to mean 'a sentence I came up with just now' :)):

 
 Due to the way scheduling works in Nova (roughly: pick a host based on
 stale(ish) data, rely on claims to trigger a re-schedule), _same exact_
 information that scheduling service used when making a placement
 decision, needs to be available to the compute service when testing the
 placement.
 

 This is not the case right now, and the ERT does not propose any way to
 solve it - (see how I hacked around needing to be able to get
 extra_specs when making claims in [3], without hammering the DB). The
 result will be that any resource that we add and needs user supplied
 info for scheduling an instance against it, will need a buggy
 re-implementation of gathering all the bits from the request that
 scheduler sees, to be able to work properly.
 
 Well, ERT does provide a plugin mechanism for testing resources at the
 claim level. It is the plugin's responsibility to implement a test()
 method [2.1], which will be called by test_claim() [2.2].
 
 So, provided this method is implemented, a local host check can be done
 based on the host's view of resources.
 
 

Yes - the problem is there is no clear API to get all the needed bits to
do so - especially the user-supplied ones from images and flavors.
On top of that, in the current implementation we only pass a hand-wavy
'usage' blob in. This makes anyone wanting to use this in conjunction
with some of the user-supplied bits roll their own
'extract_data_from_instance_metadata_flavor_image' or similar, which is
horrible and also likely bad for performance.
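The claim-time check being discussed can be sketched as a hypothetical resource plugin - the interface below is illustrative, not Nova's actual ERT API. test() compares the requested amount against the host's free amount and returns an error string on failure, which is what lets a claim trigger a re-schedule:

```python
class NumaResource(object):
    """Illustrative ERT-style plugin tracking one numeric resource."""

    def __init__(self, total):
        self.total = total
        self.used = 0

    def test(self, usage, limits):
        # Called at claim time on the compute host. Note it must see the
        # *same* request data the scheduler saw - the crux of this thread.
        requested = usage.get("numa_pages", 0)
        free = limits.get("numa_pages", self.total) - self.used
        if requested > free:
            return "requested %d NUMA pages, %d free" % (requested, free)
        return None  # no error -> claim passes

    def claim(self, usage):
        self.used += usage.get("numa_pages", 0)
```

A plugin whose test() is a no-op (as in the flavor/instance-count specs mentioned below) never fails a claim, so two racing requests can both succeed on the same host.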

 This is obviously a bigger concern when we want to allow users to pass
 data (through image or flavor) that can affect scheduling, but still a
 huge concern IMHO.
 
 And here is where I agree with you : at the moment, ResourceTracker (and
 consequently Extensible RT) only provides the view of the resources the
 host is knowing (see my point above) and possibly some other resources
 are missing.
 So, whatever your choice of going with or without ERT, your patch [3]
 still deserves it if we want not to lookup DB each time a claim goes.
 
 
 As I see that there are already BPs proposing to use this IMHO broken
 ERT ([4] for example), which will surely add to the proliferation of
 code that hacks around these design shortcomings in what is already a
 messy, but also crucial (for perf as well as features) bit of Nova code.
 
 Two distinct implementations of that spec (ie. instances and flavors)
 have been proposed [2.3] [2.4] so reviews are welcome. If you see the
 test() method, it's no-op thing for both plugins. I'm open to comments
 because I have the stated problem : how can we define a limit on just a
 counter of instances and flavors ?
 

Will look at these - but none of them seem to hit the issue I am
complaining about, which is that they will need to consider other
request data for claims, not only data available on instances.

Also - the fact that you don't implement test() in flavor ones tells me
that the implementation is indeed racy (but it is racy atm as well) and
two requests can indeed race for the same host, and since no claims are
done, both can succeed. This is I believe (at least in case of single
flavor hosts) unlikely to happen in practice, but you get the idea.

 
 
 I propose to revert [2] ASAP since it is still fresh, and see how we can
 come up with a cleaner design.

 Would like to hear opinions on this, before I propose the patch tho!
 
 IMHO, I think the problem is more likely that the regular RT misses some
 information for each host, so it has to be handled on a case-by-case
 basis, but I don't think ERT either increases complexity or creates
 another issue.
 

RT does not miss info about the host, but about the particular request,
which we have to fish out of different places like image_metadata,
extra_specs, etc. - yet it can't really work without them. This is
definitely a RT issue that is not specific to ERT.

However, I still see several issues 

Re: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away

2014-08-12 Thread Kyle Mestery
If you know it won't make it, please let me know so I can remove your BP
from the LP milestone.

Thanks!
Kyle


On Tue, Aug 12, 2014 at 11:18 AM, Mohammad Banikazemi m...@us.ibm.com wrote:

 What would be the best practice for those who realize their work will not
 make it in Juno? Is it enough to not submit code for review? Would it be
 better to also request a change in milestone?

 Thanks,

 Mohammad



 From: Kyle Mestery mest...@mestery.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 08/12/2014 09:15 AM
 Subject: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away
 --



 Just a reminder that Neutron observes FPF [1], and it's 9 days away.
 We have an incredible amount of BPs which do not have code submitted
 yet. My suggestion to those who own one of these BPs would be to think
 hard about whether or not you can realistically land this code in Juno
 before jamming things up at the last minute.

 I hope we as a team can refocus on the remaining Juno tasks for the
 rest of Juno now and land items of importance to the community at the
 end.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze









Re: [openstack-dev] [keystone] Configuring protected API functions to allow public access

2014-08-12 Thread Dolph Mathews
On Tue, Aug 12, 2014 at 10:30 AM, Yee, Guang guang@hp.com wrote:

 Hi Kristy,

 Have you try the [] or @ rule as mentioned here?


That still requires valid authentication though, just not any specific
authorization. I don't think we have a way to express truly public
resources in oslo.policy.
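For reference, the rule under discussion would look like this in a policy.json (the target name is hypothetical, and - per Dolph's point - an "@" (always-true) rule is only evaluated after a token has been validated, so truly anonymous access still fails):

```json
{
    "identity:list_identity_providers": "@"
}
```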




 https://github.com/openstack/keystone/blob/master/keystone/openstack/common/
 policy.py#L71



 Guang


  -Original Message-
  From: K.W.S.Siu [mailto:k.w.s@kent.ac.uk]
  Sent: Tuesday, August 12, 2014 3:44 AM
  To: openstack Mailing List
  Subject: [openstack-dev] [keystone] Configuring protected API functions
  to allow public access
 
  Hi All,
 
   Correct me if I am wrong, but I don't think you can configure the
   Keystone policy.json to allow public access to an API function; as far
   as I can tell you can allow access to any authenticated user regardless
   of role assignments, but not public access.
 
  My use case is a client which allows users to query for a list of
  supported identity providers / protocols so that the user can then
  select which provider to authenticate with - as the user is
  unauthenticated at the time of the query the request needs to allow
  public access to the 'List Identity Providers' API function.
 
  I can remove the protected decorator from the required functions but
  this is a nasty hack.
 
  I suggest that it should be possible to configure this kind of access
  rule on a deployment by deployment basis and I was just hoping to get
  some thoughts on this.
 
  Many thanks,
  Kristy





Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Ryan Petrello
Can you share some code?  What do you mean by "is there a way for the
decorator code to know it was called by MetersController.get_all"?

On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
 Thanks Ryan, but for some reason the controller attribute is None:
 
 (Pdb) from pecan.core import state
 (Pdb) state.__dict__
 {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0, 
 ceilometer.api.hooks.DBHook object at 0x3189650, 
 ceilometer.api.hooks.PipelineHook object at 0x39871d0, 
 ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app': 
 pecan.core.Pecan object at 0x2e76390, 'request': Request at 0x3ed7390 GET 
 http://localhost:8777/v2/meters, 'controller': None, 'response': Response 
 at 0x3ed74d0 200 OK}
 
  -Original Message-
  From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
  Sent: Tuesday, August 12, 2014 10:34 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's 
  name/class using Pecan secure decorators?
 
  This should give you what you need:
 
  from pecan.core import state
  state.controller
 
  On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
   Hi, I'm trying to use the built in secure decorator in Pecan for access 
    control, and I'd like to get the name of the method that is wrapped from 
   within the decorator.
  
   For instance, if I'm wrapping MetersController.get_all with an @secure 
   decorator, is there a way for the decorator code to know it was called by 
   MetersController.get_all?
  
   I don't see any global objects that provide this information.  I can get 
   the endpoint, v2/meters, with pecan.request.path, but that's not as 
   elegant.
  
   Is there a way to derive the caller or otherwise pass this information to 
   the decorator?
  
   Thanks
   Eric Pendergrass
 
 
 
  --
  Ryan Petrello
  Senior Developer, DreamHost
  ryan.petre...@dreamhost.com
 
 

-- 
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com



Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Dolph Mathews
On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  Hi everyone,
 
  With the incredible growth of OpenStack, our development community is
  facing complex challenges. How we handle those might determine the
  ultimate success or failure of OpenStack.
 
  With this cycle we hit new limits in our processes, tools and cultural
  setup. This resulted in new limiting factors on our overall velocity,
  which is frustrating for developers. This resulted in the burnout of
 key
  firefighting resources. This resulted in tension between people who try
  to get specific work done and people who try to keep a handle on the
 big
  picture.
 
  It all boils down to an imbalance between strategic and tactical
  contributions. At the beginning of this project, we had a strong inner
  group of people dedicated to fixing all loose ends. Then a lot of
  companies got interested in OpenStack and there was a surge in
 tactical,
  short-term contributions. We put on a call for more resources to be
  dedicated to strategic contributions like critical bugfixing,
  vulnerability management, QA, infrastructure... and that call was
  answered by a lot of companies that are now key members of the
 OpenStack
  Foundation, and all was fine again. But OpenStack contributors kept on
  growing, and we grew the narrowly-focused population way faster than
 the
  cross-project population.
 
 
  At the same time, we kept on adding new projects to incubation and to
  the integrated release, which is great... but the new developers you
 get
  on board with this are much more likely to be tactical than strategic
  contributors. This also contributed to the imbalance. The penalty for
  that imbalance is twofold: we don't have enough resources available to
  solve old, known OpenStack-wide issues; but we also don't have enough
  resources to identify and fix new issues.
 
  We have several efforts under way, like calling for new strategic
  contributors, driving towards in-project functional testing, making
  solving rare issues a more attractive endeavor, or hiring resources
  directly at the Foundation level to help address those. But there is a
  topic we haven't raised yet: should we concentrate on fixing what is
  currently in the integrated release rather than adding new projects ?
 
 
  TL;DR: Our development model is having growing pains. until we sort out
 the
  growing pains adding more projects spreads us too thin.
 
 +100

  In addition to the issues mentioned above, with the scale of OpenStack
 today
  we have many major cross project issues to address and no good place to
  discuss them.
 
 We do have the ML, as well as the cross-project meeting every Tuesday
 [1], but we as a project need to do a better job of actually bringing
 up relevant issues here.

 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting

 
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 
 
  I really like this idea, as Michael and others alluded to in above, we
 are
  attempting to set cycle goals for Kilo in Nova. but I think it is worth
  doing for all of OpenStack. We would like to make a list of key goals
 before
  the summit so that we can plan our summit sessions around the goals. On
 a
  really high level one way to look at this is, in Kilo we need to pay
 down
  our technical debt.
 
  The slots/runway idea is somewhat separate from defining key cycle
 goals; we
  can be approve blueprints based on key cycle goals without doing slots.
  But
  with so many concurrent blueprints up for review at any given time, the
  review teams are doing a lot of multitasking and humans are not very
 good at
  multitasking. Hopefully slots can help address this issue, and hopefully
  allow us to actually merge more blueprints in a given cycle.
 
 I'm not 100% sold on what the slots idea buys us. What I've seen this
 cycle in Neutron is that we have a LOT of BPs proposed. We approve
 them after review. And then we hit one of two issues: Slow review
 cycles, and slow code turnaround issues. I don't think slots would
 help this, and in fact may cause more issues. If we approve a BP and
 give it a slot for which the eventual result is slow review and/or
 code review turnaround, we're right back where we started. Even worse,
 we may have not picked a BP for which the code submitter would have
 turned around reviews faster. So we've now doubly hurt ourselves. I
 have no idea how to solve this issue, but by over subscribing the
 

Re: [openstack-dev] Which program for Rally

2014-08-12 Thread Matthew Treinish
On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
 On 11/08/14 16:21, Matthew Treinish wrote:
 I'm sorry, but the fact that the
 docs in the rally tree has a section for user testimonials [4] I feel speaks 
 a lot about the intent of the project.
 
 What... does that even mean?

Yeah, I apologize for that sentence, it was an unfair thing to say and uncalled
for. Looking at it with fresh eyes this morning I'm not entirely sure what my 
intent
was by pointing out that section. I personally feel that those user stories
would probably be more appropriate as a blog post, and shouldn't necessarily be
in a doc tree. But, that's not the stinging indictment which didn't need any
explanation that I apparently thought it was yesterday; it definitely isn't
something worth calling out on this thread.

 
 They seem like just the type of guys that would help Keystone with
 performance benchmarking!
 Burn them!

I'm pretty sure that's not what I meant. :)

 
 I apologize if any of this is somewhat incoherent, I'm still a bit jet-lagged
 so I'm not sure that I'm making much sense.
 
 Ah.
 

Yeah, let's chalk it up to dulled senses from insufficient sleep and trying to
get back on my usual schedule from a trip down under.

 [4] http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories


-Matt Treinish
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient

2014-08-12 Thread Ihar Hrachyshka

On 12/08/14 17:12, Henry Gessau wrote:
 On 8/12/2014 10:27 AM, Ihar Hrachyshka wrote:
 as per [1], Cisco Nexus ML2 plugin requires a patched version of 
 ncclient from github. I wonder:
 
 - whether this information is still current;
 
 Please see: https://review.openstack.org/112175
 
 But we need to do backports before updating the wiki.

Thanks for the link!

 
 - why don't we depend on ncclient through our requirements.txt
 file.
 
 Do we want to have requirements on things that are only used by a
 specific vendor plugin? So far it has worked by vendor-specific
 documentation instructing to manually install the requirement, or
 vendor-tailored deployment tools/scripts.
 

In downstream, it's hard to maintain all plugin dependencies if they
are not explicitly mentioned in e.g. requirements.txt. Red Hat ships
those plugins (with no commercial support or testing done on our
side), and we didn't know that, to make the plugin actually usable, we
needed to install that ncclient module until a person from Cisco
reported the issue to us. We don't usually monitor random wiki pages
to get an idea what we need to package and depend on. :)

I think we should have every third party module that we directly use
in requirements.txt. We have code in the tree that imports ncclient
(btw is it unit tested?), so I think it's enough to make that
dependency explicit.

Now, maybe putting the module into requirements.txt is overkill
(though I doubt it). In that case, we could be interested in getting
the info in some other centralized way.
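To make such an optional dependency fail loudly rather than with an opaque ImportError deep inside the plugin, one common pattern is a guarded import. This is only a sketch; the helper name is made up and is not Neutron code:

```python
import importlib


def import_vendor_module(name, hint):
    """Import an optional vendor dependency, or fail with a clear message.

    Hypothetical helper: `name` is the module a vendor plugin needs
    (e.g. ncclient), `hint` tells the operator/packager how to get it.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        raise RuntimeError(
            "This plugin requires the '%s' library, which is not listed "
            "in requirements.txt; %s" % (name, hint))
```

A packager then gets an actionable error at plugin load time instead of a traceback from deep inside the driver code.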

/Ihar



Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Daniel P. Berrange
On Tue, Aug 12, 2014 at 03:56:44PM +0100, Mark McLoughlin wrote:
 Hey
 
 (Terrible name for a policy, I know)
 
 From the version_cap saga here:
 
   https://review.openstack.org/110754
 
 I think we need a better understanding of how to approach situations
 like this.
 
 Here's my attempt at documenting what I think we're expecting the
 procedure to be:
 
   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
 
 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

A bit cumbersome, but given we have to work within Gerrit's limitations,
it looks like a valid approach / process to me.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-12 Thread Jeremy Stanley
On 2014-08-12 16:35:18 +0000 (+0000), Edgar Magana wrote:
 If this plugin will be deprecated in Juno it means that the code
 will be there for this release, I will expect to have the CI still
 running until the code is completely removed from the Neutron
 tree.
 
 Anyway, Infra guys will have the last word here!

It's really not up to the Project Infrastructure Team to decide
this (we merely provide guidance, assistance and, sometimes,
arbitration for such matters). It's ultimately the Neutron developer
community who needs to determine whether they're willing to support
an untested feature through deprecation or insist on continued
testing until its full removal can be realized.
-- 
Jeremy Stanley



Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Doug Hellmann

On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
 On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  Hi everyone,
 
  With the incredible growth of OpenStack, our development community is
  facing complex challenges. How we handle those might determine the
  ultimate success or failure of OpenStack.
 
  With this cycle we hit new limits in our processes, tools and cultural
  setup. This resulted in new limiting factors on our overall velocity,
  which is frustrating for developers. This resulted in the burnout of key
  firefighting resources. This resulted in tension between people who try
  to get specific work done and people who try to keep a handle on the big
  picture.
 
  It all boils down to an imbalance between strategic and tactical
  contributions. At the beginning of this project, we had a strong inner
  group of people dedicated to fixing all loose ends. Then a lot of
  companies got interested in OpenStack and there was a surge in tactical,
  short-term contributions. We put on a call for more resources to be
  dedicated to strategic contributions like critical bugfixing,
  vulnerability management, QA, infrastructure... and that call was
  answered by a lot of companies that are now key members of the OpenStack
  Foundation, and all was fine again. But OpenStack contributors kept on
  growing, and we grew the narrowly-focused population way faster than the
  cross-project population.
 
 
  At the same time, we kept on adding new projects to incubation and to
  the integrated release, which is great... but the new developers you get
  on board with this are much more likely to be tactical than strategic
  contributors. This also contributed to the imbalance. The penalty for
  that imbalance is twofold: we don't have enough resources available to
  solve old, known OpenStack-wide issues; but we also don't have enough
  resources to identify and fix new issues.
 
  We have several efforts under way, like calling for new strategic
  contributors, driving towards in-project functional testing, making
  solving rare issues a more attractive endeavor, or hiring resources
  directly at the Foundation level to help address those. But there is a
  topic we haven't raised yet: should we concentrate on fixing what is
  currently in the integrated release rather than adding new projects ?
 
 
  TL;DR: Our development model is having growing pains. Until we sort out the
  growing pains, adding more projects spreads us too thin.
 
 +100
 
  In addition to the issues mentioned above, with the scale of OpenStack today
  we have many major cross project issues to address and no good place to
  discuss them.
 
 We do have the ML, as well as the cross-project meeting every Tuesday
 [1], but we as a project need to do a better job of actually bringing
 up relevant issues here.
 
 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
 
 
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 
 
  I really like this idea. As Michael and others alluded to above, we are
  attempting to set cycle goals for Kilo in Nova, but I think it is worth
  doing for all of OpenStack. We would like to make a list of key goals before
  the summit so that we can plan our summit sessions around the goals. On a
  really high level one way to look at this is, in Kilo we need to pay down
  our technical debt.
 
  The slots/runway idea is somewhat separate from defining key cycle goals; we
  can approve blueprints based on key cycle goals without doing slots.  But
  with so many concurrent blueprints up for review at any given time, the
  review teams are doing a lot of multitasking and humans are not very good at
  multitasking. Hopefully slots can help address this issue, and hopefully
  allow us to actually merge more blueprints in a given cycle.
 
 I'm not 100% sold on what the slots idea buys us. What I've seen this
 cycle in Neutron is that we have a LOT of BPs proposed. We approve
 them after review. And then we hit one of two issues: Slow review
 cycles, and slow code turnaround issues. I don't think slots would
 help this, and in fact may cause more issues. If we approve a BP and
 give it a slot for which the eventual result is slow review and/or
 code review turnaround, we're right back where we started. Even worse,
 we may have not picked a BP for which the code submitter would have
 turned around reviews faster. So we've now doubly hurt ourselves. I
 

Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-12 Thread Salvatore Orlando
And just when the patch was only missing a +A, another bug slipped in!
The nova patch to fix it is available at [1]

And while we're there, it wouldn't be a bad idea to also push the neutron full
job, as non-voting, into the integrated gate [2]

Thanks in advance,
(especially to the nova and infra cores who'll review these patches!)
Salvatore

[1] https://review.openstack.org/#/c/113554/
[2] https://review.openstack.org/#/c/113562/


On 7 August 2014 17:51, Salvatore Orlando sorla...@nicira.com wrote:

 Thanks Armando,

 The bug you pointed out was the reason for the failure we've
 been seeing.
 The follow-up patch merged and I've removed the wip status from the patch
 for the full job [1]

 Salvatore

 [1] https://review.openstack.org/#/c/88289/


 On 7 August 2014 16:50, Armando M. arma...@gmail.com wrote:

 Hi Salvatore,

 I did notice the issue and I flagged this bug report:

 https://bugs.launchpad.net/nova/+bug/1352141

 I'll follow up.

 Cheers,
 Armando


 On 7 August 2014 01:34, Salvatore Orlando sorla...@nicira.com wrote:

 I had to put the patch back on WIP because yesterday a bug causing a
 100% failure rate slipped in.
 It should be an easy fix, and I'm already working on it.
  Situations like this, exemplified by [1], are a bit frustrating for all
 the people working on improving neutron quality.
 Now, if you allow me a little rant, as Neutron is receiving a lot of
 attention for all the ongoing discussion regarding this group policy stuff,
 would it be possible for us to receive a bit of attention to ensure both
 the full job and the grenade one are switched to voting before the juno-3
  review crunch?

  We've already had the attention of the QA team; it would probably be good
 if we could get the attention of the infra core team to ensure:
 1) the jobs are also deemed by them stable enough to be switched to
 voting
 2) the relevant patches for openstack-infra/config are reviewed

 Regards,
 Salvatore

 [1]
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


 On 23 July 2014 14:59, Matthew Treinish mtrein...@kortar.org wrote:

 On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
  Here I am again bothering you with the state of the full job for
 Neutron.
 
  The patch for fixing an issue in nova's server external events
 extension
  merged yesterday [1]
  We do not yet have enough data points to make a reliable assessment,
  but out of 37 runs since the patch merged we had only 5 failures, which
  puts the failure rate at about 13%
 
  This is ugly compared with the current failure rate of the smoketest
 (3%).
  However, I think it is good enough to start making the full job
 voting at
  least for neutron patches.
  Once we are able to bring the failure rate down to around 5%,
 we can
  then enable the job everywhere.

 I think that sounds like a good plan. I'm also curious how the failure
 rates
 compare to the other non-neutron jobs; that might be a useful
 comparison too
 for deciding when to flip the switch everywhere.

 
  As much as I hate asymmetric gating, I think this is a good
 compromise for
  avoiding developers working on other projects being badly affected by
 the
  higher failure rate in the neutron full job.

 So we discussed this during the project meeting a couple of weeks ago
 [3] and
 there was a general agreement that doing it asymmetrically at first
 would be
 better. Everyone should be wary of the potential harms with doing it
 asymmetrically and I think priority will be given to fixing issues that
 block
 the neutron gate should they arise.

  I will therefore resume work on [2] and remove the WIP status as soon
 as I
  can confirm a failure rate below 15% with more data points.
 

 Thanks for keeping on top of this Salvatore. It'll be good to finally
 be at
 least partially gating with a parallel job.

 -Matt Treinish

 
  [1] https://review.openstack.org/#/c/103865/
  [2] https://review.openstack.org/#/c/88289/
 [3]
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28

 
 
  On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:
 
  
  
  
   On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:
  
  
   On 10/07/14 11:07, Salvatore Orlando wrote:
The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
it seems there has been an improvement on the failure rate, which
seem to have dropped to 25% from over 40%. Still, since the patch
merged there have been 11 failures already in the full job out of
42 jobs 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Devananda van der Veen
On Tue, Aug 12, 2014 at 10:44 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote:

 Slow review: by limiting the number of blueprints up we hope to focus our
 efforts on fewer concurrent things
 slow code turn around: when a blueprint is given a slot (runway) we will
 first make sure the author/owner is available for fast code turnaround.

 If a blueprint review stalls out (slow code turnaround, stalemate in
 review discussions etc.) we will take the slot and give it to another
 blueprint.


 How is that more efficient than today's do-the-best-we-can approach? It just
 sounds like bureaucracy to me.

 Reading between the lines throughout this thread, it sounds like what we're
 lacking is a reliable method to communicate review prioritization to core
 reviewers.

AIUI, that is precisely what the proposed slots would do -- allow
the PTL (or the drivers team) to reliably communicate review
prioritization to the core review team, in a way that is *not* just
more noise on IRC, and is visible to all contributors.

-Deva



Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread John Dickinson

On Aug 12, 2014, at 11:08 AM, Doug Hellmann d...@doughellmann.com wrote:

 
 On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
 On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  Hi everyone,
 
  With the incredible growth of OpenStack, our development community is
  facing complex challenges. How we handle those might determine the
  ultimate success or failure of OpenStack.
 
  With this cycle we hit new limits in our processes, tools and cultural
  setup. This resulted in new limiting factors on our overall velocity,
  which is frustrating for developers. This resulted in the burnout of key
  firefighting resources. This resulted in tension between people who try
  to get specific work done and people who try to keep a handle on the big
  picture.
 
  It all boils down to an imbalance between strategic and tactical
  contributions. At the beginning of this project, we had a strong inner
  group of people dedicated to fixing all loose ends. Then a lot of
  companies got interested in OpenStack and there was a surge in tactical,
  short-term contributions. We put on a call for more resources to be
  dedicated to strategic contributions like critical bugfixing,
  vulnerability management, QA, infrastructure... and that call was
  answered by a lot of companies that are now key members of the OpenStack
  Foundation, and all was fine again. But OpenStack contributors kept on
  growing, and we grew the narrowly-focused population way faster than the
  cross-project population.
 
 
  At the same time, we kept on adding new projects to incubation and to
  the integrated release, which is great... but the new developers you get
  on board with this are much more likely to be tactical than strategic
  contributors. This also contributed to the imbalance. The penalty for
  that imbalance is twofold: we don't have enough resources available to
  solve old, known OpenStack-wide issues; but we also don't have enough
  resources to identify and fix new issues.
 
  We have several efforts under way, like calling for new strategic
  contributors, driving towards in-project functional testing, making
  solving rare issues a more attractive endeavor, or hiring resources
  directly at the Foundation level to help address those. But there is a
  topic we haven't raised yet: should we concentrate on fixing what is
  currently in the integrated release rather than adding new projects ?
 
 
  TL;DR: Our development model is having growing pains. Until we sort out the
  growing pains, adding more projects spreads us too thin.
 
 +100
 
  In addition to the issues mentioned above, with the scale of OpenStack 
  today
  we have many major cross project issues to address and no good place to
  discuss them.
 
 We do have the ML, as well as the cross-project meeting every Tuesday
 [1], but we as a project need to do a better job of actually bringing
 up relevant issues here.
 
 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
 
 
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 
 
  I really like this idea. As Michael and others alluded to above, we are
  attempting to set cycle goals for Kilo in Nova, but I think it is worth
  doing for all of OpenStack. We would like to make a list of key goals 
  before
  the summit so that we can plan our summit sessions around the goals. On a
  really high level one way to look at this is, in Kilo we need to pay down
  our technical debt.
 
  The slots/runway idea is somewhat separate from defining key cycle goals; 
  we
  can approve blueprints based on key cycle goals without doing slots.  
  But
  with so many concurrent blueprints up for review at any given time, the
  review teams are doing a lot of multitasking and humans are not very good 
  at
  multitasking. Hopefully slots can help address this issue, and hopefully
  allow us to actually merge more blueprints in a given cycle.
 
 I'm not 100% sold on what the slots idea buys us. What I've seen this
 cycle in Neutron is that we have a LOT of BPs proposed. We approve
 them after review. And then we hit one of two issues: Slow review
 cycles, and slow code turnaround issues. I don't think slots would
 help this, and in fact may cause more issues. If we approve a BP and
 give it a slot for which the eventual result is slow review and/or
 code review turnaround, we're right back where we started. Even worse,
 we may have not picked a BP for which the 

[openstack-dev] [Horizon] Feature Proposal Freeze date Aug 14

2014-08-12 Thread Lyle, David
It came to my attention today that I've only communicated this in Horizon
team meetings.

Due to the high number of blueprints already targeting Juno-3 and the
resource contention of reviewers, I have set the Horizon Feature Proposal
Deadline at August 14 (August 12 actually, but since I didn't include the
mailing list, adding 2 days). This will hopefully reduce some of the noise
as we approach the J-3 milestone.

Thanks,
David




Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Eoghan Glynn
 
 It seems like this is exactly what the slots give us, though. The core review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

One thing I'm not seeing shine through in this discussion of slots is
whether individual cores, or small subsets of the core team with
aligned interests, can champion blueprints that they have a
particular interest in.

For example it might address some pain-point they've encountered, or
impact on some functional area that they themselves have worked on in
the past, or line up with their thinking on some architectural point.

But for whatever motivation, such small groups of cores currently have
the freedom to self-organize in a fairly emergent way and champion
individual BPs that are important to them, simply by *independently*
giving those BPs review attention.

Whereas under the slots initiative, presumably this power would be
subsumed by the group will, as expressed by the prioritization
applied to the holding pattern feeding the runways?

I'm not saying this is good or bad, just pointing out a change that
we should have our eyes open to.

Cheers,
Eoghan



Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient

2014-08-12 Thread Henry Gessau
On 8/12/2014 1:53 PM, Ihar Hrachyshka wrote:
 On 12/08/14 17:12, Henry Gessau wrote:
 On 8/12/2014 10:27 AM, Ihar Hrachyshka wrote:
 as per [1], Cisco Nexus ML2 plugin requires a patched version of 
 ncclient from github. I wonder:

  - whether this information is still current;

 Please see: https://review.openstack.org/112175

 But we need to do backports before updating the wiki.
 
 Thanks for the link!
 

  - why don't we depend on ncclient through our requirements.txt
 file.

 Do we want to have requirements on things that are only used by a
 specific vendor plugin? So far it has worked by vendor-specific
 documentation instructing to manually install the requirement, or
 vendor-tailored deployment tools/scripts.

 
 In downstream, it's hard to maintain all plugin dependencies if they
 are not explicitly mentioned in e.g. requirements.txt. Red Hat ships
 those plugins (with no commercial support or testing done on our
  side), and we didn't know that, to make the plugin actually usable, we
  needed to install that ncclient module until a person from Cisco
 reported the issue to us. We don't usually monitor random wiki pages
 to get an idea what we need to package and depend on. :)
 
 I think we should have every third party module that we directly use
 in requirements.txt. We have code in the tree that imports ncclient
 (btw is it unit tested?), so I think it's enough to make that

The unit tests mock the import of ncclient.

 dependency explicit.
 
  Now, maybe putting the module into requirements.txt is overkill
 (though I doubt it). In that case, we could be interested in getting
 the info in some other centralized way.

I am not familiar with other ways, but let me know if I can be of any help.

Note: it seems that the Brocade plugin also imports ncclient.
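For reference, mocking an import like that is typically done by pre-seeding sys.modules so any `import ncclient` in the code under test resolves to a fake. This is a generic sketch, not necessarily how the Cisco unit tests do it:

```python
import sys
import unittest
from unittest import mock  # in 2014-era code this was the external `mock` package


class FakeNcclientTest(unittest.TestCase):
    def test_connect_is_recorded(self):
        fake = mock.MagicMock()
        # While the patch is active, `import ncclient` returns the fake
        # module, so no real ncclient installation is needed.
        with mock.patch.dict(sys.modules, {"ncclient": fake}):
            import ncclient
            ncclient.manager.connect(host="nexus-switch")
        fake.manager.connect.assert_called_once_with(host="nexus-switch")
```

The driver code under test never notices the substitution, and the test can assert exactly which calls were made against the vendor library.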




Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Pendergrass, Eric
Sure, here's the decorated method from v2.py:

class MetersController(rest.RestController):
    """Works on meters."""

    @pecan.expose()
    def _lookup(self, meter_name, *remainder):
        return MeterController(meter_name), remainder

    @wsme_pecan.wsexpose([Meter], [Query])
    @secure(RBACController.check_permissions)
    def get_all(self, q=None):

and here's the decorator called by the secure tag:

class RBACController(object):
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()


    @classmethod
    def check_permissions(cls):
        # do some stuff

In check_permissions I'd like to know the class and method with the @secure tag 
that caused check_permissions to be invoked.  In this case, that would be 
MetersController.get_all.
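One workaround, if the built-in decorator offers no hook for this, is a small custom decorator that binds the protected method's identity and hands it to the checker. A sketch only — `secure_with_name` is a made-up name, not part of Pecan's API:

```python
import functools


def secure_with_name(check):
    """Like @secure, but tell the checker which method it protects.

    Hypothetical variant: `check` receives "ClassName.method_name"
    instead of being invoked with no context.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            check("%s.%s" % (type(self).__name__, func.__name__))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


calls = []


class MetersController(object):
    @secure_with_name(calls.append)
    def get_all(self, q=None):
        return []


MetersController().get_all()
# calls == ["MetersController.get_all"]
```

check_permissions could then take that dotted name as an argument and enforce per-method policy without inspecting Pecan's request state.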

Thanks


 Can you share some code?  What do you mean by, is there a way for the 
 decorator code to know it was called by MetersController.get_all?

 On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
  Thanks Ryan, but for some reason the controller attribute is None:
 
  (Pdb) from pecan.core import state
  (Pdb) state.__dict__
  {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0,
  ceilometer.api.hooks.DBHook object at 0x3189650,
  ceilometer.api.hooks.PipelineHook object at 0x39871d0,
  ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app':
  pecan.core.Pecan object at 0x2e76390, 'request': Request at
  0x3ed7390 GET http://localhost:8777/v2/meters, 'controller': None,
  'response': Response at 0x3ed74d0 200 OK}
 
   -Original Message-
   From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
   Sent: Tuesday, August 12, 2014 10:34 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's 
   name/class using Pecan secure decorators?
  
   This should give you what you need:
  
   from pecan.core import state
   state.controller
  
   On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
Hi, I'm trying to use the built in secure decorator in Pecan for access 
 control, and I'd like to get the name of the method that is wrapped 
from within the decorator.
   
For instance, if I'm wrapping MetersController.get_all with an @secure 
decorator, is there a way for the decorator code to know it was called 
by MetersController.get_all?
   
I don't see any global objects that provide this information.  I can 
get the endpoint, v2/meters, with pecan.request.path, but that's not as 
elegant.
   
Is there a way to derive the caller or otherwise pass this information 
to the decorator?
   
Thanks
Eric Pendergrass
  
  
  
   --
   Ryan Petrello
   Senior Developer, DreamHost
   ryan.petre...@dreamhost.com



Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-12 Thread Henry Gessau
On 8/12/2014 2:04 PM, Jeremy Stanley wrote:
 On 2014-08-12 16:35:18 +0000 (+0000), Edgar Magana wrote:
 If this plugin will be deprecated in Juno it means that the code
 will be there for this release, I will expect to have the CI still
 running until the code is completely removed from the Neutron
 tree.

 Anyway, Infra guys will have the last word here!
 
 It's really not up to the Project Infrastructure Team to decide
 this (we merely provide guidance, assistance and, sometimes,
 arbitration for such matters). It's ultimately the Neutron developer
 community who needs to determine whether they're willing to support
 an untested feature through deprecation or insist on continued
 testing until its full removal can be realized.

The Cisco Nexus sub-plugin is broken because the OVS plugin that it depends on
is broken. The Neutron Project switched from the OVS plugin to ML2 for testing
a long time ago, and the OVS plugin will be removed from the tree in Juno.
There are no plans to fix the OVS plugin, so the Cisco Nexus sub-plugin will
not be fixed either.

There are bugs[1,2] open to remove the deprecated plugins from the tree.

[1] https://bugs.launchpad.net/neutron/+bug/1323729
[2] https://bugs.launchpad.net/neutron/+bug/1350387




Re: [openstack-dev] [nova] 9 days until feature proposal freeze

2014-08-12 Thread Jay Pipes

On 08/12/2014 04:13 AM, Michael Still wrote:

Hi,

this is just a friendly reminder that we are now 9 days away from
feature proposal freeze for nova. If you think your blueprint isn't
going to make it in time, then now would be a good time to let me know
so that we can defer it until Kilo. That will free up reviewer time
for other blueprints.

Some people have more than one blueprint still under development...
Perhaps they could defer some of those to Kilo?


I removed 
https://blueprints.launchpad.net/nova/+spec/allocation-ratio-to-resource-tracker 
from the Juno cycle, and noted reasons why in the whiteboard (ongoing 
discussions around scheduler separation and the scope of the resource 
tracker with regard to claim processing).


Best,
-jay



Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-12 Thread Edgar Magana
Henry,

That makes a lot of sense to me.
If the code will be removed in Juno, then there is nothing else to discuss.

Thank you so much for providing detailed information and sorry for
bothering you with this issue.

Edgar

On 8/12/14, 11:49 AM, Henry Gessau ges...@cisco.com wrote:

On 8/12/2014 2:04 PM, Jeremy Stanley wrote:
 On 2014-08-12 16:35:18 + (+), Edgar Magana wrote:
 If this plugin will be deprecated in Juno it means that the code
 will be there for this release, I will expect to have the CI still
 running until the code is completely removed from the Neutron
 tree.

 Anyway, Infra guys will have the last word here!
 
 It's really not up to the Project Infrastructure Team to decide
 this (we merely provide guidance, assistance and, sometimes,
 arbitration for such matters). It's ultimately the Neutron developer
 community who needs to determine whether they're willing to support
 an untested feature through deprecation or insist on continued
 testing until its full removal can be realized.

The Cisco Nexus sub-plugin is broken because the OVS plugin that it
depends on is broken. The Neutron Project switched from the OVS plugin to
ML2 for testing a long time ago, and the OVS plugin will be removed from
the tree in Juno. There are no plans to fix the OVS plugin, so the Cisco
Nexus sub-plugin will not be fixed either.

There are bugs[1,2] open to remove the deprecated plugins from the tree.

[1] https://bugs.launchpad.net/neutron/+bug/1323729
[2] https://bugs.launchpad.net/neutron/+bug/1350387




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Dolph Mathews
On Tue, Aug 12, 2014 at 1:08 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:


 On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com
 wrote:




 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
 
 
  On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  Hi everyone,
 
  With the incredible growth of OpenStack, our development community is
  facing complex challenges. How we handle those might determine the
  ultimate success or failure of OpenStack.
 
  With this cycle we hit new limits in our processes, tools and cultural
  setup. This resulted in new limiting factors on our overall velocity,
  which is frustrating for developers. This resulted in the burnout of
 key
  firefighting resources. This resulted in tension between people who
 try
  to get specific work done and people who try to keep a handle on the
 big
  picture.
 
  It all boils down to an imbalance between strategic and tactical
  contributions. At the beginning of this project, we had a strong inner
  group of people dedicated to fixing all loose ends. Then a lot of
  companies got interested in OpenStack and there was a surge in
 tactical,
  short-term contributions. We put on a call for more resources to be
  dedicated to strategic contributions like critical bugfixing,
  vulnerability management, QA, infrastructure... and that call was
  answered by a lot of companies that are now key members of the
 OpenStack
  Foundation, and all was fine again. But OpenStack contributors kept on
  growing, and we grew the narrowly-focused population way faster than
 the
  cross-project population.
 
 
  At the same time, we kept on adding new projects to incubation and to
  the integrated release, which is great... but the new developers you
 get
  on board with this are much more likely to be tactical than strategic
  contributors. This also contributed to the imbalance. The penalty for
  that imbalance is twofold: we don't have enough resources available to
  solve old, known OpenStack-wide issues; but we also don't have enough
  resources to identify and fix new issues.
 
  We have several efforts under way, like calling for new strategic
  contributors, driving towards in-project functional testing, making
  solving rare issues a more attractive endeavor, or hiring resources
  directly at the Foundation level to help address those. But there is a
  topic we haven't raised yet: should we concentrate on fixing what is
  currently in the integrated release rather than adding new projects ?
 
 
  TL;DR: Our development model is having growing pains. until we sort
 out the
  growing pains adding more projects spreads us too thin.
 
 +100

  In addition to the issues mentioned above, with the scale of OpenStack
 today
  we have many major cross project issues to address and no good place to
  discuss them.
 
 We do have the ML, as well as the cross-project meeting every Tuesday
 [1], but we as a project need to do a better job of actually bringing
 up relevant issues here.

 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting

 
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 
 
  I really like this idea, as Michael and others alluded to in above, we
 are
  attempting to set cycle goals for Kilo in Nova. but I think it is worth
  doing for all of OpenStack. We would like to make a list of key goals
 before
  the summit so that we can plan our summit sessions around the goals.
 On a
  really high level one way to look at this is, in Kilo we need to pay
 down
  our technical debt.
 
  The slots/runway idea is somewhat separate from defining key cycle
 goals; we
  can be approve blueprints based on key cycle goals without doing
 slots.  But
  with so many concurrent blueprints up for review at any given time, the
  review teams are doing a lot of multitasking and humans are not very
 good at
  multitasking. Hopefully slots can help address this issue, and
 hopefully
  allow us to actually merge more blueprints in a given cycle.
 
 I'm not 100% sold on what the slots idea buys us. What I've seen this
 cycle in Neutron is that we have a LOT of BPs proposed. We approve
 them after review. And then we hit one of two issues: Slow review
 cycles, and slow code turnaround issues. I don't think slots would
 help this, and in fact may cause more issues. If we approve a BP and
 give it a slot for which the eventual result is slow review and/or
 code review turnaround, we're right back where we started. Even worse,
 we may have not picked a BP for which the 

[openstack-dev] [Neutron] [LBaaS] Followup on Service Ports and IP Allocation - IPAM from LBaaS Mid Cycle meeting

2014-08-12 Thread Eichberger, German
Hi Mark,

Going through the notes from our midcycle meeting (see 
https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon) I noticed your 
name next to the Service Port and IPAM:
Service Ports

* Owner: Mark

* Nova hacks

* Nova port that nova borrows but doesn't destroy when VM is

IP allocation - IPAM

* TBD: Large task: Owner: Mark

* ability to assoc an IP that is not associated with a port/vm

* can we create a faster way of moving IPs? (Susanne)

With all the other LBaaS work we sort of lost track of that, but now that we 
have started planning for Octavia I am wondering if there has been any progress 
on those topics.

Thanks a dozen,
German
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers

2014-08-12 Thread Stephen Balukoff
From the perspective of Blue Box:

* Load balancing appliances will often (usually?) live outside the same
subnet as back-end member VMs.
* The network in which the load balancing appliances live will usually have
a default router (gateway)
* We don't anticipate the need for using extra_routes at this time, though
I suspect other operators might need this.
* We also anticipate occasionally needing the load balancing appliances to
have layer-2 connectivity to some back-end member VMs.



On Tue, Aug 12, 2014 at 12:32 AM, Susanne Balle sleipnir...@gmail.com
wrote:

 In the context of Octavia and Neutron LBaaS. Susanne


 On Mon, Aug 11, 2014 at 5:44 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

 Susanne,

 Are you asking in the context of Load Balancer services in general, or in
 terms of the Neutron LBaaS project or the Octavia project?

 Stephen


 On Mon, Aug 11, 2014 at 9:04 AM, Doug Wiegley do...@a10networks.com
 wrote:

 Hi Susanne,

 While there are a few operators involved with LBaaS that would have good
 input, you might want to also ask this on the non-dev mailing list, for a
 larger sample size.

 Thanks,
 doug

 On 8/11/14, 3:05 AM, Susanne Balle sleipnir...@gmail.com wrote:

 Gang,
 I was asked the following questions around our Neutron LBaaS use cases:
 1. Will there be a scenario where the "VIP" port will be on a different
 node from all the member "VMs" in a pool?
 
 
 2. Also, how likely is it for the LBaaS-configured subnet to not have a
 "router" and just use the "extra_routes" option?
 
 3. Is there a valid use case where customers will be using
 "extra_routes" with subnets instead of "routers"?
 (It would be great if you have some use-case picture for this.)
 Feel free to chime in here and I'll summarize the answers.
 Regards Susanne
 






 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807








-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-12 Thread Kashyap Chamarthy
On Mon, Aug 11, 2014 at 08:05:26AM -0400, Russell Bryant wrote:
 On 08/11/2014 07:58 AM, Russell Bryant wrote:
  On 08/11/2014 05:53 AM, Daniel P. Berrange wrote:
There is work to add support for this in devstack already, which I
  prefer since it makes it easy for developers to get an environment
  which matches the build system:
 
https://review.openstack.org/#/c/108714/
  
  Ah, cool.  Devstack is indeed a better place to put the build scripting.
   So, I think we should:
  
  1) Get the above patch working, and then merged.
  
  2) Get an experimental job going to use the above while we work on #3
  
  3) Before the job can move into the check queue and potentially become
  voting, it needs to not rely on downloading the source on every run.
  IIRC, we can have nodepool build an image to use for these jobs that
  includes the bits already installed.
  
  I'll switch my efforts over to helping get the above completed.
  
 
 I still think the devstack patch is good, but after some more thought, I
 think a better long term CI job setup would just be a fedora image with
 the virt-preview repo. 

So, effectively, you're trying to add a minimal Fedora image w/
virt-preview repo (as part of some post-install kickstart script). If
so, where would the image be stored? I'm asking because Sean Dague
previously mentioned mirroring issues with Fedora images (which later
turned out to be intermittent network issues with OpenStack infra cloud
providers), and floated the idea of storing an updated image on
tarballs.openstack.org, the way Trove[1] does. But OpenStack infra
folks (fungi) raised some valid points on why not to do that.

IIUC, if you intend to run tests against this new image in this CI job,
there has to be a mechanism in place to ensure the cached copy (on
tarballs.o.o) is updated.

If I misunderstood what you said, please correct me.


[1] http://tarballs.openstack.org/trove/images/

 I think I'll try that ...

 

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Meeting cancelled and time for next week.

2014-08-12 Thread Steve Gordon
Hi all,

I am not available to run the meeting tomorrow and was not able to identify 
someone to step in. Given this, I think it makes sense to cancel for this week. 

For next week I would like to trial the new alternate time we discussed, 1600 
UTC on a Thursday, and, assuming there is reasonable attendance, alternate 
weekly from there. Are there any objections to this?

As the Feature Proposal Freeze [1] is fast approaching for projects that 
enforce it I will endeavour to track down any of the blueprints listed on the 
wiki that were approved but don't have code submissions associated with them 
yet and highlight this on the mailing list in lieu of a meeting.

Thanks,

Steve

[1] https://wiki.openstack.org/wiki/FeatureProposalFreeze

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Agenda for 13 Aug 2014 meeting

2014-08-12 Thread Stephen Balukoff
Hi folks!

This is what I have for my tentative agenda for tomorrow's Octavia meeting.
Please e-mail me if you want anything else added to this list. (Also, I
will start putting these weekly agendas in the wiki in the near future.)

* Discuss future of Octavia in light of Neutron-incubator project proposal.

* Discuss operator networking requirements (carryover from last week)

* Discuss v0.5 component design proposal:
https://review.openstack.org/#/c/113458/

* Discuss timeline on moving these meetings to IRC.

As usual, please e-mail me if you'd like information on connecting to the
webex we're presently using for these meetings.

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-12 Thread Anita Kuno
On 08/12/2014 01:16 PM, Edgar Magana wrote:
 Henry,
 
 That makes a lot of sense to me.
 If the code will be removed in Juno, then there is nothing else to discuss.
 
 Thank you so much for providing detailed information and sorry for
 bothering you with this issue.
 
 Edgar
I don't think it is a bother, I think it is good information to have.
Now we just have to figure out the process for the future so we also know
the best path of communication.

Thanks,
Anita.
 
 On 8/12/14, 11:49 AM, Henry Gessau ges...@cisco.com wrote:
 
 On 8/12/2014 2:04 PM, Jeremy Stanley wrote:
 On 2014-08-12 16:35:18 + (+), Edgar Magana wrote:
 If this plugin will be deprecated in Juno it means that the code
 will be there for this release, I will expect to have the CI still
 running until the code is completely removed from the Neutron
 tree.

 Anyway, Infra guys will have the last word here!

 It's really not up to the Project Infrastructure Team to decide
 this (we merely provide guidance, assistance and, sometimes,
 arbitration for such matters). It's ultimately the Neutron developer
 community who needs to determine whether they're willing to support
 an untested feature through deprecation or insist on continued
 testing until its full removal can be realized.

 The Cisco Nexus sub-plugin is broken because the OVS plugin that it
 depends on is broken. The Neutron Project switched from the OVS plugin
 to ML2 for testing a long time ago, and the OVS plugin will be removed
 from the tree in Juno. There are no plans to fix the OVS plugin, so the
 Cisco Nexus sub-plugin will not be fixed either.

 There are bugs[1,2] open to remove the deprecated plugins from the tree.

 [1] https://bugs.launchpad.net/neutron/+bug/1323729
 [2] https://bugs.launchpad.net/neutron/+bug/1350387


 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-12 Thread Russell Bryant
On 08/12/2014 03:40 PM, Kashyap Chamarthy wrote:
 On Mon, Aug 11, 2014 at 08:05:26AM -0400, Russell Bryant wrote:
 On 08/11/2014 07:58 AM, Russell Bryant wrote:
 On 08/11/2014 05:53 AM, Daniel P. Berrange wrote:
There is work to add support for this in devstack already, which I
 prefer since it makes it easy for developers to get an environment
 which matches the build system:

   https://review.openstack.org/#/c/108714/

 Ah, cool.  Devstack is indeed a better place to put the build scripting.
  So, I think we should:

 1) Get the above patch working, and then merged.

 2) Get an experimental job going to use the above while we work on #3

 3) Before the job can move into the check queue and potentially become
 voting, it needs to not rely on downloading the source on every run.
 IIRC, we can have nodepool build an image to use for these jobs that
 includes the bits already installed.

 I'll switch my efforts over to helping get the above completed.


 I still think the devstack patch is good, but after some more thought, I
 think a better long term CI job setup would just be a fedora image with
 the virt-preview repo. 
 
 So, effectively, you're trying to add a minimal Fedora image w/
 virt-preview repo (as part of some post-install kickstart script). If
 so, where would the image be stored? I'm asking because, previously Sean
 Dague mentioned of mirroring issues (which later turned out to be
 intermittent network issues with OpenStack infra cloud providers) of
 Fedora images, and floated an idea whether an updated image can be
 stored on tarballs.openstack.org, like how Trove[1] does. But, OpenStack
 infra folks (fungi) raised some valid points on why not do that.
 
 IIUC, if you intend to run tests w/ this CI job with this new image,
 there has to be a mechanism in place to ensure the cached copy (on
 tarballs.o.o) is updated.
 
 If I misunderstood what you said, please correct me.

Patches for this here:

https://review.openstack.org/#/c/113349/
https://review.openstack.org/#/c/113350/

The first one is the important part about how the image is created.
nodepool runs some prep scripts against the cloud's distro image and
then snapshots it.  That's the image stored to be used later for testing.

In this case, it enables the virt-preview repo and then calls out to the
regular devstack prep scripts to cache all packages needed for the test
locally on the image.

If there are issues with the reliability of fedorapeople.org, it will
indeed cause problems, but at least it's local to image creation and not
every test run.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Thierry Carrez
Dan Smith wrote:
 Looks reasonable to me.
 
 +1

+1

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-12 Thread Sylvain Bauza


On 12/08/2014 18:54, Nikola Đipanov wrote:

On 08/12/2014 04:49 PM, Sylvain Bauza wrote:

(sorry for reposting, missed 2 links...)

Hi Nikola,

On 12/08/2014 12:21, Nikola Đipanov wrote:

Hey Nova-istas,

While I was hacking on [1] I was considering how to approach the fact
that we now need to track one more thing (NUMA node utilization) in our
resources. I went with - I'll add it to compute nodes table thinking
it's a fundamental enough property of a compute host that it deserves to
be there, although I was considering  Extensible Resource Tracker at one
point (ERT from now on - see [2]) but looking at the code - it did not
seem to provide anything I desperately needed, so I went with keeping it
simple.

So fast-forward a few days, and I caught myself solving a problem that I
kept thinking ERT should have solved - but apparently hasn't, and I
think it is fundamentally a broken design without it - so I'd really
like to see it re-visited.

The problem can be described by the following lemma (if you take 'lemma'
to mean 'a sentence I came up with just now' :)):


Due to the way scheduling works in Nova (roughly: pick a host based on
stale(ish) data, rely on claims to trigger a re-schedule), _same exact_
information that scheduling service used when making a placement
decision, needs to be available to the compute service when testing the
placement.


This is not the case right now, and the ERT does not propose any way to
solve it - (see how I hacked around needing to be able to get
extra_specs when making claims in [3], without hammering the DB). The
result will be that any resource that we add and needs user supplied
info for scheduling an instance against it, will need a buggy
re-implementation of gathering all the bits from the request that
scheduler sees, to be able to work properly.

Well, ERT does provide a plugin mechanism for testing resources at the
claim level. It is the plugin's responsibility to implement a test()
method [2.1], which will be called when test_claim() [2.2] runs.

So, provided this method is implemented, a local host check can be done
based on the host's view of resources.



Yes - the problem is there is no clear API to get all the needed bits to
do so - especially the user-supplied ones from images and flavors.
On top of that, in the current implementation we only pass a hand-wavy
'usage' blob in. This makes anyone wanting to use this in conjunction
with some of the user-supplied bits roll their own
'extract_data_from_instance_metadata_flavor_image' or similar, which is
horrible and also likely bad for performance.


I see your concern that there is no interface for user-facing resources 
like flavor or image metadata.
I also think that the big 'usage' blob is indeed not a good choice for a 
long-term vision.


That said, I don't think we should, as we say in French, throw the baby out 
with the bathwater... i.e. the problem is with the RT, not the ERT (apart 
from the mention of the third-party API that you noted - I'll get to it 
later below)

This is obviously a bigger concern when we want to allow users to pass
data (through image or flavor) that can affect scheduling, but still a
huge concern IMHO.

And here is where I agree with you : at the moment, ResourceTracker (and
consequently Extensible RT) only provides the view of the resources the
host is knowing (see my point above) and possibly some other resources
are missing.
So, whatever your choice of going with or without ERT, your patch [3]
still deserves it if we want not to lookup DB each time a claim goes.



As I see that there are already BPs proposing to use this IMHO broken
ERT ([4] for example), which will surely add to the proliferation of
code that hacks around these design shortcomings in what is already a
messy, but also crucial (for perf as well as features) bit of Nova code.

Two distinct implementations of that spec (ie. instances and flavors)
have been proposed [2.3] [2.4], so reviews are welcome. If you look at the
test() method, it's a no-op for both plugins. I'm open to comments
because I have the stated problem: how can we define a limit on just a
counter of instances and flavors?


Will look at these - but none of them seems to hit the issue I am
complaining about, which is that they will need to consider other
request data for claims, not only data available on instances.

Also - the fact that you don't implement test() in the flavor ones tells me
that the implementation is indeed racy (but it is racy atm as well) and
two requests can indeed race for the same host, and since no claims are
done, both can succeed. This is, I believe (at least in the case of
single-flavor hosts), unlikely to happen in practice, but you get the idea.


Agreed, these 2 patches probably require another iteration, in 
particular how we make sure that it won't be racy. So I need another run 
to think about what to test() for these 2 examples.
Another patch has to be done for aggregates, but it's still WIP so not 
mentioned here.


Anyway, as discussed during today's 

Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Anne Gentle
On Tue, Aug 12, 2014 at 9:56 AM, Mark McLoughlin mar...@redhat.com wrote:

 Hey

 (Terrible name for a policy, I know)

 From the version_cap saga here:

   https://review.openstack.org/110754

 I think we need a better understanding of how to approach situations
 like this.

 Here's my attempt at documenting what I think we're expecting the
 procedure to be:

   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.


Thanks for the write up, Mark.

When I first read the thread I thought it'd be about the case where a core
takes a vacation or is unreachable _after_ marking a review -2. Can this
case be considered in this policy as well (or is it already and I don't
know it?)

Thanks,
Anne



 Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Michael Still
This looks reasonable to me, with a slight concern that I don't know
what step five looks like... What if we can never reach a consensus on
an issue?

Michael

On Wed, Aug 13, 2014 at 12:56 AM, Mark McLoughlin mar...@redhat.com wrote:
 Hey

 (Terrible name for a policy, I know)

 From the version_cap saga here:

   https://review.openstack.org/110754

 I think we need a better understanding of how to approach situations
 like this.

 Here's my attempt at documenting what I think we're expecting the
 procedure to be:

   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

 Mark.





-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-12 Thread Mark McLoughlin
On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
 On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
  On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
   While forcing people to move to a newer version of libvirt is
   doable on most environments, do we want to do that now? What is
   the benefit of doing so?
  [...]
  
  The only dog I have in this fight is that using the split-out
  libvirt-python on PyPI means we finally get to run Nova unit tests
  in virtualenvs which aren't built with system-site-packages enabled.
  It's been a long-running headache which I'd like to see eradicated
  everywhere we can. I understand though if we have to go about it
  more slowly, I'm just excited to see it finally within our grasp.
  -- 
  Jeremy Stanley
 
 We aren't quite forcing people to move to newer versions. Only those
 installing nova test-requirements need newer libvirt.

Yeah, I'm a bit confused about the problem here. Is it that people want
to satisfy test-requirements through packages rather than using a
virtualenv?

(i.e. if people just use virtualenvs for unit tests, there's no problem
right?)

If so, is it possible/easy to create new, alternate packages of the
libvirt python bindings (from PyPI) on their own separately from the
libvirt.so and libvirtd packages?
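For what it's worth, the virtualenv-only workflow in question would look roughly like the sketch below. It uses the modern "python3 -m venv" (the 2014-era equivalent is the "virtualenv" tool), and the package names and test commands are assumptions: building libvirt-python from PyPI needs the libvirt C headers (libvirt-dev / libvirt-devel) installed system-wide, but not the distro python-libvirt binding.

```shell
# Create a virtualenv WITHOUT system site packages, so nothing leaks in
# from distro-installed Python packages:
python3 -m venv nova-venv

# Sanity-check that the interpreter really is isolated:
nova-venv/bin/python -c "import sys; assert sys.prefix != sys.base_prefix"

# From here one would run (needs network access, so shown as comments):
#   . nova-venv/bin/activate
#   pip install libvirt-python          # the split-out binding from PyPI
#   pip install -r test-requirements.txt
#   ./run_tests.sh                      # or: tox -epy27

rm -rf nova-venv
```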

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Joe Gordon
On Tue, Aug 12, 2014 at 11:08 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:


 On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com
 wrote:




 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
 
 
  On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  Hi everyone,
 
  With the incredible growth of OpenStack, our development community is
  facing complex challenges. How we handle those might determine the
  ultimate success or failure of OpenStack.
 
  With this cycle we hit new limits in our processes, tools and cultural
  setup. This resulted in new limiting factors on our overall velocity,
  which is frustrating for developers. This resulted in the burnout of
 key
  firefighting resources. This resulted in tension between people who
 try
  to get specific work done and people who try to keep a handle on the
 big
  picture.
 
  It all boils down to an imbalance between strategic and tactical
  contributions. At the beginning of this project, we had a strong inner
  group of people dedicated to fixing all loose ends. Then a lot of
  companies got interested in OpenStack and there was a surge in
 tactical,
  short-term contributions. We put on a call for more resources to be
  dedicated to strategic contributions like critical bugfixing,
  vulnerability management, QA, infrastructure... and that call was
  answered by a lot of companies that are now key members of the
 OpenStack
  Foundation, and all was fine again. But OpenStack contributors kept on
  growing, and we grew the narrowly-focused population way faster than
 the
  cross-project population.
 
 
  At the same time, we kept on adding new projects to incubation and to
  the integrated release, which is great... but the new developers you
 get
  on board with this are much more likely to be tactical than strategic
  contributors. This also contributed to the imbalance. The penalty for
  that imbalance is twofold: we don't have enough resources available to
  solve old, known OpenStack-wide issues; but we also don't have enough
  resources to identify and fix new issues.
 
  We have several efforts under way, like calling for new strategic
  contributors, driving towards in-project functional testing, making
  solving rare issues a more attractive endeavor, or hiring resources
  directly at the Foundation level to help address those. But there is a
  topic we haven't raised yet: should we concentrate on fixing what is
  currently in the integrated release rather than adding new projects ?
 
 
  TL;DR: Our development model is having growing pains. until we sort
 out the
  growing pains adding more projects spreads us too thin.
 
 +100

  In addition to the issues mentioned above, with the scale of OpenStack
 today
  we have many major cross project issues to address and no good place to
  discuss them.
 
 We do have the ML, as well as the cross-project meeting every Tuesday
 [1], but we as a project need to do a better job of actually bringing
 up relevant issues here.

 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting

 
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 
 
  I really like this idea. As Michael and others alluded to above, we
 are
  attempting to set cycle goals for Kilo in Nova, but I think it is worth
  doing for all of OpenStack. We would like to make a list of key goals
 before
  the summit so that we can plan our summit sessions around the goals.
 On a
  really high level, one way to look at this is: in Kilo we need to pay
 down
  our technical debt.
 
  The slots/runway idea is somewhat separate from defining key cycle
 goals; we
  can approve blueprints based on key cycle goals without doing
 slots.  But
  with so many concurrent blueprints up for review at any given time, the
  review teams are doing a lot of multitasking and humans are not very
 good at
  multitasking. Hopefully slots can help address this issue, and
 hopefully
  allow us to actually merge more blueprints in a given cycle.
 
 I'm not 100% sold on what the slots idea buys us. What I've seen this
 cycle in Neutron is that we have a LOT of BPs proposed. We approve
 them after review. And then we hit one of two issues: Slow review
 cycles, and slow code turnaround issues. I don't think slots would
 help this, and in fact may cause more issues. If we approve a BP and
 give it a slot for which the eventual result is slow review and/or
 code review turnaround, we're right back where we started. Even worse,
 we may not have picked a BP for which the 

Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-12 Thread Mark McLoughlin
On Wed, 2014-07-30 at 14:02 -0700, Michael Still wrote:
 Greetings,
 
 I would like to nominate Jay Pipes for the nova-core team.
 
 Jay has been involved with nova for a long time now.  He's previously
 been a nova core, as well as a glance core (and PTL). He's been around
 so long that there are probably other types of core status I have
 missed.
 
 Please respond with +1s or any concerns.

Was away, but +1 for the record. Would have been happy to see this some
time ago.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] SoftwareDeployment resource is always in progress

2014-08-12 Thread Steve Baker
On 11/08/14 20:42, david ferahi wrote:
 Hello,

 I 'm trying to create a simple stack with heat (Icehouse release).
 The template contains SoftwareConfig, SoftwareDeployment, and a single
 server resource.

 The problem is that the SoftwareDeployment resource is always in
 progress!

So first I'm going to assume you're using an image that you have created
with diskimage-builder which includes the heat-config-script element:
https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements

When I am diagnosing deployments which don't signal back I do the following:
- ssh into the server and sudo to root
- stop the os-collect-config service:
  systemctl stop os-collect-config
- run os-collect-config manually and check for errors:
  os-collect-config --one-time --debug

 After waiting for more than an hour the stack deployment failed and I
 got this error:

  TRACE heat.engine.resource HTTPUnauthorized: ERROR: Authentication
 failed. Please try again with option --include-password or export
 HEAT_INCLUDE_PASSWORD=1
 TRACE heat.engine.resource Authentication required

This looks like a different issue, you should find out what is happening
to your server configuration first.


 When I checked the log file (/var/log/heat/heat-engine.log), it shows
  the following message(every second):
 2014-08-10 19:41:09.622 2391 INFO urllib3.connectionpool [-] Starting
 new HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:10.648 2391 INFO urllib3.connectionpool [-] Starting
 new HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:11.671 2391 INFO urllib3.connectionpool [-] Starting
 new HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:12.690 2391 INFO urllib3.connectionpool [-] Starting
 new HTTP connection (1): 192.168.122.10

 Here the template I am using :
 https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/wordpress/WordPress_software-config_1-instance.yaml

 Please help!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-12 Thread Hemanth Ravi
Kyle,

One Convergence third-party CI is failing due to
https://bugs.launchpad.net/neutron/+bug/1353309.

Let me know if we should turn off the CI logs until this is fixed or if we
need to fix anything on the CI end. I think one other third-party CI
(Mellanox) is failing due to the same issue.

Regards,
-hemanth


On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com
 wrote:
  Kyle,
 
  One Convergence CI has been fixed (setup issue) and is running without
 the
  failures for ~10 days now. Updated the etherpad.
 
 Thanks for the update Hemanth, much appreciated!

 Kyle

  Thanks,
  -hemanth
 
 
  On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com
 wrote:
 
 
  On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  PLUMgrid
 
  Not saving enough logs
 
  All Jenkins slaves were just updated to upload all required logs.
 PLUMgrid
  CI should be good now.
 
 
  Thanks,
  Fawad Khaliq
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Kevin Benton
Should subsequent patches that depended on the change in question be
reverted as well?


On Tue, Aug 12, 2014 at 7:56 AM, Mark McLoughlin mar...@redhat.com wrote:

 Hey

 (Terrible name for a policy, I know)

 From the version_cap saga here:

   https://review.openstack.org/110754

 I think we need a better understanding of how to approach situations
 like this.

 Here's my attempt at documenting what I think we're expecting the
 procedure to be:

   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-12 Thread Anita Kuno
On 08/12/2014 03:23 PM, Hemanth Ravi wrote:
 Kyle,
 
 One Convergence third-party CI is failing due to
 https://bugs.launchpad.net/neutron/+bug/1353309.
 
 Let me know if we should turn off the CI logs until this is fixed or if we
 need to fix anything on the CI end. I think one other third-party CI
 (Mellanox) is failing due to the same issue.
 
 Regards,
 -hemanth
Are you One Convergence CI, hemanth?

Sorry, I don't know who is admin'ing this account.

Thanks,
Anita.
 
 
 On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com
 wrote:
 Kyle,

 One Convergence CI has been fixed (setup issue) and is running without
 the
 failures for ~10 days now. Updated the etherpad.

 Thanks for the update Hemanth, much appreciated!

 Kyle

 Thanks,
 -hemanth


 On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com
 wrote:


 On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery 
 mest...@noironetworks.com
 wrote:

 PLUMgrid

 Not saving enough logs

 All Jenkins slaves were just updated to upload all required logs.
 PLUMgrid
 CI should be good now.


 Thanks,
 Fawad Khaliq


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-12 Thread Jay Lau
Thanks Eric for the confirmation ;-)


2014-08-12 23:30 GMT+08:00 Eric Windisch ewindi...@docker.com:




 On Tue, Aug 12, 2014 at 5:53 AM, Jay Lau jay.lau@gmail.com wrote:

 I do not have the environment set up right now, but from reviewing the code, I think
 that the logic should be as follows:
 1) When using nova docker driver, we can use cloud-init or/and CMD in
 docker images to run post install scripts.
 myapp:
   Type: OS::Nova::Server
   Properties:
     flavor: m1.small
     image: my-app:latest   # docker image
     user-data:  

 2) When using heat docker driver, we can only use CMD in docker image or
 heat template to run post install scripts.
 wordpress:
   type: DockerInc::Docker::Container
   depends_on: [database]
   properties:
     image: wordpress
     links:
       db: mysql
     port_bindings:
       80/tcp: [{HostPort: 80}]
     docker_endpoint:
       str_replace:
         template: http://host:2345/
         params:
           host: {get_attr: [docker_host, networks, private, 0]}
     cmd: /bin/bash



 I can confirm this is correct for both use-cases. Currently, using Nova,
 one may only specify the CMD in the image itself, or as glance metadata.
 The cloud metadata service should be accessible and usable from Docker.

 The Heat plugin allows setting the CMD as a resource property. The
 user-data is only passed to the instance that runs Docker, not the
 containers. Configuring the CMD and/or environment variables for the
 container is the correct approach.

 --
 Regards,
 Eric Windisch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Ryan Petrello
Yep, you're right, this doesn't seem to work.  The issue is that security is
enforced at routing time (while the controller is still actually being
discovered).  In order to do this sort of thing with the `check_permissions`,
we'd probably need to add a feature to pecan.
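
Since pecan's @secure check fires at routing time, before the controller is resolved, one workaround is to skip the built-in decorator and wrap the method directly, trading routing-time enforcement for call-time enforcement. The sketch below is an illustration, not pecan API; the decorator name and its call-time semantics are assumptions:

```python
import functools


def secure_with_context(check):
    """Hypothetical alternative to pecan's @secure: wrap the controller
    method itself so the check is told exactly which method it guards."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # Runs at call time (after routing), so both the controller
            # class and the wrapped method name are available.
            check(type(self).__name__, func.__name__)
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class RBACController(object):
    calls = []

    @classmethod
    def check_permissions(cls, controller_name, method_name):
        # Record who invoked the check; a real implementation would
        # consult the policy enforcer here.
        cls.calls.append((controller_name, method_name))


class MetersController(object):
    @secure_with_context(RBACController.check_permissions)
    def get_all(self, q=None):
        return []


MetersController().get_all()
print(RBACController.calls[0])  # ('MetersController', 'get_all')
```

The trade-off: unlike @secure, an unauthorized request still reaches the controller before being rejected, so this only suits checks that don't need to short-circuit routing.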

On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
 Sure, here's the decorated method from v2.py:
 
 class MetersController(rest.RestController):
 """Works on meters."""
 
 @pecan.expose()
 def _lookup(self, meter_name, *remainder):
 return MeterController(meter_name), remainder
 
 @wsme_pecan.wsexpose([Meter], [Query])
 @secure(RBACController.check_permissions)
 def get_all(self, q=None):
 
 and here's the decorator called by the secure tag:
 
 class RBACController(object):
 global _ENFORCER
 if not _ENFORCER:
 _ENFORCER = policy.Enforcer()
 
 
 @classmethod
 def check_permissions(cls):
 # do some stuff
 
 In check_permissions I'd like to know the class and method with the @secure 
 tag that caused check_permissions to be invoked.  In this case, that would be 
 MetersController.get_all.
 
 Thanks
 
 
  Can you share some code?  What do you mean by, is there a way for the 
  decorator code to know it was called by MetersController.get_all
 
  On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
   Thanks Ryan, but for some reason the controller attribute is None:
  
   (Pdb) from pecan.core import state
   (Pdb) state.__dict__
   {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0,
   ceilometer.api.hooks.DBHook object at 0x3189650,
   ceilometer.api.hooks.PipelineHook object at 0x39871d0,
   ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app':
   pecan.core.Pecan object at 0x2e76390, 'request': Request at
   0x3ed7390 GET http://localhost:8777/v2/meters, 'controller': None,
   'response': Response at 0x3ed74d0 200 OK}
  
-Original Message-
From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
Sent: Tuesday, August 12, 2014 10:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's 
name/class using Pecan secure decorators?
   
This should give you what you need:
   
from pecan.core import state
state.controller
   
On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
 Hi, I'm trying to use the built-in secure decorator in Pecan for
 access control, and I'd like to get the name of the method that is
 wrapped from within the decorator.

 For instance, if I'm wrapping MetersController.get_all with an 
 @secure decorator, is there a way for the decorator code to know it 
 was called by MetersController.get_all?

 I don't see any global objects that provide this information.  I can 
 get the endpoint, v2/meters, with pecan.request.path, but that's not 
 as elegant.

 Is there a way to derive the caller or otherwise pass this 
 information to the decorator?

 Thanks
 Eric Pendergrass
   
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
--
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] lists and merges

2014-08-12 Thread Robert Collins
Just ran into a merge conflict with
https://review.openstack.org/#/c/105878/ which looks like this:

- name: nova_osapi
  port: 8774
  net_binds: *public_binds
- name: nova_metadata
  port: 8775
  net_binds: *public_binds
- name: ceilometer
  port: 8777
  net_binds: *public_binds
- name: swift_proxy_server
  port: 8080
  net_binds: *public_binds
 HEAD
- name: rabbitmq
  port: 5672
  options:
- timeout client 0
- timeout server 0
===
- name: mysql
  port: 3306
  extra_server_params:
- backup
 Change overcloud to use VIP for MySQL

I'd like to propose that we make it a standard - possibly lint on it,
certainly fixup things when we see its wrong - to alpha-sort such
structures: that avoids the textual-merge failure mode of 'append to
the end'.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Ryu plugin deprecation

2014-08-12 Thread YAMAMOTO Takashi
hi,

As announced in the last Neutron meeting [1], the Ryu plugin is
being deprecated.  Juno is the last release to support the Ryu plugin.
The Ryu team will be focusing on the ofagent going forward.

BTW, I'll be mostly offline from Aug 16 to Aug 31;
sorry for the inconvenience.

[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-12 Thread Lyle, David
On 8/6/14, 1:41 PM, Timur Sufiev tsuf...@mirantis.com wrote:

Hi, folks!

Two months ago there was an announcement in ML about gathering the
requirements for cross-project UI library for
Heat/Mistral/Murano/Solum [1]. The positive feedback in related
googledoc [2] and some IRC chats and emails that followed convinced me
that I'm not the only person interested in it :), so I'm happy to make
the next announcement.

The project finally has got its name - 'Merlin' (making complex UIs is
a kind of magic), Openstack wiki page [3] and all other stuff like
stackforge repo, launchpad page and IRC channel (they are all
referenced in [3]). For those who don't like clicking the links, here
is quick summary.

Merlin aims to provide a convenient client side framework for building
rich UIs for Openstack projects dealing with complex input data with
lots of dependencies and constraints (usually encoded in YAML format
via some DSL) - projects like Heat, Murano, Mistral or Solum. The
ultimate goal for such UI is to save users from reading comprehensive
documentation just in order to provide correct input data, thus making
the UI of these projects more user-friendly. If things go well for
Merlin, it could eventually be merged into the Horizon library (I'll spare
another option for the end of this letter).

The framework trying to solve this ambitious task is facing at least 2
challenges:
(1) enabling the proper UX patterns and
(2) dealing with complexities of different projects' DSLs.

Having worked on DSL things in Murano project before, I'm planning at
first to deal with the challenge (2) in the upcoming Merlin PoC. So,
here is the initial plan: design an in-framework object model (OM)
that could translated forth and back into target project's DSL. This
OM is meant to be synchronised with visual elements shown on browser
canvas. Target project is the Heat with its HOT templates - it has the
most well-established syntax among other projects and comprehensive
documentation.

Considering the challenge (1), not being a dedicated UX engineer, I'm
planning to start with some rough UI concepts [4] and gradually
improve them relying on community feedback, and especially, Openstack
UX group. If anybody from the UX team (or any other team!) is willing
to be involved to a greater degree than just giving some feedback,
you are enormously welcome! Join Merlin, it will be fun :)!

Finally, with this announcement I'd like to start a discussion with
Horizon community. As far as I know, Horizon in its current state
lacks such UI toolkit as Merlin aims to provide. Would it be by any
chance possible for the Merlin project to be developed from the very
beginning as part of Horizon library? This choice has its pros and
cons I'm aware of, but I'd like to hear the opinions of Horizon
developers on that matter.

I would like to see this toolset built into Horizon. That will make it
accessible to integrated projects like Heat that Horizon already supports,
but will also allow other projects to use the Horizon library as a
building block for managing project-specific DSLs.

David
   

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-June/037054.html
[2] 
https://docs.google.com/a/mirantis.com/document/d/19Q9JwoO77724RyOp7XkpYmA
Lwmdb7JjoQHcDv4ffZ-I/edit#
[3] https://wiki.openstack.org/wiki/Merlin
[4] https://wiki.openstack.org/wiki/Merlin/SampleUI

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][core] Expectations of core reviewers

2014-08-12 Thread Michael Still
Hi.

One of the action items from the nova midcycle was that I was asked to
make nova's expectations of core reviews more clear. This email is an
attempt at that.

Nova expects a minimum level of sustained code reviews from cores. In
the past this has been generally held to be in the order of two code
reviews a day, which is a pretty low bar compared to the review
workload of many cores. I feel that existing cores understand this
requirement well, and I am mostly stating it here for completeness.

Additionally, there are increasing levels of concern that cores need to
be on the same page about the criteria we hold code to, as well as the
overall direction of nova. While the weekly meetings help here, it was
agreed that summit attendance is really important to cores. It's the
way we decide where we're going for the next cycle, as well as a
chance to make sure that people are all pulling in the same direction
and trust each other.

There is also a strong preference for midcycle meetup attendance,
although I understand that can sometimes be hard to arrange. My stance
is that I'd like cores to try to attend, but understand that
sometimes people will miss one. In response to the increasing
importance of midcycles over time, I commit to trying to get the dates
for these events announced further in advance.

Given that we consider these physical events so important, I'd like
people to let me know if they have travel funding issues. I can then
approach the Foundation about funding travel if that is required.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] reverting the HOT migration? // dealing with lockstep changes

2014-08-12 Thread Robert Collins
On 12 August 2014 10:46, Robert Collins robe...@robertcollins.net wrote:
 On 12 August 2014 07:24, Dan Prince dpri...@redhat.com wrote:
 On Tue, 2014-08-12 at 06:58 +1200, Robert Collins wrote:
 Hi, so shortly after the HOT migration landed, we hit
 https://bugs.launchpad.net/tripleo/+bug/1354305 which is that on even
 quite recently deployed clouds, the migrated templates were just too
 new. A partial revert (of just the list_join bit) fixes that, but a
 deeper problem emerged which is that stack-update to get from a
 non-HOT to HOT template appears broken
 (https://bugs.launchpad.net/heat/+bug/1354962).

 I think we need to revert the HOT migration today, as forcing a
 scorched earth recreation of a cloud is not a great answer for folk
 that have deployed versions - its a backwards compat issue.

 It's true that our release as of Icehouse isn't really usable, so we
 could try to wiggle our way past this one, but I think as the first
 real test of our new backwards compat policy, that that would be a
 mistake.

 Hmmm. We blocked a good bit of changes to get these HOT templates in so
 I hate to see us revert them. Also, it isn't clear to me how much work
 it would be to fully support the non-HOT to HOT templates upgrade path.
 How much work is this? And is that something we really want to spend
 time on instead of all the other things?

 Following up with Heat folk, apparently the non-HOT-HOTness was a
 distraction - I'll validate this on the hp1 region asap, since I too
 would rather not revert stuff.

I've reproduced the problem with zane's fix for the validation error -
and it does indeed still break:
| stack_status_reason  | StackValidationFailed: Property error :
NovaCompute6:
|  | key_name Value must be a string


 

 We may need to document a two-step upgrade process for the UC - step 1
 upgrade the UC image, *same* template, step 2, use new template to get
 new functionality.

... once we can actually do the stack update at all :).

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] reverting the HOT migration? // dealing with lockstep changes

2014-08-12 Thread Robert Collins
On 13 August 2014 11:05, Robert Collins robe...@robertcollins.net wrote:

 I've reproduced the problem with zane's fix for the validation error -
 and it does indeed still break:
 | stack_status_reason  | StackValidationFailed: Property error :
 NovaCompute6:
 |  | key_name Value must be a string


  

Filed https://bugs.launchpad.net/heat/+bug/1356097 to track this.

Since this makes it impossible to upgrade a pre-HOT-migration merged
stack, I'm going to push forward on toggling back to non-HOT, at least
until we can figure out whether this is a shallow or deep problem in
Heat. (Following our 'rollback then fix' stock approach to issues).

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-12 Thread Doug Hellmann
Eric,

If you can give us some more information about your end goal, independent of 
the implementation, maybe we can propose an alternate technique to achieve the 
same thing.

Doug

On Aug 12, 2014, at 6:21 PM, Ryan Petrello ryan.petre...@dreamhost.com wrote:

 Yep, you're right, this doesn't seem to work.  The issue is that security is
 enforced at routing time (while the controller is still actually being
 discovered).  In order to do this sort of thing with the `check_permissions`,
 we'd probably need to add a feature to pecan.
 
 On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
 Sure, here's the decorated method from v2.py:
 
class MetersController(rest.RestController):
"""Works on meters."""
 
@pecan.expose()
def _lookup(self, meter_name, *remainder):
return MeterController(meter_name), remainder
 
@wsme_pecan.wsexpose([Meter], [Query])
@secure(RBACController.check_permissions)
def get_all(self, q=None):
 
 and here's the decorator called by the secure tag:
 
class RBACController(object):
global _ENFORCER
if not _ENFORCER:
_ENFORCER = policy.Enforcer()
 
 
@classmethod
def check_permissions(cls):
# do some stuff
 
 In check_permissions I'd like to know the class and method with the @secure 
 tag that caused check_permissions to be invoked.  In this case, that would 
 be MetersController.get_all.
 
 Thanks
 
 
 Can you share some code?  What do you mean by, is there a way for the 
 decorator code to know it was called by MetersController.get_all
 
 On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
 Thanks Ryan, but for some reason the controller attribute is None:
 
 (Pdb) from pecan.core import state
 (Pdb) state.__dict__
 {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0,
 ceilometer.api.hooks.DBHook object at 0x3189650,
 ceilometer.api.hooks.PipelineHook object at 0x39871d0,
 ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app':
 pecan.core.Pecan object at 0x2e76390, 'request': Request at
 0x3ed7390 GET http://localhost:8777/v2/meters, 'controller': None,
 'response': Response at 0x3ed74d0 200 OK}
 
 -Original Message-
 From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
 Sent: Tuesday, August 12, 2014 10:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's 
 name/class using Pecan secure decorators?
 
 This should give you what you need:
 
 from pecan.core import state
 state.controller
 
 On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
 Hi, I'm trying to use the built-in secure decorator in Pecan for access
 control, and I'd like to get the name of the method that is wrapped
 from within the decorator.
 
 For instance, if I'm wrapping MetersController.get_all with an @secure 
 decorator, is there a way for the decorator code to know it was called 
 by MetersController.get_all?
 
 I don't see any global objects that provide this information.  I can get 
 the endpoint, v2/meters, with pecan.request.path, but that's not as 
 elegant.
 
 Is there a way to derive the caller or otherwise pass this information 
 to the decorator?
 
 Thanks
 Eric Pendergrass
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 --
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -- 
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-12 Thread Joe Gordon
On Tue, Aug 12, 2014 at 12:23 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Mon, 2014-08-11 at 15:25 -0700, Joe Gordon wrote:
 
 
 
  On Sun, Aug 10, 2014 at 11:59 PM, Mark McLoughlin mar...@redhat.com
  wrote:
  On Fri, 2014-08-08 at 09:06 -0400, Russell Bryant wrote:
   On 08/07/2014 08:06 PM, Michael Still wrote:
It seems to me that the tension here is that there are
  groups who
would really like to use features in newer libvirts that
  we don't CI
on in the gate. Is it naive to think that a possible
  solution here is
to do the following:
   
 - revert the libvirt version_cap flag
  
   I don't feel strongly either way on this.  It seemed useful
  at the time
   for being able to decouple upgrading libvirt and enabling
  features that
   come with that.
 
 
  Right, I suggested the flag as a more deliberate way of
  avoiding the
  issue that was previously seen in the gate with live
  snapshots. I still
  think it's a pretty elegant and useful little feature, and
  don't think
  we need to use it as proxy battle over testing requirements
  for new
  libvirt features.
 
 
  Mark,
 
 
  I am not sure if I follow.  The gate issue with live snapshots has
  been worked around by turning it off [0], so presumably this patch is
  forward facing.  I fail to see how this patch is needed to help the
  gate in the future.

 On the live snapshot issue specifically, we disabled it by requiring
 1.3.0 for the feature. With the version cap set to 1.2.2, we won't
 automatically enable this code path again if we update to 1.3.0. No
 question that's a bit of a mess, though.


Agreed



 The point was a more general one - we learned from the live snapshot
 issue that having a libvirt upgrade immediately enable new code paths
 was a bad idea. The patch is a simple, elegant way of avoiding that.

   Wouldn't it just delay the issues until we change the version_cap?

 Yes, that's the idea. Rather than having to scramble when the new
 devstack-gate image shows up, we'd be able to work on any issues in the
 context of a patch series to bump the version_cap.


So the version_cap flag can only possibly help with bugs in libvirt that are
triggered by new nova code paths, and not bugs that are triggered by
existing nova code paths that trigger a libvirt regression. Furthermore it
can only catch libvirt bugs that trigger frequently enough to be caught on
the patch to bump the version_cap, and we commonly have bugs that are 1 in
a 1000 these days. This sounds like a potential solution for a very
specific case when I would rather see a more general solution.
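For readers following the thread, the flag being debated is a nova.conf option along these lines (a sketch based only on the values mentioned above; the exact option name and default are those of the patch under review, not verified here):

```ini
# nova.conf -- sketch of the libvirt version cap discussed in this thread.
# 1.2.2 is the cap value mentioned above; with it set, code paths that
# require a newer libvirt (e.g. live snapshot, which needs 1.3.0) stay
# disabled even after the host libvirt is upgraded, until the cap is
# deliberately bumped in a reviewed patch.
[libvirt]
version_cap = 1.2.2
```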




  The issue I see with the libvirt version_cap [1] is best captured in
  its commit message: The end user can override the limit if they wish
  to opt-in to use of untested features via the 'version_cap' setting in
  the 'libvirt' group. This goes against the very direction nova has
  been moving in for some time now. We have been moving away from
  merging untested (re: no integration testing) features.  This patch
  changes the very direction the project is going in over testing
  without so much as a discussion. While I think it may be time that we
  revisited this discussion, the discussion needs to happen before any
  patches are merged.

 You put it well - some apparently see us moving towards a zero-tolerance
 policy of not having any code which isn't functionally tested in the
 gate. That obviously is not the case right now.

 The sentiment is great, but any zero-tolerance policy is dangerous. I'm
 very much in favor of discussing this further. We should have some
 principles and goals around this, but rather than argue this in the
 abstract we should be open to discussing the tradeoffs involved with
 individual patches.


Too bad the mid-cycle just passed; this would have been a great discussion
for it.



  I am less concerned about the contents of this patch, and more
  concerned with how such a big de facto change in nova policy (we
  sometimes accept untested code) was made without any discussion or consensus.
  In your comment on the revert [2], you say the 'whether not-CI-tested
  features should be allowed to be merged' debate is 'clearly
  unresolved.' How did you get to that conclusion? This was never
  brought up in the mid-cycles as an unresolved topic to be discussed. In
  our specs template we say 'Is this untestable in gate given current
  limitations (specific hardware / software configurations available)?
  If so, are there mitigation plans (3rd party testing, gate
  enhancements, etc)' [3]. We have been blocking untested features for
  some time now.

 Asking 'is this tested?' in a spec template makes a tonne of sense.
 Requiring some thought to be put into mitigation where a feature is
 untestable in the gate makes sense. Requiring that 

Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-12 Thread Asselin, Ramy
I forked jaypipes' repos and am working on extending them to support nodepool, 
log server, etc.
Still WIP but generally working.

If you need help, ping me on IRC #openstack-cinder (asselin)

Ramy

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Monday, August 11, 2014 11:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

On 12 August 2014 07:26, Amit Das amit@cloudbyte.com wrote:
I would like some guidance in this regards in form of some links, wiki pages 
etc.

I am currently gathering the driver cert test results (i.e. tempest tests from 
devstack) in our environment; CI setup would be my next step.

This should get you started:
http://ci.openstack.org/third_party.html

Then Jay Pipes' excellent two part series will help you with the details of 
getting it done:
http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/
http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Minesweeper behaving badly

2014-08-12 Thread Salvatore Orlando
Hi,

VMware minesweeper caused havoc today causing exhaustion of the upstream
node pool.
The account has been disabled so things are back to normal now.

The root cause of the issue was super easy to find once we realized we had missed [1].
I would like to apologise to the whole community on behalf of the VMware
minesweeper team.

The problem has now been fixed, and once the account is re-enabled,
rechecks should be issued with the command vmware-recheck.

Finally, I have noticed the old grammar is still being used by other 3rd
party CIs. I do not have a list of them, but if you run a 3rd party CI and
this is completely new to you, then you should probably review the syntax
for issuing recheck commands.

Regards,
Salvatore


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041238.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-12 Thread Osanai, Hisashi

On Tuesday, August 12, 2014 10:14 PM, Julien Danjou wrote:
 The py33 gate shouldn't be activated for the stable/icehouse. I'm no
 infra-config expert, but we should be able to patch it for that (hint?).

Thank you for the response. 

Now we have two choices:
(1) refrain from activating the py33 gate
(2) patch happybase

I prefer (1) first because (2) is only a problem if we activate the py33 gate
in stable/icehouse, and as you mentioned the py33 gate shouldn't be activated
there. However, there is still an entry for the py33 gate in tox.ini, so I
would like to remove it from stable/icehouse.

If that's OK, I'll file a bug report for tox.ini in stable/icehouse and commit 
a fix for it (then proceed with https://review.openstack.org/#/c/112806/).

What do you think?

- tox.ini (stable/icehouse)
  [tox]
  minversion = 1.6
  skipsdist = True
  envlist = py26,py27,py33,pep8
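If option (1) is taken, the change would amount to dropping py33 from the envlist in the stable/icehouse tox.ini, roughly:

```ini
# tox.ini (stable/icehouse) -- sketch of option (1): drop the py33
# environment so the py33 job is not run on this branch. Other settings
# are unchanged from the snippet above.
[tox]
minversion = 1.6
skipsdist = True
envlist = py26,py27,pep8
```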

Best Regards,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] introducing cyclops

2014-08-12 Thread Adam Lawson
I am also highly interested. A very large adoption inhibitor has been the
ability to control cloud consumption with charge-back and/or cost center
billing support. Would love to talk about this.


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



On Tue, Aug 12, 2014 at 8:48 AM, Stephane Albert sheepr...@nullplace.com
wrote:

 On Tue, Aug 12, 2014 at 05:47:49PM +1200, Fei Long Wang wrote:
  Our suggestion for the first IRC meeting is 25 August, 8-10 PM UTC, on
  Freenode's #openstack-rating channel.
 
  Thoughts? Please reply with the best date/time for you so we can figure
 out a
  time to start.
 

 I'd like to participate in this meeting, but one of my colleagues will
 not be available on the 25th. Maybe we can shift the date to the 26th?

 Thanks

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack Capacity Planning

2014-08-12 Thread Adam Lawson
Something was presented at a meeting recently which had me curious: what
sort of capacity planning tools/capabilities are being developed as an
Openstack program? It's another area where non-proprietary cloud control is
needed and would be another way to kick a peg away from the stool of cloud
resistance. Also, this ties quite nicely into Software Defined Datacenter
but appropriateness for the Openstack suite itself is another matter...

Has this been given much thought at this stage of the game? I'd be more
than happy to host a meeting to talk about it.

Mahalo,
Adam

*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Michael Still
Actually, thinking on this more -- the lack of consensus is on the
attempt to re-add the patch, so I guess we'd handle that just like we
do for a contentious patch now.

Michael

On Wed, Aug 13, 2014 at 7:03 AM, Michael Still mi...@stillhq.com wrote:
 This looks reasonable to me, with a slight concern that I don't know
 what step five looks like... What if we can never reach a consensus on
 an issue?

 Michael

 On Wed, Aug 13, 2014 at 12:56 AM, Mark McLoughlin mar...@redhat.com wrote:
 Hey

 (Terrible name for a policy, I know)

 From the version_cap saga here:

   https://review.openstack.org/110754

 I think we need a better understanding of how to approach situations
 like this.

 Here's my attempt at documenting what I think we're expecting the
 procedure to be:

   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Matt Riedemann



On 8/12/2014 4:03 PM, Michael Still wrote:

This looks reasonable to me, with a slight concern that I don't know
what step five looks like... What if we can never reach a consensus on
an issue?

Michael

On Wed, Aug 13, 2014 at 12:56 AM, Mark McLoughlin mar...@redhat.com wrote:

Hey

(Terrible name for a policy, I know)

 From the version_cap saga here:

   https://review.openstack.org/110754

I think we need a better understanding of how to approach situations
like this.

Here's my attempt at documenting what I think we're expecting the
procedure to be:

   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

If it sounds reasonably sane, I can propose its addition to the
Development policies doc.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Just thinking out loud, you could do something like a 2/3 majority vote 
on the issue but that sounds too much like government, which is 
generally terrible.


Otherwise maybe the PTL is the tie-breaker?

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Russell Bryant


 On Aug 12, 2014, at 5:10 PM, Michael Still mi...@stillhq.com wrote:
 
 This looks reasonable to me, with a slight concern that I don't know
 what step five looks like... What if we can never reach a consensus on
 an issue?

In an extreme case, the PTL has the authority to make the call.

In general I would like to think we can all just put on our big boy pants and 
talk through contentious issues to find a resolution that everyone can live 
with.

-- 
Russell Bryant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Michael Still
On Wed, Aug 13, 2014 at 11:36 AM, Russell Bryant rbry...@redhat.com wrote:
 On Aug 12, 2014, at 5:10 PM, Michael Still mi...@stillhq.com wrote:

 This looks reasonable to me, with a slight concern that I don't know
 what step five looks like... What if we can never reach a consensus on
 an issue?

 In an extreme case, the PTL has the authority to make the call.

 In general I would like to think we can all just put on our big boy pants and 
 talk through contentious issues to find a resolution that everyone can live 
 with.

That's what we've done for the few cases I can remember (think nova v3
API). It is expensive though in terms of time and emotional costs, but
I think its worth it to keep the community together. In general I
think a PTL fiat is something to be avoided if at all possible.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Michael Still
On Wed, Aug 13, 2014 at 4:26 AM, Eoghan Glynn egl...@redhat.com wrote:

 It seems like this is exactly what the slots give us, though. The core review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.

I think that's because we've focussed in this discussion on the slots
themselves, not the process of obtaining a slot.

The proposal as it stands now is that we would have a public list of
features that are ready to occupy a slot. That list would then be ranked
in order of priority to the project, and the next free slot goes to
the top item on the list. The ordering of the list is determined by
nova-core, based on their understanding of the importance of a given
thing, as well as what they are hearing from our users.

So -- there's totally scope for lobbying, or for a subset of core to
champion a feature to land, or for a company to explain why a given
feature is very important to them.

It sort of happens now -- there is a subset of core which cares more
about xen than libvirt for example. We're just being more open about
the process and setting expectations for our users. At the moment its
very confusing as a user, there are hundreds of proposed features for
Juno, nearly 100 of which have been accepted. However, we're kidding
ourselves if we think we can land 100 blueprints in a release cycle.

 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.

 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.

 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?

 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-12 Thread Hemanth Ravi
Anita,

Yes, I'm the contact for One Convergence CI.

Thanks,
-hemanth


On Tue, Aug 12, 2014 at 3:12 PM, Anita Kuno ante...@anteaya.info wrote:

 On 08/12/2014 03:23 PM, Hemanth Ravi wrote:
  Kyle,
 
  One Convergence third-party CI is failing due to
  https://bugs.launchpad.net/neutron/+bug/1353309.
 
  Let me know if we should turn off the CI logs until this is fixed or if
 we
  need to fix anything on the CI end. I think one other third-party CI
  (Mellanox) is failing due to the same issue.
 
  Regards,
  -hemanth
 Are you One Convergence CI, hemanth?

 Sorry I don't know who is admin'ing this account.

 Thanks,
 Anita.
 
 
  On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com
 wrote:
 
  On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com
  wrote:
  Kyle,
 
  One Convergence CI has been fixed (setup issue) and is running without
  the
  failures for ~10 days now. Updated the etherpad.
 
  Thanks for the update Hemanth, much appreciated!
 
  Kyle
 
  Thanks,
  -hemanth
 
 
  On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com
  wrote:
 
 
  On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery 
  mest...@noironetworks.com
  wrote:
 
  PLUMgrid
 
  Not saving enough logs
 
  All Jenkins slaves were just updated to upload all required logs.
  PLUMgrid
  CI should be good now.
 
 
  Thanks,
  Fawad Khaliq
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Hi Gary,
I understand your concern. I think CI is mandatory to ensure that code is not 
broken. While unit tests provide great value, we may still end up with code 
that does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver: 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile it is not voting, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Get Tenant Details in novaclient

2014-08-13 Thread Sachi Gupta
Hi,

nova --os-tenant-name admin list --tenant c40ad5830e194f2296ad11a96cefc487 
--all-tenants 1 - Works Fine and returns all the servers available where 
c40ad5830e194f2296ad11a96cefc487  is the id of the demo tenant whereas 
nova --os-tenant-name admin list --tenant demo --all-tenants 1 - Returns 
nothing when tenant-name demo is passed in place of its id.

For the above bug, we need to get the tenant details in novaclient based on 
the tenant name being passed to the nova API, so that the list of servers 
can be shown by either tenant_name or tenant_id.

Also, to interact between OpenStack components we can use REST calls.

Can anyone suggest how to get the keystone tenant-details in novaclient to 
make the above functionality work.

Thanks in advance
Sachi
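A minimal sketch of the name-to-ID resolution being asked for above. In a real client this lookup would go through python-keystoneclient with admin credentials (that call is an assumption, not shown); the tenant list below is stubbed so the resolution logic itself is runnable, and the names and IDs simply mirror the examples in the mail.

```python
# Sketch: resolve a tenant name OR ID to the tenant ID, so the value
# passed to `nova list --tenant ...` can be either form. The tenant
# data here is stubbed; a real implementation would fetch it from
# keystone (an assumption -- exact API call not shown).

def resolve_tenant_id(tenants, name_or_id):
    """Return the tenant ID, accepting either a tenant name or an ID."""
    for tenant in tenants:
        if name_or_id in (tenant["id"], tenant["name"]):
            return tenant["id"]
    raise ValueError("No tenant with name or id %r" % name_or_id)

# Stubbed tenant data; the demo ID matches the example in the mail above,
# the admin entry is illustrative.
tenants = [
    {"id": "c40ad5830e194f2296ad11a96cefc487", "name": "demo"},
    {"id": "0123456789abcdef0123456789abcdef", "name": "admin"},
]

# Both forms resolve to the same ID, so a wrapper around
# `nova list --tenant <value> --all-tenants 1` could accept either:
print(resolve_tenant_id(tenants, "demo"))
print(resolve_tenant_id(tenants, "c40ad5830e194f2296ad11a96cefc487"))
```

With a helper like this in front of the API call, the name form and the ID form of the command would return the same server list.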
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-13 Thread Irena Berezovsky
Hi,
Mellanox CI was also failing due to the same issue, 
https://bugs.launchpad.net/neutron/+bug/1355780 (apparently a duplicate of 
https://bugs.launchpad.net/neutron/+bug/1353309).
We have fixed the issue locally by patching the server-side RPC version 
support to 1.3.

BR,
Irena


From: Hemanth Ravi [mailto:hemanthrav...@gmail.com]
Sent: Wednesday, August 13, 2014 12:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] Update on third party CI 
in Neutron

Kyle,

One Convergence third-party CI is failing due to 
https://bugs.launchpad.net/neutron/+bug/1353309.

Let me know if we should turn off the CI logs until this is fixed or if we need 
to fix anything on the CI end. I think one other third-party CI (Mellanox) is 
failing due to the same issue.

Regards,
-hemanth

On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote:
On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:
 Kyle,

 One Convergence CI has been fixed (setup issue) and is running without the
 failures for ~10 days now. Updated the etherpad.

Thanks for the update Hemanth, much appreciated!

Kyle

 Thanks,
 -hemanth


 On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 PLUMgrid

 Not saving enough logs

 All Jenkins slaves were just updated to upload all required logs. PLUMgrid
 CI should be good now.


 Thanks,
 Fawad Khaliq


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get Tenant Details in novaclient

2014-08-13 Thread Chen CH Ji
This spec has some thoughts on functionality to validate the tenant or user
that is consumed by nova; not sure whether it's what you want, FYI:

https://review.openstack.org/#/c/92507/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Sachi Gupta sachi.gu...@tcs.com
To: openstack-dev@lists.openstack.org,
Date:   08/13/2014 01:58 PM
Subject:[openstack-dev] Get Tenant Details in novaclient



Hi,

nova --os-tenant-name admin list --tenant c40ad5830e194f2296ad11a96cefc487
--all-tenants 1 - Works Fine and returns all the servers available where
c40ad5830e194f2296ad11a96cefc487  is the id of the demo tenant whereas
nova --os-tenant-name admin list --tenant demo --all-tenants 1 - Returns
nothing when tenant-name demo is passed in place of its id.

For the above bug, need to get the tenant details in novaclient on the
basis of tenant-name being passed to nova api so that the list of servers
can be shown up by both tenant_name or tenant_id.

Also, to interact between OpenStack components we can use REST calls.

Can anyone suggest how to get the keystone tenant-details in novaclient to
make the above functionality work.

Thanks in advance
Sachi


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-13 Thread Baohua Yang
Like the policy-group naming.

The policy-target is better than policy-point, but it still feels a little
confusing, as "target" usually suggests what a policy is aimed at, not
what it is applied on.

Hence, policy-endpoint might be more exact.


On Fri, Aug 8, 2014 at 11:43 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/07/2014 01:17 PM, Ronak Shah wrote:

 Hi,
 Following a very interesting and vocal thread on GBP for last couple of
 days and the GBP meeting today, GBP sub-team proposes following name
 changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.


 Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the
 meeting (was in other meetings yesterday and today).

 I'm very happy with the change from endpoint-group - policy-group.

 policy-point is better than endpoint, for sure. The only other suggestion
 I might have would be to use policy-target instead of policy-point,
 since the former clearly delineates what the object is used for (a target
 for a policy).

 But... I won't raise a stink about this. Sorry for sparking long and
 tangential discussions on GBP topics earlier this week. And thanks to the
 folks who persevered and didn't take too much offense to my questioning.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Gary Kotton
Hi,
If I understand correctly, the only way that this works is with both nova and 
neutron running. My suggestion would be to have the CI running with this 
configuration. I just think that this should be a prerequisite, similar to 
the validation required for virtualization drivers.
Does that make sense?
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Wednesday, August 13, 2014 at 9:01 AM
To: Gary Kotton gkot...@vmware.com, OpenStack List 
openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
I understand your concern. I think CI is mandatory to ensure that code is not 
broken. While unit tests provide great value, we may still end up with code 
that does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver: 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile it is not voting, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Angus Lees
I'm doing various small cleanup changes as I explore the neutron codebase.  
Some of these cleanups are to fix actual bugs discovered in the code.  Almost 
all of them are tiny and obviously correct.

A recurring reviewer comment is that the change should have had an 
accompanying bug report and that they would rather that change was not 
submitted without one (or at least, they've -1'ed my change).

I often didn't discover these issues by encountering an actual production 
issue so I'm unsure what to include in the bug report other than basically a 
copy of the change description.  I also haven't worked out the pattern yet of 
which changes should have a bug and which don't need one.

There's a section describing blueprints in NeutronDevelopment but nothing on 
bugs.  It would be great if someone who understands the nuances here could add 
some words on when to file bugs:
Which type of changes should have accompanying bug reports?
What is the purpose of that bug, and what should it contain?

-- 
Thanks,
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Kevin Benton
I'm not sure what the guideline is, but I would like to point out a good
reason to have the bug report even for obvious fixes.
When users encounter bugs, they go to launchpad to report them. They don't
first scan the commits of the master branch to see what was fixed. Having
the bug in launchpad provides a way to track the status (fixed, backported,
impact, etc) of the bug and reduces the chances of duplicated bugs.

Can you provide an example of a patch that you felt was trivial that a
reviewer requested a bug for so we have something concrete to discuss and
establish guidelines around?
On Aug 13, 2014 12:32 AM, Angus Lees g...@inodes.org wrote:

 I'm doing various small cleanup changes as I explore the neutron codebase.
 Some of these cleanups are to fix actual bugs discovered in the code.
  Almost
 all of them are tiny and obviously correct.

 A recurring reviewer comment is that the change should have had an
 accompanying bug report and that they would rather that change was not
 submitted without one (or at least, they've -1'ed my change).

 I often didn't discover these issues by encountering an actual production
 issue so I'm unsure what to include in the bug report other than basically
 a
 copy of the change description.  I also haven't worked out the pattern yet
 of
 which changes should have a bug and which don't need one.

 There's a section describing blueprints in NeutronDevelopment but nothing
 on
 bugs.  It would be great if someone who understands the nuances here could
 add
 some words on when to file bugs:
 Which type of changes should have accompanying bug reports?
 What is the purpose of that bug, and what should it contain?

 --
 Thanks,
  - Gus

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Osanai, Hisashi wrote:

 On Tuesday, August 12, 2014 10:14 PM, Julien Danjou wrote:
 The py33 gate shouldn't be activated for the stable/icehouse. I'm no
 infra-config expert, but we should be able to patch it for that (hint?).

 Thank you for the response. 

 Now we have two choices:
 (1) defer activating the py33 gate
 (2) a patch to happybase

 I prefer (1) first, because (2) is only a problem if we activate the py33 
 gate in stable/icehouse together with Python 3.3. As you mentioned, the 
 py33 gate shouldn't be activated in stable/icehouse, but there is an entry 
 for the py33 gate in tox.ini, so I would like to remove it from 
 stable/icehouse.

 If that's OK, I'll file a bug report for tox.ini in stable/icehouse and 
 commit a fix for it. (then proceed with https://review.openstack.org/#/c/112806/)

This is not a problem in tox.ini, this is a problem in the
infrastructure config. Removing py33 from the envlist in tox.ini isn't
going to fix anything, unfortunately.
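[Editor's note: a fix of the kind Julien describes would live in the Zuul layout in the infrastructure config rather than in tox.ini. A hypothetical excerpt of what such a change might look like; the actual job name, file, and branch regex in the infra repos may differ:]

```yaml
# Hypothetical Zuul layout fragment: restrict the py33 job so it never
# runs on the icehouse stable branch. Job name and regex are illustrative.
jobs:
  - name: gate-ceilometer-python33
    branch: ^(?!stable/icehouse).*$
```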

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gantt project

2014-08-13 Thread Dugger, Donald D
Our initial goal is to just split the scheduler out into a separate project, 
not make it a part of Nova compute.  The functionality will be exactly the same 
as the Nova scheduler (the vast majority of the code will be a copy of the Nova 
scheduler code modulo some path name changes).  When the split is complete and 
we've thoroughly tested it to show the same functionality with Gantt we can 
make Gantt the default Nova scheduler, target all new scheduler work into Gantt 
and deprecate use of the Nova scheduler.  Hopefully in the L or M time frame we 
would excise the scheduler code out of Nova.

I would certainly not advocate forced usage of Gantt by fiat for other 
projects.  Instead we should evaluate the scheduling requirements needed by 
other projects, see if they can be handled by a common scheduler and, if so, 
enhance Gantt appropriately so that other projects can use it.  (Hopefully if 
we build Gantt they will come :-)  This should be no worse than the current 
situation where projects are forced to create their own scheduler; projects 
will have the option to utilize Gantt and not waste effort duplicating a 
scheduler function.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Tuesday, August 12, 2014 9:24 AM
To: Dugger, Donald D
Cc: OpenStack Development Mailing List (not for usage questions); Michael 
Still; Mark Washenberger; Dolph Mathews; Lyle, David; Kyle Mestery; John 
Griffith; Eoghan Glynn; Zane Bitter; Nikhil Manchanda; Devananda van der Veen; 
Doug Hellmann; James E. Blair; Anne Gentle; Matthew Treinish; Robert Collins; 
Dean Troyer; Thierry Carrez; Kurt Griffiths; Sergey Lukjanov; Jarret Raim
Subject: Re: Gantt project

Thanks for the info. It does seem like most OpenStack projects have some 
concept of a scheduler, as you mentioned. Perhaps that's expected in any 
distributed system.

Is it expected or assumed that Gantt will become the common scheduler for all 
OpenStack projects? That is, is Gantt's plan and/or design goals to provide 
scheduling (or a scheduling framework) for all OpenStack projects? Perhaps 
this is a question for the TC rather than Don. [1]

Since Gantt is initially intended to be used by Nova, will it be under the 
compute program or will there be a new program created for it?


--John


[1] You'll forgive me, but I've certainly seen OpenStack projects move from 
"you can use it if you want" to "you must start using this" in the past.




On Aug 11, 2014, at 11:09 PM, Dugger, Donald D donald.d.dug...@intel.com 
wrote:

 This is to make sure that everyone knows about the Gantt project and to make 
 sure that no one has a strong aversion to what we are doing.
  
 The basic goal is to split the scheduler out of Nova and create a separate 
 project that, ultimately, can be used by other OpenStack projects that have a 
 need for scheduling services.  Note that we have no intention of forcing 
 people to use Gantt but it seems silly to have a scheduler inside Nova, 
 another scheduler inside Cinder, another scheduler inside Neutron and so 
 forth.  This is clearly predicated on the idea that we can create a common, 
 flexible scheduler that can meet everyone's needs but, as I said, there is 
 no rule that any project has to use Gantt; if we don't meet your needs you 
 are free to roll your own scheduler.
  
 We will start out by just splitting the scheduler code out of Nova into a 
 separate project that will initially only be used by Nova.  This will be 
 followed by enhancements, like a common API, that can then be utilized by 
 other projects.
  
 We are cleaning up the internal interfaces in the Juno release with the 
 expectation that early in the Kilo cycle we will be able to do the split and 
 create a Gantt project that is completely compatible with the current Nova 
 scheduler.
  
 Hopefully our initial goal (a separate project that is completely compatible 
 with the Nova scheduler) is not too controversial but feel free to reply with 
 any concerns you may have.
  
 --
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Generally, I agree with you. But it's a little tricky.
There are different types of SR-IOV NICs and what will work for some vendor may 
be broken for another.
I think that both current SR-IOV networking flavors: Embedded switching (Intel, 
Mellanox) and Cisco VM-FEX should be verified for relevant nova patches.
What tests do you think should run on the nova side?

Thanks,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 13, 2014 10:10 AM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Hi,
If I understand correctly, the only way that this works is with both nova and 
neutron running. My understanding would be to have the CI running with this as 
the configuration. I just think that this should be a prerequisite, similar to 
having validations of virtualization drivers.
Does that make sense?
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Wednesday, August 13, 2014 at 9:01 AM
To: Gary Kotton gkot...@vmware.com, OpenStack List openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
I understand your concern. I think CI is mandatory to ensure that code is not 
broken. While unit tests provide great value, we may still end up with code 
that does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena
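[Editor's note: one common way to realize what Irena suggests (triggering a third-party job only on PCI-related nova changes) is a file matcher on the job definition. A hypothetical Zuul-style sketch; the job name and path regexes are illustrative, not the actual Mellanox CI configuration:]

```yaml
# Hypothetical: run the SR-IOV CI job only when PCI-related nova files change.
jobs:
  - name: check-mlnx-nova-pci
    files:
      - '^nova/pci/.*$'
      - '^nova/virt/libvirt/.*$'
```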

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver, 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile not voting, but will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there 
any plans regarding PCI support? I understand that this is something that 
requires specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.

That's fair.
 
 The proposal as it stands now is that we would have a public list of
 features that are ready to occupy a slot. That list would then be ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.

Yeah, that's pretty much what I mean by the championing being subsumed
under the group will.

What's lost is not so much the ability to champion something, as the
freedom to do so in an independent/emergent way.

(Note that this is explicitly not verging into the retrospective veto
policy discussion on another thread[1], I'm totally assuming good faith
and good intent on the part of such champions)
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment it's
 very confusing as a user, there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

Yeah, so I guess it would be worth drilling down into that user
confusion.

Are users confused because they don't understand the current nature
of the group dynamic, the unseen hand that causes some blueprints to
prosper while others fester seemingly unnoticed?

(for example, in the sense of not appreciating the emergent championing
done by say the core subset interested in libvirt)

Or are they confused in that they read some implicit contract or
commitment into the targeting of those 100 blueprints to a release
cycle?

(in sense of expecting that the core team will land all/most of those
100 target'd BPs within the cycle)

Cheers,
Eoghan 

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html

  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Nikola Đipanov
On 08/13/2014 04:05 AM, Michael Still wrote:
 On Wed, Aug 13, 2014 at 4:26 AM, Eoghan Glynn egl...@redhat.com wrote:

 It seems like this is exactly what the slots give us, though. The core 
 review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.
 
 The proposal as it stands now is that we would have a public list of
 features that are ready to occupy a slot. That list would then be ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment it's
 very confusing as a user, there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.
 

While I agree with motivation for this - setting the expectations, I
fail to see how this is different to what the Swift guys seem to be
doing apart from more red tape.

I would love for us to say: If you want your feature in - you need to
convince us that it's awesome and that we need to listen to you, by
being active in the community (not only by means of writing code of
course).

I fear that slots will have us saying: Here's another check-box for you
to tick, and the code goes in, which in addition to not communicating
that we are ultimately the ones who chose what goes in, regardless of
slots, also shifts the conversation away from what is really important,
and that is the relative merit of the feature itself.

But it obviously depends on the implementation.

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-13 Thread David Pineau
Hello,

I have currently set up the Scality CI not to report (mostly because it
isn't fully functional yet, as the machine it runs on turns out to be
undersized and thus the tests fail on some timeout), partly because
it's currently a nightly build. I have no way of testing multiple
patchsets at the same time, so it is easier this way.

How do you plan to make the different 3rd-party CIs official? I
remember that the Cinder meeting about that at the Atlanta Summit
concluded that a nightly build would be enough, but such a build cannot
really report on Gerrit.

David Pineau
gerrit: Joachim
IRC#freenode: joa

2014-08-13 2:28 GMT+02:00 Asselin, Ramy ramy.asse...@hp.com:
 I forked jaypipes' repos and am working on extending them to support nodepool,
 log server, etc.

 Still WIP but generally working.



 If you need help, ping me on IRC #openstack-cinder (asselin)



 Ramy



 From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 Sent: Monday, August 11, 2014 11:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems



 On 12 August 2014 07:26, Amit Das amit@cloudbyte.com wrote:

 I would like some guidance in this regards in form of some links, wiki pages
 etc.



 I am currently gathering the driver cert test results, i.e. tempest tests
 from devstack in our environment, and CI setup would be my next step.



 This should get you started:

 http://ci.openstack.org/third_party.html



 Then Jay Pipes' excellent two part series will help you with the details of
 getting it done:

 http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

 http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David Pineau,
Developer RD at Scality

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-13 Thread Daniel P. Berrange
On Tue, Aug 12, 2014 at 10:09:52PM +0100, Mark McLoughlin wrote:
 On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
  On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
   On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
While forcing people to move to a newer version of libvirt is
doable on most environments, do we want to do that now? What is
the benefit of doing so?
   [...]
   
   The only dog I have in this fight is that using the split-out
   libvirt-python on PyPI means we finally get to run Nova unit tests
   in virtualenvs which aren't built with system-site-packages enabled.
   It's been a long-running headache which I'd like to see eradicated
   everywhere we can. I understand though if we have to go about it
   more slowly, I'm just excited to see it finally within our grasp.
   -- 
   Jeremy Stanley
  
  We aren't quite forcing people to move to newer versions. Only those
  installing nova test-requirements need newer libvirt.
 
 Yeah, I'm a bit confused about the problem here. Is it that people want
 to satisfy test-requirements through packages rather than using a
 virtualenv?
 
 (i.e. if people just use virtualenvs for unit tests, there's no problem
 right?)
 
 If so, is it possible/easy to create new, alternate packages of the
 libvirt python bindings (from PyPI) on their own separately from the
 libvirt.so and libvirtd packages?

The libvirt python API is (mostly) automatically generated from an XML
description that is built from the C source files. In tree we have
fakelibvirt, which is a semi-crappy attempt to provide a pure python
libvirt client API with the same signature. IIUC, what you are saying is
that we should get a better fakelibvirt that is truly identical, with the
same API coverage/signatures as real libvirt?
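[Editor's note: as a rough illustration of what fakelibvirt does — this is a toy sketch, not the actual in-tree module, and it covers only a tiny, hand-picked slice of the libvirt-python surface:]

```python
# Toy fakelibvirt: a pure-Python stand-in that mirrors a small slice of the
# real libvirt-python API so unit tests can run without libvirtd installed.
# Method names follow the real bindings; coverage is deliberately minimal.
VIR_DOMAIN_RUNNING = 1

class FakeDomain:
    def __init__(self, name):
        self._name = name

    def name(self):
        return self._name

    def info(self):
        # (state, maxMem, memory, nrVirtCpu, cpuTime), as in virDomainGetInfo
        return [VIR_DOMAIN_RUNNING, 2048, 2048, 1, 0]

class FakeConnection:
    def __init__(self):
        self._domains = {}

    def defineXML(self, xml):
        # Real libvirt parses the full XML; the fake just records a stub.
        name = xml.split("<name>")[1].split("</name>")[0]
        dom = FakeDomain(name)
        self._domains[name] = dom
        return dom

    def lookupByName(self, name):
        return self._domains[name]

def openAuth(uri, auth, flags):
    return FakeConnection()

conn = openAuth("qemu:///system", None, 0)
conn.defineXML("<domain><name>instance-1</name></domain>")
print(conn.lookupByName("instance-1").info()[0])  # -> 1
```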


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Using any username/password to create tempest clients

2014-08-13 Thread Udi Kalifon
Hello.

I am writing a tempest scenario for keystone. In this scenario I create a 
domain, project and a user with admin rights on the project. I then try to 
instantiate a Manager so I can call keystone using the new user credentials:

creds = KeystoneV3Credentials(username=dom1proj1admin_name,
                              password=dom1proj1admin_name,
                              domain_name=dom1_name,
                              user_domain_name=dom1_name)
auth_provider = KeystoneV3AuthProvider(creds)
creds = auth_provider.fill_credentials()
admin_client = clients.Manager(interface=self._interface, credentials=creds)

The problem is that I get unauthorized return codes for every call I make 
with this client. I verified that the user is created properly and has the 
needed credentials, by manually authenticating and getting a token with his 
credentials and then using that token. Apparently, in my code I don't create 
the creds properly or I'm missing another step. How can I use the new user in 
tempest properly?

Thanks in advance,
Udi Kalifon.
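[Editor's note: a common cause of this symptom is an unscoped token — if the credentials carry no project, Keystone issues a token that most APIs reject as unauthorized. The sketch below shows the shape of a project-scoped Keystone v3 auth request; the names are illustrative, and this is only a guess at the missing step, not a confirmed tempest fix:]

```python
# Hedged sketch: the Keystone v3 auth body a scoped client ends up sending.
# Without the "scope" section the resulting token is unscoped, which most
# service APIs reject. All names below are illustrative.
def build_v3_auth_body(username, password, user_domain, project, project_domain):
    """Build a Keystone v3 auth request body for a project-scoped token."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            },
            # This block is what makes the token project-scoped.
            "scope": {
                "project": {"name": project, "domain": {"name": project_domain}}
            },
        }
    }

body = build_v3_auth_body("dom1proj1admin", "secret", "dom1", "proj1", "dom1")
print(body["auth"]["scope"]["project"]["name"])  # -> proj1
```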

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Nikola Đipanov wrote:
 While I agree with motivation for this - setting the expectations, I
 fail to see how this is different to what the Swift guys seem to be
 doing apart from more red tape.

It's not different imho. It's just that nova has significantly more
features being thrown at it, so the job of selecting priority features
is significantly harder, and the backlog is a lot bigger. The slot
system allows us to visualize that backlog.

Currently we target all features to juno-3, everyone expects their stuff
to get review attention, nothing gets merged until the end of the
milestone period, and in the end we merge almost nothing. The blueprint
priorities don't cut it; what you want is a ranked list, so you can see
how likely you are to be considered for a release. Communicate earlier
that the feature will actually be a Kilo feature. Set downstream
expectations right. Merge earlier.
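[Editor's note: mechanically, the ranked list feeding slots described here is just a priority queue. A toy sketch with illustrative feature names:]

```python
# Toy model of the slot/runway mechanics: a ranked backlog feeding a fixed
# number of review slots. Purely illustrative of the process being discussed.
from collections import deque

def allocate_slots(ranked_backlog, num_slots):
    """Fill free slots from the top of a priority-ranked backlog."""
    backlog = deque(ranked_backlog)          # highest priority first
    slots = []
    while backlog and len(slots) < num_slots:
        slots.append(backlog.popleft())      # next free slot -> top item
    return slots, list(backlog)

occupied, waiting = allocate_slots(
    ["cells-v2", "scheduler-split", "v2.1-api", "sriov-nic"], 2)
print(occupied)  # -> ['cells-v2', 'scheduler-split']
```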

That ties into the discussions we are having for StoryBoard to support
task lists[1], which are arbitrary ranked lists of tasks. Those are much
more flexible than mono-dimensional priorities that fail to express the
complexity of priority in a complex ecosystem like OpenStack development.

[1] https://wiki.openstack.org/wiki/StoryBoard/Task_Lists

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Rochelle.RochelleGrober wrote:
 [...]
 So, with all that prologue, here is what I propose (and please consider 
 proposing your improvements/changes to it).  I would like to see for Kilo:
 
 - IRC meetings and mailing list meetings beginning with Juno release and 
 continuing through the summit that focus on core project needs (what Thierry 
 call strategic) that as a set would be considered the primary focus of the 
 Kilo release for each project.  This could include high priority bugs, 
 refactoring projects, small improvement projects, high interest extensions 
 and new features, specs that didn't make it into Juno, etc.
 - Develop the list and prioritize it into Needs and Wants. Consider these 
 the feeder projects for the two runways if you like.  
 - Discuss the lists.  Maybe have a community vote? The vote will freeze the 
 list, but as in most development project freezes, it can be a soft freeze 
 that the core, or drivers or TC can amend (or throw out for that matter).
 [...]

One thing we've been unable to do so far is to set release goals at
the beginning of a release cycle and stick to those. It used to be
because we were so fast moving that new awesome stuff was proposed
mid-cycle and ended up being a key feature (sometimes THE key feature)
for the project. Now it's because there is so much proposed that no one
knows what will actually get completed.

So while I agree that what you propose is the ultimate solution (and the
workflow I've pushed PTLs to follow every single OpenStack release so
far), we have struggled to have the visibility, long-term thinking and
discipline to stick to it in the past. If you look at the post-summit
plans and compare to what we end up in a release, you'll see quite a lot
of differences :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there is increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. Its the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like core's to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.

Personally I'm going to find it really hard to justify long distance
travel 4 times a year for OpenStack for personal / family reasons,
let alone company cost. I couldn't attend Icehouse mid-cycle because
I just had too much travel in a short time to be able to do another
week long trip away from family. I couldn't attend Juno mid-cycle
because it clashed we personal holiday. There are other opensource
related conferences that I also have to attend (LinuxCon, FOSDEM,
KVM Forum, etc), etc so doubling the expected number of openstack
conferences from 2 to 4 is really very undesirable from my POV.
I might be able to attend the occassional mid-cycle meetup if the
location was convenient, but in general I don't see myself being
able to attend them regularly.

I tend to view the fact that we're emphasising the need for in-person
meetups as somewhat of an indication of failure in how our community
operates. The majority of open source projects work very effectively
with far less face-to-face time. OpenStack is fortunate that companies
are currently willing to spend 6/7-figure sums flying 1000's of
developers around the world many times a year, but I don't see that
lasting forever so I'm concerned about baking the idea of f2f midcycle
meetups into our way of life even more strongly.

 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.

Travel funding is certainly an issue, but I'm not sure that Foundation
funding would be a solution, because the impact probably isn't directly
on the core devs. Speaking with my Red Hat hat on, if the midcycle meetup
is important enough, the core devs will likely get the funding to attend.
The fallout of this though is that every attendee at a mid-cycle summit
means fewer attendees at the next design summit. So the impact of having
more core devs at mid-cycle is that we'll get fewer non-core devs at
the design summit. This sucks big time for the non-core devs who want
to engage with our community.

Also having each team do a f2f mid-cycle meetup at a different location
makes it even harder for people who have a genuine desire / need to take
part in multiple teams. Going to multiple mid-cycle meetups is even more
difficult to justify so they're having to make difficult decisions about
which to go to :-(

I'm also not a fan of mid-cycle meetups because I feel it further
stratifies our contributors into two increasingly distinct camps - core
vs non-core.

I can see that a big benefit of a mid-cycle meetup is to be a focal
point for collaboration, to forcibly break contributors out of their
day-to-day work pattern to concentrate on discussing specific issues.
It also obviously solves the distinct timezone problem we have with
our dispersed contributor base. I think that we should be examining
what we can achieve with some kind of virtual online mid-cycle meetups
instead. Using technology like google hangouts or some similar live
collaboration technology, not merely an IRC discussion. Pick a 2-3
day period, schedule formal agendas / talking slots as you would with
a physical summit and so on. I feel this would be more inclusive of
our community as a whole and avoid excessive travel costs, allowing
more of our community to attend the bigger design summits. It would
even open up the possibility of having multiple meetups during a cycle
(e.g. we could arrange mini virtual events around each milestone if we
wanted).

Regards,
Daniel
-- 
|: http://berrange.com  -o-

Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Ihar Hrachyshka

On 13/08/14 09:28, Angus Lees wrote:
 I'm doing various small cleanup changes as I explore the neutron
 codebase. Some of these cleanups are to fix actual bugs discovered
 in the code.  Almost all of them are tiny and obviously correct.
 
 A recurring reviewer comment is that the change should have had an
  accompanying bug report and that they would rather that change was
 not submitted without one (or at least, they've -1'ed my change).
 
 I often didn't discover these issues by encountering an actual
 production issue so I'm unsure what to include in the bug report
 other than basically a copy of the change description.  I also
 haven't worked out the pattern yet of which changes should have a
 bug and which don't need one.
 
 There's a section describing blueprints in NeutronDevelopment but
 nothing on bugs.  It would be great if someone who understands the
 nuances here could add some words on when to file bugs: Which type
 of changes should have accompanying bug reports? What is the
 purpose of that bug, and what should it contain?
 

It was discussed before at:
http://lists.openstack.org/pipermail/openstack-dev/2014-May/035789.html

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 5.0.2

2014-08-13 Thread Mike Scherbakov
Hi Fuelers,
I'd like to clarify 5.0.2 state. This is not planned to be an official ISO
with 5.0.2; rather, it's going to be a set of packages and manifests
representing fixes for bugs reported against the 5.0.2 milestone in LP [1].

5.0.2 is going to be cut in stable/5.0 at the same time as 5.1 is produced
and tagged, and upgrade tarball is created (with 5.0.2 packages). 5.0.2
will follow maintenance release of 5.0.1. So in fact, for now all the
changes which are merged into stable/5.0 will be in 5.0.1. Currently, we
run acceptance testing against RC for 5.0.1. If it succeeds without
critical bugs, it's going to be released on this Thursday, 14th of August.
Right after that, all changes merged to stable/5.0 will become a part of
5.0.2.

All, please don't forget about 5.0.2. For all High/Critical issues we face
in 5.1, we need to consider whether we want to see a fix in 5.0.2. So
please do not forget to target those to the 5.0.2 milestone, to propose
the corresponding commits to the stable/5.0 branch, and to help out with
reviewing and merging them (if you have rights).

[1] https://launchpad.net/fuel/+milestone/5.0.2

Thanks,
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] Openstack Capacity Planning

2014-08-13 Thread Sylvain Bauza


Le 13/08/2014 03:48, Fei Long Wang a écrit :

Hi Adam,

Please refer to https://wiki.openstack.org/wiki/Blazar. Hope it's 
helpful. Cheers.


On 13/08/14 12:54, Adam Lawson wrote:
Something was presented at a meeting recently which had me curious: 
what sort of capacity planning tools/capabilities are being developed 
as an OpenStack program? It's another area where non-proprietary 
cloud control is needed and would be another way to kick a peg away 
from the stool of cloud resistance. Also, this ties quite nicely into 
Software Defined Datacenter, but appropriateness for the OpenStack 
suite itself is another matter...


Has this been given much thought at this stage of the game? I'd be 
more than happy to host a meeting to talk about it.


Mahalo,
Adam



Hi Adam,
As a Blazar developer, what do you want to know about capacity planning? 
This topic is pretty broad, so more details are welcome :-)


Thanks,
-Sylvain


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072





--
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email:flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--






Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Osanai, Hisashi

On Wednesday, August 13, 2014 5:03 PM, Julien Danjou wrote:
 This is not a problem in tox.ini, this is a problem in the
 infrastructure config. Removing py33 from the envlist in tox.ini isn't
 going to fix anything unfortunately.

Thank you for your quick response.

I may misunderstand this topic. Let me clarify ...
My understanding is:
- the py33 job failed because happybase 0.8 cannot work in a python33 env
  (its execfile() calls don't work on Python 3)
- happybase is NOT an OpenStack component.
- the py33 job doesn't need to run on stable/icehouse

One idea to solve this problem is:
if the py33 job doesn't need to run on stable/icehouse, just eliminate it.

 This is not a problem in tox.ini, 
Does this mean the py33 job needs to run on stable/icehouse? Here I must be 
misunderstanding something...

 this is a problem in the infrastructure config.
This means the execfile() calls in happybase are a problem on python33. If my 
understanding is correct, I agree with you and I think this is the direct 
cause of the problem.
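For readers unfamiliar with the underlying incompatibility: execfile() was removed in Python 3, so any setup.py that calls it fails under a python33 interpreter. A minimal sketch of the portable idiom (the file name and contents below are stand-ins, not happybase's actual code):

```python
import os
import tempfile

# execfile() existed only in Python 2; calling it under Python 3 raises
# NameError, which is why happybase 0.8's setup.py breaks the py33 job.
# The portable replacement reads the file and passes its text to exec():
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("__version__ = '0.8'\n")  # stand-in for the file setup.py loads
    path = f.name

namespace = {}
with open(path) as fh:
    exec(fh.read(), namespace)  # works on both Python 2 and Python 3
os.unlink(path)
print(namespace["__version__"])  # -> 0.8
```

This is the usual fix projects apply to their setup.py when adding Python 3 support.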

Your idea to solve this is creating a patch for the direct cause, right?

Thanks in advance,
Hisashi Osanai



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
   I really like this idea, as Michael and others alluded to in above, we
  are
   attempting to set cycle goals for Kilo in Nova. but I think it is worth
   doing for all of OpenStack. We would like to make a list of key goals
  before
   the summit so that we can plan our summit sessions around the goals. On a
   really high level one way to look at this is, in Kilo we need to pay down
   our technical debt.
  
   The slots/runway idea is somewhat separate from defining key cycle
  goals; we
   can be approve blueprints based on key cycle goals without doing slots.
   But
   with so many concurrent blueprints up for review at any given time, the
   review teams are doing a lot of multitasking and humans are not very
  good at
   multitasking. Hopefully slots can help address this issue, and hopefully
   allow us to actually merge more blueprints in a given cycle.
  
  I'm not 100% sold on what the slots idea buys us. What I've seen this
  cycle in Neutron is that we have a LOT of BPs proposed. We approve
  them after review. And then we hit one of two issues: Slow review
  cycles, and slow code turnaround issues. I don't think slots would
  help this, and in fact may cause more issues. If we approve a BP and
  give it a slot for which the eventual result is slow review and/or
  code review turnaround, we're right back where we started. Even worse,
  we may have not picked a BP for which the code submitter would have
  turned around reviews faster. So we've now doubly hurt ourselves. I
  have no idea how to solve this issue, but by over subscribing the
  slots (e.g. over approving), we allow for the submissions with faster
  turnaround a chance to merge quicker. With slots, we've removed this
  capability by limiting what is even allowed to be considered for
  review.
 
 
 Slow review: by limiting the number of blueprints up we hope to focus our
 efforts on fewer concurrent things
 Slow code turnaround: when a blueprint is given a slot (runway) we will
 first make sure the author/owner is available for fast code turnaround.
 
 If a blueprint review stalls out (slow code turnaround, stalemate in review
 discussions etc.) we will take the slot and give it to another blueprint.

This idea of fixed slots is not really very appealing to me. It sounds
like we're adding a significant amount of bureaucratic overhead to our
development process that is going to make us increasingly inefficient.
I don't want to waste time waiting for a stalled blueprint to time out
before we give the slot to another blueprint. On any given day when I
have spare review time available I'll just review anything that is up
and waiting for review. If we can set a priority for the things up for
review that is great since I can look at those first, but the idea of
having fixed slots for things we should review does not do anything to
help my review efficiency IMHO.

I also think it will kill our flexibility in approving & dealing with
changes that are not strategically important, but nonetheless go
through our blueprint/specs process. There have been a bunch of things
I've dealt with that are not strategic, but have low overhead to code
and review and are easily dealt with in the slack time between looking at
the high-priority reviews. It sounds like we're going to lose our
flexibility to pull in stuff like this if it only gets a chance when
strategically important stuff is not occupying a slot.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-13 Thread Dave Tucker
I've been working on this for OpenDaylight
https://github.com/dave-tucker/odl-neutron-drivers

This seems to work for me (tested Devstack w/ML2) but YMMV.

-- Dave



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Giulio Fidente

On 08/07/2014 12:56 PM, Jay Pipes wrote:

On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:

On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez
thie...@openstack.org wrote:


We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay afloat of what's
happening. For such projects, is it time for a pause ? Is it time to
define key cycle goals and defer everything else ?


[. . .]


We also talked about tweaking the ratio of 'tech debt' runways vs
'feature' runways. So, perhaps every second release is focussed on
burning down tech debt and stability, whilst the others are focussed
on adding features.



I would suggest if we do such a thing, Kilo should be a 'stability'
release.


Excellent suggestion. I've wondered multiple times whether we could
dedicate a good chunk (or the whole) of a specific release to heads-down
bug fixing/stabilization. As it has been stated elsewhere on this list:
there's no pressing need for a whole lot of new code submissions; rather,
we should focus on fixing issues that affect _existing_ users/operators.


There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to
differ on that viewpoint. :)

That said, I entirely agree with you and wish efforts to stabilize would
take precedence over feature work.


I'm of this same opinion: I think a periodic, concerted effort to 
stabilize the existing features (which shouldn't be about bug fixing 
only) would be helpful to work on some of the issues mentioned.


I'm thinking of qa, infra, the tactical contributions, the code clean-up 
and more in general the reviews backlog as some of these.


And I also think it would be useful to figure out what the *strategic* 
features needed are, as it would provide some time to gather feedback 
from the field.


--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Dina Belova
Hisashi Osanai, I have a really strange feeling about this issue.
Does it happen only with the py33 job for the icehouse branch? Because
happybase is the same for the master-branch Jenkins jobs, so it looks like
the execfile issue should appear in master runs as well... Do I understand
everything right?

As I understand Julien, he proposes to run this job only for master (as it
works for now, magically, for master checks) and skip it for everything
earlier - mostly because it won't work for stable branches anyway, as
the fixed ceilometer code isn't there.

Thanks,
Dina


On Wed, Aug 13, 2014 at 2:11 PM, Osanai, Hisashi 
osanai.hisa...@jp.fujitsu.com wrote:


 On Wednesday, August 13, 2014 5:03 PM, Julien Danjou wrote:
  This is not a problem in tox.ini, this is a problem in the
  infrastructure config. Removing py33 from the envlist in tox.ini isn't
  going to fix anything unforunately.

 Thank you for your quick response.

 I may misunderstand this topic. Let me clarify ...
 My understanding is:
 - the py33 failed because there is a problem that the happybase-0.8 cannot
   work with python33 env. (execfile function calls on python33 doesn't
 work)
 - the happybase is NOT an OpenStack component.
 - the py33 doesn't need to execute on stable/icehouse

 One idea to solve this problem is:
 If the py33 doesn't need to execute on stable/icehouse, just eliminate the
 py33.

  This is not a problem in tox.ini,
 Means the py33 needs to execute on stable/icehouse. Here I misunderstand
 something...

  this is a problem in the infrastructure config.
 Means execfile function calls on python33 in happybase is a problem. If my
 understanding
 is correct, I agree with you and I think this is the direct cause of this
 problem.

 Your idea to solve this is creating a patch for the direct cause, right?

 Thanks in advance,
 Hisashi Osanai





-- 
Best regards,
Dina Belova
Software Engineer
Mirantis Inc.


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-13 Thread Sylvain Bauza


Le 12/08/2014 22:06, Sylvain Bauza a écrit :


Le 12/08/2014 18:54, Nikola Đipanov a écrit :

On 08/12/2014 04:49 PM, Sylvain Bauza wrote:

(sorry for reposting, missed 2 links...)

Hi Nikola,

Le 12/08/2014 12:21, Nikola Đipanov a écrit :

Hey Nova-istas,

While I was hacking on [1] I was considering how to approach the fact
that we now need to track one more thing (NUMA node utilization) in our
resources. I went with - I'll add it to the compute nodes table - thinking
it's a fundamental enough property of a compute host that it deserves to
be there, although I was considering the Extensible Resource Tracker at one
point (ERT from now on - see [2]), but looking at the code it did not
seem to provide anything I desperately needed, so I went with keeping it
simple.

So fast-forward a few days, and I caught myself solving a problem that I
kept thinking ERT should have solved - but apparently hasn't, and I
think it is fundamentally a broken design without it - so I'd really
like to see it re-visited.

The problem can be described by the following lemma (if you take 'lemma'
to mean 'a sentence I came up with just now' :)):


Due to the way scheduling works in Nova (roughly: pick a host based on
stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_
information that the scheduling service used when making a placement
decision needs to be available to the compute service when testing the
placement.


This is not the case right now, and the ERT does not propose any way to
solve it - (see how I hacked around needing to be able to get
extra_specs when making claims in [3], without hammering the DB). The
result will be that any resource that we add that needs user-supplied
info for scheduling an instance against it will need a buggy
re-implementation of gathering all the bits from the request that the
scheduler sees, to be able to work properly.
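To make the lemma concrete, here is a toy sketch (all names are hypothetical, not Nova's actual interfaces) of the pick-on-stale-data / claim-on-host pattern described above, showing how a claim can fail when the compute host's live view has diverged from the scheduler's snapshot:

```python
def schedule(stale_view, request):
    # Scheduler side: pick any host whose *last known* free capacity fits.
    for host, free in sorted(stale_view.items()):
        if free >= request:
            return host
    return None

def claim(live_view, host, request):
    # Compute side: re-test against the host's *current* free capacity.
    if live_view[host] >= request:
        live_view[host] -= request
        return True
    return False  # failed claim -> scheduler must pick another host

stale = {"host-a": 4, "host-b": 1}   # scheduler's (stale) snapshot
live = {"host-a": 1, "host-b": 1}    # host-a filled up since the snapshot
host = schedule(stale, request=2)    # picks host-a from stale data
print(host, claim(live, host, request=2))  # -> host-a False (re-schedule needed)
```

The point of the lemma is that whatever inputs schedule() consulted must also be available to claim() - which is exactly what the current ERT does not guarantee.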

Well, ERT does provide a plugin mechanism for testing resources at the
claim level. It is the plugin's responsibility to implement a test()
method [2.1], which will be called by test_claim() [2.2].

So, provided this method is implemented, a local host check can be done
based on the host's view of resources.
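As a rough illustration of the plugin contract being discussed - a resource class exposing a test() hook that the claim path can call using only the host's local view (class name, method signature, and resource name are illustrative, not Nova's real ERT API):

```python
# Hypothetical sketch of an ERT-style resource plugin; not Nova code.
class ExampleResource(object):
    """Tracks a simple countable resource on a compute host."""

    def __init__(self, total):
        self.total = total
        self.used = 0

    def test(self, usage, limits):
        """Return None on success, or an error string if the claim fails.

        Called from the claim path (cf. test_claim()) using only the
        host's local view of the resource.
        """
        requested = usage.get("example_resource", 0)
        limit = limits.get("example_resource", self.total)
        if self.used + requested > limit:
            return ("example_resource exhausted: %d used, %d requested, "
                    "%d limit" % (self.used, requested, limit))
        return None

r = ExampleResource(total=4)
print(r.test({"example_resource": 2}, {}))  # -> None (claim succeeds)
r.used = 3
print(r.test({"example_resource": 2}, {}))  # -> error string (claim fails)
```

Note this sketch only sees a 'usage' blob and the host's counters - Nikola's complaint is precisely that user-supplied inputs (flavor extra_specs, image metadata) are not cleanly available here.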



Yes - the problem is there is no clear API to get all the needed bits to
do so - especially the user supplied one from image and flavors.
On top of that, in current implementation we only pass a hand-wavy
'usage' blob in. This makes anyone wanting to use this in conjunction
with some of the user supplied bits roll their own
'extract_data_from_instance_metadata_flavor_image' or similar which is
horrible and also likely bad for performance.


I see your concern where there is no interface for user-facing 
resources like flavor or image metadata.
I also think indeed that the big 'usage' blob is not a good choice for 
long-term vision.


That said, I don't think we should - as we say in French - throw the baby 
out with the bathwater... i.e. the problem is with the RT, not the ERT (apart 
from the mention of the third-party API that you noted - I'll get to it later below)

This is obviously a bigger concern when we want to allow users to pass
data (through image or flavor) that can affect scheduling, but still a
huge concern IMHO.
And here is where I agree with you: at the moment, the ResourceTracker (and
consequently the Extensible RT) only provides the view of the resources the
host knows about (see my point above), and possibly some other resources
are missing.
So, whatever your choice of going with or without ERT, your patch [3]
is still worthwhile if we don't want to hit the DB each time a claim runs.



As I see that there are already BPs proposing to use this IMHO broken
ERT ([4] for example), which will surely add to the proliferation of
code that hacks around these design shortcomings in what is already a
messy, but also crucial (for perf as well as features) bit of Nova 
code.

Two distinct implementations of that spec (ie. instances and flavors)
have been proposed [2.3] [2.4], so reviews are welcome. If you look at the
test() method, it's a no-op for both plugins. I'm open to comments
because I face the stated problem: how can we define a limit on just a
counter of instances and flavors?


Will look at these - but none of them seem to hit the issue I am
complaining about, and that is that it will need to consider other
request data for claims, not only data available for on instances.

Also - the fact that you don't implement test() in flavor ones tells me
that the implementation is indeed racy (but it is racy atm as well) and
two requests can indeed race for the same host, and since no claims are
done, both can succeed. This is I believe (at least in case of single
flavor hosts) unlikely to happen in practice, but you get the idea.


Agreed, these 2 patches probably require another iteration, in 
particular how we make sure that it won't be racy. So I need another 
run to think about what to test() for these 2 examples.
Another patch has to be done for aggregates, but it's still WIP so 

Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Osanai, Hisashi wrote:

 One idea to solve this problem is:
 If the py33 doesn't need to execute on stable/icehouse, just eliminate
 the py33.

Yes, that IS the solution.

But modifying tox.ini is not going to be a working implementation of that
solution.

 This is not a problem in tox.ini, 
 Means the py33 needs to execute on stable/icehouse. Here I misunderstand 
 something...

No, it does not; that line in tox.ini is not used by the gate.

 this is a problem in the infrastructure config.
 Means execfile function calls on python33 in happybase is a problem. If my 
 understanding 
 is correct, I agree with you and I think this is the direct cause of this 
 problem.

 Your idea to solve this is creating a patch for the direct cause, right?

My idea to solve this is to create a patch on
http://git.openstack.org/cgit/openstack-infra/config/
to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.
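For reference, a hypothetical sketch of what such an openstack-infra/config change could look like - the Zuul layout lets a job carry a branch matcher, so the py33 job can be kept off stable/icehouse (the file path, job name, and exact syntax here are assumptions, not the actual patch):

```yaml
# modules/openstack_project/files/zuul/layout.yaml (path assumed)
jobs:
  - name: gate-ceilometer-python33
    # Negative lookahead: run the job everywhere except stable/icehouse
    branch: ^(?!stable/icehouse).*$
```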

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] [Neutron] Blueprint -- Floating IP Auto Association

2014-08-13 Thread Salvatore Orlando
Hi,

this discussion came up recently regarding a nodepool issue.
The blueprint was recently revived and there is a proposed specification [1]

I tend to disagree with the way nova implements this feature today.
A configuration-wide flag indeed has the downside that this creates
different API behaviour across deployments.
As an API consumer who wants a public IP for an instance, I would
probably have to check whether such an IP is already available before
allocating one, which, by the way, is what nodepool does [2].

The specification [1] tries to make this clearer to users by allowing control
of this behaviour on a per-subnet basis. This is not bad, but I still think
it's not a great idea to introduce side effects in the neutron API (in this
case port create). Personally I think from the neutron side we can make
users' lives easier by tying a floating IP's lifecycle to the port it is
associated with, so that when the port is deleted, the floating IP is not
just disassociated but removed too. This won't give the same ease of use
that nova achieves today with auto_assign_floating_ips, but it will still be
a better level of automation without adding orchestration on the neutron side.
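A toy model of that proposed lifecycle tie-in (class and method names are made up for illustration; this is not neutron's actual code): deleting a port cascades to any floating IP associated with it, while unassociated floating IPs are untouched:

```python
# Toy model of the behaviour Salvatore describes; names are hypothetical.
class ToyNeutron:
    def __init__(self):
        self.ports = {}        # port_id -> set of associated floating-ip ids
        self.floatingips = {}  # fip_id -> associated port_id (or None)

    def create_port(self, port_id):
        self.ports[port_id] = set()

    def create_floatingip(self, fip_id, port_id=None):
        self.floatingips[fip_id] = port_id
        if port_id is not None:
            self.ports[port_id].add(fip_id)

    def delete_port(self, port_id):
        # Proposed behaviour: cascade-delete associated floating IPs
        # instead of merely disassociating them.
        for fip_id in self.ports.pop(port_id):
            del self.floatingips[fip_id]

n = ToyNeutron()
n.create_port("p1")
n.create_floatingip("fip1", port_id="p1")
n.delete_port("p1")
print("fip1" in n.floatingips)  # -> False: floating IP removed with the port
```

An unassociated floating IP created without a port_id would survive port deletions, preserving the explicit-allocation workflow.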

I've not yet made up my mind on this topic, but if you have any opinion,
please share it.

Salvatore


[1] https://review.openstack.org/#/c/106487/
[2]
http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/nodepool.py#n398


On 17 November 2013 01:08, Steven Weston steven-wes...@live.com wrote:

  Hi Salvatore!

 My responses (to your responses) are in-line. I think we could also use
 some feedback from the rest of the community on this, as well … would it be
 a good idea to discuss the implementation further at the next IRC meeting?

 Good Stuff!!

 Steven


 On 11/15/2013 7:39 AM, Salvatore Orlando wrote:




 On 14 November 2013 23:03, Steven Weston steven-wes...@live.com wrote:

  Hi Salvatore,

 My Launchpad ID is steven-weston.  I do not know who those other Steven
 Westons are … if someone has created clones of me, I am going to be upset!
 Anyway, Here are my thoughts on the implementation approach.

 I have now assigned the blueprint to you.


 Great, thank you!

 Is there any reason why the two alternatives you listed should be
 considered mutually exclusive?

 In line of principle they're not. But if we provide the facility in
 Neutron, doing the orchestration from nova for the association would be, in
 my opinion, just redundant.
 Unless I am not understanding what you suggest.


 I agree, implementing the functionality in nova and neutron would be
 redundant, although I was suggesting that the nova api be modified to allow
 for the auto association request on vm creation, which would then be passed
 to neutron for the port creation.  Currently it looks to only be available
 as a configuration option in nova.


   So far I understand the goal is to pass a 'autoassociate_fip' flag (or
 something similar) to POST /v2/port
  the operation will create two resource: a floating IP and a port, with
 only the port being returned (hence the side-effect).


 This sounds good, unless we want to modify the api behavior to return a
 list of floating ips, as you already suggested below.  Or would it be
 better to return a mapping of fixed ips to floating ips, since that would
 technically be more accurate?



   I think that in consideration of loosely coupled design, it would be
 best to make the attribute addition to the port in neutron and create the
 ability for nova to orchestrate the call as well.  I do not see a way to
 prevent modification of the REST API, and in the interest of fulfilling
 your concern of atomicity, the fact that an auto association was requested
 will need to be stored somewhere, in addition to the state of the request
 as well.

 Storing the autoassociation could be achieved with a flag on the floating
 IP data model. But would that also imply that the association for an
 auto-associate floatingIP cannot be altered?


 I think that depends on how we want it to work … see my comments below.

 Plus, tracking the attribute in neutron would allow the ability of
 other events to fire that would need to be performed in response to an auto
 associate request, such as split zone dns updates (for example).  The
 primary use case for this would be for request by nova, although I can
 think of other services which could use it as well -- load balancers,
 firewalls, vpn’s, and any component that would require connectivity to
 another network.  I think the default behavior of the auto association
 request would be to create ip addresses on the associated networks of the
 attached routers, unless a specific network is given.


  Perhaps I need more info on this specific point; I think the current
 floating_port_id - port_id might work to this aim; perhaps the reverse
 mapping would be needed to, and we might work to add id - but I don't see
 why we would need a 'auto_associate' flag. This is not a criticism. It's
 

Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Dina Belova wrote:

 Hisashi Osanai, I have really strange feeling about this issue.
 It happens only with py33 job for icehouse branch? Because actually happy
 base is the same for the master code Jenkins jobs, so it looks like that
 exec file issue should appear in master runs as well... Do I understand
 everything right?

happybase is not installed when running py33 on master because master has a
requirements-py3.txt without happybase in it, which stable/icehouse does
not.

 As I understand Julien, he proposes to run this job only for master (as it
 works for now magically for master checks) and skip in for everything
 earlier - mostly because it won't work for stable branches anyway - as
 there were no fixed ceilometer code itself there.

That's what I propose, and that should be done by hacking
openstack-infra/config AFAIK.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info




Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Dina Belova
Julien, will do right now.

Thanks
Dina


On Wed, Aug 13, 2014 at 2:35 PM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Aug 13 2014, Osanai, Hisashi wrote:

  One idea to solve this problem is:
  If the py33 doesn't need to execute on stable/icehouse, just eliminate
  the py33.

 Yes, that IS the solution.

 But modifying tox.ini is not going be a working implementation of that
 solution.

  This is not a problem in tox.ini,
  Means the py33 needs to execute on stable/icehouse. Here I misunderstand
 something...

 Not it does not, that line in tox.ini is not use by the gate.

  this is a problem in the infrastructure config.
  Means execfile function calls on python33 in happybase is a problem. If
 my understanding
  is correct, I agree with you and I think this is the direct cause of
 this problem.
 
  Your idea to solve this is creating a patch for the direct cause, right?

 My idea to solve this is to create a patch on
 http://git.openstack.org/cgit/openstack-infra/config/
 to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.

 --
 Julien Danjou
 # Free Software hacker
 # http://julien.danjou.info





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:
 On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
 On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org 
 wrote:
 
 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?
 
 [. . .]
 
 We also talked about tweaking the ratio of tech debt runways vs
 'feature runways. So, perhaps every second release is focussed on
 burning down tech debt and stability, whilst the others are focussed
 on adding features.
 
 I would suggest if we do such a thing, Kilo should be a 'stability'
 release.
 
 Excellent suggestion. I've wondered multiple times whether we could
 dedicate a good chunk (or the whole) of a specific release to heads-down
 bug fixing/stabilization. As it has been stated elsewhere on this list:
 there's no pressing need for a whole lot of new code submissions; rather,
 we should focus on fixing issues that affect _existing_ users/operators.
 
 There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
 on that viewpoint. :)

Yeah, I think declaring entire cycles to be stabilization vs feature
focused is far too coarse and inflexible. The most likely effect
of it would be that people who would otherwise contribute useful
features to openstack will simply walk away from the project for
that cycle.

I think that in fact the time when we need the strongest focus on
bug fixing is immediately after sizeable features have merged. I
don't think you want to give people the message that stabilization
work doesn't take place until the next 6-month cycle - that's far
too long to live with unstable code.

Currently we have a bit of focus on stabilization at each milestone,
but to be honest most of that focus is on the last milestone only.
I'd like to see us have a much more explicit push for regular
stabilization work during the cycle, to really reinforce the
idea that stabilization is an activity that should be taking place
continuously. Be really proactive in designating a day of the week
(e.g. Bug Fix Wednesdays) and make a concerted effort during that
day to have reviewers and developers concentrate exclusively on
stabilization-related activities.

 That said, I entirely agree with you and wish efforts to stabilize would
 take precedence over feature work.

I find it really contradictory that we have such a strong desire for
stabilization and testing of our code, but at the same time so many
people argue that the core teams should have nothing at all to do with
the stable release branches, which a good portion of our users will
actually be running. By ignoring stable branches, leaving them up to a
small team to handle, I think we are giving the wrong message about what
our priorities as a team are. I can't help thinking this filters
through to impact the way people think about their work on master.
Stabilization is important and should be baked into the DNA of our
teams to the extent that identifying bug fixes for stable is just
an automatic part of our dev lifecycle. The quantity of patches going
into stable isn't so high that it takes up significant resources when
spread across the entire core team.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


