[openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-08 Thread Sukhdev Kapur
Folks,

We are planning to hold an ML2 coding sprint on October 6 through 8, 2015.
Some are calling it a Liberty late-cycle sprint; others are calling it a
Mitaka early-cycle sprint.

The ML2 team has been discussing the issues related to synchronization of
Neutron DB resources with the back-end drivers. Several issues have been
reported when multiple ML2 drivers are deployed in scaled HA deployments.
The issues surface when either side (Neutron or the back-end HW/drivers)
restarts and the resource views get out of sync. There is no mechanism in
Neutron or the ML2 plugin that ensures synchronization of state between
the front end and the back end. The drivers either end up implementing
their own solutions or they dump the issue on the operators to intervene
and correct it manually.

We plan to use TaskFlow to implement a framework in the ML2 plugin that
can be leveraged by ML2 drivers to achieve synchronization in a
simplified manner.
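
To make this concrete, here is a minimal sketch of what such a sync task
could look like with TaskFlow. The task and flow names are illustrative
only, not an agreed design:

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class SyncNetworks(task.Task):
        # Hypothetical task: reconcile the Neutron DB view of networks
        # with what a back-end driver actually has configured.
        def execute(self):
            db_view = set()       # placeholder: read from the Neutron DB
            backend_view = set()  # placeholder: query the driver/back-end
            for missing in db_view - backend_view:
                pass              # re-create the resource on the back-end
            for stale in backend_view - db_view:
                pass              # clean up the stale back-end resource

    # a driver could run this on restart, or periodically
    flow = linear_flow.Flow('ml2-resync').add(SyncNetworks())
    engines.run(flow)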

There are a couple of additional items on the sprint agenda, which are
listed on the etherpad [1]. The details of the venue and schedule are
listed on the etherpad as well. The sprint is hosted by Yahoo Inc.
Anyone interested in the topics listed on the etherpad is welcome to
sign up for the sprint and join us in making this a reality.

Additionally, we will utilize this sprint to formalize the design
proposal(s) for the fishbowl session at the Tokyo summit [2].

For any questions or clarifications, please join us in our weekly ML2
meeting on Wednesdays at 1600 UTC (9 AM Pacific) in #openstack-meeting-alt.

Thanks
-Sukhdev

[1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
[2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-08 Thread Germy Lure
Hi Gal,

Thanks -- it seems we finally understand each other.

Yes, in bulk. But I don't think that's just an enhancement to the API. The
bulk operation is the more common scenario; it is more useful and covers
the single port-mapping scenario as well.

By the way, a bulk operation may apply to a subnet, a range (IP1 to
IP100), or even all the VMs behind a router. Perhaps we need to make a
choice between them; I prefer "range" because it's more flexible and
easier to use.

Many thanks.
Germy

On Wed, Sep 9, 2015 at 3:30 AM, Carl Baldwin  wrote:

> On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie  wrote:
> > Hello All,
> >
> > I have searched and found many past efforts to implement port forwarding
> in
> > Neutron.
>
> I have heard a desire for this use case expressed a few times in the
> past without it gaining much traction.  Your summary here seems to
> show that this continues to come up.  I would be interested in seeing
> this move forward.
>
> > I have found two incomplete blueprints [1], [2] and an abandoned patch
> [3].
> >
> > There is even a project in Stackforge [4], [5] that claims
> > to implement this, but the L3 parts in it seem older than current
> > master.
>
> I looked at this Stackforge project.  It looks like files copied out
> of neutron and modified, as an alternative to proposing a patch set to
> neutron.
>
> > I have recently come across this requirement for various use cases.
> > One of them is providing feature parity with Docker's port-mapping
> > feature (for Kuryr); another is saving floating IP space.
>
> I think both of these could be compelling use cases.
>
> > There have been many discussions in the past that require this
> > feature, so I assume there is demand to make this formal. Just a few
> > examples: [6], [7], [8], [9].
> >
> > The idea in a nutshell is to support port forwarding (TCP/UDP ports)
> > on the external router leg from the public network to internal ports,
> > so a user can use one floating IP (the external gateway router
> > interface IP) and reach different internal ports depending on the
> > port numbers. This should happen on the network node (and can also be
> > leveraged for security reasons).
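> >
> > For a concrete picture, each such mapping would conceptually become a
> > DNAT rule in the router's network namespace. A rough sketch (the
> > addresses and ports below are invented, not from any spec):
> >
> >     # forward TCP 8080 on the router's external IP to an internal VM
> >     iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 8080 \
> >         -j DNAT --to-destination 10.0.0.5:80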
>
> I'm sure someone will ask how this works with DVR.  It should be
> implemented so that it works with a DVR router but it will be
> implemented in the central part of the router.  Ideally, DVR and
> legacy routers work the same in this regard and a single bit of code
> will implement it for both.  If this isn't the case, I think that is a
> problem with our current code structure.
>
> > I think that the POC implementation in the Stackforge project shows
> > that this needs to be implemented inside the L3 parts of the current
> > reference implementation; it will be hard to maintain something like
> > that in an external repository. (I also think that the API/DB
> > extensions should be close to the current L3 reference
> > implementation.)
>
> Agreed.
>
> > I would like to renew the efforts on this feature and propose an RFE
> > and a spec for this for the next release; any comments/ideas/thoughts
> > are welcome. And of course, if any of the people interested, or any
> > of the people who worked on this before, want to join the effort, you
> > are more than welcome to join and comment.
>
> I have added this to the agenda for the Neutron drivers meeting.  When
> the team starts to turn its eye toward Mitaka, we'll discuss it.
> Hopefully that will be soon, as I'm starting to think about it already.
>
> I'd like to see how the API for this will look.  I don't think we'll
> need more detail than that for now.
>
> Carl
>
> > [1]
> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> > [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> > [3] https://review.openstack.org/#/c/60512/
> > [4] https://github.com/stackforge/networking-portforwarding
> > [5] https://review.openstack.org/#/q/port+forwarding,n,z
> >
> > [6]
> >
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> > [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> > [8]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> > [9]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
> >
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] This is what disabled-by-policy should look like to the user

2015-09-08 Thread Adam Young

On 09/06/2015 03:31 PM, Duncan Thomas wrote:



On 5 Sep 2015 05:47, "Adam Young" wrote:


> Then let me hijack:
>
> Policy is still broken.  We need the pieces of Dynamic policy.
>
> I am going to call for a cross project policy discussion for the 
upcoming summit.  Please, please, please all the projects attend. The 
operators have made it clear they need better policy support.


Can you give us a heads up on the perceived shortcomings, please, 
together with an overview of any proposed changes? Introducing something 
in advance over email, so that people can ruminate on the details and be 
better prepared to discuss them, is probably more productive than 
expecting tired, jet-lagged people to hear it for the first time in a 
session and think on their feet.


In general, I think the practice of introducing new things at design 
summits, rather than letting people prepare, is slowing us down as a 
community.




I've been harping on this for a while, both at summits and before.

It starts with:

https://bugs.launchpad.net/keystone/+bug/968696

We can't fix that until we have an approach that lets us unstick the 
situations where we need a global admin.


This was the start of it:
https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/

I submitted this overview spec (which was deemed not implementable 
because it was an overview):


https://review.openstack.org/#/c/147651/



and a bunch of supporting specs:

https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs+branch:master+topic:dynamic-policy,n,z

We've made very little progress on this in the six months since.

We had a cross-project policy discussion in Vancouver.  It was almost 
all Keystone folks, with very few people from other projects.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [keystone] Set minimum version for passlib

2015-09-08 Thread Adam Young

On 09/08/2015 09:50 AM, Alan Pevec wrote:

Hi all,

according to https://wiki.openstack.org/wiki/DepFreeze I'm requesting
depfreeze exception for
https://review.openstack.org/221267
This is just a sync with reality, copying Javier's description:

(Keystone) commit a7235fc0511c643a8441efd3d21fc334535066e2 [1] uses
passlib.utils.MAX_PASSWORD_SIZE, which was only introduced to
passlib in version 1.6
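
If I read the review correctly, the sync presumably amounts to a
one-line minimum-version bump along these lines (the exact line is in
the review itself):

    # global-requirements.txt
    passlib>=1.6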

Cheers,
Alan

[1] https://review.openstack.org/217449

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API v2.1 reference documentation

2015-09-08 Thread Anne Gentle
On Tue, Sep 8, 2015 at 8:41 PM, Ken'ichi Ohmichi 
wrote:

> Hi Melanie,
>
> 2015-09-09 8:00 GMT+09:00 melanie witt :
> > Hi All,
> >
> > With usage of v2.1 picking up (devstack) I find myself going to the API
> ref documentation [1] often and find it lacking compared with the similar
> v2 doc [2]. I refer to this doc whenever I see a novaclient bug where
> something broke with v2.1 and I'm trying to find out what the valid request
> parameters are, etc.
> >
> > The main thing I notice is in the v2.1 docs, there isn't any request
> parameter list with descriptions like there is in v2. And I notice "create
> server" documentation doesn't seem to exist -- there is "Create multiple
> servers" but it doesn't provide much nsight about what the many request
> parameters are.
> >
> > I assume the docs are generated from the code somehow, so I'm wondering
> how we can get this doc improved? Any pointers would be appreciated.
>

They are manual, and Alex made a list of how far behind the v2.1 docs
are in a doc bug here:

https://bugs.launchpad.net/openstack-api-site/+bug/1488144

It's great to see Atsushi Sakai working hard on those; please join him
in the patching.

We're still patching WADL for this release, with the hope of adding
Swagger for many services by October 15th -- the WADL-to-Swagger tool we
have now migrates the existing WADL.

Thanks,
Anne

>
> > Thanks,
> > -melanie (irc: melwitt)
> >
> >
> > [1] http://developer.openstack.org/api-ref-compute-v2.1.html
> > [2] http://developer.openstack.org/api-ref-compute-v2.html
>
> Nice point.
> The "create server" API is the most important one and needs to be
> described in the document.
>
> In the short term, we need to document it from the code by hand; we
> can learn the available parameters from the JSON-Schema code.
> The base parameters can be found at
>
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
> In addition, there are extensions which add more parameters, and we
> can find those at
>
> https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
> If a module file contains the dict *server_create*, those entries are
> also API parameters.
> For example, the keypairs extension adds a "key_name" parameter, as
> seen at
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py
>
> In the long term, it would be great to generate this API parameter
> documentation from the JSON-Schema directly.
> JSON-Schema supports a "description" field, and we can use it to
> describe the meaning of each parameter.
> But that is the long-term approach; we need to write the docs by hand
> for now.
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Ben Swartzlander

On 09/08/2015 01:58 PM, Doug Hellmann wrote:

Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:

On 09/03/2015 08:22 AM, Thierry Carrez wrote:

Hi everyone,

A feature deprecation policy is a standard way to communicate and
perform the removal of user-visible behaviors and capabilities. It helps
setting user expectations on how much and how long they can rely on a
feature being present. It gives them reassurance over the timeframe they
have to adapt in such cases.

In OpenStack we always had a feature deprecation policy that would apply
to "integrated projects", however it was never written down. It was
something like "to remove a feature, you mark it deprecated for n
releases, then you can remove it".

We don't have an "integrated release" anymore, but having a base
deprecation policy, and knowing which projects are mature enough to
follow it, is a great piece of information to communicate to our users.

That's why the next-tags workgroup at the Technical Committee has been
working to propose such a base policy as a 'tag' that project teams can
opt to apply to their projects when they agree to apply it to one of
their deliverables:

https://review.openstack.org/#/c/207467/

Before going through the last stage of this, we want to survey existing
projects to see which deprecation policy they currently follow, and
verify that our proposed base deprecation policy makes sense. The goal
is not to dictate something new from the top, it's to reflect what's
generally already applied on the field.

In particular, the current proposal says:

"At the very minimum the feature [...] should be marked deprecated (and
still be supported) in the next two coordinated end-of-cycle releases.
For example, a feature deprecated during the M development cycle should
still appear in the M and N releases and cannot be removed before the
beginning of the O development cycle."

That would be a n+2 deprecation policy. Some suggested that this is too
far-reaching, and that a n+1 deprecation policy (feature deprecated
during the M development cycle can't be removed before the start of the
N cycle) would better reflect what's being currently done. Or that
config options (which are user-visible things) should have n+1 as long
as the underlying feature (or behavior) is not removed.

Please let us know what makes the most sense. In particular between the
3 options (but feel free to suggest something else):

1. n+2 overall
2. n+2 for features and capabilities, n+1 for config options
3. n+1 overall

I think any discussion of a deprecation policy needs to be combined with
a discussion about LTS (long term support) releases. Real customers (not
devops users -- people who pay money for support) can't deal with
upgrades every 6 months.

Unavoidably, distros are going to want to support certain releases for
longer than the normal upstream support window so they can satisfy the
needs of the aforementioned customers. This will be true whether the
deprecation policy is N+1, N+2, or N+3.

It makes sense for the community to define LTS releases and coordinate
making sure all the relevant projects are mutually compatible at that
release point. Then the job of actually maintaining the LTS release can
fall on people who care about such things. The major benefit to solving
the LTS problem, though, is that deprecation will get a lot less painful
because you could assume upgrades to be one release at a time or
skipping directly from one LTS to the next, and you can reduce your
upgrade test matrix accordingly.

How is this fundamentally different from what we do now with stable
releases, aside from involving a longer period of time?


It would be a recognition that most customers don't want to upgrade 
every 6 months -- they want to skip over 3 releases and upgrade every 2 
years. I'm sure there are customers all over the spectrum from those who 
run master, to those who do want a new release every 6 months, to some that 
want to install something and run it forever without upgrading*. My 
intuition is that, for most customers, 2 years is a reasonable amount of 
time to run a release before upgrading. I think major Linux distros 
understand this, as is evidenced by their release and support patterns.


As sdague mentions, the idea of LTS is really a separate goal from the 
deprecation policy, but I see the two becoming related when the 
deprecation policy makes it impossible to cleanly jump 4 releases in a 
single upgrade. I also believe that if you solve the LTS problem, the 
deprecation policy flows naturally from whatever your supported-upgrade 
path is: you simply avoid breaking anyone who does a supported upgrade.


It sounds to me like the current supported upgrade path is: you upgrade 
each release one at a time, never skipping over a release. In this 
model, N+1 deprecation makes perfect sense. I think the same people who 
want longer deprecation periods are the ones who want to skip over 
releases when upgrading, for the reasons I mentioned.

Re: [openstack-dev] [nova] API v2.1 reference documentation

2015-09-08 Thread Ken'ichi Ohmichi
Hi Melanie,

2015-09-09 8:00 GMT+09:00 melanie witt :
> Hi All,
>
> With usage of v2.1 picking up (devstack) I find myself going to the API ref 
> documentation [1] often and find it lacking compared with the similar v2 doc 
> [2]. I refer to this doc whenever I see a novaclient bug where something 
> broke with v2.1 and I'm trying to find out what the valid request parameters 
> are, etc.
>
> The main thing I notice is in the v2.1 docs, there isn't any request 
> parameter list with descriptions like there is in v2. And I notice "create 
> server" documentation doesn't seem to exist -- there is "Create multiple 
> servers" but it doesn't provide much nsight about what the many request 
> parameters are.
>
> I assume the docs are generated from the code somehow, so I'm wondering how 
> we can get this doc improved? Any pointers would be appreciated.
>
> Thanks,
> -melanie (irc: melwitt)
>
>
> [1] http://developer.openstack.org/api-ref-compute-v2.1.html
> [2] http://developer.openstack.org/api-ref-compute-v2.html

Nice point.
The "create server" API is the most important one and needs to be
described in the document.

In the short term, we need to document it from the code by hand; we
can learn the available parameters from the JSON-Schema code.
The base parameters can be found at
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
In addition, there are extensions which add more parameters, and we
can find those at
https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
If a module file contains the dict *server_create*, those entries are
also API parameters.
For example, the keypairs extension adds a "key_name" parameter, as
seen at
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py

In the long term, it would be great to generate this API parameter
documentation from the JSON-Schema directly.
JSON-Schema supports a "description" field, and we can use it to
describe the meaning of each parameter.
But that is the long-term approach; we need to write the docs by hand
for now.
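
As a rough sketch of that long-term idea, a schema entry could carry its
doc text inline. The "description" values below are invented for
illustration; they are not in the nova tree today:

    # hedged sketch: base server-create schema with inline descriptions
    server_create = {
        'type': 'object',
        'properties': {
            'name': {
                'type': 'string', 'minLength': 1, 'maxLength': 255,
                'description': 'Human-readable name for the new server.',
            },
            'imageRef': {
                'type': 'string',
                'description': 'ID or URL of the image to boot from.',
            },
        },
    }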

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] python-selenium landed in Debian main today (in Debian Experimental for the moment)

2015-09-08 Thread Richard Jones
On 9 September 2015 at 05:35, Thomas Goirand  wrote:

> After the non-free files were removed from the package (after I asked
> for it through the Debian bug https://bugs.debian.org/770232), Selenium
> was uploaded and reached Debian Experimental in main today (i.e.
> Selenium is not in the non-free section of Debian anymore). \o/
>

\o/


Now, I wonder: can the Horizon team use python-selenium as uploaded to
> Debian experimental today? Can we run the Selenium unit tests, even
> without the browser plugins? It is my understanding that it's possible,
> if we use something like PhantomJS, which is also available in Debian.
>

We can't use PhantomJS as a webdriver because a couple of the tests
interact with file inputs and ghostdriver doesn't support those, sadly
(and the developer of ghostdriver is MIA). We are pretty much stuck with
just Firefox as the webdriver.
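
For reference, the kind of webdriver setup the integration tests are
pinned to looks roughly like this (the dashboard URL is an assumption
about a local deployment):

    from selenium import webdriver

    driver = webdriver.Firefox()  # the only webdriver the suite can rely on
    try:
        driver.get('http://localhost/dashboard')
        print(driver.title)
    finally:
        driver.quit()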


 Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Using storage drivers outside of openstack/cinder

2015-09-08 Thread John Griffith
On Tue, Sep 8, 2015 at 5:01 PM, Walter A. Boring IV 
wrote:

> Hey Tony,
>   This has been a long-running pain point for some of the drivers in
> Cinder. As a reviewer, I try to -1 drivers that talk directly to the
> database, as I don't think drivers *should* be doing that. But for some
> drivers, unfortunately, in order to implement the features, they
> currently need to talk to the DB. :( One of the new features in Cinder,
> namely consistency groups, has a bug that basically requires drivers to
> talk to the DB to fetch additional data. There are plans to remedy this
> problem in the M release of Cinder. For other DB calls in drivers, it's
> a case-by-case basis for removing the call, and it's not entirely
> obvious how to do that at the current time. It's a topic that has come
> up now and again within the community, and I for one would like to see
> the DB calls removed as well. Feel free to help contribute! It's open
> source, after all. :)
>
> Cheers,
> Walt
>
>> Openstack/Cinder has a wealth of storage drivers to talk to different
>> storage subsystems, which is great for users of openstack.  However, it
>> would be even greater if this same functionality could be leveraged
>> outside of openstack/cinder.  So that other projects don't need to
>> duplicate the same functionality when trying to talk to hardware.
>>
>>
>> When looking at cinder and asking around [1] about how one could
>> potentially do this, I found that there is quite a bit of coupling
>> with openstack, like:
>>
>> * The NFS driver is initialized with knowledge about whether any volumes
>> exist in the database or not, and if not, can trigger certain behavior
>> to set permissions, etc.  This means that something other than the
>> cinder-volume service needs to mimic the right behavior if using this
>> driver.
>>
>> * The LVM driver touches the database when creating a backup of a volume
>> (many drivers do), and when managing a volume (importing an existing
>> external LV to use as a Cinder volume).
>>
>> * A few drivers (GPFS, others?) touch the db when managing consistency
>> groups.
>>
>> * EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
>> snapshots.
>>
>>
>> Am I the only one that thinks this would be useful?  What ideas do
>> people have for making the cinder drivers stand alone, so that everyone
>> could benefit from this great body of work?
>>
>> Thanks,
>> Tony
>>
>> [1] Special thanks to Eric Harney for the examples of coupling
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> .
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hey Tony,

Thanks for posting this. I've thought for a while that something like
what you propose would be AWESOME and is in fact the direction we should
go. Initially I'd planned to make Cinder a consumable service outside of
OpenStack, but over time that's become a much harder task, with all the
libraries, dependencies and assumptions that we make with respect to our
environment.

The idea of a more "general" driver that can be consumed is something
that's come up a number of times, and it has been proposed to sit in
Cinder as a driver (ViPR/CoprHD and some others over the years). I think
we (Cinder) could provide that level of abstraction and consumability
better than most of what's been proposed so far. It would be a good deal
of work to make it happen, and it would require some buy-in / commitment
from almost all the Cinder contributors, but I think it would be
something worth doing.

I'll be curious to see if any other interest is expressed here. There
are ways to deal with the DB pieces and things like that, I think
(hackish ways, like config settings for OpenStack vs. non-OpenStack
environments). Anyway, I'd love to talk more about it... maybe in Tokyo?

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1) fixes be applicable for v2.1 too?

2015-09-08 Thread Ken'ichi Ohmichi
2015-09-08 19:45 GMT+09:00 Sean Dague :
> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>> Hi All,
>>
>> As we all knows, api-paste.ini default setting for /v2 was changed to
>> run those on v2.1 (v2.0 on v2.1), which is a really great thing for easy
>> code maintenance in future (removal of v2 code).
>>
>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
>> were found[1] and fixed. But I think we should fix those only for v2
>> compatible mode not for v2.1.
>>
>> For example, in bug #1491325, 'device' in the volume attachment
>> request is an optional param [2] (which does not mean 'null-able' is
>> allowed), and v2.1 used to detect and error on usage of 'device' as
>> "None". But since it was used as 'None' by many /v2 users, and so as
>> not to break them, we should allow 'None' in v2 compatible mode as
>> well. But we should not allow the same for v2.1.
>>
>> IMO the v2.1 strong input validation feature (which helps ensure the
>> API is used in the correct manner) should not be changed, and for v2
>> compatible mode we should have another solution that does not affect
>> v2.1 behavior -- maybe a different schema for v2 compatible mode, with
>> the necessary fixes there.
>>
>> I'm trying to learn others' opinions on this, or whether I missed
>> something during any discussion.
>>
>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
>>   https://bugs.launchpad.net/nova/+bug/1491511
>>
>> [2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
>
> A lot of these issue need to be a case by case determination.
>
> In this particular case, we had the documentation, the nova code, the
> clients, and the future.
>
> The documentation: device is optional. That means it should be a string
> or not there at all. The schema was set to enforce this on v2.1
>
> The nova code: device = None was accepted previously, because device is
> a mandatory parameter all the way down the call stack. 2 layers in we
> default it to None if it wasn't specified.
>
> The clients: both python-novaclient and ruby fog sent device=None in
> the common case. While only 2 data points, this does demonstrate that
> the usage is more widespread than just our buggy code.
>
> The future: it turns out we really can't honor this parameter in most
> cases anyway, and passing it just means causing bugs. This is an
> artifact of the EC2 API that only works on specific (and possibly
> forked) versions of Xen that Amazon runs. Most hypervisor / guest
> relationships don't allow this to be set. The long term direction is
> going to be removing it from our API.
>
> Given that, it seemed fine to relax this across the whole API. We
> screwed up and didn't test this case correctly, and long term we're
> going to dump it. So we don't want to honor 3 different versions of
> this API, especially as no client seems to have been written against
> the documentation; they were written against the code in question. If
> they write to the docs, they'll be fine. And the clients that are out
> in the wild will be fine as well.

I think the case-by-case determination is fine, but the current process
for relaxing validation seems wrong.
In Kilo, we required nova-specs for relaxing v2.1 API validation, like
https://review.openstack.org/#/c/126696/
and we had plenty of discussion and built a consensus about that.
But we merged the above patch in just two working days without any
nova-spec, even though we didn't have a consensus on whether a v2.1
validation change requires a microversion bump or not.

If we really need to relax validation for the v2.0 compatible API,
please consider separating the v2.0 API schema from the v2.1 API schema.
I have one idea about that in https://review.openstack.org/#/c/221129/
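
In spirit, the separation could look something like the following
sketch (not the actual patch):

    # v2.1 keeps strict validation: 'device' must be a string if present
    volume_attach_v21 = {
        'type': 'object',
        'properties': {
            'device': {'type': 'string'},
        },
    }

    # The v2.0-compat schema relaxes only this field, since legacy
    # clients (python-novaclient, ruby fog) sent device=None.
    volume_attach_v20 = {
        'type': 'object',
        'properties': {
            'device': {'type': ['string', 'null']},
        },
    }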

We worked toward a strict and consistent validation approach for the
v2.1 API over two years, and I don't want to loosen it without enough
thought.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2015-09-08 Thread Clint Byrum
Neutron would add a soft router that only knows the route to the
metadata service (and any other services you want your neutron
private-network VMs to be able to reach). This is not unique to the
metadata service: Heat, Trove, etc. all want this as a feature, so that
one can poke holes out of these private networks only to the places
where the cloud operator has services running.

Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> How does that work with neutron private networks?
> 
> Thanks,
> Kevin
> 
> From: Clint Byrum [cl...@fewbar.com]
> Sent: Tuesday, September 08, 2015 1:35 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
> 
> Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > AFAIK, the cloud-init metadata service can currently be accessed only by 
> > sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> > currently implemented. Is anyone working on this, or has anyone tried to 
> > address this before?
> >
> 
> I'm not sure we'd want to carry the way metadata works forward now that
> we have had some time to think about this.
> 
> We already have DHCP6 and NDP. Just use one of those, and set the host's
> name to a nonce that it can use to lookup the endpoint for instance
> differentiation via DNS SRV records. So if you were told you are
> 
> d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
> 
> Then you look that up as a SRV record on your configured DNS resolver,
> and connect to the host name returned and do something like  GET
> /d02a684d-56ea-44bc-9eba-18d997b1d32d
> 
> And voilà, metadata returns without any special link-local thing, and
> it works like any other dual-stack application on the planet.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Should pod/rc/service 'bay_uuid' be foreign key for Bay 'uuid' ?

2015-09-08 Thread Vilobh Meshram
Hi All,


K8s resources Pod/RC/Service share the same 'bay_uuid', which they get
from the Bay 'uuid' (which happens to be the primary key for Bay).
Wouldn't it be a good idea to make the pod/rc/service 'bay_uuid' a
foreign key to the Bay 'uuid'? Are there any cons in doing so? Why was it
done this specific way initially?

Some pros of doing so:

1. If the Pod/RC/Service 'bay_uuid' is a foreign key to the Bay 'uuid',
it gives a clear indication of whether a Bay exists or not.
2. No additional lookup against the Bay table is necessary to check the
existence of a Bay.



Nova already does this [1], and other projects follow the same pattern.


- Vilobh

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L352
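
A minimal sketch of what the model change could look like (column types
assumed from the existing models, mirroring the nova pattern in [1]):

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Pod(Base):
        # sketch only; the same change would apply to RC and Service
        __tablename__ = 'pod'
        id = Column(Integer, primary_key=True)
        bay_uuid = Column(String(36), ForeignKey('bay.uuid'))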
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Using storage drivers outside of openstack/cinder

2015-09-08 Thread Walter A. Boring IV

Hey Tony,
  This has been a long-running pain point for some of the drivers in
Cinder. As a reviewer, I try to -1 drivers that talk directly to the
database, as I don't think drivers *should* be doing that. But for some
drivers, unfortunately, in order to implement the features, they
currently need to talk to the DB. :( One of the new features in Cinder,
namely consistency groups, has a bug that basically requires drivers to
talk to the DB to fetch additional data. There are plans to remedy this
problem in the M release of Cinder. For other DB calls in drivers, it's
a case-by-case basis for removing the call, and it's not entirely
obvious how to do that at the current time. It's a topic that has come
up now and again within the community, and I for one would like to see
the DB calls removed as well. Feel free to help contribute! It's open
source, after all. :)

Cheers,
Walt

Openstack/Cinder has a wealth of storage drivers to talk to different
storage subsystems, which is great for users of openstack.  However, it
would be even greater if this same functionality could be leveraged
outside of openstack/cinder.  So that other projects don't need to
duplicate the same functionality when trying to talk to hardware.


When looking at cinder and asking around [1] about how one could
potentially do this, I found that there is quite a bit of coupling
with openstack, like:

* The NFS driver is initialized with knowledge about whether any volumes
exist in the database or not, and if not, can trigger certain behavior
to set permissions, etc.  This means that something other than the
cinder-volume service needs to mimic the right behavior if using this
driver.

* The LVM driver touches the database when creating a backup of a volume
(many drivers do), and when managing a volume (importing an existing
external LV to use as a Cinder volume).

* A few drivers (GPFS, others?) touch the db when managing consistency
groups.

* EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
snapshots.


Am I the only one that thinks this would be useful?  What ideas do
people have for making the cinder drivers stand alone, so that everyone
could benefit from this great body of work?

Thanks,
Tony

[1] Special thanks to Eric Harney for the examples of coupling

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] API v2.1 reference documentation

2015-09-08 Thread melanie witt
Hi All,

With usage of v2.1 picking up (devstack) I find myself going to the API ref 
documentation [1] often and find it lacking compared with the similar v2 doc 
[2]. I refer to this doc whenever I see a novaclient bug where something broke 
with v2.1 and I'm trying to find out what the valid request parameters are, etc.

The main thing I notice is in the v2.1 docs, there isn't any request parameter 
list with descriptions like there is in v2. And I notice "create server" 
documentation doesn't seem to exist -- there is "Create multiple servers" but 
it doesn't provide much insight about what the many request parameters are.

I assume the docs are generated from the code somehow, so I'm wondering how we 
can get this doc improved? Any pointers would be appreciated.

Thanks,
-melanie (irc: melwitt)


[1] http://developer.openstack.org/api-ref-compute-v2.1.html
[2] http://developer.openstack.org/api-ref-compute-v2.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-08 Thread Emilien Macchi


On 09/08/2015 10:57 AM, Paul Belanger wrote:
> Greetings,
> 
> I wanted to start a discussion about the future of ansible / ansible
> roles in OpenStack. Over the last week or so I've started down the
> ansible path, beginning with my first ansible role,
> ansible-role-nodepool [1].
> 
> My initial question is simple: now that the big tent is upon us, I
> would like some way to include ansible roles in the openstack git
> workflow. I first thought the role might live under openstack-infra,
> however I am not sure that is the right place. My reason is that -infra
> tends to include modules they currently run under the -infra namespace,
> and I don't want to start the effort of convincing people to migrate.

I'm wondering what the goal of ansible-role-nodepool would be and what
it would orchestrate exactly. I did not find a README that explains it,
and digging into the code makes me think you are trying to prepare
nodepool images, but I don't see exactly why.

Since we already have puppet-nodepool, I'm curious about the purpose of
this role.
IMHO, if we had to add such a new repo, it would be under
openstack-infra namespace, to be consistent with other repos
(puppet-nodepool, etc).

> Another thought might be to reach out to the os-ansible-deployment
> team and ask how they see roles in OpenStack moving forward (mostly
> the reason for this email).

os-ansible-deployment aims to setup OpenStack services in containers
(LXC). I don't see relation between os-ansible-deployment (openstack
deployment related) and ansible-role-nodepool (infra related).

> Either way, I would be interested in feedback on moving forward on
> this. Using travis-ci and github works, but the OpenStack workflow is
> much better.
> 
> [1] https://github.com/pabelanger/ansible-role-nodepool
> 

To me, it's unclear how and why we are going to use
ansible-role-nodepool. Could you explain with a use case?

Thanks,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting Started : OpenStack

2015-09-08 Thread gord chung



On 07/09/2015 1:59 PM, Bhagyashree Uday wrote:

Hi ,

I am Bhagyashree from India (IRC nick: bee2502). I have previous 
experience in data analytics, including machine learning, NLP, IR and 
user experience research. I am interested in contributing to OpenStack 
on projects involving data analysis. Also, if these projects could be 
part of Outreachy, it would be an added bonus. I went through the 
project ideas listed on https://wiki.openstack.org/wiki/Internship_ideas 
and one of these projects interested me a lot -
Understand OpenStack Operations via Insights from Logs and Metrics: A 
Data Science Perspective
However, this project does not have a mentor listed, and I was hoping 
you could provide me with an individual contact from the OpenStack 
community who would be interested in mentoring this project, or a 
mailing list/thread/IRC channel where I could look for a mentor. Other 
open data science project/idea suggestions are also welcome.




There was a project proposed a few months back called Cognitive [1],
but I don't know the status of that project. As for Ceilometer, it
doesn't encompass data analysis, but it does collect data which you
might be interested in leveraging (i.e. resource metrics and system
events).


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064195.html

--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread gord chung



On 08/09/2015 5:29 PM, Sean Dague wrote:

On 09/08/2015 03:32 PM, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:

On 09/08/2015 01:07 PM, Doug Hellmann wrote:

Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:

On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann

I'd like to come up with some way to express the time other than
N+M because in the middle of a cycle it can be confusing to know
what that means (if I want to deprecate something in August am I
far enough through the current cycle that it doesn't count?).

Also, as we start moving more projects to doing intermediate releases
the notion of a "release" vs. a "cycle" will drift apart, so we
want to talk about "stable releases" not just any old release.


I've always thought the appropriate equivalent for projects not following
the (old) integrated release cadence was for N == six months.  It sets
approx. the same pace and expectation with users/deployers.

For those deployments tracking trunk, a similar approach can be taken, in
that deprecating a config option in M3 then removing it in N1 might be too
quick, but rather wait at least the same point in the following release
cycle to increment 'N'.

dt


Making it explicitly date-based would simplify tracking, to be sure.

I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
weeks (which I've seen happen). However, N == six months might make an FFE
deprecation landed in one release run into the FFE in the next. For the CD
case my suggestion is > 3 months. Because if you aren't CDing in
increments smaller than that, and hence seeing the deprecation, you
aren't really doing the C part of CDing.

 -Sean


Do those 3 months need to span more than one stable release? For
projects doing intermediary releases, there may be several releases
within a 3 month period.

Yes. 1 stable release branch AND 3 months linear time is what I'd
consider reasonable.

-Sean

While the pyro in me would like to burn things ASAP, my fellow 
contributors won't let me, so Ceilometer has typically done 
deprecate->deprecate->remove. But I agree with sdague: the bare minimum 
should be the above. Operators will yell; don't make them yell.


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2015-09-08 Thread Fox, Kevin M
How does that work with neutron private networks?

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, September 08, 2015 1:35 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> AFAIK, the cloud-init metadata service can currently be accessed only by 
> sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> currently implemented. Is anyone working on this, or has anyone tried to 
> address this before?
>

I'm not sure we'd want to carry the way metadata works forward now that
we have had some time to think about this.

We already have DHCP6 and NDP. Just use one of those, and set the host's
name to a nonce that it can use to lookup the endpoint for instance
differentiation via DNS SRV records. So if you were told you are

d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com

Then you look that up as a SRV record on your configured DNS resolver,
and connect to the host name returned and do something like  GET
/d02a684d-56ea-44bc-9eba-18d997b1d32d

And voilà, metadata returns without any special link-local thing, and
it works like any other dual-stack application on the planet.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] add lin hua cheng to osc-core

2015-09-08 Thread Dean Troyer
+++  Thanks Lin!

dt

On Tue, Sep 8, 2015 at 4:38 PM, Steve Martinelli 
wrote:

> Hey everyone,
>
> I would like to nominate Lin Hua Cheng to the OpenStackClient core team.
>
> Lin continues to be an outstanding OpenStack contributor, as noted by his
> core status in both Keystone and Horizon. He has somehow found time to also
> contribute to OpenStackClient and provide meaningful and high quality
> reviews, as well as several timely bug fixes. He knows the code base inside
> and out, and his UX background from horizon has been a great asset.
>
> If no one disagrees with this by end of day on Friday, I'll give Lin his
> new awesome core power that evening.
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Using storage drivers outside of openstack/cinder

2015-09-08 Thread Tony Asleson
Openstack/Cinder has a wealth of storage drivers to talk to different
storage subsystems, which is great for users of openstack.  However, it
would be even greater if this same functionality could be leveraged
outside of openstack/cinder.  So that other projects don't need to
duplicate the same functionality when trying to talk to hardware.


When looking at cinder and asking around [1] about how one could
potentially do this, I found that there is quite a bit of coupling
with openstack, like:

* The NFS driver is initialized with knowledge about whether any volumes
exist in the database or not, and if not, can trigger certain behavior
to set permissions, etc.  This means that something other than the
cinder-volume service needs to mimic the right behavior if using this
driver.

* The LVM driver touches the database when creating a backup of a volume
(many drivers do), and when managing a volume (importing an existing
external LV to use as a Cinder volume).

* A few drivers (GPFS, others?) touch the db when managing consistency
groups.

* EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
snapshots.


Am I the only one that thinks this would be useful?  What ideas do
people have for making the cinder drivers stand alone, so that everyone
could benefit from this great body of work?
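
One possible direction, purely as a sketch: let drivers take an
injected interface for the handful of DB lookups they need, so that a
standalone consumer can supply its own implementation. All names below
are invented for illustration:

    class VolumeDBAPI(object):
        """Hypothetical interface covering the DB calls drivers make."""
        def snapshot_get(self, context, snapshot_id):
            raise NotImplementedError

        def volume_get_all(self, context):
            raise NotImplementedError

    class StandaloneDB(VolumeDBAPI):
        """A non-OpenStack consumer could back this with anything."""
        def snapshot_get(self, context, snapshot_id):
            return {'id': snapshot_id}  # placeholder lookup

        def volume_get_all(self, context):
            return []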

Thanks,
Tony

[1] Special thanks to Eric Harney for the examples of coupling

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstackclient] add lin hua cheng to osc-core

2015-09-08 Thread Steve Martinelli


Hey everyone,

I would like to nominate Lin Hua Cheng to the OpenStackClient core team.

Lin continues to be an outstanding OpenStack contributor, as noted by his
core status in both Keystone and Horizon. He has somehow found time to also
contribute to OpenStackClient and provide meaningful and high quality
reviews, as well as several timely bug fixes. He knows the code base inside
and out, and his UX background from horizon has been a great asset.

If no one disagrees with this by end of day on Friday, I'll give Lin his
new awesome core power that evening.

Thanks,

Steve Martinelli
OpenStack Keystone Core
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Sean Dague
On 09/08/2015 03:32 PM, Doug Hellmann wrote:
> Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
>> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
>>> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
 On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>
> I'd like to come up with some way to express the time other than
> N+M because in the middle of a cycle it can be confusing to know
> what that means (if I want to deprecate something in August am I
> far enough through the current cycle that it doesn't count?).
>
> Also, as we start moving more projects to doing intermediate releases
> the notion of a "release" vs. a "cycle" will drift apart, so we
> want to talk about "stable releases" not just any old release.
>

 I've always thought the appropriate equivalent for projects not following
 the (old) integrated release cadence was for N == six months.  It sets
 approx. the same pace and expectation with users/deployers.

 For those deployments tracking trunk, a similar approach can be taken, in
 that deprecating a config option in M3 then removing it in N1 might be too
 quick, but rather wait at least the same point in the following release
 cycle to increment 'N'.

 dt

>>>
>>> Making it explicitly date-based would simplify tracking, to be sure.
>>
>> I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
>> weeks (which I've seen happen). However, N == six months might make an FFE
>> deprecation landed in one release run into the FFE in the next. For the CD
>> case my suggestion is > 3 months. Because if you aren't CDing in
>> increments smaller than that, and hence seeing the deprecation, you
>> aren't really doing the C part of CDing.
>>
>> -Sean
>>
> 
> Do those 3 months need to span more than one stable release? For
> projects doing intermediary releases, there may be several releases
> within a 3 month period.

Yes. 1 stable release branch AND 3 months linear time is what I'd
consider reasonable.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] New BP required for adding new json file to Glance metadefs?

2015-09-08 Thread Ramakrishna, Deepti
Thanks Nikhil for your response.

Given that the spec deadline for Liberty is over, I assume I would be 
submitting my spec targeting Mitaka. I noticed there is no Mitaka specs 
folder yet, so I created one.

Can you please help review it -- https://review.openstack.org/#/c/218098/ ? I 
can then upload my spec to it.

Thanks,
Deepti

From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Tuesday, September 08, 2015 1:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Ramakrishna, Deepti
Subject: Re: [openstack-dev] [Glance] New BP required for adding new json file 
to Glance metadefs?

Please create a standard glance spec [0] and a corresponding BP for 
release management. The discussion can be expected on the spec. You can 
request that it be discussed during the weekly glance drivers meeting [1].

[0] https://github.com/openstack/glance-specs
[1] http://eavesdrop.openstack.org/#Glance_Drivers_Meeting
On 9/8/15 4:02 PM, Ramakrishna, Deepti wrote:
Hi all,


I have a question. Hoping one of the Glance cores or spec-cores could answer.

As part of some encryption-related work, we would like to propose an 
additional data-security.json file for the Glance metadefs, which can be 
used to set encryption requirements for an image. Do we have to propose a 
BP for this, or can we just submit it as part of a patch with our reasons 
for the change?
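
For illustration, the file could follow the existing metadef layout.
Everything here (the namespace and property names) is a placeholder for
discussion, not a settled proposal:

    {
        "namespace": "OS::Compute::DataSecurity",
        "display_name": "Data Security",
        "description": "Encryption requirements for an image.",
        "resource_type_associations": [
            {"name": "OS::Glance::Image"}
        ],
        "properties": {
            "encryption_required": {
                "title": "Encryption required",
                "description": "Whether the image data must be encrypted.",
                "type": "boolean"
            }
        }
    }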



Please let me know.

Thanks,
Deepti

[P.S: I didn’t get any response on IRC and hence this email.]




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--



Thanks,

Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-08 Thread Vladimir Kozhukalov
Dear colleagues,

Currently, we install fuel-libraryX.Y package(s) on the master node, and
then right before starting the actual deployment we rsync [1] puppet
modules (one of the installed versions) from the master node to the slave
nodes. Such a flow makes things much more complicated than they would be
if we installed the puppet modules on the slave nodes as rpm/deb packages.
Deployment itself is parameterized by repo URLs (upstream + MOS), and this
pre-deployment task could be nothing more than installing the fuel-library
package from the MOS repo defined for a cluster. We would not have several
versions of fuel-library on the master node, and we would not need the
complicated upgrade machinery we currently have for puppet modules.
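
The pre-deployment task would then reduce to something like the
following on each slave node (package name and version are placeholders;
the repo would come from the cluster's settings):

    # on a Ubuntu-based slave
    apt-get install -y fuel-library7.0

    # or on a CentOS-based slave
    yum install -y fuel-library7.0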

Please give your opinions on this.


[1]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218

Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2015-09-08 Thread Clint Byrum
Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> AFAIK, the cloud-init metadata service can currently be accessed only by 
> sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> currently implemented. Is anyone working on this, or has anyone tried to 
> address this before?
> 

I'm not sure we'd want to carry the way metadata works forward now that
we have had some time to think about this.

We already have DHCP6 and NDP. Just use one of those, and set the host's
name to a nonce that it can use to lookup the endpoint for instance
differentiation via DNS SRV records. So if you were told you are

d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com

Then you look that up as a SRV record on your configured DNS resolver,
and connect to the host name returned and do something like  GET
/d02a684d-56ea-44bc-9eba-18d997b1d32d

And voilà, metadata returns without any special link-local thing, and
it works like any other dual-stack application on the planet.
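
To sketch the client side (the SRV record name below is an assumption;
nothing is standardized here), an instance could do something like:

    import dns.resolver  # dnspython
    import requests

    fqdn = 'd02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com'
    nonce, zone = fqdn.split('.', 1)

    # find the metadata endpoint via an SRV record in the instance's zone
    srv = dns.resolver.query('_metadata._tcp.' + zone, 'SRV')[0]
    url = 'http://%s:%d/%s' % (srv.target.to_text(True), srv.port, nonce)

    print(requests.get(url).text)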

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for fuel-docs core

2015-09-08 Thread Dmitry Borodaenko
It's been 6 days and there's an obvious consensus in favor of adding
Evgeniy to fuel-docs core reviewers. I've added Evgeniy to the
fuel-docs-core group:
https://review.openstack.org/#/admin/groups/657,members

Thanks for your contribution so far and please keep up the good work!

On Tue, Sep 8, 2015 at 7:41 AM, Alexander Adamov  wrote:
> +1
>
> On Thu, Sep 3, 2015 at 11:41 PM, Dmitry Pyzhov  wrote:
>>
>> +1
>>
>> On Thu, Sep 3, 2015 at 10:14 PM, Sergey Vasilenko
>>  wrote:
>>>
>>> +1
>>>
>>>
>>> /sv
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] New BP required for adding new json file to Glance metadefs?

2015-09-08 Thread Nikhil Komawar
Please create a standard glance-spec [0] and a corresponding BP for
release management. The discussion can then happen on the spec. You can
request it to be discussed during the weekly glance drivers meeting [1].

[0] https://github.com/openstack/glance-specs
[1] http://eavesdrop.openstack.org/#Glance_Drivers_Meeting

On 9/8/15 4:02 PM, Ramakrishna, Deepti wrote:
>
> Hi all,
>
>  
>
> I have a question. Hoping one of the Glance cores or spec-cores could
> answer.
>
> As part of adding some encryption related work, we would like to
> propose an additional data-security.json file to Glance metadefs which
> can be used to set encryption requirements for an image. Should we
> have to propose a BP for this or just submit as part of a patch with
> our reasons for doing the same?
>
>  
>
> Please let me know.
>
>  
>
> Thanks,
>
> Deepti
>
>  
>
> [P.S: I didn’t get any response on IRC and hence this email.]
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting Started : OpenStack

2015-09-08 Thread michael mccune

On 09/08/2015 02:05 PM, Bhagyashree Uday wrote:

Hi Victoria ,

Thanks for the prompt reply. I go by Bee (IRC nick: bee2502). There
doesn't seem to be much information regarding this project even on the
Ceilometer project page :( I will wait till the next Outreachy
applications begin though to check out any new developments. Thanks for
suggesting the IRC channel :) Btw, do you happen to know any other open
data analysis projects in OpenStack ?

Bee


hi Bee,

you may also be interested in the sahara project, the data processing 
service for openstack [1].


i am a developer with the project, and although we don't deal 
specifically in the analysis of data, we are addressing the issues of 
deploying popular data processing frameworks (Hadoop, Spark, Storm) into 
openstack.


if this sounds interesting, please stop by our channel, 
#openstack-sahara and chat us up. we are always looking for more people 
interested in contributing =)


regards,
mike

(elmiko on irc)

[1]: http://docs.openstack.org/developer/sahara/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] New BP required for adding new json file to Glance metadefs?

2015-09-08 Thread Ramakrishna, Deepti
Hi all,


I have a question. Hoping one of the Glance cores or spec-cores could answer.

As part of adding some encryption related work, we would like to propose an 
additional data-security.json file to Glance metadefs which can be used to set 
encryption requirements for an image. Do we need to propose a BP for this, 
or can we just submit it as part of a patch with our reasons for doing the same?



Please let me know.

Thanks,
Deepti

[P.S: I didn't get any response on IRC and hence this email.]
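
For concreteness, a metadef namespace file of this kind would look roughly
like the sketch below. The namespace name and properties are illustrative
only, since the real contents are exactly what a spec would pin down:

    # Python-dict rendering of an illustrative data-security.json metadef.
    namespace = {
        "namespace": "OS::Compute::DataSecurity",
        "display_name": "Data Security",
        "description": "Encryption requirements for an image.",
        "resource_type_associations": [{"name": "OS::Glance::Image"}],
        "properties": {
            "encryption_required": {
                "type": "boolean",
                "description": "Whether the image must be encrypted at rest.",
            },
        },
    }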
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] port delete allowed on VM

2015-09-08 Thread Ajay Kalambur (akalambu)
Hi
Today, when we create a VM on a port and then delete that port, I don’t get a 
message saying "Port in Use".

Is this expected behavior in neutron, or is there a plan to fix it? If there 
is a plan, is there a bug tracking this?
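
For context, the kind of guard being asked for would presumably key on the
port's device_owner/device_id. A sketch of the check with python-neutronclient
(credentials are placeholders; the refusal itself is what is missing today):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    port_id = 'PORT-UUID'
    port = neutron.show_port(port_id)['port']
    if port['device_owner'].startswith('compute:'):
        # the "Port in Use" refusal described above
        raise RuntimeError('port %s is attached to instance %s'
                           % (port_id, port['device_id']))
    neutron.delete_port(port_id)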

Ajay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [aodh][ceilometer] (re)introducing Aodh - OpenStack Alarming

2015-09-08 Thread gord chung

hi all,

as you may have heard, in an effort to simplify OpenStack Telemetry 
(Ceilometer) and streamline its code, the alarming functionality 
provided by OpenStack Telemetry has been moved to its own 
repository[1]. The new project is called Aodh[2]. the idea is that Aodh 
will grow as its own entity, with its own distinct core team, under 
the Telemetry umbrella. this way, we will have a focused team 
specifically for the alarming aspects of Telemetry. as always, feedback 
and contributions are welcomed[3].


in the coming days, we will release a migration/changes document to 
explain the differences between the original alarming code and Aodh. every 
effort was made to maintain configuration compatibility, such that it 
should be possible to take the existing configuration and reuse it for an 
Aodh deployment.


some quick notes:
- the existing alarming code will remain consumable for Liberty release 
(but in deprecated state)
- all new functionality (ie. inline/streaming alarm evaluations) will be 
added only to Aodh
- client and api support has been added to common Ceilometer interfaces 
such that if Aodh is enabled, the client can still be used and will 
redirect to Aodh.

- mailing list items can be tagged with [aodh]
- irc discussions will remain under #openstack-ceilometer

many thanks for all those who worked on the code split and integration 
testing.


[1] https://github.com/openstack/aodh
[2] http://www.behindthename.com/name/aodh
[3] https://launchpad.net/aodh

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci-passtrough and neutron multi segment networks

2015-09-08 Thread Robert Li (baoli)
As far as I know, it was discussed but is not supported yet. It requires changes 
in nova and support in the neutron plugins.

—Robert

On 9/8/15, 9:39 AM, "Vladyslav Gridin" 
mailto:vladyslav.gri...@nuagenetworks.net>> 
wrote:

Hi All,

Is there a way to successfully deploy a vm with sriov nic
on both single segment vlan network, and multi provider network,
containing vlan segment?
When nova builds pci request for nic it looks for 'physical_network'
at network level, but for multi provider networks this is set within a segment.

e.g.
RESP BODY: {"network": {"status": "ACTIVE", "subnets": 
["3862051f-de55-4bb9-8c88-acd675bb3702"], "name": "sriov", "admin_state_up": 
true, "router:external": false, "segments": [{"provider:segmentation_id": 77, 
"provider:physical_network": "physnet1", "provider:network_type": "vlan"}, 
{"provider:segmentation_id": 35, "provider:physical_network": null, 
"provider:network_type": "vxlan"}], "mtu": 0, "tenant_id": 
"bd3afb5fac0745faa34713e6cada5a8d", "shared": false, "id": 
"53c0e71e-4c9a-4a33-b1a0-69529583e05f"}}


So, if the pci_passthrough_whitelist on my compute node contains physical_network,
deployment will fail on the multi-segment network, and vice versa.
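
The kind of lookup nova would need is roughly the following (an illustrative
sketch, not the actual nova code): fall back to the per-segment attribute when
the network-level physical_network is absent.

    def get_physical_network(network):
        # single-segment networks carry the attribute at the top level
        phynet = network.get('provider:physical_network')
        if phynet:
            return phynet
        # multi-provider networks carry it per segment
        for segment in network.get('segments', []):
            if segment.get('provider:physical_network'):
                return segment['provider:physical_network']
        return None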

Thanks,
Vlad.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] python-selenium landed in Debian main today (in Debian Experimental for the moment)

2015-09-08 Thread Thomas Goirand
Hi,

I'm very happy to write this message! :)

After the non-free files were removed from the package (after I asked
for it through the Debian bug https://bugs.debian.org/770232), Selenium
was uploaded and reached Debian Experimental in main today (ie: Selenium
is not in non-free section of Debian anymore). \o/

Now, I wonder: can the Horizon team use python-selenium as uploaded to
Debian experimental today? Can we run the Selenium unit tests, even
without the browser plugins? It is my understanding that it's possible,
if we use something like PhantomJS, which is also available in Debian.
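
For what it's worth, a minimal headless run looks like the sketch below
(assuming PhantomJS is on the PATH, and assuming Horizon's test setup can be
pointed at a different webdriver, which is exactly the open question here):

    from selenium import webdriver

    driver = webdriver.PhantomJS()  # no browser plugin required
    driver.get('http://localhost/horizon/auth/login/')
    assert 'Login' in driver.title
    driver.quit()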

So, Horizon guys, could you please have a look, and let me know if I may
run Selenium tests with what's in Debian now? Does it require some
modification to how we run tests in Horizon currently?

Running Selenium unit tests at package build time would definitely improve
the Horizon package's quality assurance a great deal, so I would love to
run these tests. Please help me do so! :)

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
> > Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
> >> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
> >>>
> >>> I'd like to come up with some way to express the time other than
> >>> N+M because in the middle of a cycle it can be confusing to know
> >>> what that means (if I want to deprecate something in August am I
> >>> far enough through the current cycle that it doesn't count?).
> >>>
> >>> Also, as we start moving more projects to doing intermediate releases
> >>> the notion of a "release" vs. a "cycle" will drift apart, so we
> >>> want to talk about "stable releases" not just any old release.
> >>>
> >>
> >> I've always thought the appropriate equivalent for projects not following
> >> the (old) integrated release cadence was for N == six months.  It sets
> >> approx. the same pace and expectation with users/deployers.
> >>
> >> For those deployments tracking trunk, a similar approach can be taken, in
> >> that deprecating a config option in M3 then removing it in N1 might be too
> >> quick, but rather wait at least the same point in the following release
> >> cycle to increment 'N'.
> >>
> >> dt
> >>
> > 
> > Making it explicitly date-based would simplify tracking, to be sure.
> 
> I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
> weeks (which I've seen happen). However, N == six months might make a
> deprecation that lands at FFE in one release run into FFE in the next.
> For the CD case my suggestion is > 3 months, because if you aren't CDing
> in increments smaller than that, and hence seeing the deprecation, you
> aren't really doing the C part of CDing.
> 
> -Sean
> 

Do those 3 months need to span more than one stable release? For
projects doing intermediary releases, there may be several releases
within a 3 month period.

Doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-08 Thread Carl Baldwin
On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie  wrote:
> Hello All,
>
> I have searched and found many past efforts to implement port forwarding in
> Neutron.

I have heard a desire for this use case expressed a few times in
the past without it gaining much traction.  Your summary here seems to
show that this continues to come up.  I would be interested in seeing
this move forward.

> I have found two incomplete blueprints [1], [2] and an abandoned patch [3].
>
> There is even a project in Stackforge [4], [5] that claims
> to implement this, but the L3 parts in it seem older than current master.

I looked at this stack forge project.  It looks like files copied out
of neutron and modified as an alternative to proposing a patch set to
neutron.

> I have recently come across this requirement for various use cases, one of
> them is
> providing feature compliance with Docker port-mapping feature (for Kuryr),
> and saving floating
> IP address space.

I think both of these could be compelling use cases.

> There have been many discussions in the past that called for this feature, so I
> assume
> there is demand to make this formal; just a few examples: [6], [7], [8],
> [9]
>
> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on the
> external router
> leg from the public network to internal ports, so user can use one Floating
> IP (the external
> gateway router interface IP) and reach different internal ports depending on
> the port numbers.
> This should happen on the network node (and can also be leveraged for
> security reasons).

I'm sure someone will ask how this works with DVR.  It should be
implemented so that it works with a DVR router but it will be
implemented in the central part of the router.  Ideally, DVR and
legacy routers work the same in this regard and a single bit of code
will implement it for both.  If this isn't the case, I think that is a
problem with our current code structure.

> I think that the POC implementation in the Stackforge project shows that
> this needs to be
> implemented inside the L3 parts of the current reference implementation, it
> will be hard
> to maintain something like that in an external repository.
> (I also think that the API/DB extensions should be close to the current L3
> reference
> implementation)

Agreed.

> I would like to renew the efforts on this feature and propose a RFE and a
> spec for this to the
> next release, any comments/ideas/thoughts are welcome.
> And of course if any of the people interested or any of the people that
> worked on this before
> want to join the effort, you are more than welcome to join and comment.

I have added this to the agenda for the Neutron drivers meeting.  When
the team starts to turn its eye toward Mitaka, we'll discuss it.
Hopefully that will be soon, as I've started to think about it already.

I'd like to see how the API for this will look.  I don't think we'll
need more detail than that for now.
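
Purely as a strawman for that discussion, the resource might look something
like this (the field names are illustrative, not a settled proposal):

    portforwarding = {
        'portforwarding': {
            'floatingip_id': 'FIP-UUID',
            'protocol': 'tcp',
            'external_port': 8022,
            'internal_ip_address': '10.0.0.5',
            'internal_port': 22,
        }
    }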

Carl

> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> [3] https://review.openstack.org/#/c/60512/
> [4] https://github.com/stackforge/networking-portforwarding
> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>
> [6]
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> [8]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> [9]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Horizon] [Sahara] FFE request for Sahara unified job interface map UI

2015-09-08 Thread Trevor McKay
+1 from me as well.  It would be a shame to see this go to the next
cycle.

On Fri, 2015-09-04 at 10:40 -0400, Ethan Gafford wrote:
> Hello all,
> 
> I request a FFE for the change at: https://review.openstack.org/#/c/209683/
> 
> This change enables a significant improvement to UX in Sahara's elastic data 
> processing flow which is already in the server and client layers of Sahara. 
> Because it specifically aims at improving ease of use and comprehensibility, 
> Horizon integration is critical to the success of the feature. The change 
> itself is reasonably modular and thus low-risk; it will have no impact 
> outside Sahara's job template creation and launch flow, and (barring 
> unforeseen issues) no impact on users of the existing flow who choose not to 
> use this feature.
> 
> Thank you,
> Ethan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Sean Dague
On 09/08/2015 02:24 PM, Jeremy Stanley wrote:
> On 2015-09-08 13:32:58 -0400 (-0400), Ben Swartzlander wrote:
> [...]
>> It makes sense for the community to define LTS releases and coordinate
>> making sure all the relevant projects are mutually compatible at that
>> release point.
> [...]
> 
> This seems premature. The most recent stable branch to reach EOL
> (icehouse) made it just past 14 months before we had to give up
> because not enough effort was being expended to keep it working and
> testable. As a community we've so far struggled to maintain stable
> branches as much as one year past release. While there are some
> exciting improvements on the way to our requirements standardization
> which could prove to help extend this, I really want to see us
> demonstrate that we can maintain a release longer before we make
> such a long-term commitment to downstream consumers.

And, the LTS question is separate from the feature deprecation question.
They are both pro-consumer behaviors that have costs for the development
teams, but they are different things.

We rarely get resolution on one thing by entwining a different thing in
the same question.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to Fuel-Library Core

2015-09-08 Thread Tomasz Napierala
> On 02 Sep 2015, at 01:31, Sergii Golovatiuk  wrote:
> 
> Hi,
> 
> I would like to nominate Alex Schultz to the Fuel-Library Core team. He’s been 
> doing a great job writing patches, and at the same time his reviews are solid, 
> with comments for further improvements. He’s the #3 reviewer and #1 contributor, 
> with 46 commits over the last 90 days [1]. Additionally, Alex has been very active 
> in IRC, providing great ideas. His ‘librarian’ blueprint [3] was a big step 
> toward the puppet community.
> 
> Fuel Library, please vote with +1/-1 for approval/objection. Voting will be 
> open until September 9th. This will go forward after voting is closed if 
> there are no objections.  
> 
> Overall contribution:
> [0] http://stackalytics.com/?user_id=alex-schultz
> Fuel library contribution for last 90 days:
> [1] http://stackalytics.com/report/contribution/fuel-library/90
> List of reviews:
> [2] 
> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> ‘Librarian activities’ in mailing list: 
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html


Definitely well deserved for Alex. Outstanding technical work and really good 
community skills! My strong +1

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Jeremy Stanley
On 2015-09-08 13:32:58 -0400 (-0400), Ben Swartzlander wrote:
[...]
> It makes sense for the community to define LTS releases and coordinate
> making sure all the relevant projects are mutually compatible at that
> release point.
[...]

This seems premature. The most recent stable branch to reach EOL
(icehouse) made it just past 14 months before we had to give up
because not enough effort was being expended to keep it working and
testable. As a community we've so far struggled to maintain stable
branches as much as one year past release. While there are some
exciting improvements on the way to our requirements standardization
which could prove to help extend this, I really want to see us
demonstrate that we can maintain a release longer before we make
such a long-term commitment to downstream consumers.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Sean Dague
On 09/08/2015 01:07 PM, Doug Hellmann wrote:
> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
>> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>>
>>> I'd like to come up with some way to express the time other than
>>> N+M because in the middle of a cycle it can be confusing to know
>>> what that means (if I want to deprecate something in August am I
>>> far enough through the current cycle that it doesn't count?).
>>>
>>> Also, as we start moving more projects to doing intermediate releases
>>> the notion of a "release" vs. a "cycle" will drift apart, so we
>>> want to talk about "stable releases" not just any old release.
>>>
>>
>> I've always thought the appropriate equivalent for projects not following
>> the (old) integrated release cadence was for N == six months.  It sets
>> approx. the same pace and expectation with users/deployers.
>>
>> For those deployments tracking trunk, a similar approach can be taken, in
>> that deprecating a config option in M3 then removing it in N1 might be too
>> quick, but rather wait at least the same point in the following release
>> cycle to increment 'N'.
>>
>> dt
>>
> 
> Making it explicitly date-based would simplify tracking, to be sure.

I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
weeks (which I've seen happen). However, N == six months might make a
deprecation that lands at FFE in one release run into FFE in the next.
For the CD case my suggestion is > 3 months, because if you aren't CDing
in increments smaller than that, and hence seeing the deprecation, you
aren't really doing the C part of CDing.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting Started : OpenStack

2015-09-08 Thread Bhagyashree Uday
Hi Victoria ,

Thanks for the prompt reply. I go by Bee (IRC nick: bee2502). There
doesn't seem to be much information regarding this project even on the
Ceilometer project page :( I will wait till the next Outreachy applications
begin though to check out any new developments. Thanks for suggesting the
IRC channel :) Btw, do you happen to know any other open data analysis
projects in OpenStack ?

Bee

On Tue, Sep 8, 2015 at 12:59 AM, Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> Hi Bhagyashree,
>
> Welcome!
>
> That project seems to belong to Ceilometer, but I'm not sure about that.
> Ceilometer is the code name for OpenStack telemetry, if you are interested
> about it a good place to start is
> https://wiki.openstack.org/wiki/Ceilometer.
>
> Those internships ideas are from previous Outreachy/Google Summer of Code
> rounds. Outreachy applications will open next September 22nd so there is
> no much information about next round mentors/projects yet.
>
> Call for mentors is going to be launched soon, so keep track of that wiki
> for updates. Feel free to pass by #openstack-opw as well and we can help
> you set your development environment.
>
> Cheers,
>
> Victoria
>
> 2015-09-07 14:59 GMT-03:00 Bhagyashree Uday :
>
>> Hi ,
>>
>> I am Bhagyashree from India (IRC nick: bee2502). I have previous
>> experience in data analytics including Machine Learning, NLP, IR, and User
>> Experience Research. I am interested in contributing to OpenStack on
>> projects involving data analysis. Also, if these projects could be a
>> part of Outreachy, it would be an added bonus. I went through project ideas
>> listed on https://wiki.openstack.org/wiki/Internship_ideas and one of
>> these projects interested me a lot -
>> Understand OpenStack Operations via Insights from Logs and Metrics: A
>> Data Science Perspective
>> However, this project does not have a mentor listed, and I was hoping
>> you could provide me with some individual contact from OpenStack community
>> who would be interested in mentoring this project or some mailing
>> list/thread/IRC community where I could look for a mentor. Other open data
>> science projects/idea suggestions are also welcome.
>>
>> Regards,
>> Bhagyashree
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Doug Hellmann
Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:
> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> > Hi everyone,
> >
> > A feature deprecation policy is a standard way to communicate and
> > perform the removal of user-visible behaviors and capabilities. It helps
> > setting user expectations on how much and how long they can rely on a
> > feature being present. It gives them reassurance over the timeframe they
> > have to adapt in such cases.
> >
> > In OpenStack we always had a feature deprecation policy that would apply
> > to "integrated projects", however it was never written down. It was
> > something like "to remove a feature, you mark it deprecated for n
> > releases, then you can remove it".
> >
> > We don't have an "integrated release" anymore, but having a base
> > deprecation policy, and knowing which projects are mature enough to
> > follow it, is a great piece of information to communicate to our users.
> >
> > That's why the next-tags workgroup at the Technical Committee has been
> > working to propose such a base policy as a 'tag' that project teams can
> > opt to apply to their projects when they agree to apply it to one of
> > their deliverables:
> >
> > https://review.openstack.org/#/c/207467/
> >
> > Before going through the last stage of this, we want to survey existing
> > projects to see which deprecation policy they currently follow, and
> > verify that our proposed base deprecation policy makes sense. The goal
> > is not to dictate something new from the top, it's to reflect what's
> > generally already applied on the field.
> >
> > In particular, the current proposal says:
> >
> > "At the very minimum the feature [...] should be marked deprecated (and
> > still be supported) in the next two coordinated end-of-cycle releases.
> > For example, a feature deprecated during the M development cycle should
> > still appear in the M and N releases and cannot be removed before the
> > beginning of the O development cycle."
> >
> > That would be a n+2 deprecation policy. Some suggested that this is too
> > far-reaching, and that a n+1 deprecation policy (feature deprecated
> > during the M development cycle can't be removed before the start of the
> > N cycle) would better reflect what's being currently done. Or that
> > config options (which are user-visible things) should have n+1 as long
> > as the underlying feature (or behavior) is not removed.
> >
> > Please let us know what makes the most sense. In particular between the
> > 3 options (but feel free to suggest something else):
> >
> > 1. n+2 overall
> > 2. n+2 for features and capabilities, n+1 for config options
> > 3. n+1 overall
> 
> I think any discussion of a deprecation policy needs to be combined with 
> a discussion about LTS (long term support) releases. Real customers (not 
> devops users -- people who pay money for support) can't deal with 
> upgrades every 6 months.
> 
> Unavoidably, distros are going to want to support certain releases for 
> longer than the normal upstream support window so they can satisfy the 
> needs of the aforementioned customers. This will be true whether the 
> deprecation policy is N+1, N+2, or N+3.
> 
> It makes sense for the community to define LTS releases and coordinate 
> making sure all the relevant projects are mutually compatible at that 
> release point. Then the job of actually maintaining the LTS release can 
> fall on people who care about such things. The major benefit to solving 
> the LTS problem, though, is that deprecation will get a lot less painful 
> because you could assume upgrades to be one release at a time or 
> skipping directly from one LTS to the next, and you can reduce your 
> upgrade test matrix accordingly.

How is this fundamentally different from what we do now with stable
releases, aside from involving a longer period of time?

Doug

> 
> -Ben Swartzlander
> 
> > Thanks in advance for your input.
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GSLB

2015-09-08 Thread Hayes, Graham

On 08/09/15 18:07, Anik wrote:
> Hi Graham,
> 
> Thanks for getting back.
> 
> So am I correct in summarizing that the current plan is that [1]
> GSLB will have its own API set and not evolve as an extension of
> Designate and [2] GSLB will mostly act as a policy definition
> engine (+ health checks ?) with Designate backend providing the
> actual DNS resolution ?

[1] - Yes. Designate decided a while ago that this was not in scope for
  the project

[2] - Yes, the Kosmos API will be a place to define the endpoints you
  are balancing across, and what checks should be run on them to
  decide on their status.

  There will be built-in checks like TCP / HTTP(S). There will also
  be plugin checks - for example, with a Neutron LBaaS Load
  Balancer, we can query its status API.

  All of this will result in DNS entries in Designate being updated
  (or another Global Load Balancing Plugin)
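
  A toy sketch of what a pluggable check could look like (the class and
  method names are illustrative; the real interface is still being
  designed):

      import socket

      class TCPCheck(object):
          """Mark an endpoint up if its TCP port accepts a connection."""
          def __init__(self, timeout=3):
              self.timeout = timeout

          def run(self, address, port):
              try:
                  socket.create_connection((address, port),
                                           self.timeout).close()
                  return True
              except socket.error:
                  return False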

> Regards, Anik





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] What is the no_device flag for in block device mapping?

2015-09-08 Thread Murray, Paul (HP Cloud)
Hi All,

I'm wondering what the "no_device" flag is used for in the block device 
mappings. I had a dig around in the code but couldn't figure out why it is 
there. The name suggests an obvious meaning, but I've learnt not to guess too 
much from names.
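
For reference, the flag appears in BDM dicts roughly like the entry below
(structure only; its semantics are exactly what I'm asking about rather
than guessing at):

    # illustrative BDM entry, semantics deliberately unstated
    bdm = {
        'device_name': '/dev/sdb',
        'no_device': True,
    }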

Any pointers welcome.

Thanks
Paul

Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited | Registered Office: Cain Road, Bracknell, Berkshire, 
RG12 1HN | Registered No: 690597 England | VAT Number: GB 314 1496 79

This e-mail may contain confidential and/or legally privileged material for the 
sole use of the intended recipient.  If you are not the intended recipient (or 
authorized to receive for the recipient) please contact the sender by reply 
e-mail and delete all copies of this message.  If you are receiving this 
message internally within the Hewlett Packard group of companies, you should 
consider the contents "CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Ben Swartzlander

On 09/03/2015 08:22 AM, Thierry Carrez wrote:

Hi everyone,

A feature deprecation policy is a standard way to communicate and
perform the removal of user-visible behaviors and capabilities. It helps
setting user expectations on how much and how long they can rely on a
feature being present. It gives them reassurance over the timeframe they
have to adapt in such cases.

In OpenStack we always had a feature deprecation policy that would apply
to "integrated projects", however it was never written down. It was
something like "to remove a feature, you mark it deprecated for n
releases, then you can remove it".

We don't have an "integrated release" anymore, but having a base
deprecation policy, and knowing which projects are mature enough to
follow it, is a great piece of information to communicate to our users.

That's why the next-tags workgroup at the Technical Committee has been
working to propose such a base policy as a 'tag' that project teams can
opt to apply to their projects when they agree to apply it to one of
their deliverables:

https://review.openstack.org/#/c/207467/

Before going through the last stage of this, we want to survey existing
projects to see which deprecation policy they currently follow, and
verify that our proposed base deprecation policy makes sense. The goal
is not to dictate something new from the top, it's to reflect what's
generally already applied on the field.

In particular, the current proposal says:

"At the very minimum the feature [...] should be marked deprecated (and
still be supported) in the next two coordinated end-of-cycle releases.
For example, a feature deprecated during the M development cycle should
still appear in the M and N releases and cannot be removed before the
beginning of the O development cycle."

That would be a n+2 deprecation policy. Some suggested that this is too
far-reaching, and that a n+1 deprecation policy (feature deprecated
during the M development cycle can't be removed before the start of the
N cycle) would better reflect what's being currently done. Or that
config options (which are user-visible things) should have n+1 as long
as the underlying feature (or behavior) is not removed.

Please let us know what makes the most sense. In particular between the
3 options (but feel free to suggest something else):

1. n+2 overall
2. n+2 for features and capabilities, n+1 for config options
3. n+1 overall


I think any discussion of a deprecation policy needs to be combined with 
a discussion about LTS (long term support) releases. Real customers (not 
devops users -- people who pay money for support) can't deal with 
upgrades every 6 months.


Unavoidably, distros are going to want to support certain releases for 
longer than the normal upstream support window so they can satisfy the 
needs of the aforementioned customers. This will be true whether the 
deprecation policy is N+1, N+2, or N+3.


It makes sense for the community to define LTS releases and coordinate 
making sure all the relevant projects are mutually compatible at that 
release point. Then the job of actually maintaining the LTS release can 
fall on people who care about such things. The major benefit to solving 
the LTS problem, though, is that deprecation will get a lot less painful 
because you could assume upgrades to be one release at a time or 
skipping directly from one LTS to the next, and you can reduce your 
upgrade test matrix accordingly.


-Ben Swartzlander



Thanks in advance for your input.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Meters

2015-09-08 Thread Srikanth Vavilapalli
Hi

The vcpus, memory, and disk usage related nova measurements are of the 
"notification" type, so please ensure you have the following configuration 
settings in your nova.conf file on your compute node, and restart your 
nova-compute service if you made any changes to that file.

instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver = messagingv2
notification_topics = notifications
notify_on_any_change = True
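
Once notifications start flowing, you can verify that the meters appear, for
example with python-ceilometerclient (a sketch; the credentials and endpoint
are placeholder values):

    from ceilometerclient import client

    cc = client.get_client('2', os_username='admin', os_password='secret',
                           os_tenant_name='admin',
                           os_auth_url='http://controller:5000/v2.0')
    print([m.name for m in cc.meters.list()
           if m.name in ('memory.usage', 'disk.usage')])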

Thanks
Srikanth



From: Abhishek Talwar [mailto:abhishek.tal...@tcs.com]
Sent: Monday, September 07, 2015 11:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ceilometer] Meters

Hi Folks,


I have installed a Kilo devstack setup and I am trying to get the 
memory and disk usage for my VMs. But on checking the "ceilometer meter-list" 
output, I can't find memory.usage or disk.usage meters.

I have searched a lot for this and still couldn't find a solution. So how do I 
enable these meters in the meter-list?

I want all these meters in the ceilometer meter-list so that I can use them to 
monitor my instances.
Currently the output of ceilometer meter-list is as follows:
+--------------------------+------------+------------------------------------------+----------------------------------+----------------------------------+
| Name                     | Type       | Resource ID                              | User ID                          | Project ID                       |
+--------------------------+------------+------------------------------------------+----------------------------------+----------------------------------+
| cpu                      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| cpu_util                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.bytes          | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.requests       | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.bytes         | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.requests      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.download           | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.serve              | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.update             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.upload             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance:m1.small        | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.bytes   | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.packets | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.outgoing.bytes   | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 |

Re: [openstack-dev] [nova] [neutron] [rally] Neutron or nova degradation?

2015-09-08 Thread Carl Baldwin
This sounds like a good candidate for a "git bisect" operation [1]
since we already have a pretty tight window where things changed.

Carl

[1] http://git-scm.com/docs/git-bisect

On Thu, Sep 3, 2015 at 7:07 AM, Assaf Muller  wrote:
>
>
> On Thu, Sep 3, 2015 at 8:43 AM, Andrey Pavlov  wrote:
>>
>> Hello,
>>
>> We have rally job with fake virt driver. And we run it periodically.
>> This job runs 200 servers and measures 'show' operations.
>>
>> On 18.08 it ran well [1]. But on 21.08 it failed with a timeout [2].
>> I tried to understand what happened.
>> I tried to check this job with only 20 servers [3]. It passed, but I see
>> that
>> operations with neutron take more time now (list subnets, list network
>> interfaces),
>> and as a result starting and showing instances also takes more time.
>>
>> Does anyone know what happened?
>
>
> Looking at the merged Neutron patches between the 18th and 21st, there's a
> lot of
> candidates, including QoS and work around quotas.
>
> I think the best way to find out would be to run a profiler against Neutron
> from the 18th,
> and Neutron from the 21st while running the Rally tests, and finding out if
> the major
> bottlenecks moved. Last time I profiled Neutron I used GreenletProfiler:
> https://pypi.python.org/pypi/GreenletProfiler
>
> Ironically, I was having issues with the profiler that comes with Eventlet.
>
>>
>>
>>
>> [1]
>> http://logs.openstack.org/13/211613/6/experimental/ec2-api-rally-dsvm-fakevirt/fac263e/
>> [2]
>> http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/
>> [3]
>> http://logs.openstack.org/46/219846/1/experimental/ec2-api-rally-dsvm-fakevirt/dad98f0/
>>
>> --
>> Kind regards,
>> Andrey Pavlov.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

2015-09-08 Thread Kevin Benton
The contract we have is to maintain compatibility. As long as a client
written for the AWS API continues to work, I don't think we are violating
anything. Offering one API isn't a promise not to offer an alternative way
to access the same information.
On Sep 6, 2015 7:37 PM, "Sean M. Collins"  wrote:

> On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
> > So it's been pointed out that http://169.254.169.254/openstack is completely
> > OpenStack invented. I don't quite understand how that's not violating the
> > contract you said we have with end users about EC2 compatibility under
> the
> > restriction of 'no new stuff'.
>
> I think that is a violation. I don't think that allows us to make more
> changes just because we've broken the contract once; a second
> infraction is not less significant.
>
> > If we added an IPv6 endpoint that the metadata service listens on, it
> would
> > just be another place that non cloud-init clients don't know how to talk
> > to. It's not going to break our compatibility with any clients that
> connect
> > to the IPv4 address.
>
> No, but if Amazon were to make a decision about how to implement IPv6 in
> EC2 and how to make the Metadata API service work with IPv6 we'd be
> supporting two implementations - the one we came up with and one for
> supporting the way Amazon implemented it.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GSLB

2015-09-08 Thread Anik
Hi Graham,
Thanks for getting back.

So am I correct in summarizing that the current plan is that [1] GSLB will have 
its own API set and not evolve as an extension of Designate, and [2] GSLB will 
mostly act as a policy definition engine (+ health checks ?) with a Designate 
backend providing the actual DNS resolution ?

Regards,
Anik

From: "Hayes, Graham" 
To: Anik ; OpenStack Development Mailing List (not for usage questions) 
Sent: Tuesday, September 8, 2015 9:19 AM
Subject: Re: [openstack-dev] GSLB

On 08/09/15 12:55, Anik wrote:
> Hello,
> 
> Recently saw some discussions in the Designate mailer archive
> around GSLB and saw some API snippets subsequently. Seems like
> early stages on this project, but highly excited that there is some
> traction now on GSLB.
> 
> I would to find out [1] If there has been discussions around how
> GSLB will work across multiple OpenStack regions and [2] The level
> of integration planned between Designate and GSLB.
> 
> Any pointers in this regard will be helpful.
> 
> Regards, Anik
> 

Hi Anik

Currently we are in very early stages of planning for GSLB.

We do not yet have a good answer for [1] - we need to work this out in
the near future.

My plan for the MVP is a service running in one region - this
allows us to work out the kinks in the API / driver integration for the
service.

For [2], I would like to see us do the following:

For regional load balancers support the Neutron LBaaS v2 API as a
default in-tree.

For the global side, (routing traffic to regions) I would like to
have designate as the default in-tree.

I think we should have these both as plugins, to allow for other
configurations / technologies, but we should be runnable out of the box
using just other OpenStack open source projects.

I may be slightly biased (I am a member of designate-core), but I think
we should do the 4 Open's of OpenStack, and this seems the best way.

We also meet in #openstack-meeting-4 at 16:00 UTC every Tuesday, and
most of the people interested are in #openstack-gslb .

Thanks,

Graham





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Doug Hellmann
Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
> >
> > I'd like to come up with some way to express the time other than
> > N+M because in the middle of a cycle it can be confusing to know
> > what that means (if I want to deprecate something in August am I
> > far enough through the current cycle that it doesn't count?).
> >
> > Also, as we start moving more projects to doing intermediate releases
> > the notion of a "release" vs. a "cycle" will drift apart, so we
> > want to talk about "stable releases" not just any old release.
> >
> 
> I've always thought the appropriate equivalent for projects not following
> the (old) integrated release cadence was for N == six months.  It sets
> approx. the same pace and expectation with users/deployers.
> 
> For those deployments tracking trunk, a similar approach can be taken, in
> that deprecating a config option in M3 then removing it in N1 might be too
> quick, but rather wait at least the same point in the following release
> cycle to increment 'N'.
> 
> dt
> 

Making it explicitly date-based would simplify tracking, to be sure.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Mitaka Design Summit ideas

2015-09-08 Thread Kyle Mestery
Folks:

It's that time of the cycle again! Lets start collecting ideas for our
design summit in Tokyo at the etherpad located here [1]. We'll discuss
these a bit in some upcoming meetings and ensure we have a solid schedule
to fill our 12 fishbowl slots in Tokyo.

Thanks!
Kyle

[1] https://etherpad.openstack.org/p/neutron-mitaka-designsummit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [sahara]

2015-09-08 Thread Denis Egorenko
Hello everyone,

Currently Sahara [1] supports two run modes: stand-alone mode (all-in-one) and
distributed mode. The difference between these modes is that the second
separates the API and engine processes. Such an architecture
allows the API process to remain relatively free to handle requests while
offloading intensive tasks to the engine processes [2]. The second mode is more
appropriate for big and complex environments, but you can also use stand-alone
mode for simple tasks and tests.

So, the main issue is that puppet-sahara [3] currently supports only the first,
all-in-one run mode. I've implemented support for distributed mode in
puppet-sahara [4]; please review this commit and provide feedback.

Also, I have some +1s from the Sahara team, including Sahara cores.

[1] https://github.com/openstack/sahara
[2]
http://docs.openstack.org/developer/sahara/userdoc/advanced.configuration.guide.html#distributed-mode-configuration
[3] https://github.com/openstack/sahara
[4] https://review.openstack.org/#/c/192721/

Thanks.

-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [sahara]

2015-09-08 Thread Denis Egorenko
Sorry, wrong link [3]  https://github.com/openstack/puppet-sahara


2015-09-08 19:31 GMT+03:00 Denis Egorenko :

> Hello everyone,
>
> Currently Sahara [1] supports two run modes: stand-alone mode (all-in-one) and
> distributed mode. The difference between these modes is that the second
> separates the API and engine processes. Such an architecture
> allows the API process to remain relatively free to handle requests while
> offloading intensive tasks to the engine processes [2]. The second mode is more
> appropriate for big and complex environments, but you can also use stand-alone
> mode for simple tasks and tests.
>
> So, the main issue is that puppet-sahara [3] currently supports only the first,
> all-in-one run mode. I've implemented support for distributed mode in
> puppet-sahara [4]; please review this commit and provide feedback.
>
> Also, I have some +1s from the Sahara team, including Sahara cores.
>
> [1] https://github.com/openstack/sahara
> [2]
> http://docs.openstack.org/developer/sahara/userdoc/advanced.configuration.guide.html#distributed-mode-configuration
> [3] https://github.com/openstack/sahara
> [4] https://review.openstack.org/#/c/192721/
>
> Thanks.
>
> --
> Best Regards,
> Egorenko Denis,
> Deployment Engineer
> Mirantis
>



-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-08 Thread Kevin Carter
Hi Paul,

We'd love to collaborate on improving openstack-ansible and getting our 
OpenStack roles into the general big tent and out of our monolithic repository. 
We have a proposal in review for moving our roles out of our main repository 
and into separate repositories [0] making openstack-ansible consume the roles 
through the use of an Ansible Galaxy interface. We've been holding off on this 
effort until os-ansible-deployment is moved into the OpenStack namespace which 
should be happening sometime on September 11 [1][2]. With that, I'd say join us 
in the #openstack-ansible channel if you have any questions on the 
os-ansible-deployment project in general and check out our twice weekly 
meetings [3]. Lastly, many of the core members / deployers of the project will 
be at the summit and if you're interested / will be in Tokyo we can schedule 
some time to work out a path to convergence. 

Look forward to talking to you and others about this more soon. 

--

[0] - https://review.openstack.org/#/c/213779
[1] - https://review.openstack.org/#/c/200730
[2] - 
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames
[3] - https://wiki.openstack.org/wiki/Meetings/openstack-ansible

Kevin Carter
IRC: cloudnull



From: Paul Belanger 
Sent: Tuesday, September 8, 2015 9:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

Greetings,

I wanted to start a discussion about the future of ansible / ansible roles in
OpenStack. Over the last week or so I've started down the ansible path, starting
my first ansible role; I've started with ansible-role-nodepool[1].

My initial question is simple: now that big tent is upon us, I would like
some way to include ansible roles in the openstack git workflow.  I first
thought the roles might live under openstack-infra, however I am not sure that
is the right place.  My reason is, -infra tends to include modules they
currently run under the -infra namespace, and I don't want to start the effort
of convincing people to migrate.

Another thought might be to reach out to the os-ansible-deployment team and ask
how they see roles in OpenStack moving forward (mostly the reason for this
email).

Either way, I would be interested in feedback on moving forward on this. Using
travis-ci and github works but OpenStack workflow is much better.

[1] https://github.com/pabelanger/ansible-role-nodepool

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-08 Thread Matthew Thode
On 09/08/2015 09:57 AM, Paul Belanger wrote:
> Greetings,
> 
> I wanted to start a discussion about the future of ansible / ansible roles in
> OpenStack. Over the last week or so I've started down the ansible path, 
> starting
> my first ansible role; I've started with ansible-role-nodepool[1].
> 
> My initial question is simple, now that big tent is upon us, I would like
> some way to include ansible roles into the openstack git workflow.  I first
> thought the role might live under openstack-infra however I am not sure that
> is the right place.  My reason is, -infra tends to include modules they
> currently run under the -infra namespace, and I don't want to start the effort
> to convince people to migrate.
> 
> Another thought might be to reach out to the os-ansible-deployment team and 
> ask
> how they see roles in OpenStack moving forward (mostly the reason for this
> email).
> 
> Either way, I would be interested in feedback on moving forward on this. Using
> travis-ci and github works but OpenStack workflow is much better.
> 
> [1] https://github.com/pabelanger/ansible-role-nodepool
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

This might be useful in openstack-ansible if we are going to want to use
openstack-ansible for testing.  We might want infra's feedback on that,
though; a spec (to openstack-ansible) would also be in order for this.

-- 
Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-08 Thread Farr, Kaitlin M.
+1, Dave has been a key contributor, and his code reviews are thoughtful.

Kaitlin

I'd like to nominate Dave Mccowan for the Barbican core review team.

He has been an active contributor both in doing relevant code pieces and
making useful and thorough reviews, and so I think he would make a great
addition to the team.

Please bring the +1's :D

Cheers!

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GSLB

2015-09-08 Thread Hayes, Graham

On 08/09/15 12:55, Anik wrote:
> Hello,
> 
> Recently saw some discussions in the Designate mailer archive
> around GSLB and saw some API snippets subsequently. Seems like
> early stages on this project, but highly excited that there is some
> traction now on GSLB.
> 
> I would like to find out [1] if there have been discussions around how
> GSLB will work across multiple OpenStack regions and [2] the level of
> integration planned between Designate and GSLB.
> 
> Any pointers in this regard will be helpful.
> 
> Regards, Anik
> 

Hi Anik

Currently we are in very early stages of planning for GSLB.

We do not yet have a good answer for [1] - we need to work this out in
the near future.

My plan for the MVP is a service running in one region - this
allows us to work out the kinks in the API / driver integration for the
service.

For [2], I would like to see us do the following:

For regional load balancers support the Neutron LBaaS v2 API as a
default in-tree.

For the global side (routing traffic to regions), I would like to
have Designate as the default in-tree.

I think we should have these both as plugins, to allow for other
configurations / technologies, but we should be runnable out of the box
using just other OpenStack open source projects.
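
To make that split concrete, here is a purely hypothetical driver interface
sketch; none of these class or method names exist in any project, they only
illustrate the global/regional division described above:

    import abc

    class GslbDriver(object):
        # hypothetical interface, for illustration only
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def route_globally(self, fqdn, regions):
            # steer traffic across regions (default: a designate driver)
            pass

        @abc.abstractmethod
        def balance_regionally(self, pool, members):
            # balance within a region (default: a Neutron LBaaS v2 driver)
            pass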

I may be slightly biased (I am a member of designate-core), but I think
we should follow the Four Opens of OpenStack, and this seems the best way.

We also meet in #openstack-meeting-4 at 16:00 UTC every Tuesday, and
most of the people interested are in #openstack-gslb .

Thanks,

Graham




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core

2015-09-08 Thread Nikolay Starodubtsev
Serg and murano team, thanks. I'll try to do my best for the project.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-08 17:55 GMT+03:00 Serg Melikyan :

> Nikolai, my congratulations!
>
> On Tue, Sep 8, 2015 at 5:28 AM, Stan Lagun  wrote:
>
>> +1
>>
>> Sincerely yours,
>> Stan Lagun
>> Principal Software Engineer @ Mirantis
>>
>> 
>>
>> On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov <
>> ativel...@mirantis.com> wrote:
>>
>>> +1. Well deserved.
>>>
>>> --
>>> Regards,
>>> Alexander Tivelkov
>>>
>>> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin <
>>> vryzhen...@mirantis.com> wrote:
>>>
 +1 from me ;)

 --
 Victor Ryzhenkin
 Junior QA Engineer
 freerunner on #freenode

 On September 1, 2015 at 12:18:19, Ekaterina Chernova (
 efedor...@mirantis.com) wrote:

 +1

 On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii 
 wrote:

> +1
>
> 2015-09-01 2:24 GMT+03:00 Serg Melikyan :
>
>> +1
>>
>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <
>> kzait...@mirantis.com> wrote:
>>
>>> I’m pleased to nominate Nikolai for Murano core.
>>>
>>> He’s been actively participating in development of murano during
>>> liberty and is among top5 contributors during last 90 days. He’s also
>>> leading the CloudFoundry integration initiative.
>>>
>>> Here are some useful links:
>>>
>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>> List of reviews:
>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>> Murano contribution during latest 90 days
>>> http://stackalytics.com/report/contribution/murano/90
>>>
>>> Please vote with +1/-1 for approval/objections
>>>
>>> --
>>> Kirill Zaitsev
>>> Murano team
>>> Software Engineer
>>> Mirantis, Inc
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>> http://mirantis.com | smelik...@mirantis.com
>>
>> +7 (495) 640-4904, 0261
>> +7 (903) 156-0836
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Dean Troyer
On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann wrote:
>
> I'd like to come up with some way to express the time other than
> N+M because in the middle of a cycle it can be confusing to know
> what that means (if I want to deprecate something in August am I
> far enough through the current cycle that it doesn't count?).
>
> Also, as we start moving more projects to doing intermediate releases
> the notion of a "release" vs. a "cycle" will drift apart, so we
> want to talk about "stable releases" not just any old release.
>

I've always thought the appropriate equivalent for projects not following
the (old) integrated release cadence was N == six months.  It sets
approx. the same pace and expectation with users/deployers.

For those deployments tracking trunk, a similar approach can be taken:
deprecating a config option in M3 and then removing it in N1 might be too
quick; instead, wait until at least the same point in the following release
cycle to increment 'N'.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-08 Thread Ade Lee
Definitely .. +1
On Tue, 2015-09-08 at 19:05 +0300, Juan Antonio Osorio wrote:
> I'd like to nominate Dave Mccowan for the Barbican core review team.
> 
> He has been an active contributor both in doing relevant code pieces
> and making useful and thorough reviews, and so I think he would make
> a great addition to the team.
> 
> Please bring the +1's :D
> 
> Cheers!
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-08 Thread Douglas Mendizábal

+1

Dave has been a great asset to the team, and I think he would make an
excellent core reviewer.

- Douglas Mendizábal

On 9/8/15 11:05 AM, Juan Antonio Osorio wrote:
> I'd like to nominate Dave Mccowan for the Barbican core review 
> team.
> 
> He has been an active contributor both in doing relevant code 
> pieces and making useful and thorough reviews, and so I think he 
> would make a great addition to the team.
> 
> Please bring the +1's :D
> 
> Cheers!
> 
> -- Juan Antonio Osorio R. e-mail: jaosor...@gmail.com 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-08 Thread Juan Antonio Osorio
I'd like to nominate Dave Mccowan for the Barbican core review team.

He has been an active contributor both in doing relevant code pieces and
making useful and thorough reviews, and so I think he would make a great
addition to the team.

Please bring the +1's :D

Cheers!

-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

2015-09-08 Thread Fox, Kevin M
I think it lets you take a token on the identity cloud and provide it to the 
service cloud and get a token for that cloud. So I think it might do what we 
need without storing credentials.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Tuesday, September 08, 2015 7:53 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
> Hi
>
> Currently in heat we have the ability to deploy a remote stack on a
> different region using OS::Heat::Stack and region_name in the context
>
> My question is regarding multi node, separate keystones, with keystone
> federation.
>
> Is there an option in a HOT template to send a stack to a different
> node, using the keystone federation feature?
>
> For example ,If I have two Nodes (N1 and N2) with separate keystones
> (and keystone federation), I would like to deploy a stack on N1 with a
> nested stack that will deploy on N2, similar to what we have now for regions

Short answer: no.

Long answer: this is something we've wanted to do for a while, and a lot
of folks have asked for it. We've been calling it multi-cloud (i.e.
multiple keystones, as opposed to multi-region which is multiple regions
with one keystone). In principle it's a small extension to the
multi-region stacks (just add a way to specify the auth_url as well as
the region), but the tricky part is how to authenticate to the other
clouds. We don't want to encourage people to put their login credentials
into a template. I'm not sure to what extent keystone federation could
solve that - I suspect that it does not allow you to use a single token
on multiple clouds, just that it allows you to obtain a token on
multiple clouds using the same credentials? So basically this idea is on
hold until someone comes up with a safe way to authenticate to the other
clouds. Ideas/specs welcome.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #50

2015-09-08 Thread Emilien Macchi


On 09/07/2015 11:54 AM, Emilien Macchi wrote:
> Hello,
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150908
> 
> Please add additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.
> 

We did our meeting, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-08-15.00.html

Thanks,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][i18n] Is there any point in using _() in python-novaclient?

2015-09-08 Thread Matt Riedemann



On 9/6/2015 12:18 AM, Steve Martinelli wrote:

Isn't this just a matter of setting up novaclient for translation? IIRC
using _() is harmless if there are no translation bits set up for the project.

Thanks,

Steve Martinelli
OpenStack Keystone Core


From: Matt Riedemann 
To: "OpenStack Development Mailing List (not for usage questions)"
, openstack-i...@lists.openstack.org
Date: 2015/09/04 01:50 PM
Subject: [openstack-dev] [nova][i18n] Is there any point in using _() in
python-novaclient?





I noticed this today:

https://review.openstack.org/#/c/219768/

And it got me thinking about something I've wondered before - why do we
even use _() in python-novaclient?  It doesn't have any .po files for
babel message translation, it has no babel config, there is nothing in
setup.cfg about extracting messages and compiling them into .mo's, there
is nothing on Transifex for python-novaclient, etc.

Is there a way to change your locale and get translated output in nova
CLIs?  I didn't find anything in docs from a quick google search.

Comparing to python-openstackclient, that does have a babel config and
some locale po files in tree, at least for de and zh_TW.

So if this doesn't work in python-novaclient, do we need any of the i18n
code in there?  It doesn't really hurt, but it seems pointless to push
changes for it or try to keep user-facing messages in mind in the code.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, doffm is working on enabling i18n for python-novaclient via bug:

https://bugs.launchpad.net/python-novaclient/+bug/1492444
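
(For context, enabling that generally just means adding the standard
oslo.i18n wiring; a minimal sketch, assuming the stock oslo.i18n API and
not novaclient's actual module:

    # _i18n.py -- illustrative sketch only
    import oslo_i18n

    _translators = oslo_i18n.TranslatorFactory(domain='python-novaclient')

    # With no message catalogs installed, _() simply returns the string
    # unchanged, which is why using it before the translation bits are
    # set up is harmless.
    _ = _translators.primary

Modules would then import _ from that file and mark strings as usual.)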

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bug importance

2015-09-08 Thread Matt Riedemann



On 9/7/2015 4:54 AM, John Garbutt wrote:

I have a feeling launchpad asked me to renew my membership of nova-bug
recently, and said it would drop me from the list if I didn't do that.

Not sure if that's intentional to keep the list fresh? It's the first I
knew about it.

Unsure, but that could be related?

Thanks,
John

On 6 September 2015 at 14:25, Gary Kotton  wrote:

That works.
Thanks!

From: "dava...@gmail.com" 
Reply-To: OpenStack List 
Date: Sunday, September 6, 2015 at 4:10 PM
To: OpenStack List 
Subject: Re: [openstack-dev] [nova] Bug importance

Gary,

Not sure what changed...

On this page (https://bugs.launchpad.net/nova/) on the right hand side, do
you see "Bug Supervisor" set to "Nova Bug Team"?  I believe "Nova Bug Team"
is open and you can add yourself, so if you do not see yourself in that
group, can you please add it and try?

-- Dims

On Sun, Sep 6, 2015 at 4:56 AM, Gary Kotton  wrote:


Hi,
In the past I was able to set the importance of a bug. Now I am unable to
do this? Has the policy changed? Can someone please clarify. If the policy
has changed who is responsible for deciding the priority of a bug?
Thanks
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's probably what happened.  I recently got the same notice about 
being dropped from the cinder bugs team if I didn't renew my membership. 
 It's not a bad idea: there have been issues where a lot of projects are 
associated with a single bug and launchpad times out changing status or 
making other updates because it's trying to process everyone that gets 
notified.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] trello

2015-09-08 Thread Derek Higgins

Hi All,

   Some of ye may remember that some time ago we used to organize 
TripleO-based jobs/tasks on a trello board[1]; at some stage this board fell 
out of use (the exact reason I can't put my finger on). This morning I was 
putting together a list of things that need to be done in the area of CI 
and needed somewhere to keep track of it.


I propose we get back to using this trello board and each of us add 
cards at the very least for the things we are working on.


This should give each of us a lot more visibility into what is currently 
ongoing in the tripleo project. Unless I hear any objections, tomorrow I'll 
start archiving all cards on the boards and removing people no longer 
involved in tripleo. We can then start adding items, and anybody who wants 
in can be added again.


thanks,
Derek.

[1] - https://trello.com/tripleo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Meters

2015-09-08 Thread gord chung
is this using libvirt? if so, can you verify you have the following 
requirements:


 * libvirt 1.1.1+
 * qemu 1.5+
 * guest driver that supports memory balloon stats

also, please check to see if there are any visible ERRORs in the
ceilometer-agent-compute log.
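
One way to check the guest side directly is through the libvirt python
bindings; a hedged sketch, assuming libvirt-python is installed and the
domain is named instance-00000001 (substitute your own domain name):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # ask the balloon driver to refresh stats every 10 seconds
    # (needs libvirt 1.1.1+ / qemu 1.5+, as noted above)
    dom.setMemoryStatsPeriod(10)
    # with guest support, keys such as 'available' and 'unused' appear;
    # a near-empty dict suggests the guest driver lacks balloon stats
    print(dom.memoryStats())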



On 08/09/2015 2:04 AM, Abhishek Talwar wrote:

Hi Folks,


I have installed a *kilo devstack* setup and I am trying to get the 
*memory and disk usage* for my VMs. But on checking the *"ceilometer 
meter-list"* output I can't find memory.usage or disk.usage meters.


I have searched a lot for this and still couldn't find a solution. So 
how do I enable these meters in the meter-list?


I want all these meters in the ceilometer meter-list so that I can use 
them to monitor my instances.


Currently the output of *ceilometer meter-list* is as follows:

+--------------------------+------------+------------------------------------------+----------------------------------+----------------------------------+
| Name                     | Type       | Resource ID                              | User ID                          | Project ID                       |
+--------------------------+------------+------------------------------------------+----------------------------------+----------------------------------+
| cpu                      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| cpu_util                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.bytes          | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.requests       | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.bytes         | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.requests      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.download           | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.serve              | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.update             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.upload             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de     |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance:m1.small        | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77     | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.bytes   | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.packets | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.outgoing.bytes   | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.outgoing.packets | cumulative | nova-instance-instance-0022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
+--------------------------+------------+------------------------------------------+----------------------------------+----------------------------------+

Thanks and Regards
Abhishek Talwar




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Manila] Feature Freeze

2015-09-08 Thread Ben Swartzlander
Manila reached its liberty feature freeze yesterday thanks to the heroic 
work of the last few submitters and a few core reviewers who worked over 
the weekend! All of the features targeted for Liberty have been merged 
and nothing was booted out.


I would like to say that this has been a very painful feature freeze for 
many of us, and some mistakes were made which should not be repeated. I 
have some ideas for changes we can implement in the Mitaka timeframe to 
avoid the need for heroics at the last minute. In particular, large new 
features need a deadline substantially earlier than the ordinary FPF 
deadline, at least for WIP patches to be upstream (this was xyang's idea 
and it makes tons of sense). We can discuss the detail of how we want to 
run Mitaka at future meetings or in Tokyo, but I wanted to acknowledge 
that we didn't do a good job this time.


Now that we're past feature freeze we need to drive aggressively to fix 
all the bugs because the RC1 target date has not moved (Sept 17). This 
is all the more important because our L-3 milestone is not really usable 
for testing purposes and we need a release that QA-oriented people can 
hammer on. This also means the client patches related to new features 
all need to get merged and released in the next week too.


Also the CI-system reporting deadline blew by last week during the 
gate-breakage-hell and I haven't had time to go check that all the CI 
systems which should be reporting actually are. That's something I'll be 
doing today and I'll post the driver removal patches for any system not 
reporting.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-08 Thread Nikhil Komawar
Malini,

Your note on the etherpad [1] went unnoticed, as we had that sync on
Friday outside of our regular meeting and the weekly meeting agenda
etherpad was not fit for discussion purposes.

It would be nice if you all could update and comment on the spec,
referencing the note, or have someone send a related email here that
explains how the issues raised on the spec and during the Friday sync
[2] were addressed.

[1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47

On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
> Thank you Nikhil and Glance team on the FFE consideration.
> We are committed to making the revisions per suggestion and separately seek 
> help from the Flavio, Sabari, and Harsh.
> Regards
> Malini, Kent, and Jakub 
>
>
> -Original Message-
> From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
> Sent: Friday, September 04, 2015 9:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>
> Hi Malini et.al.,
>
> We had a sync up earlier today on this topic and a few items were discussed 
> including new comments on the spec and existing code proposal.
> You can find the logs of the conversation here [1].
>
> There are 3 main outcomes of the discussion:
> 1. We hope to get a commitment on the feature (spec and the code) that the 
> comments would be addressed and code would be ready by Sept 18th; after which 
> the RC1 is planned to be cut [2]. Our hope is that the spec is merged way 
> before and implementation to the very least is ready if not merged. The 
> comments on the spec and merge proposal are currently implementation details 
> specific so we were positive on this front.
> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has 
> newer patch sets with major concerns addressed.
> 3. We cannot commit to granting a backport to this feature so, we ask the 
> implementors to consider using the plug-ability and modularity of the 
> taskflow library. You may consult developers who have already worked on 
> adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can 
> then use those scripts and put them back in their Liberty deployments even if 
> it's not in the standard tarball.
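
As a rough illustration of that suggestion, a pluggable taskflow unit looks
something like the following (the taskflow API shown is the stock one; the
task and flow names are made up):

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class ImportImageTask(task.Task):
        def execute(self):
            # the real import work would go here
            print('importing image')

    # tasks compose into flows, which is what makes it practical to ship
    # such a script separately and drop it into a deployment later
    flow = linear_flow.Flow('image-import').add(ImportImageTask())
    engines.run(flow)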
>
> Please let me know if you have more questions.
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>
> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Brian!
>>
>> -Original Message-
>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>> Sent: Thursday, September 03, 2015 9:42 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> We agreed to hold off on granting it a FFE until tomorrow.
>>
>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and 
>> cast your vote.
>>
>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>>> I added an agenda item for this for today's Glance meeting:
>>>https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>
>>> I'd prefer to hold my vote until after the meeting.
>>>
>>> cheers,
>>> brian
>>>
>>>
>>> On 9/3/15, 6:14 AM, "Kuvaja, Erno"  wrote:
>>>
 Malini, all,

 My current opinion is -1 for FFE based on the concerns in the spec 
 and implementation.

 I'm more than happy to realign my stand after we have updated spec 
 and a) it's agreed to be the approach as of now and b) we can 
 evaluate how much work the implementation needs to meet with the revisited 
 spec.

 If we end up to the unfortunate situation that this functionality 
 does not merge in time for Liberty, I'm confident that this is one 
 of the first things in Mitaka. I really don't think there is too 
 much to go, we just might run out of time.

 Thanks for your patience and endless effort to get this done.

 Best,
 Erno

> -Original Message-
> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
> Sent: Thursday, September 03, 2015 10:10 AM
> To: Flavio Percoco; OpenStack Development Mailing List (not for 
> usage
> questions)
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
> proposal
>
> Flavio, first thing in the morning Kent will upload a new BP that 
> addresses the comments. We would very much appreciate a +1 on the 
> FFE.
>
> Regards
> Malini
>
>
>
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Thursday, September 03, 2015 1:52 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [

[openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-08 Thread Paul Belanger
Greetings,

I wanted to start a discussion about the future of ansible / ansible roles in
OpenStack. Over the last week or so I've started down the ansible path, starting
my first ansible role; I've started with ansible-role-nodepool[1].

My initial question is simple, now that big tent is upon us, I would like
some way to include ansible roles into the openstack git workflow.  I first
thought the role might live under openstack-infra however I am not sure that
is the right place.  My reason is, -infra tends to include modules they
currently run under the -infra namespace, and I don't want to start the effort
to convince people to migrate.

Another thought might be to reach out to the os-ansible-deployment team and ask
how they see roles in OpenStack moving forward (mostly the reason for this
email).

Either way, I would be interested in feedback on moving forward on this. Using
travis-ci and github works but OpenStack workflow is much better.

[1] https://github.com/pabelanger/ansible-role-nodepool

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core

2015-09-08 Thread Serg Melikyan
Nikolai, my congratulations!

On Tue, Sep 8, 2015 at 5:28 AM, Stan Lagun  wrote:

> +1
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
> 
>
> On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov  > wrote:
>
>> +1. Well deserved.
>>
>> --
>> Regards,
>> Alexander Tivelkov
>>
>> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin > > wrote:
>>
>>> +1 from me ;)
>>>
>>> --
>>> Victor Ryzhenkin
>>> Junior QA Engineer
>>> freerunner on #freenode
>>>
>>> On September 1, 2015 at 12:18:19, Ekaterina Chernova (
>>> efedor...@mirantis.com) wrote:
>>>
>>> +1
>>>
>>> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii 
>>> wrote:
>>>
 +1

 2015-09-01 2:24 GMT+03:00 Serg Melikyan :

> +1
>
> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev  > wrote:
>
>> I’m pleased to nominate Nikolai for Murano core.
>>
>> He’s been actively participating in development of murano during
>> liberty and is among top5 contributors during last 90 days. He’s also
>> leading the CloudFoundry integration initiative.
>>
>> Here are some useful links:
>>
>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>> List of reviews:
>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>> Murano contribution during latest 90 days
>> http://stackalytics.com/report/contribution/murano/90
>>
>> Please vote with +1/-1 for approval/objections
>>
>> --
>> Kirill Zaitsev
>> Murano team
>> Software Engineer
>> Mirantis, Inc
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

2015-09-08 Thread Zane Bitter

On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:

Hi

Currently in heat we have the ability to deploy a remote stack on a
different region using OS::Heat::Stack and region_name in the context

My question is regarding multi node, separate keystones, with keystone
federation.

Is there an option in a HOT template to send a stack to a different
node, using the keystone federation feature?

For example ,If I have two Nodes (N1 and N2) with separate keystones
(and keystone federation), I would like to deploy a stack on N1 with a
nested stack that will deploy on N2, similar to what we have now for regions


Short answer: no.

Long answer: this is something we've wanted to do for a while, and a lot 
of folks have asked for it. We've been calling it multi-cloud (i.e. 
multiple keystones, as opposed to multi-region which is multiple regions 
with one keystone). In principle it's a small extension to the 
multi-region stacks (just add a way to specify the auth_url as well as 
the region), but the tricky part is how to authenticate to the other 
clouds. We don't want to encourage people to put their login credentials 
into a template. I'm not sure to what extent keystone federation could 
solve that - I suspect that it does not allow you to use a single token 
on multiple clouds, just that it allows you to obtain a token on 
multiple clouds using the same credentials? So basically this idea is on 
hold until someone comes up with a safe way to authenticate to the other 
clouds. Ideas/specs welcome.
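
For reference, the multi-region form that exists today looks like the HOT
fragment below (wrapped in a Python string for convenience); the
commented-out auth_url line is the hypothetical multi-cloud extension being
discussed here, and is not implemented:

    remote_stack_template = """
    resources:
      remote_stack:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionTwo
            # auth_url: http://n2.example.com:5000/v3  (hypothetical)
          template: {get_file: nested.yaml}
    """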


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

2015-09-08 Thread Fox, Kevin M
We have the whole /openstack namespace. We can extend it as far as we like.
Again, why would aws choosing to go a different way than openstack, when 
openstack did something first, be an openstack problem? We're not even talking 
about a big change, just making the same md server available on a second ip.
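
To ground this, a minimal sketch of what a guest-side client already does
over IPv4 (both paths below exist today; the proposal is only to serve the
same thing on an IPv6 address as well):

    import requests

    MD = 'http://169.254.169.254'
    # EC2-compatible namespace
    print(requests.get(MD + '/latest/meta-data/instance-id').text)
    # OpenStack-specific namespace, i.e. the extension point noted above
    print(requests.get(MD + '/openstack/latest/meta_data.json').text)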

Thanks,
Kevin


From: Sean M. Collins
Sent: Sunday, September 06, 2015 7:34:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: 
cloud-init IPv6 support

On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
> So it's been pointed out that http://169.254.169.254/openstack is completely
> OpenStack invented. I don't quite understand how that's not violating the
> contract you said we have with end users about EC2 compatibility under the
> restriction of 'no new stuff'.

I think that is a violation. I don't think that allows us to make more
changes, just because we've broken the contract once, so a second
infraction is less significant.

> If we added an IPv6 endpoint that the metadata service listens on, it would
> just be another place that non cloud-init clients don't know how to talk
> to. It's not going to break our compatibility with any clients that connect
> to the IPv4 address.

No, but if Amazon were to make a decision about how to implement IPv6 in
EC2 and how to make the Metadata API service work with IPv6 we'd be
supporting two implementations - the one we came up with and one for
supporting the way Amazon implemented it.

--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for fuel-docs core

2015-09-08 Thread Alexander Adamov
+1

On Thu, Sep 3, 2015 at 11:41 PM, Dmitry Pyzhov  wrote:

> +1
>
> On Thu, Sep 3, 2015 at 10:14 PM, Sergey Vasilenko  > wrote:
>
>> +1
>>
>>
>> /sv
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

2015-09-08 Thread Fox, Kevin M
Yeah, we have been trying to work through how to make instance users work with 
config drive, and its static nature makes the problem very difficult. It just 
trades one problem for another.

Thanks,
Kevin


From: Jim Rollenhagen
Sent: Sunday, September 06, 2015 5:02:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: 
cloud-init IPv6 support



> On Sep 6, 2015, at 09:43, Monty Taylor  wrote:
>
>> On 09/05/2015 06:19 PM, Sean M. Collins wrote:
>>> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>>> Right, it depends on your perspective of who 'owns' the API. Is it
>>> cloud-init or EC2?
>>>
>>> At this point I would argue that cloud-init is in control because it would
>>> be a large undertaking to switch all of the AMI's on Amazon to something
>>> else. However, I know Sean disagrees with me on this point so I'll let him
>>> reply here.
>>
>>
>> Here's my take:
>>
>> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
>> in both the Neutron and Nova projects should implement all the details of
>> the Metadata API that are documented at:
>>
>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>>
>> This means that this is a compatibility layer that OpenStack has
>> implemented so that users can use appliances, applications, and
>> operating system images in both Amazon EC2 and an OpenStack environment.
>>
>> Yes, we can make changes to cloud-init. However, there is no guarantee
>> that all users of the Metadata API are exclusively using cloud-init as
>> their client. It is highly unlikely that people are rolling their own
>> Metadata API clients, but it's a contract we've made with users. This
>> includes transport level details like the IP address that the service
>> listens on.
>>
>> The Metadata API is an established API that Amazon introduced years ago,
>> and we shouldn't be "improving" APIs that we don't control. If Amazon
>> were to introduce IPv6 support the Metadata API tomorrow, we would
>> naturally implement it exactly the way they implemented it in EC2. We'd
>> honor the contract that Amazon made with its users, in our Metadata API,
>> since it is a compatibility layer.
>>
>> However, since they haven't defined transport level details of the
>> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
>> solution. It is not our API.
>>
>> The nice thing about config-drive is that we've created a new mechanism
>> for bootstrapping instances - by replacing the transport level details
>> of the API. Rather than being a link-local address that instances access
>> over HTTP, it's a device that guests can mount and read. The actual
>> contents of the drive may have a similar schema as the Metadata API, but
>> I think at this point we've made enough of a differentiation between the
>> EC2 Metadata API and config-drive that I believe the contents of the
>> actual drive that the instance mounts can be changed without breaking
>> user expectations - since config-drive was developed by the OpenStack
>> community. The point being that we call it "config-drive" in
>> conversation and our docs. Users understand that config-drive is a
>> different feature.
>
> Another great part about config-drive is that it's scalable. At infra's 
> application scale, we take pains to disable anyting in our images that might 
> want to contact the metadata API because we're essentially a DDOS on it.

So, I tend to think a simple API service like this should never be hard to 
scale. Put a bunch of hosts behind a load balancer, boom, done. Even 1000 
requests/s shouldn't be hard, though it may require many hosts, and that's far 
beyond what infra would hit today.

The one problem I have with config-drive is that it is static. I'd love for 
systems like cloud-init, glean, etc, to be able to see changes to mounted 
disks, attached networks, etc. Attaching things after the fact isn't uncommon, 
and making the user configure the thing is a terrible experience. :(

// jim

>
> config-drive being local to the hypervisor host makes it MUCH more stable at 
> scale.
>
> cloud-init supports config-drive
>
> If it were up to me, nobody would be enabling the metadata API in new 
> deployments.
>
> I totally agree that we should not make changes in the metadata API.
>
>> I've had this same conversation about the Security Group API that we
>> have. We've named it the same thing as the Amazon API, but then went and
>> made all the fields different, inexplicably. Thankfully, it's just the
>> names of the fields, rather than being huge conceptual changes.
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>>
>> Basically, I believe that OpenStack should create APIs that are
>> community driven and owned, and that we should only emulate
>> non-community APIs where appropriate, and explicitly state that we only
>> are emulating them. 

Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

2015-09-08 Thread Fox, Kevin M
No, we already extend the metadata server with our own stuff. See /openstack/ 
on the metadata server. Cloudinit even supports the extensions. Supporting ipv6 
as well as v4 is the same. Why does it matter if aws doesn't currently support 
it? They can support it if they want in the future and reuse code, or do their 
own thing and have to convince cloudinit to support their way too. But why 
should that hold back the openstack metadata server now? Let's lead rather than 
follow.

Thanks,
Kevin


From: Sean M. Collins
Sent: Saturday, September 05, 2015 3:19:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Fox, Kevin M; PAUL CARVER
Subject: OpenStack support for Amazon Concepts - was Re: [openstack-dev] 
cloud-init IPv6 support

On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
> Right, it depends on your perspective of who 'owns' the API. Is it
> cloud-init or EC2?
>
> At this point I would argue that cloud-init is in control because it would
> be a large undertaking to switch all of the AMI's on Amazon to something
> else. However, I know Sean disagrees with me on this point so I'll let him
> reply here.


Here's my take:

Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
in both the Neutron and Nova projects should implement all the details of
the Metadata API that are documented at:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

This means that this is a compatibility layer that OpenStack has
implemented so that users can use appliances, applications, and
operating system images in both Amazon EC2 and an OpenStack environment.

Yes, we can make changes to cloud-init. However, there is no guarantee
that all users of the Metadata API are exclusively using cloud-init as
their client. It is highly unlikely that people are rolling their own
Metadata API clients, but it's a contract we've made with users. This
includes transport level details like the IP address that the service
listens on.

The Metadata API is an established API that Amazon introduced years ago,
and we shouldn't be "improving" APIs that we don't control. If Amazon
were to introduce IPv6 support the Metadata API tomorrow, we would
naturally implement it exactly the way they implemented it in EC2. We'd
honor the contract that Amazon made with its users, in our Metadata API,
since it is a compatibility layer.

However, since they haven't defined transport level details of the
Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
solution. It is not our API.

The nice thing about config-drive is that we've created a new mechanism
for bootstrapping instances - by replacing the transport level details
of the API. Rather than being a link-local address that instances access
over HTTP, it's a device that guests can mount and read. The actual
contents of the drive may have a similar schema as the Metadata API, but
I think at this point we've made enough of a differentiation between the
EC2 Metadata API and config-drive that I believe the contents of the
actual drive that the instance mounts can be changed without breaking
user expectations - since config-drive was developed by the OpenStack
community. The point being that we call it "config-drive" in
conversation and our docs. Users understand that config-drive is a
different feature.

I've had this same conversation about the Security Group API that we
have. We've named it the same thing as the Amazon API, but then went and
made all the fields different, inexplicably. Thankfully, it's just the
names of the fields, rather than being huge conceptual changes.

http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html

Basically, I believe that OpenStack should create APIs that are
community driven and owned, and that we should only emulate
non-community APIs where appropriate, and explicitly state that we only
are emulating them. Putting improvements in APIs that came from
somewhere else, instead of creating new OpenStack branded APIs is a lost
opportunity to differentiate OpenStack from other projects, as well as
Amazon AWS.

Thanks for reading, and have a great holiday.

--
Sean M. Collins
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] This is what disabled-by-policy should look like to the user

2015-09-08 Thread Fox, Kevin M
+1


From: Adam Young
Sent: Friday, September 04, 2015 7:43:08 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] This is what disabled-by-policy should look like 
to the user

On 09/04/2015 10:04 AM, Monty Taylor wrote:
> mordred@camelot:~$ neutron net-create test-net-mt
> Policy doesn't allow create_network to be performed.
>
> Thank you neutron. Excellent job.
>
> Here's what that looks like at the REST layer:
>
> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015
> 13:55:47 GMT connection: close content-type: application/json;
> charset=UTF-8 content-length: 130 x-openstack-request-id:
> req-ba05b555-82f4-4aaf-91b2-bae37916498d
> RESP BODY: {"NeutronError": {"message": "Policy doesn't allow
> create_network to be performed.", "type": "PolicyNotAuthorized",
> "detail": ""}}
>
> As a user, I am not confused. I do not think that maybe I made a
> mistake with my credentials. The cloud in question simply does not
> allow user creation of networks. I'm fine with that. (as a user, that
> might make this cloud unusable to me - but that's a choice I can now
> make with solid information easily. Turns out, I don't need to create
> networks for my application, so this actually makes it easier for me
> personally)
>
> In any case - rather than complaining and being a whiny brat about
> something that annoys me - I thought I'd say something nice about
> something that the neutron team has done that especially pleases me.

Then let me hijack:

Policy is still broken.  We need the pieces of Dynamic policy.

I am going to call for a cross project policy discussion for the
upcoming summit.  Please, please, please all the projects attend. The
operators have made it clear they need better policy support.


> I would love it if this became the experience across the board in
> OpenStack for times when a feature of the API is disabled by local
> policy. It's possible it already is and I just haven't directly
> experienced it - so please don't take this as a backhanded
> condemnation of anyone else.
>
> Monty
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][release] flushing unreleased client library changes

2015-09-08 Thread Thierry Carrez
Doug Hellmann wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-04 16:54:13 -0400:
>> [...]
>> I would actually want to release a milestone between L-3 and RC1 after 
>> we get to the real Manila FF date but since that's not in line with the 
>> official release process I'm okay waiting for RC1. Since there is no 
>> official process for client releases (that I know about) I'd rather just 
>> wait to do the client until RC1. We'll plan for an early RC1 by 
>> aggressively driving the bugs to zero instead of putting time into 
>> testing the L-3 milestone.
> 
> If master is broken right now, I agree it's not a good idea to
> release.  That said, you still don't want to wait any later than
> you have to. Gate jobs only install libraries from packages, so no
> projects that are co-gating with manila, including manila itself,
> are using the source version of the client library. That means when
> there's a release, the new package introduces all of the new changes
> into the integration tests at the same time.

Yes, that creates unnecessary risk toward the end of the release cycle.
This is why we want to release near-final versions of libraries this week
(from which we create the library release/stable branches) -- to keep
risk under control and start testing with the latest code ASAP.

> We want to release clients as often as possible to keep the number
> of changes small. This is why we release Oslo libraries weekly --
> we still break things once in a while, but when we do we have a
> short list of changes to look at to figure out why.
> 
> I'll be proposing that we do a weekly client change review for all
> managed clients starting next cycle, and release when there are changes
> that warrant (probably not just for requirements changes, unless
> it's necessary). I haven't worked out the details of how to do the
> review without me contacting release liaisons directly, so suggestions
> on that are welcome.

+1

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-09-08 08:13:14 -0400:
> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> > Hi everyone,
> > 
> > A feature deprecation policy is a standard way to communicate and
> > perform the removal of user-visible behaviors and capabilities. It helps
> > setting user expectations on how much and how long they can rely on a
> > feature being present. It gives them reassurance over the timeframe they
> > have to adapt in such cases.
> > 
> > In OpenStack we always had a feature deprecation policy that would apply
> > to "integrated projects", however it was never written down. It was
> > something like "to remove a feature, you mark it deprecated for n
> > releases, then you can remove it".
> > 
> > We don't have an "integrated release" anymore, but having a base
> > deprecation policy, and knowing which projects are mature enough to
> > follow it, is a great piece of information to communicate to our users.
> > 
> > That's why the next-tags workgroup at the Technical Committee has been
> > working to propose such a base policy as a 'tag' that project teams can
> > opt to apply to their projects when they agree to apply it to one of
> > their deliverables:
> > 
> > https://review.openstack.org/#/c/207467/
> > 
> > Before going through the last stage of this, we want to survey existing
> > projects to see which deprecation policy they currently follow, and
> > verify that our proposed base deprecation policy makes sense. The goal
> > is not to dictate something new from the top, it's to reflect what's
> > generally already applied on the field.
> > 
> > In particular, the current proposal says:
> > 
> > "At the very minimum the feature [...] should be marked deprecated (and
> > still be supported) in the next two coordinated end-of-cycle releases.
> > For example, a feature deprecated during the M development cycle should
> > still appear in the M and N releases and cannot be removed before the
> > beginning of the O development cycle."
> > 
> > That would be a n+2 deprecation policy. Some suggested that this is too
> > far-reaching, and that a n+1 deprecation policy (feature deprecated
> > during the M development cycle can't be removed before the start of the
> > N cycle) would better reflect what's being currently done. Or that
> > config options (which are user-visible things) should have n+1 as long
> > as the underlying feature (or behavior) is not removed.
> > 
> > Please let us know what makes the most sense. In particular between the
> > 3 options (but feel free to suggest something else):
> > 
> > 1. n+2 overall
> > 2. n+2 for features and capabilities, n+1 for config options
> > 3. n+1 overall
> > 
> > Thanks in advance for your input.
> 
> Based on my experience of projects in OpenStack projects in what they
> are doing today:
> 
> Configuration options are either N or N+1: either they are just changed,
> or there is a single deprecation cycle (i.e. deprecated by Milestone 3
> of release N, removed before milestone 1 of release N+1). I know a lot
> of projects continue to just change configs based on the number of
> changes we block landing with Grenade.
> 
> An N+1 policy for configuration seems sensible. N+2 ends up pretty
> burdensome because typically removing a config option means dropping a
> code path as well, and an N+2 policy means the person deprecating the
> config may very well not be the one removing the code, leading to debt
> or more bugs.
> 
> For features, this is all over the map. I've seen removes in 0 cycles
> because everyone is convinced that the feature doesn't work anyway (and
> had been broken for some amount of time). I've seen 1 cycle deprecations
> for minor features that are believed to be little used. In Nova we did
> XML deprecation over 2 cycles IIRC. EC2 is going to be 2+ (we're still
> waiting to get field data back on the alternate approach). The API
> version deprecations by lots of projects are measured in years at this
> point.
> 
> I feel like a realistic bit of compromise that won't drive everyone nuts
> would be:
> 
> config options: n+1
> minor features: n+1
> major features: at least n+2 (larger is ok)
> 
> And come up with some fuzzy words around minor / major features.
> 
> I also think that ensuring that any project that gets this tag publishes
> a list of deprecations in release notes would be really good. And that
> gets looked for going forward.

These times seem reasonable to me.

I'd like to come up with some way to express the time other than
N+M because in the middle of a cycle it can be confusing to know
what that means (if I want to deprecate something in August am I
far enough through the current cycle that it doesn't count?).

Also, as we start moving more projects to doing intermediate releases
the notion of a "release" vs. a "cycle" will drift apart, so we
want to talk about "stable releases" not just any old release.

Doug


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ptl][release] flushing unreleased client library changes

2015-09-08 Thread Doug Hellmann
Excerpts from Ben Swartzlander's message of 2015-09-04 16:54:13 -0400:
> 
> On 09/04/2015 03:21 PM, Doug Hellmann wrote:
> > Excerpts from Ben Swartzlander's message of 2015-09-04 14:51:10 -0400:
> >> On 09/04/2015 12:39 PM, Doug Hellmann wrote:
> >>> PTLs,
> >>>
> >>> We have quite a few unreleased client changes pending, and it would
> >>> be good to go ahead and publish them so they can be tested as part
> >>> of the release candidate process. I have the full list of changes for
> >>> each project below, so please find yours and review them and then
> >>> propose a release request to the openstack/releases repository.
> >> Manila had multiple gate-breaking bugs this week and I've extended our
> >> feature freeze to next Tuesday to compensate. As a result our L-3
> >> milestone release is not really representative of Liberty and we'd
> >> rather not do a client release until we reach RC1.
> > Keep in mind that the unreleased changes are not being used to test
> > anything at all in the gate, so there's an integration "penalty" for
> > delaying releases. You can have as many releases as you want, and we can
> > create the stable branch from the last useful release any time after it
> > is created. So, I still recommend releasing early and often unless you
> > anticipate making API or CLI breaking changes between now and RC1.
> 
> There is currently an API breaking change that needs to be fixed. It 
> will be fixed before the RC so that Kilo<->Liberty upgrades go smoothly 
> but the L-3 milestone is broken regarding forward and backward 
> compatibility.
> 
> https://bugs.launchpad.net/manila/+bug/1488624
> 
> I would actually want to release a milestone between L-3 and RC1 after 
> we get to the real Manila FF date but since that's not in line with the 
> official release process I'm okay waiting for RC1. Since there is no 
> official process for client releases (that I know about) I'd rather just 
> wait to do the client until RC1. We'll plan for an early RC1 by 
> aggressively driving the bugs to zero instead of putting time into 
> testing the L-3 milestone.

If master is broken right now, I agree it's not a good idea to
release.  That said, you still don't want to wait any later than
you have to. Gate jobs only install libraries from packages, so no
projects that are co-gating with manila, including manila itself,
are using the source version of the client library. That means when
there's a release, the new package introduces all of the new changes
into the integration tests at the same time.

We want to release clients as often as possible to keep the number
of changes small. This is why we release Oslo libraries weekly --
we still break things once in a while, but when we do we have a
short list of changes to look at to figure out why.

I'll be proposing that we do a weekly client change review for all
managed clients starting next cycle, and release when there are changes
that warrant (probably not just for requirements changes, unless
it's necessary). I haven't worked out the details of how to do the
review without me contacting release liaisons directly, so suggestions
on that are welcome.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler hints, API and Objects

2015-09-08 Thread Andrew Laski

On 09/07/15 at 09:27am, Ken'ichi Ohmichi wrote:

Hi Andrew,

2015-09-04 23:45 GMT+09:00 Andrew Laski :


Now we are discussing this on https://review.openstack.org/#/c/217727/
for allowing out-of-tree scheduler-hints.
When we wrote the API schema for scheduler-hints, it was difficult to know
what the available API parameters for scheduler-hints were.
The current API schema exposes them, and I guess that is useful for API
users also.

One idea is this: how about auto-extending the scheduler-hint API schema
based on the loaded schedulers?
Today the API schemas of the "create/update/resize/rebuild a server" APIs
are auto-extended based on loaded extensions by using the stevedore
library[1].
I guess we can apply the same approach to scheduler-hints in the long term.
Each scheduler needs to implement a method which returns its available API
parameter formats, and nova-api tries to get them and then extends the
scheduler-hints API schema with them.
That means out-of-tree schedulers will also be available if they
implement the method.
# In the short term, I can see the "blocking additionalProperties"
validation being disabled, by the way.



https://review.openstack.org/#/c/220440 is a prototype for the above idea.
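
To make the idea concrete, here is a rough sketch of how the merging
could work (the get_hint_schema() method and the filter class are
made-up names for illustration, not the actual prototype code):

# Illustrative sketch only -- not the real prototype.  Assumes each
# loaded filter may expose a hypothetical get_hint_schema() method
# describing its hint parameters as JSON-Schema properties.

class GroupAffinityFilter(object):
    """A hypothetical filter advertising its hint format."""

    @staticmethod
    def get_hint_schema():
        return {'group': {'type': 'string', 'format': 'uuid'}}


def build_hints_schema(loaded_filters):
    """Merge the hint schemas of all loaded filters, in-tree or not."""
    properties = {}
    for f in loaded_filters:
        if hasattr(f, 'get_hint_schema'):
            properties.update(f.get_hint_schema())
    return {
        'type': 'object',
        'properties': properties,
        # strict validation becomes possible once every loaded
        # filter advertises its hints:
        'additionalProperties': False,
    }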



I like the idea of providing strict API validation for the scheduler hints
if it accounts for out of tree extensions like this would do.  I do have a
slight concern about how this works in a world where the scheduler does
eventually get an HTTP interface that Nova uses and the code isn't
necessarily accessible, but that can be worried about later.

This does mean that the scheduler hints are not controlled by microversions
though, since we don't have a mechanism for out of tree extensions to signal
their presence that way.  And even if they could it would still mean that
identical microversions on different clouds wouldn't offer the same hints.
If we're accepting of that, which isn't really any different than having
"additionalProperties: True", then this seems reasonable to me.


In the short term, yes: that is almost the same as "additionalProperties:
True". But in the long term, no. Having each scheduler-hint parameter
described with JSON-Schema will be more useful than "additionalProperties:
True", because API parameters will be exposed in JSON-Schema format
via JSON-Home or something similar.
If we allow customization of scheduler-hints, like new filters or
out-of-tree filters without microversions, API users cannot know the
available scheduler-hint parameters from the microversion number.
It will be helpful for API users if Nova can provide the available
parameters via JSON-Home or something similar.


The issue that I still have is that I don't believe that scheduler hints 
belong in the interoperable cloud story, at least not any time soon.  I 
think scheduling is one place that different cloud providers can 
distinguish themselves and I don't think there's anything wrong with 
that.  It's very coupled to the underlying infrastructure that runs the 
cloud and I haven't yet seen the proper abstraction that can properly 
reconcile the differences that happen there between different clouds, at 
least beyond the simple level of host affinity.  Now after saying that I 
would love to find a solution that allows for a strict API around 
scheduling while still providing flexibility to cloud providers.  I 
don't assume it can't be done, I just don't think we're at a place where 
adding strictness adds any real value.


I would compare this to flavor extra specs.  There are a lot of 
proposals to do things with extra specs which we would not want to 
introduce to Nova in that way.  However there are clouds out there that 
have out of tree code that relies on data in flavor extra specs.  And 
discussions that I've been involved in around that have focused on how 
to introduce those concepts into Nova in a standard way that doesn't 
rely on an unversioned key/value store like extra specs.  The solution 
hasn't been to introduce a schema on extra specs and lock them down so 
they share meaning across clouds.  It's been to acknowledge that extra 
specs is a mess that doesn't provide what we want in a manageable way so 
we should deprecate it's usage in favor of better methods.  I think the 
same applies to scheduler hints.  Let's acknowledge that they're a mess 
and rather than trying to impose order on them we should focus on other 
improvements around scheduling.  My big fear is still that we introduce 
microversion 2.42 which adds scheduler hint foo which is now a permanent 
part of the Nova API.  And what contortions will we need to go through 
to maintain that if we get to a point where the scheduler is no longer 
in Nova or that hint for some reason no longer makes logical sense.





Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-08 Thread Vladimir Kozhukalov
Sorry, fat fingers => early sending.

=
Dear colleagues,

The idea is to remove MOS DEB repo from the Fuel master node by default and
use online MOS repo instead. Pros of such an approach are:

0) Reduced requirement for the master node minimal disk space
1) There won't be such things as [1] and [2], thus a less complicated
flow, fewer errors, easier to maintain, easier to understand, easier to
troubleshoot
2) If one wants to have a local mirror, the flow is the same as in the
case of upstream repos (fuel-createmirror), which is clear for a user to
understand.

Many people still associate the ISO with MOS, but that is no longer true
when using a package-based delivery approach.

It is easy to define the necessary repos during deployment, and thus easy
to control exactly what is going to be installed on the slave nodes.

What do you guys think of it?



Vladimir Kozhukalov

On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced requirement for the master node minimal disk space
> 1) There won't be such things as [1] and [2], thus a less complicated
> flow, fewer errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have a local mirror, the flow is the same as in the
> case of upstream repos (fuel-createmirror), which is clear for a user to
> understand.
>
> Many people still associate ISO with MOS
>
>
>
>
>
> [1]
> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
> [2]
> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>
>
> Vladimir Kozhukalov
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Status of CI changes

2015-09-08 Thread Derek Higgins



On 03/09/15 07:34, Derek Higgins wrote:

Hi All,

The patch to reshuffle our CI jobs has merged[1], along with the patch
to switch the f21-noha job to be instack based[2] (with centos images).

So the current status is that our CI has been removed from most of the
non-tripleo projects (with the exception of nova/neutron/heat and ironic,
where it is only available with check experimental until we are sure it's
reliable).

The last big move is to pull in some repositories into the upstream[3]
gerrit so until this happens we still have to worry about some projects
being on gerrithub (the instack based CI pulls them in from gerrithub
for now). I'll follow up with a mail once this happens


This has happened; as of now we should be developing the following 
repositories on https://review.openstack.org/#/


http://git.openstack.org/cgit/openstack/instack/
http://git.openstack.org/cgit/openstack/instack-undercloud/
http://git.openstack.org/cgit/openstack/tripleo-docs/
http://git.openstack.org/cgit/openstack/python-tripleoclient/



A lot of CI stuff still needs to be worked on (and improved) e.g.
  o Add ceph support to the instack based job
  o Add ha support to the instack based job
  o Improve the logs exposed
  o Pull out a lot of workarounds that have gone into the CI job
  o move out some of the parts we still use in tripleo-incubator
  o other stuff

Please make yourself known if you're interested in any of the above

thanks,
Derek.

[1] https://review.openstack.org/#/c/205479/
[2] https://review.openstack.org/#/c/185151/
[3] https://review.openstack.org/#/c/215186/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-08 Thread Vladimir Kozhukalov
Dear colleagues,

The idea is to remove MOS DEB repo from the Fuel master node by default and
use online MOS repo instead. Pros of such an approach are:

0) Reduced requirement for the master node minimal disk space
1) There won't be such things in like [1] and [2], thus less complicated
flow, less errors, easier to maintain, easier to understand, easier to
troubleshoot
2) If one wants to have local mirror, the flow is the same as in case of
upstream repos (fuel-createmirror), which is clrear for a user to
understand.

Many people still associate ISO with MOS





[1]
https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
[2]
https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [depfreeze] [keystone] Set minimum version for passlib

2015-09-08 Thread Alan Pevec
Hi all,

according to https://wiki.openstack.org/wiki/DepFreeze I'm requesting
depfreeze exception for
https://review.openstack.org/221267
This is just a sync with reality, copying Javier's description:

(Keystone) commit a7235fc0511c643a8441efd3d21fc334535066e2 [1] uses
passlib.utils.MAX_PASSWORD_SIZE, which was only introduced to
passlib in version 1.6

Cheers,
Alan

[1] https://review.openstack.org/217449

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jay Dobies
I like where this is going. I've been asked a number of times where to 
put things and we never had a solid convention. I like the idea of 
having that documented somewhere.


I like either of the proposed solutions. My biggest concern is that they 
don't capture how you actually use them. I know that was the point of 
your e-mail; we don't yet have the Heat constructs in place for the 
templates to convey that information.


What about if we adopt the directory structure model and strongly 
request a README.md file in there? It's similar to the image elements 
model. We could offer a template to fill out or leave it open ended, but 
the purpose would be to specify:


- Installation instructions (e.g. "set the resource registry namespace 
for Blah to point to this file" or "use the corresponding environment 
file foo.yaml")
- Parameters that can/should be specified via parameter_defaults. I'm 
not saying we add a ton of documentation in there that would be 
duplicate of the actual parameter definitions, but perhaps just a list 
of the parameter names. That way, a user can have an idea of what 
specifically to look for in the template parameter list itself.


That should be all of the info that we'd like Heat to eventually provide 
and hold us over until those discussions are finished.
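
For example, a strawman README.md for such a directory could be as
simple as the following (everything in angle brackets is a placeholder):

# <Plugin name>

## Installation
Add "-e environments/neutron-ml2/<plugin>.yaml" to your deploy
command, or point the relevant resource registry entry at this
template.

## Parameters settable via parameter_defaults
- <ParamA>
- <ParamB>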


On 09/08/2015 08:20 AM, Jiří Stránský wrote:

On 8.9.2015 13:47, Jiří Stránský wrote:

Apart from "cinder" and "neutron-ml2" directories, we could also have a
"combined" (or sth similar) directory for env files which combine
multiple other env files. The use case which i see is for extra
pre-deployment configs which would be commonly used together. E.g.
combining Neutron and Horizon extensions of a single vendor [4].


Ah, I mixed up two things in this paragraph -- env files vs. extraconfig
nested stacks. Not sure if we want to start namespacing the extraconfig
bits in a parallel manner. E.g.
"puppet/extraconfig/pre_deploy/controller/cinder",
"puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be
nice, especially if we're sort of able to map the extraconfig categories
to env file categories most of the time. OTOH the directory nesting is
getting quite deep there :)


That was my thought too, that the nesting is getting a bit deep. I also 
don't think we should enforce the role in the directory structure as 
we've already seen instances of things that have to happen on both 
controller and compute.




J.


[4]
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] pci-passtrough and neutron multi segment networks

2015-09-08 Thread Vladyslav Gridin
Hi All,

Is there a way to successfully deploy a VM with an SR-IOV NIC
on both a single-segment VLAN network and a multi-provider network
containing a VLAN segment?
When Nova builds the PCI request for the NIC, it looks for
'physical_network' at the network level, but for multi-provider networks
this is set within a segment.

e.g.
RESP BODY: {"network": {"status": "ACTIVE", "subnets":
["3862051f-de55-4bb9-8c88-acd675bb3702"], "name": "sriov",
"admin_state_up": true, "router:external": false, "segments":
[{"provider:segmentation_id": 77, "provider:physical_network": "physnet1",
"provider:network_type": "vlan"}, {"provider:segmentation_id": 35,
"provider:physical_network": null, "provider:network_type": "vxlan"}],
"mtu": 0, "tenant_id": "bd3afb5fac0745faa34713e6cada5a8d", "shared": false,
"id": "53c0e71e-4c9a-4a33-b1a0-69529583e05f"}}


So, if the pci_passthrough_whitelist on my compute node contains a
physical_network, deployment will fail on the multi-segment network, and
vice versa.
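
A possible direction (a rough sketch only, not the actual Nova code)
would be to fall back to the segment list when the network-level
attribute is missing:

# Sketch: derive physical_network from the network body, falling
# back to a VLAN/flat segment of a multi-provider network.

def get_physical_network(network):
    phynet = network.get('provider:physical_network')
    if phynet:
        return phynet
    for segment in network.get('segments') or []:
        if segment.get('provider:network_type') in ('vlan', 'flat'):
            phynet = segment.get('provider:physical_network')
            if phynet:
                return phynet
    return None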

Thanks,
Vlad.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] python-jobs now vote on Fuel Client

2015-09-08 Thread Roman Prykhodchenko
Good news folks!

Since python jobs worked well on a number of patches, their mode was switched 
to voting. They were also added to the gate pipeline.


- romcheg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Aodh] event-alarm fire policy

2015-09-08 Thread Zhai, Edwin

Liusheng,
Thanks for your idea. I think it guarantees the alarm action gets called,
but I just want it fired upon each matching event, e.g. for each instance
crash event.


I have talked with MIBU; repeat-actions can be used for this purpose.

Best,
Edwin

On Tue, 8 Sep 2015, liusheng wrote:

Just a personal thought: can we add an ACK to the alarm notifier? When an 
event-alarm fires, the alarm state transitions to "alarm". If 
'alarm_action' has been set, the 'alarm_action' will be triggered and will 
notify. For event-alarms, a timeout can be set to wait for the ACK from 
the alarm notifier: if the ACK is received, reset the alarm state to OK; 
if the timeout occurs, set the alarm state to 'UNKNOWN'. If 'alarm_action' 
has not been set, we just need to record the alarm state transition history.
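
A rough sketch of that flow (all the names here are hypothetical, just
to illustrate the state transitions):

import eventlet


def fire_event_alarm(alarm, notifier, ack_timeout=30):
    alarm.state = 'alarm'
    if not alarm.alarm_action:
        alarm.record_transition()        # just keep the history
        return
    notifier.notify(alarm.alarm_action)
    try:
        with eventlet.Timeout(ack_timeout):
            notifier.wait_for_ack()      # block until the ACK arrives
        alarm.state = 'ok'               # ACK received: re-arm the alarm
    except eventlet.Timeout:
        alarm.state = 'unknown'          # no ACK within the timeout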


On 2015/9/8 7:26, Zhai, Edwin wrote:

All,
Currently, event-alarm is one-shot style: it doesn't fire again for the 
same event. But threshold-alarm is a limited periodic style:

1. You only get one fire for continuous valid datapoints.
2. You would get a new fire if insufficient data is followed by valid 
datapoints, as we reset the alarm state upon insufficient data.


So maybe event-alarm should be periodic also. But I'm not sure when to 
reset the alarm state to 'UNKNOWN': after each fire, or upon receiving a 
different event.


Filed a bug @
https://bugs.launchpad.net/aodh/+bug/1493171

Best Rgds,
Edwin

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Best Rgds,
Edwin
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-08 Thread Gal Sagie
Hi Germy,

Yes, I understand now.
What you request is an enhancement to the API to be able to assign these
port-forwarding rules in bulk per subnet.
I will make sure to mention this in the spec that I am writing for this.
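
Just to illustrate the idea (the exact API will be defined in the spec),
a bulk assignment over an IP range could be computed roughly like this:

# Illustrative only: derive the external port from a base port plus
# the last octet of each private address, as in your example below.

def bulk_port_mapping(floating_ip, base_port, first_ip, last_ip,
                      inside_port=80):
    prefix, first = first_ip.rsplit('.', 1)
    last = last_ip.rsplit('.', 1)[1]
    mappings = []
    for octet in range(int(first), int(last) + 1):
        mappings.append(('%s:%d' % (floating_ip, base_port + octet),
                         '%s.%d:%d' % (prefix, octet, inside_port)))
    return mappings

# bulk_port_mapping('172.20.20.10', 4000, '10.0.0.1', '10.0.0.100')
# yields 172.20.20.10:4001 => 10.0.0.1:80 ...
#        172.20.20.10:4100 => 10.0.0.100:80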

Thanks!
Gal.


On Tue, Sep 8, 2015 at 10:33 AM, Germy Lure  wrote:

> Hi Gal,
>
> Thank you for your explanation.
> As you mentioned, PF is a way of reusing floating IP to access several
> Neutron ports. I agree with your point of view completely.
> Let me extend your example to explain where I was going.
> T1 has 20 subnets behind a router, and one of them is 10.0.0.0/24 named
> s1. There are 100 VMs named VM1~VM100 in the subnet s1, and T1 wants to
> update the same file (or something else) in those VMs. Let's have a look
> at how T1 will do it.
>
> T1 invokes the Neutron API to create a port-mapping for VM1 (maybe that
> will be done by an operator).
> For example :  172.20.20.10:4001  =>  10.0.0.1:80
> And then T1 does the update task via 172.20.20.10:4001.
>
> Now for VM2, VM3, ... VM100, T1 must repeat the steps above with
> different ports. And T1 must clean up those records (100 records in the
> DB) after accessing. That's bad, I think.
> Note that T1 still has 19 subnets to deal with. That's a nightmare for
> T1.
> For PaaS and SaaS, that is also big trouble.
>
> So, can we do it like this?
> T1 invokes the Neutron API one time for s1 (not VM1), and Neutron sets
> up a group of port mappings. For example:
> 172.20.20.10:4001  =>  10.0.0.1:80
> 172.20.20.10:4002  =>  10.0.0.2:80
> 172.20.20.10:4003  =>  10.0.0.3:80
> ..   ..
> 172.20.20.10:4100  =>  10.0.0.100:80
> Now T1 just needs to focus on his/her business work, not PF.
>
> We just store one record in the Neutron DB for such a one-time API
> invocation. For the single-VM scene, we can specify a private IP range
> instead of a subnet, for example 10.0.0.1 to 10.0.0.3. The mapped ports
> (like 4001, 4002, ...) can be returned in the response body, for example
> 4001 to 4003; or we can just return a base number (4000) and let the
> upper layer derive the rest, for example 4000+1, where 1 is the last
> octet of the private IP address of VM1.
>
> Forgive my poor English.
> Hope that's clear enough, and I am happy to discuss it further if
> necessary.
>
> Germy
>
>
> On Tue, Sep 8, 2015 at 1:58 PM, Gal Sagie  wrote:
>
>> Hi Germy,
>>
>> Port forwarding, the way I see it, is a way of reusing the same floating
>> IP to access several different Neutron ports (VMs, containers).
>> So for example, if we have floating IP 172.20.20.10, we can assign
>> 172.20.20.10:4001 to VM1 and 172.20.20.10:4002 to VM2 (which are behind
>> the same router,
>> which has an external gw).
>> The user uses the same IP, but based on the TCP/UDP port, Neutron
>> performs mapping in the virtual router namespace to the private IP, and
>> possibly to a different port
>> that a service is listening on in that instance, for example port 80.
>>
>> So for example if we have two VM's with private IP's 10.0.0.1 and
>> 10.0.0.2 and we have a floating ip assigned to the router of 172.20.20.10
>> with port forwarding we can build the following mapping:
>>
>> 172.20.20.10:4001  =>  10.0.0.1:80
>> 172.20.20.10:4002  =>  10.0.0.2:80
>>
>> And this is only from the Neutron API; this feature is useful when you
>> offer PaaS or SaaS and have an automated framework that calls the API
>> to allocate these "client ports".
>>
>> I am not sure why you think the operator will need to ssh to the
>> instances; the operator just needs to build the mapping of <floating IP,
>> external port> to the instance private IP.
>> Of course, keep in mind that we didn't yet discuss the full API details,
>> but it's going to be something like that (at least the way I see it).
>>
>> Hope that explains it.
>>
>> Gal.
>>
>> On Mon, Sep 7, 2015 at 5:21 AM, Germy Lure  wrote:
>>
>>> Hi Gal,
>>>
>>> I'm sorry for my poor English. Let me try again.
>>>
>>> What an operator wants to access is several related instances, instead
>>> of only one, or one by one. The use case is periodic checks and
>>> maintenance. RELATED means the instances may be in one subnet, or one
>>> network, or on one host. The host scene is similar to accessing the
>>> Docker containers on the host, as you mentioned before.
>>>
>>> With the API you mentioned, the user must ssh to an instance and then
>>> invoke the API to update the IP address and port, or even create a new
>>> PF rule to access another one. It will be a nightmare for a VPC
>>> operator who owns so many instances.
>>>
>>> In a word, I think the "inside_addr" should be "subnet" or "host".
>>>
>>> Hope this is clear enough.
>>>
>>> Germy
>>>
>>> On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie  wrote:
>>>
 Hi Germy,

 I am not sure i understand what you mean, can you please explain it
 further?

 Thanks
 Gal.

 On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure 
 wrote:

> Hi, Gal
>
> Thank you for bringing this up. But I have some suggestions for the
> API.
>
> An operator or some other component wants to reach several VMs related

Re: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core

2015-09-08 Thread Stan Lagun
+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov 
wrote:

> +1. Well deserved.
>
> --
> Regards,
> Alexander Tivelkov
>
> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin 
> wrote:
>
>> +1 from me ;)
>>
>> --
>> Victor Ryzhenkin
>> Junior QA Engineer
>> freerunner on #freenode
>>
>> On September 1, 2015 at 12:18:19, Ekaterina Chernova (
>> efedor...@mirantis.com) wrote:
>>
>> +1
>>
>> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii 
>> wrote:
>>
>>> +1
>>>
>>> 2015-09-01 2:24 GMT+03:00 Serg Melikyan :
>>>
 +1

 On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev 
 wrote:

> I’m pleased to nominate Nikolai for Murano core.
>
> He’s been actively participating in development of murano during
> liberty and is among top5 contributors during last 90 days. He’s also
> leading the CloudFoundry integration initiative.
>
> Here are some useful links:
>
> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
> List of reviews:
> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
> Murano contribution during latest 90 days
> http://stackalytics.com/report/contribution/murano/90
>
> Please vote with +1/-1 for approval/objections
>
> --
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Mitaka proposed design sessions

2015-09-08 Thread James Slagle
Hi everyone,

I started an etherpad to capture some ideas for the TripleO design
sessions in Tokyo:
https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions

Please add your ideas and proposals to the etherpad. Once we have some
set of proposals, we can come back around and have everyone assign a
ranking to them so we can pick the actual sessions we'll have.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jiří Stránský

On 8.9.2015 13:47, Jiří Stránský wrote:

Apart from "cinder" and "neutron-ml2" directories, we could also have a
"combined" (or sth similar) directory for env files which combine
multiple other env files. The use case which i see is for extra
pre-deployment configs which would be commonly used together. E.g.
combining Neutron and Horizon extensions of a single vendor [4].


Ah, I mixed up two things in this paragraph -- env files vs. extraconfig 
nested stacks. Not sure if we want to start namespacing the extraconfig 
bits in a parallel manner. E.g. 
"puppet/extraconfig/pre_deploy/controller/cinder", 
"puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be 
nice, especially if we're sort of able to map the extraconfig categories 
to env file categories most of the time. OTOH the directory nesting is 
getting quite deep there :)


J.


[4]
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-08 Thread Sean Dague
On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> A feature deprecation policy is a standard way to communicate and
> perform the removal of user-visible behaviors and capabilities. It helps
> setting user expectations on how much and how long they can rely on a
> feature being present. It gives them reassurance over the timeframe they
> have to adapt in such cases.
> 
> In OpenStack we always had a feature deprecation policy that would apply
> to "integrated projects", however it was never written down. It was
> something like "to remove a feature, you mark it deprecated for n
> releases, then you can remove it".
> 
> We don't have an "integrated release" anymore, but having a base
> deprecation policy, and knowing which projects are mature enough to
> follow it, is a great piece of information to communicate to our users.
> 
> That's why the next-tags workgroup at the Technical Committee has been
> working to propose such a base policy as a 'tag' that project teams can
> opt to apply to their projects when they agree to apply it to one of
> their deliverables:
> 
> https://review.openstack.org/#/c/207467/
> 
> Before going through the last stage of this, we want to survey existing
> projects to see which deprecation policy they currently follow, and
> verify that our proposed base deprecation policy makes sense. The goal
> is not to dictate something new from the top, it's to reflect what's
> generally already applied on the field.
> 
> In particular, the current proposal says:
> 
> "At the very minimum the feature [...] should be marked deprecated (and
> still be supported) in the next two coordinated end-of-cycle releases.
> For example, a feature deprecated during the M development cycle should
> still appear in the M and N releases and cannot be removed before the
> beginning of the O development cycle."
> 
> That would be a n+2 deprecation policy. Some suggested that this is too
> far-reaching, and that a n+1 deprecation policy (feature deprecated
> during the M development cycle can't be removed before the start of the
> N cycle) would better reflect what's being currently done. Or that
> config options (which are user-visible things) should have n+1 as long
> as the underlying feature (or behavior) is not removed.
> 
> Please let us know what makes the most sense. In particular between the
> 3 options (but feel free to suggest something else):
> 
> 1. n+2 overall
> 2. n+2 for features and capabilities, n+1 for config options
> 3. n+1 overall
> 
> Thanks in advance for your input.

Based on my experience of projects in OpenStack projects in what they
are doing today:

Configuration options are either N or N+1: either they are just changed,
or there is a single deprecation cycle (i.e. deprecated by Milestone 3
of release N, removed before milestone 1 of release N+1). I know a lot
of projects continue to just change configs based on the number of
changes we block landing with Grenade.

An N+1 policy for configuration seems sensible. N+2 ends up pretty
burdensome because typically removing a config option means dropping a
code path as well, and an N+2 policy means the person deprecating the
config may very well not be the one removing the code, leading to debt
or more bugs.

For features, this is all over the map. I've seen removes in 0 cycles
because everyone is convinced that the feature doesn't work anyway (and
had been broken for some amount of time). I've seen 1 cycle deprecations
for minor features that are believed to be little used. In Nova we did
XML deprecation over 2 cycles IIRC. EC2 is going to be 2+ (we're still
waiting to get field data back on the alternate approach). The API
version deprecations by lots of projects are measured in years at this
point.

I feel like a realistic bit of compromise that won't drive everyone nuts
would be:

config options: n+1
minor features: n+1
major features: at least n+2 (larger is ok)

And come up with some fuzzy words around minor / major features.

I also think that ensuring that any project that gets this tag publishes
a list of deprecations in release notes would be really good. And that
gets looked for going forward.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] GSLB

2015-09-08 Thread Anik
Hello,
Recently I saw some discussions in the Designate mailer archive around GSLB
and saw some API snippets subsequently. It seems like early days for this
project, but I am highly excited that there is now some traction on GSLB.
I would like to find out [1] whether there have been discussions around how
GSLB will work across multiple OpenStack regions and [2] the level of
integration planned between Designate and GSLB. Any pointers in this regard
will be helpful.
Regards, Anik 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jiří Stránský

On 8.9.2015 10:40, Steven Hardy wrote:

Hi all,

So, lately we're seeing an increasing number of patches adding integration
for various third-party plugins, such as different neutron and cinder
backends.

This is great to see, but it also poses the question of how we organize the
user-visible interfaces to these things long term.

Originally, I was hoping to land some Heat composability improvements[1]
which would allow for tagging templates as providing a particular
capability (such as "provides neutron ML2 plugin"), but this has stalled on
some negative review feedback and isn't going to be implemented for
Liberty.

However, today looking at [2] and [3], (which both add t-h-t integration to
enable neutron ML2 plugins), a simpler interim solution occurred to me,
which is just to make use of a suggested/mandatory naming convention.

For example:

environments/neutron-ml2-bigswitch.yaml
environments/neutron-ml2-cisco-nexus-ucsm.yaml

Or via directory structure:

environments/neutron-ml2/bigswitch.yaml
environments/neutron-ml2/cisco-nexus-ucsm.yaml


+1 for this one ^



This would require enforcement via code-review, but could potentially
provide a much more intuitive interface for users when they go to create
their cloud, and particularly it would make life much easier for any Ux to
ask "choose which neutron-ml2 plugin you want", because the available
options can simply be listed by looking at the available environment
files?


Yeah, I like the idea of more structure in placing the environment files. 
It seems like customization of deployment via those files is becoming 
common, so we might see more environment files appearing over time.




What do folks think of this, is now a good time to start enforcing such a
convention?


We'd probably need to do this at some point anyway, and sooner seems 
better than later :)



Apart from "cinder" and "neutron-ml2" directories, we could also have a 
"combined" (or sth similar) directory for env files which combine 
multiple other env files. The use case which i see is for extra 
pre-deployment configs which would be commonly used together. E.g. 
combining Neutron and Horizon extensions of a single vendor [4].


Maybe also a couple of other categories could be found like "network" 
(for things related mainly to network isolation) or "devel" [5].



Jirka

[4] 
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml
[5] 
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/puppet-ceph-devel.yaml




Steve

[1] https://review.openstack.org/#/c/196656/
[2] https://review.openstack.org/#/c/213142/
[3] https://review.openstack.org/#/c/198754/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] profiling Latency of single PUT operation on proxy + storage

2015-09-08 Thread Kirubakaran Kaliannan
Hi All,



I have attached a simple timeline chart of proxy + object latency for a
single PUT request. Please check it.



I am profiling the Swift proxy + object server to improve the latency of a
single PUT request. This may help to improve the overall OPS performance.

Test configuration: 4 CPUs + 16 GB + 1 proxy node + 1 storage node, 1
replica for the object ring and 3 replicas for the container ring, on SSD;
perform 4 KB PUT requests (one by one).

Every 4 KB PUT request in the above case takes 22 ms (30 ms with a replica
count of 3 for objects). The target is to bring each 4 KB PUT request below
10 ms, to double the overall OPS performance.



There are some potential places where we can improve the latency to achieve
this. Can you please share your thoughts on the following?



Performance optimization-1: The proxy server shouldn't have to block in
connect()/getexpect() until the object server responds.

Problem today: On a PUT request, the proxy server's _connect_put_node()
waits for the response from the object server (getexpect()) after the
connection is established. Once the response ('HTTP_CONTINUE') is received,
the proxy server goes ahead and spawns the send_file thread to send data to
the object servers. The code path looks serialized between the proxy and
the object server.

Optimization:

Option 1: Avoid waiting for all the connects to complete before proceeding
with send_data to the already-connected object servers.

Option 2: The purpose of the getexpect() call is not very clear. Can we
relax this, so that the proxy server goes ahead, reads the data_source, and
sends it to the object server right after the connection is established? We
may have to handle extra failure cases here. (FYI: this saves 3 ms for a
single PUT request.)

def _connect_put_node(self, nodes, part, path, headers,
                      logger_thread_locals, req):
    """Method for a file PUT connect"""
    ...
    # the proxy blocks here until the object server answers the
    # "Expect: 100-continue" handshake
    with Timeout(self.app.node_timeout):
        resp = conn.getexpect()
    ...
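
For Option 1, a rough sketch of what the parallel version could look
like, assuming eventlet (this is not the actual Swift code;
_make_connection() is a hypothetical helper, and HTTP_CONTINUE stands
for the 100 status constant):

from eventlet import GreenPile


def _connect_and_expect(self, node, part, path, headers):
    conn = self._make_connection(node, part, path, headers)
    with Timeout(self.app.node_timeout):
        resp = conn.getexpect()
    return conn if resp.status == HTTP_CONTINUE else None


def _connect_put_nodes(self, nodes, part, path, headers):
    # wait for all 100-Continue responses concurrently, not serially
    pile = GreenPile(len(nodes))
    for node in nodes:
        pile.spawn(self._connect_and_expect, node, part, path, headers)
    return [conn for conn in pile if conn is not None]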



Performance optimization-2: The object server serializes the
container_update after the data write.

Problem today: On a PUT request, after the object server writes the data
and metadata, container_update() is called, which is serialized across all
storage nodes (3-way). Each container update takes 3 ms, which adds up to
9 ms for the container_update to complete.

Optimization: Can we make this parallel using green threads, and possibly
return success on the first successful container update, if there is no
connection error? I am trying to understand whether this would have any
data integrity issues; can you please provide your feedback on this?

(FYI: this saves at least 5 ms.)
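
A sketch of the parallel version (illustrative only; error handling and
the fallback to async pendings are elided, and _do_container_update()
stands in for the existing per-node update):

import eventlet
from eventlet.queue import Queue


def container_update_parallel(self, updates):
    """Fire all container updates concurrently; return on first success."""
    results = Queue()

    def _one(args):
        results.put(self._do_container_update(*args))

    for args in updates:
        eventlet.spawn_n(_one, args)
    for _ in updates:
        if results.get():     # results arrive in completion order
            return True
    return False              # every replica failed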



Performance optimization-3: write(metadata) in the object server takes 2 to
3 ms.

Problem today: After writing the data to the file, writer.put(metadata)
calls _finalize_put() to process the post-write operation. This takes an
average of 3 ms for every PUT request.

Optimization:

Option 1: Is it possible to flush the file (or a group of files)
asynchronously in _finalize_put()?

Option 2: Can we make this put(metadata) an asynchronous call, so the
container update can happen in parallel? Error conditions must be handled
properly.
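
For Option 1, a sketch of deferring the durable flush out of the request
path (illustrative only; write_metadata() is assumed to be the existing
xattr helper, and the data-integrity question above still applies):

import os
from eventlet import spawn_n, tpool


def _finalize_put_async(self, fd, metadata):
    write_metadata(fd, metadata)

    def _flush_and_close():
        tpool.execute(os.fsync, fd)   # blocking fsync runs in a real thread
        os.close(fd)

    spawn_n(_flush_and_close)         # respond without waiting for the flush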



I would like to know whether any work has been done in this area, so as
not to repeat the effort.



The motivation for this work is that 30 ms for a single 4 KB I/O looks too
high. With that, the only way to scale is to add more servers. I am trying
to see whether we can achieve anything quickly by modifying some portion of
the code, or whether this may require quite a bit of rewriting.



Also, please tell me whether this approach of working on the latency of a
single PUT request is the right one.





Thanks

-kiru



From: Shyam Kaushik [mailto:sh...@zadarastorage.com]
Sent: Friday, September 04, 2015 11:53 AM
To: Kirubakaran Kaliannan
Subject: RE: profiling per I/O logs



Hi Kiru,



I listed a couple of optimization options below. Can you please list 3-4
optimizations in a similar format and pass them back to me for a quick
review? Once we finalize, let's bounce them off the community to see what
they think.



Performance optimization-1: Proxy server - on a PUT request, drive the
client side independently of auth/object-server connection establishment.

Problem today: On a PUT request, the client connects and puts a header to
the proxy server. The proxy server goes to auth and then looks up the ring,
connecting to each object server and sending a header. Then, when the
object servers accept the connection, the proxy server sends HTTP continue
to the client, and now the client writes data into the proxy server, which
then writes data to the object servers.

Optimization: The proxy server can drive the client side independently of
the backend side, i.e. once auth completes, the proxy server, through a
thread, can send HTTP continue to the client and ask for the data to be
written. In the background it can try to connect to the object servers,
writing the header. This way, when the

Re: [openstack-dev] [Fuel][Plugins] Deployment order with custom role

2015-09-08 Thread Swann Croiset
On Mon, Sep 7, 2015 at 4:25 PM, Igor Kalnitsky 
wrote:

> > that said, I'm not sure anchors are sufficient; we still need
> > priorities to specify order for independent and/or optional plugins
> > (they don't know each other)
>
> If you need this, you're probably doing something wrong. Priorities
> won't solve your problem here, because plugins will need to know about
> priorities in other plugins and that's weird.

Yes, that's weird; by convention all LMA plugins have priorities well
defined between each other. It works well as long as we manage all the
related plugins, but it reaches its limit for other plugins if we don't
align them together.
This kind of workaround was the only solution at the time...


> The only working
> solution here is to make plugin to know about other plugin if it's
> important to make deployment precisely after other plugin.
>
> > So I guess this will break things if we reference in 'require' a
> > nonexistent plugin-a-task.
>
> That's true. I think the right case here is to implement some sort of
> conditional tasks, so different tasks will be executed in different
> cases.
>
> Conditional tasks sound good indeed; how can we bootstrap this feature?


> > About tasks.yaml, we must support it until an equivalent 'deployment
> order'
> > is implemented with plugin-custom-role feature.
>
This is not about plugin-custom-role, this is about our task
> deployment framework. I heard there were some plans on its
> improvements.
>
> From the POV of plugin development, priorities did the trick so far,
even if it doesn't look so natural.
I'm just speaking up to preserve the same flexibility in the next
plugin framework releases.
If this effort can be made across the whole Fuel internals so that plugins
can benefit from it, I would be happy.
Do you have any pointers about these rumours, any BP?

--
BR
Swann


> Regards,
> Igor
>
> On Mon, Sep 7, 2015 at 3:27 PM, Swann Croiset 
> wrote:
> >
> >
> > On Mon, Sep 7, 2015 at 11:12 AM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Hi Swann,
> >>
> >> > However, we still need deployment order between independent
> >> > plugins and it seems impossible to define the priorities
> >>
> >> There's no such things like priorities for now.. perhaps we can
> >> introduce some kind of anchors instead of priorities, but that's
> >> another story.
> >
> > yes, it's another story for the next release(s); anchors could reuse
> > the actual convention of ranges used (disk, network, software,
> > monitoring)
> > that said, I'm not sure anchors are sufficient; we still need
> > priorities to specify order for independent and/or optional plugins
> > (they don't know each other)
> >
> >
> >>
> >> Currently the only way to synchronize two plugins is to make one to
> >> know about other one. That means you need to properly setup "requires"
> >> field:
> >>
> >> - id: my-plugin-b-task
> >>   type: puppet
> >>   role: [my-plugin-b-role]
> >>   required_for: [post_deployment_end]
> >>   requires: [post_deployment_start, PLUGIN-A-TASK]
> >>   parameters:
> >> puppet_manifest: some-puppet.pp
> >> puppet_modules: /etc/puppet/modules
> >> timeout: 3600
> >> cwd: /
> >>
> > We thought about this solution, _but_ in our case we cannot use it,
> > because the plugin is optional and may not be installed/enabled. So I
> > guess this will break things if we reference a nonexistent
> > plugin-a-task in 'requires'.
> > For example with the LMA plugins, the LMA-Collector plugin must be
> > deployed/installed before the LMA-Infrastructure-Alerting plugin (to
> > avoid false UNKNOWN-state alerts), but the latter may not be enabled
> > for the deployment.
> >
> >> Thanks,
> >> Igor
> >>
> >
> > About tasks.yaml, we must support it until an equivalent 'deployment
> order'
> > is implemented with plugin-custom-role feature.
> >
> >>
> >> On Mon, Sep 7, 2015 at 11:31 AM, Swann Croiset 
> >> wrote:
> >> > Hi fuelers,
> >> >
> >> > We're currently porting nearly all LMA plugins to the new plugin fwk
> >> > 3.0.0
> >> > to leverage custom role capabilities.
> >> > That brings up a lot of simplifications for node assignment, disk
> >> > management, network config, reuse core tasks and so on .. thanks to
> the
> >> > fwk.
> >> >
> >> > However, we still need deployment order between independent plugins
> and
> >> > it
> >> > seems impossible to define the priorities [0] in
> deployment_tasks.yaml,
> >> > The only way to preserve deployment order would be to keep tasks.yaml
> >> > too.
> >> >
> >> > So, I'm wondering if this is the recommended solution to address
> plugins
> >> > order deployment with plugin fwk 3.0.0?
> >> > And furthermore if tasks.yaml will still be supported in future by the
> >> > plugin fwk or if the fwk shouldn't evolve  by adding priorities
> >> > definitions
> >> > in deployment_tasks.yaml ?
> >> >
> >> > Thanks
> >> >
> >> > [0]
> >> > https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
> >> >
> >> >
> >> >
> ___

Re: [openstack-dev] 9/4 state of the gate

2015-09-08 Thread Sean Dague
On 09/05/2015 09:50 PM, Joe Gordon wrote:
> 
> 
> On Fri, Sep 4, 2015 at 6:43 PM, Matt Riedemann
> mailto:mrie...@linux.vnet.ibm.com>> wrote:

> 
> I haven't seen the elastic-recheck bot comment on any changes in
> awhile either so I'm wondering if that's not running.
> 
> 
> Looks like there was a suspicious 4 day gap in elastic-recheck, but it
> appears to be running again?
> 
> $ ./lastcomment.py 
> Checking name: Elastic Recheck
> [0] 2015-09-06 01:12:40 (0:35:54 old)
> https://review.openstack.org/220386 'Reject the cell name include '!',
> '.' and '@' for Nova API' 
> [1] 2015-09-02 00:54:54 (4 days, 0:53:40 old)
> https://review.openstack.org/218781 'Remove the unnecassary
> volume_api.get(context, volume_id)' 

Remember, there is a 15-minute report contract on the bot: the assumption
is that if we're more than 15 minutes late, enough of the environment is
backed up that there is no point in waiting. We had some pretty substantial
backups in Elastic Search recently.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

