Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Rabi Mishra
On Thu, Sep 27, 2018 at 11:45 PM Zane Bitter  wrote:

> On 26/09/18 10:27 PM, Qiming Teng wrote:
>
 

> Heat still has a *lot* of users running very important stuff on Heat
> scaling group code which, as you know, is burdened by a lot of technical
> debt.
>
Though I agree that a common library that could be used by both projects
would be really good, I still don't understand what user issues we're trying
to address here (the resource implementations are not the best, but they
actually work).

As far as duplicated effort is concerned (that's the only justification I
could get from the etherpad), possibly Senlin duplicated some functionality
expecting to replace the Heat implementation in time. Also, we've not made
any feature additions to the Heat group resources in a long time (expecting
Senlin to do it instead), and I've not seen any major bugs reported by
users. Maybe we're talking about duplicated effort in the "future", now that
we have changed plans for the Heat ASG? ;)

> >> What would be great is if we could build a common library across
> >> projects, and use that common library in both projects, make sure we
> >> have all improvements implemented in that library, and finally have the
> >> Heat autoscaling group use Senlin through calls into that library. And
> >> in the long term, we're going to let all
>
> 

>
> +1 - to expand on Rico's example, we have at least 3 completely separate
> implementations of batching, each supporting different actions:
>
> Heat AutoscalingGroup: updates only
> Heat ResourceGroup: create or update
> Senlin Batch Policy: updates only
>
> and users are asking for batch delete as well.
>

I've seen this request a few times. But what I wonder is: why would a user
want to do a delete in a controlled, batched manner? The only justification
provided is that "they want to throttle requests to other services, as those
services are not able to handle large concurrent requests sent by Heat
properly". Aren't we looking in the wrong place to fix those issues?
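
If throttling really is the underlying need, the "common library" idea could implement it once for every action (create, update, and delete alike) instead of three separate implementations. A minimal sketch of what such a shared batching helper might look like — all names here are illustrative, not actual Heat or Senlin code:

```python
import time
from typing import Callable, Iterable, List


def run_in_batches(items: Iterable, action: Callable, batch_size: int = 2,
                   pause: float = 0.0) -> List:
    """Apply `action` to items in fixed-size batches, pausing between batches.

    A single throttling helper of this shape could back batched create,
    update, *and* delete, rather than each resource type re-implementing
    batching on its own.
    """
    items = list(items)
    results = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        results.extend(action(item) for item in batch)
        # Throttle requests to downstream services between batches.
        if pause and start + batch_size < len(items):
            time.sleep(pause)
    return results


# Example: "deleting" six servers two at a time.
deleted = run_in_batches(range(6), lambda i: f"deleted-{i}", batch_size=2)
print(deleted)
```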

IMHO, a good list of user issues on the mentioned etherpad would really
help justify the effort needed.

> This is clearly an area
> where technical debt from duplicate implementations is making it hard to
> deliver value to users.
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-27 Thread hao wang
+1 to Julia's suggestion. Cinder should also have a spec to discuss
the details of how to implement the creation of a volume from an
encrypted image.

Julia Kreger wrote on Fri, Sep 28, 2018 at 9:39 AM:
>
> Greetings!
>
> I suspect the avenue of at least three different specs is likely going
> to be the best path forward and likely what will be required for each
> project to fully understand how/what/why. From my point of view, I'm
> quite interested in this from a Nova point of view because that is the
initial user interaction point for the majority of activities. I'm also
> wondering if this is virt driver specific, or if it can be applied to
> multiple virt drivers in the nova tree, since each virt driver has
> varying constraints. So maybe the best path forward is something nova
> centric to start?
>
> -Julia
>
> On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch
>  wrote:
> >
> > Dear OpenStack developers,
> >
> > we would like to propose the introduction of an encrypted image format
> > in OpenStack. We already created a basic implementation involving Nova,
> > Cinder, OSC and Glance, which we'd like to contribute.
> >
> > We originally created a full spec document but since the official
> > cross-project contribution workflow in OpenStack is a thing of the past,
> > we have no single repository to upload it to. Thus, the Glance team
> > advised us to post this on the mailing list [1].
> >
> > Ironically, Glance is the least affected project since the image
> > transformation processes affected are taking place elsewhere (Nova and
> > Cinder mostly).
> >
> > Below you'll find the most important parts of our spec that describe our
> > proposal - which our current implementation is based on. We'd love to
> > hear your feedback on the topic and would like to encourage all affected
> > projects to join the discussion.
> >
> > Subsequently, we'd like to receive further instructions on how we may
> > contribute to all of the affected projects in the most effective and
> > collaborative way possible. The Glance team suggested starting with a
> > complete spec in the glance-specs repository, followed by individual
> > specs/blueprints for the remaining projects [1]. Would that be alright
> > for the other teams?
> >
> > [1]
> > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html
> >
> > Best regards,
> > Markus Hentsch
> >
> [trim]
>


Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-27 Thread Julia Kreger
Greetings!

I suspect the avenue of at least three different specs is likely going
to be the best path forward and likely what will be required for each
project to fully understand how/what/why. From my point of view, I'm
quite interested in this from a Nova point of view because that is the
initial user interaction point for the majority of activities. I'm also
wondering if this is virt driver specific, or if it can be applied to
multiple virt drivers in the nova tree, since each virt driver has
varying constraints. So maybe the best path forward is something nova
centric to start?

-Julia

On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch
 wrote:
>
> Dear OpenStack developers,
>
> we would like to propose the introduction of an encrypted image format
> in OpenStack. We already created a basic implementation involving Nova,
> Cinder, OSC and Glance, which we'd like to contribute.
>
> We originally created a full spec document but since the official
> cross-project contribution workflow in OpenStack is a thing of the past,
> we have no single repository to upload it to. Thus, the Glance team
> advised us to post this on the mailing list [1].
>
> Ironically, Glance is the least affected project since the image
> transformation processes affected are taking place elsewhere (Nova and
> Cinder mostly).
>
> Below you'll find the most important parts of our spec that describe our
> proposal - which our current implementation is based on. We'd love to
> hear your feedback on the topic and would like to encourage all affected
> projects to join the discussion.
>
> Subsequently, we'd like to receive further instructions on how we may
> contribute to all of the affected projects in the most effective and
> collaborative way possible. The Glance team suggested starting with a
> complete spec in the glance-specs repository, followed by individual
> specs/blueprints for the remaining projects [1]. Would that be alright
> for the other teams?
>
> [1]
> http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html
>
> Best regards,
> Markus Hentsch
>
[trim]
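
For readers unfamiliar with the general shape of such proposals, an envelope-style flow — encrypt the image payload with a random data key, then store that key wrapped in a key manager such as Barbican — can be sketched as below. This is purely illustrative and is not taken from the authors' spec; the hash-based XOR stream is a toy stand-in for a real cipher such as AES and must never be used for actual encryption:

```python
import hashlib
import os


def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random stream from `key`.

    Toy stand-in for a real stream cipher (e.g. AES-CTR) -- for
    illustration of the flow only, NOT for real use.
    """
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))


# Envelope-style flow: a random data key encrypts the image payload;
# the data key itself would be stored wrapped in a key manager
# (e.g. Barbican) and referenced from the image metadata.
image = b"raw disk image bytes"
data_key = os.urandom(32)
ciphertext = xor_bytes(image, keystream(data_key, len(image)))

# Decryption is the same XOR with the same keystream.
assert xor_bytes(ciphertext, keystream(data_key, len(ciphertext))) == image
```

The point of the envelope structure is that Glance only ever stores opaque ciphertext plus a key reference, which matches the observation above that Glance is the least affected project.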



Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Zane Bitter

On 27/09/18 7:00 PM, Duc Truong wrote:

> On Thu, Sep 27, 2018 at 11:14 AM Zane Bitter  wrote:
>
>>> and we will gradually fade out the existing 'AutoScalingGroup'
>>> and related resource types in Heat.
>>
>> That's almost impossible to do without breaking existing users.
>
> One approach would be to switch the underlying Heat AutoScalingGroup
> implementation to use Senlin and then deprecate the AutoScalingGroup
> resource type in favor of the Senlin resource type over several
> cycles.

The hard part (or one hard part, at least) of that is migrating the
existing data.

> Not saying that this is the definitive solution, but it is
> worth discussing as an option since this follows a path other projects
> have taken (e.g. nova-volume extraction into cinder).

+1, *definitely* worth discussing.

> A prerequisite to this approach would probably require Heat to create
> the so-called common library to house the autoscaling code.  Then
> Senlin would need to achieve feature parity against this autoscaling
> library before the switch could happen.
>
>> Clearly there are _some_ parts that could in principle be shared. (I
>> added some comments to the etherpad to clarify what I think Rico was
>> referring to.)
>>
>> It seems to me that there's value in discussing it together rather than
>> just working completely independently, even if the outcome of that
>> discussion is that
>
> +1.  The outcome of any discussion will be beneficial not only to the
> teams but also the operators and users.
>
> Regards,
>
> Duc (dtruong)



Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-27 Thread Amy
Congrats all! And for those of you who ran and were not elected, thank you
for all you do in the community!

Amy (spotz)

Sent from my iPhone

> On Sep 27, 2018, at 6:56 PM, Emmet Hikory  wrote:
> 
> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC):
> 
>  - Doug Hellmann (dhellmann)
>  - Julia Kreger (TheJulia)
>  - Jeremy Stanley (fungi)
>  - Jean-Philippe Evrard (evrardjp)
>  - Lance Bragstad (lbragstad)
>  - Ghanshyam Mann (gmann)
> 
> Full Results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864
> 
> Election process details and results are also available here:
> https://governance.openstack.org/election/
> 
> Thank you to all of the candidates, having a good group of candidates helps
> engage the community in our democratic process.
> 
> Thank you to all who voted and who encouraged others to vote.  Voter turnout
> was significantly up from recent cycles.  We need to ensure your voices are
> heard.
> 
> -- 
> Emmet HIKORY
> 


Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-27 Thread Jeremy Stanley
On 2018-09-27 20:00:42 -0400 (-0400), Mohammed Naser wrote:
[...]
> A big thank you to our election team who oversees all of this as
> well :)
[...]

I wholeheartedly concur!

And an even bigger thank you to the 5 candidates who were not
elected this term; please run again in the next election if you're
able, I think every one of you would have made a great choice for a
seat on the OpenStack TC. Our community is really lucky to have so
many qualified people eager to take on governance tasks.
-- 
Jeremy Stanley




Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-27 Thread Mohammed Naser
On Thu, Sep 27, 2018 at 7:57 PM Emmet Hikory  wrote:
>
> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC):
>
>   - Doug Hellmann (dhellmann)
>   - Julia Kreger (TheJulia)
>   - Jeremy Stanley (fungi)

Welcome back!

>   - Jean-Philippe Evrard (evrardjp)
>   - Lance Bragstad (lbragstad)
>   - Ghanshyam Mann (gmann)

..and welcome to the TC :)

>
> Full Results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates helps
> engage the community in our democratic process.

A big thank you to our election team who oversees all of this as well :)

> Thank you to all who voted and who encouraged others to vote.  Voter turnout
> was significantly up from recent cycles.  We need to ensure your voices are
> heard.
>
> --
> Emmet HIKORY
>



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



[openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-27 Thread Emmet Hikory
Please join me in congratulating the 6 newly elected members of the
Technical Committee (TC):

  - Doug Hellmann (dhellmann)
  - Julia Kreger (TheJulia)
  - Jeremy Stanley (fungi)
  - Jean-Philippe Evrard (evrardjp)
  - Lance Bragstad (lbragstad)
  - Ghanshyam Mann (gmann)

Full Results:
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864

Election process details and results are also available here:
https://governance.openstack.org/election/

Thank you to all of the candidates, having a good group of candidates helps
engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote.  Voter turnout
was significantly up from recent cycles.  We need to ensure your voices are
heard.

-- 
Emmet HIKORY



Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread Jay Pipes

On 09/27/2018 06:23 PM, Matt Riedemann wrote:

> On 9/27/2018 3:02 PM, Jay Pipes wrote:
>> A great example of this would be the proposed "deploy template" from
>> [2]. This is nothing more than abusing the placement traits API in
>> order to allow passthrough of instance configuration data from the
>> nova flavor extra spec directly into the nodes.instance_info field in
>> the Ironic database. It's a hack that abuses the entire concept of
>> placement traits, IMHO.
>>
>> We should have a way *in Nova* of allowing instance configuration
>> key/value information to be passed through to the virt driver's
>> spawn() method, much the same way we provide for user_data that gets
>> exposed after boot to the guest instance via configdrive or the
>> metadata service API. What this deploy template thing is, is just a
>> hack to get around the fact that nova doesn't have a basic way of
>> passing through some collated instance configuration key/value
>> information, which is a darn shame and I'm really kind of annoyed with
>> myself for not noticing this sooner. :(
>
> We talked about this in Dublin though, right? We said a good thing to do
> would be to have some kind of template/profile/config/whatever stored
> off in glare where schema could be registered on that thing, and then
> you pass a handle (ID reference) to that to nova when creating the
> (baremetal) server, nova pulls it down from glare and hands it off to
> the virt driver. It's just that no one is doing that work.


No, nobody is doing that work.

I will if need be if it means not hacking the placement API to serve 
this purpose (for which it wasn't intended).


-jay
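
What Jay is asking for — a generic key/value passthrough from the flavor extra specs to the virt driver — might look something like the sketch below. The `instance_config:` namespace, the `instance_config` parameter on `spawn()`, and the driver class are all invented for illustration; nothing with these names exists in Nova today:

```python
class FakeVirtDriver:
    """Illustrative virt driver whose spawn() receives an opaque key/value
    mapping of instance configuration (hypothetical parameter)."""

    def spawn(self, instance, instance_config=None):
        instance_config = instance_config or {}
        # An Ironic-style driver could forward this mapping straight into
        # node.instance_info instead of abusing placement traits.
        return {"instance": instance, "applied_config": dict(instance_config)}


def extract_instance_config(flavor_extra_specs):
    """Collate a namespaced subset of flavor extra specs into the key/value
    payload handed to spawn(). The namespace is an assumption."""
    prefix = "instance_config:"
    return {k[len(prefix):]: v for k, v in flavor_extra_specs.items()
            if k.startswith(prefix)}


specs = {"instance_config:raid_level": "1", "trait:CUSTOM_FOO": "required"}
result = FakeVirtDriver().spawn("server-1", extract_instance_config(specs))
print(result["applied_config"])  # {'raid_level': '1'}
```

The design point is separation of concerns: scheduling continues to consume only the boolean `trait:` keys, while configuration data rides alongside without ever touching placement.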



Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Duc Truong
On Thu, Sep 27, 2018 at 11:14 AM Zane Bitter  wrote:
>
> > and we will gradually fade out the existing 'AutoScalingGroup'
> > and related resource types in Heat.
>
> That's almost impossible to do without breaking existing users.

One approach would be to switch the underlying Heat AutoScalingGroup
implementation to use Senlin and then deprecate the AutoScalingGroup
resource type in favor of the Senlin resource type over several
cycles.  Not saying that this is the definitive solution, but it is
worth discussing as an option since this follows a path other projects
have taken (e.g. nova-volume extraction into cinder).

A prerequisite to this approach would probably require Heat to create
the so-called common library to house the autoscaling code.  Then
Senlin would need to achieve feature parity against this autoscaling
library before the switch could happen.

>
> Clearly there are _some_ parts that could in principle be shared. (I
> added some comments to the etherpad to clarify what I think Rico was
> referring to.)
>
> It seems to me that there's value in discussing it together rather than
> just working completely independently, even if the outcome of that
> discussion is that

+1.  The outcome of any discussion will be beneficial not only to the
teams but also the operators and users.

Regards,

Duc (dtruong)
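
The delegation step Duc describes — keeping the Heat resource type's user-facing behaviour while routing the actual scaling work through a shared library — could be sketched roughly like this (all class and method names are hypothetical, not real Heat or Senlin code):

```python
class CommonScalingGroup:
    """Hypothetical shared autoscaling core that both Heat and Senlin
    could call into."""

    def __init__(self, desired=0):
        self.members = [f"member-{i}" for i in range(desired)]

    def resize(self, new_size):
        # Scale out by appending members, scale in by truncating.
        while len(self.members) < new_size:
            self.members.append(f"member-{len(self.members)}")
        del self.members[new_size:]
        return list(self.members)


class HeatAutoScalingGroupFacade:
    """Existing Heat resource type kept API-compatible while delegating
    the real work to the common library (illustrative only)."""

    def __init__(self, desired_capacity):
        self._core = CommonScalingGroup(desired_capacity)

    def handle_update(self, desired_capacity):
        return self._core.resize(desired_capacity)


group = HeatAutoScalingGroupFacade(desired_capacity=2)
print(group.handle_update(4))  # scale out without changing the template API
```

A facade like this is what would make a gradual deprecation plausible: existing templates keep working while the implementation underneath is swapped, which is also where the data-migration problem Zane raises would have to be solved.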



Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread melanie witt

On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote:

> On 9/27/2018 3:02 PM, Jay Pipes wrote:
>> A great example of this would be the proposed "deploy template" from
>> [2]. This is nothing more than abusing the placement traits API in order
>> to allow passthrough of instance configuration data from the nova flavor
>> extra spec directly into the nodes.instance_info field in the Ironic
>> database. It's a hack that abuses the entire concept of placement
>> traits, IMHO.
>>
>> We should have a way *in Nova* of allowing instance configuration
>> key/value information to be passed through to the virt driver's spawn()
>> method, much the same way we provide for user_data that gets exposed
>> after boot to the guest instance via configdrive or the metadata service
>> API. What this deploy template thing is, is just a hack to get around the
>> fact that nova doesn't have a basic way of passing through some collated
>> instance configuration key/value information, which is a darn shame and
>> I'm really kind of annoyed with myself for not noticing this sooner. :(
>
> We talked about this in Dublin though, right? We said a good thing to do
> would be to have some kind of template/profile/config/whatever stored
> off in glare where schema could be registered on that thing, and then
> you pass a handle (ID reference) to that to nova when creating the
> (baremetal) server, nova pulls it down from glare and hands it off to
> the virt driver. It's just that no one is doing that work.


If I understood correctly, that discussion was around adding a way to 
pass a desired hardware configuration to nova when booting an ironic 
instance. And that it's something that isn't yet possible to do using 
the existing ComputeCapabilitiesFilter. Someone please correct me if I'm 
wrong there.


That said, I still don't understand why we are talking about deprecating 
the ComputeCapabilitiesFilter if there's no supported way to replace it 
yet. If boolean traits are not enough to replace it, then we need to 
hold off on deprecating it, right? Would the 
template/profile/config/whatever in glare approach replace what the 
ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly 
understanding this yet.


-melanie
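
The gap melanie points at can be made concrete: capabilities are key/value comparisons, whereas placement traits are presence-only booleans. A simplified sketch of the two matching semantics (the real ComputeCapabilitiesFilter also supports operators such as `>=` and `<in>`, which this toy version omits):

```python
def capabilities_filter_match(extra_specs, node_capabilities):
    """Simplified key/value semantics of the ComputeCapabilitiesFilter:
    every `capabilities:<key>` extra spec must equal the node's value."""
    return all(node_capabilities.get(k.split(":", 1)[1]) == v
               for k, v in extra_specs.items()
               if k.startswith("capabilities:"))


def traits_match(required_traits, node_traits):
    """Boolean semantics of placement traits: a trait is either present
    or absent; there is no value to compare against."""
    return required_traits <= node_traits


node_caps = {"raid_level": "1"}
assert capabilities_filter_match({"capabilities:raid_level": "1"}, node_caps)

# The value "1" has no direct trait equivalent; it has to be flattened
# into a presence-only trait such as CUSTOM_RAID_LEVEL_1.
assert traits_match({"CUSTOM_RAID_LEVEL_1"},
                    {"CUSTOM_RAID_LEVEL_1", "CUSTOM_SSD"})
```

Flattening every interesting value into its own `CUSTOM_*` trait is exactly the kind of workaround the thread is arguing about, which is why deprecating the filter before a real replacement exists is contentious.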






Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread Matt Riedemann

On 9/27/2018 3:02 PM, Jay Pipes wrote:
> A great example of this would be the proposed "deploy template" from
> [2]. This is nothing more than abusing the placement traits API in order
> to allow passthrough of instance configuration data from the nova flavor
> extra spec directly into the nodes.instance_info field in the Ironic
> database. It's a hack that abuses the entire concept of placement
> traits, IMHO.
>
> We should have a way *in Nova* of allowing instance configuration
> key/value information to be passed through to the virt driver's spawn()
> method, much the same way we provide for user_data that gets exposed
> after boot to the guest instance via configdrive or the metadata service
> API. What this deploy template thing is, is just a hack to get around the
> fact that nova doesn't have a basic way of passing through some collated
> instance configuration key/value information, which is a darn shame and
> I'm really kind of annoyed with myself for not noticing this sooner. :(


We talked about this in Dublin though, right? We said a good thing to do 
would be to have some kind of template/profile/config/whatever stored 
off in glare where schema could be registered on that thing, and then 
you pass a handle (ID reference) to that to nova when creating the 
(baremetal) server, nova pulls it down from glare and hands it off to 
the virt driver. It's just that no one is doing that work.


--

Thanks,

Matt



Re: [openstack-dev] [StoryBoard] PTG Summary

2018-09-27 Thread Kendall Nelson
Updates!

On Thu, Sep 20, 2018 at 2:15 PM Kendall Nelson 
wrote:

> Hello Lovers of Task Tracking!
>
> So! We talked about a lot of things, and I went to a lot of rooms to talk
> about StoryBoard related things and it was already a week ago so bear with
> me.
>
> We had a lot of good discussions, as we were able to include SotK via
> video call. We also had the privilege of having our Outreachy intern
> come all the way from Cairo to Denver to join us :)
>
> Onto the summaries!
>
> Story Attachments
>
> ==
>
> This topic has started coming up with increasing regularity. Currently,
> StoryBoard doesn’t support attachments, but it’s a feature that several
> projects claim to be blocking their migration. The current work around is
> either to trim down logs and paste the relevant section, or to host the
> file elsewhere and link to its location. After consulting with the
> infrastructure team, we concluded that currently, there is no donated
> storage. The next step is for me to draft a spec detailing our requirements
> and implementation details and then to include infra on the review to help
> them have something concrete to go to vendors with. For notes on the
> proposed method see the etherpad[1].
>
> One other thing discussed during this topic was how we could maybe migrate
> the current attachments. This isn’t supported by the migration script at
> this point, but it’s something we could write a separate script for. It
> should be separate because it would be a painfully slow process and we
> wouldn’t want to slow down the migration script any more than the
> Launchpad API already does.
> migration; that being said, everything still persists in Launchpad so
> things can still be referenced there.
>
> Handling Duplicate Stories
>
> ====
>
> This is also an ongoing topic for discussion. Duplicate stories if not
> handled properly could dilute the database as we get more projects migrated
> over. The plan we settled on is to add a ‘Mark as Duplicate’ button to the
> webclient and corresponding functions to the API. The user would be
> prompted for a link to the master story. The master story would get a new
> timeline event that would have the link to the duplicate and the duplicate
> story would have all tasks auto marked as invalid (aside from those marked
> as merged) so that the story then shows as inactive. The duplicate story
> could also get a timeline event that explains what happened and links to
> the master story. I’ve yet to create a story for all of this, but it’s on
> my todo list.
>

Turns out there is a story already[5].
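
The proposed behaviour could be sketched roughly as below. The data shapes and function name are invented for illustration only; the actual StoryBoard API additions would differ:

```python
def mark_as_duplicate(duplicate, master):
    """Hypothetical sketch of the proposed 'Mark as Duplicate' behaviour:
    tasks on the duplicate are auto-marked invalid (merged tasks kept),
    and both stories gain a timeline event linking to each other."""
    for task in duplicate["tasks"]:
        if task["status"] != "merged":
            task["status"] = "invalid"
    master["events"].append(
        {"type": "duplicate_reported", "story": duplicate["id"]})
    duplicate["events"].append(
        {"type": "marked_duplicate_of", "story": master["id"]})
    # With no active tasks left, the duplicate shows as inactive.
    duplicate["is_active"] = False


master = {"id": 1, "tasks": [], "events": []}
dup = {"id": 2, "events": [],
       "tasks": [{"status": "todo"}, {"status": "merged"}]}
mark_as_duplicate(dup, master)
print([t["status"] for t in dup["tasks"]])  # ['invalid', 'merged']
```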


>
> Handling Thousands of Comments Per Story
>
> ==
>
> There’s this special flower story[2] that has literally thousands of
> comments on it because of all of the gerrit comments being added to the
> timeline for all the patches for all the tasks. Rendering of the timeline
> portion of the webpage in the webclient is virtually impossible. It will
> load the tasks and then hang forever. The discussion around this boiled
> down to this: other task trackers also can’t handle this and there is a
> better way to divvy up the story into several stories and contain them in a
> worklist for future, similar work. For now, users can select what they want
> to load in their timeline views for stories, so by unmarking all of the
> timeline events in their preferences, the story will load completely sans
> timeline details. Another solution we discussed to help alleviate the
> timeline load on stories with lots of tasks is to have a task field that
> links to the review, rather than a comment from gerrit every time a new
> patch gets pushed. Essentially we want to focus on cleaning up the timeline
> rather than just going straight to a pagination type of solution. It was
> also concluded that we want to add another user preference for page sizes
> of 1000. Tasks have not been created in the story related to this issue
> yet[3], but its on my todo list.
>

Updates story[6].


> Project Group Descriptions
>
> =
>
> There was a request to have project group descriptions, but currently
> there is nothing in the API handling this. Discussion concluded with
> agreement that this shouldn’t be too difficult. All that needs to happen is
> a few additions to the API and the connection to managing group definitions
> in project-config. I still need to make a story for this.
>

Created a story for this[7].


> Translating storyboard-webclient
>
> =
>
> There was an infrastructure mailing list thread a little while back that
> kicked off discussion on this topic. It was received as an interesting idea
> and could help with the adoption of StoryBoard outside of OpenStack. The
> biggest concern was communicating to users that are seeing the webclient
> rendered in some other language that they still need to create
> 

Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Fox, Kevin M
It's the commons problem again. Either we encourage folks to contribute a 
little bit to the commons (review a few other people's non-compute CLI 
changes; in doing so, you learn how to write CLIs in the generic, 
user-friendly way) in order to further their own project goals (easier 
access to contribute to the CLI for the compute stuff), or we do what we've 
always done: let each project maintain its own CLI and have no uniformity at 
all. Why are the walls in OpenStack so high?

Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Thursday, September 27, 2018 12:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting 
goal selection for T series

On 9/27/2018 2:33 PM, Fox, Kevin M wrote:
> If the project plugins were maintained by the OSC project still, maybe there 
> would be incentive for the various other projects to join the OSC project, 
> scaling things up?

Sure, I don't really care about governance. But I also don't really care
about all of the non-compute API things in OSC either.

--

Thanks,

Matt



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Matt Riedemann

On 9/27/2018 2:33 PM, Fox, Kevin M wrote:

> If the project plugins were maintained by the OSC project still, maybe there 
> would be incentive for the various other projects to join the OSC project, 
> scaling things up?


Sure, I don't really care about governance. But I also don't really care 
about all of the non-compute API things in OSC either.


--

Thanks,

Matt



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Fox, Kevin M
If the project plugins were maintained by the OSC project still, maybe there 
would be incentive for the various other projects to join the OSC project, 
scaling things up?

Thanks,
Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Thursday, September 27, 2018 12:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting 
goal selection for T series

On 9/27/2018 10:13 AM, Dean Troyer wrote:
> On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann  wrote:
>> Monty Taylor  writes:
>>> Main difference is making sure these new deconstructed plugin teams
>>> understand the client support lifecycle - which is that we don't drop
>>> support for old versions of services in OSC (or SDK). It's a shift from
>>> the support lifecycle and POV of python-*client, but it's important and
>>> we just need to all be on the same page.
>> That sounds like a reason to keep the governance of the libraries under
>> the client tool project.
> Hmmm... I think that may address a big chunk of my reservations about
> being able to maintain consistency and user experience in a fully
> split-OSC world.
>
> dt

My biggest worry with splitting everything out into plugins with new
core teams, even with python-openstackclient-core as a superset, is that
those core teams will all start approving things that don't fit with the
overall guidelines of how OSC commands should be written. I've had to go
to the "Dean well" several times when reviewing osc-placement commands.

But the python-openstackclient-core team probably isn't going to scale
to fit the need of all of these gaps that need closing from the various
teams, either. So how does that get fixed? I've told Dean and Steve
before that if they want me to review / ack something compute-specific
in OSC that they can call on me, like a liaison. Maybe that's all we
need to start? Because I've definitely disagreed with compute CLI
changes in OSC that have a +2 from the core team because of a lack of
understanding from both the contributor and the reviewers about what the
compute API actually does, or how a microversion behaves. Or maybe we
just do some kind of subteam thing where OSC core doesn't look at a
change until the subteam has +1ed it. We have a similar concept in nova
with virt driver subteams.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Mohammed Naser
Thanks for the email Sean.

https://review.openstack.org/605846 Fix Cinder backup to use full paths

I think this should cover us, please let me know if we did things right.

FYI: the docs all still seem to point at the old paths:

https://docs.openstack.org/cinder/latest/configuration/block-storage/backup-drivers.html
On Thu, Sep 27, 2018 at 2:33 PM Sean McGinnis  wrote:
>
> This probably applies to all deployment tools, so hopefully this reaches the
> right folks.
>
> In Havana, Cinder deprecated the use of specifying the module for configuring
> backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
> the backwards compatibility handling for configs that still used the old way.
>
> From a quick search, it appears there may be some tools that are
> still defaulting to setting the backup driver name using the module path. If your
> project does not specify the full driver class path, please update these to do
> so now.
>
> Any questions, please reach out here or in the #openstack-cinder channel.
>
> Thanks!
> Sean
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][castellan] Time for a 1.0 release?

2018-09-27 Thread Ade Lee
On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote:
> Doug pointed out on a recent Oslo release review that castellan is
> still 
> not officially 1.0. Given the age of the project and the fact that
> we're 
> asking people to deploy a Castellan-compatible keystore as one of
> the 
> base services, it's probably time to address that.
> 
> To that end, I'm sending this to see if anyone is aware of any
> reasons 
> we shouldn't go ahead and tag a 1.0 of Castellan.
> 

+ 1

> Thanks.
> 
> -Ben
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Matt Riedemann

On 9/27/2018 10:13 AM, Dean Troyer wrote:

On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann  wrote:

Monty Taylor  writes:

Main difference is making sure these new deconstructed plugin teams
understand the client support lifecycle - which is that we don't drop
support for old versions of services in OSC (or SDK). It's a shift from
the support lifecycle and POV of python-*client, but it's important and
we just need to all be on the same page.

That sounds like a reason to keep the governance of the libraries under
the client tool project.

Hmmm... I think that may address a big chunk of my reservations about
being able to maintain consistency and user experience in a fully
split-OSC world.

dt


My biggest worry with splitting everything out into plugins with new 
core teams, even with python-openstackclient-core as a superset, is that 
those core teams will all start approving things that don't fit with the 
overall guidelines of how OSC commands should be written. I've had to go 
to the "Dean well" several times when reviewing osc-placement commands.


But the python-openstackclient-core team probably isn't going to scale 
to fit the need of all of these gaps that need closing from the various 
teams, either. So how does that get fixed? I've told Dean and Steve 
before that if they want me to review / ack something compute-specific 
in OSC that they can call on me, like a liaison. Maybe that's all we 
need to start? Because I've definitely disagreed with compute CLI 
changes in OSC that have a +2 from the core team because of a lack of 
understanding from both the contributor and the reviewers about what the 
compute API actually does, or how a microversion behaves. Or maybe we 
just do some kind of subteam thing where OSC core doesn't look at a 
change until the subteam has +1ed it. We have a similar concept in nova 
with virt driver subteams.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken)

2018-09-27 Thread Ivan Kolodyazhny
Hi,

Unfortunately, this issue affects some of the plugins too :(. At least the
gates for magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken
now. I'm working with the project teams to fix this asap. Let's see if [5]
helps for senlin-dashboard and then fix the rest of the plugins.


[5] https://review.openstack.org/#/c/605826/

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny  wrote:

> Hi all,
>
> Patch [1] is merged and our gates are unblocked now. I went through the
> review list and posted 'recheck' where it was needed.
>
> We need to cherry-pick this fix to stable releases too. I'll do it asap
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
>
> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny  wrote:
>
>> Hi team,
>>
>> Unfortunately, horizon gates are broken now. We can't merge any patch due
>> to the -1 from CI.
>> I don't want to disable tests now, that's why I proposed a fix [1].
>>
>> Some XStatic-* packages were released last week. At least the new
>> XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for the
>> requirements repo [4] to prevent such issues in the future.
>>
>> Please do not 'recheck' until [1] is merged.
>>
>> [1] https://review.openstack.org/#/c/604611/
>> [2] https://pypi.org/project/XStatic-jQuery/#history
>> [3] https://bugs.launchpad.net/horizon/+bug/1794028
>> [4] https://review.openstack.org/#/c/604613/
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cyborg] Stein PTG summary

2018-09-27 Thread Li Liu
I've written up a high-level summary of the discussions we had at the
PTG -- please feel free to reply to this thread to fill in anything I've
missed.

Sorry about the delay, I was really tied up after the PTG

We used our PTG etherpad:
https://etherpad.openstack.org/p/cyborg-ptg-stein

Cyborg Centric:

1. Stein Specs:
a. os-acc e2e (in queue) -- probably needs to be refactored to move the
REST API signatures to Nova specs and keep Cyborg-specific details in this spec
b. DB evolution introducing VAR and device profile concept (need to
add) Yes (part of the Rolling upgrade req)
i. VAR related APIs spec:
- GET /vars (optionally with request body of a list of VAR
UUIDs)
- GET /vars/instance/{instance_uuid}
- GET, POST, PUT, DELETE  /vars/unbound
- GET, POST, PUT, DELETE  /vars/bound
ii. Device profiles spec
c. device discovery (in queue)
d. pci_white_list (low priority for S ?) low priority.

2. Drivers:
a. Land current drivers in the queue: opae, gpu, clock
b. add Xilinx driver
c. add NPU driver (possibly from Huawei first, other NPU cards are
welcomed as well)
d. add RISC-V driver support

3. DOC: We will catch up with the documentation for Cyborg in the
upcoming cycle

4. Infra:
a. fake driver to facilitate end-to-end function testing (part of
the Rolling upgrade request)(check with shaohe)
b. utilize storyboard for task mgmt and tracing

Nova-Cyborg:
Discussion with Nova details can be found
https://etherpad.openstack.org/p/cyborg-nova-ptg-stein
1. Nova Stein Specs:
a. device-profile e2e (for phase 1)
b. new nova attach-device api (for phase 2)
2. Need Nova to complete:
a. Nested Resource Provider: Keep your eyes on this series and its
associated blueprint:
https://review.openstack.org/#/q/topic:use-nested-allocation-candidates

Note: Nova has made it clear that they do not expect Nova changes needed
for Cyborg to be upstreamed in Stein, because the bar for integration is
high. Cyborg needs to prove rolling upgrade etc., we need to pass CI/gates
with Nova, Nova changes need to be tested at unit/functional/tempest
levels. We have to make a push to get this done against expectations.

Neutron-Cyborg:
1. Neutron Stein Specs:
a. Propose a ML2 Plugin (networking-cyborg)
b. neutron notification : add notification support in cyborg

MISC:
1. work with SKT team on LOCI (OCI container image) support for
OpenStack-Helm (after stein-1 or stein-2)
   Is SKT the SIG-K8s team? (They are one of the biggest Korean telco
operators :))
2. work with SKT team and Dims on the k8s integration design/discussion

-- 
Thank you

Regards

Li Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Sean McGinnis
This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated the use of specifying the module for configuring
backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
the backwards compatibility handling for configs that still used the old way.

From a quick search, it appears there may be some tools that are
still defaulting to setting the backup driver name using the module path. If your
project does not specify the full driver class path, please update these to do
so now.
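For example, in cinder.conf the change for the Swift backup driver would look roughly like this (class name shown for the in-tree Swift driver; verify the actual class name for your driver's module):

```ini
[DEFAULT]
# Old, module-only form (compatibility handling now removed):
# backup_driver = cinder.backup.drivers.swift

# Required form: the full driver class path
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
```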

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][neutron] SmartNics with Ironic

2018-09-27 Thread Julia Kreger
Greetings everyone,

Now that the PTG is over, I would like to go ahead and get the
specification that was proposed to ironic-specs updated to represent
the discussions that took place at the PTG.

A few highlights from my recollection:

* Ironic being the source of truth for the hardware configuration for
the neutron agent to determine where to push configuration to. This
would include the address and credential information (certificates
right?).
* The information required is somehow sent to neutron (possibly as
part of the binding profile, which we could send at each time port
actions are requested by Ironic.)
* The Neutron agent running on the control plane connects outbound to
the smartnic, using information supplied to perform the appropriate
network configuration.
* In Ironic, this would likely be a new network_interface driver
module, with some additional methods that help facilitate the
work-flow logic changes needed in each deploy_interface driver module.
* Ironic would then be informed or gain awareness that the
configuration has been completed and that the deployment can proceed.
(A different spec has been proposed regarding this.)

I have submitted a forum session based upon this and the agreed upon
goal at the PTG was to have the ironic spec written up to describe the
required changes.

I guess the next question is, who wants to update the specification?

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Stein PTG summary

2018-09-27 Thread Jay Pipes

On 09/27/2018 11:15 AM, Eric Fried wrote:

On 09/27/2018 07:37 AM, Matt Riedemann wrote:

On 9/27/2018 5:23 AM, Sylvain Bauza wrote:



On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann  wrote:

     On 9/26/2018 5:30 PM, Sylvain Bauza wrote:
  > So, during this day, we also discussed about NUMA affinity and we
     said
  > that we could possibly use nested resource providers for NUMA
     cells in
  > Stein, but given we don't have yet a specific Placement API
     query, NUMA
  > affinity should still be using the NUMATopologyFilter.
  > That said, when looking about how to use this filter for vGPUs,
     it looks
  > to me that I'd need to provide a new version for the NUMACell
     object and
  > modify the virt.hardware module. Are we also accepting this
     (given it's
  > a temporary question), or should we need to wait for the
     Placement API
  > support ?
  >
  > Folks, what are you thoughts ?

     I'm pretty sure we've said several times already that modeling
NUMA in
     Placement is not something for which we're holding up the extraction.


It's not an extraction question. Just about knowing whether the Nova
folks would accept us to modify some o.vo object and module just for a
temporary time until Placement API has some new query parameter.
Whether Placement is extracted or not isn't really the problem, it's
more about the time it will take for this query parameter ("numbered
request groups to be in the same subtree") to be implemented in the
Placement API.
The real problem we have with vGPUs is that if we don't have NUMA
affinity, the performance would be around 10% less for vGPUs (if the
pGPU isn't on the same NUMA cell as the pCPU). Not sure large
operators would accept that :(

-Sylvain


I don't know how close we are to having whatever we need for modeling
NUMA in the placement API, but I'll go out on a limb and assume we're
not close.


True story. We've been talking about ways to do this since (at least)
the Queens PTG, but haven't even landed on a decent design, let alone
talked about getting it specced, prioritized, and implemented. Since
full NRP support was going to be a prerequisite in any case, and our
Stein plate is full, Train is the earliest we could reasonably expect to
get the placement support going, let alone the nova side. So yeah...


Given that, if we have to do something within nova for NUMA
affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that
since it's short term like you said (although our "short term"
workarounds tend to last for many releases). Anyone that cares about
NUMA today already has to enable the scheduler filter anyway.



+1 to this ^


Or, I don't know, maybe don't do anything and deal with the (maybe) 10% 
performance impact from the cross-NUMA main memory <-> CPU hit for 
post-processing of already parallel-processed GPU data.


In other words, like I've mentioned in numerous specs and in person, I 
really don't think this is a major problem and is mostly something we're 
making a big deal about for no real reason.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Zane Bitter

On 26/09/18 10:27 PM, Qiming Teng wrote:

Hi,

Due to many reasons, I cannot join you on this event, but I do like to
leave some comments here for references.

On Tue, Sep 18, 2018 at 11:27:29AM +0800, Rico Lin wrote:

*TL;DR*
*How about a forum in Berlin for discussing autoscaling integration (as a
long-term goal) in OpenStack?*


First of all, there is nothing called "auto-scaling" in my mind and
"auto" is most of the time a scary word to users. It means the service
or tool is hiding some details from the users when it is doing something
without human intervention. There are cases where this can be useful,
there are also many other cases the service or tool is messing up things
to a state difficult to recover from. What matters most is the usage
scenarios we support. I don't think users care that much how project
teams are organized.


Yeah, I mostly agree with you, and in fact I often use the term 'scaling 
group' to encompass all of the different types of groups in Heat. Our 
job is to provide an API that is legible to external tools to increase 
and decrease the size of the group. The 'auto' part is created by 
connecting it with other services, whether they be OpenStack services 
like Aodh or Monasca, monitoring services provided by the user 
themselves, or just manual invocation.


(BTW people from the HA-clustering world have a _very_ negative reaction 
to Senlin's use of the term 'cluster'... there is no perfect terminology.)



Hi all, we are starting to discuss how we can join development between Heat
and Senlin, as we originally planned when we decided to fork Senlin from Heat
a long time ago.

IMO the biggest issues we have now are that users are using autoscaling in
both services, there appears to be a lot of duplicated effort, and some great
enhancements exist in one service but not the other.
As a long-term goal (from the beginning), we should try to join development
to sync functionality, and move users to use Senlin for autoscaling. So we
should review this goal, or at least discuss how we can help users without
breaking or forcing anything.


The original plan, iirc, was to make sure Senlin resources are supported
in Heat,


This happened.


and we will gradually fade out the existing 'AutoScalingGroup'
and related resource types in Heat.


That's almost impossible to do without breaking existing users.


I have no clue when Heat became
interested in "auto-scaling" again.


It's something that Rico and I have been discussing - it turns out that 
Heat still has a *lot* of users running very important stuff on Heat 
scaling group code which, as you know, is burdened by a lot of technical 
debt.



It would be great if we could build a common library across projects, use
that common library in both projects, make sure we have all improvements
implemented in that library, and finally have the Heat autoscaling group use
Senlin through that library. In the long term, we would let all users use one
general approach instead of multiple ways that generate huge confusion for
users.


The so-called "auto-scaling" is always a solution, built by
orchestrating many moving parts across the infrastructure. In some
cases, you may have to install agents into VMs for workload metering.


Totally agree, but...


I
am not convinced this can be done using a library approach.


Clearly there are _some_ parts that could in principle be shared. (I 
added some comments to the etherpad to clarify what I think Rico was 
referring to.)


It seems to me that there's value in discussing it together rather than 
just working completely independently, even if the outcome of that 
discussion is that



*As an action, I propose we have a forum in Berlin and sync up all effort
from both teams to plan for idea scenario design. The forum submission [1]
ended at 9/26.*
Also would benefit from both teams to start to think about how they can
modulize those functionalities for easier integration in the future.

 From some Heat PTG sessions, we kept bringing up ideas on how we can improve
the current solutions for autoscaling. We should start to discuss whether it
makes sense to combine all group resources into one and inherit from it
for the other resources (ideally deprecating the rest of the resource types).
For example, we can do batch create/delete in ResourceGroup, but not in ASG.
We definitely have some unsynchronized work within Heat, and across Heat and
Senlin.


Totally agree with you on this. We should strive to minimize the
technologies users have to master when they have a need.


+1 - to expand on Rico's example, we have at least 3 completely separate 
implementations of batching, each supporting different actions:


Heat AutoscalingGroup: updates only
Heat ResourceGroup: create or update
Senlin Batch Policy: updates only

and users are asking for batch delete as well. This is clearly an area 
where technical debt from duplicate implementations is making it hard to 
deliver value to users.
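For illustration, the piece all three implementations share could live in a common library. A minimal, hypothetical Python sketch — the name and signature below are illustrative only, not any project's actual API:

```python
def make_batches(members, batch_size):
    """Split a scaling group's members into ordered batches.

    A sketch of the core logic that Heat's AutoscalingGroup, Heat's
    ResourceGroup and Senlin's batch policy each reimplement today;
    a shared library could expose one implementation usable for
    create, update and delete actions alike.
    """
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [members[i:i + batch_size]
            for i in range(0, len(members), batch_size)]


# e.g. rolling-update 5 servers, at most 2 at a time
batches = make_batches(["srv-1", "srv-2", "srv-3", "srv-4", "srv-5"], 2)
```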


cheers,
Zane.


[openstack-dev] [congress] 4AM UTC meeting today 9/28

2018-09-27 Thread Eric K
Hi all, the Congress team meeting is transitioning to Fridays 4AM UTC
on even weeks (starting 10/5). During this week's transition, we'll
have a special transition meeting today Friday at 4AM UTC (instead of
previous 2:30AM UTC) even though it's an odd week.

Thank you!

Eric Kao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Lance Bragstad
Ack - thanks for the clarification, Tim.

On Thu, Sep 27, 2018 at 12:10 PM Tim Bell  wrote:

>
>
> Lance,
>
>
>
> The comment regarding ‘readers’ is more to explain that the distinction
> between ‘admin’ and ‘user’ commands is gradually reducing, where OSC has
> been prioritising ‘user’ commands.
>
>
>
> As an example, we give the CERN security team view-only access to many
> parts of the cloud. This allows them to perform their investigations
> independently.  Thus, many commands which would be, by default, admin only
> are also available to roles such as the ‘readers’ (e.g. list, show, … of
> internals or projects which they are not in the members list)
>
>
>
> I don’t think there are any implications for Keystone (and the readers role
> is a nice improvement to replace the previous manual policy definitions)
> but more of a question of which subcommands we should aim to support in OSC.
>
>
>
> The *-manage commands, such as nova-manage, I would consider out of scope
> for OSC. Only admins would be migrating between versions or DB schemas.
>
>
>
> Tim
>
>
>
> *From: *Lance Bragstad 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, 27 September 2018 at 15:30
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [goals][tc][ptl][uc] starting goal
> selection for T series
>
>
>
>
>
> On Wed, Sep 26, 2018 at 1:56 PM Tim Bell  wrote:
>
>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving
> legacy python-*client CLIs to python-openstackclient" from the etherpad and
> propose this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write
> extensive end-user-facing documentation which explains how to use
> OpenStack along with CERN-specific features (such as workflows for
> requesting projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is
> inconsistent. In some cases, we find projects which are not covered by the
> unified OpenStack client (e.g. Manila). In other cases, there are subsets
> of the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be
> defined)
> - Many administrator actions would also benefit from integration (reader
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with
> the cloud (e.g. not switch between password for some CLIs and Kerberos for
> OSC)
>
>
>
> Sorry to back up the conversation a bit, but does reader role require work
> in the clients? Last release we incorporated three roles by default during
> keystone's installation process [0]. Is the definition in the specification
> what you mean by reader role, or am I on a different page?
>
>
>
> [0]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles
>
>
>
> The end user perception of a solution will be greatly enhanced by a single
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev ,
> openstack-operators ,
> openstack-sigs 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for
> T series
>
> It's time to start thinking about community-wide goals for the T
> series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently
> improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity
> to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> 

[openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-27 Thread Markus Hentsch
Dear OpenStack developers,

we would like to propose the introduction of an encrypted image format
in OpenStack. We already created a basic implementation involving Nova,
Cinder, OSC and Glance, which we'd like to contribute.

We originally created a full spec document but since the official
cross-project contribution workflow in OpenStack is a thing of the past,
we have no single repository to upload it to. Thus, the Glance team
advised us to post this on the mailing list [1].

Ironically, Glance is the least affected project since the image
transformation processes affected are taking place elsewhere (Nova and
Cinder mostly).

Below you'll find the most important parts of our spec that describe our
proposal - which our current implementation is based on. We'd love to
hear your feedback on the topic and would like to encourage all affected
projects to join the discussion.

Subsequently, we'd like to receive further instructions on how we may
contribute to all of the affected projects in the most effective and
collaborative way possible. The Glance team suggested starting with a
complete spec in the glance-specs repository, followed by individual
specs/blueprints for the remaining projects [1]. Would that be alright
for the other teams?

[1]
http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html

Best regards,
Markus Hentsch

(excerpts from our image encryption spec below)

Problem description
===

An image, when uploaded to Glance or being created through Nova from an
existing server (VM), may contain sensitive information. The already
provided signature functionality only protects images against
alteration. Images may be stored on several hosts over long periods of
time. First and foremost this includes the image storage hosts of Glance
itself. Furthermore it might also involve caches on systems like compute
hosts. In conclusion they are exposed to a multitude of potential
scenarios involving different hosts with different access patterns and
attack surfaces. The OpenStack components involved in those scenarios do
not protect the confidentiality of image data. That’s why we propose the
introduction of an encrypted image format.

Use Cases
-

* A user wants to upload an image, which includes sensitive information.
To ensure the integrity of the image, a signature can be generated and
used for verification. Additionally, the user wants to protect the
confidentiality of the image data through encryption. The user generates
or uploads a key in the key manager (e.g. Barbican) and uses it to
encrypt the image locally using the OpenStack client (osc) when
uploading it. Consequently, the image stored on the Glance host is
encrypted.

* A user wants to create an image from an existing server with ephemeral
storage. This server may contain sensitive user data. The corresponding
compute host then generates the image based on the data of the ephemeral
storage disk. To protect the confidentiality of the data within the
image, the user wants Nova to also encrypt the image using a key from
the key manager, specified by its secret ID. Consequently, the image
stored on the Glance host is encrypted.

* A user wants to create a new server or volume based on an encrypted
image created by any of the use cases described above. The corresponding
compute or volume host has to be able to decrypt the image using the
symmetric key stored in the key manager and transform it into the
requested resource (server disk or volume).

Although not required on a technical level, all of the use cases
described above assume the usage of encrypted volume types and encrypted
ephemeral storage as provided by OpenStack.


Proposed changes


* Glance: Adding a container type for encrypted images that supports
different mechanisms (format, cipher algorithms, secret ID) via a
metadata property. Whether to introduce several container types or to
move the mechanism definition into metadata properties may still be up
for discussion, although we do favor the latter.

* Nova: Adding support for decrypting an encrypted image when a server's
ephemeral disk is created. This includes direct decryption streaming for
encrypted disks. Nova should select a suitable mechanism according to
the image container type and metadata. The symmetric key will be
retrieved from the key manager (e.g. Barbican).

* Cinder: Adding support for decrypting an encrypted image when a volume
is created from it. Cinder should select a suitable mechanism according
to the image container type and metadata. The symmetric key will be
retrieved from the key manager (e.g. Barbican).

* OpenStack Client / SDK: Adding support for encrypting images using a
secret ID which references the symmetric key in the key manager (e.g.
Barbican). This also involves new CLI arguments to specify the secret ID
and encryption method.
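As a rough illustration of the metadata-property approach, a consumer such as Nova or Cinder could dispatch on properties attached to the image record. All property names and values below are hypothetical — none of them are defined by Glance today:

```python
# Hypothetical (format, cipher) combinations a consumer supports.
SUPPORTED = {("gpg", "AES256")}

def select_decryption_mechanism(image):
    """Return the mechanism described by the image metadata, or None."""
    props = image.get("properties", {})
    fmt = props.get("os_encrypt_format")     # hypothetical property name
    cipher = props.get("os_encrypt_cipher")  # hypothetical property name
    key_id = props.get("os_encrypt_key_id")  # Barbican secret ID (hypothetical)
    if fmt is None:
        return None  # plain, unencrypted image
    if (fmt, cipher) not in SUPPORTED or key_id is None:
        raise ValueError("unsupported or incomplete encryption metadata")
    return {"format": fmt, "cipher": cipher, "key_id": key_id}

image = {
    "container_format": "bare",
    "properties": {
        "os_encrypt_format": "gpg",
        "os_encrypt_cipher": "AES256",
        "os_encrypt_key_id": "0f1e2d3c-hypothetical-secret-id",
    },
}
mechanism = select_decryption_mechanism(image)
assert mechanism["cipher"] == "AES256"
```

Failing loudly on an unknown mechanism (rather than passing raw ciphertext through) is the safer default for both Nova and Cinder.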

We propose to use an implementation of symmetric AES 256 encryption
provided by GnuPG as a basic 
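What the GnuPG symmetric-encryption half of the client-side workflow could look like on the command line — the passphrase below is a placeholder (in the proposed workflow the key would come from Barbican), and the eventual `openstack image create` arguments are only proposed by this spec, so just the encrypt/decrypt roundtrip is shown:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
printf 'pretend this is a raw disk image' > "$workdir/image.raw"

# Client side: symmetric AES-256 encryption with GnuPG before upload.
gpg --batch --yes --pinentry-mode loopback --passphrase example-secret \
    --symmetric --cipher-algo AES256 \
    -o "$workdir/image.raw.gpg" "$workdir/image.raw"

# Consumer side (Nova/Cinder): decrypt with the same key.
gpg --batch --yes --pinentry-mode loopback --passphrase example-secret \
    -o "$workdir/restored.raw" --decrypt "$workdir/image.raw.gpg"

cmp "$workdir/image.raw" "$workdir/restored.raw"
rm -r "$workdir"
```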

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Tim Bell

Lance,

The comment regarding ‘readers’ is more to explain that the distinction between 
‘admin’ and ‘user’ commands is gradually shrinking, whereas OSC has been 
prioritising ‘user’ commands.

As an example, we give the CERN security team view-only access to many parts of 
the cloud. This allows them to perform their investigations independently.  
Thus, many commands which would be, by default, admin only are also available 
to roles such as the ‘readers’ (e.g. list, show, … of internals or projects 
which they are not in the members list)

I don’t think there are any implications for Keystone (and the readers role is a 
nice improvement to replace the previous manual policy definitions) but more of 
a question of which subcommands we should aim to support in OSC.

The *-manage commands such as nova-manage, I would consider, out of scope for 
OSC. Only admins would be migrating between versions or DB schemas.

Tim
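Concretely, granting view-only access of the kind described above has usually meant overriding individual policy rules; with a default reader role the override could be as small as a few lines of policy.yaml. The rule names below are illustrative nova policy names and should be checked against the release in use:

```yaml
# /etc/nova/policy.yaml fragment (illustrative)
"os_compute_api:os-hypervisors": "rule:admin_api or role:reader"
"os_compute_api:os-migrations:index": "rule:admin_api or role:reader"
```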

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 27 September 2018 at 15:30
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series


On Wed, Sep 26, 2018 at 1:56 PM Tim Bell <tim.b...@cern.ch> wrote:

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

Sorry to back up the conversation a bit, but does the reader role require work in 
the clients? Last release we incorporated three roles by default during 
keystone's installation process [0]. Is the definition in the specification 
what you mean by reader role, or am I on a different page?

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann <d...@doughellmann.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev <openstack-dev@lists.openstack.org>, 
openstack-operators <openstack-operat...@lists.openstack.org>, 
openstack-sigs <openstack-s...@lists.openstack.org>
Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html


[openstack-dev] [k8s][tc] List of OpenStack and K8s Community Updates

2018-09-27 Thread Chris Hoge
In the last year the SIG-K8s/SIG-OpenStack group has facilitated quite
a bit of discussion between the OpenStack and Kubernetes communities.
In doing this work we've delivered a number of presentations and held
several working sessions. I've created an etherpad that contains links
to these documents as a reference to the work and the progress we've
made. I'll continue to keep the document updated, and if I've missed
any links please feel free to add them.

https://etherpad.openstack.org/p/k8s-openstack-updates

-Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-09-27 Thread Michael McCune
Greetings OpenStack community,

This week's meeting was mostly ceremonial, with the main topic of
discussion being the office hours for the SIG. If you have not heard
the news about the API-SIG, we are converting from a regular weekly
meeting time to a set of scheduled office hours. This change was
discussed over the course of meetings leading up to the PTG and was
finalized last week. The reasoning behind this decision was summarized
nicely by edleafe in the last newsletter:

We, as a SIG, have recognized that we have moved into a new phase.
With most of the API guidelines that we needed to write having been
written, there is not "new stuff" to make demands on our time. In
recognition of this, we are changing how we will work.


How can you find the API-SIG?

We have two office hours that we will hold in the #openstack-sdks
channel on freenode:
* Thursdays 0900-1000 UTC
* Thursdays 1600-1700 UTC

Additionally, there is usually someone from the API-SIG hanging out in
the #openstack-sdks channel. Please feel free to ping dtantsur,
edleafe, or elmiko as direct contacts.

Although this marks the end of our weekly meetings, the API-SIG will
continue to be active in the community and we would like to extend a
hearty "huzzah!" to all the OpenStack contributors, operators, and
users who have helped to create the guidelines and guidance that we
all share.

Huzzah!

If you're interested in helping out, here are some things to get you started:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Add an api-design doc with design advice
  https://review.openstack.org/592003

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* Version and service discovery series
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://storyboard.openstack.org/#!/project/1039
[6] https://git.openstack.org/cgit/openstack/api-sig


Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



Re: [openstack-dev] [python3-first] support in stable branches

2018-09-27 Thread Ben Nemec



On 9/27/18 10:36 AM, Doug Hellmann wrote:

Dariusz Krol  writes:


Hello Champions :)


I work on the Trove project and we are wondering if python3 should be
supported in previous releases as well?

Actually this question was asked by Alan Pevec from the stable branch
maintainers list.

I saw you added releases up to ocata to support python3 and there are
already changes on gerrit waiting to be merged but after reading [1] I
have my doubts about this.


I'm not sure what you're referring to when you say "added releases up to
ocata" here. Can you link to the patches that you have questions about?


Possibly the zuul migration patches for all the stable branches? If so, 
those don't change the status of python 3 support on the stable 
branches, they just split the zuul configuration to make it easier to 
add new python 3 jobs on master without affecting the stable branches.





Could you elaborate why it is necessary to support previous releases ?


Best,

Dariusz Krol


[1] https://docs.openstack.org/project-team-guide/stable-branches.html







Re: [openstack-dev] [python3-first] support in stable branches

2018-09-27 Thread Doug Hellmann
Dariusz Krol  writes:

> Hello Champions :)
>
>
> I work on the Trove project and we are wondering if python3 should be 
> supported in previous releases as well?
>
> Actually this question was asked by Alan Pevec from the stable branch 
> maintainers list.
>
> I saw you added releases up to ocata to support python3 and there are 
> already changes on gerrit waiting to be merged but after reading [1] I 
> have my doubts about this.

I'm not sure what you're referring to when you say "added releases up to
ocata" here. Can you link to the patches that you have questions about?

> Could you elaborate why it is necessary to support previous releases ?
>
>
> Best,
>
> Dariusz Krol
>
>
> [1] https://docs.openstack.org/project-team-guide/stable-branches.html



Re: [openstack-dev] [nova] Stein PTG summary

2018-09-27 Thread Eric Fried


On 09/27/2018 07:37 AM, Matt Riedemann wrote:
> On 9/27/2018 5:23 AM, Sylvain Bauza wrote:
>>
>>
>> On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann wrote:
>>
>>     On 9/26/2018 5:30 PM, Sylvain Bauza wrote:
>>  > So, during this day, we also discussed NUMA affinity and we
>>     said
>>  > that we could possibly use nested resource providers for NUMA
>>     cells in
>>  > Stein, but given we don't have yet a specific Placement API
>>     query, NUMA
>>  > affinity should still be using the NUMATopologyFilter.
>>  > That said, when looking about how to use this filter for vGPUs,
>>     it looks
>>  > to me that I'd need to provide a new version for the NUMACell
>>     object and
>>  > modify the virt.hardware module. Are we also accepting this
>>     (given it's
>>  > a temporary question), or should we need to wait for the
>>     Placement API
>>  > support ?
>>  >
>>  > Folks, what are you thoughts ?
>>
>>     I'm pretty sure we've said several times already that modeling
>> NUMA in
>>     Placement is not something for which we're holding up the extraction.
>>
>>
>> It's not an extraction question. It's just about knowing whether the Nova
>> folks would accept us modifying some o.vo object and module just for a
>> temporary time until the Placement API has some new query parameter.
>> Whether Placement is extracted or not isn't really the problem, it's
>> more about the time it will take for this query parameter ("numbered
>> request groups to be in the same subtree") to be implemented in the
>> Placement API.
>> The real problem we have with vGPUs is that if we don't have NUMA
>> affinity, the performance would be around 10% less for vGPUs (if the
>> pGPU isn't on the same NUMA cell as the pCPU). Not sure large
>> operators would accept that :(
>>
>> -Sylvain
> 
> I don't know how close we are to having whatever we need for modeling
> NUMA in the placement API, but I'll go out on a limb and assume we're
> not close.

True story. We've been talking about ways to do this since (at least)
the Queens PTG, but haven't even landed on a decent design, let alone
talked about getting it specced, prioritized, and implemented. Since
full NRP support was going to be a prerequisite in any case, and our
Stein plate is full, Train is the earliest we could reasonably expect to
get the placement support going, let alone the nova side. So yeah...

> Given that, if we have to do something within nova for NUMA
> affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that
> since it's short term like you said (although our "short term"
> workarounds tend to last for many releases). Anyone that cares about
> NUMA today already has to enable the scheduler filter anyway.
> 

+1 to this ^

-efried
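For readers unfamiliar with what "a new version for the NUMACell object" entails: the standard oslo.versionedobjects pattern is to add a field, bump the object's VERSION, and teach obj_make_compatible() to drop the field when talking to older peers. A stripped-down sketch without the library itself — the 'vgpus' field and version numbers are hypothetical, not the real NUMACell schema:

```python
class NUMACell:
    VERSION = "1.4"  # hypothetical bump: 1.4 adds 'vgpus'

    def __init__(self, **fields):
        self.fields = dict(fields)

    def obj_make_compatible(self, primitive, target_version):
        # Drop fields that peers running an older object version
        # would not understand.
        major, minor = (int(part) for part in target_version.split("."))
        if (major, minor) < (1, 4):
            primitive.pop("vgpus", None)
        return primitive

cell = NUMACell(id=0, cpuset={0, 1}, vgpus=2)
old_wire = cell.obj_make_compatible(dict(cell.fields), "1.3")
assert "vgpus" not in old_wire     # older peers never see the new field
assert cell.fields["vgpus"] == 2   # the local object keeps it
```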



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Dean Troyer
On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann  wrote:
> Monty Taylor  writes:
>> Main difference is making sure these new deconstructed plugin teams
>> understand the client support lifecycle - which is that we don't drop
>> support for old versions of services in OSC (or SDK). It's a shift from
>> the support lifecycle and POV of python-*client, but it's important and
>> we just need to all be on the same page.
>
> That sounds like a reason to keep the governance of the libraries under
> the client tool project.

Hmmm... I think that may address a big chunk of my reservations about
being able to maintain consistency and user experience in a fully
split-OSC world.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Dean Troyer
On Thu, Sep 27, 2018 at 9:06 AM, Doug Hellmann  wrote:
> I definitely think having details about the gaps would be a prerequisite
> for approving a goal, but I wonder if that's something 1 person could
> even do alone. Is this an area where a small team is needed?

Maybe, but it does break down along project/API lines for the most
part, only crossing in places like Matt mentioned where compute+volume
interact in server create, etc.

For the purposes of a goal, I think we need to be thinking more about
structural things than specific command changes.  Things like Monty
mentioned elsewhere in the thread about getting all of the existing
client libs to correctly use an SDK adapter, so that behaviours converge
and the details of command changes become project-specific.

> We built cliff to be based on plugins to support this sort of work
> distribution, right?

We did, my concerns about splitting the OSC in-repo plugins out is
frankly more around losing control of things like command structure
and consistency, not about the code.  Looking at the loss of
consistency in plugins shows that is a hard thing to maintain across a
distributed set of groups.

>> One thing I don't like about that is we just replace N client libs
>> with N (or more) plugins now and the number of things a user must
>> install doesn't go down.  I would like to hear from anyone who deals
>> with installing OSC if that is still a big deal or should I let go of
>> that worry?
>
> Don't package managers just deal with this? I can pip/yum/apt install
> something and get all of its dependencies, right?

For those using that, yes.  The set of folks interacting with
OpenStack from a Windows desktop is not as large but their experience
is sometimes a painful one...although wheels were just becoming a
thing when I last tried to bundle OSC into a py2exe-style thing so the
pains of things like OpenSSL may be fewer now.

dt

-- 

Dean Troyer
dtro...@gmail.com



[openstack-dev] [goals][python3] switching python package jobs

2018-09-27 Thread Doug Hellmann
I think we are ready to go ahead and switch all of the python packaging
jobs to the new set defined in the publish-to-pypi-python3 template
[1]. We still have some cleanup patches for projects that have not
completed their zuul migration, but there are only a few and rebasing
those will be easy enough.

The template adds a new check job that runs when any files related to
packaging are changed (readme, setup, etc.). Otherwise it switches from
the python2-based PyPI job to use python3.
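For a project consuming the template, the switch is a one-line entry in its Zuul configuration, roughly:

```yaml
# .zuul.yaml (or the project's entry in project-config)
- project:
    templates:
      - publish-to-pypi-python3
```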

I have the patch to switch all official projects ready in [2].

Doug

[1] 
http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218
[2] https://review.openstack.org/#/c/598323/



[openstack-dev] [python3-first] support in stable branches

2018-09-27 Thread Dariusz Krol
Hello Champions :)


I work on the Trove project and we are wondering if python3 should be 
supported in previous releases as well?

Actually this question was asked by Alan Pevec from the stable branch 
maintainers list.

I saw you added releases up to ocata to support python3 and there are 
already changes on gerrit waiting to be merged but after reading [1] I 
have my doubts about this.


Could you elaborate why it is necessary to support previous releases ?


Best,

Dariusz Krol


[1] https://docs.openstack.org/project-team-guide/stable-branches.html


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Doug Hellmann
Monty Taylor  writes:

> Main difference is making sure these new deconstructed plugin teams 
> understand the client support lifecycle - which is that we don't drop 
> support for old versions of services in OSC (or SDK). It's a shift from 
> the support lifecycle and POV of python-*client, but it's important and 
> we just need to all be on the same page.

That sounds like a reason to keep the governance of the libraries under
the client tool project.

Doug



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Doug Hellmann
Dean Troyer  writes:

> On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann  wrote:
>> I started documenting the compute API gaps in OSC last release [1]. It's a
>> big gap and needs a lot of work, even for existing CLIs (the cold/live
>> migration CLIs in OSC are a mess, and you can't even boot from volume where
>> nova creates the volume for you). That's also why I put something into the
>> etherpad about the OSC core team even being able to handle an onslaught of
>> changes for a goal like this.
>
> The OSC core team is very thin, yes, it seems as though companies
> don't like to spend money on client-facing things...I'll be in the
> hall following this thread should anyone want to talk...
>
> The migration commands are a mess, mostly because I got them wrong to
> start with and we have only tried to patch it up, this is one area I
> think we need to wipe clean and fix properly.  Yay! Major version
> release!

I definitely think having details about the gaps would be a prerequisite
for approving a goal, but I wonder if that's something 1 person could
even do alone. Is this an area where a small team is needed?

>> I thought the same, and we talked about this at the Austin summit, but OSC
>> is inconsistent about this (you can live migrate a server but you can't
>> evacuate it - there is no CLI for evacuation). It also came up at the Stein
>> PTG with Dean in the nova room giving us some direction. [2] I believe the
>> summary of that discussion was:
>
>> a) to deal with the core team sprawl, we could move the compute stuff out of
>> python-openstackclient and into an osc-compute plugin (like the
>> osc-placement plugin for the placement service); then we could create a new
>> core team which would have python-openstackclient-core as a superset
>
> This is not my first choice but is not terrible either...

We built cliff to be based on plugins to support this sort of work
distribution, right?
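Right — cliff discovers commands through setuptools entry points, so a split-out osc-compute plugin would mainly need to register its commands under OSC's namespaces. Package and class names here are hypothetical:

```ini
# setup.cfg of a hypothetical osc-compute plugin
[entry_points]
openstack.cli.extension =
    compute = osc_compute.plugin

openstack.compute.v2 =
    server_evacuate = osc_compute.v2.server:EvacuateServer
    server_migrate = osc_compute.v2.server:MigrateServer
```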

>> b) Dean suggested that we close the compute API gaps in the SDK first, but
>> that could take a long time as well...but it sounded like we could use the
>> SDK for things that existed in the SDK and use novaclient for things that
>> didn't yet exist in the SDK
>
> Yup, this can be done in parallel.  The unit of decision for use sdk
> vs use XXXclient lib is per-API call.  If the client lib can use an
> SDK adapter/session it becomes even better.  I think the priority for
> what to address first should be guided by complete gaps in coverage
> and the need for microversion-driven changes.
>
>> This might be a candidate for one of these multi-release goals that the TC
>> started talking about at the Stein PTG. I could see something like this
>> being a goal for Stein:
>>
>> "Each project owns its own osc- plugin for OSC CLIs"
>>
>> That deals with the core team and sprawl issue, especially with stevemar
>> being gone and dtroyer being distracted by shiny x-men bird related things.
>> That also seems relatively manageable for all projects to do in a single
>> release. Having a single-release goal of "close all gaps across all service
>> types" is going to be extremely tough for any older projects that had CLIs
>> before OSC was created (nova/cinder/glance/keystone). For newer projects,
>> like placement, it's not a problem because they never created any other CLI
>> outside of OSC.

Yeah, I agree this work is going to need to be split up. I'm still not
sold on the idea of multi-cycle goals, personally.

> I think the major difficulty here is simply how to migrate users from
> today state to future state in a reasonable manner.  If we could teach
> OSC how to handle the same command being defined in multiple plugins
> properly (hello entrypoints!) it could be much simpler as we could
> start creating the new plugins and switch as the new command
> implementations become available rather than having a hard cutover.
>
> Or maybe the definition of OSC v4 is as above and we just work at it
> until complete and cut over at the end.  Note that the current APIs
> that are in-repo (Compute, Identity, Image, Network, Object, Volume)
> are all implemented using the plugin structure, OSC v4 could start as
> the breaking out of those without command changes (except new
> migration commands!) and then the plugins all re-write and update at
> their own tempo.  Dang, did I just deconstruct my project?

It sure sounds like it. Congratulations!

I like the idea of moving the existing code into libraries, having
python-openstackclient depend on them, and then asking project teams for
more help with them.

> One thing I don't like about that is we just replace N client libs
> with N (or more) plugins now and the number of things a user must
> install doesn't go down.  I would like to hear from anyone who deals
> with installing OSC if that is still a big deal or should I let go of
> that worry?

Don't package managers just deal with this? I can pip/yum/apt install
something and get all of its dependencies, right?

Doug


Re: [openstack-dev] [storyboard] Prioritization?

2018-09-27 Thread Ben Nemec



On 9/27/18 4:17 AM, Thierry Carrez wrote:

Ben Nemec wrote:

On 9/25/18 3:29 AM, Thierry Carrez wrote:
Absence of priorities was an initial design choice[1] based on the 
fact that in an open collaboration every group, team, organization 
has their own views on what the priority of a story is, so worklist 
and tags are better ways to capture that. Also they don't really work 
unless you triage everything. And then nobody really looks at them to 
prioritize their work, so they are high cost for little benefit.


So was the storyboard implementation based on the rant section then? 
Because I don't know that I agree with/understand some of the 
assertions there.


First, don't we _need_ to triage everything? At least on some minimal 
level? Not looking at new bugs at all seems like the way you end up 
with a security bug open for two years *ahem*. Not that I would know 
anything about that (it's been fixed now, FTR).


StoryBoard's initial design is definitely tainted by an environment that 
has changed since. Back in 2014, most teams did not triage every bug, 
and were basically using Launchpad as a task tracker (you created the 
bugs that you worked on) rather than a bug tracker (you triage incoming 
requests and prioritize them).


I'm not sure that has actually changed much. Stemming from this thread I 
had an offline discussion around bug management and the gist was that we 
don't actually spend much time looking at the bug list for something to 
work on. I tend to pick up a bug when I hit it in my own environments or 
if I'm doing bug triage and it's something I think I can fix quickly. I 
would like to think that others are more proactive, but I suspect that's 
wishful thinking. I had vague thoughts that I might actually start 
tackling bugs that way this cycle since I spent a lot of last cycle 
getting Oslo bugs triaged so I might be able to repurpose that time, but 
until it actually happens it's just hopes and dreams. :-)


Unfortunately, even if bug triage is a "write once, read never" process 
I think we still need to do it to avoid the case I mentioned above where 
something important falls through the cracks. :-/




Storyboard is therefore designed primarily as a task tracker (a way to 
organize work within teams), so it's not great as an issue tracker (a 
way for users to report issues). The tension between the two concepts 
was explored in [1], with the key argument that trying to do both at the 
same time is bound to create frustration one way or another. In PM 
literature you will even find suggestions that the only way to solve the 
diverging requirements is to use two completely different tools (with 
ways to convert a reported issue into a development story). But that 
"solution" works a lot better in environments where the users and the 
developers are completely separated (proprietary software).


[1] https://wiki.openstack.org/wiki/StoryBoard/Vision


[...]
Also, like it or not there is technical debt we're carrying over here. 
All of our bug triage up to this point has been based on launchpad 
priorities, and as I think I noted elsewhere it would be a big step 
backward to completely throw that out. Whatever model for 
prioritization and triage that we choose, I feel like there needs to 
be a reasonable migration path for the thousands of existing triaged 
lp bugs in OpenStack.


I agree, which is why I'm saying that the "right" answer might not be 
the "best" answer.




Yeah, I was mostly just +1'ing your point here. :-)



Re: [openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus.

2018-09-27 Thread ddaasd
Hello Ifat,

Thank you for your help!
I really appreciate it. Thanks again!

Best Regards,
Won

On Thu, Sep 27, 2018 at 10:00 PM, Ifat Afek wrote:

> Hi,
>
> You can use the Prometheus datasource in Vitrage starting from Rocky
> release.
>
> In order to use it, follow these steps:
>
> 1. Add 'prometheus' to 'types' configuration under
> /etc/vitrage/vitrage.conf
> 2. In alertmanager.yml add a receiver as follows:
>
> - name: **
>   webhook_configs:
>   - url: **  # example: 'http://127.0.0.1:8999/v1/event'
>     send_resolved: true
>     http_config:
>       basic_auth:
>         username: **
>         password: **
>
>
>
> Br,
> Ifat
>
>
> On Thu, Sep 27, 2018 at 2:38 PM ddaasd  wrote:
>
>> I would like to know how to link vitrage and prometheus.
>> Is there a way to receive, in Vitrage, alert information that occurred in
>> Prometheus and Alertmanager, like zabbix-vitrage?
>> I wonder if I can receive Prometheus's alerts and place them on the entity
>> graph in Vitrage.
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Lance Bragstad
On Wed, Sep 26, 2018 at 1:56 PM Tim Bell  wrote:

>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving
> legacy python-*client CLIs to python-openstackclient" from the etherpad and
> propose this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write an
> extensive end user facing documentation which explains how to use the
> OpenStack along with CERN specific features (such as workflows for
> requesting projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is
> inconsistent. In some cases, we find projects which are not covered by the
> unified OpenStack client (e.g. Manila). In other cases, there are subsets
> of the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be
> defined)
> - Many administrator actions would also benefit from integration (reader
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with
> the cloud (e.g. not switch between password for some CLIs and Kerberos for
> OSC)
>
>
Sorry to back up the conversation a bit, but does reader role require work
in the clients? Last release we incorporated three roles by default during
keystone's installation process [0]. Is the definition in the specification
what you mean by reader role, or am I on a different page?

[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles


> The end user perception of a solution will be greatly enhanced by a single
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev ,
> openstack-operators ,
> openstack-sigs 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for
> T series
>
> It's time to start thinking about community-wide goals for the T
> series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently
> improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity
> to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> [2]
> https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
> [3] https://governance.openstack.org/tc/goals/index.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-27 Thread Lance Bragstad
Using the domain name + group name pairing also allows for things like:


JSON: {"group_name": "C", "domain_name": "X"}
JSON: {"group_name": "C", "domain_name": "Y"}


This showcases how we solve the ambiguity in group names by namespacing them
with domains.
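A minimal sketch of that namespacing in code, including the string-encoding idea discussed further down the thread (the helper names are hypothetical, not an actual keystone API):

```python
import base64
import json

def encode_group(group_name, domain_name):
    # Hypothetical helper: serialize the (group, domain) pair and
    # base64-encode it so it fits in a plain xs:string SAML attribute
    # value; sort_keys makes the encoding deterministic.
    payload = json.dumps(
        {"group_name": group_name, "domain_name": domain_name},
        sort_keys=True)
    return base64.b64encode(payload.encode("utf-8")).decode("ascii")

def decode_group(token):
    # The service provider reverses the encoding to recover the pair.
    return json.loads(base64.b64decode(token).decode("utf-8"))
```

Two groups named "C" in domains "X" and "Y" then produce distinct attribute values, so the ambiguity disappears.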

On Thu, Sep 27, 2018 at 3:11 AM Colleen Murphy  wrote:

>
>
> On Thu, Sep 27, 2018, at 5:09 AM, vishakha agarwal wrote:
> > > From : Colleen Murphy 
> > > To : 
> > > Date : Tue, 25 Sep 2018 18:33:30 +0900
> > > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user
> attributes in SAML assertions from Keystone IdPs
> > >  Forwarded message 
> > >  > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote:
> > >  > > On 9/24/18 8:00 AM, Colleen Murphy wrote:
> > >  > > > This is in regard to https://launchpad.net/bugs/1641625 and
> the proposed patch https://review.openstack.org/588211 for it. Thanks
> Vishakha for getting the ball rolling.
> > >  > > >
> > >  > > > tl;dr: Keystone as an IdP should support sending
> non-strings/lists-of-strings as user attribute values, specifically lists
> of keystone groups, here's how that might happen.
> > >  > > >
> > >  > > > Problem statement:
> > >  > > >
> > >  > > > When keystone is set up as a service provider with an external
> non-keystone identity provider, it is common to configure the mapping rules
> to accept a list of group names from the IdP and map them to some property
> of a local keystone user, usually also a keystone group name. When keystone
> acts as the IdP, it's not currently possible to send a group name as a user
> property in the assertion. There are a few problems:
> > >  > > >
> > >  > > >  1. We haven't added any openstack_groups key in the
> creation of the SAML assertion (
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164
> ).
> > >  > > >  2. If we did, this would not be enough. Unlike other IdPs,
> in keystone there can be multiple groups with the same name, namespaced by
> domain. So it's not enough for the SAML AttributeStatement to contain a
> semi-colon-separated list of group names, since a user could theoretically
> be a member of two or more groups with the same name.
> > >  > > > * Why can't we just send group IDs, which are unique?
> Because two different keystones are not going to have independent groups
> with the same UUID, so we cannot possibly map an ID of a group from
> keystone A to the ID of a different group in keystone B. We could map the
> ID of the group in in A to the name of a group in B but then operators need
> to create groups with UUIDs as names which is a little awkward for both the
> operator and the user who now is a member of groups with nondescriptive
> names.
> > >  > > >  3. If we then were able to encode a complex type like a
> group dict in a SAML assertion, we'd have to deal with it on the service
> provider side by being able to parse such an environment variable from the
> Apache headers.
> > >  > > >  4. The current mapping rules engine uses basic python
> string formatting to translate remote key-value pairs to local rules. We
> would need to change the mapping API to work with values more complex than
> strings and lists of strings.
> > >  > > >
> > >  > > > Possible solution:
> > >  > > >
> > >  > > > Vishakha's patch (https://review.openstack.org/588211) starts
> to solve (1) but it doesn't go far enough to solve (2-4). What we talked
> about at the PTG was:
> > >  > > >
> > >  > > >  2. Encode the group+domain as a string, for example by
> using the dict string repr or a string representation of some custom XML
> and maybe base64 encoding it.
> > >  > > >  * It's not totally clear whether the AttributeValue
> class of the pysaml2 library supports any data types outside of the
> xmlns:xs namespace or whether nested XML is an option, so encoding the
> whole thing as an xs:string seems like the simplest solution.
> > >  > > >  3. The SP will have to be aware that openstack_groups is a
> special key that needs the encoding reversed.
> > >  > > >  * I wrote down "MultiDict" in my notes but I don't
> recall exactly what format the environment variable would take that would
> make a MultiDict make sense here, in any case I think encoding the whole
> thing as a string eliminates the need for this.
> > >  > > >  4. We didn't talk about the mapping API, but here's what I
> think. If we were just talking about group names, the mapping API today
> would work like this (slight oversimplification for brevity):
> > >  > > >
> > >  > > > Given a list of openstack_groups like ["A", "B", "C"], it would
> work like this:
> > >  > > >
> > >  > > > [
> > >  > > >{
> > >  > > >  "local":
> > >  > > >  [
> > >  > > >{
> > >  > > >  "group":
> > >  > > >  {
> > >  > > >"name": "{0}",
> > >  > > >"domain":
> > >  > > >{
> > >  > > >  "name": 

Re: [openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus.

2018-09-27 Thread Ifat Afek
Hi,

You can use the Prometheus datasource in Vitrage starting from Rocky
release.

In order to use it, follow these steps:

1. Add 'prometheus' to 'types' configuration under /etc/vitrage/vitrage.conf
2. In alertmanager.yml add a receiver as follows:

- name: **
  webhook_configs:
  - url: **  # example: 'http://127.0.0.1:8999/v1/event'
    send_resolved: true
    http_config:
      basic_auth:
        username: **
        password: **
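To sanity-check the wiring end to end, one option is to hand-build an alert in the Alertmanager webhook payload shape and POST it to the event URL configured above. This is only a sketch: the label names are illustrative, and the exact fields Vitrage consumes depend on its Prometheus datasource configuration.

```python
import json

def build_test_alert(alertname="HighCpu", instance="host-1"):
    # Minimal payload in the Alertmanager webhook format (version 4).
    # The label names here are invented for illustration.
    return {
        "version": "4",
        "status": "firing",
        "alerts": [{
            "status": "firing",
            "labels": {"alertname": alertname, "instance": instance},
            "annotations": {"summary": "CPU usage above threshold"},
        }],
    }

# To send it (credentials and URL as configured in alertmanager.yml):
#   curl -u <username>:<password> -H 'Content-Type: application/json' \
#        -d '<payload>' http://127.0.0.1:8999/v1/event
payload_json = json.dumps(build_test_alert())
```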



Br,
Ifat


On Thu, Sep 27, 2018 at 2:38 PM ddaasd  wrote:

> I would like to know how to link vitrage and prometheus.
> Is there a way to receive, in Vitrage, alert information that occurred in
> Prometheus and Alertmanager, like zabbix-vitrage?
> I wonder if I can receive Prometheus's alerts and place them on the entity
> graph in Vitrage.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Stein PTG summary

2018-09-27 Thread Matt Riedemann

On 9/27/2018 5:23 AM, Sylvain Bauza wrote:



On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann wrote:


On 9/26/2018 5:30 PM, Sylvain Bauza wrote:
 > So, during this day, we also discussed NUMA affinity and we said
 > that we could possibly use nested resource providers for NUMA cells in
 > Stein, but given we don't yet have a specific Placement API query, NUMA
 > affinity should still be using the NUMATopologyFilter.
 > That said, when looking at how to use this filter for vGPUs, it looks
 > to me that I'd need to provide a new version of the NUMACell object and
 > modify the virt.hardware module. Are we also accepting this (given it's
 > a temporary question), or should we wait for the Placement API
 > support?
 >
 > Folks, what are your thoughts?

I'm pretty sure we've said several times already that modeling NUMA in
Placement is not something for which we're holding up the extraction.


It's not an extraction question. It's just about knowing whether the Nova 
folks would accept us modifying some o.vo object and module just for a 
temporary time until the Placement API has some new query parameter.
Whether Placement is extracted or not isn't really the problem; it's 
more about the time it will take for this query parameter ("numbered 
request groups to be in the same subtree") to be implemented in the 
Placement API.
The real problem we have with vGPUs is that if we don't have NUMA 
affinity, the performance would be around 10% less for vGPUs (if the 
pGPU isn't on the same NUMA cell as the pCPU). Not sure large 
operators would accept that :(


-Sylvain


I don't know how close we are to having whatever we need for modeling 
NUMA in the placement API, but I'll go out on a limb and assume we're 
not close. Given that, if we have to do something within nova for NUMA 
affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that 
since it's short term like you said (although our "short term" 
workarounds tend to last for many releases). Anyone that cares about 
NUMA today already has to enable the scheduler filter anyway.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus.

2018-09-27 Thread ddaasd
I would like to know how to link vitrage and prometheus.
Is there a way to receive, in Vitrage, alert information that occurred in
Prometheus and Alertmanager, like zabbix-vitrage?
I wonder if I can receive Prometheus's alerts and place them on the entity
graph in Vitrage.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Stein PTG summary

2018-09-27 Thread Sylvain Bauza
On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann  wrote:

> On 9/26/2018 5:30 PM, Sylvain Bauza wrote:
> > So, during this day, we also discussed NUMA affinity and we said
> > that we could possibly use nested resource providers for NUMA cells in
> > Stein, but given we don't yet have a specific Placement API query, NUMA
> > affinity should still be using the NUMATopologyFilter.
> > That said, when looking at how to use this filter for vGPUs, it looks
> > to me that I'd need to provide a new version of the NUMACell object and
> > modify the virt.hardware module. Are we also accepting this (given it's
> > a temporary question), or should we wait for the Placement API
> > support?
> >
> > Folks, what are your thoughts?
>
> I'm pretty sure we've said several times already that modeling NUMA in
> Placement is not something for which we're holding up the extraction.
>
>
It's not an extraction question. It's just about knowing whether the Nova folks
would accept us modifying some o.vo object and module just for a temporary
time until the Placement API has some new query parameter.
Whether Placement is extracted or not isn't really the problem; it's more
about the time it will take for this query parameter ("numbered request
groups to be in the same subtree") to be implemented in the Placement API.
The real problem we have with vGPUs is that if we don't have NUMA affinity,
the performance would be around 10% less for vGPUs (if the pGPU isn't on
the same NUMA cell as the pCPU). Not sure large operators would accept
that :(

-Sylvain

-- 
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] Tetsuro Nakamura now core

2018-09-27 Thread Chris Dent


Since there were no objections and a week has passed, I've made
Tetsuro a member of placement-core.

Thanks for your willingness and continued help. Use your powers
wisely.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Thierry Carrez

First, I think that is a great goal, but I want to pick up on Dean's comment:

Dean Troyer wrote:

[...]
The OSC core team is very thin, yes, it seems as though companies
don't like to spend money on client-facing things...I'll be in the
hall following this thread should anyone want to talk...


I think OSC (and client-facing tooling in general) is a great place for 
OpenStack users (deployers of OpenStack clouds) to contribute. It's a 
smaller territory, it's less time-consuming than the service side, they 
are the most obvious interested party, and a small, 20% time investment 
would have a dramatic impact.


It's arguably difficult for OpenStack users to get involved in 
"OpenStack development": keeping track of what's happening in a large 
team is already likely to consume most of the time you can dedicate to 
it. But OSC is a specific, smaller area which would be a good match for 
the expertise and time availability of anybody running an OpenStack 
cloud that wants to contribute back and make OpenStack better.


Shameless plug: I proposed a Forum session in Berlin to discuss "Getting 
OpenStack users involved in the project" -- and we'll discuss such areas 
that are a particularly good match for users to get involved.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Prioritization?

2018-09-27 Thread Thierry Carrez

Ben Nemec wrote:

On 9/25/18 3:29 AM, Thierry Carrez wrote:
Absence of priorities was an initial design choice[1] based on the 
fact that in an open collaboration every group, team, organization has 
their own views on what the priority of a story is, so worklist and 
tags are better ways to capture that. Also they don't really work 
unless you triage everything. And then nobody really looks at them to 
prioritize their work, so they are high cost for little benefit.


So was the storyboard implementation based on the rant section then? 
Because I don't know that I agree with/understand some of the assertions 
there.


First, don't we _need_ to triage everything? At least on some minimal 
level? Not looking at new bugs at all seems like the way you end up with 
a security bug open for two years *ahem*. Not that I would know anything 
about that (it's been fixed now, FTR).


StoryBoard's initial design is definitely tainted by an environment that 
has changed since. Back in 2014, most teams did not triage every bug, 
and were basically using Launchpad as a task tracker (you created the 
bugs that you worked on) rather than a bug tracker (you triage incoming 
requests and prioritize them).


Storyboard is therefore designed primarily as a task tracker (a way to 
organize work within teams), so it's not great as an issue tracker (a 
way for users to report issues). The tension between the two concepts 
was explored in [1], with the key argument that trying to do both at the 
same time is bound to create frustration one way or another. In PM 
literature you will even find suggestions that the only way to solve the 
diverging requirements is to use two completely different tools (with 
ways to convert a reported issue into a development story). But that 
"solution" works a lot better in environments where the users and the 
developers are completely separated (proprietary software).


[1] https://wiki.openstack.org/wiki/StoryBoard/Vision


[...]
Also, like it or not there is technical debt we're carrying over here. 
All of our bug triage up to this point has been based on launchpad 
priorities, and as I think I noted elsewhere it would be a big step 
backward to completely throw that out. Whatever model for prioritization 
and triage that we choose, I feel like there needs to be a reasonable 
migration path for the thousands of existing triaged lp bugs in OpenStack.


I agree, which is why I'm saying that the "right" answer might not be 
the "best" answer.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core

2018-09-27 Thread Paul Bourke

+1

On 25/09/18 16:47, Eduardo Gonzalez wrote:

Hi,

I would like to propose Chason Chan to the kolla-ansible core team.

Chason has been working on the addition of Vitrage roles, reworking the
VPNaaS service, and maintaining documentation, as well as fixing many bugs.

Voting will be open for 14 days (until 9th of Oct).

Kolla-ansible cores, please leave a vote.
Consider this mail my +1 vote

Regards,
Eduardo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTG][QA] QA PTG Stein Summary

2018-09-27 Thread Ghanshyam Mann
Hi All,

Thanks for joining the Stein QA PTG and making it successful. 
This is the QA PTG summary; the detailed discussion can be found on the main 
PTG etherpad - https://etherpad.openstack.org/p/qa-stein-ptg
We are continuing the 'owner' for each working item so that we have a single 
point of contact to track those. 


1. QA Help Room
---
The QA team was present in the Help Room on Monday. We were happy to help with 
a few queries about the Octavia multinode job and Kuryr-kubernetes testing. 
Other than that, and a few other random queries, there was not much that day. 

2. Rocky Retrospective
-
We discussed the Rocky Retrospective first thing on Tuesday. We went through 
1. what went well and 2. what needs to improve, and gathered some concrete 
action items.

Patrole made good progress in the Rocky cycle with code as well as documentation. 
Also, we were able to fill almost all of the compute microversion gap up to Rocky. 

Action Items:
- Need to add Tempest CLI documentation and other useful stuff from the tripleo 
doc to the Tempest doc - chandankumar
- Run all tests in the tempest-full-parallel job and move it to the periodic job 
pipeline - afazekas
- Need to merge the QA office hours: check with Andrea about the 17 UTC office 
hour and, if OK, close that and move the current office hour from 9 UTC to 
8 UTC - gmann
- Need to ask chandankumar or manik to volunteer for bug triage - gmann
- Create the low-hanging-items list and publish it for new contributors - gmann

We will be tracking the above AI in our QA office hour to finish them on time.
Owner: gmann
Etherpad link: https://etherpad.openstack.org/p/qa-rocky-retrospective 


3. Stable interfaces from Tempest Plugins
---
We discussed having stable interfaces in Tempest plugins, like Tempest has, so 
that other plugins can consume them. Service clients are a good example of 
interfaces required for cross-project testing. For example, the congress tempest 
plugin needs to use the mistral service clients for integration testing of 
congress+mistral [1]. Similarly, Patrole needs to use the neutron tempest plugin 
service clients (for n/n-1/n-2).  

The idea here is to have a lib or stable interface on the Tempest plugins' side, 
as Tempest does, so that other plugins can use them. We will start with some 
documentation about the use case and benefits and then work with the 
neutron-tempest-plugin team to expose their service clients as a stable 
interface. Once that is done, we can suggest the same to other plugins.  
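As a rough illustration of what such a stable interface could look like (the class and names here are hypothetical, not an existing Tempest API), a plugin might expose its service client factories through a small registry that other plugins query by a stable name instead of importing private modules:

```python
class ServiceClientRegistry:
    """Toy registry: a plugin registers its service client factories
    under stable names, and other plugins (e.g. a congress plugin
    needing the mistral clients) look them up by that name."""

    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        # 'factory' is any callable that builds a configured client.
        self._factories[name] = factory

    def get_client(self, name, **kwargs):
        if name not in self._factories:
            raise KeyError("no service client registered as %r" % name)
        return self._factories[name](**kwargs)
```

The point of the indirection is that the registered name, not the module layout, becomes the stable contract between plugins.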

Action Items:
- Need some documentation and guidance with use case and example, benefits 
for plugins. - felipemonteiro
- mailing list discussions on making specific plugins stable that are 
consumed by other plugins - felipemonteiro
- check with requirement team to add the tempest plugin in g-r and then 
those can be added on other plugins requirement.txt - gmann
Owner: felipemonteiro
Etherpad link: 
https://etherpad.openstack.org/p/stable-interfaces-from-tempest-plugins


4. Tempest Plugins CI to cover stable branches & Plugins release and tagging 
clarification
--
We discussed how other projects or plugins can set up CI to cover stable 
branch testing on their master changes. The solution can be as simple as 
defining the supported stable branches and running them on the master gate 
(the same way Tempest does). The QA team will start the guidelines on this. 
The other part we need to cover is release and tagging guidelines. There was a 
lot of confusion about the release of Tempest plugins in Rocky. To make it 
better, the QA team will write guidelines and document a clear process. 
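The branchless idea can be sketched as follows; the branch list and job names are invented for illustration, not any project's real configuration:

```python
# A plugin declares which stable branches it supports and runs a job
# against each of them on every master change, the way Tempest itself
# gates. Hypothetical branch/job names for illustration only.
SUPPORTED_STABLE_BRANCHES = ["stable/queens", "stable/rocky"]

def master_gate_jobs(base_job="plugin-tempest-full"):
    # Derive one job name per supported stable branch...
    jobs = ["%s-%s" % (base_job, branch.split("/")[-1])
            for branch in SUPPORTED_STABLE_BRANCHES]
    # ...plus the job against master itself.
    return jobs + [base_job]
```

Dropping support for a branch is then a one-line change to the list rather than a rework of the gate.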

Action Items:
- move/update documentation on branchless considerations in tempest to 
somewhere more global so that it covers plugins documentation too - gmann
- Add tagging and release clarification for plugins. 
- talk with neutron team about moving in-tree tempest plugins of stadium 
projects to neutron-tempest-plugin or separate tempest-plugins repositories - 
slaweq
- Add config options to disable the plugins load - gmann
Owner: gmann
Etherpad link: 
https://etherpad.openstack.org/p/tempest-plugins-ci-release-tagging-clarification
 


5. Tempest Cleanup Feature 
-
The current Tempest CLI for cleaning up test resources is not so good. It 
cleans up resources based on a saved_state.json file, which saves the difference 
in resources before and after the Tempest run. This can end up cleaning up 
non-test resources which got created during the Tempest run. 
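The over-collection risk, next to a prefix-based alternative in the spirit of resource_prefix, can be sketched as follows (resource names are invented for illustration):

```python
def diff_cleanup_candidates(before, after):
    # saved_state.json style: anything that appeared between the two
    # snapshots is treated as a test resource -- including resources
    # someone else created while Tempest was running.
    return after - before

def prefix_cleanup_candidates(resources, prefix="tempest-"):
    # resource_prefix style: only resources whose names carry the test
    # prefix are candidates, so unrelated resources are left alone.
    return {r for r in resources if r.startswith(prefix)}
```

A resource created by another user mid-run is caught by the diff but correctly ignored by the prefix filter.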

There is a QA spec proposing different approaches for cleanup [2]. After 
discussing all those approaches, we decided to go with resource_prefix. We will 
bring back the resource_prefix approach (which got removed after deprecation) 
and 

Re: [openstack-dev] [docs][charms] Updating Deployment Guides indices of published pages

2018-09-27 Thread Andreas Jaeger

On 27/09/2018 10.04, Frode Nordahl wrote:

Hello docs team,

What would it take to re-generate the indices for Deployment Guides on 
the published Queens [0] and Rocky [1] docs pages?


In a nutshell: Patience ;) All looks fine, details below:

They are regenerated as part of each merge to openstack-manuals.

Looking at [0], it was last regenerated on the 11th; it might be that the post 
job was never run for the changes you reference in [2].


It seems that the charms project has missed the index for those releases 
due to some issues which have now been resolved [2].  We would love to 
reclaim our space in the list!



Right now we have a HUGE backlog of post jobs (70+ hours) due to the high 
CI load Clark mentioned earlier this month. After the next 
merge and post-job run, those should be up.


So, please check again once the backlog is worked through and the next 
change has merged. If the content is then still not there, we need to 
investigate the post jobs and why they failed.


Andreas


0: https://docs.openstack.org/queens/deploy/
1: https://docs.openstack.org/rocky/deploy/
2: 
https://review.openstack.org/#/q/topic:enable-openstack-manuals-rocky-latest+(status:open+OR+status:merged)


--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-27 Thread Colleen Murphy


On Thu, Sep 27, 2018, at 5:09 AM, vishakha agarwal wrote:
> > From : Colleen Murphy 
> > To : 
> > Date : Tue, 25 Sep 2018 18:33:30 +0900
> > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user attributes 
> > in SAML assertions from Keystone IdPs
> >  Forwarded message 
> >  > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote:
> >  > > On 9/24/18 8:00 AM, Colleen Murphy wrote:
> >  > > > This is in regard to https://launchpad.net/bugs/1641625 and the 
> > proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha 
> > for getting the ball rolling.
> >  > > >
> >  > > > tl;dr: Keystone as an IdP should support sending 
> > non-strings/lists-of-strings as user attribute values, specifically lists 
> > of keystone groups, here's how that might happen.
> >  > > >
> >  > > > Problem statement:
> >  > > >
> >  > > > When keystone is set up as a service provider with an external 
> > non-keystone identity provider, it is common to configure the mapping rules 
> > to accept a list of group names from the IdP and map them to some property 
> > of a local keystone user, usually also a keystone group name. When keystone 
> > acts as the IdP, it's not currently possible to send a group name as a user 
> > property in the assertion. There are a few problems:
> >  > > >
> >  > > >  1. We haven't added any openstack_groups key in the creation of 
> > the SAML assertion 
> > (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164).
> >  > > >  2. If we did, this would not be enough. Unlike other IdPs, in 
> > keystone there can be multiple groups with the same name, namespaced by 
> > domain. So it's not enough for the SAML AttributeStatement to contain a 
> > semi-colon-separated list of group names, since a user could theoretically 
> > be a member of two or more groups with the same name.
> >  > > > * Why can't we just send group IDs, which are unique? Because 
> > two different keystones are not going to have independent groups with the 
> > same UUID, so we cannot possibly map an ID of a group from keystone A to 
> > the ID of a different group in keystone B. We could map the ID of the group 
> > in A to the name of a group in B, but then operators need to create 
> > groups with UUIDs as names which is a little awkward for both the operator 
> > and the user who now is a member of groups with nondescriptive names.
> >  > > >  3. If we then were able to encode a complex type like a group 
> > dict in a SAML assertion, we'd have to deal with it on the service provider 
> > side by being able to parse such an environment variable from the Apache 
> > headers.
> >  > > >  4. The current mapping rules engine uses basic python string 
> > formatting to translate remote key-value pairs to local rules. We would 
> > need to change the mapping API to work with values more complex than 
> > strings and lists of strings.
> >  > > >
> >  > > > Possible solution:
> >  > > >
> >  > > > Vishakha's patch (https://review.openstack.org/588211) starts to 
> > solve (1) but it doesn't go far enough to solve (2-4). What we talked about 
> > at the PTG was:
> >  > > >
> >  > > >  2. Encode the group+domain as a string, for example by using 
> > the dict string repr or a string representation of some custom XML and 
> > maybe base64 encoding it.
> >  > > >  * It's not totally clear whether the AttributeValue class 
> > of the pysaml2 library supports any data types outside of the xmlns:xs 
> > namespace or whether nested XML is an option, so encoding the whole thing 
> > as an xs:string seems like the simplest solution.
> >  > > >  3. The SP will have to be aware that openstack_groups is a 
> > special key that needs the encoding reversed.
> >  > > >  * I wrote down "MultiDict" in my notes but I don't recall 
> > exactly what format the environment variable would take that would make a 
> > MultiDict make sense here; in any case, I think encoding the whole thing as 
> > a string eliminates the need for this.
> >  > > >  4. We didn't talk about the mapping API, but here's what I 
> > think. If we were just talking about group names, the mapping API today 
> > would work like this (slight oversimplification for brevity):
> >  > > >
> >  > > > Given a list of openstack_groups like ["A", "B", "C"]:
> >  > > >
> >  > > > [
> >  > > >{
> >  > > >  "local":
> >  > > >  [
> >  > > >{
> >  > > >  "group":
> >  > > >  {
> >  > > >"name": "{0}",
> >  > > >"domain":
> >  > > >{
> >  > > >  "name": "federated_domain"
> >  > > >}
> >  > > >  }
> >  > > >}
> >  > > >  ], "remote":
> >  > > >  [
> >  > > >{
> >  > > >  "type": "openstack_groups"
> >  > > >}
> >  > > >  ]
> >  > > >}
> >  > > > ]
> >  > > > (paste in case the 
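
For illustration, the string encoding proposed in (2) and its reversal on
the SP side in (3) could be sketched as follows. This is a minimal sketch:
the helper names are hypothetical, and JSON is assumed as the serialization
rather than the dict repr mentioned above, since JSON is unambiguous to
parse on the receiving end.

```python
import base64
import json


def encode_groups(groups):
    """Encode a list of {name, domain} group dicts as a single base64
    string that fits in an xs:string SAML attribute value.
    (Hypothetical helper; the thread leaves the exact serialization open.)"""
    return base64.b64encode(json.dumps(groups).encode()).decode()


def decode_groups(value):
    """Reverse the encoding on the service-provider side."""
    return json.loads(base64.b64decode(value.encode()).decode())


# Two groups with the same name, namespaced by domain -- the case that
# plain semicolon-separated names cannot represent.
groups = [{"name": "admins", "domain": {"name": "Default"}},
          {"name": "admins", "domain": {"name": "domainB"}}]
encoded = encode_groups(groups)
assert decode_groups(encoded) == groups
```

The round trip preserves the domain namespace, so the SP-side mapping
engine could distinguish same-named groups after decoding.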

[openstack-dev] [docs][charms] Updating Deployment Guides indices of published pages

2018-09-27 Thread Frode Nordahl
Hello docs team,

What would it take to re-generate the indices for Deployment Guides on the
published Queens [0] and Rocky [1] docs pages?

It seems that the charms project missed the index for those releases
due to some issues which have now been resolved [2].  We would love to
reclaim our place in the list!

0: https://docs.openstack.org/queens/deploy/
1: https://docs.openstack.org/rocky/deploy/
2:
https://review.openstack.org/#/q/topic:enable-openstack-manuals-rocky-latest+(status:open+OR+status:merged)

-- 
Frode Nordahl


[openstack-dev] [Searchlight] Team meeting cancellation today

2018-09-27 Thread Trinh Nguyen
Dear team,

Because most of our cores are based in China and South Korea, which are in
the middle of their holiday seasons, and because we covered most topics
during our vPTG last week, we will cancel the team meeting this week.

Next meeting will be held on 11 Oct 2018.

Bests,

*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* www.edlab.xyz


Re: [openstack-dev] Ryu integration with Openstack

2018-09-27 Thread Slawomir Kaplonski
Hi,

The code of the app is in 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py
and the classes for the specific bridge types are in 
https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native

> Wiadomość napisana przez Niket Agrawal  w dniu 
> 27.09.2018, o godz. 00:08:
> 
> Hi,
> 
> Thanks for your reply. Is there a way to access the code that is running in 
> the app, to see what logic is implemented in it?
> 
> Regards,
> Niket
> 
> On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski  
> wrote:
> Hi,
> 
> > Wiadomość napisana przez Niket Agrawal  w dniu 
> > 26.09.2018, o godz. 18:11:
> > 
> > Hello,
> > 
> > I have a question regarding the Ryu integration in Openstack. By default, 
> > the openvswitch bridges (br-int, br-tun and br-ex) are registered to a 
> > controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl 
> > get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute 
> > node. However there is a different instance of the same Ryu controller 
> > running on the neutron gateway as well and the three openvswitch bridges 
> > (br-int, br-tun and br-ex) are registered to this instance of Ryu 
> > controller. If I stop neutron-openvswitch agent on the nova compute node, 
> > the bridges there are no longer connected to the controller, but the 
> > bridges in the neutron gateway continue to remain connected to the 
> > controller. Only when I stop the neutron openvswitch agent in the neutron 
> > gateway as well, the bridges there get disconnected. 
> > 
> > I'm unable to find where in the OpenStack code this is 
> > implemented, because I intend to make a few tweaks to the current 
> > architecture. Also, I'd like to know which app the Ryu SDN 
> > controller is running by default at the moment. I feel the 
> > code can help me find that too.
> 
> Ryu app is started by neutron-openvswitch-agent in: 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
> Is that what you are looking for?
> 
> > 
> > Regards,
> > Niket
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




[openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG

2018-09-27 Thread Tobias Rydberg

Hi everyone,

Time for a new meeting for PCWG - today (27th) 1400 UTC in 
#openstack-publiccloud! Agenda found at 
https://etherpad.openstack.org/p/publiccloud-wg


We will again have a short briefing from the PTG for those of you who 
missed it last week. It is also time to start planning for the upcoming 
summit - forum session submissions etc. Another important item on the 
agenda is the prioritization/ranking of our "missing features" list. We have 
identified a few cross-project goals already that we see as important, 
but we need more operators to engage in this ranking.


Talk to you later today!

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED




Re: [openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task

2018-09-27 Thread Renat Akhmerov
Hi Oleg,

I looked at the blueprint. It looks good to me, and I understand the motivation 
behind it. I agree that using created_at and updated_at to infer the duration of 
a task is often confusing, so this would be a good addition, and it is backward 
compatible. The only subtle thing is that when you make changes in CloudFlow 
we'll have to note that from version X of CloudFlow on (the version that uses 
the new fields to calculate durations) it will require Mistral Stein. Or, 
another option is to make it flexible: if those fields are present in the HTTP 
response, use them for the calculation; if not, fall back to the old way.
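
The flexible fallback could be sketched like this. It is only a sketch:
the started_at/finished_at field names are taken from the blueprint title,
and the "%Y-%m-%d %H:%M:%S" timestamp format is an assumption about the
API response, not something the thread confirms.

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format in the API response


def task_duration(task):
    """Compute a task's duration from a Mistral task dict.

    Prefer the proposed started_at/finished_at fields when present;
    fall back to created_at/updated_at for older Mistral versions.
    """
    start = task.get("started_at") or task["created_at"]
    end = task.get("finished_at") or task["updated_at"]
    return datetime.strptime(end, FMT) - datetime.strptime(start, FMT)


# Old-style response: only created_at/updated_at are available.
old = {"created_at": "2018-09-27 10:00:00",
       "updated_at": "2018-09-27 10:00:30"}
# New-style response: the dedicated fields give the real run time.
new = {**old,
       "started_at": "2018-09-27 10:00:10",
       "finished_at": "2018-09-27 10:00:25"}

assert task_duration(old) == timedelta(seconds=30)
assert task_duration(new) == timedelta(seconds=15)
```

With this approach a single CloudFlow version works against both old and
new Mistral, at the cost of showing the less accurate duration on old ones.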

Thanks

Renat Akhmerov
@Nokia
On 26 Sep 2018, 18:02 +0700, Олег Овчарук , wrote:
> Hi everyone! Please take a look at the blueprint that I've just created:
> https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at
> I'd like to implement this feature, and I also want to update CloudFlow once 
> it is done. Please let me know in the blueprint if I can start implementing.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev