Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Cédric Jeanneret


On 06/06/2018 06:59 AM, Mike Carden wrote:
> 
> \o/ - care to add the links on the doc? Would be really helpful for
> others I guess :).
> 
> 
> Doc? What doc?

This one: https://docs.openstack.org/oslo.privsep/latest/index.html

I just created https://review.openstack.org/#/c/572670/

So, back to business: we need a spec and some discussion in order to reach
a consensus and implement best practices.

Using privsep will allow us to drop the sudo part, as it relies on rootwrap
instead. This approach also lets us filter the granted rights, so we can
ensure we actually don't let people do bad things.
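To make that concrete, here is a minimal sketch of how a privsep entrypoint is
usually declared - the context name, config section and capabilities below are
purely illustrative, not an actual tripleoclient proposal:

    import os

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Illustrative context: grant only the capabilities we actually need,
    # instead of a blanket "NOPASSWD:ALL" sudo rule for the stack user.
    pctxt = priv_context.PrivContext(
        'tripleoclient',
        cfg_section='tripleoclient_privileged',
        pypath=__name__ + '.pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE],
    )


    @pctxt.entrypoint
    def chown_to_deploy_user(path, uid, gid):
        # Runs inside the privsep daemon; the caller stays unprivileged.
        os.chown(path, uid, gid)

The nice part is that the allowed privileged operations end up explicit and
reviewable, which is exactly the "no hide-and-seek" point below.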

The mentioned blog posts also cover the test process, and show how we can
ensure the privileged calls are properly mocked. They also propose a
directory structure, and stress the way the privileged methods should be
called.
All of that makes perfect sense, as it follows a simple logic: if you need
privileges, show them without any hide-and-seek game.

That advice should be followed, and integrated into any spec/blueprint
we write prior to the implementation.

Regarding the tripleoclient part: there's currently one annoying issue,
as the generated files aren't owned by the deploy user (usually named
"stack").
This isn't a really urgent fix, but I'm pretty sure we have to block any
change heading toward a "quick'n'dirty" resolution.

Cheers,

C.

> 
> --
> MC
>  
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Mike Carden
>
>
> \o/ - care to add the links on the doc? Would be really helpful for
> others I guess :).
>

Doc? What doc?

--
MC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Cédric Jeanneret


On 06/06/2018 06:37 AM, Mike Carden wrote:
> 
> > In regards to your suggested positions within python code such as the
> > client, its worth looking at oslo.privsep [1] where a decorator can be
> > used for when needing to setuid.
> 
> hmm yep, have to understand how to use it - its doc is.. well. kind of
> sparse. Would be good to get examples.
> 
> 
> 
> Examples you say? Michael Still has been at that recently:
> 
> https://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/
> https://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/

\o/ - care to add the links on the doc? Would be really helpful for
others I guess :).

> 
> -- 
> MC
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Mike Carden
>
>
> > In regards to your suggested positions within python code such as the
> > client, its worth looking at oslo.privsep [1] where a decorator can be
> > used for when needing to setuid.
>
> hmm yep, have to understand how to use it - its doc is.. well. kind of
> sparse. Would be good to get examples.



Examples you say? Michael Still has been at that recently:

https://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/
https://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/

-- 
MC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Cédric Jeanneret


On 06/05/2018 06:08 PM, Luke Hinds wrote:
> 
> 
> On Tue, Jun 5, 2018 at 3:44 PM, Cédric Jeanneret wrote:
> 
> Hello guys!
> 
> I'm currently working on python-tripleoclient in order to squash the
> dreadful "NOPASSWD:ALL" allowed to the "stack" user.
> 
> The start was an issue with the rights on some files being wrong (owned
> by root instead of stack, in stack's home). After some digging and poking,
> it appears the undercloud deployment is called with a "sudo openstack
> tripleo deploy" command - this, of course, creates some major issues
> regarding both security and rights management.
> 
> I see a couple of ways to correct that bad situation:
> - keep the global "sudo" call, and play with setuid/setgid when we
> actually don't need root access (as mentioned in this comment¹)
> 
> - drop that global sudo call, and replace all the necessary calls with
> some "sudo" when needed. This involves replacing native python
> code, like "os.mkdir" and the like.
> 
> The first one isn't a solution - code maintenance will not be possible;
> having to think "darn, os.setuid() before calling that, because I don't
> need root" all over the place just doesn't work.
> 
> So I started on the second one. It's, of course, longer, not really nice,
> and painful, but at least it will end in a good state and a not-so-bad
> solution.
> 
> This also aligns with the current work of the Security Squad on "limiting
> sudo rights and accesses".
> 
> For now I don't have a proper patch to show, but it will most probably
> appear shortly, as a Work In Progress (I don't think it will be
> mergeable for some time, due to all the constraints we have regarding
> version portability, new sudoer integration and so on).
> 
> I'll post the relevant review link as a reply to this thread when I
> have something I can show.
> 
> Cheers,
> 
> C.
> 
> 
> Hi Cédric,

Hello Luke,

> 
> Pleased to hear you are willing to take this on.

Well, we have to ;).

> 
> It makes sense we should co-ordinate efforts here as I have been looking
> at the same item, but planned to start with heat-admin over on the
> overcloud.

yep, took part in some discussions already.

> 
> Due to the complexity / level of coverage in the use of sudo, it makes
> sense to have a spec where we can then get community consensus on the
> approach selected. This is important as it looks like we will need to
> have some sort of white list to maintain and make considerations around
> functional test coverage in CI (in case someone writes something new
> wrapped in sudo).

For now, I'm trying to see the extent of the changes at the code level itself.
This also helps me understand the different pieces involved, and I'm
also doing some archaeology in order to understand the current situation.

But indeed, we should push a spec/blueprint in order to get a good idea
of the task and open the discussion on a clear basis.

> 
> In regards to your suggested positions within python code such as the
> client, its worth looking at oslo.privsep [1] where a decorator can be
> used for when needing to setuid.

hmm yep, have to understand how to use it - its doc is.. well. kind of
sparse. Would be good to get examples.

> 
> Let's discuss this also in the squad meeting tomorrow and try to
> synergize approach for all tripleo nix accounts.

You can ping me on #tripleo - I go by the nick Tengu there. I'm on CET (so
yeah, already up'n'running ;)).

Cheers,

C.

> 
> [1] https://github.com/openstack/oslo.privsep
> 
> Cheers,
> 
> Luke
> 
> 
> ¹
> 
> https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/tripleo_deploy.py#L827-L829
> 
> 
> 
> 
> -- 
> Cédric Jeanneret
> Software Engineer
> DFG:DF
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-05 Thread Peng Liu
Hi Kuryr-kubernetes team,

I'm thinking of proposing a new BP to support the Kubernetes Network Custom
Resource Definition De-facto Standard Version 1 [1], which was drafted by the
network plumbing working group of kubernetes-sig-network. I'll call it the
NPWG spec below.

The purpose of the NPWG spec is to standardize the multi-network effort
around K8s by defining a CRD object 'network' which can be consumed by
various CNI plugins. I know there is already a BP, VIF-Handler And Vif
Drivers Design, which designs a set of mechanisms to implement the
multi-network functionality. However, I think it is still worthwhile to
support this widely accepted NPWG spec.

My proposal is to implement a new vif_driver, which can interpret the pod
annotation and the CRD defined by the NPWG spec, and attach pods to additional
Neutron subnets and ports accordingly. This new driver should be mutually
exclusive with the sriov and additional_subnets drivers, so end users can
choose either way of using multi-network with kuryr-kubernetes.
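To make the idea a bit more concrete, here is a rough, purely hypothetical
sketch of how such a driver could read the additional networks a pod requests;
the helper name is invented, and only the JSON form of the NPWG annotation is
handled (the spec also allows a plain comma-separated list of names):

    import json

    # Annotation key defined by the NPWG de-facto standard.
    NPWG_NETWORKS_ANNOTATION = 'k8s.v1.cni.cncf.io/networks'


    def parse_npwg_networks(pod):
        # Hypothetical helper: list the additional networks a pod requests.
        annotations = pod.get('metadata', {}).get('annotations', {})
        raw = annotations.get(NPWG_NETWORKS_ANNOTATION)
        if not raw:
            return []
        return [{'name': sel['name'],
                 'namespace': sel.get('namespace'),
                 'interface': sel.get('interface')}
                for sel in json.loads(raw)]


    # Example pod (trimmed) requesting one extra network via the annotation.
    pod = {'metadata': {'annotations': {
        NPWG_NETWORKS_ANNOTATION: '[{"name": "net-a", "interface": "eth1"}]'}}}
    print(parse_npwg_networks(pod))

The driver would then map each returned entry to a Neutron subnet/port and
attach it to the pod, as described above.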

Please let me know your thoughts; any comments are welcome.



[1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd


Regards,

-- 
Peng Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-06-05 Thread Akihiro Motoki
2018-06-06 (Wed) 11:54 Xinni Ge :

> Hi, akihiro and other guys,
>
> I understand why minified is considered to be non-free, but I was confused
> about the statement
> "At the very least, a non-minified version should be present next to the
> minified version" [1]
> in the documentation.
>
> Actually in existing xstatic repo, I observed several minified files in
> angular_fileupload, jquery-migrate, or bootstrap_scss.
> So, I uploaded those minified files as in the release package of
>  angular/material.
>

Good point. My interpretation is:
- Basically, minified files should not be included in xstatic deliverables.
- Even though it is not recommended, if minified files are included, the
corresponding non-minified versions must be included as well.

Considering this, I believe we should not include minified files in new
xstatic deliverables.
Does that make sense?


>
> Personally I don't insist on minified files, and I will delete all
> minified files and re-upload the patch.
> Thanks a lot for the advice.
>

Thanks for your understanding and patience.
Let's land the pending reviews soon :)

Akihiro


>
> [1]
> https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy
>
> 
> Ge Xinni
> Email: xinni.ge1...@gmail.com
> 
>
> On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki  wrote:
>
>> Hi,
>>
>> Sorry for re-using the ancient ML thread.
>> Looking at recent xstatic-* repo reviews, I am a bit afraid that
>> xstatic-cores do not have a common understanding on the principle of
>> xstatic packages.
>> I hope all xstatic-cores re-read "Packing Software" in the horizon
>> contributor docs [1], especially "Minified Javascript policy" [2],
>> carefully.
>>
>> Thanks,
>> Akihiro
>>
>> [1]
>> https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html
>> [2]
>> https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy
>>
>>
2018-04-04 (Wed) 14:35 Xinni Ge :
>>
>>> Hi Ivan and other Horizon team member,
>>>
>>> Thanks for adding us into xstatic-core group.
>>> But I still need your opinion and help to release the newly-added
>>> xstatic packages to pypi index.
>>>
>>> Current `xstatic-core` group doesn't have the permission to PUSH SIGNED
>>> TAG, and I cannot release the first non-trivial version.
>>>
>>> If I (or maybe Kaz) could be added into xstatic-release group, we can
>>> release all the 8 packages by ourselves.
>>>
>>> Or, we would really appreciate it if any member of xstatic-release could
>>> help to do it.
>>>
>>> Just for your quick access, here is the link of access permission page
>>> of one xstatic package.
>>>
>>> https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access
>>>
>>>
>>> --
>>> Best Regards,
>>> Xinni
>>>
>>> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
>>> wrote:
>>>
 Hi Ivan,


 Thank you very much.
 I've confirmed that all of us have been added to xstatic-core.

 As discussed, we will focus on the following packages, which we added for
 heat-dashboard, and will not touch other xstatic repos as core.

 xstatic-angular-material
 xstatic-angular-notify
 xstatic-angular-uuid
 xstatic-angular-vis
 xstatic-filesaver
 xstatic-js-yaml
 xstatic-json2yaml
 xstatic-vis

 Regards,
 Kaz

 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
 > Hi Kaz,
 >
 > Don't worry, we're on the same page with you. I added both you, Xinni
 and
 > Keichii to the xstatic-core group. Thank you for your contributions!
 >
 > Regards,
 > Ivan Kolodyazhny,
 > http://blog.e0ne.info/
 >
 > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
 wrote:
 >>
 >> Hi Ivan & Horizon folks
 >>
 >>
 >> AFAIK, Horizon team had conclusion that you will add the specific
 >> members to xstatic-core, correct ?
 >> Can I ask you to add the following members ?
 >> # All three are heat-dashboard cores.
 >>
 >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
 >> Xinni Ge / xinni.ge1...@gmail.com
 >> Keiichi Hikita / keiichi.hik...@gmail.com
 >>
 >> Please give me a shout, if we are not on same page or any concern.
 >>
 >> Regards,
 >> Kaz
 >>
 >>
 >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
 >> > Hi Ivan, Akihiro,
 >> >
 >> >
 >> > Thanks for your kind arrangement.
 >> > Looking forward to hearing your decision soon.
 >> >
 >> > Regards,
 >> > Kaz
 >> >
 >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
 >> >> HI Team,
 >> >>
 >> >> From my perspective, I'm OK both with #2 and #3 options. I agree
 that
 >> >> #4
 >> >> could be too complicated for us. Anyway, we've got this topic on
 the
 >> >> meeting
 >> >> agenda [1] so we'll discuss it there too. I'll share our decision
 after
 >> >> the
 >> >> meeting.
 >> >>
 >> >> [1] 

Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-06-05 Thread Xinni Ge
Hi, akihiro and other guys,

I understand why minified is considered to be non-free, but I was confused
about the statement
"At the very least, a non-minified version should be present next to the
minified version" [1]
in the documentation.

Actually, in the existing xstatic repos I observed several minified files, in
angular_fileupload, jquery-migrate, and bootstrap_scss.
So, I uploaded those minified files as found in the release package of
angular/material.

Personally I don't insist on minified files, and I will delete all minified
files and re-upload the patch.
Thanks a lot for the advice.

[1]
https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy


Ge Xinni
Email: xinni.ge1...@gmail.com


On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki  wrote:

> Hi,
>
> Sorry for re-using the ancient ML thread.
> Looking at recent xstatic-* repo reviews, I am a bit afraid that
> xstatic-cores do not have a common understanding on the principle of
> xstatic packages.
> I hope all xstatic-cores re-read "Packing Software" in the horizon
> contributor docs [1], especially "Minified Javascript policy" [2],
> carefully.
>
> Thanks,
> Akihiro
>
> [1] https://docs.openstack.org/horizon/latest/contributor/
> topics/packaging.html
> [2] https://docs.openstack.org/horizon/latest/
> contributor/topics/packaging.html#minified-javascript-policy
>
>
2018-04-04 (Wed) 14:35 Xinni Ge :
>
>> Hi Ivan and other Horizon team member,
>>
>> Thanks for adding us into xstatic-core group.
>> But I still need your opinion and help to release the newly-added xstatic
>> packages to pypi index.
>>
>> Current `xstatic-core` group doesn't have the permission to PUSH SIGNED
>> TAG, and I cannot release the first non-trivial version.
>>
>> If I (or maybe Kaz) could be added into xstatic-release group, we can
>> release all the 8 packages by ourselves.
>>
>> Or, we would really appreciate it if any member of xstatic-release could
>> help to do it.
>>
>> Just for your quick access, here is the link of access permission page of
>> one xstatic package.
>> https://review.openstack.org/#/admin/projects/openstack/
>> xstatic-angular-material,access
>>
>> --
>> Best Regards,
>> Xinni
>>
>> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
>> wrote:
>>
>>> Hi Ivan,
>>>
>>>
>>> Thank you very much.
>>> I've confirmed that all of us have been added to xstatic-core.
>>>
>>> As discussed, we will focus on the following packages, which we added for
>>> heat-dashboard, and will not touch other xstatic repos as core.
>>>
>>> xstatic-angular-material
>>> xstatic-angular-notify
>>> xstatic-angular-uuid
>>> xstatic-angular-vis
>>> xstatic-filesaver
>>> xstatic-js-yaml
>>> xstatic-json2yaml
>>> xstatic-vis
>>>
>>> Regards,
>>> Kaz
>>>
>>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
>>> > Hi Kaz,
>>> >
>>> > Don't worry, we're on the same page with you. I added both you, Xinni
>>> and
>>> > Keichii to the xstatic-core group. Thank you for your contributions!
>>> >
>>> > Regards,
>>> > Ivan Kolodyazhny,
>>> > http://blog.e0ne.info/
>>> >
>>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
>>> wrote:
>>> >>
>>> >> Hi Ivan & Horizon folks
>>> >>
>>> >>
>>> >> AFAIK, Horizon team had conclusion that you will add the specific
>>> >> members to xstatic-core, correct ?
>>> >> Can I ask you to add the following members ?
>>> >> # All three are heat-dashboard cores.
>>> >>
>>> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
>>> >> Xinni Ge / xinni.ge1...@gmail.com
>>> >> Keiichi Hikita / keiichi.hik...@gmail.com
>>> >>
>>> >> Please give me a shout, if we are not on same page or any concern.
>>> >>
>>> >> Regards,
>>> >> Kaz
>>> >>
>>> >>
>>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>>> >> > Hi Ivan, Akihiro,
>>> >> >
>>> >> >
>>> >> > Thanks for your kind arrangement.
>>> >> > Looking forward to hearing your decision soon.
>>> >> >
>>> >> > Regards,
>>> >> > Kaz
>>> >> >
>>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>>> >> >> HI Team,
>>> >> >>
>>> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree
>>> that
>>> >> >> #4
>>> >> >> could be too complicated for us. Anyway, we've got this topic on
>>> the
>>> >> >> meeting
>>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision
>>> after
>>> >> >> the
>>> >> >> meeting.
>>> >> >>
>>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Regards,
>>> >> >> Ivan Kolodyazhny,
>>> >> >> http://blog.e0ne.info/
>>> >> >>
>>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki <
>>> amot...@gmail.com>
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> Hi Kaz and Ivan,
>>> >> >>>
>>> >> >>> Yeah, it is worth discussed officially in the horizon team
>>> meeting or
>>> >> >>> the
>>> >> >>> mailing list thread to get a consensus.
>>> >> >>> Hopefully you can add this topic to the horizon meeting agenda.
>>> >> >>>
>>> >> >>> After sending the previous mail, I noticed anther option. I see
>>> 

Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-05 Thread Tony Breeds
On Mon, Jun 04, 2018 at 01:31:11PM -0700, Ihar Hrachyshka wrote:
> Hi neutrinos and all,
> 
> As some of you've already noticed, the last several months I was
> scaling down my involvement in Neutron and, more generally, OpenStack.
> I am at a point where I feel confident my disappearance won't disturb
> the project, and so I am ready to make it official.
> 
> I am stepping down from all administrative roles I so far accumulated
> in Neutron and Stable teams. I shifted my focus to another project,
> and so I just removed myself from all relevant admin groups to reflect
> the change.
> 
> It was a nice 4.5 year ride for me. I am very happy with what we
> achieved in all these years and a bit sad to leave. The community is
> the most brilliant and compassionate and dedicated to openness group
> of people I was lucky to work with, and I am reminded daily how
> awesome it is.
> 
> I am far from leaving the industry, or networking, or the promise of
> open source infrastructure, so I am sure we will cross our paths once
> in a while with most of you. :) I also plan to hang out in our IRC
> channels and make snarky comments, be aware!

Thanks for all your help and support with Stable Maintenance.  Your
input, and snarky comments will be missed!

Best of luck with your new adventure!

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose

2018-06-05 Thread Kaz Shinohara
Thanks Ivan, will check your patch soon.

Regards,
Kaz(Heat)


2018-06-05 22:59 GMT+09:00 Akihiro Motoki :

> This is an important step to drop nose and nosehtmloutput :)
> We plan to switch the test runner and then re-enable integration tests
> (with selenium) for cross project testing.
>
> In addition, we horizon team are trying to minimize gate breakage in
> horizon plugins for recent changes (this and django 2.0).
> Hopefully pending related patches will land soon.
>
>
> 2018-06-05 (Tue) 22:52 Doug Hellmann :
>
>> Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300:
>> > Hi team,
>> >
>> > In Horizon, we're going to get rid of the unsupported Nose and use the
>> > Django test runner instead [1]. Nose has some issues and limitations
>> > which block us in our testing improvement efforts.
>> >
>> > Nose has a different test discovery mechanism than Django does. So, there
>> > was a chance of breaking some Horizon plugins :(. Unfortunately, we don't
>> > have cross-project CI yet (TBH, I'm working on it and it's one of the
>> > first steps to get it done); that's why I tested this change [2] against
>> > all known plugins [3].
>> >
>> > Most of the projects don't need any changes. I proposed a few changes to
>> > plugin repositories [4] and most of them are merged already. Thanks a lot
>> > to everybody who helped me with it. Patches for heat-dashboard [5] and
>> > searchlight-ui [6] are under review.
>> >
>> > Additional effort is needed for the murano-dashboard, sahara-dashboard,
>> > and watcher-dashboard projects. murano-dashboard has the Nose test runner
>> > enabled in its config, so the Horizon change won't affect it.
>> >
>> > I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to
>> > explicitly enable the Nose test runner there until we fix all related
>> > issues. I hope we'll have a good number of cross-project activities with
>> > these teams.
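(For reference, the "test runner" toggle being discussed here is just the
Django TEST_RUNNER setting; a minimal sketch of the two variants, assuming
django-nose is installed for the Nose case:)

    # settings.py (sketch)

    # Temporarily keep the legacy Nose runner, as murano-dashboard already does:
    TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

    # Or switch to the default Django test runner, matching the Horizon change:
    # TEST_RUNNER = 'django.test.runner.DiscoverRunner'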
>> >
>> > Once all the patches above are merged, we'll be ready for the next step
>> > of making the Horizon and plugins CI better.
>> >
>> >
>> > [1] https://review.openstack.org/#/c/544296/
>> > [2]
>> > https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvN
>> FM8NgTkrPxovMo/edit?usp=sharing
>> > [3] https://docs.openstack.org/horizon/latest/install/plugin-
>> registry.html
>> > [4]
>> > https://review.openstack.org/#/q/topic:bp/improve-horizon-
>> testing+(status:open+OR+status:merged)
>> > [5] https://review.openstack.org/572095
>> > [6] https://review.openstack.org/572124
>> > [7] https://review.openstack.org/572390
>> > [8] https://review.openstack.org/572391
>> >
>> >
>> >
>> > Regards,
>> > Ivan Kolodyazhny,
>> > http://blog.e0ne.info/
>>
>> Nice work! Thanks for taking the initiative on updating our tooling.
>>
>> Doug
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-06-05 Thread Jeremy Stanley
On 2018-05-31 13:00:47 + (+), Jeremy Stanley wrote:
> On 2018-05-31 10:33:51 +0200 (+0200), Thierry Carrez wrote:
> > Ade Lee wrote:
> > > [...]
> > > So it seems that the two blockers above have been resolved. So is it
> > > time to add a castellan-compatible secret store to the base services?
> > 
> > It's definitely time to start a discussion about it, at least :)
> > 
> > Would you be interested in starting a ML thread about it ? If not, that's
> > probably something I can do :)
> 
> That was, in fact, the entire reason I started this subthread,
> changed the subject and added the [tc] tag. ;)
> 
> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html
> 
> I figured I'd let it run through the summit to garner feedback
> before proposing the corresponding Gerrit change.

Seeing no further objections, I give you
https://review.openstack.org/572656 for the next step.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][ptg] Mark your availability for Denver PTG, 2018

2018-06-05 Thread Ghanshyam
Hi All,

As you all might know, the next PTG is in Denver [1], and we will plan the QA
space at the PTG.

Please let me know if you are planning to attend the QA sessions (it is not
necessary to be full time in the QA area). This is to get a rough number of
attendees for QA. I know it might be a little early to ask, but you can reply
with your tentative plan.

Either reply to this ML or ping me on IRC. It would be helpful if you could
let me know by 13th June.

Thanks, and I hope to see more attendees at the PTG.

[1] 
https://www.openstack.org/ptg/ 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Signing off

2018-06-05 Thread Rodrigo Duarte
Henry,

I'm really sad to see you go, you were a terrific mentor when I first
joined the community - I remember all the thorough reviews and nice
discussions ranging from topics on how to model the root domain for the
reseller usecase to how to improve the role assignments API. :)

Thanks for everything!

On Wed, May 30, 2018 at 11:22 AM, Gage Hugo  wrote:

> It was great working with you Henry.  Hope to see you around sometime and
> wishing you all the best!
>
> On Wed, May 30, 2018 at 3:45 AM, Henry Nash  wrote:
>
>> Hi
>>
>> It is with a somewhat heavy heart that I have decided that it is time to
>> hang up my keystone core status. Having been involved since the closing
>> stages of Folsom, I've had a good run! When I look at how far keystone has
>> come since the v2 days, it is remarkable - and we should all feel a sense
>> of pride in that.
>>
>> Thanks to all the hard work, commitment, humour and support from all the
>> keystone folks over the years - I am sure we will continue to interact and
>> meet among the many other open source projects that many of us are becoming
>> involved with. Ad astra!
>>
>> Best regards,
>>
>> Henry
>> Twitter: @henrynash
>> linkedIn: www.linkedin.com/in/henrypnash
>>
>> Unless stated otherwise above:
>> IBM United Kingdom Limited - Registered in England and Wales with number
>> 741598.
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Paul Belanger
On Tue, Jun 05, 2018 at 04:48:00PM -0400, Zane Bitter wrote:
> On 05/06/18 16:38, Doug Hellmann wrote:
> > Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:
> > > We've talked a bit about migrating to Python 3, but (unless I missed it)
> > > not a lot about which version of Python 3. Currently all projects that
> > > support Python 3 are gating against 3.5. However, Ubuntu Artful and
> > > Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have
> > > been released since then.) The one time it did come up in a thread, we
> > > decided it was blocked on the availability of 3.6 in Ubuntu to run on
> > > the test nodes, so it's time to discuss it again.
> > > 
> > > AIUI we're planning to switch the test nodes to Bionic, since it's the
> > > latest LTS release, so I'd assume that means that when we talk about
> > > running docs jobs, pep8  with Python3 (under the python3-first
> > > project-wide goal) that means 3.6. And while 3.5 jobs should continue to
> > > work, it seems like we ought to start testing ASAP with the version that
> > > users are going to get by default if they choose to use our Python3
> > > packages.
> > > 
> > > The list of breaking changes in 3.6 is quite short (although not zero),
> > > so I wouldn't expect too many roadblocks:
> > > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
> > > 
> > > I think we can split the problem into two parts:
> > > 
> > > * How can we detect any issues ASAP.
> > > 
> > > Would it be sane to give all projects with a py35 unit tests job a
> > > non-voting py36 job so that they can start fixing any issues right away?
> > > Like this: https://review.openstack.org/572535
> > 
> > That seems like a good way to start.
> > 
> > Maybe we want to rename that project template to openstack-python3-jobs
> > to keep it version-agnostic?
> 
> You mean the 35_36 one? Actually, let's discuss this on the review.
> 
Yes, please let's keep the python35 / python36 project-templates; I've left
comments on the review.

> > > 
> > > * How can we ensure every project fixes any issues and migrates to
> > > voting gates, including for functional test jobs?
> > > 
> > > Would it make sense to make this part of the 'python3-first'
> > > project-wide goal?
> > 
> > Yes, that seems like a good idea. We can be specific about the version
> > of python 3 to be used to achieve that goal (assuming it is selected as
> > a goal).
> > 
> > The instructions I've been putting together are based on just using
> > "python3" in the tox.ini file because I didn't want to have to update
> > that every time we update to a new version of python. Do you think we
> > should be more specific there, too?
> 
> That's probably fine IMHO. We should just be aware that e.g. when distros
> start switching to 3.7 then people's local jobs will start running in 3.7.
> 
> For me, at least, this has already been the case with 3.6 - tox is now
> python3 by default in Fedora, so e.g. pep8 jobs have been running under 3.6
> for a while now. There were a *lot* of deprecation warnings at first.
> 
> > Doug
> > 
> > > 
> > > cheers,
> > > Zane.
> > > 
> > > 
> > > (Disclaimer for the conspiracy-minded: you might assume that I'm
> > > cleverly concealing inside knowledge of which version of Python 3 will
> > > replace Python 2 in the next major release of RHEL/CentOS, but in fact
> > > you would be mistaken. The truth is I've been too lazy to find out, so
> > > I'm as much in the dark as anybody. Really. Anyway, this isn't about
> > > that, it's about testing within upstream OpenStack.)
> > > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Alex Xu
2018-06-05 22:53 GMT+08:00 Eric Fried :

> Alex-
>
> Allocations for an instance are pulled down by the compute manager
> and
> passed into the virt driver's spawn method since [1].  An allocation
> comprises a consumer, provider, resource class, and amount.  Once we can
> schedule to trees, the allocations pulled down by the compute manager
> will span the tree as appropriate.  So in that sense, yes, nova-compute
> knows which amounts of which resource classes come from which providers.
>

Eric, thanks, that is the thing I missed. Initially I thought we would return
the allocations from the scheduler down to the compute manager. I see that
we already pull the allocations in the compute manager now.
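For reference, the allocations pulled down for one instance (consumer) are
roughly shaped like the hand-written example below, spanning a root
compute-node provider and a child PF provider (loosely mirroring the placement
GET /allocations/{consumer_uuid} format; the UUIDs are placeholders):

    allocations = {
        'allocations': {
            # Root provider: the compute node itself.
            'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa': {
                'resources': {'VCPU': 2, 'MEMORY_MB': 4096},
            },
            # Child provider: the PF the VF was taken from.
            'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb': {
                'resources': {'SRIOV_NET_VF': 1},
            },
        },
    }

So the amounts are keyed by provider, even though, as Eric notes, correlating
a given allocation back to the exact part of the request may still require
looking at the flavor or the providers' traits.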


>
> However, if you're asking about the situation where we have two
> different allocations of the same resource class coming from two
> separate providers: Yes, we can still tell which RCxAMOUNT is associated
> with which provider; but No, we still have no inherent way to correlate
> a specific one of those allocations with the part of the *request* it
> came from.  If just the provider UUID isn't enough for the virt driver
> to figure out what to do, it may have to figure it out by looking at the
> flavor (and/or image metadata), inspecting the traits on the providers
> associated with the allocations, etc.  (The theory here is that, if the
> virt driver can't tell the difference at that point, then it actually
> doesn't matter.)
>
> [1] https://review.openstack.org/#/c/511879/
>
> On 06/05/2018 09:05 AM, Alex Xu wrote:
> > Maybe I missed something. Is there any way nova-compute can know which
> > child resource provider the resources are allocated from? For example,
> > the host has two PFs. The request asks for one VF, so nova-compute
> > needs to know which PF (resource provider) the VF is allocated from.
> > As I understand it, currently we only return a list of alternative
> > resource providers to nova-compute, and those alternatives are root
> > resource providers.
> >
> > 2018-06-05 21:29 GMT+08:00 Jay Pipes  > >:
> >
> > On 06/05/2018 08:50 AM, Stephen Finucane wrote:
> >
> > I thought nested resource providers were already supported by
> > placement? To the best of my knowledge, what is /not/ supported
> > is virt drivers using these to report NUMA topologies but I
> > doubt that affects you. The placement guys will need to weigh in
> > on this as I could be missing something but it sounds like you
> > can start using this functionality right now.
> >
> >
> > To be clear, this is what placement and nova *currently* support
> > with regards to nested resource providers:
> >
> > 1) When creating a resource provider in placement, you can specify a
> > parent_provider_uuid and thus create trees of providers. This was
> > placement API microversion 1.14. Also included in this microversion
> > was support for displaying the parent and root provider UUID for
> > resource providers.
> >
> > 2) The nova "scheduler report client" (terrible name, it's mostly
> > just the placement client at this point) understands how to call
> > placement API 1.14 and create resource providers with a parent
> provider.
> >
> > 3) The nova scheduler report client uses a ProviderTree object [1]
> > to cache information about the hierarchy of providers that it knows
> > about. For nova-compute workers managing hypervisors, that means the
> > ProviderTree object contained in the report client is rooted in a
> > resource provider that represents the compute node itself (the
> > hypervisor). For nova-compute workers managing baremetal, that means
> > the ProviderTree object contains many root providers, each
> > representing an Ironic baremetal node.
> >
> > 4) The placement API's GET /allocation_candidates endpoint now
> > understands the concept of granular request groups [2]. Granular
> > request groups are only relevant when a user wants to specify that
> > child providers in a provider tree should be used to satisfy part of
> > an overall scheduling request. However, this support is yet
> > incomplete -- see #5 below.
> >
> > The following parts of the nested resource providers modeling are
> > *NOT* yet complete, however:
> >
> > 5) GET /allocation_candidates does not currently return *results*
> > when granular request groups are specified. So, while the placement
> > service understands the *request* for granular groups, it doesn't
> > yet have the ability to constrain the returned candidates
> > appropriately. Tetsuro is actively working on this functionality in
> > this patch series:
> >
> > https://review.openstack.org/#/q/status:open+project:
> openstack/nova+branch:master+topic:bp/nested-resource-
> providers-allocation-candidates
> > 

Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-05 Thread Lingxian Kong
Hi Zane, please count me in :-)


Cheers,
Lingxian Kong

On Wed, Jun 6, 2018 at 4:52 AM, Remo Mattei  wrote:

> I will be happy to check it out.
>
> Remo
>
>
> On Jun 5, 2018, at 9:19 AM, Zane Bitter  wrote:
>
> I've been doing some investigation into the Service Catalog in Kubernetes
> and how we can get OpenStack resources to show up in the catalog for use by
> applications running in Kubernetes. (The Big 3 public clouds already
> support this.) The short answer is via an implementation of something
> called the Open Service Broker API, but there are shortcuts available to
> make it easier to do.
>
> I'm convinced that this is readily achievable and something we ought to do
> as a community.
>
> I've put together a (long-winded) FAQ below to answer all of your
> questions about it.
>
> Would you be interested in working on a new project to implement this
> integration? Reply to this thread and let's collect a list of volunteers to
> form the initial core review team.
>
> cheers,
> Zane.
>
>
> What is the Open Service Broker API?
> 
>
> The Open Service Broker API[1] is a standard way to expose external
> resources to applications running in a PaaS. It was originally developed in
> the context of CloudFoundry, but the same standard was adopted by
> Kubernetes (and hence OpenShift) in the form of the Service Catalog
> extension[2]. (The Service Catalog in Kubernetes is the component that
> calls out to a service broker.) So a single implementation can cover the
> most popular open-source PaaS offerings.
>
> In many cases, the services take the form of simply a pre-packaged
> application that also runs inside the PaaS. But they don't have to be -
> services can be anything. Provisioning via the service broker ensures that
> the services requested are tied in to the PaaS's orchestration of the
> application's lifecycle.
>
> (This is certainly not the be-all and end-all of integration between
> OpenStack and containers - we also need ways to tie PaaS-based applications
> into the OpenStack's orchestration of a larger group of resources. Some
> applications may even use both. But it's an important part of the story.)
>
> What sorts of services would OpenStack expose?
> --
>
> Some example use cases might be:
>
> * The application needs a reliable message queue. Rather than spinning up
> multiple storage-backed containers with anti-affinity policies and dealing
> with the overhead of managing e.g. RabbitMQ, the application requests a
> Zaqar queue from an OpenStack cloud. The overhead of running the queueing
> service is amortised across all of the applications in the cloud. The queue
> gets cleaned up correctly when the application is removed, since it is tied
> into the application definition.
>
> * The application needs a database. Rather than spinning one up in a
> storage-backed container and dealing with the overhead of managing it, the
> application requests a Trove DB from an OpenStack cloud.
>
> * The application includes a service that needs to run on bare metal for
> performance reasons (e.g. could also be a database). The application
> requests a bare-metal server from Nova w/ Ironic for the purpose. (The same
> applies to requesting a VM, but there are alternatives like KubeVirt -
> which also operates through the Service Catalog - available for getting a
> VM in Kubernetes. There are no non-proprietary alternatives for getting a
> bare-metal server.)
>
> AWS[3], Azure[4], and GCP[5] all have service brokers available that
> support these and many more services that they provide. I don't know of any
> reason in principle not to expose every type of resource that OpenStack
> provides via a service broker.
>
> How is this different from cloud-provider-openstack?
> 
>
> The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself
> to access features of the cloud to provide its service. For example, if k8s
> needs persistent storage for a container then it can request that from
> Cinder through cloud-provider-openstack[7]. It can also request a load
> balancer from Octavia instead of having to start a container running
> HAProxy to load balance between multiple instances of an application
> container (thus enabling use of hardware load balancers via the cloud's
> abstraction for them).
>
> In contrast, the Service Catalog interface allows the *application*
> running on Kubernetes to access features of the cloud.
>
> What does a service broker look like?
> -
>
> A service broker provides an HTTP API with 5 actions:
>
> * List the services provided by the broker
> * Create an instance of a resource
> * Bind the resource into an instance of the application
> * Unbind the resource from an instance of the application
> * Delete the resource
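(To give a feel for how small a broker can be, here is a hedged sketch of those
five endpoints using Flask; the catalog entry, IDs and credentials are
invented, and a real OpenStack broker would call the relevant service APIs
where the comments are:)

    from flask import Flask, jsonify

    app = Flask(__name__)

    # 1. List the services provided by the broker.
    @app.route('/v2/catalog')
    def catalog():
        return jsonify({'services': [{
            'id': 'example-zaqar-service-id',   # invented for the sketch
            'name': 'openstack-zaqar-queue',
            'description': 'Messaging queue backed by an OpenStack cloud',
            'bindable': True,
            'plans': [{'id': 'example-plan-id', 'name': 'default',
                       'description': 'A single queue'}],
        }]})

    # 2. Create an instance of a resource.
    @app.route('/v2/service_instances/<instance_id>', methods=['PUT'])
    def provision(instance_id):
        # A real broker would create the queue/DB/server via OpenStack APIs.
        return jsonify({}), 201

    # 3. Bind the resource into an instance of the application.
    @app.route('/v2/service_instances/<instance_id>'
               '/service_bindings/<binding_id>', methods=['PUT'])
    def bind(instance_id, binding_id):
        # Whatever comes back as credentials ends up in a Kubernetes secret.
        return jsonify({'credentials': {
            'endpoint': 'https://cloud.example/queue'}}), 201

    # 4. Unbind the resource from an instance of the application.
    @app.route('/v2/service_instances/<instance_id>'
               '/service_bindings/<binding_id>', methods=['DELETE'])
    def unbind(instance_id, binding_id):
        return jsonify({}), 200

    # 5. Delete the resource.
    @app.route('/v2/service_instances/<instance_id>', methods=['DELETE'])
    def deprovision(instance_id):
        return jsonify({}), 200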
>
> The binding step is used for things like providing a set of DB credentials
> to a 

Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Sean McGinnis

On 06/05/2018 02:55 PM, Zane Bitter wrote:

[snip]
The list of breaking changes in 3.6 is quite short (although not 
zero), so I wouldn't expect too many roadblocks:

https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6

I think we can split the problem into two parts:

* How can we detect any issues ASAP.

Would it be sane to give all projects with a py35 unit tests job a 
non-voting py36 job so that they can start fixing any issues right 
away? Like this: https://review.openstack.org/572535


FWIW, Cinder has had py36 jobs running (and voting) for both unit tests
and functional tests for just over a month now with no issues -
https://review.openstack.org/#/c/564513/



* How can we ensure every project fixes any issues and migrates to 
voting gates, including for functional test jobs?


Would it make sense to make this part of the 'python3-first' 
project-wide goal?


+1



cheers,
Zane.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Zane Bitter

On 05/06/18 16:38, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:

We've talked a bit about migrating to Python 3, but (unless I missed it)
not a lot about which version of Python 3. Currently all projects that
support Python 3 are gating against 3.5. However, Ubuntu Artful and
Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have
been released since then.) The one time it did come up in a thread, we
decided it was blocked on the availability of 3.6 in Ubuntu to run on
the test nodes, so it's time to discuss it again.

AIUI we're planning to switch the test nodes to Bionic, since it's the
latest LTS release, so I'd assume that means that when we talk about
running docs jobs, pep8  with Python3 (under the python3-first
project-wide goal) that means 3.6. And while 3.5 jobs should continue to
work, it seems like we ought to start testing ASAP with the version that
users are going to get by default if they choose to use our Python3
packages.

The list of breaking changes in 3.6 is quite short (although not zero),
so I wouldn't expect too many roadblocks:
https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6

I think we can split the problem into two parts:

* How can we detect any issues ASAP.

Would it be sane to give all projects with a py35 unit tests job a
non-voting py36 job so that they can start fixing any issues right away?
Like this: https://review.openstack.org/572535


That seems like a good way to start.

Maybe we want to rename that project template to openstack-python3-jobs
to keep it version-agnostic?


You mean the 35_36 one? Actually, let's discuss this on the review.



* How can we ensure every project fixes any issues and migrates to
voting gates, including for functional test jobs?

Would it make sense to make this part of the 'python3-first'
project-wide goal?


Yes, that seems like a good idea. We can be specific about the version
of python 3 to be used to achieve that goal (assuming it is selected as
a goal).

The instructions I've been putting together are based on just using
"python3" in the tox.ini file because I didn't want to have to update
that every time we update to a new version of python. Do you think we
should be more specific there, too?


That's probably fine IMHO. We should just be aware that e.g. when 
distros start switching to 3.7 then people's local jobs will start 
running in 3.7.


For me, at least, this has already been the case with 3.6 - tox is now 
python3 by default in Fedora, so e.g. pep8 jobs have been running under 
3.6 for a while now. There were a *lot* of deprecation warnings at first.



Doug



cheers,
Zane.


(Disclaimer for the conspiracy-minded: you might assume that I'm
cleverly concealing inside knowledge of which version of Python 3 will
replace Python 2 in the next major release of RHEL/CentOS, but in fact
you would be mistaken. The truth is I've been too lazy to find out, so
I'm as much in the dark as anybody. Really. Anyway, this isn't about
that, it's about testing within upstream OpenStack.)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:
> We've talked a bit about migrating to Python 3, but (unless I missed it) 
> not a lot about which version of Python 3. Currently all projects that 
> support Python 3 are gating against 3.5. However, Ubuntu Artful and 
> Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have 
> been released since then.) The one time it did come up in a thread, we 
> decided it was blocked on the availability of 3.6 in Ubuntu to run on 
> the test nodes, so it's time to discuss it again.
> 
> AIUI we're planning to switch the test nodes to Bionic, since it's the 
> latest LTS release, so I'd assume that means that when we talk about 
> running docs jobs, pep8  with Python3 (under the python3-first 
> project-wide goal) that means 3.6. And while 3.5 jobs should continue to 
> work, it seems like we ought to start testing ASAP with the version that 
> users are going to get by default if they choose to use our Python3 
> packages.
> 
> The list of breaking changes in 3.6 is quite short (although not zero), 
> so I wouldn't expect too many roadblocks:
> https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
> 
> I think we can split the problem into two parts:
> 
> * How can we detect any issues ASAP.
> 
> Would it be sane to give all projects with a py35 unit tests job a 
> non-voting py36 job so that they can start fixing any issues right away? 
> Like this: https://review.openstack.org/572535

That seems like a good way to start.

Maybe we want to rename that project template to openstack-python3-jobs
to keep it version-agnostic?

> 
> * How can we ensure every project fixes any issues and migrates to 
> voting gates, including for functional test jobs?
> 
> Would it make sense to make this part of the 'python3-first' 
> project-wide goal?

Yes, that seems like a good idea. We can be specific about the version
of python 3 to be used to achieve that goal (assuming it is selected as
a goal).

The instructions I've been putting together are based on just using
"python3" in the tox.ini file because I didn't want to have to update
that every time we update to a new version of python. Do you think we
should be more specific there, too?

Doug

> 
> cheers,
> Zane.
> 
> 
> (Disclaimer for the conspiracy-minded: you might assume that I'm 
> cleverly concealing inside knowledge of which version of Python 3 will 
> replace Python 2 in the next major release of RHEL/CentOS, but in fact 
> you would be mistaken. The truth is I've been too lazy to find out, so 
> I'm as much in the dark as anybody. Really. Anyway, this isn't about 
> that, it's about testing within upstream OpenStack.)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Jeremy Stanley
On 2018-06-05 15:55:49 -0400 (-0400), Zane Bitter wrote:
[...]
> AIUI we're planning to switch the test nodes to Bionic, since it's
> the latest LTS release, so I'd assume that means that when we talk
> about running docs jobs, pep8  with Python3 (under the
> python3-first project-wide goal) that means 3.6. And while 3.5
> jobs should continue to work, it seems like we ought to start
> testing ASAP with the version that users are going to get by
> default if they choose to use our Python3 packages.
[...]

Yes, though to clarify it's sanest to interpret our LTS distro
statement as testing on whatever the latest LTS release is at the
_start_ of the development cycle. Switching default testing
platforms has proven to be extremely disruptive to the development
process so we want that to happen as soon after the coordinated
release as feasible. That means the plan is to have the mandatory
PTI jobs for the Rocky cycle stick with Ubuntu 16.04 LTS (our
ubuntu-xenial nodes) which provides Python 3.5, but encourage teams
to add jobs running on Ubuntu 18.04 LTS (our ubuntu-bionic nodes) as
soon as they can to get a leg up on any potential disruption
(including the Python 3.6 it provides) before we force the PTI jobs
over to it at the start of the Stein cycle.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Zane Bitter
We've talked a bit about migrating to Python 3, but (unless I missed it) 
not a lot about which version of Python 3. Currently all projects that 
support Python 3 are gating against 3.5. However, Ubuntu Artful and 
Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have 
been released since then.) The one time it did come up in a thread, we 
decided it was blocked on the availability of 3.6 in Ubuntu to run on 
the test nodes, so it's time to discuss it again.


AIUI we're planning to switch the test nodes to Bionic, since it's the 
latest LTS release, so I'd assume that means that when we talk about 
running docs jobs, pep8  with Python3 (under the python3-first 
project-wide goal) that means 3.6. And while 3.5 jobs should continue to 
work, it seems like we ought to start testing ASAP with the version that 
users are going to get by default if they choose to use our Python3 
packages.


The list of breaking changes in 3.6 is quite short (although not zero), 
so I wouldn't expect too many roadblocks:

https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
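As a trivial example of the kind of item on that list (nothing
project-specific): 3.6 starts deprecating unknown backslash escapes in
ordinary string literals, so regexes written without raw strings begin to warn:

    import re

    # On Python 3.6, compiling this literal emits
    # "DeprecationWarning: invalid escape sequence '\d'" when warnings are
    # enabled; it is slated to become an error in a later release.
    legacy_pattern = "\d+"

    # The raw-string spelling is the forward-compatible fix.
    pattern = r"\d+"

    assert re.findall(pattern, "abc 123") == ["123"]
    assert re.findall(legacy_pattern, "abc 123") == ["123"]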

I think we can split the problem into two parts:

* How can we detect any issues ASAP.

Would it be sane to give all projects with a py35 unit tests job a 
non-voting py36 job so that they can start fixing any issues right away? 
Like this: https://review.openstack.org/572535


* How can we ensure every project fixes any issues and migrates to 
voting gates, including for functional test jobs?


Would it make sense to make this part of the 'python3-first' 
project-wide goal?


cheers,
Zane.


(Disclaimer for the conspiracy-minded: you might assume that I'm 
cleverly concealing inside knowledge of which version of Python 3 will 
replace Python 2 in the next major release of RHEL/CentOS, but in fact 
you would be mistaken. The truth is I've been too lazy to find out, so 
I'm as much in the dark as anybody. Really. Anyway, this isn't about 
that, it's about testing within upstream OpenStack.)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-05 Thread Dan Prince
On Mon, Jun 4, 2018 at 8:26 PM, Emilien Macchi  wrote:
> TL;DR: we made nice progress and you can checkout this demo:
> https://asciinema.org/a/185533
>
> We started the discussion back in Dublin during the last PTG. The idea of
> Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud)
> is to deploy a single node OpenStack where the provisioning happens on the
> same node (there is no notion of {under/over}cloud).
>
> A kind of "packstack" or "devstack", but using TripleO, which can offer:
> - composable containerized services
> - composable upgrades
> - composable roles
> - Ansible driven deployment
>
> One of the key features we have been focusing so far are:
> - low bar to be able to dev/test TripleO (single machine: VM), with simpler
> tooling

One idea that might be worth adding to this list is the idea of
"zero-footprint". Right now you can use a VM to isolate the
installation of the all-in-one installer on your laptop, which is cool,
and you can always use a VM to isolate things. But now that we have
containers it might also be cool to have the installer itself run in a
container rather than require the end user to install
python-tripleoclient at all.

A few of us tried out a similar sort of idea in Pike with the
undercloud_deploy interface (docker in docker, etc.). At the time we
didn't have config-download working so it had to all be done inside
the container. But now that we have config download working with the
undercloud/all-in-one installers the Ansible which is generated can
run anywhere so long as the relevant hooks are installed. (paunch,
etc.)

The benefit here is that the requirements are even less... the
developer can just use the framework to generate Ansible that spins up
containers on his/her laptop directly. Again, only the required
Ansible/heat hooks would need to be installed.

I mentioned a few months ago my old attempt was here (uses
undercloud_deploy) [1].

Also, worth mentioning that I got it working without installing Puppet
on my laptop too [2]. The idea being that now that our containers have
all the puppet modules in them, there is no real need to bind-mount them
in from the host anymore unless you are using the last few (HA??!!)
services that require puppet modules on baremetal. Perhaps we should
switch to installing the required puppet modules there dynamically
instead of requiring them for any old undercloud/all-in-one installer,
which largely focuses on non-HA deployments anyway, I think.

Is anyone else interested in the zero-footprint idea? Perhaps this is
the next iteration of the all-in-one installer?... but the one I'm
perhaps most interested in as a developer.

[1] https://github.com/dprince/talon
[2] https://review.openstack.org/#/c/550848/ (Add
DockerPuppetMountHostPuppet parameter)

Dan

> - make it fast (being able to deploy OpenStack in minutes)
> - being able to make a change in OpenStack (e.g. Keystone) and test the
> change immediately
>
> The workflow that we're currently targeting is:
> - deploy the system by yourself (centos7 or rhel7)
> - deploy the repos, install python-tripleoclient
> - run 'openstack tripleo deploy' (+ a few args)
> - (optional) modify your container with a Dockerfile + Ansible
> - Test your change
>
> Status:
> - tripleoclient was refactored in a way that the undercloud is actually a
> special configuration of the standalone deployment (still work in progress).
> We basically refactored the containerized undercloud to be more generic and
> configurable for standalone.
> - we can now deploy a standalone OpenStack with just Keystone + dependencies

Fwiw you could always do this with undercloud_deploy as well. But the
new interface is much nicer I agree. :)

> - which takes 12 minutes total (demo here: https://asciinema.org/a/185533
> and doc in progress:
> http://logs.openstack.org/27/571827/6/check/build-openstack-sphinx-docs/1885304/html/install/containers_deployment/standalone.html)
> - we have an Ansible role to push modifications to containers via a Docker
> file: https://github.com/openstack/ansible-role-tripleo-modify-image/
>
> What's next:
> - Documentation: as you can see the documentation is still in progress
> (https://review.openstack.org/#/c/571827/)
> - Continuous Integration: we're working on a new CI job:
> tripleo-ci-centos-7-standalone
> https://trello.com/c/HInL8pNm/7-upstream-ci-testing
> - Working on the standalone configuration interface, still WIP:
> https://review.openstack.org/#/c/569535/
> - Investigate the use case where a developer wants to prepare the containers
> before the deployment
>
> I hope this update was useful, feel free to give feedback or ask any
> questions,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens

2018-06-05 Thread Matthew Treinish
On Tue, Jun 05, 2018 at 10:47:17AM -0400, Ken Giusti wrote:
> Hi,
> 
> The telemetry integration test for oslo.messaging has started failing
> on the stable/queens branch [0].
> 
> A quick review of the logs points to a change in heat-tempest-plugin
> that is incompatible with the version of gabbi from queens upper
> constraints (1.40.0) [1][2].
> 
> The job definition [3] includes required-projects that do not have
> stable/queens branches - including heat-tempest-plugin.
> 
> My question - how do I prevent this job from breaking when these
> unbranched projects introduce changes that are incompatible with
> upper-constrants for a particular branch?

Tempest and plugins should be installed in a venv to isolate their
requirements from the rest of what devstack is installing during the
job. This should be happening by default; the only place they get installed
on the system python, and where there is a potential conflict, is if INSTALL_TEMPEST
is set to True. See:

https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n57

That flag only exists so we test tempest coinstallability in the gate, as well
as for local devstack users.

We don't install branchless projects on the system python in stable jobs exactly
because there is a likely conflict between the stable branch's requirements
and master's (which is what branchless projects follow).

-Matt Treinish

> 
> I've tried to use override-checkout in the job definition, but that
> seems a bit hacky in this case since the tagged versions don't appear
> to work and I've resorted to a hardcoded ref [4].
> 
> Advice appreciated, thanks!
> 
> [0] https://review.openstack.org/#/c/567124/
> [1] 
> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
> [2] 
> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
> [3] 
> https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250
> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-05 Thread Wesley Hayutin
On Tue, Jun 5, 2018 at 3:31 AM Raoul Scarazzini  wrote:

> On 05/06/2018 02:26, Emilien Macchi wrote:
> [...]
> > I hope this update was useful, feel free to give feedback or ask any
> > questions,
> [...]
>
> I'm no prophet here, but I see a bright future for this approach. I can
> imagine how useful this can be on the testing and much more the learning
> side. Thanks for sharing!
>
> --
> Raoul Scarazzini
> ra...@redhat.com


Real big +1 to everyone who has contributed to the standalone
installer.
From an end user experience, this is simple and fast! This is going to be the
base for some really cool work.

Emilien, the CI is working, enjoy your PTO :)
http://logs.openstack.org/17/572217/6/check/tripleo-ci-centos-7-standalone/b2eb1b7/logs/ara_oooq/result/bb49965e-4fb7-43ea-a9e3-c227702c17de/

Thanks!



>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-05 Thread Artem Goncharov
Hi Zane,

> Would you be interested in working on a new project to implement this
> integration? Reply to this thread and let's collect a list of volunteers
> to form the initial core review team.

Yes, I would also like to join. That's exactly what I am looking at in my
company as part of a K8s-on-OpenStack offering.

Regards,
Artem
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-05 Thread Remo Mattei
I will be happy to check it out.

Remo 

> On Jun 5, 2018, at 9:19 AM, Zane Bitter  wrote:
> 
> I've been doing some investigation into the Service Catalog in Kubernetes and 
> how we can get OpenStack resources to show up in the catalog for use by 
> applications running in Kubernetes. (The Big 3 public clouds already support 
> this.) The short answer is via an implementation of something called the Open 
> Service Broker API, but there are shortcuts available to make it easier to do.
> 
> I'm convinced that this is readily achievable and something we ought to do as 
> a community.
> 
> I've put together a (long-winded) FAQ below to answer all of your questions 
> about it.
> 
> Would you be interested in working on a new project to implement this 
> integration? Reply to this thread and let's collect a list of volunteers to 
> form the initial core review team.
> 
> cheers,
> Zane.
> 
> 
> What is the Open Service Broker API?
> 
> 
> The Open Service Broker API[1] is a standard way to expose external resources 
> to applications running in a PaaS. It was originally developed in the context 
> of CloudFoundry, but the same standard was adopted by Kubernetes (and hence 
> OpenShift) in the form of the Service Catalog extension[2]. (The Service 
> Catalog in Kubernetes is the component that calls out to a service broker.) 
> So a single implementation can cover the most popular open-source PaaS 
> offerings.
> 
> In many cases, the services take the form of simply a pre-packaged 
> application that also runs inside the PaaS. But they don't have to be - 
> services can be anything. Provisioning via the service broker ensures that 
> the services requested are tied in to the PaaS's orchestration of the 
> application's lifecycle.
> 
> (This is certainly not the be-all and end-all of integration between 
> OpenStack and containers - we also need ways to tie PaaS-based applications 
> into the OpenStack's orchestration of a larger group of resources. Some 
> applications may even use both. But it's an important part of the story.)
> 
> What sorts of services would OpenStack expose?
> --
> 
> Some example use cases might be:
> 
> * The application needs a reliable message queue. Rather than spinning up 
> multiple storage-backed containers with anti-affinity policies and dealing 
> with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar 
> queue from an OpenStack cloud. The overhead of running the queueing service 
> is amortised across all of the applications in the cloud. The queue gets 
> cleaned up correctly when the application is removed, since it is tied into 
> the application definition.
> 
> * The application needs a database. Rather than spinning one up in a 
> storage-backed container and dealing with the overhead of managing it, the 
> application requests a Trove DB from an OpenStack cloud.
> 
> * The application includes a service that needs to run on bare metal for 
> performance reasons (e.g. could also be a database). The application requests 
> a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to 
> requesting a VM, but there are alternatives like KubeVirt - which also 
> operates through the Service Catalog - available for getting a VM in 
> Kubernetes. There are no non-proprietary alternatives for getting a 
> bare-metal server.)
> 
> AWS[3], Azure[4], and GCP[5] all have service brokers available that support 
> these and many more services that they provide. I don't know of any reason in 
> principle not to expose every type of resource that OpenStack provides via a 
> service broker.
> 
> How is this different from cloud-provider-openstack?
> 
> 
> The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to 
> access features of the cloud to provide its service. For example, if k8s 
> needs persistent storage for a container then it can request that from Cinder 
> through cloud-provider-openstack[7]. It can also request a load balancer from 
> Octavia instead of having to start a container running HAProxy to load 
> balance between multiple instances of an application container (thus enabling 
> use of hardware load balancers via the cloud's abstraction for them).
> 
> In contrast, the Service Catalog interface allows the *application* running 
> on Kubernetes to access features of the cloud.
> 
> What does a service broker look like?
> -
> 
> A service broker provides an HTTP API with 5 actions:
> 
> * List the services provided by the broker
> * Create an instance of a resource
> * Bind the resource into an instance of the application
> * Unbind the resource from an instance of the application
> * Delete the resource
> 
> The binding step is used for things like providing a set of DB credentials to 
> a container. You can rotate credentials when replacing a 

Re: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team

2018-06-05 Thread Sean McGinnis


On 06/05/2018 11:14 AM, Doug Hellmann wrote:

[snip]

My understanding is that Kyle Mestery and Nikhil Komawar have both
moved on from OpenStack to other projects, so I propose that we
remove them from the core team.

Chris Dent has been active with reviews lately and has expressed
willingness to help manage the guide. I propose that we add him to
the team.

Please let me know what you think,
Doug

+1 from me. I think Chris would be a good addition.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team

2018-06-05 Thread Thierry Carrez
Doug Hellmann wrote:
> The review team [1] for the project-team-guide repository (managed
> by the TC) hasn't been updated for a while. I would like to propose
> removing a few reviewers who are no longer active, and adding one
> new reviewer.
> 
> My understanding is that Kyle Mestery and Nikhil Komawar have both
> moved on from OpenStack to other projects, so I propose that we
> remove them from the core team.
> 
> Chris Dent has been active with reviews lately and has expressed
> willingness to help manage the guide. I propose that we add him to
> the team.
> 
> Please let me know what you think,

+1

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-05 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2018-06-05 16:09:24 +:
> That might not be a good idea. That may just push the problem underground as 
> people are afraid to speak up publicly.
> 
> Perhaps an anonymous poll kind of thing, so that it can be counted publicly 
> but doesn't cause people to fear retaliation?

I have no idea how to judge the outcome of any sort of anonymous
poll.  And I really don't want my inbox to become one. :-)

We do our best to make governance decisions openly, based on the
information we have. But in more cases than I like we end up making
assumptions based on extrapolating from a small number of experiences
relayed privately. I don't want to base a review diversity policy
that may end up making it harder to accept contribution on assumptions.

Maybe if folks aren't comfortable talking publicly, they can talk
to their PTLs privately? Then we can get a sense of which teams
feel this sort of pressure, overall, instead of individuals.

> 
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Tuesday, June 05, 2018 7:39 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [tc] Organizational diversity tag
> 
> Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400:
> > Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +:
> > > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote:
> > > [...]
> > > > It feels like we would be saying that we don't trust 2 core reviewers
> > > > from the same company to put the project's goals or priorities over
> > > > their employer's.  And that doesn't feel like an assumption I would
> > > > want us to encourage through a tag meant to show the health of the
> > > > project.
> > > [...]
> > >
> > > That's one way of putting it. On the other hand, if we ostensibly
> > > have that sort of guideline (say, two core reviewers shouldn't be
> > > the only ones to review a change submitted by someone else from
> > > their same organization if the team is large and diverse enough to
> > > support such a pattern) then it gives our reviewers a better
> > > argument to push back on their management _if_ they're being
> > > strongly urged to review/approve certain patches. At least then they
> > > can say, "this really isn't going to fly because we have to get a
> > > reviewer from another organization to agree it's in the best
> > > interests of the project" rather than "fire me if you want but I'm
> > > not approving that change, no matter how much your product launch is
> > > going to be delayed."
> >
> > Do we have that problem? I honestly don't know how much pressure other
> > folks are feeling. My impression is that we've mostly become good at
> > finding the necessary compromises, but my experience doesn't cover all
> > of our teams.
> 
> To all of the people who have replied to me privately that they have
> experienced this problem:
> 
> We can't really start to address it until it's out here in the open.
> Please post to the list.
> 
> Doug
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 23rd Edition

2018-06-05 Thread Emilien Macchi
Welcome to the twenty-third edition of a weekly update in TripleO world!
The goal is to provide a short read (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130926.html

+-+
| General announcements |
+-+

+--> This week is Rocky Milestone 2.

+--+
| Continuous Integration |
+--+

+--> Ruck is arxcruz and Rover is rlandy. Please let them know any new CI
issue.
+--> Master promotion is 1 day, Queens is 0 day, Pike is 0 day and Ocata is
0 day. Really nice work CI folks!
+--> Sprint 14 is ongoing: Checkout
https://trello.com/c/1W62zvhh/770-sprint-14-goals but focus is to finish
upgrade CI work.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-+
| Upgrades |
+-+

+--> Reviews are requested on different topics: CI, Newton, FFU
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> Good progress done on All-In-One blueprint, update sent on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131135.html
+--> Still working on Containerized undercloud upgrades (bug with rabbitmq
upgrade: https://review.openstack.org/#/c/572449/)
+--> Enabling the containerized undercloud everywhere in CI
+--> Working on updating containers in CI when deploying a containerized
undercloud so we can test changes in all repos
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> Check out the new command "openstack overcloud failures" for better
output of deployment failures
+--> Documentation was improved with recent changes
+--> UI integration is still in progress
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> Working on : "Persist ceph-ansible fetch_directory", check it out:
https://review.openstack.org/#/c/567782/
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> Beginning trial of using storyboard not just for bugs but also for
stories/epics
+--> Review of existing config-download patches that still need to merge.
Hoping to finalize this week.
+--> Network config initial patches are up - very cool so far!
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> Custom validations work
+--> Nova event callback validation
+--> OpenShift on OpenStack validation work
+--> Mistral workflow plugin
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Public TLS is being refactored
+--> Working on limiting sudoers rights
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++

Owls were once a sign of victory in battle.
In ancient Greece, the Little Owl was the companion of Athena, the Greek
goddess of wisdom, which is one reason why owls symbolize learning and
knowledge.
But Athena was also a warrior goddess and the owl was considered the
protector of armies going into war.
If Greek soldiers saw an owl fly by during battle, they took it as a sign
of coming victory.
Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-05 Thread Zane Bitter
I've been doing some investigation into the Service Catalog in 
Kubernetes and how we can get OpenStack resources to show up in the 
catalog for use by applications running in Kubernetes. (The Big 3 public 
clouds already support this.) The short answer is via an implementation 
of something called the Open Service Broker API, but there are shortcuts 
available to make it easier to do.


I'm convinced that this is readily achievable and something we ought to 
do as a community.


I've put together a (long-winded) FAQ below to answer all of your 
questions about it.


Would you be interested in working on a new project to implement this 
integration? Reply to this thread and let's collect a list of volunteers 
to form the initial core review team.


cheers,
Zane.


What is the Open Service Broker API?


The Open Service Broker API[1] is a standard way to expose external 
resources to applications running in a PaaS. It was originally developed 
in the context of CloudFoundry, but the same standard was adopted by 
Kubernetes (and hence OpenShift) in the form of the Service Catalog 
extension[2]. (The Service Catalog in Kubernetes is the component that 
calls out to a service broker.) So a single implementation can cover the 
most popular open-source PaaS offerings.


In many cases, the services take the form of simply a pre-packaged 
application that also runs inside the PaaS. But they don't have to be - 
services can be anything. Provisioning via the service broker ensures 
that the services requested are tied in to the PaaS's orchestration of 
the application's lifecycle.


(This is certainly not the be-all and end-all of integration between 
OpenStack and containers - we also need ways to tie PaaS-based 
applications into the OpenStack's orchestration of a larger group of 
resources. Some applications may even use both. But it's an important 
part of the story.)


What sorts of services would OpenStack expose?
----------------------------------------------

Some example use cases might be:

* The application needs a reliable message queue. Rather than spinning 
up multiple storage-backed containers with anti-affinity policies and 
dealing with the overhead of managing e.g. RabbitMQ, the application 
requests a Zaqar queue from an OpenStack cloud. The overhead of running 
the queueing service is amortised across all of the applications in the 
cloud. The queue gets cleaned up correctly when the application is 
removed, since it is tied into the application definition.


* The application needs a database. Rather than spinning one up in a 
storage-backed container and dealing with the overhead of managing it, 
the application requests a Trove DB from an OpenStack cloud.


* The application includes a service that needs to run on bare metal for 
performance reasons (e.g. could also be a database). The application 
requests a bare-metal server from Nova w/ Ironic for the purpose. (The 
same applies to requesting a VM, but there are alternatives like 
KubeVirt - which also operates through the Service Catalog - available 
for getting a VM in Kubernetes. There are no non-proprietary 
alternatives for getting a bare-metal server.)


AWS[3], Azure[4], and GCP[5] all have service brokers available that 
support these and many more services that they provide. I don't know of 
any reason in principle not to expose every type of resource that 
OpenStack provides via a service broker.


How is this different from cloud-provider-openstack?


The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself 
to access features of the cloud to provide its service. For example, if 
k8s needs persistent storage for a container then it can request that 
from Cinder through cloud-provider-openstack[7]. It can also request a 
load balancer from Octavia instead of having to start a container 
running HAProxy to load balance between multiple instances of an 
application container (thus enabling use of hardware load balancers via 
the cloud's abstraction for them).


In contrast, the Service Catalog interface allows the *application* 
running on Kubernetes to access features of the cloud.


What does a service broker look like?
-------------------------------------

A service broker provides an HTTP API with 5 actions:

* List the services provided by the broker
* Create an instance of a resource
* Bind the resource into an instance of the application
* Unbind the resource from an instance of the application
* Delete the resource

The binding step is used for things like providing a set of DB 
credentials to a container. You can rotate credentials when replacing a 
container by revoking the existing credentials on unbind and creating a 
new set on bind, without replacing the entire resource.
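
A rough sketch of how those five actions map onto HTTP endpoints (Flask is
used here purely for illustration, and the service/plan details are made up;
the exact request and response bodies are defined by the OSB API spec):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v2/catalog", methods=["GET"])
def catalog():
    # 1. List the services provided by the broker
    return jsonify({"services": [{
        "id": "example-zaqar-queue",
        "name": "zaqar-queue",
        "description": "A messaging queue from an OpenStack cloud",
        "bindable": True,
        "plans": [{"id": "default", "name": "default",
                   "description": "Single queue"}],
    }]})

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # 2. Create an instance of a resource
    #    (this is where the broker would call the OpenStack API)
    return jsonify({}), 201

@app.route("/v2/service_instances/<instance_id>"
           "/service_bindings/<binding_id>", methods=["PUT"])
def bind(instance_id, binding_id):
    # 3. Bind the resource into an instance of the application,
    #    returning credentials for it
    return jsonify({"credentials": {"queue_name": instance_id}}), 201

@app.route("/v2/service_instances/<instance_id>"
           "/service_bindings/<binding_id>", methods=["DELETE"])
def unbind(instance_id, binding_id):
    # 4. Unbind: revoke the credentials issued at bind time
    return jsonify({}), 200

@app.route("/v2/service_instances/<instance_id>", methods=["DELETE"])
def deprovision(instance_id):
    # 5. Delete the resource
    return jsonify({}), 200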


Is there an easier way?
-----------------------

Yes! Folks from OpenShift came up with a project called the Automation 
Broker[8]. To add 

[openstack-dev] [tc] proposing changes to the project-team-guide-core review team

2018-06-05 Thread Doug Hellmann
The review team [1] for the project-team-guide repository (managed
by the TC) hasn't been updated for a while. I would like to propose
removing a few reviewers who are no longer active, and adding one
new reviewer.

My understanding is that Kyle Mestery and Nikhil Komawar have both
moved on from OpenStack to other projects, so I propose that we
remove them from the core team.

Chris Dent has been active with reviews lately and has expressed
willingness to help manage the guide. I propose that we add him to
the team.

Please let me know what you think,
Doug

[1] https://review.openstack.org/#/admin/groups/953,members

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-05 Thread Fox, Kevin M
That might not be a good idea. That may just push the problem underground as 
people are afraid to speak up publicly.

Perhaps an anonymous poll kind of thing, so that it can be counted publicly but 
doesn't cause people to fear retaliation?

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, June 05, 2018 7:39 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc] Organizational diversity tag

Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400:
> Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +:
> > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > It feels like we would be saying that we don't trust 2 core reviewers
> > > from the same company to put the project's goals or priorities over
> > > their employer's.  And that doesn't feel like an assumption I would
> > > want us to encourage through a tag meant to show the health of the
> > > project.
> > [...]
> >
> > That's one way of putting it. On the other hand, if we ostensibly
> > have that sort of guideline (say, two core reviewers shouldn't be
> > the only ones to review a change submitted by someone else from
> > their same organization if the team is large and diverse enough to
> > support such a pattern) then it gives our reviewers a better
> > argument to push back on their management _if_ they're being
> > strongly urged to review/approve certain patches. At least then they
> > can say, "this really isn't going to fly because we have to get a
> > reviewer from another organization to agree it's in the best
> > interests of the project" rather than "fire me if you want but I'm
> > not approving that change, no matter how much your product launch is
> > going to be delayed."
>
> Do we have that problem? I honestly don't know how much pressure other
> folks are feeling. My impression is that we've mostly become good at
> finding the necessary compromises, but my experience doesn't cover all
> of our teams.

To all of the people who have replied to me privately that they have
experienced this problem:

We can't really start to address it until it's out here in the open.
Please post to the list.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] StarlingX project status update

2018-06-05 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-06-05 16:42:45 +0100:
> 
> On 30/05/18 21:23, Mohammed Naser wrote:
> > Hi everyone:
> > 
> > Over the past week in the summit, there was a lot of discussion
> > regarding StarlingX
> > and members of the technical committee had a few productive discussions 
> > regarding
> > the best approach to deal with a proposed new pilot project for
> > incubation in the OSF's Edge
> > Computing strategic focus area: StarlingX.
> > 
> > If you're not aware, StarlingX includes forks of some OpenStack
> > components and other open source software
> > which contain certain features that are specific to edge and
> > industrial IoT computing use cases.  The code
> > behind the project is from Wind River (and is used to build a product
> > called "Titanium
> > Cloud").
> > 
> > At the moment, the goal of StarlingX hosting their projects on the
> > community infrastructure
> > is to get the developers used to the Gerrit workflow.  The intention
> > is to eventually
> > work with upstream teams in order to bring the features and bug fixes which 
> > are
> > specific to the fork back upstream, with an ideal goal of bringing all
> > the differences
> > upstream.
> > 
> > We've discussed around all the different ways that we can approach
> > this and how to
> > help the StarlingX team be part of our community.  If we can
> > succesfully do this, it would
> > successfully do this, it would
> > be a big success for our community as well as our community gaining
> > contributors from
> > the Wind River team.  In an ideal world, it's a win-win.
> > 
> > The plan at the moment is the following:
> > - StarlingX will have the first import of code that is not forked,
> > simply other software that
> >   they've developed to help deliver their product.  This code can be
> > hosted with no problems.
> > - StarlingX will generate a list of patches to be brought upstream and
> > the StarlingX team
> >   will work together with upstream teams in order to start backporting
> > and upstreaming the
> >   codebase.  Emilien Macchi (EmilienM) and I have volunteered to take
> > on the responsibility of
> >   monitoring the progress upstreaming these patches.
> > - StarlingX contains a few forks of other non-OpenStack software. The
> > StarlingX team will work
> >   with the authors of the original projects to ensure that they do not
> > mind us hosting a fork
> >   of their software.  If they don't, we'll proceed to host those
> > projects. If they prefer
> >   something else (hosting it themselves, placing it on another hosting
> > service, etc.),
> >   the StarlingX team will work with them in that way.
> > 
> > We discussed approaches for cases where patches aren't acceptable
> > upstream, because they
> > diverge from the project mission or aren't comprehensive. Ideally all
> > of those could be turned
> > into acceptable changes that meet both team's criteria. In some cases,
> > adding plugin interfaces
> > or driver interfaces may be the best alternative. Only as a last
> > resort would we retain the
> > forks for a long period of time.
> 
> I honestly think that these forks should never be inside the foundation.
> If there is a big enough disagreement between project teams and the
> fork, we (as the TC of the OpenStack project) and the board (of
> *OpenStack* Foundation) should support our current teams, who have
> been working in the open.
> 
> There are plenty of companies who would have loved certain features in
> OpenStack over the years that an extra driver extension point would
> have enabled, but when the upstream team pushed back, they redesigned
> the feature to work with the community vision. We should not reward
> companies that didn't.

I can understand that point of view. I even somewhat agree. But
saying that we don't welcome contributions now, because they didn't
do things the right way when someone else was in charge of their
project, doesn't strike the right tone for me.

The conversations I've had with the folks involved with StarlingX
have convinced me they have learned, the hard way, the error of a
closed-source fork and they are trying to do better for the future.
The first step of that is to bring what they already have out into
the open, where it will be easier to figure out what can be introduced
into projects to eliminate the forks, what can be discarded, and
what will need to be worked around.

When all of this is done, a viable project with real users will be
open source instead of closed source. Those contributors, and users,
will be a part of our community instead of looking in from the
outside. The path is ugly, long, and clearly not ideal. But, I
consider the result a win, overall.


> 
> > From what was brought up, the team from Wind River is hoping to
> > on-board roughly 50 new full
> > time contributors.  In combination with the features that they've
> > built that we can hopefully
> > upstream, I am hopeful that we can come to a win-win situation for
> > everyone in this.
> > 
> > Regards,
> > 

Re: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Luke Hinds
On Tue, Jun 5, 2018 at 3:44 PM, Cédric Jeanneret 
wrote:

> Hello guys!
>
> I'm currently working on python-tripleoclient in order to squash the
> dreadful "NOPASSWD:ALL" allowed to the "stack" user.
>
> The start was an issue with the rights on some files being wrong (owned
> by root instead of stack, in stack home). After some digging and poking,
> it appears the undercloud deployment is called with a "sudo openstack
> tripleo deploy" command - this, of course, creates some major issues
> regarding both security and right management.
>
> I see a couple of ways to correct that bad situation:
> - leave the global "sudo" call, and play with setuid/setgid when we
> actually don't need root access (as mentioned in this comment¹)
>
> - drop that global sudo call, and replace all the necessary calls by
> some "sudo" when needed. This involves the replacement of native python
> code, like "os.mkdir" and the like.
>
> The first one isn't a solution - code maintenance will not be possible,
> having to think "darn, os.setuid() before calling that, because I don't
> need root" every time is not the way to go, and it just doesn't scale.
>
> So I started the second one. It's, of course, longer, not really nice,
> and painful, but at least this will end up in a good state, with a
> not-so-bad solution.
>
> This also fits with the current work of the Security Squad on "limiting
> sudo rights and accesses".
>
> For now I don't have a proper patch to show, but it will most probably
> appear shortly, as a Work In Progress (I don't think it will be
> mergeable before some time, due to all the constraints we have regarding
> version portability, new sudoer integration and so on).
>
> I'll post the relevant review link as an answer of this thread when I
> have something I can show.
>
> Cheers,
>
> C.
>
>
Hi Cédric,

Pleased to hear you are willing to take this on.

It makes sense we should co-ordinate efforts here as I have been looking at
the same item, but planned to start with heat-admin over on the overcloud.

Due to the complexity / level of coverage in the use of sudo, it makes
sense to have a spec where we can then get community consensus on the
approach selected. This is important as it looks like we will need to have
some sort of white list to maintain and make considerations around
functional test coverage in CI (in case someone writes something new
wrapped in sudo).

In regards to your suggested positions within Python code such as the
client, it's worth looking at oslo.privsep [1], where a decorator can be
used where you need to setuid.
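
A minimal sketch of what that decorator-based approach could look like (the
context name, config section, capabilities and helper function below are
made up for illustration, not actual tripleoclient code):

import os

from oslo_privsep import capabilities
from oslo_privsep import priv_context

# Calls decorated with @default.entrypoint are routed to a separate
# privileged privsep daemon; everything else keeps running unprivileged.
default = priv_context.PrivContext(
    "tripleoclient",
    cfg_section="tripleoclient_privileged",
    pypath=__name__ + ".default",
    capabilities=[capabilities.CAP_CHOWN, capabilities.CAP_DAC_OVERRIDE],
)


@default.entrypoint
def chown_deploy_artifact(path, uid, gid):
    """Fix ownership of a generated file; privileged only for this call."""
    os.chown(path, uid, gid)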

Let's also discuss this in the squad meeting tomorrow and try to align the
approach for all TripleO *nix accounts.

[1] https://github.com/openstack/oslo.privsep

Cheers,

Luke


> ¹
> https://github.com/openstack/python-tripleoclient/blob/
> master/tripleoclient/v1/tripleo_deploy.py#L827-L829
>
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] StarlingX project status update

2018-06-05 Thread Remo Mattei
I agree with Graham +1

Remo 

> On Jun 5, 2018, at 8:42 AM, Graham Hayes  wrote:
> 
> 
> 
> On 30/05/18 21:23, Mohammed Naser wrote:
>> Hi everyone:
>> 
>> Over the past week in the summit, there was a lot of discussion
>> regarding StarlingX
>> and members of the technical committee had a few productive discussions 
>> regarding
>> the best approach to deal with a proposed new pilot project for
>> incubation in the OSF's Edge
>> Computing strategic focus area: StarlingX.
>> 
>> If you're not aware, StarlingX includes forks of some OpenStack
>> components and other open source software
>> which contain certain features that are specific to edge and
>> industrial IoT computing use cases.  The code
>> behind the project is from Wind River (and is used to build a product
>> called "Titanium
>> Cloud").
>> 
>> At the moment, the goal of StarlingX hosting their projects on the
>> community infrastructure
>> is to get the developers used to the Gerrit workflow.  The intention
>> is to eventually
>> work with upstream teams in order to bring the features and bug fixes which 
>> are
>> specific to the fork back upstream, with an ideal goal of bringing all
>> the differences
>> upstream.
>> 
>> We've discussed around all the different ways that we can approach
>> this and how to
>> help the StarlingX team be part of our community.  If we can
>> successfully do this, it would
>> be a big success for our community as well as our community gaining
>> contributors from
>> the Wind River team.  In an ideal world, it's a win-win.
>> 
>> The plan at the moment is the following:
>> - StarlingX will have the first import of code that is not forked,
>> simply other software that
>>  they've developed to help deliver their product.  This code can be
>> hosted with no problems.
>> - StarlingX will generate a list of patches to be brought upstream and
>> the StarlingX team
>>  will work together with upstream teams in order to start backporting
>> and upstreaming the
>>  codebase.  Emilien Macchi (EmilienM) and I have volunteered to take
>> on the responsibility of
>>  monitoring the progress upstreaming these patches.
>> - StarlingX contains a few forks of other non-OpenStack software. The
>> StarlingX team will work
>>  with the authors of the original projects to ensure that they do not
>> mind us hosting a fork
>>  of their software.  If they don't, we'll proceed to host those
>> projects. If they prefer
>>  something else (hosting it themselves, placing it on another hosting
>> service, etc.),
>>  the StarlingX team will work with them in that way.
>> 
>> We discussed approaches for cases where patches aren't acceptable
>> upstream, because they
>> diverge from the project mission or aren't comprehensive. Ideally all
>> of those could be turned
>> into acceptable changes that meet both team's criteria. In some cases,
>> adding plugin interfaces
>> or driver interfaces may be the best alternative. Only as a last
>> resort would we retain the
>> forks for a long period of time.
> 
> I honestly think that these forks should never be inside the foundation.
> If there is a big enough disagreement between project teams and the
> fork, we (as the TC of the OpenStack project) and the board (of
> *OpenStack* Foundation) should support our current teams, who have
> been working in the open.
> 
> There are plenty of companies who would have loved certain features in
> OpenStack over the years that an extra driver extension point would
> have enabled, but when the upstream team pushed back, they redesigned
> the feature to work with the community vision. We should not reward
> companies that didn't.
> 
>> From what was brought up, the team from Wind River is hoping to
>> on-board roughly 50 new full
>> time contributors.  In combination with the features that they've
>> built that we can hopefully
>> upstream, I am hopeful that we can come to a win-win situation for
>> everyone in this.
>> 
>> Regards,
>> Mohammed
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] Forum Recap - Stein Release Goals

2018-06-05 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-06-04 18:46:15 -0500:
> On 6/4/2018 4:20 PM, Doug Hellmann wrote:
> > See my comments on the other part of the thread, but I think this is too
> > optimistic until we add a couple of people to the review team on OSC.
> > 
> > Others from the OSC team who have a better perspective on how much work
> > is actually left may have a different opinion though?
> 
> Yeah that is definitely something I was thinking about in Vancouver.
> 
> Would a more realistic goal be to decentralize the OSC code, like the 
> previous goal about how tempest plugins were done? Or similar to the 
> docs being decentralized? That would spread the review load onto the 
> projects that are actually writing CLIs for their resources - which they 
> are already doing in their per-project clients, e.g. python-novaclient 
> and python-cinderclient.
> 

In the past we've tried to avoid that because we wanted some
consistency in the UI design. I don't know if it's time to give up
on that and reconsider dividing the commands into multiple repos,
or just ask that people participate in building this tool for our
users. I don't think it would be any more complicated to do the
work in the OSC repo and gain some minimal experience that could
let folks become cores than it would be to do the same work in a
repo where they are already core. The plugin APIs are relatively
stable so it's basically the same code.
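
For reference, a command written against the osc-lib/cliff plugin API is
just a small class whichever repo it lives in; a sketch with made-up
resource and attribute names:

from osc_lib.command import command


class ShowWidget(command.ShowOne):
    """Show details for a single widget (illustrative resource)."""

    def get_parser(self, prog_name):
        parser = super(ShowWidget, self).get_parser(prog_name)
        parser.add_argument("widget", help="Name or ID of the widget")
        return parser

    def take_action(self, parsed_args):
        # A real command would call the service API client, typically via
        # self.app.client_manager; static data keeps the sketch self-contained.
        widget = {"id": "123", "name": parsed_args.widget, "status": "ACTIVE"}
        return zip(*sorted(widget.items()))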

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] StarlingX project status update

2018-06-05 Thread Graham Hayes


On 30/05/18 21:23, Mohammed Naser wrote:
> Hi everyone:
> 
> Over the past week in the summit, there was a lot of discussion
> regarding StarlingX
> and members of the technical committee had a few productive discussions 
> regarding
> the best approach to deal with a proposed new pilot project for
> incubation in the OSF's Edge
> Computing strategic focus area: StarlingX.
> 
> If you're not aware, StarlingX includes forks of some OpenStack
> components and other open source software
> which contain certain features that are specific to edge and
> industrial IoT computing use cases.  The code
> behind the project is from Wind River (and is used to build a product
> called "Titanium
> Cloud").
> 
> At the moment, the goal of StarlingX hosting their projects on the
> community infrastructure
> is to get the developers used to the Gerrit workflow.  The intention
> is to eventually
> work with upstream teams in order to bring the features and bug fixes which 
> are
> specific to the fork back upstream, with an ideal goal of bringing all
> the differences
> upstream.
> 
> We've discussed around all the different ways that we can approach
> this and how to
> help the StarlingX team be part of our community.  If we can
> successfully do this, it would
> be a big success for our community as well as our community gaining
> contributors from
> the Wind River team.  In an ideal world, it's a win-win.
> 
> The plan at the moment is the following:
> - StarlingX will have the first import of code that is not forked,
> simply other software that
>   they've developed to help deliver their product.  This code can be
> hosted with no problems.
> - StarlingX will generate a list of patches to be brought upstream and
> the StarlingX team
>   will work together with upstream teams in order to start backporting
> and upstreaming the
>   codebase.  Emilien Macchi (EmilienM) and I have volunteered to take
> on the responsibility of
>   monitoring the progress upstreaming these patches.
> - StarlingX contains a few forks of other non-OpenStack software. The
> StarlingX team will work
>   with the authors of the original projects to ensure that they do not
> mind us hosting a fork
>   of their software.  If they don't, we'll proceed to host those
> projects. If they prefer
>   something else (hosting it themselves, placing it on another hosting
> service, etc.),
>   the StarlingX team will work with them in that way.
> 
> We discussed approaches for cases where patches aren't acceptable
> upstream, because they
> diverge from the project mission or aren't comprehensive. Ideally all
> of those could be turned
> into acceptable changes that meet both team's criteria. In some cases,
> adding plugin interfaces
> or driver interfaces may be the best alternative. Only as a last
> resort would we retain the
> forks for a long period of time.

I honestly think that these forks should never be inside the foundation.
If there is a big enough disagreement between project teams and the
fork, we (as the TC of the OpenStack project) and the board (of
*OpenStack* Foundation) should support our current teams, who have
been working in the open.

There are plenty of companies who would have loved certain features in
OpenStack over the years that an extra driver extension point would
have enabled, but when the upstream team pushed back, they redesigned
the feature to work with the community vision. We should not reward
companies that didn't.

> From what was brought up, the team from Wind River is hoping to
> on-board roughly 50 new full
> time contributors.  In combination with the features that they've
> built that we can hopefully
> upstream, I am hopeful that we can come to a win-win situation for
> everyone in this.
> 
> Regards,
> Mohammed
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens

2018-06-05 Thread Doug Hellmann
Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400:
> Hi,
> 
> The telemetry integration test for oslo.messaging has started failing
> on the stable/queens branch [0].
> 
> A quick review of the logs points to a change in heat-tempest-plugin
> that is incompatible with the version of gabbi from queens upper
> constraints (1.40.0) [1][2].
> 
> The job definition [3] includes required-projects that do not have
> stable/queens branches - including heat-tempest-plugin.
> 
> My question - how do I prevent this job from breaking when these
> unbranched projects introduce changes that are incompatible with
> upper-constrants for a particular branch?

Aren't those projects co-gating on the oslo.messaging test job?

How are the tests working for heat's stable/queens branch? Or telemetry?
(whichever project is pulling in that tempest repo)

> 
> I've tried to use override-checkout in the job definition, but that
> seems a bit hacky in this case since the tagged versions don't appear
> to work and I've resorted to a hardcoded ref [4].
> 
> Advice appreciated, thanks!
> 
> [0] https://review.openstack.org/#/c/567124/
> [1] 
> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
> [2] 
> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
> [3] 
> https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250
> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-05 Thread Graham Hayes
On 05/06/18 16:22, Jay S Bryant wrote:
> 
> 
> On 6/5/2018 7:31 AM, Sean McGinnis wrote:
>> On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote:
>>> Jay Pipes wrote:
 On 06/04/2018 05:02 PM, Doug Hellmann wrote:
> [...]>
 Personally, I've very much enjoyed the separate PTGs because I've
 actually
 been able to get work done at them; something that was much harder
 when the
 design summits were part of the overall conference.
>>> Right, the trick is to try to preserve that productivity while making it
>>> easier to travel to... One way would be to make sure the PTG remains a
>>> separate event (separate days, separate venues, separate registration),
>>> just co-located in same city and week.
>>>
 [...]
> There are a few plans under consideration, and no firm decisions
> have been made, yet. We discussed a strawman proposal to combine
> the summit and PTG in April, in Denver, that would look much like
> our older Summit events (from the Folsom/Grizzly time frame) with
> a few days of conference and a few days of design summit, with some
> overlap in the middle of the week.  The dates, overlap, and
> arrangements will depend on venue availability.
 Has the option of doing a single conference a year been addressed?
 Seems
 to me that we (the collective we) could save a lot of money not having
 to put on multiple giant events per year and instead have one.
>>> Yes, the same strawman proposal included the idea of leveraging an
>>> existing international "OpenStack Day" event and raising its profile
>>> rather than organizing a full second summit every year. The second PTG
>>> of the year could then be kept as a separate event, or put next to that
>>> "upgraded" OpenStack Day.
>>>
>> I actually really like this idea. As things slow down, there just
>> aren't enough
>> of big splashy new things to announce every 6 months. I think it could
>> work
>> well to have one Summit a year, while using the OSD events as a way to
>> reach
>> those folks that can't make it to the one big event for the year due
>> to timing
>> or location.
>>
>> It could help concentrate efforts to have bigger goals ready by the
>> Summit and
>> keep things on a better cadence. And if we can still do a PTG-like
>> event along
>> side one of the OSD events, it would allow development to still get the
>> valuable face-to-face time that we've come to expect.
> I think going to one large summit event a year could be good with a
> co-located PTG type event.  I think, however, that we would still need
> to have a Separate PTG type event at some other point in the year.  I
> think it is going to be hard to keep development momentum going in the
> projects without a couple of face-to-face meetings a year.
> 

I personally think a single summit (with a PTG / Ops Mid Cycle before or
after) + a separate PTG / Ops Mid Cycle would be the best solution.

We don't need to rotate locations - while my airmiles balance has really
appreciated places like Tokyo / Sydney - we can just reuse locations.

For example saying that the week of May 20 something every year in
Vancouver[1] (or $NORTH_AMERICAN_CITY) is the OpenStack Summit + Kata +
OpenDev + Edge conference massively reduces planning / scouting.

Then having the PTG (or OpenStack Foundation Developer & Ops Team
Gathering, to make any new groups feel welcome and not "tacked on")
in Denver + $NON_NORTH_AMERICAN_CITY[2] (again, as static locations to
reduce planning / scouting) seems like a good idea.

1 - But let's just say Vancouver please :)

2 - not sure if this is actually a good idea, but I don't have the same
    visa problems that some of our contributors do, or have knowledge
    of how tax write-offs work, so if I am wrong please tell me.

>>> Thinking on this is still very much work in progress.
>>>
>>> -- 
>>> Thierry Carrez (ttx)
>>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-05 Thread Jay S Bryant



On 6/5/2018 7:31 AM, Sean McGinnis wrote:

On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote:

Jay Pipes wrote:

On 06/04/2018 05:02 PM, Doug Hellmann wrote:

[...]>

Personally, I've very much enjoyed the separate PTGs because I've actually
been able to get work done at them; something that was much harder when the
design summits were part of the overall conference.

Right, the trick is to try to preserve that productivity while making it
easier to travel to... One way would be to make sure the PTG remains a
separate event (separate days, separate venues, separate registration),
just co-located in same city and week.


[...]

There are a few plans under consideration, and no firm decisions
have been made, yet. We discussed a strawman proposal to combine
the summit and PTG in April, in Denver, that would look much like
our older Summit events (from the Folsom/Grizzly time frame) with
a few days of conference and a few days of design summit, with some
overlap in the middle of the week.  The dates, overlap, and
arrangements will depend on venue availability.

Has the option of doing a single conference a year been addressed? Seems
to me that we (the collective we) could save a lot of money not having
to put on multiple giant events per year and instead have one.

Yes, the same strawman proposal included the idea of leveraging an
existing international "OpenStack Day" event and raising its profile
rather than organizing a full second summit every year. The second PTG
of the year could then be kept as a separate event, or put next to that
"upgraded" OpenStack Day.


I actually really like this idea. As things slow down, there just aren't enough
of big splashy new things to announce every 6 months. I think it could work
well to have one Summit a year, while using the OSD events as a way to reach
those folks that can't make it to the one big event for the year due to timing
or location.

It could help concentrate efforts to have bigger goals ready by the Summit and
keep things on a better cadence. And if we can still do a PTG-like event along
side one of the OSD events, it would allow development to still get the
valuable face-to-face time that we've come to expect.
I think going to one large summit event a year could be good with a
co-located PTG-type event.  I think, however, that we would still need
to have a separate PTG-type event at some other point in the year.  I
think it is going to be hard to keep development momentum going in the
projects without a couple of face-to-face meetings a year.



Thinking on this is still very much work in progress.

--
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Eric Fried
Alex-

Allocations for an instance are pulled down by the compute manager and
passed into the virt driver's spawn method since [1].  An allocation
comprises a consumer, provider, resource class, and amount.  Once we can
schedule to trees, the allocations pulled down by the compute manager
will span the tree as appropriate.  So in that sense, yes, nova-compute
knows which amounts of which resource classes come from which providers.
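
For illustration, the allocations blob handed down to the virt driver is
roughly of this shape (the UUIDs, classes and amounts below are made up;
this is a sketch of the structure, not output from a real deployment):

    allocations = {
        'b6b065cc-...(root RP uuid)': {      # compute node provider
            'resources': {'VCPU': 2, 'MEMORY_MB': 4096},
        },
        'f1b26e88-...(child RP uuid)': {     # e.g. a PF child provider
            'resources': {'SRIOV_NET_VF': 1},
        },
    }

Each entry ties an amount of a resource class to a provider UUID.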

However, if you're asking about the situation where we have two
different allocations of the same resource class coming from two
separate providers: Yes, we can still tell which RCxAMOUNT is associated
with which provider; but No, we still have no inherent way to correlate
a specific one of those allocations with the part of the *request* it
came from.  If just the provider UUID isn't enough for the virt driver
to figure out what to do, it may have to figure it out by looking at the
flavor (and/or image metadata), inspecting the traits on the providers
associated with the allocations, etc.  (The theory here is that, if the
virt driver can't tell the difference at that point, then it actually
doesn't matter.)

[1] https://review.openstack.org/#/c/511879/

On 06/05/2018 09:05 AM, Alex Xu wrote:
> Maybe I missed something. Is there anyway the nova-compute can know the
> resources are allocated from which child resource provider? For example,
> the host has two PFs. The request is asking one VF, then the
> nova-compute needs to know the VF is allocated from which PF (resource
> provider). As my understand, currently we only return a list of
> alternative resource provider to the nova-compute, those alternative is
> root resource provider.
> 
> 2018-06-05 21:29 GMT+08:00 Jay Pipes  >:
> 
> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
> 
> I thought nested resource providers were already supported by
> placement? To the best of my knowledge, what is /not/ supported
> is virt drivers using these to report NUMA topologies but I
> doubt that affects you. The placement guys will need to weigh in
> on this as I could be missing something but it sounds like you
> can start using this functionality right now.
> 
> 
> To be clear, this is what placement and nova *currently* support
> with regards to nested resource providers:
> 
> 1) When creating a resource provider in placement, you can specify a
> parent_provider_uuid and thus create trees of providers. This was
> placement API microversion 1.14. Also included in this microversion
> was support for displaying the parent and root provider UUID for
> resource providers.
> 
> 2) The nova "scheduler report client" (terrible name, it's mostly
> just the placement client at this point) understands how to call
> placement API 1.14 and create resource providers with a parent provider.
> 
> 3) The nova scheduler report client uses a ProviderTree object [1]
> to cache information about the hierarchy of providers that it knows
> about. For nova-compute workers managing hypervisors, that means the
> ProviderTree object contained in the report client is rooted in a
> resource provider that represents the compute node itself (the
> hypervisor). For nova-compute workers managing baremetal, that means
> the ProviderTree object contains many root providers, each
> representing an Ironic baremetal node.
> 
> 4) The placement API's GET /allocation_candidates endpoint now
> understands the concept of granular request groups [2]. Granular
> request groups are only relevant when a user wants to specify that
> child providers in a provider tree should be used to satisfy part of
> an overall scheduling request. However, this support is yet
> incomplete -- see #5 below.
> 
> The following parts of the nested resource providers modeling are
> *NOT* yet complete, however:
> 
> 5) GET /allocation_candidates does not currently return *results*
> when granular request groups are specified. So, while the placement
> service understands the *request* for granular groups, it doesn't
> yet have the ability to constrain the returned candidates
> appropriately. Tetsuro is actively working on this functionality in
> this patch series:
> 
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
> 
> 
> 
> 6) The virt drivers need to implement the update_provider_tree()
> interface [3] and construct the tree of resource providers along
> with appropriate inventory records for each child provider in the
> tree. Both libvirt and XenAPI virt drivers have patch series up that
> begin to take advantage of the 

[openstack-dev] [FEMDC] meetings suspended until further notice

2018-06-05 Thread lebre . adrien
Dear all,

Following the exchanges we had during the Vancouver summit, and in particular
the nonsense of maintaining/animating two groups targeting similar challenges
(i.e., the FEMDC SIG [1] and the new Edge Computing Working Group [2]), FEMDC
meetings are suspended until further notice.

If you are interested in Edge Computing discussions, please see the information
available on the new edge wiki page [2].

Thanks
ad_ri3n_

[1]https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
[2]https://wiki.openstack.org/wiki/Edge_Computing_Group

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens

2018-06-05 Thread Ken Giusti
Hi,

The telemetry integration test for oslo.messaging has started failing
on the stable/queens branch [0].

A quick review of the logs points to a change in heat-tempest-plugin
that is incompatible with the version of gabbi from queens upper
constraints (1.40.0) [1][2].

The job definition [3] includes required-projects that do not have
stable/queens branches - including heat-tempest-plugin.

My question - how do I prevent this job from breaking when these
unbranched projects introduce changes that are incompatible with
upper-constraints for a particular branch?

I've tried to use override-checkout in the job definition, but that
seems a bit hacky in this case since the tagged versions don't appear
to work and I've resorted to a hardcoded ref [4].

Advice appreciated, thanks!

[0] https://review.openstack.org/#/c/567124/
[1] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
[2] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
[3] 
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250
[4] https://review.openstack.org/#/c/572193/2/.zuul.yaml
-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud

2018-06-05 Thread Cédric Jeanneret
Hello guys!

I'm currently working on python-tripleoclient in order to squash the
dreadful "NOPASSWD:ALL" granted to the "stack" user.

The start was an issue with the rights on some files being wrong (owned
by root instead of stack, in stack's home). After some digging and poking,
it appears the undercloud deployment is called with a "sudo openstack
tripleo deploy" command - this, of course, creates some major issues
regarding both security and rights management.

I see a couple of ways to correct that bad situation:
- keep the global "sudo" call, and play with setuid/setgid when we
actually don't need root access (as mentioned in this comment¹)

- drop that global sudo call, and replace all the necessary calls with
explicit "sudo" calls when needed. This involves replacing native python
code, like "os.mkdir" and the like.

The first one isn't a solution - code maintenance will not be possible:
having to think "darn, os.setuid() before calling that, because I don't
need root" before every call just doesn't apply.

So I started the second one. It's, of course, longer, not really nice
and rather painful, but at least it will end in a good state and a
not-so-bad solution.
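
To give an idea of what that looks like in practice, here is a minimal
sketch (function and path handling are purely illustrative, not the
actual tripleoclient code):

    import os
    import subprocess

    def ensure_dir(path, needs_root=False):
        # Only escalate for the few operations that really need root;
        # everything else stays a plain library call running as "stack".
        if needs_root:
            subprocess.check_call(['sudo', 'mkdir', '-p', path])
            subprocess.check_call(['sudo', 'chown', 'stack:stack', path])
        elif not os.path.isdir(path):
            os.makedirs(path)

The sudo calls would then be matched by a dedicated, restricted sudoers
entry rather than NOPASSWD:ALL.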

This also aligns with the current work of the Security Squad on "limiting
sudo rights and accesses".

For now I don't have a proper patch to show, but it will most probably
appear shortly, as a Work In Progress (I don't think it will be
mergeable for some time, due to all the constraints we have regarding
version portability, new sudoer integration and so on).

I'll post the relevant review link as a reply to this thread when I
have something I can show.

Cheers,

C.


¹
https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/tripleo_deploy.py#L827-L829


-- 
Cédric Jeanneret
Software Engineer
DFG:DF



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-05 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400:
> Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +:
> > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > It feels like we would be saying that we don't trust 2 core reviewers
> > > from the same company to put the project's goals or priorities over
> > > their employer's.  And that doesn't feel like an assumption I would
> > > want us to encourage through a tag meant to show the health of the
> > > project.
> > [...]
> > 
> > That's one way of putting it. On the other hand, if we ostensibly
> > have that sort of guideline (say, two core reviewers shouldn't be
> > the only ones to review a change submitted by someone else from
> > their same organization if the team is large and diverse enough to
> > support such a pattern) then it gives our reviewers a better
> > argument to push back on their management _if_ they're being
> > strongly urged to review/approve certain patches. At least then they
> > can say, "this really isn't going to fly because we have to get a
> > reviewer from another organization to agree it's in the best
> > interests of the project" rather than "fire me if you want but I'm
> > not approving that change, no matter how much your product launch is
> > going to be delayed."
> 
> Do we have that problem? I honestly don't know how much pressure other
> folks are feeling. My impression is that we've mostly become good at
> finding the necessary compromises, but my experience doesn't cover all
> of our teams.

To all of the people who have replied to me privately that they have
experienced this problem:

We can't really start to address it until it's out here in the open.
Please post to the list.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unsubscribe

2018-06-05 Thread Sean McGinnis

Hey Henry, see footer on every message for how to unsubscribe.


On 06/05/2018 09:09 AM, Henry Nash wrote:






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-05 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-06-05 07:26:27 -0500:
> On Mon, Jun 04, 2018 at 06:44:15PM -0500, Matt Riedemann wrote:
> > On 6/4/2018 5:13 PM, Sean McGinnis wrote:
> > > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would
> > > check here. We don't have to see if placement has been set up or if cell0
> > > has been configured. Maybe once we have the facility in place we would
> > > find some things worth checking, but at present I don't know what that
> > > would be.
> > 
> > Here is an example from the Cinder Queens upgrade release notes:
> > 
> > "RBD/Ceph backends should adjust max_over_subscription_ratio to take into
> > account that the driver is no longer reporting volume’s physical usage but
> > it’s provisioned size."
> > 
> > I'm assuming you could check if rbd is configured as a storage backend and
> > if so, is max_over_subscription_ratio set? If not, is it fatal? Does the
> > operator need to configure it before upgrading to Rocky? Or is it something
> > they should consider but don't necessary have to do - if that, there is a
> > 'WARNING' status for those types of things.
> > 
> > Things that are good candidates for automating are anything that would stop
> > the cinder-volume service from starting, or things that require data
> > migrations before you can roll forward. In nova we've had blocking DB schema
> > migrations for stuff like this which basically mean "you haven't run the
> > online data migrations CLI yet so we're not letting you go any further until
> > your homework is done".
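
To illustrate what such a check could look like, here is a hypothetical
sketch (not the actual nova-status/cinder code; the config helpers and
option names are made up):

    def check_rbd_over_subscription(conf):
        # Non-fatal: flag RBD backends that still rely on the old implicit
        # over-subscription reporting before the upgrade proceeds.
        for backend in conf.enabled_backends:
            if conf.driver_for(backend) == 'rbd' and \
                    not conf.is_set(backend, 'max_over_subscription_ratio'):
                return ('WARNING',
                        'RBD backend "%s" has no explicit '
                        'max_over_subscription_ratio; review it before '
                        'upgrading.' % backend)
        return ('SUCCESS', '')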
> > 
> 
> Thanks, I suppose we probably could find some things to at least WARN on. 
> Maybe
> that would be useful.
> 
> I suppose as far as a series goal goes, even if each project doesn't come up
> with a comprehensive set of checks, this would be a known thing deployers 
> could
> use and potentially build some additional tooling around. This could be a good
> win for the overall ease of upgrade story.
> 
> > Like I said, it's not black and white, but chances are good there are things
> > that fall into these categories.

In the past when we've had questions about how broadly a goal is going
to affect projects, we did a little data collection work up front. Maybe
that's the next step for this one? Does someone want to volunteer to go
around and talk to some of the project teams that are likely candidates
for these sorts of upgrade blockers to start making lists?

Doug

> > 
> > -- 
> > 
> > Thanks,
> > 
> > Matt
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-05 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2018-06-04 18:47:22 -0400:
> On 06/04/2018 05:02 PM, Doug Hellmann wrote:
> > The most significant point of interest to the contributor
> > community from this section of the meeting was the apparently
> > overwhelming interest from companies employing contributors, as
> > well as 2/3 of the contributors to recent releases who responded
> > to the survey, to bring the PTG and summit back together as a single
> > event. This had come up at the meeting in Dublin as well, but in
> > the time since then the discussions progressed and it looks much
> > more likely that we will at least try re-combining the two events.
> 
> OK, so will we return to having eleventy billion different mid-cycle 
> events for each project?

Given that the main objections seem to be funding the travel (not the
events themselves), or participants not *wanting* to go to that many
events, I suspect not.

> Personally, I've very much enjoyed the separate PTGs because I've 
> actually been able to get work done at them; something that was much 
> harder when the design summits were part of the overall conference.

Yes, me, too. I find it useful to separate the discussions focused
on internal team planning as opposed to setting priorities for the
community more broadly.  If we recombine the events I hope we can
find a way to retain both types of conversations.

> In fact I haven't gone to the last two summit events because of what I 
> perceive to be a continued trend of the summits being focused on 
> marketing, buzzwords and vendor pitches/sales. An extra spoonful of the 
> "edge", anyone?

I've found the Forums significantly more useful the last 2 times. We
definitely felt your absence in a few sessions.

I don't think I've attended a regular talk at a summit in years.
Maybe if we're going to combine the events again we can get a track
set aside for contributor-focused talks, though. Not onboarding,
or how-to-use-it talks, but deep-dives into how things like the new
placement system was designed or how to build a driver for oslo.config,
or whatever. The sort of thing you would expect to find at a
tech-focused conference with contributors attending.

> > We discussed several reasons, including travel expense, travel visa
> > difficulties, time away from home and family, and sponsorship of
> > the events themselves.
> > 
> > There are a few plans under consideration, and no firm decisions
> > have been made, yet. We discussed a strawman proposal to combine
> > the summit and PTG in April, in Denver, that would look much like
> > our older Summit events (from the Folsom/Grizzly time frame) with
> > a few days of conference and a few days of design summit, with some
> > overlap in the middle of the week.  The dates, overlap, and
> > arrangements will depend on venue availability.
> 
> Has the option of doing a single conference a year been addressed? Seems 
> to me that we (the collective we) could save a lot of money not having 
> to put on multiple giant events per year and instead have one.
>
> Just my two cents, but the OpenStack and Linux foundations seem to be 
> pumping out new "open events" at a pretty regular clip -- OpenStack 
> Summit, OpenDev, Open Networking Summit, OpenStack Days, OpenInfra Days, 
> OpenNFV summit, the list keeps growing... at some point, do we think 
> that the industry as a whole is just going to get event overload?

Thierry addressed this a bit, but I want to emphasize that the cost
savings were less focused on the event itself or foundation costs
and more on the travel expenses for all of the people going to the
event.  So, yes, having fewer events (or focusing more on regional
events) would help with that, some. It's not clear to me how much
of a critical mass of contributors we would get at regional events,
though, unless we planned for it explicitly.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unsubscribe

2018-06-05 Thread Henry Nash


> On 5 Jun 2018, at 14:56, Eric Fried  wrote:
> 
> To summarize: cyborg could model things nested-wise, but there would be
> no way to schedule them yet.
> 
> Couple of clarifications inline.
> 
> On 06/05/2018 08:29 AM, Jay Pipes wrote:
>> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>>> I thought nested resource providers were already supported by
>>> placement? To the best of my knowledge, what is /not/ supported is
>>> virt drivers using these to report NUMA topologies but I doubt that
>>> affects you. The placement guys will need to weigh in on this as I
>>> could be missing something but it sounds like you can start using this
>>> functionality right now.
>> 
>> To be clear, this is what placement and nova *currently* support with
>> regards to nested resource providers:
>> 
>> 1) When creating a resource provider in placement, you can specify a
>> parent_provider_uuid and thus create trees of providers. This was
>> placement API microversion 1.14. Also included in this microversion was
>> support for displaying the parent and root provider UUID for resource
>> providers.
>> 
>> 2) The nova "scheduler report client" (terrible name, it's mostly just
>> the placement client at this point) understands how to call placement
>> API 1.14 and create resource providers with a parent provider.
>> 
>> 3) The nova scheduler report client uses a ProviderTree object [1] to
>> cache information about the hierarchy of providers that it knows about.
>> For nova-compute workers managing hypervisors, that means the
>> ProviderTree object contained in the report client is rooted in a
>> resource provider that represents the compute node itself (the
>> hypervisor). For nova-compute workers managing baremetal, that means the
>> ProviderTree object contains many root providers, each representing an
>> Ironic baremetal node.
>> 
>> 4) The placement API's GET /allocation_candidates endpoint now
>> understands the concept of granular request groups [2]. Granular request
>> groups are only relevant when a user wants to specify that child
>> providers in a provider tree should be used to satisfy part of an
>> overall scheduling request. However, this support is yet incomplete --
>> see #5 below.
> 
> Granular request groups are also usable/useful when sharing providers
> are in play. That functionality is complete on both the placement side
> and the report client side (see below).
> 
>> The following parts of the nested resource providers modeling are *NOT*
>> yet complete, however:
>> 
>> 5) GET /allocation_candidates does not currently return *results* when
>> granular request groups are specified. So, while the placement service
>> understands the *request* for granular groups, it doesn't yet have the
>> ability to constrain the returned candidates appropriately. Tetsuro is
>> actively working on this functionality in this patch series:
>> 
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
>> 
>> 
>> 6) The virt drivers need to implement the update_provider_tree()
>> interface [3] and construct the tree of resource providers along with
>> appropriate inventory records for each child provider in the tree. Both
>> libvirt and XenAPI virt drivers have patch series up that begin to take
>> advantage of the nested provider modeling. However, a number of concerns
>> [4] about in-place nova-compute upgrades when moving from a single
>> resource provider to a nested provider tree model were raised, and we
>> have begun brainstorming how to handle the migration of existing data in
>> the single-provider model to the nested provider model. [5] We are
>> blocking any reviews on patch series that modify the local provider
>> modeling until these migration concerns are fully resolved.
>> 
>> 7) The scheduler does not currently pass granular request groups to
>> placement.
> 
> The code is in place to do this [6] - so the scheduler *will* pass
> granular request groups to placement if your flavor specifies them.  As
> noted above, such flavors will be limited to exploiting sharing
> providers until Tetsuro's series merges.  But no further code work is
> required on the scheduler side.
> 
> [6] https://review.openstack.org/#/c/515811/
> 
>> Once #5 and #6 are resolved, and once the migration/upgrade
>> path is resolved, clearly we will need to have the scheduler start
>> making requests to placement that represent the granular request groups
>> and have the scheduler pass the resulting allocation candidates to its
>> filters and weighers.
>> 
>> Hope this helps highlight where we currently are and the work still left
>> to do (in Rocky) on nested resource providers.
>> 
>> Best,
>> -jay
>> 
>> 
>> [1]
>> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
>> 
>> [2]
>> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
>> 
>> 
>> [3]
>> 

Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Alex Xu
Maybe I missed something. Is there any way for the nova-compute to know
which child resource provider the resources are allocated from? For example,
the host has two PFs and the request asks for one VF; the nova-compute then
needs to know which PF (resource provider) the VF is allocated from. As I
understand it, currently we only return a list of alternative resource
providers to the nova-compute, and those alternatives are root resource providers.

2018-06-05 21:29 GMT+08:00 Jay Pipes :

> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>
>> I thought nested resource providers were already supported by placement?
>> To the best of my knowledge, what is /not/ supported is virt drivers using
>> these to report NUMA topologies but I doubt that affects you. The placement
>> guys will need to weigh in on this as I could be missing something but it
>> sounds like you can start using this functionality right now.
>>
>
> To be clear, this is what placement and nova *currently* support with
> regards to nested resource providers:
>
> 1) When creating a resource provider in placement, you can specify a
> parent_provider_uuid and thus create trees of providers. This was placement
> API microversion 1.14. Also included in this microversion was support for
> displaying the parent and root provider UUID for resource providers.
>
> 2) The nova "scheduler report client" (terrible name, it's mostly just the
> placement client at this point) understands how to call placement API 1.14
> and create resource providers with a parent provider.
>
> 3) The nova scheduler report client uses a ProviderTree object [1] to
> cache information about the hierarchy of providers that it knows about. For
> nova-compute workers managing hypervisors, that means the ProviderTree
> object contained in the report client is rooted in a resource provider that
> represents the compute node itself (the hypervisor). For nova-compute
> workers managing baremetal, that means the ProviderTree object contains
> many root providers, each representing an Ironic baremetal node.
>
> 4) The placement API's GET /allocation_candidates endpoint now understands
> the concept of granular request groups [2]. Granular request groups are
> only relevant when a user wants to specify that child providers in a
> provider tree should be used to satisfy part of an overall scheduling
> request. However, this support is yet incomplete -- see #5 below.
>
> The following parts of the nested resource providers modeling are *NOT*
> yet complete, however:
>
> 5) GET /allocation_candidates does not currently return *results* when
> granular request groups are specified. So, while the placement service
> understands the *request* for granular groups, it doesn't yet have the
> ability to constrain the returned candidates appropriately. Tetsuro is
> actively working on this functionality in this patch series:
>
> https://review.openstack.org/#/q/status:open+project:opensta
> ck/nova+branch:master+topic:bp/nested-resource-providers-
> allocation-candidates
>
> 6) The virt drivers need to implement the update_provider_tree() interface
> [3] and construct the tree of resource providers along with appropriate
> inventory records for each child provider in the tree. Both libvirt and
> XenAPI virt drivers have patch series up that begin to take advantage of
> the nested provider modeling. However, a number of concerns [4] about
> in-place nova-compute upgrades when moving from a single resource provider
> to a nested provider tree model were raised, and we have begun
> brainstorming how to handle the migration of existing data in the
> single-provider model to the nested provider model. [5] We are blocking any
> reviews on patch series that modify the local provider modeling until these
> migration concerns are fully resolved.
>
> 7) The scheduler does not currently pass granular request groups to
> placement. Once #5 and #6 are resolved, and once the migration/upgrade path
> is resolved, clearly we will need to have the scheduler start making
> requests to placement that represent the granular request groups and have
> the scheduler pass the resulting allocation candidates to its filters and
> weighers.
>
> Hope this helps highlight where we currently are and the work still left
> to do (in Rocky) on nested resource providers.
>
> Best,
> -jay
>
>
> [1] https://github.com/openstack/nova/blob/master/nova/compute/p
> rovider_tree.py
>
> [2] https://specs.openstack.org/openstack/nova-specs/specs/queen
> s/approved/granular-resource-requests.html
>
> [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a
> 7aca73d185775d43df2/nova/virt/driver.py#L833
>
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/
> 130783.html
>
> [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose

2018-06-05 Thread Akihiro Motoki
This is an important step to drop nose and nosehtmloutput :)
We plan to switch the test runner and then re-enable integration tests
(with selenium) for cross project testing.

In addition, we, the horizon team, are trying to minimize gate breakage in
horizon plugins for recent changes (this and Django 2.0).
Hopefully the pending related patches will land soon.


2018年6月5日(火) 22:52 Doug Hellmann :

> Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300:
> > Hi team,
> >
> > In Horizon, we're going to get rid of unsupported Nose and use Django
> Test
> > Runner instead of it [1]. Nose has some issues and limitations which
> blocks
> > us in our testing improvement efforts.
> >
> > Nose has different test discovery mechanism than Django does. So, there
> was
> > a chance to break some Horizon Plugins:(. Unfortunately, we haven't
> > cross-project CI yet (TBH, I'm working on it and it's one of the first
> > steps to get it done), that's why I tested this change [2] against all
> > known plugins [3].
> >
> > Most of the projects don't need any changes. I proposed few changed to
> > plugins repositories [4] and most of them are merged already. Thanks a
> lot
> > to everybody who helped me with it. Patches for heat-dashboard [5] and
> > searchlight-ui [6] are under review.
> >
> > Additional efforts are needed for murano-dashboard, sahara-dashboard, and
> > watcher-dashboard projects. murano-dashboard has Nose test runner enabled
> > in the config, so Horizon change won't affect it.
> >
> > I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to
> > explicitly enable Nose test runner there until we'll fix all related
> > issues. I hope we'll have a good number of cross-project activities with
> > these teams.
> >
> > Once all patches above will be merged, we'll be ready to the next step to
> > make Horizon and plugins CI better.
> >
> >
> > [1] https://review.openstack.org/#/c/544296/
> > [2]
> >
> https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing
> > [3]
> https://docs.openstack.org/horizon/latest/install/plugin-registry.html
> > [4]
> >
> https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged)
> > [5] https://review.openstack.org/572095
> > [6] https://review.openstack.org/572124
> > [7] https://review.openstack.org/572390
> > [8] https://review.openstack.org/572391
> >
> >
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
>
> Nice work! Thanks for taking the initiative on updating our tooling.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Eric Fried
To summarize: cyborg could model things nested-wise, but there would be
no way to schedule them yet.

Couple of clarifications inline.

On 06/05/2018 08:29 AM, Jay Pipes wrote:
> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>> I thought nested resource providers were already supported by
>> placement? To the best of my knowledge, what is /not/ supported is
>> virt drivers using these to report NUMA topologies but I doubt that
>> affects you. The placement guys will need to weigh in on this as I
>> could be missing something but it sounds like you can start using this
>> functionality right now.
> 
> To be clear, this is what placement and nova *currently* support with
> regards to nested resource providers:
> 
> 1) When creating a resource provider in placement, you can specify a
> parent_provider_uuid and thus create trees of providers. This was
> placement API microversion 1.14. Also included in this microversion was
> support for displaying the parent and root provider UUID for resource
> providers.
> 
> 2) The nova "scheduler report client" (terrible name, it's mostly just
> the placement client at this point) understands how to call placement
> API 1.14 and create resource providers with a parent provider.
> 
> 3) The nova scheduler report client uses a ProviderTree object [1] to
> cache information about the hierarchy of providers that it knows about.
> For nova-compute workers managing hypervisors, that means the
> ProviderTree object contained in the report client is rooted in a
> resource provider that represents the compute node itself (the
> hypervisor). For nova-compute workers managing baremetal, that means the
> ProviderTree object contains many root providers, each representing an
> Ironic baremetal node.
> 
> 4) The placement API's GET /allocation_candidates endpoint now
> understands the concept of granular request groups [2]. Granular request
> groups are only relevant when a user wants to specify that child
> providers in a provider tree should be used to satisfy part of an
> overall scheduling request. However, this support is yet incomplete --
> see #5 below.

Granular request groups are also usable/useful when sharing providers
are in play. That functionality is complete on both the placement side
and the report client side (see below).

> The following parts of the nested resource providers modeling are *NOT*
> yet complete, however:
> 
> 5) GET /allocation_candidates does not currently return *results* when
> granular request groups are specified. So, while the placement service
> understands the *request* for granular groups, it doesn't yet have the
> ability to constrain the returned candidates appropriately. Tetsuro is
> actively working on this functionality in this patch series:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
> 
> 
> 6) The virt drivers need to implement the update_provider_tree()
> interface [3] and construct the tree of resource providers along with
> appropriate inventory records for each child provider in the tree. Both
> libvirt and XenAPI virt drivers have patch series up that begin to take
> advantage of the nested provider modeling. However, a number of concerns
> [4] about in-place nova-compute upgrades when moving from a single
> resource provider to a nested provider tree model were raised, and we
> have begun brainstorming how to handle the migration of existing data in
> the single-provider model to the nested provider model. [5] We are
> blocking any reviews on patch series that modify the local provider
> modeling until these migration concerns are fully resolved.
> 
> 7) The scheduler does not currently pass granular request groups to
> placement.

The code is in place to do this [6] - so the scheduler *will* pass
granular request groups to placement if your flavor specifies them.  As
noted above, such flavors will be limited to exploiting sharing
providers until Tetsuro's series merges.  But no further code work is
required on the scheduler side.

[6] https://review.openstack.org/#/c/515811/
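
For reference, a flavor opting into granular groups would carry numbered
extra specs along these lines (the resource class and trait names here are
made-up examples, following the granular-resource-requests spec referenced
as [2] in the quoted text):

    extra_specs = {
        'resources1:SRIOV_NET_VF': '1',
        'trait1:CUSTOM_PHYSNET_PUBLIC': 'required',
        'resources2:CUSTOM_FPGA': '1',
        'trait2:CUSTOM_FPGA_INTEL': 'required',
    }

Each numbered suffix defines one request group that placement will try to
satisfy from a single provider.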

> Once #5 and #6 are resolved, and once the migration/upgrade
> path is resolved, clearly we will need to have the scheduler start
> making requests to placement that represent the granular request groups
> and have the scheduler pass the resulting allocation candidates to its
> filters and weighers.
> 
> Hope this helps highlight where we currently are and the work still left
> to do (in Rocky) on nested resource providers.
> 
> Best,
> -jay
> 
> 
> [1]
> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
> 
> [2]
> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
> 
> 
> [3]
> https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
> 
> 
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
> 
> [5] 

Re: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose

2018-06-05 Thread Doug Hellmann
Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300:
> Hi team,
> 
> In Horizon, we're going to get rid of unsupported Nose and use Django Test
> Runner instead of it [1]. Nose has some issues and limitations which blocks
> us in our testing improvement efforts.
> 
> Nose has different test discovery mechanism than Django does. So, there was
> a chance to break some Horizon Plugins:(. Unfortunately, we haven't
> cross-project CI yet (TBH, I'm working on it and it's one of the first
> steps to get it done), that's why I tested this change [2] against all
> known plugins [3].
> 
> Most of the projects don't need any changes. I proposed few changed to
> plugins repositories [4] and most of them are merged already. Thanks a lot
> to everybody who helped me with it. Patches for heat-dashboard [5] and
> searchlight-ui [6] are under review.
> 
> Additional efforts are needed for murano-dashboard, sahara-dashboard, and
> watcher-dashboard projects. murano-dashboard has Nose test runner enabled
> in the config, so Horizon change won't affect it.
> 
> I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to
> explicitly enable Nose test runner there until we'll fix all related
> issues. I hope we'll have a good number of cross-project activities with
> these teams.
> 
> Once all patches above will be merged, we'll be ready to the next step to
> make Horizon and plugins CI better.
> 
> 
> [1] https://review.openstack.org/#/c/544296/
> [2]
> https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing
> [3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html
> [4]
> https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged)
> [5] https://review.openstack.org/572095
> [6] https://review.openstack.org/572124
> [7] https://review.openstack.org/572390
> [8] https://review.openstack.org/572391
> 
> 
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/

Nice work! Thanks for taking the initiative on updating our tooling.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] StarlingX project status update

2018-06-05 Thread Mohammed Naser
Hi everyone:

This email is just to provide an update to the initial email regarding
the state of StarlingX.  The team has proposed a set of repositories
to be imported[1] which are completely new projects (not forks of
OpenStack or any other open source software).

Importing those projects will help us on-board the new StarlingX
contributors to our community, using the same tools we use for
developing our other projects.

[1]: https://review.openstack.org/#/c/569562/

If anyone has any questions, I'd be more than happy to address them.

Regards,
Mohammed

On Wed, May 30, 2018 at 4:23 PM, Mohammed Naser  wrote:
> Hi everyone:
>
> Over the past week in the summit, there was a lot of discussion
> regarding StarlingX
> and members of the technical committee had a few productive discussions
> regarding
> the best approach to deal with a proposed new pilot project for
> incubation in the OSF's Edge
> Computing strategic focus area: StarlingX.
>
> If you're not aware, StarlingX includes forks of some OpenStack
> components and other open source software
> which contain certain features that are specific to edge and
> industrial IoT computing use cases.  The code
> behind the project is from Wind River (and is used to build a product
> called "Titanium
> Cloud").
>
> At the moment, the goal of StarlingX hosting their projects on the
> community infrastructure
> is to get the developers used to the Gerrit workflow.  The intention
> is to evenutally
> work with upstream teams in order to bring the features and bug fixes which 
> are
> specific to the fork back upstream, with an ideal goal of bringing all
> the differences
> upstream.
>
> We've discussed around all the different ways that we can approach
> this and how to
> help the StarlingX team be part of our community.  If we can
> successfully do this, it would
> be a big success for our community as well as our community gaining
> contributors from
> the Wind River team.  In an ideal world, it's a win-win.
>
> The plan at the moment is the following:
> - StarlingX will have the first import of code that is not forked,
> simply other software that
>   they've developed to help deliver their product.  This code can be
> hosted with no problems.
> - StarlingX will generate a list of patches to be brought upstream and
> the StarlingX team
>   will work together with upstream teams in order to start backporting
> and upstreaming the
>   codebase.  Emilien Macchi (EmilienM) and I have volunteered to take
> on the responsibility of
>   monitoring the progress upstreaming these patches.
> - StarlingX contains a few forks of other non-OpenStack software. The
> StarlingX team will work
>   with the authors of the original projects to ensure that they do not
> mind us hosting a fork
>   of their software.  If they don't, we'll proceed to host those
> projects. If they prefer
>   something else (hosting it themselves, placing it on another hosting
> service, etc.),
>   the StarlingX team will work with them in that way.
>
> We discussed approaches for cases where patches aren't acceptable
> upstream, because they
> diverge from the project mission or aren't comprehensive. Ideally all
> of those could be turned
> into acceptable changes that meet both team's criteria. In some cases,
> adding plugin interfaces
> or driver interfaces may be the best alternative. Only as a last
> resort would we retain the
> forks for a long period of time.
>
> From what was brought up, the team from Wind River is hoping to
> on-board roughly 50 new full
> time contributors.  In combination with the features that they've
> built that we can hopefully
> upstream, I am hopeful that we can come to a win-win situation for
> everyone in this.
>
> Regards,
> Mohammed

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose

2018-06-05 Thread Ivan Kolodyazhny
Hi team,

In Horizon, we're going to get rid of the unsupported Nose and use the Django
Test Runner instead [1]. Nose has some issues and limitations which block us
in our testing improvement efforts.

Nose has a different test discovery mechanism than Django does, so there was
a chance of breaking some Horizon plugins :(. Unfortunately, we don't have
cross-project CI yet (TBH, I'm working on it and it's one of the first
steps to get it done), which is why I tested this change [2] against all
known plugins [3].

Most of the projects don't need any changes. I proposed a few changes to
plugin repositories [4] and most of them are merged already. Thanks a lot
to everybody who helped me with it. Patches for heat-dashboard [5] and
searchlight-ui [6] are under review.

Additional efforts are needed for the murano-dashboard, sahara-dashboard, and
watcher-dashboard projects. murano-dashboard has the Nose test runner enabled
in its config, so the Horizon change won't affect it.

I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to
explicitly enable the Nose test runner there until we fix all related
issues. I hope we'll have a good number of cross-project activities with
these teams.
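
For reference, the explicit opt-in is just the standard Django setting in
the plugin's test settings module, something along these lines (the exact
settings file differs per plugin):

    # Keep running the dashboard's tests with nose until the remaining
    # incompatibilities with the plain Django runner are fixed.
    TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'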

Once all the patches above are merged, we'll be ready for the next step to
make Horizon and plugins CI better.


[1] https://review.openstack.org/#/c/544296/
[2]
https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing
[3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html
[4]
https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged)
[5] https://review.openstack.org/572095
[6] https://review.openstack.org/572124
[7] https://review.openstack.org/572390
[8] https://review.openstack.org/572391



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Jay Pipes

On 06/05/2018 08:50 AM, Stephen Finucane wrote:
I thought nested resource providers were already supported by placement? 
To the best of my knowledge, what is /not/ supported is virt drivers 
using these to report NUMA topologies but I doubt that affects you. The 
placement guys will need to weigh in on this as I could be missing 
something but it sounds like you can start using this functionality 
right now.


To be clear, this is what placement and nova *currently* support with 
regards to nested resource providers:


1) When creating a resource provider in placement, you can specify a 
parent_provider_uuid and thus create trees of providers. This was 
placement API microversion 1.14. Also included in this microversion was 
support for displaying the parent and root provider UUID for resource 
providers.
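
A minimal sketch of what that looks like against the placement REST API
(the endpoint, token and names below are placeholders, not real values):

    import requests

    PLACEMENT = 'http://placement.example/placement'
    HEADERS = {
        'X-Auth-Token': '<admin token>',
        'OpenStack-API-Version': 'placement 1.14',  # first version with nesting
    }

    # Create a child provider under an existing compute node provider.
    requests.post(PLACEMENT + '/resource_providers', headers=HEADERS, json={
        'name': 'compute1_pf0',
        'parent_provider_uuid': '<uuid of the compute node provider>',
    })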


2) The nova "scheduler report client" (terrible name, it's mostly just 
the placement client at this point) understands how to call placement 
API 1.14 and create resource providers with a parent provider.


3) The nova scheduler report client uses a ProviderTree object [1] to 
cache information about the hierarchy of providers that it knows about. 
For nova-compute workers managing hypervisors, that means the 
ProviderTree object contained in the report client is rooted in a 
resource provider that represents the compute node itself (the 
hypervisor). For nova-compute workers managing baremetal, that means the 
ProviderTree object contains many root providers, each representing an 
Ironic baremetal node.


4) The placement API's GET /allocation_candidates endpoint now 
understands the concept of granular request groups [2]. Granular request 
groups are only relevant when a user wants to specify that child 
providers in a provider tree should be used to satisfy part of an 
overall scheduling request. However, this support is yet incomplete -- 
see #5 below.


The following parts of the nested resource providers modeling are *NOT* 
yet complete, however:


5) GET /allocation_candidates does not currently return *results* when 
granular request groups are specified. So, while the placement service 
understands the *request* for granular groups, it doesn't yet have the 
ability to constrain the returned candidates appropriately. Tetsuro is 
actively working on this functionality in this patch series:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates

6) The virt drivers need to implement the update_provider_tree() 
interface [3] and construct the tree of resource providers along with 
appropriate inventory records for each child provider in the tree. Both 
libvirt and XenAPI virt drivers have patch series up that begin to take 
advantage of the nested provider modeling. However, a number of concerns 
[4] about in-place nova-compute upgrades when moving from a single 
resource provider to a nested provider tree model were raised, and we 
have begun brainstorming how to handle the migration of existing data in 
the single-provider model to the nested provider model. [5] We are 
blocking any reviews on patch series that modify the local provider 
modeling until these migration concerns are fully resolved.


7) The scheduler does not currently pass granular request groups to 
placement. Once #5 and #6 are resolved, and once the migration/upgrade 
path is resolved, clearly we will need to have the scheduler start 
making requests to placement that represent the granular request groups 
and have the scheduler pass the resulting allocation candidates to its 
filters and weighers.


Hope this helps highlight where we currently are and the work still left 
to do (in Rocky) on nested resource providers.


Best,
-jay


[1] 
https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py


[2] 
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html


[3] 
https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833


[4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html

[5] https://etherpad.openstack.org/p/placement-making-the-(up)grade

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Stephen Finucane
On Mon, 2018-06-04 at 10:49 -0700, Nadathur, Sundar wrote:
> Hi,
> 
> Cyborg needs to create RCs and traits for accelerators. The original
> plan was to do that with nested RPs. To avoid rushing the Nova
> developers, I had proposed that Cyborg could start by applying the
> traits to the compute node RP, and accept the resulting caveats for
> Rocky, till we get nested RP support. That proposal did not find many
> takers, and Cyborg has essentially been in waiting mode.
> 
> Since it is June already, and there is a risk of not delivering
> anything meaningful in Rocky, I am reviving my older proposal, which
> is summarized as below:
> 
> Cyborg shall create the RCs and traits as per spec
> (https://review.openstack.org/#/c/554717/), both in Rocky and beyond.
> Only the RPs will change post-Rocky.
> 
> In Rocky:
> - Cyborg will not create nested RPs. It shall apply the device traits
>   to the compute node RP.
> - Cyborg will document the resulting caveat, i.e., all devices in the
>   same host should have the same traits. In particular, we cannot have
>   a GPU and an FPGA, or 2 FPGAs of different types, in the same host.
> - Cyborg will document that upgrades to post-Rocky releases will
>   require operator intervention (as described below).
> 
> For upgrade to a post-Rocky world with nested RPs:
> - The operator needs to stop all running instances that use an
>   accelerator.
> - The operator needs to run a script that removes the Cyborg traits
>   and the inventory for Cyborg RCs from compute node RPs.
> - The operator can then perform the upgrade. The new Cyborg
>   agent/driver(s) shall create nested RPs and publish inventory/traits
>   as specified.
> 
> IMHO, it is acceptable for Cyborg to do this because it is new and we
> can set expectations for the (lack of) upgrade plan. The alternative
> is that potentially no meaningful use cases get addressed in Rocky for
> Cyborg.
> 
> 
> 
> Please LMK what you think.

I thought nested resource providers were already supported by
placement? To the best of my knowledge, what is not supported is virt
drivers using these to report NUMA topologies but I doubt that affects
you. The placement guys will need to weigh in on this as I could be
missing something but it sounds like you can start using this
functionality right now.

Stephen

> 
> 
> Regards,
> 
> Sundar
> 
>   
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-05 Thread Sean McGinnis
On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote:
> Jay Pipes wrote:
> > On 06/04/2018 05:02 PM, Doug Hellmann wrote:
> >> [...]>
> > Personally, I've very much enjoyed the separate PTGs because I've actually
> > been able to get work done at them; something that was much harder when the
> > design summits were part of the overall conference. 
> 
> Right, the trick is to try to preserve that productivity while making it
> easier to travel to... One way would be to make sure the PTG remains a
> separate event (separate days, separate venues, separate registration),
> just co-located in same city and week.
> 
> > [...]
> >> There are a few plans under consideration, and no firm decisions
> >> have been made, yet. We discussed a strawman proposal to combine
> >> the summit and PTG in April, in Denver, that would look much like
> >> our older Summit events (from the Folsom/Grizzly time frame) with
> >> a few days of conference and a few days of design summit, with some
> >> overlap in the middle of the week.  The dates, overlap, and
> >> arrangements will depend on venue availability.
> > 
> > Has the option of doing a single conference a year been addressed? Seems
> > to me that we (the collective we) could save a lot of money not having
> > to put on multiple giant events per year and instead have one.
> 
> Yes, the same strawman proposal included the idea of leveraging an
> existing international "OpenStack Day" event and raising its profile
> rather than organizing a full second summit every year. The second PTG
> of the year could then be kept as a separate event, or put next to that
> "upgraded" OpenStack Day.
> 

I actually really like this idea. As things slow down, there just aren't enough
big splashy new things to announce every 6 months. I think it could work
well to have one Summit a year, while using the OSD events as a way to reach
those folks that can't make it to the one big event for the year due to timing
or location.

It could help concentrate efforts to have bigger goals ready by the Summit and
keep things on a better cadence. And if we can still do a PTG-like event
alongside one of the OSD events, it would allow development to still get the
valuable face-to-face time that we've come to expect.

> Thinking on this is still very much work in progress.
> 
> -- 
> Thierry Carrez (ttx)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-05 Thread Sean McGinnis
On Mon, Jun 04, 2018 at 06:44:15PM -0500, Matt Riedemann wrote:
> On 6/4/2018 5:13 PM, Sean McGinnis wrote:
> > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would
> > check here. We don't have to see if placement has been set up or if cell0
> > has been configured. Maybe once we have the facility in place we would
> > find some things worth checking, but at present I don't know what that
> > would be.
> 
> Here is an example from the Cinder Queens upgrade release notes:
> 
> "RBD/Ceph backends should adjust max_over_subscription_ratio to take into
> account that the driver is no longer reporting volume’s physical usage but
> it’s provisioned size."
> 
> I'm assuming you could check if rbd is configured as a storage backend and
> if so, is max_over_subscription_ratio set? If not, is it fatal? Does the
> operator need to configure it before upgrading to Rocky? Or is it something
> they should consider but don't necessary have to do - if that, there is a
> 'WARNING' status for those types of things.
> 
> Things that are good candidates for automating are anything that would stop
> the cinder-volume service from starting, or things that require data
> migrations before you can roll forward. In nova we've had blocking DB schema
> migrations for stuff like this which basically mean "you haven't run the
> online data migrations CLI yet so we're not letting you go any further until
> your homework is done".
> 

Thanks, I suppose we probably could find some things to at least WARN on. Maybe
that would be useful.

I suppose as far as a series goal goes, even if each project doesn't come up
with a comprehensive set of checks, this would be a known thing deployers could
use and potentially build some additional tooling around. This could be a good
win for the overall ease of upgrade story.
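
To make that concrete, here is a minimal, purely illustrative sketch of
what a check like the RBD one could look like. It is a standalone script,
not actual Cinder code; the config path and option names just follow the
usual cinder.conf conventions, and the SUCCESS/WARNING/FAILURE codes are
made up for the example:

# Illustrative upgrade-status style check: warn when an RBD backend has
# no explicit max_over_subscription_ratio in cinder.conf.
import configparser
import sys

SUCCESS, WARNING, FAILURE = 0, 1, 2


def check_rbd_over_subscription(conf_path='/etc/cinder/cinder.conf'):
    conf = configparser.ConfigParser()
    conf.read(conf_path)

    backends = conf.get('DEFAULT', 'enabled_backends', fallback='')
    worst = SUCCESS
    for backend in [b.strip() for b in backends.split(',') if b.strip()]:
        driver = conf.get(backend, 'volume_driver', fallback='')
        if 'rbd' not in driver.lower():
            continue
        if not conf.has_option(backend, 'max_over_subscription_ratio'):
            print('WARNING: backend %s uses the RBD driver but does not '
                  'set max_over_subscription_ratio; the driver now '
                  'reports provisioned size, not physical usage.' % backend)
            worst = max(worst, WARNING)
    return worst


if __name__ == '__main__':
    sys.exit(check_rbd_over_subscription())

An in-tree check would presumably use oslo.config and the project's own
plumbing rather than configparser, but the shape of each check is about
this simple.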

> Like I said, it's not black and white, but chances are good there are things
> that fall into these categories.
> 
> -- 
> 
> Thanks,
> 
> Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] Extraroute support

2018-06-05 Thread Lajos Katona

Thanks for the answer.

On 2018-06-01 18:04, Kevin Benton wrote:
The neutron API now supports compare-and-swap updates with an If-Match
header, so the race condition can be avoided.

https://bugs.launchpad.net/neutron/+bug/1703234
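
In practice that means reading the router's current revision_number and
sending it back in an If-Match header: if someone else changed the router
in between, the update fails with 412 Precondition Failed instead of
silently overwriting routes. A minimal sketch (the endpoint, token and
router UUID are placeholders, and the retry loop is left out):

# Compare-and-swap route update against the Neutron API (sketch only).
import requests

NEUTRON = 'http://controller:9696/v2.0'   # placeholder endpoint
HEADERS = {'X-Auth-Token': 'TOKEN'}       # placeholder token
ROUTER = 'ROUTER_UUID'                    # placeholder router id

router = requests.get('%s/routers/%s' % (NEUTRON, ROUTER),
                      headers=HEADERS).json()['router']

routes = router['routes'] + [
    {'destination': '10.0.10.0/24', 'nexthop': '192.0.2.20'}]

resp = requests.put(
    '%s/routers/%s' % (NEUTRON, ROUTER),
    headers=dict(HEADERS, **{'If-Match': 'revision_number=%d'
                             % router['revision_number']}),
    json={'router': {'routes': routes}})

if resp.status_code == 412:
    # Someone updated the router since we read it: re-read and retry.
    print('revision mismatch, retry needed')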



On Fri, Jun 1, 2018, 04:57 Rabi Mishra wrote:



On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona
<lajos.kat...@ericsson.com> wrote:

Hi,

Could somebody help me out with Neutron's ExtraRoute support
in HOT templates?
The support status of ExtraRoute is support.UNSUPPORTED in
heat, and only create and delete are the supported operations,
see:

https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35

As far as I can see, the unsupported tag was added when the feature was
moved from the contrib folder to in-tree
(https://review.openstack.org/186608).
Perhaps you can help me understand why only create and delete are
supported and update is not.


I think most of the resources, when moved from contrib to in-tree,
were marked as unsupported. Adding routes to an existing router from
multiple stacks can be racy, which is probably why use of this
resource is not encouraged and hence it is not supported. You can
see the discussion in the original patch that proposed this
resource: https://review.openstack.org/#/c/41044/

Not sure if things have changed on neutron side for us to revisit
the concerns.

Also it does not have any update_allowed properties, hence no
handle_update(). It would be replaced if you change any property.
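
Just to illustrate the mechanics (a sketch only, not the in-tree resource):
marking properties with update_allowed=True and adding a handle_update()
is what would let Heat change the route in place instead of replacing the
resource. Something along these lines, with the actual neutron calls left
as comments:

# Hypothetical sketch of an updatable ExtraRoute-like Heat resource.
from heat.engine import properties
from heat.engine import resource


class SketchExtraRoute(resource.Resource):

    PROPERTIES = (ROUTER_ID, DESTINATION, NEXTHOP) = (
        'router_id', 'destination', 'nexthop')

    properties_schema = {
        ROUTER_ID: properties.Schema(
            properties.Schema.STRING, required=True),
        DESTINATION: properties.Schema(
            properties.Schema.STRING, required=True,
            update_allowed=True),   # not set on the real resource
        NEXTHOP: properties.Schema(
            properties.Schema.STRING, required=True,
            update_allowed=True),   # not set on the real resource
    }

    def handle_create(self):
        # add the route to the router; racy without a compare-and-swap
        # update, which is the concern raised in the original review
        pass

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        # only invoked because some properties are update_allowed; it
        # would need to atomically swap the old route for the new one
        pass

Without update_allowed properties and a handle_update(), Heat falls back
to its default behavior, which is exactly the replace-on-change you see.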

Hope it helps.

Thanks in advance for  the help.

Regards
Lajos







-- 
Regards,

Rabi Mishra






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-06-05 Thread Akihiro Motoki
Hi,

Sorry for re-using the ancient ML thread.
Looking at recent xstatic-* repo reviews, I am a bit afraid that
xstatic-cores do not have a common understanding of the principles of
xstatic packages.
I hope all xstatic-cores re-read "Packing Software" in the horizon
contributor docs [1], especially "Minified Javascript policy" [2],
carefully.

Thanks,
Akihiro

[1]
https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html
[2]
https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy


2018-04-04 (Wed) 14:35 Xinni Ge :

> Hi Ivan and other Horizon team members,
>
> Thanks for adding us to the xstatic-core group.
> But I still need your opinion and help to release the newly-added xstatic
> packages to the PyPI index.
>
> The current `xstatic-core` group doesn't have the permission to PUSH SIGNED
> TAG, so I cannot release the first non-trivial version.
>
> If I (or maybe Kaz) could be added to the xstatic-release group, we could
> release all 8 packages ourselves.
>
> Alternatively, we would very much appreciate it if any member of
> xstatic-release could help to do it.
>
> For your quick access, here is the link to the access permission page of
> one xstatic package.
>
> https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access
>
>
> --
> Best Regards,
> Xinni
>
> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
> wrote:
>
>> Hi Ivan,
>>
>>
>> Thank you very much.
>> I've confirmed that all of us have been added to xstatic-core.
>>
>> As discussed, we will focus on the followings what we added for
>> heat-dashboard, will not touch other xstatic repos as core.
>>
>> xstatic-angular-material
>> xstatic-angular-notify
>> xstatic-angular-uuid
>> xstatic-angular-vis
>> xstatic-filesaver
>> xstatic-js-yaml
>> xstatic-json2yaml
>> xstatic-vis
>>
>> Regards,
>> Kaz
>>
>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
>> > Hi Kuz,
>> >
>> > Don't worry, we're on the same page with you. I added both you, Xinni
>> and
>> > Keichii to the xstatic-core group. Thank you for your contributions!
>> >
>> > Regards,
>> > Ivan Kolodyazhny,
>> > http://blog.e0ne.info/
>> >
>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
>> wrote:
>> >>
>> >> Hi Ivan & Horizon folks
>> >>
>> >>
>> >> AFAIK, Horizon team had conclusion that you will add the specific
>> >> members to xstatic-core, correct ?
>> >> Can I ask you to add the following members ?
>> >> # All of tree are heat-dashboard core.
>> >>
>> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
>> >> Xinni Ge / xinni.ge1...@gmail.com
>> >> Keiichi Hikita / keiichi.hik...@gmail.com
>> >>
>> >> Please give me a shout, if we are not on same page or any concern.
>> >>
>> >> Regards,
>> >> Kaz
>> >>
>> >>
>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>> >> > Hi Ivan, Akihiro,
>> >> >
>> >> >
>> >> > Thanks for your kind arrangement.
>> >> > Looking forward to hearing your decision soon.
>> >> >
>> >> > Regards,
>> >> > Kaz
>> >> >
>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> >> >> HI Team,
>> >> >>
>> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree
>> that
>> >> >> #4
>> >> >> could be too complicated for us. Anyway, we've got this topic on the
>> >> >> meeting
>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision
>> after
>> >> >> the
>> >> >> meeting.
>> >> >>
>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>> >> >>
>> >> >>
>> >> >>
>> >> >> Regards,
>> >> >> Ivan Kolodyazhny,
>> >> >> http://blog.e0ne.info/
>> >> >>
>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki wrote:
>> >> >>>
>> >> >>> Hi Kaz and Ivan,
>> >> >>>
>> >> >>> Yeah, it is worth discussed officially in the horizon team meeting
>> or
>> >> >>> the
>> >> >>> mailing list thread to get a consensus.
>> >> >>> Hopefully you can add this topic to the horizon meeting agenda.
>> >> >>>
>> >> >>> After sending the previous mail, I noticed anther option. I see
>> there
>> >> >>> are
>> >> >>> several options now.
>> >> >>> (1) Keep xstatic-core and horizon-core same.
>> >> >>> (2) Add specific members to xstatic-core
>> >> >>> (3) Add specific horizon-plugin core to xstatic-core
>> >> >>> (4) Split core membership into per-repo basis (perhaps too
>> >> >>> complicated!!)
>> >> >>>
>> >> >>> My current vote is (2) as xstatic-core needs to understand what is
>> >> >>> xstatic
>> >> >>> and how it is maintained.
>> >> >>>
>> >> >>> Thanks,
>> >> >>> Akihiro
>> >> >>>
>> >> >>>
>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>> >> 
>> >>  Hi Akihiro,
>> >> 
>> >> 
>> >>  Thanks for your comment.
>> >>  The background of my request to add us to xstatic-core comes from
>> >>  Ivan's comment in last PTG's etherpad for heat-dashboard
>> discussion.
>> >> 
>> >> 
>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>> >>  Line135, "we can share ownership if needed - e0ne"
>> >> 
>> >>  Just in case, could 

Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-05 Thread Thierry Carrez
Jay Pipes wrote:
> On 06/04/2018 05:02 PM, Doug Hellmann wrote:
>> [...]
> Personally, I've very much enjoyed the separate PTGs because I've actually 
> been able to get work done at them; something that was much harder when the 
> design summits were part of the overall conference. 

Right, the trick is to try to preserve that productivity while making it
easier to travel to... One way would be to make sure the PTG remains a
separate event (separate days, separate venues, separate registration),
just co-located in same city and week.

> [...]
>> There are a few plans under consideration, and no firm decisions
>> have been made, yet. We discussed a strawman proposal to combine
>> the summit and PTG in April, in Denver, that would look much like
>> our older Summit events (from the Folsom/Grizzly time frame) with
>> a few days of conference and a few days of design summit, with some
>> overlap in the middle of the week.  The dates, overlap, and
>> arrangements will depend on venue availability.
> 
> Has the option of doing a single conference a year been addressed? Seems
> to me that we (the collective we) could save a lot of money not having
> to put on multiple giant events per year and instead have one.

Yes, the same strawman proposal included the idea of leveraging an
existing international "OpenStack Day" event and raising its profile
rather than organizing a full second summit every year. The second PTG
of the year could then be kept as a separate event, or put next to that
"upgraded" OpenStack Day.

Thinking on this is still very much work in progress.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] Nominating suzhengwei as Watcher core

2018-06-05 Thread Чадин Александр Сергеевич
Hi Watchers,

I’d like to nominate suzhengwei for the Watcher core team.

suzhengwei has made great contributions to the Watcher project, including
code reviews and implementations.

Please vote +1/-1.


Best Regards,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][releases][governance] Change in OSA roles tagging

2018-06-05 Thread Jean-Philippe Evrard
Hello,

TL;DR: If you are an openstack-ansible user, consuming our roles directly,
with tags, without using openstack-ansible plays or the integrated repo,
then things will change for you. Start using git shas instead of tags.
All other openstack-ansible users should not see a difference, even if
they use openstack-ansible tags.


During the summit, I had a discussion with dhellman (and smcginnis)
to change how openstack-ansible does its releases.

Currently, we tag openstack-ansible + many roles under our umbrella
every two weeks. As far as I know, nobody requested to have roles
tagged every two weeks. Only OpenStack-Ansible needs to be tagged
for consumption. Even people using our roles directly outside
openstack-ansible generally use a sha for roles. We don't rely on
Ansible Galaxy.

Because there is no need to tag the roles, there is no need to make them
part of the "openstack-ansible" deliverable [1][2]. I will therefore
clarify the governance repo for that, separating the roles, each of them
with their own deliverable, instead of grouping some roles within
openstack-ansible, and some others outside it.

With this done, a release of openstack-ansible becomes straightforward
using the standard release tooling. Releases of openstack-ansible
become far simpler to request and review, and will not have timeouts
anymore :p

There are a few issues I see from the change. Still according to the
discussion, it seems we can overcome those.

1. As this will be applied on all the branches, we may run into some
issues with releasing in the coming days. While the release validation
tooling has shown me that it wouldn't be a problem (just a warning)
to not have all the repos in the deliverable, I would expect a
governance change could be impactful.
However, that is only impacting openstack-ansible, releases,
and governance team: Keep in mind, openstack-ansible will not
change for its users, and will still be tagged as you know it.

2. We will keep branching our roles the same way we do now. What
we liked about the roles being part of this deliverable is the ability
to have them automatically branched and their files adapted.
From what I heard, it is still possible to do so, by having a
devstack-like behavior which branches on a sha instead of
branching on a tag. So I guess it means all our roles will now be
part of release files like this one [3], or even of a single release
file similar to it.

What I would like to have, from this email, is:
1. Raise awareness to all the involved parties;
2. Confirmation we can go ahead, from a governance standpoint;
3. Confirmation we can still benefit from this automatic branch
tooling.

Thank you in advance.

Jean-Philippe Evrard (evrardjp)


[1]: 
https://github.com/openstack/governance/blob/8215c5fd9b464b332b310bbb767812fefc5d9174/reference/projects.yaml#L2493-L2540
[2]: 
https://github.com/openstack/releases/blob/9db5991707458bbf26a4dd9f55c2a01fee96a45d/deliverables/queens/openstack-ansible.yaml#L768-L851
[3]: 
https://github.com/openstack/releases/blob/9db5991707458bbf26a4dd9f55c2a01fee96a45d/deliverables/queens/devstack.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-05 Thread Jean-Philippe Evrard
Sorry if I missed/repeat something already said in this thread...

When I am looking at diversity, I generally like to know: 1) what's
going on "right now", and 2) what happened in the cycle x.

I think these 2 are different problems to solve. And tags are, IMO,
best applied to the second case.

So if I focus on the second: what if we only tag once per cycle,
after the release?
(I am pushing the idea further than the quarter, basically.) It would
avoid flappiness (if that's a proper term?).
For me, a cycle has a clear meaning, and involvement can balance out
over a cycle.
This would be, IMO, good enough to promote/declare diversity after the
fact (and is an answer to "what happened during the cycle").

Jean-Philippe Evrard (evrardjp)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-05 Thread Raoul Scarazzini
On 05/06/2018 02:26, Emilien Macchi wrote:
[...]
> I hope this update was useful, feel free to give feedback or ask any
> questions,
[...]

I'm no prophet here, but I see a bright future for this approach. I can
imagine how useful this can be on the testing side, and even more so on
the learning side. Thanks for sharing!

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-05 Thread Slawomir Kaplonski
Hi Ihar,

Thanks for everything You did for OpenStack, and for Neutron especially.
I remember that You were one of the first people I met in the OpenStack community.
Thanks for all Your help then and during all the time we worked together :)

Good luck in Your new project :)

> Message written by Ihar Hrachyshka on 04.06.2018, at 22:31:
> 
> Hi neutrinos and all,
> 
> As some of you've already noticed, the last several months I was
> scaling down my involvement in Neutron and, more generally, OpenStack.
> I am at a point where I feel confident my disappearance won't disturb
> the project, and so I am ready to make it official.
> 
> I am stepping down from all administrative roles I so far accumulated
> in Neutron and Stable teams. I shifted my focus to another project,
> and so I just removed myself from all relevant admin groups to reflect
> the change.
> 
> It was a nice 4.5 year ride for me. I am very happy with what we
> achieved in all these years and a bit sad to leave. The community is
> the most brilliant and compassionate and dedicated to openness group
> of people I was lucky to work with, and I am reminded daily how
> awesome it is.
> 
> I am far from leaving the industry, or networking, or the promise of
> open source infrastructure, so I am sure we will cross our paths once
> in a while with most of you. :) I also plan to hang out in our IRC
> channels and make snarky comments, be aware!
> 
> Thanks for the fish,
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

2018-06-05 Thread Tobias Urdin
We are using them for one of our deployments and are working on moving
our other one to use the same modules :)

Best regards


On 06/04/2018 11:06 PM, Arnaud Morin wrote:
> Hey,
>
> OVH is also using them, as well as some custom ansible playbooks to manage the
> deployment. But as for Red Hat, the configuration part is handled by puppet.
> We are also doing some upstream contributions from time to time.
>
> For us, the puppet modules are very stable and work very well.
>
> Cheers,
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-06-05 Thread Bogdan Dobrelya
The proposed undercloud installation jobs dependency [0] worked; see the
job start times in [1] and [2], which confirm that. The resulting delay
for the full pipeline is ~80 minutes, as expected. So PTAL folks, I
propose to try it out in real gating and see how the tripleo zuul queue
gets relieved.


The remaining patch [3], adding a dependency on tox/linting, didn't work;
I'll need some help, please, to figure out why.


Thank you Tristan and James and y'all folks for helping!

[0] https://review.openstack.org/#/c/568536/
[1] 
http://logs.openstack.org/36/568536/6/check/tripleo-ci-centos-7-undercloud-containers/cfebec0/ara-report/
[2] 
http://logs.openstack.org/36/568536/6/check/tripleo-ci-centos-7-containers-multinode/1a211bb/ara-report/

[3] https://review.openstack.org/#/c/568543/




Perhaps this has something to do with the job evaluation order; it may be
worth trying to add the dependencies list in the project-templates, like
it is done here, for example:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799

It is also easier to read dependencies from the pipeline definitions, imo.

-Tristan



--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev