Re: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team

2018-08-22 Thread Trinh Nguyen
+1


*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *



On Wed, Aug 22, 2018 at 2:21 PM 龚永生  wrote:

> +1
>
>
>
>
> --
> *龚永生*
> *九州云信息科技有限公司 99CLOUD Co. Ltd.*
> 邮箱(Email):gong.yongsh...@99cloud.net 
> 地址:北京市海淀区上地三街嘉华大厦B座806
> Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street,
> Haidian District, Beijing, China
> 手机(Mobile):+86-18618199879
> 公司网址(WebSite):http://99cloud.net 
>
> On 2018-08-22 12:21:22, Dharmendra Kushwaha <
> dharmendra.kushw...@india.nec.com> wrote:
>
> Hi Tacker members,
>
>
>
> To keep our Tacker project growing with new active members, I would like
> to propose removing the +2 ability of our former member Kanagaraj Manickam,
> and propose that Cong Phuoc Hoang (IRC: phuoc) join the Tacker core team.
>
>
>
> Kanagaraj has not been involved for the last couple of cycles. Kanagaraj,
> you made great contributions to the Tacker project, like the VNF scaling
> features, which are milestones for the project. Thanks for your
> contributions, and we wish to see you again.
>
>
>
> Phuoc has been contributing actively to Tacker since the Pike cycle, and
> he has grown into a key member of this project [1]. He has delivered
> multiple features in each cycle, along with tons of other work such as
> fixing bugs and actively responding to bug reports. He is also an active
> contributor to cross-project work like tosca-parser and heat-translator,
> which is very helpful for Tacker.
>
>
>
> Please vote +1/-1.
>
>
>
> [1]:
> http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc
>
>
>
> Thanks & Regards
>
> Dharmendra Kushwaha
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team

2018-08-22 Thread Hiroyuki JO
+1

 

--

Hiroyuki JO

Email: jo.hiroy...@lab.ntt.co.jp, TEL(direct) : +81-422-59-7394

NTT Network Service Systems Laboratories

 

From: Dharmendra Kushwaha [mailto:dharmendra.kushw...@india.nec.com] 
Sent: Wednesday, August 22, 2018 1:21 PM
To: openstack-dev
Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team

 

Hi Tacker members,

 

To keep our Tacker project growing with new active members, I would like
to propose removing the +2 ability of our former member Kanagaraj Manickam,
and propose that Cong Phuoc Hoang (IRC: phuoc) join the Tacker core team.

 

Kanagaraj has not been involved for the last couple of cycles. Kanagaraj,
you made great contributions to the Tacker project, like the VNF scaling
features, which are milestones for the project. Thanks for your
contributions, and we wish to see you again.

 

Phuoc has been contributing actively to Tacker since the Pike cycle, and
he has grown into a key member of this project [1]. He has delivered
multiple features in each cycle, along with tons of other work such as
fixing bugs and actively responding to bug reports. He is also an active
contributor to cross-project work like tosca-parser and heat-translator,
which is very helpful for Tacker.

 

Please vote +1/-1.

 

[1]:
http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc

 

Thanks & Regards

Dharmendra Kushwaha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][patrole][neutron][policy] Neutron Policy Testing in OpenStack Patrole project

2018-08-22 Thread Ghanshyam Mann
Hi All,

This thread is to request review help from the neutron team for neutron
policy testing in the Patrole project.

For folks who are not familiar with Patrole, below is a brief background &
description of Patrole:
-
OpenStack Patrole is an official project under the QA umbrella which performs
RBAC testing. It has been in development since Ocata and recently released its
0.4.0 version for Rocky [1]. Complete documentation can be found here [2].
#openstack-qa is the IRC channel for Patrole.

The main goal of this project is to perform RBAC testing for OpenStack. We
will first focus on Nova, Cinder, Keystone, Glance and Neutron in the Patrole
repo, and provide a framework/mechanism to extend the testing to other
projects via a plugin or some other way (yet to be finalized).

Current state:
- Good coverage for Nova, Keystone, Cinder, Glance.
- Ongoing: 1. neutron coverage, 2. framework stability.
- Next: 1. stable release of Patrole, 2. start gating Patrole testing on the
  project side.
--

The Patrole team is working on neutron policy testing. As you know, neutron
policy is not as simple as in other projects, and there is no user-facing
documentation for policy. I was discussing this with amotoki and got to know
that he is working on a policy doc, or something similar, which can be useful
for users and which Patrole can consume for writing the test cases.

Another request QA has for the neutron team is to review the neutron policy
test cases. Here is the complete review list [3] (we cannot get a single
gerrit topic linked with the story#), and it would be great if the neutron
team could keep an eye on those and provide early feedback on new test cases
(their policy names, return codes, coverage, etc.).

One example where we need feedback is - 
https://review.openstack.org/#/c/586739/ 

Q: What is the return code for a GET API call if policy authorization fails?

From the neutron doc [4] (though it is an internal doc, it explains the
neutron policy internals), it seems that for GET, PUT and DELETE, resource
existence is checked first. If the resource does not exist, then 404 is
returned for security purposes, as a 403 could tell an invalid user that the
resource exists.

But for PUT and DELETE, it can be 403 when the resource exists but the user
does not have access to the PUT/DELETE operation.

I was discussing this with amotoki as well, and we thought of:
 - Check 404 for GET.
 - Check [403, 404] for PUT and DELETE.
 - Later we will tighten the checks to 404 and 403 separately for PUT and
   DELETE.
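
In test terms, that interim expectation boils down to something like the
following (an illustrative sketch, not actual Patrole code; the names here
are made up for illustration):

    # Expected status codes when policy denies the request, per the
    # interim plan above; illustrative only.
    EXPECTED_AUTHZ_FAILURE_CODES = {
        'GET': [404],           # hide existence from unauthorized users
        'PUT': [403, 404],      # 403 if the resource exists, 404 if not
        'DELETE': [403, 404],   # same reasoning as PUT
    }

    def check_authz_failure(method, actual_status):
        expected = EXPECTED_AUTHZ_FAILURE_CODES[method]
        if actual_status not in expected:
            raise AssertionError('%s returned %d, expected one of %s'
                                 % (method, actual_status, expected))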

Let us know if that is the right way to proceed.

[1] https://docs.openstack.org/releasenotes/patrole/v0.4.0.html  
[2] https://docs.openstack.org/patrole/latest/ 
[3] https://storyboard.openstack.org/#!/story/2002641
[4] 
https://docs.openstack.org/neutron/pike/contributor/internals/policy.html#request-authorization

-gmann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Old patches cleaning

2018-08-22 Thread Slawomir Kaplonski
Hi,

In Neutron we have many patches without any activity for a long time. To make
the list of patches a bit smaller, I want to run the script [1] soon.
I will run it only for these projects:
* neutron
* neutron-lib
* neutron-tempest-plugin

But if you want to run it for your stadium project, that should also be
possible after my patch [2] is merged.
If you have any concerns about running this script, please raise your hand now
:)

If you are the owner of a patch which gets abandoned and you want to continue
working on it, you can always restore your patch and carry on.

[1] 
https://github.com/openstack/neutron/blob/master/tools/abandon_old_reviews.sh
[2] https://review.openstack.org/#/c/594326
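
For anyone curious, what the script does is roughly equivalent to the
following sketch against the Gerrit REST API (illustrative only -- the real
logic and query live in the script [1]; the query, abandon message and
credentials below are placeholders):

    import json
    import requests

    GERRIT = 'https://review.openstack.org'
    QUERY = 'project:openstack/neutron status:open age:1y'

    resp = requests.get(GERRIT + '/changes/', params={'q': QUERY})
    # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI,
    # so strip the first line before parsing.
    changes = json.loads(resp.text.split('\n', 1)[1])

    for change in changes:
        requests.post(
            '%s/a/changes/%s/abandon' % (GERRIT, change['id']),
            json={'message': 'Abandoned due to long inactivity; feel free '
                             'to restore it and continue working.'},
            auth=('<username>', '<http-password>'))  # placeholder auth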

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Redis licensing terms changes

2018-08-22 Thread Haïkel
Hi,

I haven't seen this mentioned yet, but I'd like to point out that Redis has
moved to an open core licensing model.
https://redislabs.com/community/commons-clause/

In short:
* base engine remains under BSD license
* modules move to ASL 2.0 + commons clause which is non-free
(prohibits sales of derived products)

IMHO, projects that rely on Redis as their default driver should consider
alternatives (of course, it's up to them).

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team

2018-08-22 Thread Kim Bao, Long
+1 from me.
First of all, I would like to thank Phuoc for his contributions to Tacker. As
far as I know, Phuoc joined the Tacker project only a year ago, but his
contributions are highly appreciated. Besides, he is also one of the most
active members on IRC, Gerrit and bug reports.
I hope he can help Tacker keep growing in his new role.
LongKB

From: Dharmendra Kushwaha [mailto:dharmendra.kushw...@india.nec.com]
Sent: Wednesday, August 22, 2018 11:21 AM
To: openstack-dev 
Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team

Hi Tacker members,

To keep our Tacker project growing with new active members, I would like
to propose removing the +2 ability of our former member Kanagaraj Manickam,
and propose that Cong Phuoc Hoang (IRC: phuoc) join the Tacker core team.

Kanagaraj has not been involved for the last couple of cycles. Kanagaraj,
you made great contributions to the Tacker project, like the VNF scaling
features, which are milestones for the project. Thanks for your
contributions, and we wish to see you again.

Phuoc has been contributing actively to Tacker since the Pike cycle, and
he has grown into a key member of this project [1]. He has delivered
multiple features in each cycle, along with tons of other work such as
fixing bugs and actively responding to bug reports. He is also an active
contributor to cross-project work like tosca-parser and heat-translator,
which is very helpful for Tacker.

Please vote +1/-1.

[1]:
http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc

Thanks & Regards
Dharmendra Kushwaha
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Redis licensing terms changes

2018-08-22 Thread Thierry Carrez

Haïkel wrote:

I haven't seen this mentioned yet, but I'd like to point out that Redis has
moved to an open core licensing model.
https://redislabs.com/community/commons-clause/

In short:
* base engine remains under BSD license
* modules move to ASL 2.0 + commons clause which is non-free
(prohibits sales of derived products)


Beyond the sale of a derived product, it prohibits selling hosting of, or
providing consulting services on, anything that depends on it... so it's
pretty broad.



IMHO, projects that rely on Redis as their default driver should consider
alternatives (of course, it's up to them).


The TC stated in the past that default drivers had to be open source, so 
if anything depends on commons-claused Redis modules, they would 
probably have to find an alternative...


Which OpenStack components are affected?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Trinh Nguyen
Dear team,

Here is my proposed action plan for Searchlight in Stein. The ultimate goal
is to revive Searchlight with a sustainable number of contributors so that it
can release as expected.

1. Migrate Searchlight to Storyboard with the help of Kendall
2. Attract more contributors (as well as cores)
3. Clean up docs, notes
4. Review and clean up patches [1] [2] [3] [4]
5. Setting up goals/features for Stein. We will need to have a virtual PTG
since I cannot attend the in-person one (September 10-14, 2018, Denver).

This is our Etherpad for Stein, please feel free to contribute from now on
until the PTG:
https://review.openstack.org/#/q/project:openstack/searchlight+status:open

[1]
https://review.openstack.org/#/q/project:openstack/searchlight+status:open
[2]
https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
[3]
https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
[4]
https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open

If you have any idea or want to contribute, please ping me on IRC:

   - IRC Channel: #openstack-searchlight
   - My IRC handle: dangtrinhnt


Bests,

*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 2 update

2018-08-22 Thread Nguyễn Trí Hải
Hi,

Other projects, please consider merging the OpenStack Release Bot patches
related to the Rocky branch, so that we can propose the patches related to
the python3-first goal.

On Tue, Aug 21, 2018 at 8:40 PM Telles Nobrega  wrote:

> Thanks. We merged most of them, there is only one that failed the tests so
> I'm rechecking it.
>
> On Mon, Aug 20, 2018 at 5:43 PM Doug Hellmann 
> wrote:
>
>> Excerpts from Doug Hellmann's message of 2018-08-20 16:34:22 -0400:
>> > Excerpts from Telles Nobrega's message of 2018-08-20 15:07:29 -0300:
>> > > Hi Doug,
>> > >
>> > > I believe Sahara is ready to have those patches worked on.
>> > >
>> > > Do we have to do anything specific to get the env ready?
>> >
>> > Just be ready to do the reviews. I am generating the patches now and
>> > will propose them in a little while when the script finishes.
>>
>> And here they are:
>>
>>
>> +-----------------------------------------------+---------------------------------+--------------------------------------+
>> | Subject                                       | Repo                            | URL                                  |
>> +-----------------------------------------------+---------------------------------+--------------------------------------+
>> | import zuul job settings from project-config  | openstack/python-saharaclient   | https://review.openstack.org/593904  |
>> | switch documentation job to new PTI           | openstack/python-saharaclient   | https://review.openstack.org/593905  |
>> | add python 3.6 unit test job                  | openstack/python-saharaclient   | https://review.openstack.org/593906  |
>> | import zuul job settings from project-config  | openstack/python-saharaclient   | https://review.openstack.org/593918  |
>> | import zuul job settings from project-config  | openstack/python-saharaclient   | https://review.openstack.org/593923  |
>> | import zuul job settings from project-config  | openstack/python-saharaclient   | https://review.openstack.org/593928  |
>> | import zuul job settings from project-config  | openstack/python-saharaclient   | https://review.openstack.org/593933  |
>> | import zuul job settings from project-config  | openstack/sahara                | https://review.openstack.org/593907  |
>> | switch documentation job to new PTI           | openstack/sahara                | https://review.openstack.org/593908  |
>> | add python 3.6 unit test job                  | openstack/sahara                | https://review.openstack.org/593909  |
>> | import zuul job settings from project-config  | openstack/sahara                | https://review.openstack.org/593919  |
>> | import zuul job settings from project-config  | openstack/sahara                | https://review.openstack.org/593924  |
>> | import zuul job settings from project-config  | openstack/sahara                | https://review.openstack.org/593929  |
>> | import zuul job settings from project-config  | openstack/sahara                | https://review.openstack.org/593934  |
>> | import zuul job settings from project-config  | openstack/sahara-dashboard      | https://review.openstack.org/593910  |
>> | switch documentation job to new PTI           | openstack/sahara-dashboard      | https://review.openstack.org/593911  |
>> | import zuul job settings from project-config  | openstack/sahara-dashboard      | https://review.openstack.org/593920  |
>> | import zuul job settings from project-config  | openstack/sahara-dashboard      | https://review.openstack.org/593925  |
>> | import zuul job settings from project-config  | openstack/sahara-dashboard      | https://review.openstack.org/593930  |
>> | import zuul job settings from project-config  | openstack/sahara-dashboard      | https://review.openstack.org/593935  |
>> | import zuul job settings from project-config  | openstack/sahara-extra          | https://review.openstack.org/593912  |
>> | import zuul job settings from project-config  | openstack/sahara-extra          | https://review.openstack.org/593921  |
>> | import zuul job settings from project-config  | openstack/sahara-extra          | https://review.openstack.org/593926  |
>> | import zuul job settings from project-config  | openstack/sahara-extra          | https://review.openstack.org/593931  |
>> | import zuul job settings from project-config  | openstack/sahara-extra          | https://review.openstack.org/593936  |
>> | import zuul job settings from project-config  | openstack/sahara-image-elements | https://review.openstack.org/593913  |
>> | import zuul job settings from project-config  | openstack/sahara-image-elements | https://review.openstack.org/593922  |
>> | import zuul job settings from project-config  | openstack/sahara-image-elements | https://review.openstack.org/593927  |
>> | import zuul job settings from project-config  | openstack/sahara-image-elements | https://review.openstack.org/593932  |
>> | import zuul job settings from project-config  | openstack/sahara-image-elements | https://review.openstack.org/593937  |
>> +-----------------------------------------------+---------------------------------+--------------------------------------+
>> | import 

Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-22 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi, 

This is good news. We could even have an hour-long session to discuss ideas
about TripleO's place in the edge cloud infrastructure. Would you be open to
that?

Br, 
Gerg0

-Original Message-
From: James Slagle  
Sent: Tuesday, August 21, 2018 2:42 PM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud 
deployment use cases (new squad)

On Tue, Aug 21, 2018 at 2:40 AM Csatari, Gergely (Nokia - HU/Budapest) 
 wrote:
>
> Hi,
>
> There was a two-day workshop on edge requirements back in Dublin. The notes
> are stored here:
> https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG
> I think there are some areas there that could be interesting for the squad.
> The Edge Computing Group plans to have a day-long discussion in Denver. Maybe
> we could have a short discussion there about these requirements.

Thanks! I've added my name to the etherpad for the PTG and will plan on 
spending Tuesday with the group.
https://etherpad.openstack.org/p/EdgeComputingGroupPTG4


--
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-22 Thread Adrian Turjak
Bah! I saw this while on holiday and didn't get a chance to respond,
sorry for being late to the conversation.

On 11/08/18 3:46 AM, Colleen Murphy wrote:
> ### Self-Service Keystone
>
> At the weekly meeting Adam suggested we make self-service keystone a focus 
> point of the PTG[9]. Currently, policy limitations make it difficult for an 
> unprivileged keystone user to get things done or to get information without 
> the help of an administrator. There are some other projects that have been 
> created to act as workflow proxies to mitigate keystone's limitations, such 
> as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written 
> by Kristi). The question is whether the primitives offered by keystone are 
> sufficient building blocks for these external tools to leverage, or if we 
> should be doing more of this logic within keystone. Certainly improving our 
> RBAC model is going to be a major part of improving the self-service user 
> experience.
>
> [9] 
> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121
> [10] https://adjutant.readthedocs.io/en/latest/
> [11] https://github.com/CCI-MOC/ksproj

As you can probably expect, I'd love to be a part of any of these
discussions. The more logic I can move to being directly supported in
Keystone, the less I need to do in Adjutant. The majority of things, though,
I think I can do reasonably well with the primitives Keystone gives me, and
where I can't, I tend to work with upstream to fill the gaps.

System vs project scope helps a lot though, and I look forward to really
playing with that.

I sadly won't be at the PTG, but will be at the Berlin summit. Plus I
have a lot of Adjutant work planned for Stein, a large chunk of which is
refactors and reshuffling blueprints and writing up a roadmap, plus some
better entry point tasks for new contributors.

> ### Standalone Keystone
>
> Also at the meeting and during office hours, we revived the discussion of 
> what it would take to have a standalone keystone be a useful identity 
> provider for non-OpenStack projects[12][13]. First up we'd need to turn 
> keystone into a fully-fledged SAML IdP, which it's not at the moment (which 
> is a point of confusion in our documentation), or even add support for it to 
> act as an OpenID Connect IdP. This would be relatively easy to do (or at 
> least not impossible). Then the application would have to use 
> keystonemiddleware or its own middleware to route requests to keystone to 
> issue and validate tokens (this is one aspect where we've previously 
> discussed whether JWT could benefit us). Then the question is what should a 
> not-OpenStack application do with keystone's "scoped RBAC"? It would all 
> depend on how the resources of the application are grouped and whether they 
> care about multitenancy in some form. Likely each application would have 
> different needs and it would be difficult to find a one-size-fits-all 
> approach. We're interested to know whether anyone has a burning use case for 
> something like this.
>
> [12] 
> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192
> [13] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30

This one is interesting because another department at Catalyst is
actually looking to use Keystone outside of the scope of OpenStack. They
are building a SaaS platform, and they need authn, authz (with some
basic RBAC), a service catalog (think API endpoint per software
offering), and most of those things are useful outside of OpenStack.
They can then use projects to signify a customer, and a project
(customer) could have one or more users accessing the management GUIs,
with roles giving them some RBAC. A large part of this is because they
can then also piggy back on a lot of work our team has done with
OpenStack and Keystone and even reuse some of our projects and tools for
billing and other things (Adjutant maybe?). They could use KeystoneAuth
for CLI and client tools, they can build their APIs using
Keystonemiddleware.
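
As a rough sketch of what the client side might look like with keystoneauth1
(the endpoint and credentials here are placeholders, not a real deployment):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Each customer maps to a keystone project; roles on that project
    # provide the basic RBAC described above.
    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='alice',
                       password='secret',
                       project_name='customer-a',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # The SaaS APIs would sit behind keystonemiddleware and only ever
    # see the token, never the credentials.
    token = sess.get_token()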


Then another reason why this actually interests the Catalyst Cloud team
is because we actually use Keystone with an SQL backend for our public
cloud, with the db in a multi-region galera cluster. Keystone is our
Idp, we don't federate it, and we now have a reasonably passable 2FA
option on it, with a better MFA option coming in Stein when I'm done
with Auth Receipts. We actually kind of like Keystone for our authn, and
because we didn't have any existing users when we first built our cloud
so using vanilla Keystone seemed like a sensible solution. We had plans
to migrate users and federate, or move to LDAP, but they never
materialized because maintaining more systems didn't make sense and didn't
add many useful benefits. Making Keystone a fully fledged IdP with SAML
and OpenID support would be fantastic because we could 

Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Kendall Nelson
It is done! I could only find two lp projects - searchlight and
python-searchlight-client. If I missed any, let me know and I can run the
script on them. Otherwise you can view the results here[1].

Play around with it for a couple days and if it works for you we can
migrate you whenever. We usually do migrations on Fridays to minimize
impact on other work.

For other info about the migration process, you can check that out here[2]
or ask in #storyboard or email me directly :)

-Kendall (diablo_rojo)

[1] https://storyboard-dev.openstack.org/#!/project_group/61
[2] https://docs.openstack.org/infra/storyboard/migration.html

On Wed, Aug 22, 2018 at 12:19 PM Kendall Nelson 
wrote:

> Hello Trinh,
>
>
> On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen 
> wrote:
>
>> Dear team,
>>
>> Here is my proposed action plan for Searchlight in Stein. The ultimate
>> goal is to revive Searchlight with a sustainable number of contributors and
>> can release as expected.
>>
>> 1. Migrate Searchlight to Storyboard with the help of Kendall
>>
>
> I will get Searchlight setup in our dev environment and run some test
> migrations today and let you know when they finish :)
>
>
>> 2. Attract more contributors (as well as cores)
>> 3. Clean up docs, notes
>> 4. Review and clean up patches [1] [2] [3] [4]
>> 5. Setting up goals/features for Stein. We will need to have a virtual
>> PTG (September 10-14, 2018, Denver) since I cannot attend it this time.
>>
>> This is our Etherpad for Stein, please feel free to contribute from now
>> on until the PTG:
>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>>
>> [1]
>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>> [2]
>> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
>> [3]
>> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
>> [4]
>> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open
>>
>> If you have any idea or want to contribute, please ping me on IRC:
>>
>>- IRC Channel: #openstack-searchlight
>>- My IRC handler: dangtrinhnt
>>
>>
>> Bests,
>>
>> *Trinh Nguyen *| Founder & Chief Architect
>>
>> 
>>
>> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>>
>>
> - Kendall (diablo_rojo)
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] octavia-dashboard 2.0.0.0rc3 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for octavia-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/octavia-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


https://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/rocky

Release notes for octavia-dashboard can be found at:

https://docs.openstack.org/releasenotes/octavia-dashboard/

If you find an issue that could be considered release-critical, please
file it at:

https://storyboard.openstack.org/#!/project/909

and tag it *rocky-rc-potential* to bring it to the octavia-dashboard
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-22 Thread Monty Taylor

On 08/22/2018 04:31 PM, Andy Botting wrote:

Hi all,

We've recently moved to using Glance's community visibility on the 
Nectar Research Cloud. We had lots of public images (12255), and we 
found it was becoming slow to list them all and the community image 
visibility seems to fit our use-case nicely.


We moved all of our users' images over to become community images, and 
left our 'official' images as the only public ones.

We found a few issues, which I wanted to document, if anyone else is 
looking at doing the same thing.


-> Glance API has no way of returning all images available to me in a 
single API request (https://bugs.launchpad.net/glance/+bug/1779251)
The default list of images is perfect (all available to me, except 
community), but there's a heap of cases where you need to fetch all 
images including community. If we did have this, my next points would be 
a whole lot easier to solve.


-> Horizon's support for Community images is very lacking 
(https://bugs.launchpad.net/horizon/+bug/1779250)
On the surface, it looks like Community images are supported in Horizon, 
but it's only as far as listing images in the Images tab. Trying to boot 
a Community image from the Launch Instance wizard is actually 
impossible, as community images don't appear in that list at all. The 
images tab in Horizon dynamically builds the list of images on the 
Images tab through new Glance API calls when you use any filters (good).
In contrast, the source tab on the Launch Images wizard loads all images 
at the start (slow with lots of images), then relies on javascript 
client-side filtering of the list. I've got a dirty patch to fix this 
for us by basically making two Glance API requests (one without 
specifying visibility, and another with visibility=community), then 
merging the data. This would be better handled the same way as the 
Images tab, with new Glance API requests when filtering.


-> Users can't set their own images as Community from the dashboard
Should be relatively easy to add this. I'm hoping to look into fixing 
this soon.


-> Murano / Sahara image discovery
These projects rely on images to be chosen when creating new 
environments, and it looks like they use a glance list for their 
discovery. They both suffer from the same issue and require their images 
to be non-community for them to find their images.


-> Openstack Client didn't support listing community images at all 
(https://storyboard.openstack.org/#!/story/2001925 
)
It did support setting images to community, but support for actually 
listing them was missing. Support has now been added, but I'm not sure if 
it's made it to a release yet.
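
The two-request workaround mentioned in the Horizon item above boils down to
something like this (a sketch against the Glance v2 images API; the endpoint
and token are placeholders, and pagination is ignored for brevity):

    import requests

    GLANCE = 'https://glance.example.com'
    HEADERS = {'X-Auth-Token': '<token>'}

    def list_all_images():
        # Default listing: public/private/shared images visible to me,
        # which deliberately excludes community images...
        default = requests.get(GLANCE + '/v2/images',
                               headers=HEADERS).json()
        # ...so a second request is needed for the community ones.
        community = requests.get(GLANCE + '/v2/images',
                                 params={'visibility': 'community'},
                                 headers=HEADERS).json()
        return default['images'] + community['images']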


We've got a few more things I want to do related to images, sdk, 
openstackclient *and* horizon to make rollouts like this a bit better.


I'm betting when I do that I should add murano, sahara and heat to the 
list.


We're currently having to add the new support in like 5 places, which is 
where some of the holes come from. Hopefully we'll get stuff solid on 
that front soon - but thanks for the feedback!


Apart from these issues, our migration was pretty successful with 
minimal user complaints.


\o/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Trinh Nguyen
Hi Kendall,

Thanks much for the help. :)

Bests,


*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *



On Thu, Aug 23, 2018 at 7:21 AM Kendall Nelson 
wrote:

> It is done! I could only find two lp projects- searchlight and
> python-searchlight-client. If I missed any let me know and I can run the
> script on them. Otherwise you can view the results here[1].
>
> Play around with it for a couple days and if it works for you we can
> migrate you whenever. We usually do migrations on Fridays to minimize
> impact on other work.
>
> For other info about the migration process, you can check that out here[2]
> or ask in #storyboard or email me directly :)
>
> -Kendall (diablo_rojo)
>
> [1] https://storyboard-dev.openstack.org/#!/project_group/61
> [2] https://docs.openstack.org/infra/storyboard/migration.html
>
> On Wed, Aug 22, 2018 at 12:19 PM Kendall Nelson 
> wrote:
>
>> Hello Trinh,
>>
>>
>> On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen 
>> wrote:
>>
>>> Dear team,
>>>
>>> Here is my proposed action plan for Searchlight in Stein. The ultimate
>>> goal is to revive Searchlight with a sustainable number of contributors and
>>> can release as expected.
>>>
>>> 1. Migrate Searchlight to Storyboard with the help of Kendall
>>>
>>
>> I will get Searchlight setup in our dev environment and run some test
>> migrations today and let you know when they finish :)
>>
>>
>>> 2. Attract more contributors (as well as cores)
>>> 3. Clean up docs, notes
>>> 4. Review and clean up patches [1] [2] [3] [4]
>>> 5. Setting up goals/features for Stein. We will need to have a virtual
>>> PTG (September 10-14, 2018, Denver) since I cannot attend it this time.
>>>
>>> This is our Etherpad for Stein, please feel free to contribute from now
>>> on until the PTG:
>>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>>>
>>> [1]
>>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>>> [2]
>>> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
>>> [3]
>>> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
>>> [4]
>>> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open
>>>
>>> If you have any idea or want to contribute, please ping me on IRC:
>>>
>>>- IRC Channel: #openstack-searchlight
>>>- My IRC handler: dangtrinhnt
>>>
>>>
>>> Bests,
>>>
>>> *Trinh Nguyen *| Founder & Chief Architect
>>>
>>> 
>>>
>>> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz
>>> *
>>>
>>>
>> - Kendall (diablo_rojo)
>>
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Freezer] Reactivate the team

2018-08-22 Thread Trinh Nguyen
Hi Kendall,

I hope gengchc2 will have a decision on this since he is the new PTL of
Freezer for Stein.

Bests,

*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *



On Thu, Aug 23, 2018 at 8:26 AM Kendall Nelson 
wrote:

> I finished the test migration. You can find the results here[1]. I only
> found two lp projects- freezer and freezer-web-ui. If I missed any, please
> let me know and I will run the migration script on them.
>
> Play around with it for a few days and let me know if you are interested
> in moving forward with the real migration.
>
> -Kendall (diablo_rojo)
>
> [1] https://storyboard-dev.openstack.org/#!/project_group/62
>
> On Tue, Aug 21, 2018 at 11:30 AM Kendall Nelson 
> wrote:
>
>> If you also wanted to add migrating from Launchpad to Storyboard to this
>> list I am happy to help do the test migration and coordinate the real
>> migration.
>>
>> -Kendall (diablo_rojo)
>>
>> On Fri, Aug 17, 2018 at 6:50 PM Trinh Nguyen 
>> wrote:
>>
>>> Dear Freezer team,
>>>
>>> Since we have appointed a new PTL for the Stein cycle (gengchc2), I
>>> suggest that we should reactivate the team follows these actions:
>>>
>>>1. Have a team meeting to formalize the new leader as well as
>>>discuss the new direction.
>>>2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit
>>>repositories.
>>>3. Reorganize the core team to make sure we have enough active core
>>>reviewers for new patches.
>>>4. Clean up bug reports, blueprints on Launchpad, as well as
>>>unreviewed patched on Gerrit.
>>>
>>> I hope that we can revive Freezer.
>>>
>>> Best regards,
>>>
>>> *Trinh Nguyen *| Founder & Chief Architect
>>>
>>> 
>>>
>>> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz
>>> *
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-22 Thread Sam Morrison
I think in our case we'd only migrate between cells if we know the network and 
storage are accessible, and we would never do it if not. Think moving from old 
to new hardware at a cell level.

If storage and network aren't available, ideally it would fail at the API request.

There are also ceph-backed instances, so this is something nova would be 
responsible for taking into account as well.

I’ll be in Denver so we can discuss more there too.

Cheers,
Sam





> On 23 Aug 2018, at 11:23 am, Matt Riedemann  wrote:
> 
> Hi everyone,
> 
> I have started an etherpad for cells topics at the Stein PTG [1]. The main 
> issue in there right now is dealing with cross-cell cold migration in nova.
> 
> At a high level, I am going off these requirements:
> 
> * Cells can shard across flavors (and hardware type) so operators would like 
> to move users off the old flavors/hardware (old cell) to new flavors in a new 
> cell.
> 
> * There is network isolation between compute hosts in different cells, so no 
> ssh'ing the disk around like we do today. But the image service is global to 
> all cells.
> 
> Based on this, for the initial support for cross-cell cold migration, I am 
> proposing that we leverage something like shelve offload/unshelve 
> masquerading as resize. We shelve offload from the source cell and unshelve 
> in the target cell. This should work for both volume-backed and 
> non-volume-backed servers (we use snapshots for shelved offloaded 
> non-volume-backed servers).
> 
> There are, of course, some complications. The main ones that I need help with 
> right now are what happens with volumes and ports attached to the server. 
> Today we detach from the source and attach at the target, but that's assuming 
> the storage backend and network are available to both hosts involved in the 
> move of the server. Will that be the case across cells? I am assuming that 
> depends on the network topology (are routed networks being used?) and storage 
> backend (routed storage?). If the network and/or storage backend are not 
> available across cells, how do we migrate volumes and ports? Cinder has a 
> volume migrate API for admins but I do not know how nova would know the 
> proper affinity per-cell to migrate the volume to the proper host (cinder 
> does not have a routed storage concept like routed provider networks in 
> neutron, correct?). And as far as I know, there is no such thing as port 
> migration in Neutron.
> 
> Could Placement help with the volume/port migration stuff? Neutron routed 
> provider networks rely on placement aggregates to schedule the VM to a 
> compute host in the same network segment as the port used to create the VM, 
> however, if that segment does not span cells we are kind of stuck, correct?
> 
> To summarize the issues as I see them (today):
> 
> * How to deal with the targeted cell during scheduling? This is so we can 
> even get out of the source cell in nova.
> 
> * How does the API deal with the same instance being in two DBs at the same 
> time during the move?
> 
> * How to handle revert resize?
> 
> * How are volumes and ports handled?
> 
> I can get feedback from my company's operators based on what their deployment 
> will look like for this, but that does not mean it will work for others, so I 
> need as much feedback from operators, especially those running with multiple 
> cells today, as possible. Thanks in advance.
> 
> [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells
> 
> -- 
> 
> Thanks,
> 
> Matt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete

2018-08-22 Thread Doug Hellmann
Monasca team,

It looks like you have self-proposed some, but not all, of the
patches to import the zuul settings into monasca repositories.

I found these:

+------------------------------------------------------+--------------------------------+--------------+------------+--------------------------------------+---------------+
| Subject                                              | Repo                           | Tests        | Workflow   | URL                                  | Branch        |
+------------------------------------------------------+--------------------------------+--------------+------------+--------------------------------------+---------------+
| Removed dependency on supervisor                     | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/554304  | master        |
| fix tox python3 overrides                            | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/574693  | master        |
| fix tox python3 overrides                            | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/572970  | master        |
| import zuul job settings from project-config         | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590698  | stable/ocata  |
| import zuul job settings from project-config         | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590355  | stable/pike   |
| import zuul job settings from project-config         | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/589928  | stable/queens |
| fix tox python3 overrides                            | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/572910  | master        |
| ignore python2-specific code under python3 for pep8  | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/573002  | master        |
| fix tox python3 overrides                            | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/572971  | master        |
| replace use of 'unicode' builtin                     | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/573015  | master        |
| fix tox python3 overrides                            | openstack/monasca-statsd       | VERIFIED     | MERGED     | https://review.openstack.org/572911  | master        |
| fix tox python3 overrides                            | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/573344  | master        |
| replace unicode with six.text_type                   | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/575212  | master        |
|                                                      |                                |              |            |                                      |               |
|                                                      |                                | VERIFIED: 13 | MERGED: 13 |                                      |               |
+------------------------------------------------------+--------------------------------+--------------+------------+--------------------------------------+---------------+

They do not include the monasca-events-api, monasca-specs,
monasca-persister, monasca-tempest-plugin, monasca-thresh, monasca-ui,
monasca-ceilometer, monasca-transform, monasca-analytics,
monasca-grafana-datasource, and monasca-kibana-plugin repositories.

It also looks like they don’t include some necessary changes for
some branches in some of the other repos, although I haven’t checked
if those branches actually exist so maybe they’re fine.

We also need a patch to project-config to remove the settings for
all of the monasca team’s repositories.

I can generate the missing patches, but doing that now is likely
to introduce some bad patches into the repositories that have had
some work done, so you’ll need to review everything carefully.

In all, it looks like we’re missing around 80+ patches, although
some of the ones I have generated locally may be bogus because of
the existing changes.

I realize Witold is OOO for a while, so I'm emailing the list to
ask the team how you want to proceed. Should I go ahead and propose
the patches I have?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-22 Thread Jay Pipes
On Wed, Aug 22, 2018, 10:13 AM Eric Fried  wrote:

> For some time, nova has been using uuidsentinel [1] which conveniently
> allows you to get a random UUID in a single LOC with a readable name
> that's the same every time you reference it within that process (but not
> across processes). Example usage: [2].
>
> We would like other projects (notably the soon-to-be-split-out placement
> project) to be able to use uuidsentinel without duplicating the code. So
> we would like to stuff it in an oslo lib.
>
> The question is whether it should live in oslotest [3] or in
> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
> The issues we've thought of so far:
>
> - If this thing is used only for test, oslotest makes sense. We haven't
> thought of a non-test use, but somebody surely will.
> - Conversely, if we put it in oslo_utils, we're kinda saying we support
> it for non-test too. (This is why the oslo_utils version does some extra
> work for thread safety and collision avoidance.)
> - In oslotest, awkwardness is necessary to avoid circular importing:
> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
> oslo_utils.uuidutils, everything is right there.
>

My preference is to put it in oslotest. Why does oslo_utils.uuidutils
import oslotest? That makes zero sense to me...

-jay

- It's a... UUID util. If I didn't know anything and I was looking for a
> UUID util like uuidsentinel, I would look in a module called uuidutils
> first.
>
> We hereby solicit your opinions, either by further discussion here or as
> votes on the respective patches.
>
> Thanks,
> efried
>
> [1]
>
> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
> [2]
>
> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
> [3] https://review.openstack.org/594068
> [4] https://review.openstack.org/594179
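
For reference, the uuidsentinel pattern in [1] boils down to roughly this
minimal sketch (illustrative, not the exact nova code):

    import sys
    import uuid

    class UUIDSentinels(object):
        # Same attribute name -> same random UUID string, for the life
        # of this process only.
        def __init__(self):
            self._sentinels = {}

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError('Sentinels must not start with _')
            if name not in self._sentinels:
                self._sentinels[name] = str(uuid.uuid4())
            return self._sentinels[name]

    # Replace the module with the sentinel object so callers can write
    # things like uuids.instance1 directly.
    sys.modules[__name__] = UUIDSentinels()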
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-22 Thread Matt Riedemann

Hi everyone,

I have started an etherpad for cells topics at the Stein PTG [1]. The 
main issue in there right now is dealing with cross-cell cold migration 
in nova.


At a high level, I am going off these requirements:

* Cells can shard across flavors (and hardware type) so operators would 
like to move users off the old flavors/hardware (old cell) to new 
flavors in a new cell.


* There is network isolation between compute hosts in different cells, 
so no ssh'ing the disk around like we do today. But the image service is 
global to all cells.


Based on this, for the initial support for cross-cell cold migration, I 
am proposing that we leverage something like shelve offload/unshelve 
masquerading as resize. We shelve offload from the source cell and 
unshelve in the target cell. This should work for both volume-backed and 
non-volume-backed servers (we use snapshots for shelved offloaded 
non-volume-backed servers).
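
Very roughly, the flow I have in mind looks something like this (pseudocode 
with stubbed-out steps, not working nova code; every name here is 
illustrative):

    def shelve_offload(instance, source_cell):
        # Snapshot (if not volume-backed) and free the source host.
        return 'snapshot-image-id'

    def schedule(instance, target_cell):
        # Pick a host in the target cell that fits the new flavor.
        return 'target-host'

    def unshelve(instance, dest_host, image_id):
        # Spawn on the target host, re-attaching ports and volumes; the
        # open question is whether network/storage reach the new cell.
        pass

    def cross_cell_cold_migrate(instance, source_cell, target_cell):
        image_id = shelve_offload(instance, source_cell)
        dest_host = schedule(instance, target_cell)
        unshelve(instance, dest_host, image_id)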


There are, of course, some complications. The main ones that I need help 
with right now are what happens with volumes and ports attached to the 
server. Today we detach from the source and attach at the target, but 
that's assuming the storage backend and network are available to both 
hosts involved in the move of the server. Will that be the case across 
cells? I am assuming that depends on the network topology (are routed 
networks being used?) and storage backend (routed storage?). If the 
network and/or storage backend are not available across cells, how do we 
migrate volumes and ports? Cinder has a volume migrate API for admins 
but I do not know how nova would know the proper affinity per-cell to 
migrate the volume to the proper host (cinder does not have a routed 
storage concept like routed provider networks in neutron, correct?). And 
as far as I know, there is no such thing as port migration in Neutron.


Could Placement help with the volume/port migration stuff? Neutron 
routed provider networks rely on placement aggregates to schedule the VM 
to a compute host in the same network segment as the port used to create 
the VM, however, if that segment does not span cells we are kind of 
stuck, correct?


To summarize the issues as I see them (today):

* How to deal with the targeted cell during scheduling? This is so we 
can even get out of the source cell in nova.


* How does the API deal with the same instance being in two DBs at the 
same time during the move?


* How to handle revert resize?

* How are volumes and ports handled?

I can get feedback from my company's operators based on what their 
deployment will look like for this, but that does not mean it will work 
for others, so I need as much feedback from operators, especially those 
running with multiple cells today, as possible. Thanks in advance.


[1] https://etherpad.openstack.org/p/nova-ptg-stein-cells

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan

2018-08-22 Thread Ade Lee
Thanks guys, 

Sorry - it was not clear to me if I was supposed to do anything
further.  It seems like the requirements team has approved the FFE and
the release has merged.  Is there anything further I need to do?

Thanks,
Ade

On Tue, 2018-08-21 at 14:16 -0500, Matthew Thode wrote:
> On 18-08-21 14:00:41, Ben Nemec wrote:
> > Because castellan is in global-requirements, we need an FFE from
> > requirements too.  Can someone from the requirements team respond
> > to the
> > review?  Thanks.
> > 
> > On 08/16/2018 04:34 PM, Ben Nemec wrote:
> > > The backport has merged and I've proposed the release here:
> > > https://review.openstack.org/592746
> > > 
> > > On 08/15/2018 11:58 AM, Ade Lee wrote:
> > > > Done.
> > > > 
> > > > https://review.openstack.org/#/c/592154/
> > > > 
> > > > Thanks,
> > > > Ade
> > > > 
> > > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote:
> > > > > 
> > > > > On 08/14/2018 01:56 PM, Sean McGinnis wrote:
> > > > > > > On 08/10/2018 10:15 AM, Ade Lee wrote:
> > > > > > > > Hi all,
> > > > > > > > 
> > > > > > > > I'd like to request a feature freeze exception to get
> > > > > > > > the
> > > > > > > > following
> > > > > > > > change in for castellan.
> > > > > > > > 
> > > > > > > > https://review.openstack.org/#/c/575800/
> > > > > > > > 
> > > > > > > > This extends the functionality of the vault backend to
> > > > > > > > provide
> > > > > > > > previously uninmplemented functionality, so it should
> > > > > > > > not break
> > > > > > > > anyone.
> > > > > > > > 
> > > > > > > > The castellan vault plugin is used behind barbican in
> > > > > > > > the
> > > > > > > > barbican-
> > > > > > > > vault plugin.  We'd like to get this change into Rocky
> > > > > > > > so that
> > > > > > > > we can
> > > > > > > > release Barbican with complete functionality on this
> > > > > > > > backend
> > > > > > > > (along
> > > > > > > > with a complete set of passing functional tests).
> > > > > > > 
> > > > > > > This does seem fairly low risk since it's just
> > > > > > > implementing a
> > > > > > > function that
> > > > > > > previously raised a NotImplemented exception.  However,
> > > > > > > with it
> > > > > > > being so
> > > > > > > late in the cycle I think we need the release team's
> > > > > > > input on
> > > > > > > whether this
> > > > > > > is possible.  Most of the release FFE's I've seen have
> > > > > > > been for
> > > > > > > critical
> > > > > > > bugs, not actual new features.  I've added that tag to
> > > > > > > this
> > > > > > > thread so
> > > > > > > hopefully they can weigh in.
> > > > > > > 
> > > > > > 
> > > > > > As far as releases go, this should be fine. If this doesn't
> > > > > > affect
> > > > > > any other
> > > > > > projects and would just be a late merging feature, as long
> > > > > > as the
> > > > > > castellan
> > > > > > team has considered the risk of adding code so late and is
> > > > > > comfortable with
> > > > > > that, this is OK.
> > > > > > 
> > > > > > Castellan follows the cycle-with-intermediary release
> > > > > > model, so the
> > > > > > final Rocky
> > > > > > release just needs to be done by next Thursday. I do see
> > > > > > the
> > > > > > stable/rocky
> > > > > > branch has already been created for this repo, so it would
> > > > > > need to
> > > > > > merge to
> > > > > > master first (technically stein), then get cherry-picked to
> > > > > > stable/rocky.
> > > > > 
> > > > > Okay, sounds good.  It's already merged to master so we're
> > > > > good
> > > > > there.
> > > > > 
> > > > > Ade, can you get the backport proposed?
> > > > > 
> 
> I've approved it for a UC only bump
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 2 update

2018-08-22 Thread Doug Hellmann
Excerpts from William M Edmonds's message of 2018-08-22 07:42:43 -0400:
> 
> Doug Hellmann  wrote on 08/20/2018 11:27:09 AM:
> > If your team is ready to have your zuul settings migrated, please
> > let us know by following up to this email. We will start with the
> > volunteers, and then work our way through the other teams.
> 
> I think PowerVMStackers is ready (so nova-powervm, networking-powervm,
> ceilometer-powervm).

Here you go:

+-----------------------------------------------+------------------------------+--------------------------------------+---------------+
| Subject                                       | Repo                         | URL                                  | Branch        |
+-----------------------------------------------+------------------------------+--------------------------------------+---------------+
| import zuul job settings from project-config  | openstack/ceilometer-powervm | https://review.openstack.org/594984  | master        |
| add python 3.6 unit test job                  | openstack/ceilometer-powervm | https://review.openstack.org/594985  | master        |
| import zuul job settings from project-config  | openstack/ceilometer-powervm | https://review.openstack.org/594989  | stable/ocata  |
| import zuul job settings from project-config  | openstack/ceilometer-powervm | https://review.openstack.org/594992  | stable/pike   |
| import zuul job settings from project-config  | openstack/ceilometer-powervm | https://review.openstack.org/594995  | stable/queens |
| import zuul job settings from project-config  | openstack/ceilometer-powervm | https://review.openstack.org/594998  | stable/rocky  |
| import zuul job settings from project-config  | openstack/networking-powervm | https://review.openstack.org/594986  | master        |
| import zuul job settings from project-config  | openstack/networking-powervm | https://review.openstack.org/594990  | stable/ocata  |
| import zuul job settings from project-config  | openstack/networking-powervm | https://review.openstack.org/594993  | stable/pike   |
| import zuul job settings from project-config  | openstack/networking-powervm | https://review.openstack.org/594996  | stable/queens |
| import zuul job settings from project-config  | openstack/networking-powervm | https://review.openstack.org/594999  | stable/rocky  |
| import zuul job settings from project-config  | openstack/nova-powervm       | https://review.openstack.org/594987  | master        |
| add python 3.6 unit test job                  | openstack/nova-powervm       | https://review.openstack.org/594988  | master        |
| import zuul job settings from project-config  | openstack/nova-powervm       | https://review.openstack.org/594991  | stable/ocata  |
| import zuul job settings from project-config  | openstack/nova-powervm       | https://review.openstack.org/594994  | stable/pike   |
| import zuul job settings from project-config  | openstack/nova-powervm       | https://review.openstack.org/594997  | stable/queens |
| import zuul job settings from project-config  | openstack/nova-powervm       | https://review.openstack.org/595000  | stable/rocky  |
+-----------------------------------------------+------------------------------+--------------------------------------+---------------+

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread Doug Hellmann
Excerpts from melanie witt's message of 2018-08-21 15:05:00 -0700:
> On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote:
> > Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700:
> >> On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote:
> >>> At this point, I think we're at:
> >>>
> >>> 1. Should placement be extracted into its own git repo in Stein while
> >>> nova still has known major issues which will have dependencies on
> >>> placement changes, mainly modeling affinity?
> >>>
> >>> 2. If we extract, does it go under compute governance or a new project
> >>> with a new PTL.
> >>>
> >>> As I've said, I personally believe that unless we have concrete plans
> >>> for the big items in #1, we shouldn't hold up the extraction. We said in
> >>> Dublin we wouldn't extract to a new git repo in Rocky but we'd work up
> >>> to that point so we could do it in Stein, so this shouldn't surprise
> >>> anyone. The actual code extraction and re-packaging and all that is
> >>> going to be the biggest technical issue with all of this, and will
> >>> likely take all of stein to complete it after all the bugs are shaken out.
> >>>
> >>> For #2, I think for now, in the interim, while we deal with the
> >>> technical headache of the code extraction itself, it's best to leave the
> >>> new repo under compute governance so the existing team is intact and we
> >>> don't conflate the people issue with the technical issue at the same
> >>> time. Get the hard technical part done first, and then we can move it
> >>> out of compute governance. Once it's in its own git repo, we can change
> >>> the core team as needed but I think it should be initialized with
> >>> existing nova-core.
> >>
> >> I'm in support of extracting placement into its own git repo because
> >> Chris has done a lot of work to reduce dependencies in placement and
> >> moving it into its own repo would help in not having to keep chasing
> >> that. As has been said before, I think all of us agree that placement
> >> should be separate as an end goal. The question is when to fully
> >> separate it from governance.
> >>
> >> It's true that we don't have concrete plans for affinity modeling and
> >> shared storage modeling. But I think we do have concrete plans for vGPU
> >> enhancements (being able to have different vGPU types on one compute
> >> host and adding support for traits). vGPU support is an important and
> >> highly sought after feature for operators and users, as we witnessed at
> >> the last Summit in Vancouver. vGPU support is currently using a flat
> >> resource provider structure that needs to be migrated to nested in order
> >> to do the enhancement work, and that's how the reshaper work came about.
> >> (Reshaper work will migrate a flat resource provider structure to a
> >> nested one.)
> >>
> >> We have the nested resource provider support in placement but we need to
> >> integrate the Nova side, leveraging the reshaper code. The reshaper code
> >> is still going through code review, then next we have the integration to
> >> do. I think things are bound to break when we integrate it, just because
> >> nothing is ever perfect, as much as we scrutinize it and the real test
> >> is when we start using it for real. I think going through this
> >> integration would be best done *before* extraction to a new repo. But
> >> given that there is never a "good" time to extract something to a new
> >> repo, I am OK with the idea of doing the extraction first, if that is
> >> what most people want to do.
> >>
> >> What I'm concerned about on the governance piece is how things look as
> >> far as project priorities between the two projects if they are split.
> >> Affinity modeling and shared storage support are compute features
> >> OpenStack operators and users need. Operators need affinity modeling in
> >> placement to achieve parity for affinity scheduling with
> >> multiple cells. As it stands, affinity scheduling in Nova with multiple
> >> cells is susceptible to races and does *not* work as well as the
> >> previous single cell support. Shared storage support is something
> >> operators have badly needed for years now and was envisioned to be
> >> solved with placement.
> >>
> >> Given all of that, I'm not seeing how *now* is a good time to separate
> >> the placement project under separate governance with separate goals and
> >> priorities. If operators need things for compute, that are well-known
> >> and that placement was created to solve, how will placement have a
> >> shared interest in solving compute problems, if it is not part of the
> >> compute project?
> >>
> > 
> > Who are candidates to be members of a review team for the placement
> > repository after the code is moved out of openstack/nova?
> > 
> > How many of them are also members of the nova-core team?
> 
> I assume you pose this question in the proposed situation I described 
> where placement is a repo under compute. I expect the review team to be 

[openstack-dev] [oslo] UUID sentinel needs a home

2018-08-22 Thread Eric Fried
For some time, nova has been using uuidsentinel [1] which conveniently
allows you to get a random UUID in a single LOC with a readable name
that's the same every time you reference it within that process (but not
across processes). Example usage: [2].
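
To illustrate the pattern (a minimal sketch of the idea only; the actual
nova implementation at [1] differs in details):

    import uuid

    class UUIDSentinels(object):
        """Attribute access returns a named random UUID that is stable
        for the life of the process."""

        def __init__(self):
            self._uuids = {}

        def __getattr__(self, name):
            # Only called when normal attribute lookup fails, so _uuids
            # itself is found normally.
            if name.startswith('_'):
                raise AttributeError(name)
            if name not in self._uuids:
                self._uuids[name] = str(uuid.uuid4())
            return self._uuids[name]

    uuids = UUIDSentinels()

    # The same name always yields the same UUID within this process:
    assert uuids.instance1 == uuids.instance1
    assert uuids.instance1 != uuids.instance2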

We would like other projects (notably the soon-to-be-split-out placement
project) to be able to use uuidsentinel without duplicating the code. So
we would like to stuff it in an oslo lib.

The question is whether it should live in oslotest [3] or in
oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
The issues we've thought of so far:

- If this thing is used only for test, oslotest makes sense. We haven't
thought of a non-test use, but somebody surely will.
- Conversely, if we put it in oslo_utils, we're kinda saying we support
it for non-test too. (This is why the oslo_utils version does some extra
work for thread safety and collision avoidance.)
- In oslotest, awkwardness is necessary to avoid circular importing:
uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
oslo_utils.uuidutils, everything is right there.
- It's a... UUID util. If I didn't know anything and I was looking for a
UUID util like uuidsentinel, I would look in a module called uuidutils
first.

We hereby solicit your opinions, either by further discussion here or as
votes on the respective patches.

Thanks,
efried

[1]
https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
[2]
https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
[3] https://review.openstack.org/594068
[4] https://review.openstack.org/594179

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][project navigator] kolla missing in project navigator

2018-08-22 Thread Steven Dake (stdake)
Thierry,

Kolla likely belongs in the packaging recipes in the map.  Kolla-Ansible 
belongs in the lifecycle tools.

FWIW, I agree with Jean on the location of OpenStack-Ansible in the map.  
This is a deployment tool, not really a set of recipes.  I think the name 
"openstack-ansible" as a project is what causes all the confusion.  Some folks 
see it as a set of playbooks by its naming, but really it's a lifecycle 
management tool simply using Ansible as a dependent technology.

Jimmy,

Thanks for your help in sorting out the project navigator.  This is greatly 
appreciated.

Cheers
-steve


From: Thierry Carrez 
Sent: Monday, August 20, 2018 7:31 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][project navigator] kolla missing in 
project navigator

Eduardo,

"Kolla" was originally left out of the map (and therefore the new
OpenStack components page) because the map only shows deliverables that
are directly usable by deployers. That is why "Kolla-Ansible" is listed
there and not "Kolla".

Are you making the case that Kolla should be used directly by deployers
(rather than run it though Ansible with Kolla-Ansible), and therefore
should appear as a deployment option on the map as well?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 2 update

2018-08-22 Thread William M Edmonds

Doug Hellmann  wrote on 08/20/2018 11:27:09 AM:
> If your team is ready to have your zuul settings migrated, please
> let us know by following up to this email. We will start with the
> volunteers, and then work our way through the other teams.

I think PowerVMStackers is ready (so nova-powervm, networking-powervm,
ceilometer-powervm).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-22 Thread Lance Bragstad


On 08/22/2018 03:23 AM, Adrian Turjak wrote:
> Bah! I saw this while on holiday and didn't get a chance to respond,
> sorry for being late to the conversation.
>
> On 11/08/18 3:46 AM, Colleen Murphy wrote:
>> ### Self-Service Keystone
>>
>> At the weekly meeting Adam suggested we make self-service keystone a focus 
>> point of the PTG[9]. Currently, policy limitations make it difficult for an 
>> unprivileged keystone user to get things done or to get information without 
>> the help of an administrator. There are some other projects that have been 
>> created to act as workflow proxies to mitigate keystone's limitations, such 
>> as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written 
>> by Kristi). The question is whether the primitives offered by keystone are 
>> sufficient building blocks for these external tools to leverage, or if we 
>> should be doing more of this logic within keystone. Certainly improving our 
>> RBAC model is going to be a major part of improving the self-service user 
>> experience.
>>
>> [9] 
>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121
>> [10] https://adjutant.readthedocs.io/en/latest/
>> [11] https://github.com/CCI-MOC/ksproj
> As you can probably expect, I'd love to be a part of any of these
> discussions. The more I can move to logic directly
> supported in Keystone, the less I need to do in Adjutant. The majority
> of things though I think I can do reasonably well with the primitives
> Keystone gives me, and what I can't I tend to try and work with upstream
> to fill the gaps.
>
> System vs project scope helps a lot though, and I look forward to really
> playing with that.

Since it made sense to queue incorporating system scope after the flask
work, I just started working with that on the credentials API*. There is
a WIP series up for review that attempts to do a couple things [0].
First it tries to incorporate system and project scope checking into the
API. Second it tries to be more explicit about protection test cases,
which I think is going to be important since we're adding another scope
type. We also support three different roles now and it would be nice to
clearly see who can do what in each case with tests.

I'd be curious to get your feedback here if you have any.

* Because the credentials API was already moved to flask and has room
for self-service improvements [1]

[0] https://review.openstack.org/#/c/594547/
[1]
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21
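
For illustration, the kind of scope-aware rule involved looks something
like this with oslo.policy (a hand-written sketch, not the actual
credential policy in [1]; the 'owner' rule is assumed to be registered
elsewhere):

    from oslo_policy import policy

    # 'scope_types' is what makes the rule valid for both system-scoped
    # and project-scoped tokens; the check string combines a system-level
    # reader check with the traditional owner check.
    rule = policy.DocumentedRuleDefault(
        name='identity:get_credential',
        check_str='(role:reader and system_scope:all) or rule:owner',
        scope_types=['system', 'project'],
        description='Show credential details.',
        operations=[{'path': '/v3/credentials/{credential_id}',
                     'method': 'GET'}])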

>
> I sadly won't be at the PTG, but will be at the Berlin summit. Plus I
> have a lot of Adjutant work planned for Stein, a large chunk of which is
> refactors and reshuffling blueprints and writing up a roadmap, plus some
> better entry point tasks for new contributors.
>
>> ### Standalone Keystone
>>
>> Also at the meeting and during office hours, we revived the discussion of 
>> what it would take to have a standalone keystone be a useful identity 
>> provider for non-OpenStack projects[12][13]. First up we'd need to turn 
>> keystone into a fully-fledged SAML IdP, which it's not at the moment (which 
>> is a point of confusion in our documentation), or even add support for it to 
>> act as an OpenID Connect IdP. This would be relatively easy to do (or at 
>> least not impossible). Then the application would have to use 
>> keystonemiddleware or its own middleware to route requests to keystone to 
>> issue and validate tokens (this is one aspect where we've previously 
>> discussed whether JWT could benefit us). Then the question is what should a 
>> not-OpenStack application do with keystone's "scoped RBAC"? It would all 
>> depend on how the resources of the application are grouped and whether they 
>> care about multitenancy in some form. Likely each application would have 
>> different needs and it would be difficult to find a one-size-fits-all 
>> approach. We're interested to know whether anyone has a burning use case for 
>> something like this.
>>
>> [12] 
>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192
>> [13] 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30
> This one is interesting because another department at Catalyst is
> actually looking to use Keystone outside of the scope of OpenStack. They
> are building a SaaS platform, and they need authn, authz (with some
> basic RBAC), a service catalog (think API endpoint per software
> offering), and most of those things are useful outside of OpenStack.
> They can then use projects to signify a customer, and a project
> (customer) could have one or more users accessing the management GUIs,
> with roles giving them some RBAC. A large part of this is because they
> can then also piggy back on a lot of work our team has done with
> OpenStack and Keystone and even reuse some of our 

Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-22 Thread Jiri Tomasek
Hi,

thanks for a write up James. I am adding a few notes/ideas inline...

On Mon, Aug 20, 2018 at 10:48 PM James Slagle 
wrote:

> As we start looking at how TripleO will address next generation deployment
> needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a
> discussion around how TripleO can evolve and adapt to meet these new
> challenges.
>
> What are these challenges? I think the OpenStack Edge Whitepaper does a
> good
> job summarizing some of them:
>
>
> https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf
>
> They include:
>
> - management of distributed infrastructure
> - massive scale (thousands instead of hundreds)
> - limited network connectivity
> - isolation of distributed sites
> - orchestration of federated services across multiple sites
>
> We already have a lot of ongoing work that directly or indirectly starts to
> address some of these challenges. That work includes things like
> config-download, split-controlplane, metalsmith integration, validations,
> all-in-one, and standalone.
>
> I laid out some initial ideas in a previous message:
>
> http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html
>
> I'll be reviewing some of that here and going into a bit more detail.
>
> These are some of the high level ideas I'd like to see TripleO start to
> address:
>
> - More separation between planning and deploying (likely to be further
> defined
>   in spec discussion). We've had these concepts for a while, but we need
> to do
>   a better job of surfacing them to users as deployments grow in size and
>   complexity.
>

One of the focus points of the ui/cli and workflows squads for Stein is
getting GUI and CLI consolidated so that both clients operate on the
deployment plan via Mistral workflows. We are currently working on
identifying missing CLI commands, which would lead to adopting the same
workflow as the GUI uses. This will lead to complete interoperability
between the clients and would make the deployment plan the first-class
citizen, as Ben mentioned in the discussion linked above.

Existing plan import/export functionality makes the deployment plan easily
portable and replicable, as it is possible to export the plan at any point
in time and re-use it (with the ability to still apply some tweaks for
each usage).

Steven's work [1] introduces plan-types, which add the ability to define
multiple starting points for the deployment plan.

[1] https://review.openstack.org/#/c/574753


>
>   With config-download, we can more easily separate the phases of
> rendering,
>   downloading, validating, and applying the configuration. As we increase
> in
>   scale to managing many deployments, we should take advantage of what
> each of
>   those phases offer.
>
>   The separation also makes the deployment more portable, as we should
>   eliminate any restrictions that force the undercloud to be the control
> node
>   applying the configuration.
>
> - Management of multiple deployments from a single undercloud. This is of
>   course already possible today, but we need better docs and polish and
> more
>   testing to flush out any bugs.
>
> - Plan and template management in git.
>
>   This could be an iterative step towards eliminating Swift in the
> undercloud.
>   Swift seemed like a natural choice at the time because it was an existing
>   OpenStack service.  However, I think git would do a better job at
> tracking
>   history and comparing changes and is much more lightweight than Swift.
> We've
>   been managing the config-download directory as a git repo, and I like
> this
>   direction. For now, we are just putting the whole git repo in Swift, but
> I
>   wonder if it makes sense to consider eliminating Swift entirely. We need
> to
>   consider the scale of managing thousands of plans for separate edge
>   deployments.
>
>   I also think this would be a step towards undercloud simplification.
>

+1, we need to identify how much this affects the existing API and overall
user experience
for managing deployment plans. The current plan management options we support
are:
- create plan from default files (/usr/share/tht...)
- create/update plan from local directory
- create/update plan by providing tarball
- create/update plan from remote git repository

Ian has been working on similar efforts towards performance improvements
[2]. It would be good to take this a step further and evaluate the
possibility of eliminating Swift entirely.

[2] https://review.openstack.org/#/c/581153/

-- Jirka


>
> - Orchestration between plans. I think there's general agreement around
> scaling
>   up the undercloud to be more effective at managing and deploying multiple
>   plans.
>
>   The plans could be different OpenStack deployments potentially sharing
> some
>   resources. Or, they could be deployments of different software stacks
>   (Kubernetes/OpenShift, Ceph, etc).
>
>   We'll need to develop some common interfaces for some basic orchestration
>   between plans. It could include 

Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-22 Thread Sean McGinnis
> 
> The solution is conceptually simple.  We add a new API microversion in
> Cinder that adds and optional parameter called "generic_keep_source"
> (defaults to False) to both migrate and retype operations.
> 
> This means that if the driver optimized migration cannot do the
> migration and the generic migration code is the one doing the migration,
> then, instead of our final step being to swap the volume id's and
> deleting the source volume, what we would do is to swap the volume id's
> and move all the snapshots to reference the new volume.  Then we would
> create a user message with the new ID of the volume.
> 

How would you propose to "move all the snapshots to reference the new volume"?
Most storage does not allow a snapshot to be moved from one volume to another.
Really, the only way a migration of a snapshot can work across all storage types
would be to incrementally copy the data from a source to a destination up to
the point of the oldest snapshot, create a new snapshot on the new volume, then
proceed through until all snapshots have been rebuilt on the new volume.
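
In rough pseudocode, that generic approach would have to look something
like the following (copy_delta and create_snapshot are hypothetical
placeholders passed in by the caller, not real Cinder driver methods):

    def rebuild_on_destination(src_vol, dst_vol, snapshots,
                               copy_delta, create_snapshot):
        # Walk the snapshots oldest to newest, copying source data up to
        # each snapshot's point in time before recreating that snapshot
        # on the destination volume.
        for snap in sorted(snapshots, key=lambda s: s['created_at']):
            copy_delta(src_vol, dst_vol, up_to=snap['created_at'])
            create_snapshot(dst_vol, name=snap['name'])
        # Finally copy whatever changed after the newest snapshot.
        copy_delta(src_vol, dst_vol, up_to=None)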


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc2 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

https://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-22 Thread Balázs Gibizer



On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried  wrote:

gibi-

 - On migration, when we transfer the allocations in either direction, a
 conflict means someone managed to resize (or otherwise change
 allocations?) since the last time we pulled data. Given the global lock
 in the report client, this should have been tough to do. If it does
 happen, I would think any retry would need to be done all the way back
 at the claim, which I imagine is higher up than we should go. So again,
 I think we should fail the migration and make the user retry.

 Do we want to fail the whole migration or just the migration step (e.g.
 confirm, revert)?
 The latter means that failure during confirm or revert would put the
 instance back to VERIFY_RESIZE. While the former would mean that in case
 of conflict at confirm we try an automatic revert. But for a conflict at
 revert we can only put the instance to ERROR state.

This again should be "impossible" to come across. What would the
behavior be if we hit, say, ValueError in this spot?


I might not totally follow you. I see two options to choose from for
the revert case:

a) An allocation manipulation error during revert of a migration causes
the instance to go to ERROR. -> The end user cannot retry the revert; the
instance needs to be deleted.

b) An allocation manipulation error during revert of a migration causes
the instance to go back to the VERIFY_RESIZE state. -> The end user can
retry the revert via the API.

I see three options to choose from for the confirm case:

a) An allocation manipulation error during confirm of a migration causes
the instance to go to ERROR. -> The end user cannot retry the confirm; the
instance needs to be deleted.

b) An allocation manipulation error during confirm of a migration causes
the instance to go back to the VERIFY_RESIZE state. -> The end user can
retry the confirm via the API.

c) An allocation manipulation error during confirm of a migration causes
nova to automatically try to revert the migration. (For a failure during
this revert the same options are available as for the generic revert
case, see above.)

We also need to consider live migration. It is similar in the sense that
it also uses move_allocations. But it is different in that the end user
doesn't explicitly confirm or revert a live migration.

I'm looking for opinions about which option we should take in each case.


gibi



-efried

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] horizon 14.0.0.0rc2 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for horizon for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/horizon/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/horizon/log/?h=stable/rocky

Release notes for horizon can be found at:

https://docs.openstack.org/releasenotes/horizon/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stepping down from Ironic core

2018-08-22 Thread Ruby Loo
Hi John,

So sorry to hear this but totally understandable! Thanks for letting us
know and for everything you've done! Enjoy life without ironic :)

--ruby

On Tue, Aug 21, 2018 at 10:39 AM John Villalovos 
wrote:

> Good morning Ironic,
>
> I have come to realize that I don't have the time needed to be able to
> devote the attention needed to continue as an Ironic core.
>
> I'm hopeful that in the future I will work on Ironic or OpenStack again! :)
>
> The Ironic (and OpenStack) community has been a great one and I have
> really enjoyed my time working on it and working with all the people. I
> will still be hanging around on IRC and you may see me submitting a patch
> here and there too :)
>
> Thanks again,
> John
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-22 Thread Eric Fried
b) sounds the most sane in both cases. I don't like the idea of "your
move operation failed and you have no recourse but to delete your
instance". And automatic retry sounds lovely, but potentially hairy to
implement (and we would need to account for the retries-failed scenario
anyway) so at least initially we should leave that out.
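
For context, the conflict in question surfaces as an HTTP 409 when PUTting
allocations with a stale consumer generation (available since placement
microversion 1.28). A rough sketch of the pattern, using plain HTTP rather
than nova's report client (payload details are illustrative only):

    import requests

    def put_allocations(base_url, token, consumer, allocations,
                        project_id, user_id):
        headers = {'X-Auth-Token': token,
                   'OpenStack-API-Version': 'placement 1.28'}
        url = '%s/allocations/%s' % (base_url, consumer)
        for attempt in range(2):
            # Fetch the current consumer generation...
            current = requests.get(url, headers=headers).json()
            payload = {'allocations': allocations,
                       'consumer_generation':
                           current.get('consumer_generation'),
                       'project_id': project_id,
                       'user_id': user_id}
            # ...and write back with it; 409 means a concurrent change.
            resp = requests.put(url, json=payload, headers=headers)
            if resp.status_code != 409:
                return resp
        # Persistent conflict: surface it (options a/b/c from gibi's list).
        raise RuntimeError('consumer generation conflict for %s' % consumer)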

On 08/22/2018 07:55 AM, Balázs Gibizer wrote:
> 
> 
> On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried  wrote:
>> gibi-
>>
  - On migration, when we transfer the allocations in either
 direction, a
  conflict means someone managed to resize (or otherwise change
  allocations?) since the last time we pulled data. Given the global
 lock
  in the report client, this should have been tough to do. If it does
  happen, I would think any retry would need to be done all the way back
  at the claim, which I imagine is higher up than we should go. So
 again,
  I think we should fail the migration and make the user retry.
>>>
>>>  Do we want to fail the whole migration or just the migration step (e.g.
>>>  confirm, revert)?
>>>  The latter means that failure during confirm or revert would put the
>>>  instance back to VERIFY_RESIZE. While the former would mean that in
>>> case
>>>  of conflict at confirm we try an automatic revert. But for a
>>> conflict at
>>>  revert we can only put the instance to ERROR state.
>>
>> This again should be "impossible" to come across. What would the
>> behavior be if we hit, say, ValueError in this spot?
> 
> I might not totally follow you. I see two options to choose from for the
> revert case:
> 
> a) An allocation manipulation error during revert of a migration causes
> the instance to go to ERROR. -> The end user cannot retry the revert; the
> instance needs to be deleted.
> 
> b) An allocation manipulation error during revert of a migration causes
> the instance to go back to the VERIFY_RESIZE state. -> The end user can
> retry the revert via the API.
> 
> I see three options to choose from for the confirm case:
> 
> a) An allocation manipulation error during confirm of a migration causes
> the instance to go to ERROR. -> The end user cannot retry the confirm; the
> instance needs to be deleted.
> 
> b) An allocation manipulation error during confirm of a migration causes
> the instance to go back to the VERIFY_RESIZE state. -> The end user can
> retry the confirm via the API.
> 
> c) An allocation manipulation error during confirm of a migration causes
> nova to automatically try to revert the migration. (For a failure during
> this revert the same options are available as for the generic revert
> case, see above.)
> 
> We also need to consider live migration. It is similar in the sense that
> it also uses move_allocations. But it is different in that the end user
> doesn't explicitly confirm or revert a live migration.
> 
> I'm looking for opinions about which option we should take in each case.
> 
> gibi
> 
>>
>> -efried
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] heat 11.0.0.0rc2 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for heat for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/heat/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/heat/log/?h=stable/rocky

Release notes for heat can be found at:

https://docs.openstack.org/releasenotes/heat/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ovs] OVS drop issue

2018-08-22 Thread Ajay Kalambur (akalambu)
Hi
We are seeing a very weird issue with OVS 2.9 that is not seen with OVS 2.6.
The end symptom is that from the neutron L3 agent router namespace we can't
ping the gateway, in this case 10.86.67.1.
When performing tcpdump tests we see the gateway responding to ARP; the reply
comes into the control/network node and is dropped by OVS on the qg- interface.
We observed that when the ARP reply came back to the OVS port, it added the
following entry:
recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=fa:16:3e:65:85:ad),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806)),
 packets:217, bytes:13888, used:0.329s, actions:drop

That drop rule states: src mac = gateway mac (f0:25:72:ab:d4:c1), destination
mac = the qg-xxx interface, drop the packet. We were at first not sure why
this was happening, but when we inspected the ARP response packet from the
gateway we noticed that in this setup the packet was sent with the cos/tos
bits set to priority 5.

When we rewrote the packet on the TOR to set cos/tos priority to 0, it worked fine.

The question is: why does OVS 2.9 add a drop rule when it sees an ARP response
with cos/tos priority set to 5?

Has anyone seen this before? 2.6 with Newton works for this use case while 2.9
with Queens fails.

Some info below



L3 Namespace
 ip netns exec qrouter-eee032f4-670f-4e43-8e83-e04cb23f00ae ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
46: ha-df4ad8aa-18:  mtu 1500 qdisc 
noqueue state UNKNOWN group default qlen 1000
link/ether fa:16:3e:eb:0c:21 brd ff:ff:ff:ff:ff:ff
inet 169.254.192.5/18 brd 
169.254.255.255 scope global ha-df4ad8aa-18
   valid_lft forever preferred_lft forever
inet 169.254.0.1/24 scope global ha-df4ad8aa-18
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feeb:c21/64 scope link
   valid_lft forever preferred_lft forever
47: qg-e5541f70-a5:  mtu 1500 qdisc 
noqueue state UNKNOWN group default qlen 1000
link/ether fa:16:3e:65:85:ad brd ff:ff:ff:ff:ff:ff
inet 10.86.67.78/24 scope global qg-e5541f70-a5
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe65:85ad/64 scope link nodad
   valid_lft forever preferred_lft forever


Appctl Trace output
ovs_vswitch_15520 [root@BXB-AIO-1 /]# ovs-appctl ofproto/trace br-int 
in_port=2,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad
Flow: 
in_port=2,vlan_tci=0x,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x

bridge("br-int")

 0. in_port=2,vlan_tci=0x/0x1fff, priority 3, cookie 0x21589cb48848e7fb
push_vlan:0x8100
set_field:4102->vlan_vid
goto_table:60
60. priority 3, cookie 0x21589cb48848e7fb
NORMAL
 -> no learned MAC for destination, flooding

bridge("br-prov")
-
 0. in_port=2, priority 2, cookie 0x7faae4a30960716f
drop

bridge("br-inst")
-
 0. in_port=2, priority 2, cookie 0x8b9b0311aedc1b0c
drop

Final flow: 
in_port=2,dl_vlan=6,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x
Megaflow: 
recirc_id=0,eth,in_port=2,vlan_tci=0x/0x1fff,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x
Datapath actions: 9,push_vlan(vid=6,pcp=0),7
ovs_vswitch_15520 [root@BXB-AIO-1 /]#

ovs-dpctl dump-flows
ovs_vswitch_15520 [root@BXB-AIO-1 /]# ovs-dpctl dump-flows | grep 
f0:25:72:ab:d4:c1
recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806),arp(sip=10.86.67.1,tip=10.86.67.77,op=1/0xff)),
 packets:3, bytes:192, used:8.199s, actions:1
recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=fa:16:3e:65:85:ad),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806)),
 packets:217, bytes:13888, used:0.329s, actions:drop
recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806),arp(sip=10.86.67.1,tip=10.86.67.45,op=1/0xff)),
 packets:3, bytes:192, used:9.778s, actions:1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kayobe] Kayobe update

2018-08-22 Thread Mark Goddard
On Wed, 22 Aug 2018, 19:08 Erik McCormick, 
wrote:

>
>
> On Wed, Aug 22, 2018, 1:52 PM Mark Goddard  wrote:
>
>> Hello Kayobians,
>>
>> I thought it is about time to do another update.
>>
>
> 
>
>
>> # PTG
>>
>> There won't be an official Kayobe session at the PTG in Denver, although
>> I and a few others from the team will be present. If anyone would like to
>> meet to discuss Kayobe then don't be shy. Please get in touch either via
>> email or IRC (mgoddard).
>>
>
> Would you have any interest in doing an overview / Q&A session with
> Operators Monday before lunch or sometime Tuesday? It doesn't need to be
> anything fancy or formal as these are all fishbowl sessions. It might be a
> good way to get some traction and feedback.
>

Absolutely, that's a great idea. I was hoping to attend the Scientific SIG
session on Monday, but any time on Tuesday would work.


>
>>
>> Cheers,
>> Mark
>>
>
> -Erik
>
>
>> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] octavia 3.0.0.0rc3 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for octavia for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/octavia/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/octavia/log/?h=stable/rocky

Release notes for octavia can be found at:

https://docs.openstack.org/releasenotes/octavia/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-22 Thread Andy Botting
Hi all,

We've recently moved to using Glance's community visibility on the Nectar
Research Cloud. We had lots of public images (12255), and we found it was
becoming slow to list them all; the community image visibility seems to
fit our use case nicely.

We moved all of our users' images over to become community images, and left
our 'official' images as the only public ones.

We found a few issues, which I wanted to document, if anyone else is
looking at doing the same thing.

-> Glance API has no way of returning all images available to me in a
single API request (https://bugs.launchpad.net/glance/+bug/1779251)
The default list of images is perfect (all available to me, except
community), but there's a heap of cases where you need to fetch all images
including community. If we did have this, my next points would be a whole
lot easier to solve.

-> Horizon's support for Community images is very lacking (
https://bugs.launchpad.net/horizon/+bug/1779250)
On the surface, it looks like Community images are supported in Horizon,
but it's only as far as listing images in the Images tab. Trying to boot a
Community image from the Launch Instance wizard is actually impossible, as
community images don't appear in that list at all. The Images tab in
Horizon dynamically rebuilds its list of images through new
Glance API calls when you use any filters (good).
In contrast, the Source tab on the Launch Instance wizard loads all images at
the start (slow with lots of images), then relies on javascript client-side
filtering of the list. I've got a dirty patch to fix this for us by
basically making two Glance API requests (one without specifying
visibility, and another with visibility=community), then merging the data.
This would be better handled the same way as the Images tab, with new
Glance API requests when filtering.
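
The merge workaround looks roughly like this against the Glance v2 API (a
simplified sketch of the idea, with pagination and error handling omitted):

    import requests

    def list_all_images(glance_url, token):
        headers = {'X-Auth-Token': token}
        # The default listing excludes community images...
        default = requests.get('%s/v2/images' % glance_url,
                               headers=headers).json()['images']
        # ...so fetch community images with an explicit filter and merge.
        community = requests.get(
            '%s/v2/images?visibility=community' % glance_url,
            headers=headers).json()['images']
        seen = set(image['id'] for image in default)
        return default + [i for i in community if i['id'] not in seen]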

-> Users can't set their own images as Community from the dashboard
Should be relatively easy to add this. I'm hoping to look into fixing this
soon.

-> Murano / Sahara image discovery
These projects rely on images to be chosen when creating new environments,
and it looks like they use a glance list for their discovery. They both
suffer from the same issue and require their images to be non-community in
order to be discovered.

-> Openstack Client didn't support listing community images at all (
https://storyboard.openstack.org/#!/story/2001925)
It did support setting images to community, but support for actually
listing them was missing. Support has now been added, but I'm not sure if
it's made it into a release yet.

Apart from these issues, our migration was pretty successful with minimal
user complaints.

cheers,
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Freezer] Reactivate the team

2018-08-22 Thread Kendall Nelson
I finished the test migration. You can find the results here[1]. I only
found two LP projects - freezer and freezer-web-ui. If I missed any, please
let me know and I will run the migration script on them.

Play around with it for a few days and let me know if you are interested in
moving forward with the real migration.

-Kendall (diablo_rojo)

[1] https://storyboard-dev.openstack.org/#!/project_group/62

On Tue, Aug 21, 2018 at 11:30 AM Kendall Nelson 
wrote:

> If you also wanted to add migrating from Launchpad to Storyboard to this
> list I am happy to help do the test migration and coordinate the real
> migration.
>
> -Kendall (diablo_rojo)
>
> On Fri, Aug 17, 2018 at 6:50 PM Trinh Nguyen 
> wrote:
>
>> Dear Freezer team,
>>
>> Since we have appointed a new PTL for the Stein cycle (gengchc2), I
>> suggest that we should reactivate the team follows these actions:
>>
>>1. Have a team meeting to formalize the new leader as well as discuss
>>the new direction.
>>2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit
>>repositories.
>>3. Reorganize the core team to make sure we have enough active core
>>reviewers for new patches.
>>4. Clean up bug reports, blueprints on Launchpad, as well as
>>unreviewed patched on Gerrit.
>>
>> I hope that we can revive Freezer.
>>
>> Best regards,
>>
>> *Trinh Nguyen *| Founder & Chief Architect
>>
>> 
>>
>> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Thierry Carrez

Trinh Nguyen wrote:
Here is my proposed action plan for Searchlight in Stein. The ultimate
goal is to revive Searchlight with a sustainable number of contributors
and to release as expected.

[...]


Thanks again for stepping up, and communicating so clearly.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-22 Thread Matthew Booth
On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor  wrote:
>
> On 20/08, Matthew Booth wrote:
> > For those who aren't familiar with it, nova's volume-update (also
> > called swap volume by nova devs) is the nova part of the
> > implementation of cinder's live migration (also called retype).
> > Volume-update is essentially an internal cinder<->nova api, but as
> > that's not a thing it's also unfortunately exposed to users. Some
> > users have found it and are using it, but because it's essentially an
> > internal cinder<->nova api it breaks pretty easily if you don't treat
> > it like a special snowflake. It looks like we've finally found a way
> > it's broken for non-cinder callers that we can't fix, even with a
> > dirty hack.
> >
> > volume-update essentially does a live copy of the
> > data on the old volume to the new volume, then seamlessly swaps the
> > attachment from the old volume to the new one. The guest OS
> > will not notice anything at all as the hypervisor swaps the storage
> > backing an attached volume underneath it.
> >
> > When called by cinder, as intended, cinder does some post-operation
> > cleanup such that the old volume is deleted and the new volume inherits the
> > same volume_id; that is, the new one effectively becomes the old. When called any
> > other way, however, this cleanup doesn't happen, which breaks a bunch
> > of assumptions. One of these is that a disk's serial number is the
> > same as the attached volume_id. Disk serial number, in KVM at least,
> > is immutable, so can't be updated during volume-update. This is fine
> > if we were called via cinder, because the cinder cleanup means the
> > volume_id stays the same. If called any other way, however, they no
> > longer match, at least until a hard reboot when it will be reset to
> > the new volume_id. It turns out this breaks live migration, but
> > probably other things too. We can't think of a workaround.
> >
> > I wondered why users would want to do this anyway. It turns out that
> > sometimes cinder won't let you migrate a volume, but nova
> > volume-update doesn't do those checks (as they're specific to cinder
> > internals, none of nova's business, and duplicating them would be
> > fragile, so we're not adding them!). Specifically we know that cinder
> > won't let you migrate a volume with snapshots. There may be other
> > reasons. If cinder won't let you migrate your volume, you can still
> > move your data by using nova's volume-update, even though you'll end
> > up with a new volume on the destination, and a slightly broken
> > instance. Apparently the former is a trade-off worth making, but the
> > latter has been reported as a bug.
> >
>
> Hi Matt,
>
> As you know, I'm in favor of making this REST API call only authorized
> for Cinder to avoid messing the cloud.
>
> I know you wanted Cinder to have a solution to do live migrations of
> volumes with snapshots, and while this is not possible to do in a
> reasonable fashion, I kept thinking about it given your strong feelings
> to provide a solution for users that really need this, and I think we
> may have a "reasonable" compromise.
>
> The solution is conceptually simple.  We add a new API microversion in
> Cinder that adds an optional parameter called "generic_keep_source"
> (defaults to False) to both migrate and retype operations.
>
> This means that if the driver optimized migration cannot do the
> migration and the generic migration code is the one doing the migration,
> then, instead of our final step being to swap the volume id's and
> deleting the source volume, what we would do is to swap the volume id's
> and move all the snapshots to reference the new volume.  Then we would
> create a user message with the new ID of the volume.
>
> This way we can preserve the old volume with all its snapshots and do
> the live migration.
>
> The implementation is a little bit tricky, as we'll have to add a new
> "update_migrated_volume" mechanism to support the renaming of both
> volumes, since the old one wouldn't work with this among other things,
> but it's doable.
>
> Unfortunately I don't have the time right now to work on this...

Sounds promising, and honestly more than I'd have hoped for.

Matt

-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-22 Thread Gorka Eguileor
On 20/08, Matthew Booth wrote:
> For those who aren't familiar with it, nova's volume-update (also
> called swap volume by nova devs) is the nova part of the
> implementation of cinder's live migration (also called retype).
> Volume-update is essentially an internal cinder<->nova api, but as
> that's not a thing it's also unfortunately exposed to users. Some
> users have found it and are using it, but because it's essentially an
> internal cinder<->nova api it breaks pretty easily if you don't treat
> it like a special snowflake. It looks like we've finally found a way
> it's broken for non-cinder callers that we can't fix, even with a
> dirty hack.
>
> volume-update essentially does a live copy of the
> data on the old volume to the new volume, then seamlessly swaps the
> attachment from the old volume to the new one. The guest OS
> will not notice anything at all as the hypervisor swaps the storage
> backing an attached volume underneath it.
>
> When called by cinder, as intended, cinder does some post-operation
> cleanup such that the old volume is deleted and the new volume inherits the
> same volume_id; that is, the new one effectively becomes the old. When called any
> other way, however, this cleanup doesn't happen, which breaks a bunch
> of assumptions. One of these is that a disk's serial number is the
> same as the attached volume_id. Disk serial number, in KVM at least,
> is immutable, so can't be updated during volume-update. This is fine
> if we were called via cinder, because the cinder cleanup means the
> volume_id stays the same. If called any other way, however, they no
> longer match, at least until a hard reboot when it will be reset to
> the new volume_id. It turns out this breaks live migration, but
> probably other things too. We can't think of a workaround.
>
> I wondered why users would want to do this anyway. It turns out that
> sometimes cinder won't let you migrate a volume, but nova
> volume-update doesn't do those checks (as they're specific to cinder
> internals, none of nova's business, and duplicating them would be
> fragile, so we're not adding them!). Specifically we know that cinder
> won't let you migrate a volume with snapshots. There may be other
> reasons. If cinder won't let you migrate your volume, you can still
> move your data by using nova's volume-update, even though you'll end
> up with a new volume on the destination, and a slightly broken
> instance. Apparently the former is a trade-off worth making, but the
> latter has been reported as a bug.
>

Hi Matt,

As you know, I'm in favor of making this REST API call only authorized
for Cinder to avoid messing the cloud.

I know you wanted Cinder to have a solution to do live migrations of
volumes with snapshots, and while this is not possible to do in a
reasonable fashion, I kept thinking about it given your strong feelings
to provide a solution for users that really need this, and I think we
may have a "reasonable" compromise.

The solution is conceptually simple.  We add a new API microversion in
Cinder that adds an optional parameter called "generic_keep_source"
(defaults to False) to both migrate and retype operations.

This means that if the driver optimized migration cannot do the
migration and the generic migration code is the one doing the migration,
then, instead of our final step being to swap the volume id's and
deleting the source volume, what we would do is to swap the volume id's
and move all the snapshots to reference the new volume.  Then we would
create a user message with the new ID of the volume.

This way we can preserve the old volume with all its snapshots and do
the live migration.
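
From the caller's side the proposal would look roughly like this
(hypothetical sketch: neither the microversion placeholder 3.XX nor the
"generic_keep_source" flag exists yet; the rest is the standard os-retype
action):

    import requests

    def retype_keep_source(volume_url, token, volume_id, new_type):
        headers = {'X-Auth-Token': token,
                   'OpenStack-API-Version': 'volume 3.XX'}
        body = {'os-retype': {'new_type': new_type,
                              'migration_policy': 'on-demand',
                              'generic_keep_source': True}}
        return requests.post(
            '%s/volumes/%s/action' % (volume_url, volume_id),
            json=body, headers=headers)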

The implementation is a little bit tricky, as we'll have to add a new
"update_migrated_volume" mechanism to support the renaming of both
volumes, since the old one wouldn't work with this among other things,
but it's doable.

Unfortunately I don't have the time right now to work on this...

Cheers,
Gorka.


> I'd like to make it very clear that nova's volume-update isn't
> expected to work correctly except when called by cinder. Specifically
> there was a proposal that we disable volume-update from non-cinder
> callers in some way, possibly by asserting volume state that can only
> be set by cinder. However, I'm also very aware that users are calling
> volume-update because it fills a need, and we don't want to trap data
> that wasn't previously trapped.
>
> Firstly, is anybody aware of any other reasons to use nova's
> volume-update directly?
>
> Secondly, is there any reason why we shouldn't just document that you
> have to delete snapshots before doing a volume migration? Hopefully
> some cinder folks or operators can chime in to let me know how to back
> them up or somehow make them independent before doing this, at which
> point the volume itself should be migratable?
>
> If we can establish that there's an acceptable alternative to calling
> volume-update directly for all use-cases we're aware of, I'm going to
> propose heading off this class of bug by disabling it for 

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread Balázs Gibizer



On Sat, Aug 18, 2018 at 2:25 PM, Chris Dent  
wrote:


So my hope is that (in no particular order) Jay Pipes, Eric Fried,
Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov,
Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to
placement whom I'm forgetting [1] would express their preference on
what they'd like to see happen.


+1 for a separate git repository
+1 for initializing placement-core with the nova-core team
+1 for talking separately about including more cores in the
placement-core team


I'm for taking incremental steps. So if the git repo separation can be
done independently of the project separation, then why not first do the
step we seem to be agreeing on.


I think allowing the placement-core team to diverge from the nova-core
team will help in many ways to talk about the project separation:
* more core reviewers for placement -> more review bandwidth for
placement -> less review needed from nova-cores on placement code -> more
time for nova-cores to propose solutions for the remaining big nova-induced
placement changes (affinity, quota) and to implement support in nova for
existing placement features (consumer gen, nested RP, granular resource
request)
* possibility to include reviewers in the placement core team (over time)
with other placement-using module backgrounds (cinder, neutron, cyborg,
etc.) -> fresh viewpoints about the direction of placement from
placement API consumers
* a diverse core team will allow us to test the waters regarding feature
prioritization conflicts, if any.


I'm not against making two steps at the same time and doing the project
separation _if_ there is some level of consensus amongst the
interested parties. But based on this long mail thread we don't have
that yet. So I suggest doing only the repo and core team change now and
spending time gathering experience with the evolved placement-core team.


Cheers,
gibi





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Nguyễn Trí Hải
Hi,

The link for Stein's Etherpad is missing.

On Wed, Aug 22, 2018 at 5:58 PM Trinh Nguyen  wrote:

> Dear team,
>
> Here is my proposed action plan for Searchlight in Stein. The ultimate
> goal is to revive Searchlight with a sustainable number of contributors so
> that it can release as expected.
>
> 1. Migrate Searchlight to Storyboard with the help of Kendall
> 2. Attract more contributors (as well as cores)
> 3. Clean up docs, notes
> 4. Review and clean up patches [1] [2] [3] [4]
> 5. Setting up goals/features for Stein. We will need to have a virtual PTG
> (September 10-14, 2018, Denver) since I cannot attend in person this time.
>
> This is our Etherpad for Stein; please feel free to contribute from now on
> until the PTG:
> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>
> [1]
> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
> [2]
> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
> [3]
> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
> [4]
> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open
>
> If you have any ideas or want to contribute, please ping me on IRC:
>
>   - IRC Channel: #openstack-searchlight
>   - My IRC handle: dangtrinhnt
>
>
> Bests,
>
> *Trinh Nguyen *| Founder & Chief Architect
>
> 
>
> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 

Nguyen Tri Hai / Ph.D. Student

ANDA Lab., Soongsil Univ., Seoul, South Korea



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Redis licensing terms changes

2018-08-22 Thread Jimmy McArthur

Hmm...

http://antirez.com/news/120

Today a page about the new Creative Common license in the Redis Labs web 
site was interpreted as if Redis itself switched license. This is not 
the case, Redis is, and will remain, BSD licensed. However in the fake 
news era my attempts to provide the correct information failed, and I’m 
still seeing everywhere “Redis is no longer open source”. The reality is 
that Redis remains BSD, and actually Redis Labs did the right thing 
supporting my effort to keep the Redis core open as usually.


What is happening instead is that certain Redis modules, developed 
inside Redis Labs, are now released under the Common Clause (using 
Apache license as a base license). This means that basically certain 
enterprise add-ons, instead of being completely closed source as they 
could be, will be available with a more permissive license.


Thierry Carrez wrote:

Haïkel wrote:

I haven't seen this but I'd like to point that Redis moved to an open
core licensing model.
https://redislabs.com/community/commons-clause/

In short:
* base engine remains under BSD license
* modules move to ASL 2.0 + commons clause which is non-free
(prohibits sales of derived products)


Beyond the sale of a derived product, it prohibits selling hosting of 
or providing consulting services on anything that depends on it... so 
it's pretty broad.



IMHO, projects that rely on Redis as a default driver should consider
alternatives (of course, it's up to them).


The TC stated in the past that default drivers had to be open source, 
so if anything depends on commons-claused Redis modules, they would 
probably have to find an alternative...


Which OpenStack components are affected?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag

2018-08-22 Thread Eric K
Hi all,

I have added feature flags for the congress tempest plugin [1] and set
them in the devstack plugin [2], but the flags seem to be ignored. The
tests are skipped [3] according to the default False flag rather than
run according to the True flag set in the devstack plugin. Any hints on
what may be wrong? Thanks so much!

[1] https://review.openstack.org/#/c/594747/3
[2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh
[3] 
http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz
(the bottom two skipped tests were expected to run)
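
For reference, the intended wiring usually looks something like the
sketch below (group, option, and test names here are illustrative, not
the actual congress plugin code). With default=False the tests will be
reported as skipped unless the tempest.conf that the run actually reads
overrides the flag, so it is worth double-checking that the devstack
plugin writes the override into that same file (e.g. with iniset during
devstack's test-config phase) rather than into another copy of
tempest.conf.

    # Illustrative sketch of a tempest plugin feature flag; all names
    # here are hypothetical, not the actual congress tempest plugin code.
    from oslo_config import cfg

    congress_feature_group = cfg.OptGroup(
        name='congress_feature_enabled',
        title='Enabled congress features')

    CongressFeatureOpts = [
        cfg.BoolOpt('push_type_datasources',
                    default=False,
                    help='Run tests that require push-type datasources'),
    ]

    # The plugin class exposes these to tempest via its register_opts()
    # and get_opt_lists() methods, and a test then gates on the flag:
    #
    #     if not CONF.congress_feature_enabled.push_type_datasources:
    #         raise cls.skipException('push-type datasources not enabled')
    #
    # so the test only runs when the tempest.conf in use contains:
    #
    #     [congress_feature_enabled]
    #     push_type_datasources = True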

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Trinh Nguyen
Oops, here you go:

https://etherpad.openstack.org/p/searchlight-stein-ptg


*Trinh Nguyen *| Founder & Chief Architect



*E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *



On Wed, Aug 22, 2018 at 11:46 PM Nguyễn Trí Hải 
wrote:

> Hi,
>
> The link for Stein's Etherpad is missing.
>
> On Wed, Aug 22, 2018 at 5:58 PM Trinh Nguyen 
> wrote:
>
>> Dear team,
>>
>> Here is my proposed action plan for Searchlight in Stein. The ultimate
>> goal is to revive Searchlight with a sustainable number of contributors so
>> that it can release as expected.
>>
>> 1. Migrate Searchlight to Storyboard with the help of Kendall
>> 2. Attract more contributors (as well as cores)
>> 3. Clean up docs, notes
>> 4. Review and clean up patches [1] [2] [3] [4]
>> 5. Setting up goals/features for Stein. We will need to have a virtual
>> PTG (September 10-14, 2018, Denver) since I cannot attend in person this time.
>>
>> This is our Etherpad for Stein; please feel free to contribute from now
>> on until the PTG:
>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>>
>> [1]
>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>> [2]
>> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
>> [3]
>> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
>> [4]
>> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open
>>
>> If you have any ideas or want to contribute, please ping me on IRC:
>>
>>   - IRC Channel: #openstack-searchlight
>>   - My IRC handle: dangtrinhnt
>>
>>
>> Bests,
>>
>> *Trinh Nguyen *| Founder & Chief Architect
>>
>> 
>>
>> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
>
> Nguyen Tri Hai / Ph.D. Student
>
> ANDA Lab., Soongsil Univ., Seoul, South Korea
>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Old patches cleaning

2018-08-22 Thread Boden Russell

On 8/22/18 2:10 AM, Slawomir Kaplonski wrote:
> I will run it only for projects like:
> * neutron-lib
> 
> If You have any concerns about running this script, please raise Your hand 
> now :)

Thanks for this.
Personally I don't see a need to clean up old reviews for neutron-lib;
it's a pretty small list and a few oldies are still being discussed in
some form or another. But if you think there's a need, that's fine as well.

Neutron is a whole different story.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kayobe] Kayobe update

2018-08-22 Thread Mark Goddard
Hello Kayobians,

I thought it was about time for another update.

# Releases

Work continues on master, adding Kayobe features. We're still deploying the
Queens release of OpenStack, although the Rocky patch [1] gets refreshed
every so often to ensure we're not lagging behind.

I've been thinking about whether it makes sense to continue to target
Kayobe releases against a specific release of OpenStack. Kayobe is
relatively decoupled from OpenStack releases in general.

* What if we could specify the version of OpenStack to deploy from a list of
supported versions?
* What if we could specify a version of Kolla Ansible to install, and what
if Kolla Ansible supported deploying different releases of each service?
This is often how clouds end up in practice.

This would increase our test matrix somewhat, but could allow us to stay
current while still adding Kayobe features for OpenStack releases that
operators are actually using.

# Recently added features

* Ansible 2.5 support [2]
* Add support for a separate admin network [3]

# Upgrades

There is currently no coverage of upgrades in CI. This leaves us in a
dangerous position. I've started work on an upgrade job [4], which seems
almost ready. We first deploy Pike, smoke test, upgrade to Queens, then
smoke test again. Hopefully this job will influence development of a
similar job in Kolla Ansible during the Stein cycle that ensures issues are
caught upstream where possible.

# PTG

There won't be an official Kayobe session at the PTG in Denver, although I
and a few others from the team will be present. If anyone would like to
meet to discuss Kayobe then don't be shy. Please get in touch either via
email or IRC (mgoddard).

# New faces

We've seen a few new faces in #openstack-kayobe recently. Welcome, and keep
asking questions - they help us improve our software and documentation.

[1] https://review.openstack.org/#/c/568804
[2] https://review.openstack.org/562306
[3] https://review.openstack.org/572370
[4] https://review.openstack.org/592932

Cheers,
Mark
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread melanie witt

On Wed, 22 Aug 2018 09:49:13 -0400, Doug Hellmann wrote:

Excerpts from melanie witt's message of 2018-08-21 15:05:00 -0700:

On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote:

Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700:

On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote:

At this point, I think we're at:

1. Should placement be extracted into its own git repo in Stein while
nova still has known major issues which will have dependencies on
placement changes, mainly modeling affinity?

2. If we extract, does it go under compute governance or a new project
with a new PTL?

As I've said, I personally believe that unless we have concrete plans
for the big items in #1, we shouldn't hold up the extraction. We said in
Dublin we wouldn't extract to a new git repo in Rocky but we'd work up
to that point so we could do it in Stein, so this shouldn't surprise
anyone. The actual code extraction and re-packaging and all that is
going to be the biggest technical issue with all of this, and will
likely take all of stein to complete it after all the bugs are shaken out.

For #2, I think for now, in the interim, while we deal with the
technical headache of the code extraction itself, it's best to leave the
new repo under compute governance so the existing team is intact and we
don't conflate the people issue with the technical issue at the same
time. Get the hard technical part done first, and then we can move it
out of compute governance. Once it's in its own git repo, we can change
the core team as needed but I think it should be initialized with
existing nova-core.


I'm in support of extracting placement into its own git repo because
Chris has done a lot of work to reduce dependencies in placement and
moving it into its own repo would help in not having to keep chasing
that. As has been said before, I think all of us agree that placement
should be separate as an end goal. The question is when to fully
separate it in terms of governance.

It's true that we don't have concrete plans for affinity modeling and
shared storage modeling. But I think we do have concrete plans for vGPU
enhancements (being able to have different vGPU types on one compute
host and adding support for traits). vGPU support is an important and
highly sought after feature for operators and users, as we witnessed at
the last Summit in Vancouver. vGPU support is currently using a flat
resource provider structure that needs to be migrated to nested in order
to do the enhancement work, and that's how the reshaper work came about.
(Reshaper work will migrate a flat resource provider structure to a
nested one.)

We have the nested resource provider support in placement but we need to
integrate the Nova side, leveraging the reshaper code. The reshaper code
is still going through code review, then next we have the integration to
do. I think things are bound to break when we integrate it, just because
nothing is ever perfect, as much as we scrutinize it, and the real test
is when we start using it for real. I think going through this
integration would be best done *before* extraction to a new repo. But
given that there is never a "good" time to extract something to a new
repo, I am OK with the idea of doing the extraction first, if that is
what most people want to do.

What I'm concerned about on the governance piece is how things look as
far as project priorities between the two projects if they are split.
Affinity modeling and shared storage support are compute features
OpenStack operators and users need. Operators need affinity modeling in
placement to achieve parity for affinity scheduling with multiple
cells. That means affinity scheduling in Nova with multiple cells is
susceptible to races and does *not* work as well as the previous
single-cell support.
operators have badly needed for years now and was envisioned to be
solved with placement.

Given all of that, I'm not seeing how *now* is a good time to separate
the placement project under separate governance with separate goals and
priorities. If operators need things for compute, that are well-known
and that placement was created to solve, how will placement have a
shared interest in solving compute problems, if it is not part of the
compute project?



Who are candidates to be members of a review team for the placement
repository after the code is moved out of openstack/nova?

How many of them are also members of the nova-core team?


I assume you pose this question in the proposed situation I described
where placement is a repo under compute. I expect the review team to be


No, not at all. I'm trying to understand how you think a completely
separate team is going to cause problems. Because it seems like at
least a large portion, if not all, of the contributors want it, and
I need to have a very good reason for denying their request, if we
do. Right now, I understand that there are concerns, but I don't
understand 

Re: [openstack-dev] [kayobe] Kayobe update

2018-08-22 Thread Erik McCormick
On Wed, Aug 22, 2018, 1:52 PM Mark Goddard  wrote:

> Hello Kayobians,
>
> I thought it was about time for another update.
>




> # PTG
>
> There won't be an official Kayobe session at the PTG in Denver, although I
> and a few others from the team will be present. If anyone would like to
> meet to discuss Kayobe then don't be shy. Please get in touch either via
> email or IRC (mgoddard).
>

Would you have any interest in doing an overview / Q&A session with
Operators Monday before lunch or sometime Tuesday? It doesn't need to be
anything fancy or formal as these are all fishbowl sessions. It might be a
good way to get some traction and feedback.


>
> Cheers,
> Mark
>

-Erik


>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread Jeremy Stanley
On 2018-08-22 11:03:43 -0700 (-0700), melanie witt wrote:
[...]
> I think it's about context. If two separate projects do their own priority
> and goal setting, separately, I think they will naturally be more different
> than they would be if they were one project. Currently, we agree on goals
> and priorities together, in the compute context. If placement has its own
> separate context, the priority setting and goal planning will be done in the
> context of placement. In two separate groups, someone who is a member of
> both the Nova and Placement teams would have to persuade Placement-only
> members to agree to prioritize a particular item. This may sound subtle, but
> it's a notable difference in how things work when it's one team vs two
> separate teams. I think having shared context and alignment, at this point
> in time, when we have outstanding closely coupled nova/placement work to do,
> is critical in delivering for operators and users who are depending on us.
[...]

I'm clearly missing some critical detail about the relationships in
the Nova team. Don't the Nova+Placement contributors already have to
convince the Placement-only contributors what to prioritize working
on? Or are you saying that if they disagree that's fine because the
Nova+Placement contributors will get along just fine without the
Placement-only contributors helping them get it done?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread Jeremy Stanley
On 2018-08-22 00:17:41 +0000 (+0000), Fox, Kevin M wrote:
> There have been plenty of cross project goals set forth from the
> TC and implemented by the various projects such as wsgi or
> python3. Those have been worked on by each of the projects in
> priority to some project specific goals by devs interested in
> bettering OpenStack. Why is it so hard to believe if the TC gave
> out a request for a grander user/ops supporting feature, that the
> community wouldn't step up? PTL's are supposed to be neutral to
> vendor specific issues and work for the betterment of the Project.

Those goals, cross-project by nature, necessarily involve people
with domain-specific knowledge in the requisite projects. That is a
lot different than expecting Cinder developers to switch gears and
start working on Barbican instead just because the TC (or the UC, or
the OSF BoD, or whoever) decrees key management is prioritized over
multi-attach storage. Cross-project goal setting is already a
strained process, in which we as a community spend a _lot_ of time
and effort to determine what various project teams are even willing
to work on and prioritize alongside the things they already get
done. Asking them to work on something has absolutely not stopped
them from wanting to work on other things instead.

There are plenty of instances where the community (via its elected
leadership) has attempted to set goals and some teams have chosen to
work on other priorities of their own instead. If they could have
directed all their contributors to focus on that it would have been
completed, but they (all teams really) attempt to balance the
priorities set by the OpenStack Technical Committee and other
leadership with their own project-specific priorities. Just as the
TC sinks a lot of effort into getting teams to focus on things it
identifies as priorities, the PTLs encounter similar challenges
getting their teams to focus on whatever priorities they've set as a
group. Some contributors only work on what interests them, some only
on what their employer tells them, and so on, while much of the rest
struggle simply to keep up with the overall rate of change.

> I don't buy the complexity argument either. Other non-OpenStack
> projects are implementing similar functionality with far less
> complexity. I attribute a lot of that to differences in governance.
> Through governance we've made hard things much harder. They can't
> be fixed until the governance issues are fixed first, I think.
[...]

Again, specifics would be nice. What decisions has the community
made in governing itself which have contributed to the problems you
see? What incremental changes would you make to improve that
situation (hint: blow-it-all-up suggestions like "get rid of PTLs"
aren't solutions when you're steering a community consisting of
thousands of developers, we need steps to get from point A to point
B)? In this _particular_ situation, what action are you asking the
TC or other community leaders to take to resolve the problem (and
what do you see as "the problem" in this case, for that matter)?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Anti-affinity Broke

2018-08-22 Thread Telles Nobrega
Hi all,

We have an open bug on storyboard regarding anti-affinity on sahara.
https://storyboard.openstack.org/#!/story/2002656

This was proposed by Joe Topjian and I have implemented the proposed fix.
Unfortunately we don't have the resources to test it properly. Joe, could you
take a look and review https://review.openstack.org/#/c/587978/ ?

We need this reviewed and merged by tomorrow in order to have it in Rocky.

Thanks
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil  

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
 Red Hat is recognized among the best companies to work for in Brazil
by Great Place to Work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-22 Thread Doug Hellmann
Excerpts from Eric Fried's message of 2018-08-22 09:13:25 -0500:
> For some time, nova has been using uuidsentinel [1] which conveniently
> allows you to get a random UUID in a single LOC with a readable name
> that's the same every time you reference it within that process (but not
> across processes). Example usage: [2].
> 
> We would like other projects (notably the soon-to-be-split-out placement
> project) to be able to use uuidsentinel without duplicating the code. So
> we would like to stuff it in an oslo lib.
> 
> The question is whether it should live in oslotest [3] or in
> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
> The issues we've thought of so far:
> 
> - If this thing is used only for test, oslotest makes sense. We haven't
> thought of a non-test use, but somebody surely will.

It also depends on whether we want it used that way. I think, given
the fact that the data is not persistent or consistent across runs,
I would rather have it as a test library only, and not part of the
public production API of oslo.utils (see below).

> - Conversely, if we put it in oslo_utils, we're kinda saying we support
> it for non-test too. (This is why the oslo_utils version does some extra
> work for thread safety and collision avoidance.)

That protection is necessary regardless of how it is going to be used.

> - In oslotest, awkwardness is necessary to avoid circular importing:
> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
> oslo_utils.uuidutils, everything is right there.

A third alternative is to create a test fixture which is exposed
from oslo.utils under the test package. That clearly labels the
code as a test tool, but avoids the circular import problem of
placing it in oslotest.
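
To make that concrete, here is a minimal sketch of the fixture shape
(the name and home are hypothetical, since that is exactly the open
question; it only assumes the fixtures library):

    # Sketch of a uuidsentinel-style test fixture; name and location
    # are hypothetical, the point is the shape, not the final oslo API.
    import uuid

    import fixtures


    class UUIDSentinelFixture(fixtures.Fixture):
        """Expose .uuids, whose attributes are random but stable UUIDs.

        Reading the same attribute name twice returns the same UUID
        string; different names return different UUIDs. Nothing
        persists across processes or test runs.
        """

        def _setUp(self):
            sentinels = {}

            class _Sentinels(object):
                def __getattr__(self, name):
                    if name.startswith('_'):
                        raise AttributeError(name)
                    return sentinels.setdefault(name, str(uuid.uuid4()))

            self.uuids = _Sentinels()

    # Usage in a test case:
    #     uuids = self.useFixture(UUIDSentinelFixture()).uuids
    #     self.assertEqual(uuids.instance1, uuids.instance1)
    #     self.assertNotEqual(uuids.instance1, uuids.instance2)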

> - It's a... UUID util. If I didn't know anything and I was looking for a
> UUID util like uuidsentinel, I would look in a module called uuidutils
> first.
> 
> We hereby solicit your opinions, either by further discussion here or as
> votes on the respective patches.
> 
> Thanks,
> efried
> 
> [1]
> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
> [2]
> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
> [3] https://review.openstack.org/594068
> [4] https://review.openstack.org/594179
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein

2018-08-22 Thread Kendall Nelson
Hello Trinh,


On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen  wrote:

> Dear team,
>
> Here is my proposed action plan for Searchlight in Stein. The ultimate
> goal is to revive Searchlight with a sustainable number of contributors so
> that it can release as expected.
>
> 1. Migrate Searchlight to Storyboard with the help of Kendall
>

I will get Searchlight set up in our dev environment and run some test
migrations today and let you know when they finish :)


> 2. Attract more contributors (as well as cores)
> 3. Clean up docs, notes
> 4. Review and clean up patches [1] [2] [3] [4]
> 5. Setting up goals/features for Stein. We will need to have a virtual PTG
> (September 10-14, 2018, Denver) since I cannot attend in person this time.
>
> This is our Etherpad for Stein; please feel free to contribute from now on
> until the PTG:
> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
>
> [1]
> https://review.openstack.org/#/q/project:openstack/searchlight+status:open
> [2]
> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open
> [3]
> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open
> [4]
> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open
>
> If you have any ideas or want to contribute, please ping me on IRC:
>
>   - IRC Channel: #openstack-searchlight
>   - My IRC handle: dangtrinhnt
>
>
> Bests,
>
> *Trinh Nguyen *| Founder & Chief Architect
>
> 
>
> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>
>
- Kendall (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev