Re: [openstack-dev] [nova] Rocky spec review day

2018-03-28 Thread melanie witt

On Wed, 21 Mar 2018 21:48:40 -0700, Melanie Witt wrote:

On Tue, 20 Mar 2018 16:47:58 -0700, Melanie Witt wrote:

The past several cycles, we've had a spec review day in the cycle where
reviewers focus on specs and iterating quickly with spec authors for the
day. Spec freeze is April 19 so I wanted to get some input from all of
you about what day would work best for a spec review day.

I was thinking that 2-3 weeks ahead of spec freeze would be appropriate,
so that would be March 27 (next week) or April 3 if we do it on a Tuesday.


Thanks for all who replied on the thread. There was consensus that
earlier is better, so let's do the spec review day next week: Tuesday
March 27.


Thank you to all who participated in the spec review day yesterday. We 
approved 2 specs, merged some spec amendments, and many other specs 
received feedback from reviewers.


As a reminder, the spec freeze date is at r-2, June 7 this cycle (it has 
been moved out because of the new review runways effort going on [0]). 
As we work through the runways queue, we will be looking at approving 
more specs leading up to r-2, if we make sufficient progress on the queue.


Thanks,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-28 Thread Kaz Shinohara
Hi Ivan,


Thank you very much.
I've confirmed that all of us have been added to xstatic-core.

As discussed, we will focus on the following repos, which we added for
heat-dashboard, and will not touch the other xstatic repos as cores.

xstatic-angular-material
xstatic-angular-notify
xstatic-angular-uuid
xstatic-angular-vis
xstatic-filesaver
xstatic-js-yaml
xstatic-json2yaml
xstatic-vis

Regards,
Kaz

2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
> Hi Kaz,
>
> Don't worry, we're on the same page with you. I added you, Xinni and
> Keiichi to the xstatic-core group. Thank you for your contributions!
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara  wrote:
>>
>> Hi Ivan & Horizon folks
>>
>>
>> AFAIK, Horizon team had conclusion that you will add the specific
>> members to xstatic-core, correct ?
>> Can I ask you to add the following members ?
>> # All three are heat-dashboard cores.
>>
>> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
>> Xinni Ge / xinni.ge1...@gmail.com
>> Keiichi Hikita / keiichi.hik...@gmail.com
>>
>> Please give me a shout if we are not on the same page or if you have any concerns.
>>
>> Regards,
>> Kaz
>>
>>
>> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>> > Hi Ivan, Akihiro,
>> >
>> >
>> > Thanks for your kind arrangement.
>> > Looking forward to hearing your decision soon.
>> >
>> > Regards,
>> > Kaz
>> >
>> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> >> HI Team,
>> >>
>> >> From my perspective, I'm OK both with #2 and #3 options. I agree that
>> >> #4
>> >> could be too complicated for us. Anyway, we've got this topic on the
>> >> meeting
>> >> agenda [1] so we'll discuss it there too. I'll share our decision after
>> >> the
>> >> meeting.
>> >>
>> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>> >>
>> >>
>> >>
>> >> Regards,
>> >> Ivan Kolodyazhny,
>> >> http://blog.e0ne.info/
>> >>
>> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki 
>> >> wrote:
>> >>>
>> >>> Hi Kaz and Ivan,
>> >>>
>> >>> Yeah, it is worth discussing officially in the horizon team meeting or
>> >>> the
>> >>> mailing list thread to get a consensus.
>> >>> Hopefully you can add this topic to the horizon meeting agenda.
>> >>>
>> >>> After sending the previous mail, I noticed another option. I see there
>> >>> are
>> >>> several options now.
>> >>> (1) Keep xstatic-core and horizon-core same.
>> >>> (2) Add specific members to xstatic-core
>> >>> (3) Add specific horizon-plugin core to xstatic-core
>> >>> (4) Split core membership into per-repo basis (perhaps too
>> >>> complicated!!)
>> >>>
>> >>> My current vote is (2) as xstatic-core needs to understand what is
>> >>> xstatic
>> >>> and how it is maintained.
>> >>>
>> >>> Thanks,
>> >>> Akihiro
>> >>>
>> >>>
>> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>> 
>>  Hi Akihiro,
>> 
>> 
>>  Thanks for your comment.
>>  The background of my request to add us to xstatic-core comes from
>>  Ivan's comment in last PTG's etherpad for heat-dashboard discussion.
>> 
>>  https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>>  Line135, "we can share ownership if needed - e0ne"
>> 
>>  Just in case, could you guys confirm unified opinion on this matter
>>  as
>>  Horizon team ?
>> 
>>  Frankly speaking, I feel the benefit of making us xstatic-core
>>  because it's easier & smoother to manage what we are taking on for
>>  heat-dashboard.
>>  On the other hand, I can understand what you are saying, Akihiro: the
>>  newly added repos belong to the Horizon project, and having them managed
>>  by non-Horizon cores is not consistent.
>>  Also, having an exception might cause unexpected confusion in the near future.
>> 
>>  Eventually we will follow your opinion, let me hear Horizon team's
>>  conclusion.
>> 
>>  Regards,
>>  Kaz
>> 
>> 
>>  2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
>>  > Hi Kaz,
>>  >
>>  > These repositories are under horizon project. It looks better to
>>  > keep
>>  > the
>>  > current core team.
>>  > It potentially brings some confusion if we treat some horizon
>>  > plugin
>>  > team
>>  > specially.
>>  > Reviewing xstatic repos would be a small burden, so I think it
>>  > would
>>  > work
>>  > without problem even if only horizon-core can approve xstatic
>>  > reviews.
>>  >
>>  >
>>  > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>>  >>
>>  >> Hi Ivan, Horizon folks,
>>  >>
>>  >>
>>  >> Now totally 8 xstatic-** repos for heat-dashboard have been
>>  >> landed.
>>  >>
>>  >> In project-config for them, I've set same acl-config as the
>>  >> existing
>>  >> xstatic repos.
>>  >> It means only "xstatic-core" can manage the newly created repos on gerrit.

Re: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client

2018-03-28 Thread Eric K
Thank you, Dougal and Ghanshyam for the responses!

What I can gather is: service client registration > import service client >
retaining a copy.
So the best thing for Congress to do now is to import the service client.
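
For reference, the service client registration that Ghanshyam describes below
might look roughly like this in a plugin; the plugin class, module path and
client names here are placeholders for illustration, not the actual Congress
or Mistral plugin code.

from tempest import config
from tempest.test_discover import plugins


class ExampleTempestPlugin(plugins.TempestPlugin):
    # Only get_service_clients() is fleshed out here; the other abstract
    # methods of TempestPlugin are stubbed for brevity.

    def load_tests(self):
        pass

    def register_opts(self, conf):
        pass

    def get_opt_lists(self):
        return []

    def get_service_clients(self):
        # Common client settings (region, auth, TLS options) come from the
        # tempest config; the names below are placeholders.
        params = config.service_client_config()
        params.update({
            'name': 'example_service',
            'service_version': 'example_service.v2',
            'module_path': 'example_tempest_plugin.services.v2',
            'client_names': ['ExampleServiceClient'],
        })
        return [params]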

On 3/17/18, 9:00 PM, "Ghanshyam Mann"  wrote:

>Hi All,
>
>Sorry for the late response, I kept this mail unread but forgot to
>respond. Reply inline.
>
>On Fri, Mar 16, 2018 at 8:08 PM, Dougal Matthews 
>wrote:
>>
>>
>> On 13 March 2018 at 18:51, Eric K  wrote:
>>>
>>> Hi Mistral folks and others,
>>>
>>> I'm working on Congress tempest tests [1] for integration with
>>>Mistral. In
>>> the tests, we use a Mistral service client to call Mistral APIs and
>>> compare results against those obtained by Mistral driver for Congress.
>>>
>>> Regarding the service client, Congress can either import directly from
>>> Mistral tempest plugin [2] or maintain its own copy within Congress
>>> tempest plugin.
>
>Maintaining your own copy will lead to a lot of issues and a lot of duplicate
>code among many plugins.
>
>>I'm not sure whether Mistral team expects the service
>>> client to be internal use only, so I hope to hear folks' thoughts on
>>>which
>>> approach is preferred. Thanks very much!
>>
>>
>> I don't have a strong opinion here. I am happy for you to use the
>>Mistral
>> service client, but it will be hard to guarantee stability. It has been
>> stable (since it hasn't changed), but we have a tempest refactor
>>planned
>> (once we move the final tempest tests from mistralclient to
>> mistral-tempest-plugin). So there is a fair chance we will break the
>>API at
>> that point, however, I don't know when it will happen, as nobody is
>> currently working on it.
>
>From the QA team's perspective, service clients are the main interface which
>can be used across tempest plugins. For example, congress needs many other
>service clients from other Tempest plugins like Mistral. Tempest also declares
>all of its in-tree service clients as a library interface and we maintain
>them with backward compatibility [3]. This way we make these service
>clients usable outside of Tempest as well and avoid duplicate
>code/interfaces.
>
>For service clients defined in Tempest plugins (like the Mistral service
>clients), we strongly suggest the same process, which is to declare the
>plugin's service clients as a stable interface. This gives 2 advantages:
>1. You make sure that the API calling interface (the service clients)
>cannot change, which indirectly means you are not allowing the APIs
>themselves to change. This makes your tempest plugin testing more
>reliable.
>
>2. Your service clients can be used in other Tempest plugins to avoid
>duplicate code/interfaces. If other plugins use your service clients,
>they also test your project, so it is good to help them by
>providing the required interface as stable.
>
>The initial idea of owning the service clients in their respective plugins
>was to share them among plugins for integrated testing of more than
>one OpenStack service.
>
>Now, on the usage of service clients, Tempest provides a better way to do so
>than importing them directly [4]. You can see the example for Manila's
>tempest plugin [5]. This gives the advantage of discovering your
>registered service clients in other Tempest plugins automatically.
>They do not need to import other plugins' service clients. QA is hoping
>that each tempest plugin will move to the new service client registration
>process.
>
>Overall, we recommend having service clients as a stable interface so
>that other plugins can use them and test your projects in a more
>integrated way.
>
>>
>> I have cc'ed Chandan - hopefully he can provide some input. He has
>>advised
>> me and the Mistral team regarding tempest before.
>>
>>>
>>>
>>> Eric
>>>
>>> [1] https://review.openstack.org/#/c/538336/
>>> [2]
>>>
>>> 
>>>https://github.com/openstack/mistral-tempest-plugin/blob/master/mistral_
>>>tem
>>> pest_tests/services/v2/mistral_client.py
>>>
>>>
>
>[3] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/lib/services
>[4] https://docs.openstack.org/tempest/latest/plugin.html#get_service_clients()
>[5] https://review.openstack.org/#/c/334596/34
>
>-gmann
>
>>>
>>> 
>>>
>>>__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 

[openstack-dev] [Congress] updated backlog

2018-03-28 Thread Eric K
Here's an updated backlog following Rocky discussions.
https://etherpad.openstack.org/p/congress-task-priority


Please feel free to comment and suggest additions/deletions and changes in
priority.   



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread Doug Hellmann
Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
> 
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
> 
> The change is at https://review.openstack.org/549252
> 
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
> 
> 
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi.  It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
> 
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not.  So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead.  In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
> 
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects".  Considering that, the
> LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
> 
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
> 
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git.  Now devstack verifies that every package in
> LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed.  So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
> 
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed.  I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
> 
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
> 
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
> 
> https://review.openstack.org/548331
> 
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work.  Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.
> 
> -Jim
> 

How does this apply to uses of devstack outside of zuul, such as in a
local development environment?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Remove complex ACL changes around releases

2018-03-28 Thread Tony Breeds
On Wed, Mar 28, 2018 at 03:34:32PM +0100, Graham Hayes wrote:

> It is more complex than just "joining that team" if the project follows
> stable policy. the stable team have to approve the additions, and do
> reject people trying to join them.

This is true, but when we (I) say no, I explain what's required to get
$project-stable-maint for the requested people.  Which typically boils
down to "do the reviews that show they grok the stable policy", and we
set a short runway (typically 3 months).  It is absolutely the same as
joining *any* core team.

> I don't want to have a release where
> someone has to self approve / ninja approve patches due to cores *not*
> having the access rights that they previously had.

You can always ping stable-maint-core to avoid that.  Looking at recent
stable reviews, stable-maint-core and release-managers have been doing a
pretty good job there.

And as this will happen in July/August there's plenty of time for it to
be a non-issue.

Yours Tony.

[1] https://review.openstack.org/#/admin/groups/101,members
[2] https://review.openstack.org/#/admin/groups/1098,members


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-28 Thread Nadathur, Sundar
Thanks, Eric. Looks like there are no good solutions even as candidates, 
but only options with varying levels of unacceptability. It is funny 
that the option that is considered the least unacceptable is to let 
the problem happen and then fail the request (the last one in your list).


Could I ask what is the objection to the scheme that applies multiple 
traits and removes one as needed, apart from the fact that it has races?


Regards,
Sundar

On 3/28/2018 11:48 AM, Eric Fried wrote:

Sundar-

We're running across this issue in several places right now.   One
thing that's definitely not going to get traction is
automatically/implicitly tweaking inventory in one resource class when
an allocation is made on a different resource class (whether in the same
or different RPs).

Slightly less of a nonstarter, but still likely to get significant
push-back, is the idea of tweaking traits on the fly.  For example, your
vGPU case might be modeled as:

PGPU_RP: {
   inventory: {
   CUSTOM_VGPU_TYPE_A: 2,
   CUSTOM_VGPU_TYPE_B: 4,
   }
   traits: [
   CUSTOM_VGPU_TYPE_A_CAPABLE,
   CUSTOM_VGPU_TYPE_B_CAPABLE,
   ]
}

The request would come in for
resources=CUSTOM_VGPU_TYPE_A:1&required=CUSTOM_VGPU_TYPE_A_CAPABLE, resulting
in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
So it doesn't matter that there's still inventory of
CUSTOM_VGPU_TYPE_B:4, because a request including
required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
There's of course a window between when the initial allocation is made
and when you tweak the trait list.  In that case you'll just have to
fail the loser.  This would be like any other failure in e.g. the spawn
process; it would bubble up, the allocation would be removed; retries
might happen or whatever.

Like I said, you're likely to get a lot of resistance to this idea as
well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your
patches; there's nothing about placement that disallows it.)

The simple-but-inefficient solution is simply that we'd still be able
to make allocations for vGPU type B, but you would have to fail right
away when it came down to cyborg to attach the resource.  Which is code
you pretty much have to write anyway.  It's an improvement if cyborg
gets to be involved in the post-get-allocation-candidates
weighing/filtering step, because you can do that check at that point to
help filter out the candidates that would fail.  Of course there's still
a race condition there, but it's no different than for any other resource.

efried

On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:

Hi Eric and all,
     I should have clarified that this race condition happens only for
the case of devices with multiple functions. There is a prior thread

about it. I was trying to get a solution within Cyborg, but that faces
this race condition as well.

IIUC, this situation is somewhat similar to the issue with vGPU types

(thanks to Alex Xu for pointing this out). In the latter case, we could
start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after
consuming a unit of  vGPU-type-a, ideally the inventory should change
to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
after consuming a unit of that function, ideally the inventory should
change to: (region-type-A: 0, function-X: 3).

I understand that this approach is controversial :) Also, one difference
from the vGPU case is that the number and count of vGPU types is static,
whereas with FPGAs, one could reprogram it to result in more or fewer
functions. That said, we could hopefully keep this analogy in mind for
future discussions.

We probably will not support multi-function accelerators in Rocky. This
discussion is for the longer term.

Regards,
Sundar

On 3/23/2018 12:44 PM, Eric Fried wrote:

Sundar-

First thought is to simplify by NOT keeping inventory information in
the cyborg db at all.  The provider record in the placement service
already knows the device (the provider ID, which you can look up in the
cyborg db) the host (the root_provider_uuid of the provider representing
the device) and the inventory, and (I hope) you'll be augmenting it with
traits indicating what functions it's capable of.  That way, you'll
always get allocation candidates with devices that *can* load the
desired function; now you just have to engage your weigher to prioritize
the ones that already have it loaded so you can prefer those.

Am I missing something?

efried

On 03/22/2018 11:27 PM, Nadathur, Sundar wrote:

Hi all,
     There seems to be a possibility of a race condition in the Cyborg/Nova flow.

Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-28 Thread Doug Hellmann
We're making good progress. Some of the important parts of the
global job changes are in place. There are still a lot of open
patches to add the lower-constraints jobs to repos, however.

Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400:

[...]
> What I Want to Do
> -
> 
> 1. Update the requirements-check test job to change the check for
>an exact match to be a check for compatibility with the
>upper-constraints.txt value.

This change has merged: https://review.openstack.org/#/c/555402/

There are some additional changes to that job still in the queue.
In particular, the change in https://review.openstack.org/#/c/557034/3
will start enforcing some rules to ensure the lower-constraints.txt
settings stay at the bottom of the requirements files.

Because we had some communication issues and did a few steps out
of order, when this patch lands projects that have approved
bot-proposed requirements updates may find that their requirements
and lower-constraints files no longer match, which may lead to job
failures. It should be easy enough to fix the problems by making
the values in the constraints files match the values in the
requirements files (by editing either set of files, depending on
what is appropriate). I apologize for any inconvenience this causes.
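
If you want a quick way to spot such mismatches locally, something along
these lines works. This is only a rough sketch with deliberately simplistic
parsing, not part of the requirements tooling.

import re


def _parse(path, pattern):
    # Map package name -> version captured by the pattern, ignoring comments.
    out = {}
    with open(path) as f:
        for line in f:
            line = line.split('#')[0].strip()
            m = re.match(pattern, line)
            if m:
                out[m.group(1).lower()] = m.group(2)
    return out


def find_mismatches(requirements='requirements.txt',
                    constraints='lower-constraints.txt'):
    # Lower bounds declared in requirements.txt (foo>=1.2.3).
    mins = _parse(requirements, r'^([A-Za-z0-9._-]+).*?>=\s*([^,;\s]+)')
    # Pins declared in lower-constraints.txt (foo==1.2.3).
    pins = _parse(constraints, r'^([A-Za-z0-9._-]+)==([^,;\s]+)')
    return {name: (mins[name], pins.get(name))
            for name in mins if pins.get(name) != mins[name]}


if __name__ == '__main__':
    for name, (lower, pin) in sorted(find_mismatches().items()):
        print('%s: requirements say >=%s, lower-constraints say %s'
              % (name, lower, pin))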

> 2. We should stop syncing dependencies by turning off the
>propose-update-requirements job entirely.

This is also done: https://review.openstack.org/#/c/555426/

> 3. Remove the minimum specifications from the global requirements
>list to make clear that the global list is no longer expressing
>minimums.
> 
>This clean-up step has been a bit more controversial among the
>requirements team, but I think it is a key piece. As the minimum
>versions of dependencies diverge within projects, there will no
>longer *be* a real global set of minimum values. Tracking a list of
>"highest minimums", would either require rebuilding the list from the
>settings in all projects, or requiring two patches to change the
>minimum version of a dependency within a project.
> 
>Maintaining a global list of minimums also implies that we
>consider it OK to run OpenStack as a whole with that list. This
>message conflicts with the message we've been sending about the
>upper constraints list since that was established, which is that
>we have a known good list of versions and deploying all of
>OpenStack with different versions of those dependencies is
>untested.

We've decided not to do this step, because some of the other
requirements team members want to use those lower bound values.
Projects are no longer required to be consistent with the lower
bounds in that global file, however.

> Testing Lower Bounds of Dependencies
> 
[...]
> 
> The results of those steps can be combined into a single patch and
> proposed to the project. To avoid overwhelming zuul's job configuration
> resolver, we need to propose the patches in separate batches of
> about 10 repos at a time. This is all mostly scriptable, so I will
> write a script and propose the patches (unless someone else wants to do
> it all -- we need a single person to keep up with how many patches we're
> proposing at one time).
> 
> The point of creating the initial lower-constraints.txt file is not
> necessarily to be "accurate" with the constraints immediately, but
> to have something to work from. After the patches are proposed,
> please either plan to land them or vote -2 indicating that you don't
> want a job like that on that repo. If you want to change the
> constraints significantly, please do that in a separate patch. With
> ~325 of them, I'm not going to be able to keep up with everyone's
> separate needs and this is all meant to just establish the initial
> version of the job anyway.

I ended up needing fewer patches than expected because many of the
projects receiving requirements syncs didn't have unit test jobs
(ansible roles, and some other packaging-related things, that are tested
other ways).

Approvals have been making good progress. As I say above, if you
have minor issues with the patch, either propose a fix on top of
it or take it over and fix it directly. Even though there are fewer
patches than I expected, I'm still not going to be able
to keep up with lots of individual differences or merge conflicts
in projects. Help wanted.

> For projects that currently only support python 2 we can modify the
> proposed patches to not set base-python to use python3.
> 
> You will have noticed that this will only apply to unit test jobs.
> Projects are free to use the results to add their own functional
> test jobs using the same lower-constraints.txt files, but that's
> up to them to do.

I'm not aware of anyone trying to do this, yet. If you are, please let
us know how it's going.

Doug

__

Re: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?

2018-03-28 Thread Jay Pipes

On 03/28/2018 03:35 PM, Matt Riedemann wrote:

On 3/27/2018 10:37 AM, Jay Pipes wrote:


If we want to actually fix the issue once and for all, we need to make 
availability zones a real thing that has a permanent identifier (UUID) 
and store that permanent identifier in the instance (not the instance 
metadata).


Or we can continue to paper over major architectural weaknesses like 
this.


Stepping back a second from the rest of this thread, what if we do the 
hard fail bug fix thing, which could be backported to stable branches, 
and then we have the option of completely re-doing this with aggregate 
UUIDs as the key rather than the aggregate name? Because I think the 
former could get done in Rocky, but the latter probably not.


I'm fine with that (and was fine with it before, just stating that 
solving the problem long-term requires different thinking).


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-28 Thread Stephen Finucane
On Wed, 2018-03-28 at 14:14 -0500, Sean McGinnis wrote:
> On Thu, Mar 22, 2018 at 10:43:45AM +, Stephen Finucane wrote:
> > On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote:
> > > On Wed, Mar 21, 2018 at 10:49:02AM +, Stephen Finucane wrote:
> > > > tl;dr: Make sure you stop using pbr's autodoc feature before converting
> > > > them to the new PTI for docs.
> > > > 
> > > > [snip]
> > > > 
> > 
> > That's unfortunate. What we really need is a migration path from the
> > 'pbr' way of doing things to something else. I see three possible
> > avenues at this point in time:
> > 
> >1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
> >   things to 'sphinx-apidoc' but it takes the form of an extension.
> >   From my brief experiments, the output generated from this is
> >   radically different and far less comprehensive than what 'sphinx-
> >   apidoc' generates. However, it supports templating so we could
> >   probably configure this somehow and add our own special directive
> >   somewhere like 'openstackdocstheme'
> >2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
> >   against upstream Sphinx [1]. This essentially does what the PBR
> >   extension does but moves configuration into 'conf.py'. However, this
> >   is currently held up as I can't adequately explain the differences
> >   between this and 'sphinx.ext.autosummary' (there's definite overlap
> >   but I don't understand 'autosummary' well enough to compare them).
> >3. Modify the upstream jobs that detect the pbr integration and have
> >   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
> >   technically appealing approach as it still leaves us unable to build
> >   stuff locally and adds yet more "magic" to the gate, but it does let
> >   us progress.
> > 
> 
> It's not mentioned here, but I discovered today that Cinder is using the
> sphinx.ext.autodoc module. Is there any issue with using this?
> 

Nope - sphinx-apidoc and the likes use autodoc under the hood. You can
see this by checking the output in 'contributor/api' or the likes.
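
In case it helps anyone wiring this up, a minimal conf.py along these lines
is one way to use option 1 from the list above (autosummary, with autodoc
underneath). It is only a sketch with placeholder names, not the
configuration of any particular project.

# docs/source/conf.py (sketch)
extensions = [
    'sphinx.ext.autodoc',       # pulls docstrings out of the code
    'sphinx.ext.autosummary',   # generates the per-module summary pages
    'openstackdocstheme',
]

# Generate stub pages for everything referenced in autosummary directives,
# which is roughly what pbr's autodoc feature used to produce for us.
autosummary_generate = True

project = u'example-project'
html_theme = 'openstackdocs'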

Stephen

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-28 Thread Ivan Kolodyazhny
Hi Kaz,

Don't worry, we're on the same page with you. I added you, Xinni and
Keiichi to the xstatic-core group. Thank you for your contributions!

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara  wrote:

> Hi Ivan & Horizon folks
>
>
> AFAIK, Horizon team had conclusion that you will add the specific
> members to xstatic-core, correct ?
> Can I ask you to add the following members ?
> # All three are heat-dashboard cores.
>
> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
> Xinni Ge / xinni.ge1...@gmail.com
> Keiichi Hikita / keiichi.hik...@gmail.com
>
> Please give me a shout if we are not on the same page or if you have any concerns.
>
> Regards,
> Kaz
>
>
> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
> > Hi Ivan, Akihiro,
> >
> >
> > Thanks for your kind arrangement.
> > Looking forward to hearing your decision soon.
> >
> > Regards,
> > Kaz
> >
> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
> >> HI Team,
> >>
> >> From my perspective, I'm OK both with #2 and #3 options. I agree that #4
> >> could be too complicated for us. Anyway, we've got this topic on the
> meeting
> >> agenda [1] so we'll discuss it there too. I'll share our decision after
> the
> >> meeting.
> >>
> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
> >>
> >>
> >>
> >> Regards,
> >> Ivan Kolodyazhny,
> >> http://blog.e0ne.info/
> >>
> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki 
> wrote:
> >>>
> >>> Hi Kaz and Ivan,
> >>>
> >>> Yeah, it is worth discussing officially in the horizon team meeting or
> the
> >>> mailing list thread to get a consensus.
> >>> Hopefully you can add this topic to the horizon meeting agenda.
> >>>
> >>> After sending the previous mail, I noticed another option. I see there
> are
> >>> several options now.
> >>> (1) Keep xstatic-core and horizon-core same.
> >>> (2) Add specific members to xstatic-core
> >>> (3) Add specific horizon-plugin core to xstatic-core
> >>> (4) Split core membership into per-repo basis (perhaps too
> complicated!!)
> >>>
> >>> My current vote is (2) as xstatic-core needs to understand what is
> xstatic
> >>> and how it is maintained.
> >>>
> >>> Thanks,
> >>> Akihiro
> >>>
> >>>
> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
> 
>  Hi Akihiro,
> 
> 
>  Thanks for your comment.
>  The background of my request to add us to xstatic-core comes from
>  Ivan's comment in last PTG's etherpad for heat-dashboard discussion.
> 
>  https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>  Line135, "we can share ownership if needed - e0ne"
> 
>  Just in case, could you guys confirm unified opinion on this matter as
>  Horizon team ?
> 
>  Frankly speaking, I feel the benefit of making us xstatic-core
>  because it's easier & smoother to manage what we are taking on for
>  heat-dashboard.
>  On the other hand, I can understand what you are saying, Akihiro: the
>  newly added repos belong to the Horizon project, and having them managed
>  by non-Horizon cores is not consistent.
>  Also, having an exception might cause unexpected confusion in the near future.
> 
>  Eventually we will follow your opinion, let me hear Horizon team's
>  conclusion.
> 
>  Regards,
>  Kaz
> 
> 
>  2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
>  > Hi Kaz,
>  >
>  > These repositories are under horizon project. It looks better to
> keep
>  > the
>  > current core team.
>  > It potentially brings some confusion if we treat some horizon plugin
>  > team
>  > specially.
>  > Reviewing xstatic repos would be a small burden, so I think it would
>  > work
>  > without problem even if only horizon-core can approve xstatic
> reviews.
>  >
>  >
>  > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>  >>
>  >> Hi Ivan, Horizon folks,
>  >>
>  >>
>  >> Now totally 8 xstatic-** repos for heat-dashboard have been landed.
>  >>
>  >> In project-config for them, I've set same acl-config as the
> existing
>  >> xstatic repos.
>  >> It means only "xstatic-core" can manage the newly created repos on
>  >> gerrit.
>  >> Could you kindly add "heat-dashboard-core" into "xstatic-core"
> like as
>  >> what horizon-core is doing ?
>  >>
>  >> xstatic-core
>  >> https://review.openstack.org/#/admin/groups/385,members
>  >>
>  >> heat-dashboard-core
>  >> https://review.openstack.org/#/admin/groups/1844,members
>  >>
>  >> Of course, we will surely touch only what we made, just would like
> to
>  >> manage them smoothly by ourselves.
>  >> In case we need to touch the other ones, will ask Horizon team for
>  >> help.
>  >>
>  >> Thanks in advance.
>  >>
>  >> Regards,
>  >> Kaz
> 

[openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread James E. Blair
Hi,

I've proposed a change to devstack which slightly alters the
LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
those using legacy devstack jobs (but you may want to be aware of it).
It is more significant for new-style devstack jobs.

The change is at https://review.openstack.org/549252

In summary, when this change lands, new-style devstack jobs should no
longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
should be unaffected (but there is a change to the verification process
performed by devstack).


Currently devstack expects the contents of LIBS_FROM_GIT to be
exclusively a list of python packages which, obviously, should be
installed from git and not pypi.  It is used for two purposes:
determining whether an individual package should be installed from git,
and verifying that a package was installed from git.

In the old devstack-gate system, we prepared many of the common git
repos, whether they were used or not.  So LIBS_FROM_GIT was created to
indicate that in some cases devstack should ignore those repos and
install from pypi instead.  In other words, its original purpose was
purely as a method of selecting whether a devstack-gate prepared repo
should be used or ignored.

In Zuul v3, we have a good way to indicate whether a job is going to use
a repo or not -- add it to "required-projects".  Considering that, the
LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
automatically generated based on the contents of required-projects.
This means that job authors don't need to list every required repository
twice.
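
As a rough illustration of the idea (not the actual code in the review
above), the generation amounts to something like this, assuming the projects
dictionary Zuul exposes to jobs:

def libs_from_git(zuul_projects, explicit=None):
    """Derive devstack's LIBS_FROM_GIT from the job's required-projects.

    zuul_projects: the dict Zuul provides to jobs, keyed by canonical
                   project name (an assumption of this sketch).
    explicit:      an already-set LIBS_FROM_GIT value, which wins if present.
    """
    if explicit:
        # Jobs that set the variable manually bypass auto-generation.
        return explicit
    names = sorted(p['short_name'] for p in zuul_projects.values())
    return ','.join(names)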

However, a naïve implementation of that runs afoul of the second use of
LIBS_FROM_GIT -- verifying that python packages are installed from git.

This usage was added later, after a typographical error ("-" vs "_" in a
python package name) in a constraints file caused us not to install a
package from git.  Now devstack verifies that every package in
LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
tempest, and other packages aren't installed.  So adding them
automatically to LIBS_FROM_GIT will cause devstack to fail.

My change modifies this verification to only check that packages
mentioned in LIBS_FROM_GIT that devstack tried to install were actually
installed.  I realize that stated as such this sounds tautological,
however, this check is still valid -- it would have caught the original
error that prompted the check in the first case.

What the revised check will no longer handle is a typo in a legacy job.
If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
However, I think the risk is worthwhile -- particularly since it is in
service of a system which eliminates the opportunity to introduce such
an error in the first place.

To see the result in action, take a look at this change which, in only a
few lines, implements what was a significantly more complex undertaking
in Zuul v2:

https://review.openstack.org/548331

Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
some reason, you require a new-style devstack job to manually set
LIBS_FROM_GIT, that will still work.  Simply define the variable as
normal, and the module which generates the devstack config will bypass
automatic generation if the variable is already set.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] review runways are now live!

2018-03-28 Thread melanie witt

Hi Stackers,

This is just a standalone announcement that review runways [0] are now 
live and in active use. Details and instructions are documented on the 
etherpad.


For approved blueprint code authors, please consult the etherpad 
instructions and add your blueprint to the Queue when your code is ready 
for review (requirements are documented in the etherpad).


For nova-core team members, please make blueprints in runways your 
priority for review.


As mentioned before, runways are an experimental process and we are open 
to feedback and will adjust the process incrementally on a continual 
basis as we gain experience with it. The process is not meant to be 
rigid and unchanging during the cycle.


Thanks,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?

2018-03-28 Thread Matt Riedemann

On 3/27/2018 10:37 AM, Jay Pipes wrote:


If we want to actually fix the issue once and for all, we need to make 
availability zones a real thing that has a permanent identifier (UUID) 
and store that permanent identifier in the instance (not the instance 
metadata).


Or we can continue to paper over major architectural weaknesses like this.


Stepping back a second from the rest of this thread, what if we do the 
hard fail bug fix thing, which could be backported to stable branches, 
and then we have the option of completely re-doing this with aggregate 
UUIDs as the key rather than the aggregate name? Because I think the 
former could get done in Rocky, but the latter probably not.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-28 Thread Sean McGinnis
On Thu, Mar 22, 2018 at 10:43:45AM +, Stephen Finucane wrote:
> On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote:
> > On Wed, Mar 21, 2018 at 10:49:02AM +, Stephen Finucane wrote:
> > > tl;dr: Make sure you stop using pbr's autodoc feature before converting
> > > them to the new PTI for docs.
> > > 
> > > [snip]
> > > 
> 
> That's unfortunate. What we really need is a migration path from the
> 'pbr' way of doing things to something else. I see three possible
> avenues at this point in time:
> 
>1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
>   things to 'sphinx-apidoc' but it takes the form of an extension.
>   From my brief experiments, the output generated from this is
>   radically different and far less comprehensive than what 'sphinx-
>   apidoc' generates. However, it supports templating so we could
>   probably configure this somehow and add our own special directive
>   somewhere like 'openstackdocstheme'
>2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
>   against upstream Sphinx [1]. This essentially does what the PBR
>   extension does but moves configuration into 'conf.py'. However, this
>   is currently held up as I can't adequately explain the differences
>   between this and 'sphinx.ext.autosummary' (there's definite overlap
>   but I don't understand 'autosummary' well enough to compare them).
>3. Modify the upstream jobs that detect the pbr integration and have
>   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
>   technically appealing approach as it still leaves us unable to build
>   stuff locally and adds yet more "magic" to the gate, but it does let
>   us progress.
> 

It's not mentioned here, but I discovered today that Cinder is using the
sphinx.ext.autodoc module. Is there any issue with using this?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-28 Thread Eric Fried
Sundar-

We're running across this issue in several places right now.   One
thing that's definitely not going to get traction is
automatically/implicitly tweaking inventory in one resource class when
an allocation is made on a different resource class (whether in the same
or different RPs).

Slightly less of a nonstarter, but still likely to get significant
push-back, is the idea of tweaking traits on the fly.  For example, your
vGPU case might be modeled as:

PGPU_RP: {
  inventory: {
  CUSTOM_VGPU_TYPE_A: 2,
  CUSTOM_VGPU_TYPE_B: 4,
  }
  traits: [
  CUSTOM_VGPU_TYPE_A_CAPABLE,
  CUSTOM_VGPU_TYPE_B_CAPABLE,
  ]
}

The request would come in for
resources=CUSTOM_VGPU_TYPE_A:1&required=CUSTOM_VGPU_TYPE_A_CAPABLE, resulting
in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
So it doesn't matter that there's still inventory of
CUSTOM_VGPU_TYPE_B:4, because a request including
required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
There's of course a window between when the initial allocation is made
and when you tweak the trait list.  In that case you'll just have to
fail the loser.  This would be like any other failure in e.g. the spawn
process; it would bubble up, the allocation would be removed; retries
might happen or whatever.
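
To make the "tweak the trait list" step concrete, the placement call might
look something like the sketch below; the endpoint, token handling and
microversion are assumptions for illustration, not working Cyborg or Nova
code.

import requests

PLACEMENT = 'http://placement.example.com/placement'   # assumed endpoint
HEADERS = {
    'X-Auth-Token': 'REDACTED',                         # assumed auth token
    'OpenStack-API-Version': 'placement 1.17',          # assumed microversion
}


def remove_trait(rp_uuid, trait):
    """Drop one trait from a resource provider, keeping the others."""
    url = '%s/resource_providers/%s/traits' % (PLACEMENT, rp_uuid)
    current = requests.get(url, headers=HEADERS).json()
    payload = {
        # The provider generation guards against concurrent updates; a 409
        # here is exactly the race window described above and would need a
        # retry (or a failed request).
        'resource_provider_generation':
            current['resource_provider_generation'],
        'traits': [t for t in current['traits'] if t != trait],
    }
    resp = requests.put(url, headers=HEADERS, json=payload)
    return resp.status_code == 200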

Like I said, you're likely to get a lot of resistance to this idea as
well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your
patches; there's nothing about placement that disallows it.)

The simple-but-inefficient solution is simply that we'd still be able
to make allocations for vGPU type B, but you would have to fail right
away when it came down to cyborg to attach the resource.  Which is code
you pretty much have to write anyway.  It's an improvement if cyborg
gets to be involved in the post-get-allocation-candidates
weighing/filtering step, because you can do that check at that point to
help filter out the candidates that would fail.  Of course there's still
a race condition there, but it's no different than for any other resource.

efried

On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:
> Hi Eric and all,
>     I should have clarified that this race condition happens only for
> the case of devices with multiple functions. There is a prior thread
> 
> about it. I was trying to get a solution within Cyborg, but that faces
> this race condition as well.
> 
> IIUC, this situation is somewhat similar to the issue with vGPU types
> 
> (thanks to Alex Xu for pointing this out). In the latter case, we could
> start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after
> consuming a unit of  vGPU-type-a, ideally the inventory should change
> to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
> we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
> after consuming a unit of that function, ideally the inventory should
> change to: (region-type-A: 0, function-X: 3).
> 
> I understand that this approach is controversial :) Also, one difference
> from the vGPU case is that the number and count of vGPU types is static,
> whereas with FPGAs, one could reprogram it to result in more or fewer
> functions. That said, we could hopefully keep this analogy in mind for
> future discussions.
> 
> We probably will not support multi-function accelerators in Rocky. This
> discussion is for the longer term.
> 
> Regards,
> Sundar
> 
> On 3/23/2018 12:44 PM, Eric Fried wrote:
>> Sundar-
>>
>>  First thought is to simplify by NOT keeping inventory information in
>> the cyborg db at all.  The provider record in the placement service
>> already knows the device (the provider ID, which you can look up in the
>> cyborg db) the host (the root_provider_uuid of the provider representing
>> the device) and the inventory, and (I hope) you'll be augmenting it with
>> traits indicating what functions it's capable of.  That way, you'll
>> always get allocation candidates with devices that *can* load the
>> desired function; now you just have to engage your weigher to prioritize
>> the ones that already have it loaded so you can prefer those.
>>
>>  Am I missing something?
>>
>>  efried
>>
>> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote:
>>> Hi all,
>>>     There seems to be a possibility of a race condition in the
>>> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to
>>> the proposed Cyborg/Nova spec
>>> 
>>> for details.)
>>>
>>> Consider the scenario where the flavor specifies a resource class for a
>>> device type, and also specifies a function (e.g. encrypt) in the extra
>>> specs. The Nova scheduler would only track the device type as a resource, and
>>> Cyborg needs to track the availability of functions.

Re: [openstack-dev] [PTLS] Project Updates & Project Onboarding

2018-03-28 Thread Tom Barron

Many apologies for sending this to the openstack-dev list;
I thought I had removed the list from my address list but
clearly did not.

On 28/03/18 14:34 -0400, Tom Barron wrote:
Would you be so kind as to add 

<... snip ...>



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLS] Project Updates & Project Onboarding

2018-03-28 Thread Tom Barron
Would you be so kind as to add Victoria Martinez de la Cruz and Dustin 
Schoenbrun to the manila project Onboarding session [1] ?  They are 
confirmed for conference attendance.


Thanks much!

-- Tom Barron

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21637/manila-project-onboarding


On 21/03/18 22:14 +, Kendall Nelson wrote:

Hello!

Project Updates[1] & Project Onboarding[2] sessions are now live on the
schedule!

We did as best as we could to keep project onboarding sessions adjacent to
project update slots. Though, given the differences in duration and the
number of each we have per day that got increasingly difficult as the days
went on, hopefully what is there will work for everyone.

If there are any speakers you need added to your slots, or any conflicts
you need addressed, feel free to email speakersupp...@openstack.org and
they should be able to help you out.

Thanks!

-Kendall Nelson (diablo_rojo)

[1]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Update
[2]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Onboarding



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-28 Thread Nadathur, Sundar

Hi Shaohe,
  I have responded in the Etherpad. The Cyborg/Nova scheduling spec 
details the 4 types of user requests.



I believe you are looking for more details on what the RC names, traits 
and flavors will look like. I will add that to the spec itself.


Thanks,
Sundar

On 3/28/2018 2:10 AM, 少合冯 wrote:

I have summarized some scenarios for FPGA device requests.
https://etherpad.openstack.org/p/cyborg-fpga-request-scenarios

Please add more scenarios to find out the exceptions where 
placement cannot satisfy the filter and weight.


IMHO, I prefer placement to do filter and weight. If we have to let 
Cyborg do filter and weight, the Nova scheduler just needs to call Cyborg 
once for all hosts, even though we do the weighing one by one.



2018-03-23 12:27 GMT+08:00 Nadathur, Sundar >:


Hi all,
    There seems to be a possibility of a race condition in the
Cyborg/Nova flow. Apologies for missing this earlier. (You can
refer to the proposed Cyborg/Nova spec


for details.)

Consider the scenario where the flavor specifies a resource class
for a device type, and also specifies a function (e.g. encrypt) in
the extra specs. The Nova scheduler would only track the device
type as a resource, and Cyborg needs to track the availability of
functions. Further, to keep it simple, say all the functions exist
all the time (no reprogramming involved).

To recap, here is the scheduler flow for this case:

  * A request spec with a flavor comes to Nova
conductor/scheduler. The flavor has a device type as a
resource class, and a function in the extra specs.
  * Placement API returns the list of RPs (compute nodes) which
contain the requested device types (but not necessarily the
function).
  * Cyborg will provide a custom filter which queries Cyborg DB.
This needs to check which hosts contain the needed function,
and filter out the rest.
  * The scheduler selects one node from the filtered list, and the
request goes to the compute node.

For the filter to work, the Cyborg DB needs to maintain a table
with triples of (host, function type, #free units). The filter
checks if a given host has one or more free units of the requested
function type. But, to keep the # free units up to date, Cyborg on
the selected compute node needs to notify the Cyborg API to
decrement the #free units when an instance is spawned, and to
increment them when resources are released.
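
A sketch of such a filter is below; the class name, the extra spec key and
the helper that reads the (host, function type, #free units) table are
placeholders, not real Cyborg or Nova code.

from nova.scheduler import filters


class CyborgFunctionFilter(filters.BaseHostFilter):
    """Reject hosts with no free units of the requested function."""

    def host_passes(self, host_state, spec_obj):
        # 'accel:function' is an assumed extra-spec key for this sketch.
        function = spec_obj.flavor.extra_specs.get('accel:function')
        if not function:
            return True
        # Assumed helper that looks up the (host, function, free_units)
        # row described above via the Cyborg API.
        free_units = self._query_cyborg_free_units(host_state.host, function)
        return free_units > 0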

Therein lies the catch: this loop from the compute node to
controller is susceptible to race conditions. For example, if two
simultaneous requests each ask for function A, and there is only
one unit of that available, the Cyborg filter will approve both,
both may land on the same host, and one will fail. This is because
Cyborg on the controller does not decrement resource usage due to
one request before processing the next request.

This is similar to this previous Nova scheduling issue.
That was solved by having the scheduler claim a resource in
Placement for the selected node. I don't see an analog for Cyborg,
since it would not know which node is selected.

Thanks in advance for suggestions and solutions.

Regards,
Sundar







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread Chuck Short
+1

Regards
chuck

On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang 
wrote:

> There are two projects that solve the issue of running OpenStack on
> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
> leverage the helm tool for orchestration. There were some different perspectives
> at the beginning, which resulted in the two teams not being able to work together.
>
> But recently, the difference has become small, and there is also no active
> contributor in the kolla-kubernetes project.
>
> So I propose to retire kolla-kubernetes project. If you are still
> interested in running OpenStack on kubernetes, please refer to
> openstack-helm project.
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread MCEUEN, MATT
The OpenStack-Helm team would eagerly welcome contributions from 
Kolla-Kubernetes team members!  Several of the current OSH team come from a 
Kolla-Kubernetes background, and the project has benefitted greatly from their 
experience and domain knowledge.

Please reach out to me or say hi in #openstack-helm if you'd like to get looped 
in.
Thanks,
Matt

-Original Message-
From: Paul Bourke [mailto:paul.bou...@oracle.com] 
Sent: Wednesday, March 28, 2018 11:17 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes 
project

+1

Thanks Jeffrey for taking the time to investigate.

On 28/03/18 16:47, Jeffrey Zhang wrote:
> There are two projects that solve the issue of running OpenStack on
> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
> leverage the helm tool for orchestration. There were some different perspectives
> at the beginning, which resulted in the two teams not being able to work together.
> 
> But recently, the difference has become small, and there is also no active
> contributor in the kolla-kubernetes project.
> 
> So I propose to retire kolla-kubernetes project. If you are still
> interested in running OpenStack on kubernetes, please refer to
> openstack-helm project.
> 
> -- 
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-28 Thread Nadathur, Sundar

Hi Eric and all,
    I should have clarified that this race condition happens only for 
the case of devices with multiple functions. There is a prior thread 
 
about it. I was trying to get a solution within Cyborg, but that faces 
this race condition as well.


IIUC, this situation is somewhat similar to the issue with vGPU types 
 
(thanks to Alex Xu for pointing this out). In the latter case, we could 
start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after 
consuming a unit of vGPU-type-a, ideally the inventory should change to: 
(vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, we 
start with an RP inventory of (region-type-A: 1, function-X: 4). But, 
after consuming a unit of that function, ideally the inventory should 
change to: (region-type-A: 0, function-X: 3).


I understand that this approach is controversial :) Also, one difference 
from the vGPU case is that the set of vGPU types and their counts are 
static, whereas with FPGAs one could reprogram the device to end up with 
more or fewer functions. That said, we could hopefully keep this analogy 
in mind for future discussions.


We probably will not support multi-function accelerators in Rocky. This 
discussion is for the longer term.


Regards,
Sundar

On 3/23/2018 12:44 PM, Eric Fried wrote:

Sundar-

First thought is to simplify by NOT keeping inventory information in
the cyborg db at all.  The provider record in the placement service
already knows the device (the provider ID, which you can look up in the
cyborg db), the host (the root_provider_uuid of the provider representing
the device), and the inventory, and (I hope) you'll be augmenting it with
traits indicating what functions it's capable of.  That way, you'll
always get allocation candidates with devices that *can* load the
desired function; now you just have to engage your weigher to prioritize
the ones that already have it loaded so you can prefer those.
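
For illustration, the kind of placement query this implies might look
roughly like the sketch below. The resource class and trait names are
invented, and 'session' is assumed to be an authenticated keystoneauth
adapter pointed at the placement endpoint; treat this as a sketch, not
actual Cyborg or Nova code.

    # Hedged sketch: names are invented; 'session' is an assumed
    # keystoneauth1 adapter for the placement service.
    def get_candidates(session):
        resp = session.get(
            '/allocation_candidates',
            params={
                'resources': 'CUSTOM_FPGA_REGION:1',    # device-type resource class
                'required': 'CUSTOM_FUNCTION_ENCRYPT',  # trait naming the function
            },
            # 'required' needs a new-enough placement microversion (1.17+, IIRC)
            headers={'OpenStack-API-Version': 'placement 1.17'},
        )
        return resp.json()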

Am I missing something?

efried

On 03/22/2018 11:27 PM, Nadathur, Sundar wrote:

Hi all,
     There seems to be a possibility of a race condition in the
Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to
the proposed Cyborg/Nova spec

for details.)

Consider the scenario where the flavor specifies a resource class for a
device type, and also specifies a function (e.g. encrypt) in the extra
specs. The Nova scheduler would only track the device type as a
resource, and Cyborg needs to track the availability of functions.
Further, to keep it simple, say all the functions exist all the time (no
reprogramming involved).

To recap, here is the scheduler flow for this case:

   * A request spec with a flavor comes to Nova conductor/scheduler. The
 flavor has a device type as a resource class, and a function in the
 extra specs.
   * Placement API returns the list of RPs (compute nodes) which contain
 the requested device types (but not necessarily the function).
   * Cyborg will provide a custom filter which queries Cyborg DB. This
 needs to check which hosts contain the needed function, and filter
 out the rest.
   * The scheduler selects one node from the filtered list, and the
 request goes to the compute node.

For the filter to work, the Cyborg DB needs to maintain a table with
triples of (host, function type, #free units). The filter checks if a
given host has one or more free units of the requested function type.
But, to keep the # free units up to date, Cyborg on the selected compute
node needs to notify the Cyborg API to decrement the #free units when an
instance is spawned, and to increment them when resources are released.

Therein lies the catch: this loop from the compute node to controller is
susceptible to race conditions. For example, if two simultaneous
requests each ask for function A, and there is only one unit of that
available, the Cyborg filter will approve both, both may land on the
same host, and one will fail. This is because Cyborg on the controller
does not decrement resource usage due to one request before processing
the next request.

This is similar to this previous Nova scheduling issue
.
That was solved by having the scheduler claim a resource in Placement
for the selected node. I don't see an analog for Cyborg, since it would
not know which node is selected.

Thanks in advance for suggestions and solutions.

Regards,
Sundar








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests

2018-03-28 Thread Doug Hellmann
In the course of preparing the next release of oslo.config, Ben noticed
that nova's unit tests fail with oslo.config master [1].

The underlying issue is that the tests mock things that oslo.config
is now calling as part of determining where options are being set
in code. This isn't an API change in oslo.config, and it is all
transparent for normal uses of the library. But the mocks replace
os.path.exists() and open() for the entire duration of a test
function (not just for the isolated application code being tested),
and so the library behavior change surfaces as a test error.
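
To illustrate the difference in mock scope, here is a generic, hedged
example with invented names (this is not nova's actual test code):

    import os
    import unittest
    from unittest import mock


    def code_under_test(path):
        # Stand-in for application code that checks a config file path.
        return os.path.exists(path)


    class TestTooBroad(unittest.TestCase):
        # Patching os.path.exists for the whole test means *everything*
        # that runs during the test -- including library internals such
        # as oslo.config working out where options were set -- sees the
        # fake and may behave unexpectedly.
        @mock.patch('os.path.exists', return_value=True)
        def test_broad(self, mock_exists):
            self.assertTrue(code_under_test('/does/not/exist'))


    class TestScoped(unittest.TestCase):
        def test_scoped(self):
            # Patch only around the application call being tested.
            with mock.patch('os.path.exists', return_value=True):
                self.assertTrue(code_under_test('/does/not/exist'))
            # Outside the narrow block, library code sees the real
            # os.path.exists() again.
            self.assertFalse(os.path.exists('/does/not/exist'))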

I'm not really in a position to go through and clean up the use of
mocks in those (and other?) tests myself, and I would like to not
have to revert the feature work in oslo.config, especially since
we did it for the placement API stuff for the nova team.

I'm looking for ideas about what to do.

Doug

[1] 
http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Queries about API Extension

2018-03-28 Thread Chris Dent

On Wed, 28 Mar 2018, 陈汗 wrote:


Hi all,
   Here are my questions:
   For projects whose API was implemented with Pecan, is there any 
(hopefully graceful) way to extend that API?
  I mean, for example, I need to add several extra attributes to the 
Chassis class in the ironic project. Do you have a better approach than 
directly editing chassis.py?


As a general rule you should avoid doing this as it breaks
interoperability.

If you really need a special extension to an existing API, make a
custom API in a custom service that does what you need it to do. By
being separate it is clearly identified as not being a part of the
standard API and client code written to that standard API will
continue to work.
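
As a rough, hypothetical sketch of that approach (names and the storage
backend are invented, and you would still need to mount this under a
Pecan application), a separate service exposing the extra chassis
attributes could look like:

    from pecan import expose
    from pecan.rest import RestController

    # Pretend store of the extra attributes, keyed by chassis UUID.
    EXTRA_ATTRS = {}


    class ChassisExtrasController(RestController):

        @expose('json')
        def get_one(self, chassis_uuid):
            # GET /chassis_extras/<uuid> -> extra attributes for that chassis
            return {'chassis_uuid': chassis_uuid,
                    'extras': EXTRA_ATTRS.get(chassis_uuid, {})}

        @expose('json')
        def put(self, chassis_uuid, **attrs):
            # PUT /chassis_extras/<uuid> -> store/replace the extra attributes
            EXTRA_ATTRS[chassis_uuid] = attrs
            return {'chassis_uuid': chassis_uuid, 'extras': attrs}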

Of course, I'm sure plenty of people in their private clouds make
adjustments to existing services and APIs all the time. If you must do
that, doing it directly in the code may be one of the best ways to go, as
it makes it obvious that things have changed.

Also, it might be that there are ways to do such a thing in Ironic, in
which case I hope someone will follow up with that. I'm speaking from
the position of APIs in OpenStack in general.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread 楊睿豪
+1 
To consolidate them


> On 29 Mar 2018, at 00:16, Paul Bourke wrote:
> 
> +1
> 
> Thanks Jeffrey for taking the time to investigate.
> 
>> On 28/03/18 16:47, Jeffrey Zhang wrote:
>> There are two projects that address running OpenStack on Kubernetes:
>> OpenStack-Helm and kolla-kubernetes. Both leverage the Helm tool for
>> orchestration. There were some differences in perspective at the
>> beginning, which meant the two teams could not work together.
>> But recently, those differences have become very small, and there are
>> also no active contributors in the kolla-kubernetes project.
>> So I propose to retire the kolla-kubernetes project. If you are still
>> interested in running OpenStack on Kubernetes, please refer to the
>> openstack-helm project.
>> -- 
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-28 Thread Chris Dent

On Wed, 28 Mar 2018, melanie witt wrote:

Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?


I've got no substantive information yet, but for the sake of the
thread not looking ignored, I can report that the beacons have been
lit within the team that cares for such things and there should be
some progress soon. Given that there hasn't been awareness in that
group of the flakiness, we'll probably use that as the starting
point: enhanced observability.

And go from there to reach some measure of better.

Long term it would be sweet to get zuul v3 running against a legit
cluster, with more tests being run than just the chunk of tempest that
happens now.

If nobody else has posted something more helpful by tomorrow UTC,
I'll chase.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing

2018-03-28 Thread melanie witt

On Wed, 28 Mar 2018 19:21:24 +0300, Andrey Kurilin wrote:


This is a nice goal which reminds me of my first patch to the OpenStack 
community. It was a patch to Nova and it was related to removing mox :)


PS: https://review.openstack.org/#/c/59694/
PS2: it was abandoned due to several -2 :)


You were ahead of your time. :)

-melanie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing

2018-03-28 Thread Matt Riedemann

On 3/28/2018 11:21 AM, Andrey Kurilin wrote:

PS: https://review.openstack.org/#/c/59694/
PS2: it was abandoned due to several -2 :)


Look how nice I was as a reviewer 5 years ago...

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing

2018-03-28 Thread Andrey Kurilin
Hi Melanie and stackers!

This is a nice goal which reminds me of my first patch to the OpenStack
community. It was a patch to Nova and it was related to removing mox :)

PS: https://review.openstack.org/#/c/59694/
PS2: it was abandoned due to several -2 :)
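
For anyone picking up conversions under this goal, a generic, hedged
before/after sketch (invented names, not taken from any actual nova test)
looks roughly like this:

    import unittest
    from unittest import mock


    class FakeAPI(object):
        def get_flavor(self, name):
            raise NotImplementedError()  # pretend this talks to a real service


    class TestFlavorLookup(unittest.TestCase):
        # Old mox/mox3 style (roughly):
        #     self.mox.StubOutWithMock(self.api, 'get_flavor')
        #     self.api.get_flavor('m1.small').AndReturn({'vcpus': 1})
        #     self.mox.ReplayAll()
        #     ...
        #     self.mox.VerifyAll()

        def test_get_flavor_with_mock(self):
            api = FakeAPI()
            with mock.patch.object(api, 'get_flavor',
                                   return_value={'vcpus': 1}) as m:
                self.assertEqual({'vcpus': 1}, api.get_flavor('m1.small'))
                m.assert_called_once_with('m1.small')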

2018-03-27 1:06 GMT+03:00 melanie witt :

> Hey everyone,
>
> This cycle there is a community goal to remove the use of mox/mox3 for
> testing [0]. In nova, we're tracking our work at this blueprint:
>
>   https://blueprints.launchpad.net/nova/+spec/mox-removal
>
> If you propose patches contributing to this goal, please be sure to add
> something like "Part of blueprint mox-removal" in the commit message of
> your patch so it will be tracked as part of the blueprint for Rocky.
>
> NOTE: Please avoid converting any tests related to cells v1 or
> nova-network, as these two legacy features are either in the process of
> being removed or on the roadmap to be removed within the next two cycles.
> Tests to *avoid* converting are located:
>
>   nova/tests/unit/cells/
>   nova/tests/unit/compute/test_compute_cells.py
>   nova/tests/unit/network/test_manager.py
>
> Please reply with other cells v1 or nova-network test locations to avoid
> if I've missed any.
>
> Thanks,
> -melanie
>
> [0] https://storyboard.openstack.org/#!/story/2001546
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-28 Thread Matt Riedemann

On 3/28/2018 11:07 AM, melanie witt wrote:
We were reviewing a bug fix for the vmware driver [0] today and we 
noticed it appears that the VMware NSX CI is no longer running, not even 
on only the nova/virt/vmwareapi/ tree.


 From the third-party CI dashboard, I see some claims of it running but 
when I open the patches, I don't see any reporting from VMware NSX CI [1].


Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?


Thanks,
-melanie

[0] https://review.openstack.org/557377
[1] http://ci-watch.tintri.com/project?project=nova=7+days


As a result, I've posted a change to log a warning on start of the 
driver indicating its quality cannot be ensured since it doesn't get the 
same level of testing as the other drivers.


https://review.openstack.org/#/c/557398/

This also makes me basically -2 on any vmware driver specs since I don't 
see a point in adding new features for the driver when the CI is never 
working, and by "never" I mean for at least the last couple of years. I 
could go back and find the seemingly quarterly mailing list posts I've 
had to make like this in the past.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread Paul Bourke

+1

Thanks Jeffrey for taking the time to investigate.

On 28/03/18 16:47, Jeffrey Zhang wrote:

There are two projects that address running OpenStack on Kubernetes:
OpenStack-Helm and kolla-kubernetes. Both leverage the Helm tool for
orchestration. There were some differences in perspective at the
beginning, which meant the two teams could not work together.

But recently, those differences have become very small, and there are
also no active contributors in the kolla-kubernetes project.

So I propose to retire the kolla-kubernetes project. If you are still
interested in running OpenStack on Kubernetes, please refer to the
openstack-helm project.

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread Davanum Srinivas
+1 to consolidate.

Thanks,
Dims

On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang  wrote:
> There are two projects that address running OpenStack on Kubernetes:
> OpenStack-Helm and kolla-kubernetes. Both leverage the Helm tool for
> orchestration. There were some differences in perspective at the
> beginning, which meant the two teams could not work together.
>
> But recently, those differences have become very small, and there are
> also no active contributors in the kolla-kubernetes project.
>
> So I propose to retire the kolla-kubernetes project. If you are still
> interested in running OpenStack on Kubernetes, please refer to the
> openstack-helm project.
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-28 Thread melanie witt

Hello everyone,

We were reviewing a bug fix for the vmware driver [0] today and we 
noticed it appears that the VMware NSX CI is no longer running, not even 
on changes that only touch the nova/virt/vmwareapi/ tree.


From the third-party CI dashboard, I see some claims of it running but 
when I open the patches, I don't see any reporting from VMware NSX CI [1].


Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?


Thanks,
-melanie

[0] https://review.openstack.org/557377
[1] http://ci-watch.tintri.com/project?project=nova=7+days

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-28 Thread Jeffrey Zhang
There are two projects that address running OpenStack on Kubernetes:
OpenStack-Helm and kolla-kubernetes. Both leverage the Helm tool for
orchestration. There were some differences in perspective at the
beginning, which meant the two teams could not work together.

But recently, those differences have become very small, and there are
also no active contributors in the kolla-kubernetes project.

So I propose to retire the kolla-kubernetes project. If you are still
interested in running OpenStack on Kubernetes, please refer to the
openstack-helm project.

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Where did the ARA logs go?

2018-03-28 Thread Jeremy Stanley
On 2018-03-28 09:26:49 -0500 (-0500), Sean McGinnis wrote:
[...]
> I believe the ARA logs are only captured on failing jobs.

Correct. This was a stop-gap some months ago when we noticed we were
overrunning our inode capacity on the logserver. ARA was only
one of the various contributors to that increased consumption but
due to its original model based on numerous tiny files, limiting it
to job failures (where it was most useful) was one of the ways we
temporarily curtailed inode utilization. ARA has very recently grown
the ability to stuff all that data into a single sqlite file and
then handle it browser-side, so I expect we'll be able to switch
back to collecting it for all job runs again fairly soon.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Queries about API Extension

2018-03-28 Thread 陈汗
Hi all,
Here are my questions:
For projects whose API was implemented with Pecan, is there any 
(hopefully graceful) way to extend that API?
   I mean, for example, I need to add several extra attributes to the 
Chassis class in the ironic project. Do you have a better approach than 
directly editing chassis.py?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tap-as-a-service] publish on pypi

2018-03-28 Thread Takashi Yamamoto
hi,

i'm thinking about publishing the latest release of tap-as-a-service on pypi.
background: https://review.openstack.org/#/c/555788/
iirc, the naming (tap-as-a-service vs neutron-taas) was one of the concerns
when we talked about this topic last time. (long time ago. my memory is dim.)
do you have any ideas or suggestions?
probably i'll just use "tap-as-a-service" unless anyone has strong opinions.
because:
- it's the name we use the most frequently
- we are not neutron (yet?)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Remove complex ACL changes around releases

2018-03-28 Thread Graham Hayes


On 26/03/2018 14:33, Thierry Carrez wrote:
> Hi!
> 
> TL;DR:
> We used to do complex things with ACLs for stable/* branches around
> releases. Let's stop doing that as it's not really useful anyway, and
> just trust the $project-stable-maint teams to do the right thing.
> 
> 
> Current situation:
> 
> As we get close to the end of a release cycle, we start creating
> stable/$series branches to refine what is likely to become a part of the
> coordinated release at the end of the cycle. After release, that same
> stable/$series branch is used to backport fixes and issue further point
> releases.
> 
> The rules to apply for approving changes to stable/$series differ
> slightly depending on whether you are pre-release or post-release. To
> reflect that, we use two different groups. Pre-release the branch is
> controlled by the $project-release group (and Release Managers) and
> post-release the branch is controlled by the $project-stable-maint group
> (and stable-maint-core).
> 
> To switch between the two without blocking on an infra ACL change, the
> release team enters a complex dance where we initially create an ACL for
> stable/$series, giving control of it to a $project-release-branch group,
> whose membership is reset at every cycle to contain $project-release. At
> release time, we update $project-release-branch Gerrit group membership
> to contain $project-stable-maint instead. Then we get rid of the
> stable/$series ACL altogether.
> 
> This process is a bit complex and error-prone (and we tend to have to
> re-learn it every cycle). It's also designed for a time when we expected
> completely-different people to be in -release and -stable-maint groups,
> while those are actually, most of the time, the same people.
> Furthermore, with more and more deliverables being released under the
> cycle-with-intermediary model, pre-release and post-release approval
> rules are actually more and more of the same.
> 
> Proposal:
> 
> By default, let's just have $project-stable-maint control stable/*. We
> no longer create new ACLs for stable/$series every cycle, we no longer
> switch from $project-release control to $project-stable-maint control.
> The release team no longer does anything around stable branch ACLs or
> groups during the release cycle.
> 
> That way, the same group ends up being used to control stable/*
> pre-release and post-release. They were mostly the same people already:
> Release managers are a part of stable-maint-core, which is included in
> every $project-stable-maint anyway, so they retain control.
> 
> What that changes for you:
> 
> If you are part of $project-release but not part of
> $project-stable-maint, you'll probably want to join that team. If you
> review pre-release changes on a stable branch for a
> cycle-with-milestones deliverable, you will have to remember that the
> rules there are slightly different from stable branch approval rules. In
> doubt, do not approve, and ask.

It is more complex than just "joining that team" if the project follows
the stable policy. The stable team has to approve the additions, and does
reject people trying to join. I don't want to have a release where
someone has to self approve / ninja approve patches due to cores *not*
having the access rights that they previously had.

> But I don't like that! I prefer tight ACLs!
> 
> While we do not recommend it, every team can still specify more complex
> ACLs to control their stable branches. As long as the "Release Managers"
> group retains ability to approve changes pre-release (and
> stable-maint-core retains ability to approve changes post-release), more
> specific ACLs are fine.
> 
> Let me know if you have any comment, otherwise we'll start using that
> new process for the Rocky cycle (stable/rocky branch).
> 
> Thanks !
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc

2018-03-28 Thread Stephen Finucane
As noted last week [1], we're trying to move away from pbr's autodoc
feature as part of the new docs PTI. To that end, I've created
sphinxcontrib-apidoc, which should do what pbr was previously doing for
us via a Sphinx extension.

  https://pypi.org/project/sphinxcontrib-apidoc/

This works by reading some configuration from your documentation's
'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
longer need pbr to do this for us.
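
As a hedged sketch of what that configuration might look like (the option
names are as I understand them from the extension's documentation, and the
module path is just an example, so double-check against the project page
above):

    # doc/source/conf.py
    extensions = [
        'sphinx.ext.autodoc',
        'sphinxcontrib.apidoc',
    ]

    apidoc_module_dir = '../../myproject'   # path to the package (example name)
    apidoc_output_dir = 'reference/api'     # where generated .rst files land
    apidoc_excluded_paths = ['tests']       # paths to skip when generating
    apidoc_separate_modules = True          # one page per module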

I have pushed version 0.1.0 to PyPi already but before I add this to
global requirements, I'd like to ensure things are working as expected.
smcginnis was kind enough to test this out on glance and it seemed to
work for him but I'd appreciate additional data points. The
configuration steps for this extension are provided in the above link.
To test this yourself, you simply need to do the following:

   1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
  doc/requirements.txt file
   2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]'
  configuration from 'setup.cfg'
   3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build'
   4. Run 'tox -e docs'
   5. Profit?

Be sure to let me know if anyone encounters issues. If not, I'll be
pushing for this to be included in global requirements so we can start
the migration.

Cheers,
Stephen

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Where did the ARA logs go?

2018-03-28 Thread Sean McGinnis
On Wed, Mar 28, 2018 at 10:58:56AM +0200, András Kövi wrote:
> Hi,
> 
> Recently I noticed that ARA logs were published for all CI jobs. It
> seems like the reports do not contain these logs any more. I tried
> to research on what happened to them but couldn't find any info. Can
> someone please enlighten me about this change?
> 
> Thank you,
> Andras
> 

I believe the ARA logs are only captured on failing jobs.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-28 Thread Kaz Shinohara
Hi Ivan & Horizon folks


AFAIK, Horizon team had conclusion that you will add the specific
members to xstatic-core, correct ?
Can I ask you to add the following members ?
# All of tree are heat-dashboard core.

Kazunori Shinohara / ksnhr.t...@gmail.com #myself
Xinni Ge / xinni.ge1...@gmail.com
Keiichi Hikita / keiichi.hik...@gmail.com

Please give me a shout, if we are not on same page or any concern.

Regards,
Kaz


2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
> Hi Ivan, Akihiro,
>
>
> Thanks for your kind arrangement.
> Looking forward to hearing your decision soon.
>
> Regards,
> Kaz
>
> 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> HI Team,
>>
>> From my perspective, I'm OK both with #2 and #3 options. I agree that #4
>> could be too complicated for us. Anyway, we've got this topic on the meeting
>> agenda [1] so we'll discuss it there too. I'll share our decision after the
>> meeting.
>>
>> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>>
>>
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki  wrote:
>>>
>>> Hi Kaz and Ivan,
>>>
>>> Yeah, it is worth discussed officially in the horizon team meeting or the
>>> mailing list thread to get a consensus.
>>> Hopefully you can add this topic to the horizon meeting agenda.
>>>
>>> After sending the previous mail, I noticed another option. I see there are
>>> several options now.
>>> (1) Keep xstatic-core and horizon-core same.
>>> (2) Add specific members to xstatic-core
>>> (3) Add specific horizon-plugin core to xstatic-core
>>> (4) Split core membership into per-repo basis (perhaps too complicated!!)
>>>
>>> My current vote is (2) as xstatic-core needs to understand what is xstatic
>>> and how it is maintained.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>>
>>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :

 Hi Akihiro,


 Thanks for your comment.
 The background of my request to add us to xstatic-core comes from
 Ivan's comment in last PTG's etherpad for heat-dashboard discussion.

 https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
 Line135, "we can share ownership if needed - e0ne"

 Just in case, could you guys confirm unified opinion on this matter as
 Horizon team ?

 Frankly speaking I'm feeling the benefit to make us xstatic-core
 because it's easier & smoother to manage what we are taking for
 heat-dashboard.
 On the other hand, I can understand what Akihiro you are saying, the
 newly added repos belong to Horizon project & being managed by not
 Horizon core is not consistent.
 Also having exception might make unexpected confusion in near future.

 Eventually we will follow your opinion, let me hear Horizon team's
 conclusion.

 Regards,
 Kaz


 2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
 > Hi Kaz,
 >
 > These repositories are under horizon project. It looks better to keep
 > the
 > current core team.
 > It potentially brings some confusion if we treat some horizon plugin
 > team
 > specially.
 > Reviewing xstatic repos would be a small burden, so I think it would
 > work
 > without problem even if only horizon-core can approve xstatic reviews.
 >
 >
 > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
 >>
 >> Hi Ivan, Horizon folks,
 >>
 >>
 >> In total, 8 xstatic-** repos for heat-dashboard have now landed.
 >>
 >> In project-config for them, I've set same acl-config as the existing
 >> xstatic repos.
 >> It means only "xstatic-core" can manage the newly created repos on
 >> gerrit.
 >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as
 >> what horizon-core is doing ?
 >>
 >> xstatic-core
 >> https://review.openstack.org/#/admin/groups/385,members
 >>
 >> heat-dashboard-core
 >> https://review.openstack.org/#/admin/groups/1844,members
 >>
 >> Of course, we will surely touch only what we made, just would like to
 >> manage them smoothly by ourselves.
 >> In case we need to touch the other ones, will ask Horizon team for
 >> help.
 >>
 >> Thanks in advance.
 >>
 >> Regards,
 >> Kaz
 >>
 >>
 >> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
 >> > Hi Horizon Team,
 >> >
 >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin
 >> > option,
 >> >  and submitted a patch for it.
 >> > Could you please help to review the patch.
 >> >
 >> > https://bugs.launchpad.net/horizon/+bug/1755339
 >> > https://review.openstack.org/#/c/552259/
 >> >
 >> > Thank you very much.
 >> >
 >> > Best Regards,
 >> > Xinni
 >> >
 >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan 

Re: [openstack-dev] [stable][release] Remove complex ACL changes around releases

2018-03-28 Thread Sean McGinnis
On Wed, Mar 28, 2018 at 09:51:34AM -0400, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2018-03-26 15:33:03 +0200:
> > Hi!
> > 
> > TL;DR:
> > We used to do complex things with ACLs for stable/* branches around
> > releases. Let's stop doing that as it's not really useful anyway, and
> > just trust the $project-stable-maint teams to do the right thing.
> 
> +1 to not doing things we no longer consider useful
> 

+1 to keeping things simple.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Remove complex ACL changes around releases

2018-03-28 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2018-03-26 15:33:03 +0200:
> Hi!
> 
> TL;DR:
> We used to do complex things with ACLs for stable/* branches around
> releases. Let's stop doing that as it's not really useful anyway, and
> just trust the $project-stable-maint teams to do the right thing.

+1 to not doing things we no longer consider useful

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder bi-weekly meeting Public Cloud WG

2018-03-28 Thread Tobias Rydberg

Hi all,

Time again for a meeting for the Public Cloud WG - at our new time and 
channel - tomorrow at 1400 UTC in #openstack-publiccloud


Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg

Cheers,
Tobias Rydberg

--
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Team Weekly Meeting 2018.03.28

2018-03-28 Thread Zhipeng Huang
Hi Team,

Weekly meeting as usual, starting at 1400 UTC in #openstack-cyborg; initial
agenda as follows:

* Cyborg GPU support discussion
* Clock driver introduction by ZTE team
* Rocky dev discussion:
https://review.openstack.org/#/q/status:open+project:openstack/cyborg

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-28 Thread 少合冯
I have summarized some scenarios for FPGA device requests:
https://etherpad.openstack.org/p/cyborg-fpga-request-scenarios

Please add more scenarios so we can find the exceptions where placement
cannot satisfy the filtering and weighing.

IMHO, I prefer placement to do the filtering and weighing. If we have to
let Cyborg do the filtering and weighing, the Nova scheduler should call
Cyborg only once for weighing all hosts, even though the weighing is done
one host at a time.


2018-03-23 12:27 GMT+08:00 Nadathur, Sundar :

> Hi all,
> There seems to be a possibility of a race condition in the Cyborg/Nova
> flow. Apologies for missing this earlier. (You can refer to the proposed
> Cyborg/Nova spec
> 
> for details.)
>
> Consider the scenario where the flavor specifies a resource class for a
> device type, and also specifies a function (e.g. encrypt) in the extra
> specs. The Nova scheduler would only track the device type as a resource,
> and Cyborg needs to track the availability of functions. Further, to keep
> it simple, say all the functions exist all the time (no reprogramming
> involved).
>
> To recap, here is the scheduler flow for this case:
>
>- A request spec with a flavor comes to Nova conductor/scheduler. The
>flavor has a device type as a resource class, and a function in the extra
>specs.
>- Placement API returns the list of RPs (compute nodes) which contain
>the requested device types (but not necessarily the function).
>- Cyborg will provide a custom filter which queries Cyborg DB. This
>needs to check which hosts contain the needed function, and filter out the
>rest.
>- The scheduler selects one node from the filtered list, and the
>request goes to the compute node.
>
> For the filter to work, the Cyborg DB needs to maintain a table with
> triples of (host, function type, #free units). The filter checks if a given
> host has one or more free units of the requested function type. But, to
> keep the # free units up to date, Cyborg on the selected compute node needs
> to notify the Cyborg API to decrement the #free units when an instance is
> spawned, and to increment them when resources are released.
>
> Therein lies the catch: this loop from the compute node to controller is
> susceptible to race conditions. For example, if two simultaneous requests
> each ask for function A, and there is only one unit of that available, the
> Cyborg filter will approve both, both may land on the same host, and one
> will fail. This is because Cyborg on the controller does not decrement
> resource usage due to one request before processing the next request.
>
> This is similar to this previous Nova scheduling issue
> .
> That was solved by having the scheduler claim a resource in Placement for
> the selected node. I don't see an analog for Cyborg, since it would not
> know which node is selected.
>
> Thanks in advance for suggestions and solutions.
>
> Regards,
> Sundar
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra] Where did the ARA logs go?

2018-03-28 Thread András Kövi
Hi,

Recently I noticed that ARA logs were published for all CI jobs. It
seems like the reports do not contain these logs any more. I tried
to research on what happened to them but couldn't find any info. Can
someone please enlighten me about this change?

Thank you,
Andras

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core

2018-03-28 Thread Jean-Philippe Evrard
Hello,

Ahah, gate job breakages? You were the first to break them, but also
willing to step in to fix them as soon as you knew.
And that's the part I will remember the most.

You will be missed, Major. Your next team is lucky to have you!
It was a pleasure working with you. And the gifs, omagad! :)

JP

On 27 March 2018 at 12:11, Jesse Pretorius
 wrote:
> Ah Major, we shall definitely miss your readiness to help, positive attitude 
> and deep care for setenforce 1. Oh, and then there're the gifs... so many 
> gifs...
>
> While I am inclined to [1], I shall instead wish you well while you [2]. (
>
> [1] https://media.giphy.com/media/1BXa2alBjrCXC/giphy.gif
> [2] https://media.giphy.com/media/G6if3AWViiNdC/giphy.gif
>
>
> On 3/26/18, 2:07 PM, "Major Hayden"  wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hey there,
>
> As promised, I am stepping down from being an OpenStack-Ansible core 
> reviewer since I am unable to meet the obligations of the role with my new 
> job. :(
>
> Thanks to everyone who has mentored me along the way and put up with my 
> gate job breakages. I have learned an incredible amount about OpenStack, 
> Ansible, complex software deployments, and open source communities. I 
> appreciate everyone's support as I worked through the creation of the 
> ansible-hardening role as well as adding CentOS support for OpenStack-Ansible.
>
> - --
> Major Hayden
> -BEGIN PGP SIGNATURE-
>
> iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlq4774ACgkQc3BR4MEB
> H7E+gA/9HJEDibsQhdy191NbxbhF75wUup3gRDHhGPI6eFqHo/Iz8Q5Kv9Z9CXbo
> rkBGMebbGzoKwiLnKbFWr448azMJkj5/bTRLHb1eDQg2S2xaywP2L4e0CU+Gouto
> DucmGT6uLg+LKdQByYTB8VAHelub4DoxV2LhwsH+uYgWp6rZ2tB2nEIDTYQihhGx
> /WukfG+3zA99RZQjWRHmfnb6djB8sONzGIM8qY4qDUw9Xjp5xguHOU4+lzn4Fq6B
> cEpsJnztuEYnEpeTjynu4Dc8g+PX8y8fcObhcj+1D0NkZ1qW7sdX6CA64wuYOqec
> S552ej/fR5FPRKLHF3y8rbtNIlK5qfpNPE4UFKuVLjGSTSBz4Kp9cGn2jNCzyw5c
> aDQs/wQHIiUECzY+oqU1RHZJf9/Yq1VVw3vio+Dye1IMgkoaNpmX9lTcNw9wb1i7
> lac+fm0e438D+c+YZAttmHBCCaVWgKdGxH7BY84FoQaXRcaJ9y3ZoDEx6Rr8poBQ
> pK4YjUzVP9La2f/7S1QemX2ficisCbX+MVmAX9G4Yr9U2n98aXVWFMaF4As1H+OS
> zm9r9saoAZr6Z8BxjROjoClrg97RN1zkPseUDwMQwlJwF3V33ye3ib1dYWRr7BSm
> zAht+Jih/JE6Xtp+5UEF+6TBCYFVtXO8OHzCcac14w9dy1ur900=
> =fx64
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 
> Rackspace Limited is a company registered in England & Wales (company 
> registered number 03897010) whose registered office is at 5 Millington Road, 
> Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
> viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
> contain confidential or privileged information intended for the recipient. 
> Any dissemination, distribution or copying of the enclosed material is 
> prohibited. If you receive this transmission in error, please notify us 
> immediately by e-mail at ab...@rackspace.com and delete the original message. 
> Your cooperation is appreciated.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Remove complex ACL changes around releases

2018-03-28 Thread Jean-Philippe Evrard
LGTM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Next two IRC meetings are canceled

2018-03-28 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

Most of the Vitrage developers will not be available today and next Wednesday, so 
we’ll skip the next two IRC meetings.
We will meet again on Wednesday, April 11.

Thanks,
Ifat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev