[openstack-dev] [cyborg]No Meeting This Week

2018-05-28 Thread Zhipeng Huang
Hi team,

Given that people are still recovering from the summit and the Memorial Day
holiday, let's cancel the team weekly meeting this week as well.

In the meantime, feel free to communicate over IRC or email :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-28 Thread TETSURO NAKAMURA

Hi,

> Do I still understand correctly ? If yes, perfect, let's jump to my
> upgrade concern.

Yes, I think so. The old microversions look only at root providers and 
give up providing resources if the root provider itself doesn't have 
enough inventory for the requested resources. But the new microversion 
also looks at the root's descendants and checks whether the tree can 
provide the requested resources *collectively*.
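The difference can be sketched as a toy Python model (not the actual
placement code; the provider tree and resource amounts below are made up):

```python
# Toy model of allocation-candidate selection: the old behaviour checks
# only the root provider's inventory, while the new behaviour sums
# inventory over the whole provider tree.

def old_candidate(root, request):
    # Old microversions: the root RP must satisfy the request by itself.
    return all(root["inventory"].get(rc, 0) >= amount
               for rc, amount in request.items())

def new_candidate(root, request):
    # New microversion: the tree may satisfy the request *collectively*.
    def providers(rp):
        yield rp
        for child in rp.get("children", []):
            yield from providers(child)

    totals = {}
    for rp in providers(root):
        for rc, amount in rp["inventory"].items():
            totals[rc] = totals.get(rc, 0) + amount
    return all(totals.get(rc, 0) >= amount for rc, amount in request.items())

# Compute host with VCPUs on the root and VFs on a grandchild RP,
# mirroring the gabbi test referenced below.
tree = {
    "inventory": {"VCPU": 8},
    "children": [{
        "inventory": {},
        "children": [{"inventory": {"SRIOV_NET_VF": 4}, "children": []}],
    }],
}
request = {"VCPU": 2, "SRIOV_NET_VF": 1}

print(old_candidate(tree, request))  # False: the root alone has no VFs
print(new_candidate(tree, request))  # True: the tree provides both
```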


The tests from [1] would help you understand this, where VCPUs come from 
the root (compute host) and SRIOV_NET_VFs from its grandchild.


[1] https://review.openstack.org/#/c/565487/15/nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates.yaml@362


> In that situation, say for example with VGPU inventories, that would mean
> that the compute node would stop reporting inventories for its root RP, but
> would rather report inventories for at least one single child RP.
> In that model, do we reconcile the allocations that were already made
> against the "root RP" inventory ?

It would be nice to see Eric and Jay comment on this,
but if I'm not mistaken, when the virt driver stops reporting 
inventories for its root RP, placement would try to delete that 
inventory internally and raise an InventoryInUse exception if any 
allocations still exist against that resource.


```
update_from_provider_tree() (nova/compute/resource_tracker.py)
  + _set_inventory_for_provider() (nova/scheduler/client/report.py)
    + put() - PUT /resource_providers//inventories with new
      inventories (scheduler/client/report.py)
      + set_inventories() (placement/handler/inventory.py)
        + _set_inventory() (placement/objects/resource_provider.py)
          + _delete_inventory_from_provider()
            (placement/objects/resource_provider.py)
            -> raise exception.InventoryInUse
```
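If it helps, that refusal can be sketched as a tiny standalone Python model
(made-up data structures, not the actual placement internals):

```python
# Simplified sketch of why removing a root RP's inventory fails when
# allocations still exist: resource classes dropped from the new
# inventory set are refused if any allocation still consumes them.

class InventoryInUse(Exception):
    pass

def set_inventories(provider, new_inventories, allocations):
    removed = set(provider["inventory"]) - set(new_inventories)
    in_use = removed & {rc for alloc in allocations
                        for rc in alloc["resources"]}
    if in_use:
        raise InventoryInUse(sorted(in_use))
    provider["inventory"] = dict(new_inventories)

root = {"inventory": {"VGPU": 4}}
allocations = [{"consumer": "instance-1", "resources": {"VGPU": 1}}]

try:
    # The Rocky virt driver stops reporting VGPU on the root RP...
    set_inventories(root, {}, allocations)
except InventoryInUse as exc:
    # ...but the existing VGPU allocation blocks the update.
    print("refused:", exc)
```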

So we may need some trick, something like deleting the VGPU allocations 
before upgrading and recreating them against the newly created child 
provider after upgrading?


On 2018/05/28 23:18, Sylvain Bauza wrote:

Hi,

I already talked about this in a separate thread, but let's put it here too
for more visibility.

tl;dr: I suspect existing allocations are being lost when we upgrade a
compute service from Queens to Rocky, if those allocations are made against
inventories that are now provided by a child Resource Provider.


I started reviewing https://review.openstack.org/#/c/565487/ and bottom
patches to understand the logic with querying nested resource providers.

From what I understand, the scheduler will query Placement using the same
query but will get (thanks to a new microversion) not only allocation
candidates that are root resource providers but also any possible child.

If so, that's great as in a rolling upgrade scenario with mixed computes
(both Queens and Rocky), we will still continue to return both old RPs and
new child RPs if they can both serve the same resource class request.
Accordingly, allocations done by the scheduler will be made against the
corresponding Resource Provider, whether it's a root RP (old way) or a
child RP (new way).

Do I still understand correctly? If yes, perfect, let's jump to my upgrade
concern.
Now, consider the Queens->Rocky compute upgrade. If I'm an operator and I
start deploying Rocky on one compute node, it will provide to Placement API
new inventories that are possibly nested.
In that situation, say for example with VGPU inventories, that would mean
that the compute node would stop reporting inventories for its root RP, but
would rather report inventories for at least one child RP.
In that model, do we reconcile the allocations that were already made
against the "root RP" inventory? I don't think so, hence my question here.

Thanks,
-Sylvain






--
Tetsuro Nakamura 
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan





Re: [openstack-dev] [docs] Style guide for OpenStack documentation

2018-05-28 Thread Jeremy Stanley
On 2018-05-28 16:40:13 +0200 (+0200), Petr Kovar wrote:
[...]
> I'm all for openness but maintaining consistency is why style guides
> matter. Switching to a different style guide would require the following:
> 
> 1) agreeing on the right style guide,
> 2) reviewing our current style guidelines in doc-contrib-guide and updating
> them as needed so that they comply with the new style guide, and,
> 3) ideally, begin reviewing all of OpenStack docs for style changes.
[...]

I get this (and alluded to as much in my first message in this
thread, in fact). My point was that _when_ you're at the point of
evaluating a switch to a wholly different style guide, it would be
great to take such concerns into account. It also serves as a
cautionary tale to other newly forming projects (outside OpenStack)
who may at some point stumble across this discussion. Please choose
free tools at every opportunity; I sure wish we had in this case.
-- 
Jeremy Stanley




Re: [openstack-dev] [docs] Style guide for OpenStack documentation

2018-05-28 Thread Stefano Canepa
On 28 May 2018 at 15:40, Petr Kovar  wrote:

> On Thu, 17 May 2018 15:03:23 +
> Jeremy Stanley  wrote:
>
> > On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote:
>

%<---

> I'm all for openness but maintaining consistency is why style guides
> matter. Switching to a different style guide would require the following:
>
> 1) agreeing on the right style guide,
> 2) reviewing our current style guidelines in doc-contrib-guide and updating
> them as needed so that they comply with the new style guide, and,
> 3) ideally, begin reviewing all of OpenStack docs for style changes.
>
> Do we have a volunteer who would be interested in taking on these tasks? If
> not, we have to go for a quick fix. Either reference developerWorks, or, if
> that's a concern, remove references to external style guides
> altogether (and provide less information as a result). I prefer the former.
>
> Cheers,
> pk
>

Petr,
do we really need to reference another style guide?
How many times have people clicked on the link to the IBM guide?

If the first answer is yes and the second is hundreds of times,
then in my opinion your first option is the right one;
otherwise I'd go for the second.

My 2¢
Stefano

PS: a good free documentation style guide is the GNOME one.


Re: [openstack-dev] [docs] Style guide for OpenStack documentation

2018-05-28 Thread Petr Kovar
On Thu, 17 May 2018 15:03:23 +
Jeremy Stanley  wrote:

> On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote:
> > On Wed, 16 May 2018 17:05:15 +
> > Jeremy Stanley  wrote:
> > 
> > > On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote:
> > > [...]
> > > > I'd like to propose replacing the reference to the IBM Style Guide
> > > > with a reference to the developerWorks editorial style guide
> > > > (https://www.ibm.com/developerworks/library/styleguidelines/).
> > > > This lightweight version comes from the same company and is based
> > > > on the same guidelines, but most importantly, it is available for
> > > > free.
> > > [...]
> > > 
> > > I suppose replacing a style guide nobody can access with one
> > > everyone can (modulo legal concerns) is a step up. Still, are there
> > > no style guides published under an actual free/open license? If
> > > https://www.ibm.com/developerworks/community/terms/use/ is correct
> > > then even accidental creation of a derivative work might be
> > > prosecuted as copyright infringement.
> > 
> > 
> > We don't really plan on reusing content from that site, just referring to
> > it, so is it a concern?
> [...]
> 
> A style guide is a tool. Free and open collaboration needs free
> (libre, not merely gratis) tools, and that doesn't just mean
> software. If, down the road, you want an OpenStack Documentation
> Style Guide which covers OpenStack-specific concerns to quote or
> transclude information from a more thorough guide, that becomes a
> derivative work and is subject to the licensing terms for the guide
> from which you're copying.

Okay, but that's not what we want to do here.
 
> There are a lot of other parallels between writing software and
> writing prose here beyond mere intellectual property concerns too.
> Saying that OpenStack Documentation is free and open, but then
> endorsing an effectively proprietary guide as something its authors
> should read and follow, sends a mixed message as to our position on
> open documentation (as a style guide is of course also documentation
> in its own right). On the other hand, recommending use of a style
> guide which is available under a free/libre open source license or
> within the public domain resonates with our ideals and principles as
> a community, serving only to strengthen our position on openness in
> all its endeavors (including documentation).

I'm all for openness but maintaining consistency is why style guides
matter. Switching to a different style guide would require the following:

1) agreeing on the right style guide,
2) reviewing our current style guidelines in doc-contrib-guide and updating
them as needed so that they comply with the new style guide, and,
3) ideally, reviewing all of OpenStack docs for style changes.

Do we have a volunteer who would be interested in taking on these tasks? If
not, we have to go for a quick fix. Either reference developerWorks, or, if
that's a concern, remove references to external style guides
altogether (and provide less information as a result). I prefer the former.

Cheers,
pk



[openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-28 Thread Sylvain Bauza
Hi,

I already talked about this in a separate thread, but let's put it here too
for more visibility.

tl;dr: I suspect existing allocations are being lost when we upgrade a
compute service from Queens to Rocky, if those allocations are made against
inventories that are now provided by a child Resource Provider.


I started reviewing https://review.openstack.org/#/c/565487/ and bottom
patches to understand the logic with querying nested resource providers.
From what I understand, the scheduler will query Placement using the same
query but will get (thanks to a new microversion) not only allocation
candidates that are root resource providers but also any possible child.

If so, that's great as in a rolling upgrade scenario with mixed computes
(both Queens and Rocky), we will still continue to return both old RPs and
new child RPs if they can both serve the same resource class request.
Accordingly, allocations done by the scheduler will be made against the
corresponding Resource Provider, whether it's a root RP (old way) or a
child RP (new way).

Do I still understand correctly? If yes, perfect, let's jump to my upgrade
concern.
Now, consider the Queens->Rocky compute upgrade. If I'm an operator and I
start deploying Rocky on one compute node, it will provide to Placement API
new inventories that are possibly nested.
In that situation, say for example with VGPU inventories, that would mean
that the compute node would stop reporting inventories for its root RP, but
would rather report inventories for at least one child RP.
In that model, do we reconcile the allocations that were already made
against the "root RP" inventory? I don't think so, hence my question here.

Thanks,
-Sylvain


Re: [openstack-dev] [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-28 Thread Sylvain Bauza
On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann 
wrote:

> I've written a nova-manage placement heal_allocations CLI [1] which was a
> TODO from the PTG in Dublin as a step toward getting existing
> CachingScheduler users to roll off that (which is deprecated).
>
> During the CERN cells v1 upgrade talk it was pointed out that CERN was
> able to go from placement-per-cell to centralized placement in Ocata
> because the nova-computes in each cell would automatically recreate the
> allocations in Placement in a periodic task, but that code is gone once
> you're upgraded to Pike or later.
>
> In various other talks during the summit this week, we've talked about
> things during upgrades where, for instance, if placement is down for some
> reason during an upgrade, a user deletes an instance and the allocation
> doesn't get cleaned up from placement so it's going to continue counting
> against resource usage on that compute node even though the server instance
> in nova is gone. So this CLI could be expanded to help clean up situations
> like that, e.g. provide it a specific server ID and the CLI can figure out
> if it needs to clean things up in placement.
>
> So there are plenty of things we can build into this, but the patch is
> already quite large. I expect we'll also be backporting this to stable
> branches to help operators upgrade/fix allocation issues. It already has
> several things listed in a code comment inline about things to build into
> this later.
>
> My question is, is this good enough for a first iteration or is there
> something severely missing before we can merge this, like the automatic
> marker tracking mentioned in the code (that will probably be a non-trivial
> amount of code to add). I could really use some operator feedback on this
> to just take a look at what it already is capable of and if it's not going
> to be useful in this iteration, let me know what's missing and I can add
> that in to the patch.
>
> [1] https://review.openstack.org/#/c/565886/
>
>

It does sound to me like a good way to help operators.

That said, given I'm now working on using Nested Resource Providers for
VGPU inventories, I wonder about a possible upgrade problem with VGPU
allocations. Given that:
 - in Queens, VGPU inventories are on the root RP (i.e. the compute node
RP), but
 - in Rocky, VGPU inventories will be on children RPs (i.e. against a
specific VGPU type),

then if we have VGPU allocations in Queens, when upgrading to Rocky, we
should maybe recreate the allocations against a specific other inventory?

Do you see the problem with upgrading by creating nested RPs?
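The reconciliation being asked for can be sketched in a few lines of toy
Python (not the placement API; the provider names and VGPU-type child RP
name below are made up for illustration):

```python
# Toy sketch of healing a Queens-era allocation after VGPU inventory
# moves from the root RP to a per-type child RP: each resource class is
# re-pointed at whichever provider now holds inventory for it.

def heal(consumer, resources, inventories):
    """Split a consumer's resources across the providers that now
    hold inventory for each resource class."""
    healed = {}
    for rc, amount in resources.items():
        rp = next(p for p, inv in inventories.items() if rc in inv)
        healed.setdefault(rp, {})[rc] = amount
    return healed

inventories = {
    "compute-node": {"VCPU": 8},              # root RP (Rocky)
    "compute-node_VGPU_type-a": {"VGPU": 4},  # new child RP (made-up name)
}
# The Queens-era allocation had both classes against the root RP.
print(heal("instance-1", {"VCPU": 2, "VGPU": 1}, inventories))
# {'compute-node': {'VCPU': 2}, 'compute-node_VGPU_type-a': {'VGPU': 1}}
```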


> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-28 Thread Bogdan Dobrelya

On 5/28/18 11:43 AM, Bogdan Dobrelya wrote:

On 5/25/18 6:40 PM, Tristan Cacqueray wrote:

Hello Bogdan,

Perhaps this has something to do with jobs evaluation order, it may be
worth trying to add the dependencies list in the project-templates, like
it is done here for example:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 



It is also easier to read dependencies from the pipelines definition imo.


Thank you!
It seems that in most places tripleo uses pre-defined templates, see 
[0]. And templates cannot import dependencies [1] :(


Here is a zuul story for that [2]

[2] https://storyboard.openstack.org/#!/story/2002113



[0] 
http://codesearch.openstack.org/?q=-%20project%3A&i=nope&files=&repos=tripleo-ci,tripleo-common,tripleo-common-tempest-plugin,tripleo-docs,tripleo-ha-utils,tripleo-heat-templates,tripleo-image-elements,tripleo-ipsec,tripleo-puppet-elements,tripleo-quickstart,tripleo-quickstart-extras,tripleo-repos,tripleo-specs,tripleo-ui,tripleo-upgrade,tripleo-validations 



[1] https://review.openstack.org/#/c/568536/4
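For reference, the workaround Tristan points at attaches the dependencies
directly to the jobs listed under the project's own pipeline stanza rather
than in a template, roughly like this (job names are illustrative):

```yaml
# Illustrative zuul.d/projects.yaml fragment: dependencies are declared
# on the jobs listed directly under the project's check pipeline, since
# project-templates cannot carry them.
- project:
    name: openstack/tripleo-common
    check:
      jobs:
        - tripleo-ci-centos-7-undercloud-containers
        - tripleo-ci-centos-7-containers-multinode:
            dependencies:
              - tripleo-ci-centos-7-undercloud-containers
```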



-Tristan

On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote:
Job dependencies seem to be ignored by zuul; see jobs [0], [1], [2] 
started simultaneously, while I expected them to run one by one. 
According to patch 568536 [3], [1] is a dependency for [2] and [3].

The same can be observed for the remaining patches in the topic [4].
Is that a bug, or have I misunderstood what zuul job dependencies actually do?

[0] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ 

[1] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ 

[2] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ 


[3] https://review.openstack.org/#/c/568536/
[4] 
https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) 








--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-28 Thread Bogdan Dobrelya

On 5/25/18 6:40 PM, Tristan Cacqueray wrote:

Hello Bogdan,

Perhaps this has something to do with jobs evaluation order, it may be
worth trying to add the dependencies list in the project-templates, like
it is done here for example:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 



It is also easier to read dependencies from the pipelines definition imo.


Thank you!
It seems that in most places tripleo uses pre-defined templates, see 
[0]. And templates cannot import dependencies [1] :(


[0] 
http://codesearch.openstack.org/?q=-%20project%3A&i=nope&files=&repos=tripleo-ci,tripleo-common,tripleo-common-tempest-plugin,tripleo-docs,tripleo-ha-utils,tripleo-heat-templates,tripleo-image-elements,tripleo-ipsec,tripleo-puppet-elements,tripleo-quickstart,tripleo-quickstart-extras,tripleo-repos,tripleo-specs,tripleo-ui,tripleo-upgrade,tripleo-validations


[1] https://review.openstack.org/#/c/568536/4



-Tristan

On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote:
Job dependencies seem to be ignored by zuul; see jobs [0], [1], [2] 
started simultaneously, while I expected them to run one by one. 
According to patch 568536 [3], [1] is a dependency for [2] and [3].

The same can be observed for the remaining patches in the topic [4].
Is that a bug, or have I misunderstood what zuul job dependencies actually do?

[0] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ 

[1] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ 

[2] 
http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ 


[3] https://review.openstack.org/#/c/568536/
[4] 
https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) 





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando
