Thanks, as always, for the excellent summary emails, Chris. Comments inline.
On 04/06/2018 01:54 PM, Chris Dent wrote:
This is "contract" style update. New stuff will not be added to the
# Most Important
There doesn't appear to be anything new with regard to most
important. That which was important remains important. At the
scheduler team meeting at the start of the week there was talk of
working out ways to trim the amount of work in progress by using the
nova priorities tracking etherpad to help sort things out:
Update provider tree and nested allocation candidates remain
critical basic functionality on which much else is based. With most
of provider tree done, the focus is really on nested allocation candidates.
Yup. And that series is deadlocked on a disagreement about whether
granular request groups should be "separate by default" (meaning: if you
request multiple groups of resources, the expectation is that they will
be served by distinct resource providers) or "unrestricted by default"
(meaning: if you request multiple groups of resources, those resources
may or may not be served by distinct resource providers).
For folks' information, the latter (unrestricted by default) is the
*existing* behaviour as outlined in the granular request groups spec:
Specifically, it is Requirement 3 on the above spec that is the primary
driver for this debate.
I currently have an action item to resolve this debate and move forward
with a decision, whatever that may be.
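For anyone not following the spec closely, a granular request uses
numbered groups of query parameters, something roughly like this
(resource classes and the trait below are invented for illustration):

    GET /allocation_candidates
        ?resources1=VCPU:2,MEMORY_MB:2048
        &resources2=SRIOV_NET_VF:1
        &required2=CUSTOM_PHYSNET_PUBLIC

The question is what the default should be when the requester says
nothing explicit: must group 1 and group 2 be satisfied by different
providers, or are they simply allowed to be?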
# What's Changed
Quite a bit of provider tree related code has merged.
Some negotiation happened with regard to when/if the fixes for
shared providers are going to happen. I'm not sure how that resolved;
if someone can follow up on that, that would be most excellent.
Sharing providers are in a weird place right now, agreed. We have landed
lots of code on the placement side of the house for handling sharing
providers. However, the nova-compute service still does not know about
the providers that share resources with it. This makes it impossible
right now to have a compute node with local disk storage as well as
shared disk resources.
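A rough sketch of the intended end state (not what the code does
today), assuming the usual sharing-provider modelling with the
MISC_SHARES_VIA_AGGREGATE trait:

    compute node RP : VCPU, MEMORY_MB, DISK_GB (local disk)
    shared store RP : DISK_GB, trait MISC_SHARES_VIA_AGGREGATE
    both providers associated with the same placement aggregate

Placement can already produce candidates that span the two, but until
the compute side learns to say "some of my disk lives over there",
all DISK_GB keeps getting reported as local to the compute node
provider.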
Most of the placement-req-filter series merged.
The spec for error codes in the placement API merged (code is in
progress and ready for review, see below).
* Eric and I discussed earlier in the week that it might be a good
  time to start an #openstack-placement IRC channel, for two main
  reasons: to break things up so as to limit the crosstalk in the
  often very busy #openstack-nova channel, and to lend a bit of
  momentum for going in that direction. Is this okay with everyone?
  If not, please say so; otherwise I'll make it happen soon.
Cool with me. I know Matt has wanted a separate placement channel for a
while now.
* Shared providers status?
(I really think we need to make this go. It was one of the
original value propositions of placement: being able to accurately
manage shared disk.)
Agreed, but you know.... NUMA. And CPU pinning. And vGPUs. And FPGAs.
And physnet network bandwidth scheduling. And... well, you get the idea.
* Placement related bugs not yet in progress: https://goo.gl/TgiPXb
15, -1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ
13, +1 on last week
These seem to be divided into three classes:
* Normal stuff
* Old stuff not getting attention or newer stuff that ought to be
abandoned because of lack of support
* Anything related to the client side of using nested providers
effectively. This apparently needs a lot of thinking. If there are
some general sticking points we can extract and resolve, that
might help move the whole thing forward?
* VMware: place instances on resource pool
* mirror nova host aggregates to placement API
* Proposes NUMA topology with RPs
* Account for host agg allocation ratio in placement
* Spec for isolating configuration of placement database
  (This has a strong +2 on it but needs one more.)
* Support default allocation ratios
* Spec on preemptible servers
* Handle nested providers for allocation candidates
* Add Generation to Consumers
* Proposes Multiple GPU types
* Standardize CPU resource tracking
* Network bandwidth resource provider
* Propose counting quota usage from placement
# Main Themes
## Update Provider Tree
Most of the main guts of this have merged (huzzah!). What's left are
some loose end details, and clean handling of aggregates:
## Nested providers in allocation candidates
Representing nested providers in the response to GET
/allocation_candidates is required to actually make use of all the
topology that update provider tree will report. That work is in
progress.
Note that some of this includes the up-for-debate shared handling.
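For context, the point is that a single candidate should be able to
span a root provider and its descendants. Using the existing
(microversion 1.12+) response shape, with uuids and amounts invented,
that would look roughly like:

    {
      "allocation_requests": [
        {"allocations": {
          "<compute node uuid>": {"resources": {"VCPU": 2,
                                                "MEMORY_MB": 2048}},
          "<child PF uuid>": {"resources": {"SRIOV_NET_VF": 1}}
        }}
      ],
      "provider_summaries": {...}
    }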
## Request Filters
As far as I can tell this is mostly done (yay!) but there is a loose
end: We merged an updated spec to support multiple member_of
parameters, but it's not clear anybody is currently owning that:
## Mirror nova host aggregates to placement
This makes it so some kinds of aggregate filtering can be done
"placement side" by mirroring nova host aggregates into placement
It's part of what will make the req filters above useful.
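As I understand the plan, the request filter will translate a host
aggregate constraint into a member_of parameter on the allocation
candidates query, along the lines of (aggregate uuids invented):

    GET /allocation_candidates
        ?resources=VCPU:1,MEMORY_MB:512,DISK_GB:10
        &member_of=in:<agg uuid 1>,<agg uuid 2>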
## Forbidden Traits
A way of expressing "I'd like resources that do _not_ have trait X".
This is ready for review:
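If memory serves, the proposed syntax is a '!' prefix on entries in
the existing required parameter, e.g. (trait names invented):

    GET /allocation_candidates
        ?resources=VCPU:1
        &required=CUSTOM_FAST_NIC,!CUSTOM_DEPRECATED_HW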
## Consumer Generations
This allows multiple agents to "safely" update allocations for a
single consumer. There is both a spec and code in progress for this:
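As I read the spec, this is the same generation-on-write pattern we
already use for provider inventories, applied to the consumer: the
allocation write carries the consumer generation the caller last saw,
and a stale generation gets a 409. A sketch (uuids invented; exact
field naming is whatever the spec settles on):

    PUT /allocations/<consumer uuid>
    {
      "allocations": {"<rp uuid>": {"resources": {"VCPU": 1}}},
      "project_id": "<project uuid>",
      "user_id": "<user uuid>",
      "consumer_generation": 3
    }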
Small bits of work on extraction continue on the
The spec for optional database handling got some nice support
but needs more attention:
Jay has declared that he's going to start work on the
I've posted a 6th in my placement container playground series:
Though not directly related to extraction, that experimentation has
exposed a lot of the areas where work remains to be done to make
placement independent of nova.
A recent experiment with shrinking the repo to just the placement
dir reinforced a few things we already know:
* The placement tests need their own base test to avoid 'from nova'
  imports.
* That will need to provide database and other fixtures (such as
  config and the self.flags feature).
* And, of course, eventually, config handling. The container
experiments above demonstrate just how little config placement
actually needs (by design, let's keep it that way).
This is a contract week, so nothing new has been added here, despite
there being new work. Part of the intent here is to make sure we are
queue-like where we can be. This list maintains its ordering from
week to week: newly discovered things are added to the end.
There are 14 entries here, -7 on last week.
That's good. However, some of the removals are the result of some
code changing topic (and having been listed here by topic). Some of
the oldest stuff at the top of the list has not moved.
* Purge comp_node and res_prvdr records during deletion of
* A huge pile of improvements to osc-placement
* Add compute capabilities traits (to os-traits)
* General policy sample file for placement
* Provide framework for setting placement error codes
* Get resource provider by uuid or name (osc-placement)
* placement: Make API history doc more consistent
* Handle agg generation conflict in report client
* Slugification utilities for placement names
* Remove usage of [placement]os_region_name
* Get rid of 406 paths in report client
* Add unit test for non-placement resize
* Address issues raised in adding member_of to GET /a-c
* cover migration cases with functional tests
2 runway slots open up this coming Wednesday, the 11th.