Thanks for resuming these, Chris, much appreciated. Comments inline.

On 07/07/2017 07:44 AM, Chris Dent wrote:

After 40 days in the desert I've returned with placement update 27.

Unfortunately, as far as I can tell, no one did any updates while I
was gone, so I don't have anything to crib from and may not have the
full story on what's going on. I suspect I will miss some relevant
reviews when making this list. If I have, please let me know.
Otherwise, let's begin:

Actually, you did a great job catching up, thank you!

# What Matters Most

Claims in the scheduler remain the key feature we'd like to get in
before feature freeze. After some hiccups over how to do it, making
requests of the new /allocation_candidates endpoint (more on which
below) is the way to go. Changes for that are starting at

     https://review.openstack.org/#/c/476631/

Well, the above is the only non-WIP patch in the series that hasn't yet merged. The remainder of the series:

https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/placement-allocation-requests

has merged in the last couple weeks. We've made a lot of progress on it.

The patch above is currently approved and on its way through the gate.

The patch after it:

https://review.openstack.org/#/c/476632/

is what I'm actively working on at the moment; it performs the resource claims within the scheduler by picking one of the allocation candidates returned from the call to GET /allocation_candidates and using that candidate as the HTTP request body of a PUT to /allocations/{consumer_uuid}.
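
To make the flow concrete, here's a rough sketch of that claim step. This is illustrative only, not the actual patch: auth handling is omitted, the endpoint is made up, and plain 'requests' stands in for the real placement client:

    import requests

    PLACEMENT = 'http://placement.example.com'  # illustrative endpoint

    def schedule_and_claim(consumer_uuid, resources):
        # Ask placement which providers can satisfy the request, e.g.
        # resources='VCPU:1,MEMORY_MB:2048,DISK_GB:20'
        resp = requests.get(PLACEMENT + '/allocation_candidates',
                            params={'resources': resources})
        body = resp.json()
        # Filter/weigh hosts using the provider summaries, then claim
        # by sending the chosen candidate straight back as the PUT body.
        chosen = body['allocation_requests'][0]  # selection logic elided
        put = requests.put(PLACEMENT + '/allocations/' + consumer_uuid,
                           json=chosen)
        put.raise_for_status()
        return chosen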

I'm hoping to have that patch un-WIP'd and ready for review by the end of day today.

# What's Changed

As mentioned, there's now a new URL in the placement API:
GET /allocation_candidates. It has a similar interface to GET
/resource_providers (in that you can filter the results by the kind
of resources required) but the information is formatted as a
two-tuple: a list of allocation requests and a dictionary of
resource provider information. The latter provides the initial
list of available resource providers and augments the process of
filtering and weighing those providers. The former provides a
collection of correctly formed JSON bodies that can be sent in a PUT
to /allocations/{consumer_uuid} when making a claim.

Yup, exactly. This was Dan Smith's idea and I really like it. It makes the returned allocation request body opaque (to systems other than the scheduler and placement service) so that things like the cell-level conductor and nova-compute service can literally pass a different allocation request JSON object to PUT /allocations/{consumer_uuid} and we get a clean, consistent "retry" mechanism that operators wanted to keep.
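
For anyone who hasn't looked at the new endpoint yet, the response shape is roughly the following (a sketch; field names are approximate, not a final API contract):

    # Approximate shape of a GET /allocation_candidates response.
    # Each entry in allocation_requests is usable, as-is, as the body
    # of a PUT /allocations/{consumer_uuid} call.
    candidates = {
        'allocation_requests': [
            {'allocations': [
                {'resource_provider': {'uuid': 'rp-uuid-1'},
                 'resources': {'VCPU': 1, 'MEMORY_MB': 2048}},
            ]},
            # ... one entry per viable candidate ...
        ],
        'provider_summaries': {
            'rp-uuid-1': {'resources': {
                'VCPU': {'used': 2, 'capacity': 16},
                'MEMORY_MB': {'used': 4096, 'capacity': 65536},
            }},
        },
    }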

I'm still a bit confused about where the concept of "alternatives"
that will be passed to the cell conductors fits into this, but I
guess that will become clearer soon.

Yes, this should become clearer soon. Essentially we'll be changing the scheduler's select_destinations() RPC method to return not just the destination hosts selected by the scheduler for a request but also one or more allocation candidates that matched the original launch request for resources. These JSON objects will be opaque to everything outside of the scheduler/placement services and, as noted above, will serve as the clean, consistent retry mechanism within the cells. Such a retry mechanism is otherwise missing in the cells v2 architecture because cells cannot do "upcalls" to the API layer.
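
In sketch form, the cell-side retry then amounts to walking that list of opaque bodies. All names here are illustrative, not the actual conductor code:

    import requests

    PLACEMENT = 'http://placement.example.com'  # illustrative, no auth

    def claim_with_retries(consumer_uuid, alternates):
        # 'alternates' is the ordered list of opaque allocation request
        # bodies handed down from select_destinations().
        for alloc_req in alternates:
            resp = requests.put(PLACEMENT + '/allocations/' + consumer_uuid,
                                json=alloc_req)
            if resp.status_code == 204:  # placement created the records
                return alloc_req
        raise RuntimeError('all alternates exhausted; reschedule needed')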

It also seems like this model creates a pretty strong conceptual
coupling between a thing which behaves like a nova-scheduler
(request, process, then claim resources).

Conceptual coupling between nova-scheduler and what? It looks like you cut off your question above mid-stream :)

As placement becomes useful to other services it will be important
to revisit some of these decisions and make sure the HTTP API is
not imposing too many behaviour requirements on the client side
(otherwise why bother having an HTTP API?). But that's for later.
Right now we're on a tight schedule trying to make sure that claims
land in Pike.

The concept of a claim is simply an atomic creation of related allocation records for a consumer against one or more resource providers. What are the behaviour requirements on the client side that worry you? Let's discuss those concerns here and work through them.
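
For example, a single claim might create allocation records against both a compute node and a shared storage pool in one shot, and placement accepts or rejects the whole body atomically. A sketch of such a PUT /allocations/{consumer_uuid} body (UUIDs made up):

    # Either every record below is created, or none are.
    claim_body = {
        'allocations': [
            {'resource_provider': {'uuid': 'compute-node-rp-uuid'},
             'resources': {'VCPU': 1, 'MEMORY_MB': 2048}},
            {'resource_provider': {'uuid': 'shared-storage-rp-uuid'},
             'resources': {'DISK_GB': 20}},
        ],
    }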

Because there's a bit of a dependency hierarchy with the various
threads of work going on in placement, the work on claims may punt
traits and/or nested resource providers further down the timeline.
Work continues on all three concurrently.

Right, all three continue concurrently.

Another change is that allocations now include project id and user
id information, and usages by those ids can be retrieved.

Yup, this is A Good Thing :) I look forward to the day we can replace the old os-simple-tenant-usage Compute API with something that's more efficient!
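
For the curious, retrieving those usages might look something like this (a sketch; the endpoint and auth handling are simplified):

    import requests

    PLACEMENT = 'http://placement.example.com'  # illustrative
    # Usage totals for one project, optionally narrowed to one user:
    resp = requests.get(PLACEMENT + '/usages',
                        params={'project_id': 'my-project-uuid',
                                'user_id': 'my-user-uuid'})
    print(resp.json())  # e.g. {'usages': {'VCPU': 4, 'MEMORY_MB': 8192}}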

Best,
-jay

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
     https://bugs.launchpad.net/nova/+bugs?field.tag=placement

# Main Themes

## Claims in the Scheduler

As described above, there's been a change in direction. That probably
means some or all of the code now at

     https://review.openstack.org/#/q/status:open+topic:bp/placement-claims

can be abandoned in favor of the work at

https://review.openstack.org/#/q/topic:bp/placement-allocation-requests+status:open

The main starting point for that is

     https://review.openstack.org/#/c/476631/

## Traits

The concept of traits now exists in the placement service, but
filtering resource providers on traits is in flux. With the advent
of /allocation_candidates as the primary scheduling interface, that
endpoint needs to support traits. Work for that is in a stack
starting at

     https://review.openstack.org/#/c/478464/

It's not yet clear if we'll want to support traits at both
/allocation_candidates and /resource_providers. I think we should,
but the immediate need is on /allocation_candidates.
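
For illustration, a trait-aware request might eventually look
something like this; the syntax is still in flux upstream, so
treat the 'required' parameter as hypothetical:

    import requests

    PLACEMENT = 'http://placement.example.com'  # illustrative
    # Hypothetical: candidates that satisfy the resource request AND
    # whose providers have a given trait.
    resp = requests.get(PLACEMENT + '/allocation_candidates',
                        params={'resources': 'VCPU:1,MEMORY_MB:2048',
                                'required': 'HW_CPU_X86_AVX2'})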

There's some proposed code to get the /resource_providers side started:

     https://review.openstack.org/#/c/474602/

## Shared Resource Providers

Support for shared resource providers is "built in" to the
/allocation_candidates concept and is one of the drivers for having
it.

## Nested Resource Providers

Work continues on nested resource providers.

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

These mostly just need more review, but they are behind claims in
priority.

## Docs

Lots of placement-related API docs have merged or are in progress:

https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref

Shortly there will be a real publishing job:

     https://review.openstack.org/#/c/480991/

and the tooling which tests that new handlers are documented
will be turned on:

     https://review.openstack.org/#/c/480924/

Some changes have been proposed to document the scheduler's
workflow, including visual aids, starting at:

     https://review.openstack.org/#/c/475810/

# Other Code/Specs

* https://review.openstack.org/#/c/472378/
    A proposed fix to using multiple config locations with the
    placement wsgi app. There's some active discussion on whether the
    solution in mind is the right solution, or even whether the bug is
    a bug (it is!).

* https://review.openstack.org/#/c/470578/
    Add functional test for local delete allocations

* https://review.openstack.org/#/c/427200/
    Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
    Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
    Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468797/
    Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
    Ensure shared RP maps with correct root RP
    (There's some discussion on this one about what the goal is and
    whether the approach is the right one.)

# End

That's all I've got this week; next week I should be a bit more
caught up and aware of any bits I've missed. No prize this week,
but maybe next week.



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

