On Thu, 13 Jul 2017, Balazs Gibizer wrote:

> /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1"
> but placement returns an empty response. Then the nova scheduler
> falls back to legacy behavior [4] and places the instance without
> considering the custom resource request.

As far as I can tell, at least one missing piece of the puzzle here
is that your MAGIC provider does not have the
'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
and MAGIC providers to be in the same aggregate; the MAGIC provider
needs to announce that its inventory is for sharing. The comments
here have a bit more on that:

    https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678
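
If it's useful, here's roughly what setting that trait looks like
with requests. This is only a sketch: the endpoint, token and
provider uuid are placeholders you'd replace with your own, and the
traits calls need microversion 1.6 or later.

    # Sketch: add MISC_SHARES_VIA_AGGREGATE to the MAGIC provider's
    # traits. The endpoint, token and uuid are all placeholders.
    import requests

    PLACEMENT = 'http://localhost:8778/placement'  # placeholder endpoint
    HEADERS = {
        'x-auth-token': 'ADMIN_TOKEN',             # placeholder token
        'openstack-api-version': 'placement 1.6',  # traits arrived in 1.6
    }
    rp_uuid = 'MAGIC_PROVIDER_UUID'                # placeholder uuid

    url = '%s/resource_providers/%s/traits' % (PLACEMENT, rp_uuid)
    # PUT replaces the entire set of traits, so merge with the current
    # ones and echo back the provider generation to avoid racing.
    current = requests.get(url, headers=HEADERS).json()
    traits = set(current['traits']) | {'MISC_SHARES_VIA_AGGREGATE'}
    resp = requests.put(url, headers=HEADERS, json={
        'traits': list(traits),
        'resource_provider_generation':
            current['resource_provider_generation'],
    })
    resp.raise_for_status()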

It's quite likely this is not well documented yet, as this style of
declaring that something is shared was a later development. The
support was initially added for GET /resource_providers and later
reused for GET /allocation_candidates:

    https://review.openstack.org/#/c/460798/

The other thing to be aware of is that GET /allocation_candidates is
in flight. It should be stable on the placement service side, but the
way the data is being used on the scheduler side is undergoing
change as we speak:

    https://review.openstack.org/#/c/482381/
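
For reference, the query itself is easy to repeat once things are
set up. A sketch with requests (same placeholder endpoint and token
as above; GET /allocation_candidates needs microversion 1.10):

    # Sketch: the same query you made with curl, via requests.
    import requests

    PLACEMENT = 'http://localhost:8778/placement'   # placeholder endpoint
    HEADERS = {
        'x-auth-token': 'ADMIN_TOKEN',              # placeholder token
        'openstack-api-version': 'placement 1.10',  # allocation_candidates
    }

    resp = requests.get(
        '%s/allocation_candidates' % PLACEMENT,
        headers=HEADERS,
        params={'resources': 'VCPU:1,MEMORY_MB:64,CUSTOM_MAGIC:512'},
    )
    data = resp.json()
    # With the trait and aggregate in place this should be non-empty:
    # each allocation request names the providers involved (the compute
    # node and the sharing MAGIC provider) and the resources that would
    # be claimed from each.
    for ar in data['allocation_requests']:
        print(ar['allocations'])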

> Then I tried to connect the compute provider and the MAGIC provider
> to the same aggregate via the placement API, but the above placement
> request still resulted in an empty response. See my exact steps in
> [5].

This still needs to happen (there's a rough sketch of the aggregate
call below the docs links), but you also need to put the trait
mentioned above on the MAGIC provider. The docs for that are in
progress on this review

    https://review.openstack.org/#/c/474550/

and a rendered version:

    http://docs-draft.openstack.org/50/474550/8/check/gate-placement-api-ref-nv/2d2a7ea//placement-api-ref/build/html/#update-resource-provider-traits
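
And here's a sketch of the aggregate association itself, with the
usual placeholder endpoint, token and uuids. At microversion 1.1,
where the aggregates calls arrived, the PUT body is a bare JSON list
of aggregate uuids:

    # Sketch: put the compute provider and the MAGIC provider in the
    # same placement aggregate. All identifiers are placeholders.
    import uuid
    import requests

    PLACEMENT = 'http://localhost:8778/placement'  # placeholder endpoint
    HEADERS = {
        'x-auth-token': 'ADMIN_TOKEN',             # placeholder token
        'openstack-api-version': 'placement 1.1',  # aggregates arrived in 1.1
    }

    agg_uuid = str(uuid.uuid4())  # any uuid can name the aggregate
    providers = ('COMPUTE_PROVIDER_UUID', 'MAGIC_PROVIDER_UUID')  # placeholders
    for rp in providers:
        url = '%s/resource_providers/%s/aggregates' % (PLACEMENT, rp)
        resp = requests.put(url, headers=HEADERS, json=[agg_uuid])
        resp.raise_for_status()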

> Am I still missing some environment setup on my side to make it work?
> Is the work in [1] incomplete?
> Are the missing pieces in [2] needed to make this use case work?

> If more implementation is needed, I can offer some help during the
> Queens cycle.

There's definitely more to do and your help would be greatly
appreciated. It's _fantastic_ that you are experimenting with this
and sharing what's happening.

> To make the above use case fully functional, I realized that I need
> a service that periodically updates the placement service with the
> state of the MAGIC resource, like the resource tracker in Nova. Are
> there any existing plans to create a generic service or framework
> that can be used for tracking and reporting purposes?

As you've probably discovered from your experiments with curl,
updating inventory is pretty straightforward (if you have a TOKEN)
so we decided to forego making a framework at this point. I had some
code long ago that demonstrated one way to do it, but it didn't get
any traction:

    https://review.openstack.org/#/c/382613/

That tried to be a simple Python script using requests that did the
bare minimum and would be amenable to cron jobs and other simple
scripts.
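
The core of such a script is small. Here's a sketch, with the same
placeholder endpoint, token and uuid as before, and a made-up
get_magic_total() standing in for however you actually measure the
resource:

    # Sketch of a cron-able updater: measure the MAGIC resource and
    # PUT the result as inventory. All identifiers are placeholders.
    import requests

    PLACEMENT = 'http://localhost:8778/placement'  # placeholder endpoint
    HEADERS = {
        'x-auth-token': 'ADMIN_TOKEN',             # placeholder token
        'openstack-api-version': 'placement 1.2',  # custom classes need >= 1.2
    }
    rp_uuid = 'MAGIC_PROVIDER_UUID'                # placeholder uuid

    def get_magic_total():
        # Hypothetical: however the deployment measures MAGIC capacity.
        return 512

    url = '%s/resource_providers/%s/inventories' % (PLACEMENT, rp_uuid)
    # Echo back the provider generation; a 409 response means another
    # writer got there first and the GET/PUT should be retried.
    gen = requests.get(
        url, headers=HEADERS).json()['resource_provider_generation']
    resp = requests.put(url, headers=HEADERS, json={
        'resource_provider_generation': gen,
        'inventories': {'CUSTOM_MAGIC': {'total': get_magic_total()}},
    })
    resp.raise_for_status()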

I hope some of the above is helpful. Jay, Ed, Sylvain or Dan may come
along with additional info.

--
Chris Dent                  ┬──┬◡ノ(° -°ノ)       https://anticdent.org/
freenode: cdent                                         tw: @anticdent