On 23/09/2016 18:41, Jay Pipes wrote:
Hi Stackers,
In Newton, we had a major goal of having Nova sending inventory and
allocation records from the nova-compute daemon to the new placement
API service over HTTP (i.e. not RPC). I'm happy to say we achieved
this goal. We had a stretch goal from the mid-cycle of implementing
the custom resource class support. I'm sorry to say that we did not
reach this goal, though Ironic did indeed get its part merged, and we
should be able to complete the Nova side of this work before the summit.
Through the hard work of many folks [1] we were able to merge code
that added a brand new REST API service (/placement) with endpoints
for read/write operations against resource providers, inventories,
allocations, and usage records. We were able to get patches merged
that modified the resource tracker in the nova-compute to write the
compute node's inventory and allocation records to the placement API
in a fashion that avoids requiring any operator action to keep the
nova-computes up and running.
Thanks, Jay, for sharing your views with us again.
For Ocata AND BEYOND, here are a number of rough priorities and goals
that we need to work on...
1. Shared storage properly implemented
To fulfill the original use case around accurate reporting of shared
resources, we need to complete a few subtasks:
a) complete the aggregates/ endpoints in the placement API so that
resource providers can be associated with aggregates
b) have the scheduler reporting client tracking more than just the
resource provider for the compute node
I saw some patches about that. Let me know which changes they are so I
can review them.
For the reporting client part, let me know if you need some help.
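To make a) a bit more concrete, here is a toy sketch of what the client side of associating a resource provider with aggregates could look like. The URL and payload shapes below are assumptions for discussion, not the merged placement API:

```python
# Hypothetical sketch: building the request that would associate a
# resource provider with a set of aggregates in the placement API.
# The URL and payload shapes here are assumptions, not the real API.

import json


def build_set_aggregates_request(rp_uuid, aggregate_uuids):
    """Return (method, path, body) for a PUT to an aggregates endpoint."""
    path = "/resource_providers/%s/aggregates" % rp_uuid
    body = json.dumps({"aggregates": sorted(aggregate_uuids)})
    return ("PUT", path, body)
```

The scheduler reporting client would then issue such a request against the /placement service for each provider it tracks, which is how a shared-storage provider could be tied to the same aggregate as its compute nodes.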
2. Custom resource classes
This actually isn't all that much work, but just needs some focus. We
need the following done in this area:
a) (very simple) REST API added to the placement API for GET/PUT
resource class names
b) modify the ResourceClass Enum field to be a StringField -- which is
wire-compatible with Enum -- and add some code on each side of the
client/server communication that caches the standard resource classes
as constants that Nova and placement code can share
c) modify the Ironic virt driver to pass the new node_class attribute
on nodes into the resource tracker and have the resource tracker
create resource provider records for each Ironic node with a single
inventory record for each of those resource providers for the node class
d) modify the resource tracker to track the allocation of instances to
resource providers
So, first, apologies about that: I said during the midcycle that I
could implement the above REST API, but my availability in August was
very short and I ended up having no time for it. Now that we're in
September, I can resume my implementation of a) and b).
That said, the spec still needs to be approved for Ocata.
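As a strawman for a) and b) above, the server could validate resource class names on PUT, distinguishing the shared standard constants from operator-defined custom classes. The CUSTOM_ prefix convention and the exact pattern below are assumptions based on the spec discussion, not a merged implementation:

```python
# Sketch of server-side validation for a PUT /resource_classes/{name}
# call. The CUSTOM_ prefix and the pattern are assumptions for
# discussion, not the merged placement code.

import re

# Standard classes would be shared constants between Nova and placement.
STANDARD_CLASSES = frozenset(["VCPU", "MEMORY_MB", "DISK_GB"])

_CUSTOM_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")


def is_valid_resource_class(name):
    """A name is valid if it is a standard class or a CUSTOM_ one."""
    return name in STANDARD_CLASSES or bool(_CUSTOM_RE.match(name))
```

The point of the StringField change in b) is exactly this split: the standard names stay wire-compatible with the old Enum values, while anything matching the custom pattern passes through untouched.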
3. Integration of Nova scheduler with Placement API
We would like the Nova scheduler to be able to query the placement API
for quantitative information in Ocata. So, code will need to be pushed
that adds a call to the placement API for resource provider UUIDs that
meet a given request for some amount of resources. This result will
then be used to filter a request in the Nova scheduler for ComputeNode
objects to satisfy the qualitative side of the request.
We tried to discuss that during the midcycle, but it seemed we had some
confusion about what would call placement and where.
From my perspective, the current scheduler would call out to the
placement API (or even use the Nova objects directly) during the
HostManager call, so that fewer hosts would be passed to the filters.
Thoughts?
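To make that flow concrete, here is a toy sketch of the quantitative pre-filtering step, with an in-memory stand-in for the placement query. All names and shapes here are illustrative only, not the real scheduler or placement interfaces:

```python
# Toy sketch: ask "placement" for resource providers that can fit the
# requested amounts, then keep only the hosts whose provider UUID made
# the cut, so the qualitative filters run over a smaller host list.
# Everything here is illustrative, not the real interfaces.


def providers_with_capacity(inventories, requested):
    """Stand-in for the placement API query: return provider UUIDs
    whose inventory covers every requested resource amount."""
    return {
        rp_uuid
        for rp_uuid, inv in inventories.items()
        if all(inv.get(rc, 0) >= amount for rc, amount in requested.items())
    }


def filter_hosts(hosts, inventories, requested):
    """Keep only hosts backed by a provider that can fit the request."""
    allowed = providers_with_capacity(inventories, requested)
    return [h for h in hosts if h["rp_uuid"] in allowed]
```

Whether the first step is an HTTP call to /placement or a direct query through the Nova objects is exactly the open question above; the shape of the filtering afterwards stays the same either way.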
4. Progress on qualitative request components (traits)
A number of things can be done in this area:
a) get os-traits interface stable and include all catalogued
standardized trait strings
b) agree on schema in placement DB for storing and querying traits
against resource providers
Given Ocata is a short cycle, and given the current specs are not yet
fully discussed, I wonder whether we would have time to implement the
above.
I don't want to say we won't do it, just that it looks like a stretch
goal for Ocata. At the very least, I think the spec discussion is a
priority for Ocata.
5. Nested resource providers
Things like SR-IOV PCI devices are actually resource providers that
are embedded within another resource provider (the compute node
itself). In order to tag things like SR-IOV PFs or VFs with a set of
traits, we need to have discovery code run on the compute node that
registers things like SR-IOV PF/VFs or SR-IOV FPGAs as nested resource
providers.
Some steps needed here:
a) agreement on schema for placement DB for representing this nesting
relationship
b) write the discovery code in nova-compute for adding these resource
providers to the placement API when found
Again, that looks like a stretch goal to me, given how little we have
discussed it so far. But sure, Ocata would be a fine time to start that
discussion.
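For the schema discussion in a), one simple candidate is a parent pointer on each provider row, so a compute node is the root and its SR-IOV PFs/VFs hang underneath. A toy sketch of walking such a tree (the parent-pointer schema is purely an assumption, one possible design among others):

```python
# Toy model of nested resource providers via a parent pointer: each
# provider UUID maps to its parent's UUID (None for a root such as the
# compute node). The parent-pointer schema is an assumption for
# discussion, not a decided design.


def children_of(providers, root_uuid):
    """Return UUIDs of all providers nested (directly or transitively)
    under the given root, e.g. all SR-IOV devices on a compute node."""
    found = set()
    frontier = {root_uuid}
    while frontier:
        frontier = {
            uuid
            for uuid, parent in providers.items()
            if parent in frontier
        }
        found |= frontier
    return found
```

The discovery code in b) would then be what populates such a structure, registering each PF/VF it finds as a child provider of the compute node.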
Anyway, in conclusion, we've got a ton of work to do and I'm going to
spend time before the summit trying to get good agreement on direction
and proposed implementation for a number of the items listed above.
Hopefully by mid-October we'll have a good idea of assignees for
various work and what is going to be realistic to complete in Ocata.
Best,
-jay
[1] I'd like to personally thank Chris Dent, Dan Smith, Sean Dague, Ed
Leafe, Sylvain Bauza, Andrew Laski, Alex Xu and Matt Riedemann for
tolerating my sometimes lengthy absences and for pushing through
communication breakdowns resulting from my inability to adequately
express my ideas or document agreed solutions.
Heh, thanks buddy. No worries about your absences, we had an awesome Dan
helping us :-)
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev