Hi Matt,
You are not online currently, so I thought I would respond to your question regarding 
the workflow via email.
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-10-18.log.html#t2017-10-18T20:29:02

mriedem    1. conductor asks scheduler for a host    20:29
mriedem    2. scheduler filter looks for ports with a qos policy and if found, gets allocation candidates for hosts that have a nested bw provider    20:29
mriedem    3. scheduler returns host to conductor    20:29
mriedem    4. conductor binds the port to the host    20:30
mriedem    5. the bound port profile has some allocation juju that nova proxies to placement as an allocation request for the port on the bw provider    20:30
mriedem    6. conductor sends to compute to build the instance    20:30
mriedem    7. compute activates the bound port    20:30
mriedem    8. compute plugs vifs    20:30
mriedem    9. profit?!


So my ideal workflow would be (a rough pseudo-code sketch of the conductor side follows the list):


1.  conductor calls allocate_for_instance 
https://github.com/openstack/nova/blob/1b45b530448c45598b62e783bdd567480a8eb433/nova/network/neutronv2/api.py#L814
in schedule_and_build_instances 
https://github.com/openstack/nova/blob/fce56ce8c04b20174cd89dfbc2c06f0068324b55/nova/conductor/manager.py#L1002
before calling self._schedule_instances. This gets or creates all neutron ports 
for the instance before we call the scheduler.

2.  conductor asks the scheduler for a host by calling 
self._schedule_instances, passing in the network_info object.

3.  scheduler extracts placement requests from the network_info object and adds 
them to the list it sends to placement.

4.       Scheduler applies standard filters to placement candidates.

5.  scheduler returns a host to the conductor after weighing.

6.       conductor binds the port to the host.

a.  if binding fails, retry early on the next host in the candidate set.

b.  continue until port binding succeeds, the retry limit is reached, or the 
candidates are exhausted.

7.       The conductor creates allocations for the host against all resource 
providers.

a.  when the port is bound, neutron will populate the resource request for 
bandwidth with the neutron agent uuid, which will be the resource provider uuid 
to allocate from.

8.  conductor sends to compute to build the instance, passing the 
allocations.

9.       compute plugs vifs

10.  compute activates the bound port, setting the allocation uuid on the port 
for all resource classes requested by neutron.

11.   excess of income over expenditure? :)
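To make the ordering above concrete, here is a rough Python-flavoured sketch of
the conductor-side flow. It is only an illustration of the proposed sequence,
not real code: the network_api/scheduler/placement/compute_rpc helpers,
bind_ports_to_host, claim_resources, the network_info keyword and the
resource_request / binding:profile keys are all made-up names. Only
allocate_for_instance corresponds to the method linked in step 1, and even that
call is simplified here.

class PortBindingFailed(Exception):
    """Stand-in for whatever error the network API raises on a failed bind."""


def _merge_allocations(host, bound_ports, request_spec):
    # compute-side resources (vcpu/ram/disk) plus whatever each bound
    # port asks for, keyed by resource provider uuid
    allocations = {host.uuid: dict(request_spec.resources)}
    for port in bound_ports:
        # hypothetical keys: provider uuid added by neutron at bind time
        rp_uuid = port['binding:profile']['resource_provider']
        allocations.setdefault(rp_uuid, {}).update(
            port.get('resource_request', {}))
    return allocations


def schedule_and_build_with_ports(network_api, scheduler, placement,
                                  compute_rpc, context, instance,
                                  request_spec, max_retries=3):
    # step 1: get-or-create all neutron ports before calling the scheduler
    network_info = network_api.allocate_for_instance(context, instance)

    # steps 2-5: the scheduler folds the ports' resource requests into its
    # placement query, filters and weighs, and returns candidate hosts
    candidates = scheduler.select_destinations(context, request_spec,
                                               network_info=network_info)

    for attempt, host in enumerate(candidates):
        if attempt >= max_retries:
            break
        # step 6: bind the ports to the chosen host; on failure move on
        # to the next candidate (6a/6b)
        try:
            bound_ports = network_api.bind_ports_to_host(context, instance,
                                                         host)
        except PortBindingFailed:
            continue

        # step 7: one claim covering compute *and* network resources; the
        # bandwidth provider uuid comes from the bound port (the agent uuid)
        allocations = _merge_allocations(host, bound_ports, request_spec)
        placement.claim_resources(context, instance.uuid, allocations)

        # steps 8-10: compute plugs vifs, activates the bound ports and
        # writes the allocation/consumer uuid back onto them
        compute_rpc.build_and_run_instance(context, instance, host,
                                           allocations=allocations)
        return

    raise RuntimeError('port binding failed on all candidates')

The point is just that port creation, scheduling, binding-with-retry and the
single allocation all happen in the conductor before anything is cast to
compute.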

The important things to note are: nova receives all requests for network 
resources from neutron in the port objects created at step 1; nova learns the 
backend resource provider for neutron at step 6, before it makes allocations; 
and nova then passes the allocations that were made back to neutron when it 
activates the port. A purely illustrative sketch of the port data this assumes 
is below.
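None of the field names or resource class names in this sketch are settled
neutron API; they are placeholders for the shape of data the flow assumes.

# what nova would see on a bound port in this proposal
bound_port = {
    'id': '<port-uuid>',
    'binding:host_id': 'compute-1',
    'binding:profile': {
        # the agent uuid neutron adds at bind time (step 6), used as the
        # resource provider to allocate from
        'resource_provider': '<neutron-agent-rp-uuid>',
    },
    # the resource request present from port creation (step 1), which the
    # scheduler extracts at step 3 and the conductor claims at step 7
    'resource_request': {
        'NET_BANDWIDTH_EGRESS_KBPS': 10000,
        'NET_BANDWIDTH_INGRESS_KBPS': 10000,
    },
}

# step 10: what nova would hand back when activating the port
port_activation_update = {
    'binding:profile': {'allocation': '<consumer-or-allocation-uuid>'},
}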

We have nova make the allocations for all resources to prevent any races between 
the conductor and neutron when updating the same nested resource provider tree 
(this was Jay's concern).
Neutron will create the inventories for bandwidth, but nova will allocate from 
them, as sketched below.
The intent is for nova not to need to know what the resources it is claiming 
are, but instead to be able to accept a set of additional resources to claim 
from neutron in a generic workflow, which we can hopefully reuse for other 
projects like cinder or cyborg in the future.
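As a sketch of what that single claim could look like (resource class names and
the exact payload layout are only examples, not a settled placement API):

claim = {
    'allocations': [
        {   # compute node provider: inventory created and owned by nova
            'resource_provider': {'uuid': '<compute-node-rp-uuid>'},
            'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 20},
        },
        {   # bandwidth provider: inventory created by neutron (the agent),
            # allocated from by nova on the port's behalf
            'resource_provider': {'uuid': '<neutron-agent-rp-uuid>'},
            'resources': {'NET_BANDWIDTH_EGRESS_KBPS': 10000,
                          'NET_BANDWIDTH_INGRESS_KBPS': 10000},
        },
    ],
}
# a single PUT /allocations/<instance-uuid> with a body along these lines
# claims both providers atomically, which is what avoids the race.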

Regards,
Sean
