On 06/12/2017 02:17 PM, Edward Leafe wrote:
On Jun 12, 2017, at 10:20 AM, Jay Pipes <jaypi...@gmail.com> wrote:

The RP uuid is part of the provider: the compute node's uuid, and (after https://review.openstack.org/#/c/469147/ merges) the PCI device's uuid. So in the code that passes the PCI device information to the scheduler, we could add that new uuid field, and then the scheduler would have the information it needs to a) select the best fit and b) claim it with that specific uuid. Same for all the other nested/shared devices.

How would the scheduler know that a particular SRIOV PF resource provider UUID is on a particular compute node unless the placement API returns information indicating that SRIOV PF is a child of a particular compute node resource provider?
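(To make that concrete, here is the sort of response I would want placement to be able to return once nested providers exist -- a hand-written sketch shown as a Python dict, not the current API; the parent_provider_uuid field and all names/values here are illustrative:)

    # Sketch of a nested-providers GET /resource_providers response,
    # shown as a Python dict. NOT the current placement API:
    # parent_provider_uuid is the proposed parent linkage.
    {
        "resource_providers": [
            {"uuid": "<cn-uuid>",              # compute node RP
             "name": "compute-1",
             "parent_provider_uuid": None},
            {"uuid": "<pf-uuid>",              # SRIOV PF RP, child of node
             "name": "compute-1_pf0",
             "parent_provider_uuid": "<cn-uuid>"},
        ]
    }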

Because PCI devices are per compute node. The HostState object populates itself from the compute node here:

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L224-L225

If we add the UUID information to the PCI device, as the above-mentioned patch proposes, then when the scheduler selects a particular compute node that has the device, it can use the PCI device's UUID to claim it. I thought that having that information in the scheduler was what that patch was all about.
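(Illustrative sketch only: this assumes each PCI device dict that reaches the scheduler grows a 'uuid' key per that patch; the helper name and call pattern are made up, not Nova code:)

    # Pick a matching PF on the selected host by the new per-device
    # uuid. Assumes device dicts now carry a 'uuid' key (per the patch).
    def select_pf_uuid(pci_devices, vendor_id, product_id):
        """Return the uuid of a matching type-PF device, or None."""
        for dev in pci_devices:
            if (dev.get('dev_type') == 'type-PF'
                    and dev.get('vendor_id') == vendor_id
                    and dev.get('product_id') == product_id):
                # The device uuid doubles as the resource provider uuid
                # that the claim would be written against.
                return dev.get('uuid')
        return None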

I would hope that, over time, there'd be little to no need for the scheduler to read either the compute_nodes or the pci_devices tables (which, btw, live in the cell databases). The information the scheduler currently keeps in the host state objects should eventually be constructed primarily from the results returned by the placement API, instead of the current situation where the scheduler must make multiple calls to multiple cell databases to fill that information in.
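(Roughly like this -- a sketch only, assuming a keystoneauth1 Session/Adapter already pointed at the placement endpoint and the in-progress allocation-candidates API; the resource string and variable names are illustrative:)

    # Build scheduling state from one placement call instead of
    # per-cell reads of compute_nodes/pci_devices.
    resp = placement.get(
        '/allocation_candidates',
        params={'resources': 'VCPU:2,MEMORY_MB:4096,SRIOV_NET_VF:1'},
        headers={'OpenStack-API-Version': 'placement 1.10'})
    body = resp.json()
    allocation_requests = body['allocation_requests']  # claimable sets
    provider_summaries = body['provider_summaries']    # per-RP inventory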

Best,
-jay
