On 10 April 2017 at 11:31, <sfinu...@redhat.com> wrote:
> On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
>> Hi team,
>>
>> Could you please share your suggestions on this mail?
>>
>> I raised a blueprint in Nova for this:
>> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
>> and two RFEs on the ironic side:
>> https://bugs.launchpad.net/ironic/+bug/1680780 and
>> https://bugs.launchpad.net/ironic/+bug/1681320
>> for the discussion topic.
>
> If I understand you correctly, you want to be able to filter ironic
> hosts by available PCI device, correct? Barring any possibility that
> resource providers could do this for you yet, extending the nova ironic
> driver to use the PCI passthrough filter sounds like the way to go.
With ironic I thought everything is "passed through" by default, because there is no virtualization in the way. (I am possibly incorrectly assuming there are no BIOS tricks to turn off or re-assign PCI devices dynamically.) So I am assuming this is purely a scheduling concern.

If so, why are the new custom resource classes not good enough? "ironic_blue" could mean two GPUs and two 10Gb NICs, "ironic_yellow" could mean one GPU and one 1Gb NIC, etc.

Or is there something else that needs addressing here? Are you trying to describe what you get with each flavor to end users? Do you need to aggregate similar hardware in some way the resource class approach above does not cover?

Thanks,
johnthetubaguy

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
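[Editor's note] To make the resource-class suggestion in the reply above concrete, here is a toy Python sketch. This is not Nova or Placement code; the bundle names, node dicts, and helper function are invented for illustration. The point it demonstrates is that because a whole ironic node is always consumed, a custom resource class can collapse an entire hardware bundle (GPUs, NICs) into a single schedulable tag, reducing PCI-aware scheduling to exact class matching:

```python
# Toy illustration (not Nova code): each custom resource class names a
# fixed hardware bundle, so the scheduler never needs to count PCI
# devices -- it only matches the class requested by the flavor.
HARDWARE_BUNDLES = {
    "ironic_blue": {"gpus": 2, "nics_10g": 2},
    "ironic_yellow": {"gpus": 1, "nics_1g": 1},
}


def nodes_for_class(nodes, resource_class):
    """Return the nodes whose resource class matches the flavor's request."""
    return [n for n in nodes if n["resource_class"] == resource_class]


nodes = [
    {"uuid": "n1", "resource_class": "ironic_blue"},
    {"uuid": "n2", "resource_class": "ironic_yellow"},
]

# A flavor asking for "ironic_blue" implicitly gets both GPUs and both
# 10Gb NICs of the matched node, with no per-device accounting.
print([n["uuid"] for n in nodes_for_class(nodes, "ironic_blue")])  # ['n1']
```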