----- Original Message -----
> From: "Irena Berezovsky" <irenab....@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>,
> 
> On Thu, Feb 5, 2015 at 9:01 PM, Steve Gordon <sgor...@redhat.com> wrote:
> 
> > ----- Original Message -----
> > > From: "Przemyslaw Czesnowicz" <przemyslaw.czesnow...@intel.com>
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > >
> > > Hi
> > >
> > > > 1) If the device is a "normal" PCI device, but is a network card, am I
> > > > still able to
> > > > take advantage of the advanced syntax added circa Juno to define the
> > > > relationship between that card and a given physical network so that the
> > > scheduler can place accordingly (and does this still use the ML2 mech
> > > driver for
> > > SR-IOV even though it's a "normal" device)?
> > >
> > > Actually, libvirt won't allow using "normal" PCI devices as network
> > > interfaces in a VM.
> > > The following error is thrown by libvirt 1.2.9.1:
> > > libvirtError: unsupported configuration: Interface type hostdev is
> > currently
> > > supported on SR-IOV Virtual Functions only
> > >
> > > I don't know why libvirt prohibits that, but we should prohibit it on
> > > the OpenStack side as well.
> >
> > This is true for <interface type="hostdev"> style configuration; "normal"
> > PCI devices are still valid in libvirt for passthrough using <hostdev>,
> > though. The former was specifically created for handling passthrough of
> > VFs, while the latter is the more generic passthrough functionality and
> > what was used with the original PCI passthrough functionality introduced
> > circa Havana.
> >
> > I guess what I'm really asking in this particular question is what the
> > intersection of these two implementations is - if any - as on face value
> > it seems that to pass through a physical PCI device I must use the older
> > syntax and thus can't have the scheduler be aware of its external network
> > connectivity.
> >
> Supporting "normal" PCI device passthrough for networking in an SR-IOV-like
> way would require new VIF driver support for generating the hostdev style
> guest XML for the device, plus some call to set the MAC address and VLAN
> tag.
> 
> >
> > > > 2) There is no functional reason from a Libvirt/Qemu perspective that I
> > > > couldn't
> > > > pass through a PF to a guest, and some users have expressed surprise
> > to me
> > > > when they have run into this check in the Nova driver. I assume in the
> > > > initial
> > > > implementation this was prevented to avoid a whole heap of fun
> > additional
> > > > logic
> > > > that is required if this is allowed (e.g. check that no VFs from the PF
> > > > being
> > > > requested are already in use, remove all the associated VFs from the
> > pool
> > > > when
> > > > assigning the PF, who gets allowed to use PFs versus VFs etc.). Am I
> > > > correct here
> > > > or is there another reason that this would be undesirable to allow in
> > > > future -
> > > > assuming such checks can also be designed - that I am missing?
> > > >
> > > I think that is correct. But even if the additional logic were
> > > implemented, it
> > > wouldn't work because of how libvirt behaves currently.
> >
> > Again though, in the code we have a distinction between a physical device
> > (as I was asking about in Q1) and a physical function (as I am asking about
> > in Q2), and similarly whether libvirt allows it or not depends on how you
> > configure the guest XML. Though I wouldn't be surprised if the PF case
> > is in fact not allowed in libvirt (even with <hostdev>), it is again
> > important to consider it as distinctly separate from the physical device
> > case, which we DO currently allow in the code I'm asking about.
> >
> I think what you suggest is not difficult to support, but the current (since
> Juno) PCI device passthrough for networking is all about SR-IOV PCI device
> passthrough. As I mentioned, supporting "normal" PCI devices will require a
> libvirt VIF driver adjustment. I think it's possible to make this work with
> the existing neutron ML2 SR-IOV mechanism driver.

Understood - I was just trying to understand whether there was an explicit
reason *not* to do this. How should we track this - keep adding to
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough ?
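For anyone following along, the distinction under discussion is between the two
libvirt guest XML forms sketched below - the generic <hostdev> passthrough
versus the <interface type='hostdev'> form that carries networking properties.
The PCI addresses, MAC, and VLAN ID here are placeholders for illustration
only:

```xml
<!-- Generic PCI passthrough (circa Havana): works for any PCI device,
     but the device is attached with no awareness of networking. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
  </source>
</hostdev>

<!-- SR-IOV VF passthrough (circa Juno): libvirt restricts this form to
     VFs, but it allows a MAC address and VLAN tag to be set on the device. -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x06' slot='0x10' function='0x1'/>
  </source>
  <mac address='52:54:00:00:00:01'/>
  <vlan>
    <tag id='100'/>
  </vlan>
</interface>
```

It's the <mac>/<vlan> handling in the second form that a new VIF driver would
presumably have to replicate (or work around) for "normal" devices.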

Thanks,

Steve

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
