Hi Ian,
My comments are inline.
I would like to suggest focusing the next PCI pass-through IRC meeting on:

1. Closing the administration and the tenant-that-powers-the-VM use cases.

2. Decoupling the nova and neutron parts, so we can start focusing on the
neutron-related details.

BR,
Irena

From: Ian Wells [mailto:[email protected]]
Sent: Friday, December 20, 2013 2:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

On 19 December 2013 15:15, John Garbutt <[email protected]> wrote:
> Note, I don't see the person who boots the server ever seeing the pci-flavor, 
> only understanding the server flavor.
> [IrenaB] I am not sure that elaborating the PCI device request into the server
> flavor is the right approach for the PCI pass-through network case. A vNIC by
> its nature is something dynamic that can be plugged or unplugged after VM
> boot; a server flavor is quite static.
I was really just meaning that the server flavor would specify the type of NIC to attach.

The existing port specs, etc., define how many NICs there are, and you can
hot-plug as normal; the VIF plugger code is just told by the server flavor
whether it is able to do PCI passthrough, and which devices it can pick from.
The idea is that, combined with the neutron network-id, you know what to plug.
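
As a minimal sketch of what that could look like, here is the server flavor
carrying the pass-through hint as an extra spec via python-novaclient. The
'pci_passthrough:alias' key is the one used by the existing Nova PCI work;
the flavor name, alias name, credentials and device count below are all
illustrative assumptions, not a settled design.

    # Sketch only: a server flavor carrying a PCI pass-through hint.
    # All names and credentials are placeholders.
    from novaclient import client as nova_client

    USERNAME, PASSWORD, TENANT = 'admin', 'secret', 'demo'   # placeholders
    AUTH_URL = 'http://controller:5000/v2.0'                 # placeholder

    nova = nova_client.Client('2', USERNAME, PASSWORD, TENANT, AUTH_URL)

    # A flavor whose instances get one device matching the 'niantic' alias.
    flavor = nova.flavors.create(name='m1.pci-net', ram=4096, vcpus=2, disk=40)
    flavor.set_keys({'pci_passthrough:alias': 'niantic:1'})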

The more I talk about this approach the more I hate it :(

The thinking we had here is that nova would provide a VIF or a physical NIC for
each attachment.  Precisely what goes on here is a bit up for grabs, but I
would think:
1. Nova specifies the type at port-update, making it obvious to Neutron whether
it's getting a virtual interface or a pass-through NIC (and probably the type
of that NIC, and likely also the path, so that Neutron can distinguish between
NICs if it needs to know the specific attachment port).
2. Neutron does its magic on the network if it has any to do, like faffing(*)
with switches.
3. Neutron selects the VIF/NIC plugging type that Nova should use and, in the
case that the NIC is a VF and it wants to set an encap, returns that encap back
to Nova.
4. Nova plugs it in and sets it up (in libvirt, this is generally in the XML;
XenAPI and others are up for grabs).
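
A rough sketch of what steps 1 and 3 could look like on the wire, using
python-neutronclient. The 'binding:vnic_type', 'binding:profile' and
'binding:vif_type' attribute names follow the proposed port-binding extension
and are assumptions here, as are the PCI address, the returned-encap key and
the credentials.

    # Sketch of steps 1 and 3 above; all ids and binding keys are assumptions.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')

    PORT_ID = 'replace-with-port-uuid'   # placeholder

    # Step 1: Nova tells Neutron this attachment is a pass-through VF and where it is.
    port = neutron.update_port(PORT_ID, {
        'port': {
            'binding:vnic_type': 'direct',                    # pass-through, not virtio
            'binding:profile': {'pci_slot': '0000:08:10.2'},  # path, so Neutron can tell NICs apart
        }
    })['port']

    # Step 3: Neutron hands back the plugging type (and any encap, e.g. a VLAN)
    # once it has done its switch configuration in step 2.
    vif_type = port.get('binding:vif_type')               # e.g. 'hw_veb' for a VF
    encap = port.get('binding:profile', {}).get('vlan')   # exact key is an assumption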
[IrenaB] I agree on the described flow. We still need to close how to express
the request for a pass-through vNIC in 'nova boot'.
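
One possible shape for that open question, as a sketch only: pre-create the
port with the requested vNIC type in Neutron and pass its id to 'nova boot',
so the server flavor does not have to change per network. The
'binding:vnic_type' attribute and the port-first workflow are assumptions, not
a settled design; the ids and credentials are placeholders.

    # Sketch: asking for the pass-through vNIC per port rather than per flavor.
    from novaclient import client as nova_client
    from neutronclient.v2_0 import client as neutron_client

    nova = nova_client.Client('2', 'admin', 'secret', 'demo',
                              'http://controller:5000/v2.0')
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')

    # The tenant (or the nic-flavor machinery) records the requested vNIC type on the port.
    port = neutron.create_port({
        'port': {
            'network_id': 'replace-with-net-uuid',
            'binding:vnic_type': 'direct',   # "high performance" NIC requested
        }
    })['port']

    # 'nova boot' then only references the port; the server flavor is unchanged.
    server = nova.servers.create(name='vm-with-vf',
                                 image='replace-with-image-uuid',
                                 flavor='replace-with-flavor-id',
                                 nics=[{'port-id': port['id']}])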
> We might also want a "nic-flavor" that tells neutron information it requires,
> but let's get to that later...
> [IrenaB] A nic-flavor is definitely something that we need in order to choose
> whether a high-performance (PCI pass-through) or a virtio (i.e. OVS) NIC will
> be created.
Well, I think it's the right way to go, rather than overloading the server
flavor with hints about which PCI devices you could use.

The issue here is the additional attach.  Since, for pass-through devices that
aren't NICs (like crypto cards), you would almost certainly specify them in the
flavor, if you did the same for NICs you would have a preallocated pool of NICs
from which to draw.  The flavor is also all you need to know for billing, and
the flavor lets you schedule.  If you have it on the list of NICs, you have to
work out how many physical NICs you need before you schedule (admittedly not
hard, but not in keeping), and if you then did a subsequent attach it could fail
because you have no more NICs on the machine you scheduled to - and at this
point you're kind of stuck.

Also, with the former, if you've run out of NICs, the already-extant resize call
would allow you to pick a flavor with more NICs, and the VM can then be
rescheduled to wherever resources are available to fulfil the new request.
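
A sketch of that resize path, reusing the nova client and server object from
the earlier sketches; the larger flavor name is hypothetical.

    # Sketch: move the instance to a flavor carrying more pass-through NICs and
    # let the scheduler place it where devices are free.
    bigger = nova.flavors.find(name='m1.pci-net-4nics')   # hypothetical flavor
    nova.servers.resize(server, bigger)
    # Once the resize has finished, confirm it to keep the new flavor.
    nova.servers.confirm_resize(server)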
[IrenaB] I still think that putting the PCI NIC request into the server flavor
is not the right approach. You would need to create a different server flavor
for every possible combination of tenant network attachment options, or maybe
assume the VM connects to all of them. As for billing, you can use the type of
vNIC, in addition to packets in/out, for billing per vNIC. This way, the tenant
will be charged only for used vNICs.
One question here is whether Neutron should become a provider of billed 
resources (specifically passthrough NICs) in the same way as Cinder is of 
volumes - something we'd not discussed to date; we've largely worked on the 
assumption that NICs are like any other passthrough resource, just one where, 
once it's allocated out, Neutron can work magic with it.
[IrenaB] I am not so familiar with Ceilometer, but it seems that if we are
talking about network resources, neutron should be in charge.

--
Ian.