On 05/07/2013 17:02, Daniel P. Berrange wrote:
On Thu, Jul 04, 2013 at 11:53:45AM +0300, Itzik Brown wrote:
Hi,

We released code for a VIF driver that works with the Mellanox
Quantum plugin.
The Mellanox plugin provisions virtual networks via the embedded HCA
switching functionality (the SR-IOV capability of the network adapter).

The code is here:
https://review.openstack.org/#/c/35189/

We allocate and assign a probed SR-IOV VF (Virtual Function) as a
para-virtualized interface of the instance.

We need to:
1) Allocate the device of a probed VF
2) Give the device a name in the XML that will allow us to support
live migration.

We have a utility for managing the embedded switch that allows VF
assignment and configuration.
We chose to use vif['dev_name'] as the basis for the name of the
device. This allows us to use different devices on different hosts
when doing a migration.
During plug/unplug we use the utility to allocate the device
and change its name to the logical one.
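To make the plug step concrete, here is a minimal sketch of the rename.
The allocate_probed_vf() helper stands in for our embedded switch
utility and is hypothetical, as is the use of plain 'ip link' for the
rename; the real integration is in the review linked above.

import subprocess

def allocate_probed_vf(vif):
    """Placeholder for the embedded switch utility call that picks a
    free probed VF and returns its current netdev name (e.g. 'eth7')."""
    raise NotImplementedError

def rename_vf_netdev(vf_netdev, logical_name):
    # The kernel only allows renaming a device that is down.
    # (Requires root privileges.)
    subprocess.check_call(['ip', 'link', 'set', vf_netdev, 'down'])
    subprocess.check_call(['ip', 'link', 'set', vf_netdev,
                           'name', logical_name])
    subprocess.check_call(['ip', 'link', 'set', logical_name, 'up'])

def plug(instance, vif):
    # vif['dev_name'] is the logical name Nova allocated; keying the
    # libvirt XML on it lets each host back it with a different VF.
    vf_netdev = allocate_probed_vf(vif)
    rename_vf_netdev(vf_netdev, vif['dev_name'])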
My concern / question on the review is about where the allocation
of the device name is done.

Currently you have it done on the Nova side, by calling out to the
utility tools. This ties the Nova VIF code to the specific
impl of the Mellanox driver in Neutron. This means it is not
appropriate to use a generic vif_type=direct; you really need
to use a vif_type=mellanox.

The other option would be to do the device name
allocation on the Neutron side, in response to the Nova API
call to allocate a VIF port on the network. If this is possible,
then all the Mellanox specific code would be in Neutron, and
the Nova code would be entirely generic, meaning that use of
a generic vif_type=direct would be acceptable.
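To illustrate the split I mean (this is only a sketch; the
'binding:profile' key and the plugin helper are hypothetical, not
existing Neutron API):

def bind_port(port, free_vf_names):
    # Plugin-side sketch: allocate the device name when the port is
    # created and hand it back in the port's binding attributes.
    port['binding:vif_type'] = 'direct'   # generic type is fine now
    port['binding:profile'] = {'dev_name': free_vf_names.pop()}
    return port

def nova_plug(vif):
    # Nova-side sketch: fully generic, no Mellanox-specific calls.
    dev_name = vif['binding:profile']['dev_name']
    # ... generate the libvirt XML referencing dev_name ...
    return dev_name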

I favour the latter architectural split, since it de-couples
Nova & Neutron to a greater extent. I'm not sure if it is actually
possible to implement it that way, since I'm not familiar with
your Neutron plugin design. If it isn't possible, then you'll
need to change your patch to use a vif_type=mellanox name
instead.

This is basically it.
It's a temporary solution until there is one that addresses
all the issues regarding SR-IOV.
Boris Pavlovic is doing work on SR-IOV and we plan to
help with / adopt that effort.
We want to ask for your opinion on the approach we chose.

Daniel
Daniel,

Thanks for your comments.
We'll go with vif_type=mlnx_direct for now, since it's easier to implement.
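For illustration, the guest-side XML for mlnx_direct would be along
these lines; a minimal sketch assuming a macvtap 'passthrough' device
over the renamed VF netdev (the exact XML our driver emits is in the
review above):

import xml.etree.ElementTree as ET

def mlnx_direct_interface_xml(dev_name, mac):
    iface = ET.Element('interface', type='direct')
    ET.SubElement(iface, 'mac', address=mac)
    # 'passthrough' gives the guest exclusive use of the VF netdev
    # while keeping a kernel netdev in the XML, which is what lets us
    # rename per host and thus support live migration.
    ET.SubElement(iface, 'source', dev=dev_name, mode='passthrough')
    ET.SubElement(iface, 'model', type='virtio')
    return ET.tostring(iface).decode()

print(mlnx_direct_interface_xml('eth_mlnx0', 'fa:16:3e:00:00:01'))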

In the medium/long term we want to use interface type='network' (as started in https://review.openstack.org/#/c/29048/).
We have some issues we need to address with this solution:

1) How the creation of the network pool is done (a sketch of such a pool follows below).
2) The scheduler should be aware of the number of available devices in each network pool.
3) For configuration of things like VLANs, there is a need to know which device is allocated to each vNIC, preferably before the instance is started.
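For reference, libvirt already models such a pool as a network with
forward mode='hostdev' backed by a PF, and the guest just references
the pool. The snippet below only shows the mechanism; the pool name
and PF device are placeholders, and who defines the pool and tracks
free VFs is exactly issues 1) and 2) above:

import libvirt  # libvirt-python bindings

# Standard libvirt syntax for an SR-IOV VF pool; 'sriov-pool' and the
# PF 'eth2' are placeholders.
POOL_XML = """
<network>
  <name>sriov-pool</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth2'/>
  </forward>
</network>
"""

# Guest interface referencing the pool; libvirt picks a free VF.
IFACE_XML = """
<interface type='network'>
  <source network='sriov-pool'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkDefineXML(POOL_XML)
net.create()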

Itzik

