Hi Robert,
Please see inline

-----Original Message-----
From: Robert Li (baoli) [mailto:[email protected]] 
Sent: Friday, July 25, 2014 12:44 AM
To: [email protected]; Irena Berezovsky
Cc: Akihiro Motoki; Sandhya Dasu (sadasu); OpenStack Development Mailing List 
(not for usage questions)
Subject: [openstack-dev][neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV 
capable NIC based switching,...

Hi Kyle, 

Sorry I missed your queries on the IRC channel today. I was thinking about this 
whole BP. After chatting with Irena this morning, I think that I understand 
what this BP is trying to achieve overall. I also had a chat with Sandhya 
afterwards. I'd like to discuss a few things here:
  
  < Sandhya's MD is going to support Cisco's VMFEX. Overall, her code's 
structure would look very much like Irena's patch in part 1.
However, she cannot simply inherit from SriovNicSwitchMechanismDriver. The 
differences in her code are: 1) get_vif_details() would populate a profile id 
(rather than a vlan id), 2) she'd need to do VMFEX-specific processing in 
try_to_bind(). We're thinking that with a little generalization, 
SriovNicSwitchMechanismDriver() (with a changed name such as 
SriovMechanismDriver()) could be used for both the NIC switch and VMFEX. In 
terms of class hierarchy, it would look like:
             SriovMechanismDriver
                    SriovNicSwitchMechanismDriver
                    SriovQBRMechanismDriver
                         SriovCiscoVmfexMechanismDriver
[IrenaB] This seems like the correct hierarchy if we want to have separate MDs 
for the NIC embedded switch and Cisco VMFEX cases. But what if the same 
deployment should support both cases? Should two MDs be chosen?

Code duplication would be reduced significantly. The change would be:
       < make get_vif_details() an abstract method in SriovMechanismDriver
       < add an abstract method to perform the specific bind action required by 
a particular adaptor, as indicated by the PCI vendor info
       < set the vif type and agent type based on the PCI vendor info

A little change to patch part 1 would achieve the above.
[IrenaB] Do you propose that vif_type and agent type be determined at MD 
initialization time or during the port binding attempt?
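For illustration only, a minimal sketch of what the generalized hierarchy discussed above might look like. The class and method names are taken from this thread; the argument lists, return values, and the ABC-based shape are my assumptions, not the actual neutron ml2 signatures:

```python
import abc


class SriovMechanismDriver(abc.ABC):
    """Hypothetical generalized base class (names from the thread,
    signatures assumed)."""

    @abc.abstractmethod
    def get_vif_details(self, segment):
        """Return the vif details used for binding (vlan id, profile id, ...)."""

    @abc.abstractmethod
    def try_to_bind(self, context, segment):
        """Perform the adaptor-specific bind action; return True on success."""


class SriovNicSwitchMechanismDriver(SriovMechanismDriver):
    def get_vif_details(self, segment):
        # NIC embedded switch case: populate the vlan id
        return {'vlan': segment.get('segmentation_id')}

    def try_to_bind(self, context, segment):
        return True


class SriovCiscoVmfexMechanismDriver(SriovMechanismDriver):
    def get_vif_details(self, segment):
        # VMFEX case: populate a profile id instead of a vlan id
        # (the value here is a placeholder, not a real port profile)
        return {'profileid': 'hypothetical-port-profile'}

    def try_to_bind(self, context, segment):
        return True
```

Each concrete MD then only overrides the two abstract methods, which is where the duplication reduction would come from.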

  < Originally I thought that an SR-IOV port's status would depend on the 
Sriov Agent (patch part 2). After chatting with Irena, this is not the case. So 
all SR-IOV ports will be active once created or bound according to the 
try_to_bind() method. In addition, the current Sriov Agent (patch part 2) only 
supports port admin status changes for the mlnx adaptor. I think these caveats 
need to be spelled out explicitly to avoid any confusion or misunderstanding, 
at least in the documentation.

[IrenaB] Agreed, this should be documented, as well as the whole SR-IOV vNIC 
creation workflow.
If an agent is required, a port won't be bound if the Agent is not running on 
the required Host.
It may be the case that dynamic changes of admin state for SR-IOV VFs are 
supported by other vendors as well. The existing agent also allows expanding 
dynamic-change support for future functionality, such as QoS, ACLs, etc.

  < Sandhya has planned to support both Intel and VMFEX in her MD. This 
requires a hybrid SR-IOV mech driver that populates vif details based on the 
PCI vendor info in the port. Another way to do this is to run two MDs at the 
same time, one supporting Intel, the other VMFEX. This would work well with the 
above classes. But it requires changing the two config options (in Irena's 
patch part one) so that per-MD config options can be specified. I'm not sure if 
this is practical in a real deployment (meaning use of SR-IOV adaptors from 
different vendors in the same deployment), but I think it's doable within the 
existing ml2 framework.
[IrenaB] I think we should see how to make the deployment of the SR-IOV 
required pieces as simple and intuitive as possible.
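For the two-MDs-side-by-side idea above, a hypothetical ml2 config fragment might look like the following. The section names, option name, and PCI vendor:device values are illustrative assumptions; the real options are defined in patch part 1:

```ini
[ml2]
mechanism_drivers = sriovnicswitch,cisco_vmfex

# Hypothetical per-MD sections so each driver can declare the
# PCI vendor/device pairs it is responsible for binding.
[ml2_sriov_nicswitch]
supported_pci_vendor_devs = 8086:10ca

[ml2_sriov_vmfex]
supported_pci_vendor_devs = 1137:0071
```

With per-MD vendor lists like these, each driver could decline to bind ports whose PCI vendor info it does not own, letting ml2 fall through to the other MD.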

We'll go over the above in the next SR-IOV IRC meeting as well.
[IrenaB] Looking forward to this discussion.

Thanks,
Robert

On 7/24/14, 1:55 PM, "Kyle Mestery (Code Review)" <[email protected]>
wrote:

>Kyle Mestery has posted comments on this change.
>
>Change subject: ML2 mechanism driver for SR-IOV capable NIC based 
>switching, Part 2 
>......................................................................
>
>
>Patch Set 3: Code-Review+2 Workflow+1
>
>I believe Irena has answered all of Robert's questions. Any subsequent 
>issues can be handled as a followup.
>
>--
>To view, visit https://review.openstack.org/107651
>To unsubscribe, visit https://review.openstack.org/settings
>
>Gerrit-MessageType: comment
>Gerrit-Change-Id: I533ccee067935326d5837f90ba321a962e8dc2a6
>Gerrit-PatchSet: 3
>Gerrit-Project: openstack/neutron
>Gerrit-Branch: master
>Gerrit-Owner: Berezovsky Irena <[email protected]>
>Gerrit-Reviewer: Akihiro Motoki <[email protected]>
>Gerrit-Reviewer: Arista Testing 
><[email protected]>
>Gerrit-Reviewer: Baodong (Robert) Li <[email protected]>
>Gerrit-Reviewer: Berezovsky Irena <[email protected]>
>Gerrit-Reviewer: Big Switch CI <[email protected]>
>Gerrit-Reviewer: Brocade CI <[email protected]>
>Gerrit-Reviewer: Brocade OSS CI <[email protected]>
>Gerrit-Reviewer: Cisco Neutron CI 
><[email protected]>
>Gerrit-Reviewer: Freescale CI <[email protected]>
>Gerrit-Reviewer: Hyper-V CI <[email protected]>
>Gerrit-Reviewer: Jenkins
>Gerrit-Reviewer: Kyle Mestery <[email protected]>
>Gerrit-Reviewer: Mellanox External Testing 
><[email protected]>
>Gerrit-Reviewer: Metaplugin CI Test <[email protected]>
>Gerrit-Reviewer: Midokura CI Bot <[email protected]>
>Gerrit-Reviewer: NEC OpenStack CI <[email protected]>
>Gerrit-Reviewer: Neutron Ryu 
><[email protected]>
>Gerrit-Reviewer: One Convergence CI 
><[email protected]>
>Gerrit-Reviewer: PLUMgrid CI <[email protected]>
>Gerrit-Reviewer: Tail-f NCS Jenkins <[email protected]>
>Gerrit-Reviewer: vArmour CI Test <[email protected]>
>Gerrit-HasComments: No


_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
