[ https://issues.apache.org/jira/browse/CLOUDSTACK-8956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998436#comment-14998436 ]

ASF GitHub Bot commented on CLOUDSTACK-8956:
--------------------------------------------

Github user miguelaferreira commented on the pull request:

    https://github.com/apache/cloudstack/pull/935#issuecomment-155396401
  
    @nvazquez I've built and tested your PR in our environment and the 
functionality we have tests for remains stable. :+1: 
    
    What I did:
    
    1. build and unit-test with developer and systemvm profiles
    
    2. package CentOS 7 KVM agent RPMs
    
    3. deploy infra with
    
       - 1 management server (WAR installed from step 1)
       - 2 NSX controllers (cluster)
       - 1 NSX manager node
       - 1 NSX service node
       - 1 KVM host (agent installed from step 2)
    
    4. deploy data center using [this](https://github.com/schubergphilis/MCT-shared/blob/master/marvin/mct-zone1-kvm1-NVP.cfg) config
    
    5. run marvin tests from `test/integration/smoke/test_nicira_controller.py` with flags `tags=advanced,required_hardware=true`
    
    Result
    ```
    [root@cs1 cloudstack]# nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone1-kvm1-NVP.cfg -s -a tags=advanced,required_hardware=true test/integration/smoke/test_nicira_controller.py
    
    ==== Marvin Init Started ====
    
    === Marvin Parse Config Successful ===
    
    === Marvin Setting TestData Successful===
    
    ==== Log Folder Path: /tmp//MarvinLogs//Nov_10_2015_10_27_55_46U497. All logs will be available here ====
    
    === Marvin Init Logging Successful===
    
    ==== Marvin Init Successful ====
    
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:100: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
    (previous warning repeated a couple times)
    === TestName: test_01_nicira_controller | Status : SUCCESS ===
    
    === TestName: test_02_nicira_controller_redirect | Status : SUCCESS ===
    
    ===final results are now copied to: /tmp//MarvinLogs/test_nicira_controller_RV74VA===
    
    ```
    
    LGTM


> NSX/Nicira Plugin does not support NSX v4.2.1
> ---------------------------------------------
>
>                 Key: CLOUDSTACK-8956
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8956
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public (Anyone can view this level - this is the default.)
>          Components: VMware
>    Affects Versions: 4.4.0, 4.5.0, 4.4.1, 4.4.2, 4.4.3, 4.5.1, 4.4.4
>         Environment: OS: RHEL 6.6
>            Reporter: Nicolas Vazquez
>             Fix For: 4.5.1, 4.6.0
>
>
> h3. Description of the problem:
> Prior to version 4.2, Nicira/VMware NSX used a variation of Open vSwitch (OVS) 
> as the means of integrating SDN into the hypervisor layer. The CloudStack 
> NiciraNVP plugin was written to support OVS as a bridge to NSX.
> In version 4.2 VMware introduced the NSX vSwitch as a replacement for OVS in 
> ESX hypervisors. It is a fork of the distributed vSwitch that leverages a 
> recent ESX feature called opaque networks. Because of that change, the current 
> version of the NiciraNVP plugin does not support NSX-MH versions above 4.2, 
> specifically in vSphere environments. The proposed fix analyzes the version of 
> the NVP/NSX API and uses the proper support for ESX hypervisors.
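> As a rough illustration only (not the actual plugin code), the version gate 
> could look like the hypothetical helper below, which takes the API version 
> reported by the NVP/NSX controller as a "major.minor[.patch]" string:
> ```java
> /** Hypothetical sketch: choose the ESX integration path from the NVP/NSX API version. */
> public class NsxVersionGate {
>     /** Returns true when the controller reports API version 4.2 or newer, e.g. "4.2.1". */
>     public static boolean useOpaqueNetworks(String nsxApiVersion) {
>         String[] parts = nsxApiVersion.split("\\.");
>         int major = Integer.parseInt(parts[0]);
>         int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
>         // 4.2 and above replace OVS with the NSX vSwitch on ESX, so use opaque networks.
>         return major > 4 || (major == 4 && minor >= 2);
>     }
> }
> ```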
> How the vSphere hypervisor operates when a VM is deployed onto an NSX-managed 
> network changes as follows:
> * Current mode: a port group named after the UUID of the CloudStack VM NIC is 
> created on a local standard switch of the hypervisor where the VM is starting. 
> The VM NIC is attached to that port group.
> * New mode: no additional port group is created on the host, and no port group 
> cleanup is needed after the VM/NIC is destroyed. The VM is attached to the 
> first port group having the following attributes:
> ** opaqueNetworkId string "br-int"
> ** opaqueNetworkType string "nsx.network"
> If a port group with these attributes is not found, the deployment should fail 
> with an exception.
> h3. VMware vSphere API version from 5.1 to 5.5:
> As of vSphere API version 5.5, 
> [OpaqueNetworks|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.OpaqueNetwork.html]
>  are introduced. The description says: 
> bq. This interface defines an opaque network, in the sense that the detail 
> and configuration of the network is unknown to vSphere and is managed by a 
> management plane outside of vSphere. However, the identifier and name of 
> these networks is made available to vSphere so that host and virtual machine 
> virtual ethernet device can connect to them.
> In order to connect a VM's virtual ethernet device to the proper opaque 
> network when deploying a VM into an NSX-managed network, we first need to look 
> for a particular opaque network on the hosts. This opaque network's id has to 
> be *"br-int"* and its type *"nsx.network"*.
> As of vSphere API version 5.5, 
> [HostNetworkInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.NetworkInfo.html#opaqueNetwork]
>  exposes the list of available opaque networks for each host. 
> If the NSX API version is >= 4.2, we look for an 
> [OpaqueNetworkInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.OpaqueNetworkInfo.html]
>  which satisfies the following (a sketch follows the list):
> * opaqueNetworkId = "br-int"
> * opaqueNetworkType = "nsx.network"
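> A minimal sketch of that lookup, assuming the vSphere vim25 Java bindings 
> (HostNetworkInfo, HostOpaqueNetworkInfo) and that the host's config.network 
> has already been retrieved; this is illustrative, not the plugin's actual code:
> ```java
> import com.vmware.vim25.HostNetworkInfo;
> import com.vmware.vim25.HostOpaqueNetworkInfo;
> 
> /** Sketch only: locate the NSX integration bridge among a host's opaque networks. */
> public class OpaqueNetworkLookup {
>     public static HostOpaqueNetworkInfo findIntegrationBridge(HostNetworkInfo networkInfo) {
>         if (networkInfo == null || networkInfo.getOpaqueNetwork() == null) {
>             return null;
>         }
>         for (HostOpaqueNetworkInfo opaque : networkInfo.getOpaqueNetwork()) {
>             if ("br-int".equals(opaque.getOpaqueNetworkId())
>                     && "nsx.network".equals(opaque.getOpaqueNetworkType())) {
>                 return opaque;
>             }
>         }
>         // Not found: the deployment is expected to fail with an exception.
>         return null;
>     }
> }
> ```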
> If that opaque network is found, then we need to attach the VM's NIC to a 
> virtual ethernet device which supports it, so we use 
> [VirtualEthernetCardOpaqueNetworkBackingInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo.html]
>  setting (a sketch follows the list):
> * opaqueNetworkId = "br-int"
> * opaqueNetworkType = "nsx.network"
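> A minimal sketch of building such a NIC spec with the vim25 bindings 
> (VirtualEthernetCardOpaqueNetworkBackingInfo); the vmxnet3 adapter type is 
> just an example choice, not necessarily what the plugin uses:
> ```java
> import com.vmware.vim25.VirtualDeviceConfigSpec;
> import com.vmware.vim25.VirtualDeviceConfigSpecOperation;
> import com.vmware.vim25.VirtualEthernetCardOpaqueNetworkBackingInfo;
> import com.vmware.vim25.VirtualVmxnet3;
> 
> /** Sketch only: back a new NIC with the opaque network instead of a port group. */
> public class OpaqueNetworkNicSpec {
>     public static VirtualDeviceConfigSpec buildNicSpec() {
>         VirtualEthernetCardOpaqueNetworkBackingInfo backing =
>                 new VirtualEthernetCardOpaqueNetworkBackingInfo();
>         backing.setOpaqueNetworkId("br-int");
>         backing.setOpaqueNetworkType("nsx.network");
> 
>         VirtualVmxnet3 nic = new VirtualVmxnet3();  // example adapter type
>         nic.setBacking(backing);
> 
>         VirtualDeviceConfigSpec nicSpec = new VirtualDeviceConfigSpec();
>         nicSpec.setOperation(VirtualDeviceConfigSpecOperation.ADD);
>         nicSpec.setDevice(nic);
>         return nicSpec;
>     }
> }
> ```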



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
