[
https://issues.apache.org/jira/browse/CLOUDSTACK-8956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15010869#comment-15010869
]
ASF subversion and git services commented on CLOUDSTACK-8956:
-------------------------------------------------------------
Commit 219da64027a826980827d7fe79107b33c09f9d5a in cloudstack's branch
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=219da64 ]
Merge pull request #935 from nvazquez/from4.5.1
CLOUDSTACK-8956: NSX/Nicira Plugin does not support NSX v4.2.1
JIRA Ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-8956
### Description of the problem:
Prior to version 4.2, Nicira/VMware NSX used a variation of Open vSwitch as a
means of integrating SDN into the hypervisor layer. The CloudStack NiciraNVP
plugin was written to support OVS as a bridge to NSX.
In version 4.2 VMware introduced NSX vSwitch as a replacement for OVS on ESX
hypervisors. It is a fork of the distributed vSwitch that leverages a recent
ESX feature called opaque networks. Because of that change, the current
version of the NiciraNVP plugin doesn't support versions of NSX-MH above 4.2,
specifically in vSphere environments. The proposed fix detects the version of
the NVP/NSX API and selects the proper support for ESX hypervisors.
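A minimal sketch of that version gate follows. The commit list further down
mentions a NiciraNvpApiVersion class, but the helper name and shape here are
assumptions for illustration, not the plugin's actual code:

```java
// Hedged sketch only: requiresOpaqueNetworks is a hypothetical helper,
// not the actual NiciraNvpApiVersion implementation.
public final class NsxVersionGate {

    // True when an NVP/NSX API version string such as "4.2.1" is >= 4.2,
    // i.e. when the NSX vSwitch (opaque network) path must be used
    // instead of OVS-backed port groups.
    public static boolean requiresOpaqueNetworks(String apiVersion) {
        String[] parts = apiVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        return major > 4 || (major == 4 && minor >= 2);
    }
}
```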
How vSphere hypervisor operations change when a VM is deployed onto an
NSX-managed network:
* Current mode: a port group named after the UUID of the CloudStack VM NIC is
created on a local standard switch of the hypervisor where the VM is starting;
the VM NIC is attached to that port group.
* New mode: no additional port group is created on the host, and no port group
cleanup is needed after the VM/NIC is destroyed. The VM is attached to the
first port group having the following attributes:
** opaqueNetworkId string "br-int"
** opaqueNetworkType string "nsx.network"
If a port group with these attributes is not found, the deployment should fail
with an exception.
### VMware vSphere API version from 5.1 to 5.5:
vSphere API version 5.5 introduces
[OpaqueNetworks](https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.OpaqueNetwork.html).
Its description says:
> This interface defines an opaque network, in the sense that the detail and
> configuration of the network is unknown to vSphere and is managed by a
> management plane outside of vSphere. However, the identifier and name of
> these networks is made available to vSphere so that host and virtual machine
> virtual ethernet device can connect to them.
In order to connect a VM's virtual ethernet device to the proper opaque
network when deploying a VM into an NSX-managed network, we first need to look
for a particular opaque network on the hosts. This opaque network's id has to
be **"br-int"** and its type **"nsx.network"**.
Since vSphere API version 5.5,
[HostNetworkInfo](https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.NetworkInfo.html#opaqueNetwork)
exposes the list of available opaque networks on each host.
If the NSX API version is >= 4.2, we look for an
[OpaqueNetworkInfo](https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.OpaqueNetworkInfo.html)
which satisfies (see the sketch after this list):
* opaqueNetworkId = "br-int"
* opaqueNetworkType = "nsx.network"
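A sketch of that lookup, assuming the JAX-WS vim25 5.5 stubs and a
HostNetworkInfo already fetched for the target host (e.g. via the
PropertyCollector); findIntegrationBridge is a hypothetical helper name:

```java
import java.util.List;

import com.vmware.vim25.HostNetworkInfo;
import com.vmware.vim25.OpaqueNetworkInfo;

public class OpaqueNetworkLookup {

    // Scans the host's opaque networks for the NSX integration bridge.
    static OpaqueNetworkInfo findIntegrationBridge(HostNetworkInfo netInfo) {
        List<OpaqueNetworkInfo> opaqueNetworks = netInfo.getOpaqueNetwork();
        if (opaqueNetworks != null) {
            for (OpaqueNetworkInfo net : opaqueNetworks) {
                if ("br-int".equals(net.getOpaqueNetworkId())
                        && "nsx.network".equals(net.getOpaqueNetworkType())) {
                    return net;
                }
            }
        }
        // Per the description above, deployment must fail when the
        // opaque network is absent.
        throw new RuntimeException(
                "No opaque network br-int of type nsx.network found on host");
    }
}
```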
If that opaque network is found, we then need to attach the VM's NIC through a
virtual ethernet device backing that supports opaque networks, so we use
[VirtualEthernetCardOpaqueNetworkBackingInfo](https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo.html),
setting (see the sketch after this list):
* opaqueNetworkId = "br-int"
* opaqueNetworkType = "nsx.network"
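A sketch of that backing, again assuming the vim25 5.5 stubs; the device-spec
plumbing around it (VirtualDeviceConfigSpec, ReconfigVM_Task) is omitted:

```java
import com.vmware.vim25.VirtualE1000;
import com.vmware.vim25.VirtualEthernetCard;
import com.vmware.vim25.VirtualEthernetCardOpaqueNetworkBackingInfo;

public class OpaqueNicBacking {

    // Builds a NIC backed directly by the NSX integration bridge; no
    // per-VM port group is created or cleaned up in this mode.
    static VirtualEthernetCard buildNsxBackedNic() {
        VirtualEthernetCardOpaqueNetworkBackingInfo backing =
                new VirtualEthernetCardOpaqueNetworkBackingInfo();
        backing.setOpaqueNetworkId("br-int");
        backing.setOpaqueNetworkType("nsx.network");

        VirtualEthernetCard nic = new VirtualE1000(); // any VirtualEthernetCard subtype
        nic.setBacking(backing);
        return nic;
    }
}
```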
* pr/935:
CLOUDSTACK-8956: Remove assert(false) on opaque network and ping method on
NiciraNvpApiVersion
CLOUDSTACK-8956: Deploy VM on NSX managed network changes if NSX Api Version
>= 4.2: has to connect to "br-int" of "nsx.network" type
CLOUDSTACK-8956: Log NSX Api Version
CLOUDSTACK-8956: Add VMware Api v5.5 and change pom.xml to use VMware Api v5.5
Signed-off-by: Remi Bergsma <[email protected]>
> NSX/Nicira Plugin does not support NSX v4.2.1
> ---------------------------------------------
>
> Key: CLOUDSTACK-8956
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8956
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the
> default.)
> Components: VMware
> Affects Versions: 4.4.0, 4.5.0, 4.4.1, 4.4.2, 4.4.3, 4.5.1, 4.4.4
> Environment: OS: RHEL 6.6
> Reporter: Nicolas Vazquez
> Fix For: 4.5.1, 4.6.0