Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-25 Thread Kevin Benton
Yes, you are correct that the suggestion is to have the integration layer
in Neutron backed by 3rd party testing. However, there is nothing technical
preventing you from specifying an arbitrary python path to load as a
driver/plugin. If there is a lot of demand for vDS support, it might be
worth considering.
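
For reference, out-of-tree ML2 mechanism drivers are loaded through stevedore
entry points in the 'neutron.ml2.mechanism_drivers' namespace, so an external
package only needs to register one. A minimal sketch — the package and class
names here are assumptions, not an existing project:

    # setup.cfg of a hypothetical 'networking-dvs' package
    [entry_points]
    neutron.ml2.mechanism_drivers =
        vmware_dvs = networking_dvs.ml2.mech_dvs:DVSMechanismDriver

    # /etc/neutron/plugins/ml2/ml2_conf.ini on the controllers
    [ml2]
    mechanism_drivers = openvswitch,vmware_dvs

Once the package is installed, Neutron resolves the 'vmware_dvs' alias just
like an in-tree driver.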

On Sat, Jan 24, 2015 at 3:25 AM, Andrey Danin ada...@mirantis.com wrote:

 I agree. But, as far as I know [2], there should be some kind of ML2
 integration layer for each plugin, and it should be in the Neutron code base
 (see [1] for an example). There is no vDS ML2 driver in Neutron at all, and
 FF is coming soon. So it seems we cannot manage to adjust a blueprint spec
 [3], get it approved, refactor the driver code, and provide third-party CI
 in such a short period before FF.

 [1] thin Mellanox ML2 driver https://review.openstack.org/#/c/148614/
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
 [3] https://blueprints.launchpad.net/neutron/+spec/ml2-dvs-mech-driver
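
 For illustration, the "thin" integration layer referenced in [1] is usually
 just an in-tree subclass that delegates everything to the out-of-tree
 package. A sketch, with 'networking_dvs' being an assumed package name:

     # hypothetical thin in-tree shim under neutron/plugins/ml2/drivers/
     from networking_dvs.ml2 import mech_dvs  # vendor code, out of tree

     class VMwareDVSMechanismDriver(mech_dvs.DVSMechanismDriver):
         """Thin in-tree layer; all logic lives in the vendor package."""
         pass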



 On Sat, Jan 24, 2015 at 12:45 AM, Kevin Benton blak...@gmail.com wrote:

 It's worth noting that all Neutron ML2 drivers are required to move to
 their own repos starting in Kilo, so installing an extra Python package to
 use a driver will become part of the standard Neutron installation
 workflow. I would therefore suggest creating a stackforge project for the
 vDS driver and packaging it up.

 On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin ada...@mirantis.com
 wrote:

 Hi, all,

 As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
 VMware vCenter-oriented environment. We want to go further and mix them
 together: a user should be able to run both hypervisors in one OpenStack
 environment. We want to get this into Fuel 6.1. Here is how we are going to
 do it.

 * When vCenter is used as a hypervisor, the only way to use volumes with it
 is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
 provided by the Cinder VMDK backend. This means we need two separate
 infrastructures (a hypervisor + a volume service), one for each HV present
 in the environment. To achieve that, we decided to place the corresponding
 nova-compute and cinder-volume instances into different Availability Zones.
 We also want to disable the 'cross_az_attach' option in nova.conf so that a
 user cannot attach a volume to an instance that doesn't support that volume
 type.
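
 A sketch of the relevant configuration (the exact option name and section
 vary by release — Juno uses 'cinder_cross_az_attach' in [DEFAULT], later
 releases 'cross_az_attach' in a [cinder] group — and the zone names below
 are examples):

     # nova.conf on every node
     [DEFAULT]
     cinder_cross_az_attach = False

     # cinder.conf on the VMDK-backed cinder-volume nodes
     [DEFAULT]
     storage_availability_zone = vcenter

     # cinder.conf on the KVM-side cinder-volume nodes
     [DEFAULT]
     storage_availability_zone = nova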

 * When used with VMDK, a cinder-volume service is just a proxy between the
 vCenter Datastore and Glance. This means the service itself doesn't need a
 local hard drive but can sometimes consume significant network bandwidth.
 That's why it's not a good idea to always put it on a Controller node. So we
 want to add a new role called 'cinder-vmdk'. A user will be able to assign
 this role to whatever node he wants: a separate node, or combined with other
 roles. HA will be achieved by placing the role on two or more nodes. The
 cinder-volume services on each node will be configured identically,
 including the 'host' stanza. We use the same approach now for Cinder+Ceph.
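
 For example, a cinder.conf sketch for the 'cinder-vmdk' nodes — identical on
 each of them, including the shared 'host' value (all values here are
 placeholders):

     [DEFAULT]
     host = cinder-vmdk
     enabled_backends = vmdk

     [vmdk]
     volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
     vmware_host_ip = VCENTER_IP
     vmware_host_username = VCENTER_USER
     vmware_host_password = VCENTER_PASSWORD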

 * Nova-compute services for vCenter are kept running on Controller
 nodes. They are managed by Corosync.

 * There are two options for the network backend: the good old Nova-network,
 and a modern Neutron with the ML2 DVS driver enabled. The problem with
 Nova-network is that we have to run it in 'singlehost' mode, meaning that
 only one nova-network service will be running for the whole environment.
 That makes the service a single point of failure, prevents a user from
 using Security Groups, and increases network consumption on the node where
 the service is running. The problem with Neutron is that there is no ML2
 DVS driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
 patch [1] with almost no chance of getting into Kilo. The good news is that
 we managed to run a PoC lab with this driver and both HVs enabled. So we
 can build the driver as a package, but it'll be a little ugly. That's why
 we picked the Nova-network approach as a basis. The cluster creation wizard
 will offer a choice of whether you want to use vCenter in the cluster or
 not. Depending on that, the nova-network service will be run in
 'singlenode' or 'multinode' mode. Maybe, if we have enough resources, we'll
 also implement Neutron + vDS support.
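
 The Fuel 'singlehost'/'multinode' choice roughly corresponds to the
 nova-network 'multi_host' flag; a sketch (the per-network multi_host
 attribute has to match the flag):

     # nova.conf, KVM-only cluster: nova-network on every compute node
     [DEFAULT]
     multi_host = True

     # nova.conf, cluster with vCenter enabled: one shared nova-network
     # service for the whole environment
     [DEFAULT]
     multi_host = False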

 * We are going to move all VMware-specific settings to a separate UI tab.
 On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
 VMware) and a libvirt_type switch (KVM, qemu). In the cluster creation
 wizard there will be a checkbox called 'add VMware vCenter support to your
 cloud'. When it's enabled, a user can choose nova-network only.
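
 To make the dual-hypervisor setup concrete, a sketch of the two nova-compute
 configurations that would coexist in one environment (credentials are
 placeholders):

     # nova.conf on a KVM compute node
     [DEFAULT]
     compute_driver = libvirt.LibvirtDriver
     [libvirt]
     virt_type = kvm    # or 'qemu', per the Settings tab switch

     # nova.conf for the vCenter-facing nova-compute on the controllers
     [DEFAULT]
     compute_driver = vmwareapi.VMwareVCDriver
     [vmware]
     host_ip = VCENTER_IP
     host_username = VCENTER_USER
     host_password = VCENTER_PASSWORD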

 * The OSTF test suite will be extended to support separate sets of tests
 for each HV.

 [1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/

 Links to blueprints:
 https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
 https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
 https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor

Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-25 Thread Romil Gupta
Hi Andrey,

We have an open-source, Neutron-based solution for a VMware vCenter-oriented
environment in Kilo; please have a look at the links below:

https://github.com/stackforge/networking-vsphere (Kilo release, work in progress)

https://github.com/hp-networking/ovsvapp (stable with Juno)


On Sun, Jan 25, 2015 at 4:01 PM, Kevin Benton blak...@gmail.com wrote:

 Yes, you are correct that the suggestion is to have the integration layer
 in Neutron backed by 3rd party testing. However, there is nothing technical
 preventing you from specifying an arbitrary python path to load as a
 driver/plugin. If there is a lot of demand for vDS support, it might be
 worth considering.

 On Sat, Jan 24, 2015 at 3:25 AM, Andrey Danin ada...@mirantis.com wrote:

 I agree. But, as far as I know [2], there should be some kind of ML2
 integration layer for each plugin, and it should be in the Neutron code base
 (see [1] for an example). There is no vDS ML2 driver in Neutron at all, and
 FF is coming soon. So it seems we cannot manage to adjust a blueprint spec
 [3], get it approved, refactor the driver code, and provide third-party CI
 in such a short period before FF.

 [1] thin Mellanox ML2 driver https://review.openstack.org/#/c/148614/
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
 [3] https://blueprints.launchpad.net/neutron/+spec/ml2-dvs-mech-driver



 On Sat, Jan 24, 2015 at 12:45 AM, Kevin Benton blak...@gmail.com wrote:

 It's worth noting that all Neutron ML2 drivers are required to move to
 their own repos starting in Kilo, so installing an extra Python package to
 use a driver will become part of the standard Neutron installation
 workflow. I would therefore suggest creating a stackforge project for the
 vDS driver and packaging it up.

 On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin ada...@mirantis.com
 wrote:

 Hi, all,

 As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
 VMware vCenter-oriented environment. We want to go further and mix them
 together: a user should be able to run both hypervisors in one OpenStack
 environment. We want to get this into Fuel 6.1. Here is how we are going to
 do it.

 * When vCenter is used as a hypervisor, the only way to use volumes with it
 is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
 provided by the Cinder VMDK backend. This means we need two separate
 infrastructures (a hypervisor + a volume service), one for each HV present
 in the environment. To achieve that, we decided to place the corresponding
 nova-compute and cinder-volume instances into different Availability Zones.
 We also want to disable the 'cross_az_attach' option in nova.conf so that a
 user cannot attach a volume to an instance that doesn't support that volume
 type.

 * When used with VMDK, a cinder-volume service is just a proxy between the
 vCenter Datastore and Glance. This means the service itself doesn't need a
 local hard drive but can sometimes consume significant network bandwidth.
 That's why it's not a good idea to always put it on a Controller node. So we
 want to add a new role called 'cinder-vmdk'. A user will be able to assign
 this role to whatever node he wants: a separate node, or combined with other
 roles. HA will be achieved by placing the role on two or more nodes. The
 cinder-volume services on each node will be configured identically,
 including the 'host' stanza. We use the same approach now for Cinder+Ceph.

 * Nova-compute services for vCenter are kept running on Controller
 nodes. They are managed by Corosync.

 * There are two options for the network backend: the good old Nova-network,
 and a modern Neutron with the ML2 DVS driver enabled. The problem with
 Nova-network is that we have to run it in 'singlehost' mode, meaning that
 only one nova-network service will be running for the whole environment.
 That makes the service a single point of failure, prevents a user from
 using Security Groups, and increases network consumption on the node where
 the service is running. The problem with Neutron is that there is no ML2
 DVS driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
 patch [1] with almost no chance of getting into Kilo. The good news is that
 we managed to run a PoC lab with this driver and both HVs enabled. So we
 can build the driver as a package, but it'll be a little ugly. That's why
 we picked the Nova-network approach as a basis. The cluster creation wizard
 will offer a choice of whether you want to use vCenter in the cluster or
 not. Depending on that, the nova-network service will be run in
 'singlenode' or 'multinode' mode. Maybe, if we have enough resources, we'll
 also implement Neutron + vDS support.

 * We are going to move all VMware-specific settings to a separate UI tab.
 On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
 VMware) and a libvirt_type switch (KVM, qemu). In the cluster creation
 wizard there will be a checkbox called 'add VMware vCenter support to your
 cloud'. When it's enabled, a user can choose nova-network only.

Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-25 Thread Romil Gupta
With correct links:

https://github.com/stackforge/networking-vsphere/tree/master/specs/kilo
https://github.com/stackforge/networking-vsphere/
https://github.com/hp-networking/ovsvapp

On Sun, Jan 25, 2015 at 7:28 PM, Romil Gupta romilgupt...@gmail.com wrote:

 Hi Andrey,

 We have an open-source, Neutron-based solution for a VMware vCenter-oriented
 environment in Kilo; please have a look at the links below:

 https://github.com/stackforge/networking-vsphere (Kilo release, work in progress)

 https://github.com/hp-networking/ovsvapp (stable with Juno)


 On Sun, Jan 25, 2015 at 4:01 PM, Kevin Benton blak...@gmail.com wrote:

 Yes, you are correct that the suggestion is to have the integration layer
 in Neutron backed by 3rd party testing. However, there is nothing technical
 preventing you from specifying an arbitrary python path to load as a
 driver/plugin. If there is a lot of demand for vDS support, it might be
 worth considering.

 On Sat, Jan 24, 2015 at 3:25 AM, Andrey Danin ada...@mirantis.com
 wrote:

 I agree. But, as far as I know [2], there should be some kind of ML2
 integration layer for each plugin, and it should be in the Neutron code base
 (see [1] for an example). There is no vDS ML2 driver in Neutron at all, and
 FF is coming soon. So it seems we cannot manage to adjust a blueprint spec
 [3], get it approved, refactor the driver code, and provide third-party CI
 in such a short period before FF.

 [1] thin Mellanox ML2 driver https://review.openstack.org/#/c/148614/
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
 [3] https://blueprints.launchpad.net/neutron/+spec/ml2-dvs-mech-driver



 On Sat, Jan 24, 2015 at 12:45 AM, Kevin Benton blak...@gmail.com
 wrote:

 It's worth noting that all Neutron ML2 drivers are required to move to
 their own repos starting in Kilo, so installing an extra Python package to
 use a driver will become part of the standard Neutron installation
 workflow. I would therefore suggest creating a stackforge project for the
 vDS driver and packaging it up.

 On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin ada...@mirantis.com
 wrote:

 Hi, all,

 As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
 VMware vCenter-oriented environment. We want to go further and mix them
 together: a user should be able to run both hypervisors in one OpenStack
 environment. We want to get this into Fuel 6.1. Here is how we are going to
 do it.

 * When vCenter is used as a hypervisor, the only way to use volumes with it
 is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
 provided by the Cinder VMDK backend. This means we need two separate
 infrastructures (a hypervisor + a volume service), one for each HV present
 in the environment. To achieve that, we decided to place the corresponding
 nova-compute and cinder-volume instances into different Availability Zones.
 We also want to disable the 'cross_az_attach' option in nova.conf so that a
 user cannot attach a volume to an instance that doesn't support that volume
 type.

 * When used with VMDK, a cinder-volume service is just a proxy between the
 vCenter Datastore and Glance. This means the service itself doesn't need a
 local hard drive but can sometimes consume significant network bandwidth.
 That's why it's not a good idea to always put it on a Controller node. So we
 want to add a new role called 'cinder-vmdk'. A user will be able to assign
 this role to whatever node he wants: a separate node, or combined with other
 roles. HA will be achieved by placing the role on two or more nodes. The
 cinder-volume services on each node will be configured identically,
 including the 'host' stanza. We use the same approach now for Cinder+Ceph.

 * Nova-compute services for vCenter are kept running on Controller
 nodes. They are managed by Corosync.

 * There are two options for the network backend: the good old Nova-network,
 and a modern Neutron with the ML2 DVS driver enabled. The problem with
 Nova-network is that we have to run it in 'singlehost' mode, meaning that
 only one nova-network service will be running for the whole environment.
 That makes the service a single point of failure, prevents a user from
 using Security Groups, and increases network consumption on the node where
 the service is running. The problem with Neutron is that there is no ML2
 DVS driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
 patch [1] with almost no chance of getting into Kilo. The good news is that
 we managed to run a PoC lab with this driver and both HVs enabled. So we
 can build the driver as a package, but it'll be a little ugly. That's why
 we picked the Nova-network approach as a basis. The cluster creation wizard
 will offer a choice of whether you want to use vCenter in the cluster or
 not. Depending on that, the nova-network service will be run in
 'singlenode' or 'multinode' mode. Maybe, if we have enough resources, we'll
 also implement Neutron + vDS support.

Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-24 Thread Andrey Danin
I agree. But, as far as I know [2], there should be some kind of ML2
integration layer for each plugin, and it should be in the Neutron code base
(see [1] for an example). There is no vDS ML2 driver in Neutron at all, and
FF is coming soon. So it seems we cannot manage to adjust a blueprint spec
[3], get it approved, refactor the driver code, and provide third-party CI
in such a short period before FF.

[1] thin Mellanox ML2 driver https://review.openstack.org/#/c/148614/
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
[3] https://blueprints.launchpad.net/neutron/+spec/ml2-dvs-mech-driver


On Sat, Jan 24, 2015 at 12:45 AM, Kevin Benton blak...@gmail.com wrote:

 It's worth noting that all Neutron ML2 drivers are required to move to
 their own repos starting in Kilo, so installing an extra Python package to
 use a driver will become part of the standard Neutron installation
 workflow. I would therefore suggest creating a stackforge project for the
 vDS driver and packaging it up.

 On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin ada...@mirantis.com
 wrote:

 Hi, all,

 As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
 VMware vCenter-oriented environment. We want to go further and mix them
 together: a user should be able to run both hypervisors in one OpenStack
 environment. We want to get this into Fuel 6.1. Here is how we are going to
 do it.

 * When vCenter is used as a hypervisor, the only way to use volumes with it
 is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
 provided by the Cinder VMDK backend. This means we need two separate
 infrastructures (a hypervisor + a volume service), one for each HV present
 in the environment. To achieve that, we decided to place the corresponding
 nova-compute and cinder-volume instances into different Availability Zones.
 We also want to disable the 'cross_az_attach' option in nova.conf so that a
 user cannot attach a volume to an instance that doesn't support that volume
 type.

 * When used with VMDK, a cinder-volume service is just a proxy between the
 vCenter Datastore and Glance. This means the service itself doesn't need a
 local hard drive but can sometimes consume significant network bandwidth.
 That's why it's not a good idea to always put it on a Controller node. So we
 want to add a new role called 'cinder-vmdk'. A user will be able to assign
 this role to whatever node he wants: a separate node, or combined with other
 roles. HA will be achieved by placing the role on two or more nodes. The
 cinder-volume services on each node will be configured identically,
 including the 'host' stanza. We use the same approach now for Cinder+Ceph.

 * Nova-compute services for vCenter are kept running on Controller nodes.
 They are managed by Corosync.

 * There are two options for the network backend: the good old Nova-network,
 and a modern Neutron with the ML2 DVS driver enabled. The problem with
 Nova-network is that we have to run it in 'singlehost' mode, meaning that
 only one nova-network service will be running for the whole environment.
 That makes the service a single point of failure, prevents a user from
 using Security Groups, and increases network consumption on the node where
 the service is running. The problem with Neutron is that there is no ML2
 DVS driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
 patch [1] with almost no chance of getting into Kilo. The good news is that
 we managed to run a PoC lab with this driver and both HVs enabled. So we
 can build the driver as a package, but it'll be a little ugly. That's why
 we picked the Nova-network approach as a basis. The cluster creation wizard
 will offer a choice of whether you want to use vCenter in the cluster or
 not. Depending on that, the nova-network service will be run in
 'singlenode' or 'multinode' mode. Maybe, if we have enough resources, we'll
 also implement Neutron + vDS support.

 * We are going to move all VMware-specific settings to a separate UI tab.
 On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
 VMware) and a libvirt_type switch (KVM, qemu). In the cluster creation
 wizard there will be a checkbox called 'add VMware vCenter support to your
 cloud'. When it's enabled, a user can choose nova-network only.

 * The OSTF test suite will be extended to support separate sets of tests
 for each HV.

 [1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/

 Links to blueprints:
 https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
 https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
 https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor


 I would appreciate your thoughts on all of this.



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake


Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-23 Thread Kevin Benton
It's worth noting that all Neutron ML2 drivers are required to move to
their own repos starting in Kilo, so installing an extra Python package to
use a driver will become part of the standard Neutron installation
workflow. I would therefore suggest creating a stackforge project for the
vDS driver and packaging it up.

On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin ada...@mirantis.com wrote:

 Hi, all,

 As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
 VMware vCenter-oriented environment. We want to go further and mix them
 together: a user should be able to run both hypervisors in one OpenStack
 environment. We want to get this into Fuel 6.1. Here is how we are going to
 do it.

 * When vCenter is used as a hypervisor, the only way to use volumes with it
 is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
 provided by the Cinder VMDK backend. This means we need two separate
 infrastructures (a hypervisor + a volume service), one for each HV present
 in the environment. To achieve that, we decided to place the corresponding
 nova-compute and cinder-volume instances into different Availability Zones.
 We also want to disable the 'cross_az_attach' option in nova.conf so that a
 user cannot attach a volume to an instance that doesn't support that volume
 type.

 * When used with VMDK, a cinder-volume service is just a proxy between the
 vCenter Datastore and Glance. This means the service itself doesn't need a
 local hard drive but can sometimes consume significant network bandwidth.
 That's why it's not a good idea to always put it on a Controller node. So we
 want to add a new role called 'cinder-vmdk'. A user will be able to assign
 this role to whatever node he wants: a separate node, or combined with other
 roles. HA will be achieved by placing the role on two or more nodes. The
 cinder-volume services on each node will be configured identically,
 including the 'host' stanza. We use the same approach now for Cinder+Ceph.

 * Nova-compute services for vCenter are kept running on Controller nodes.
 They are managed by Corosync.

 * There are two options for the network backend: the good old Nova-network,
 and a modern Neutron with the ML2 DVS driver enabled. The problem with
 Nova-network is that we have to run it in 'singlehost' mode, meaning that
 only one nova-network service will be running for the whole environment.
 That makes the service a single point of failure, prevents a user from
 using Security Groups, and increases network consumption on the node where
 the service is running. The problem with Neutron is that there is no ML2
 DVS driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
 patch [1] with almost no chance of getting into Kilo. The good news is that
 we managed to run a PoC lab with this driver and both HVs enabled. So we
 can build the driver as a package, but it'll be a little ugly. That's why
 we picked the Nova-network approach as a basis. The cluster creation wizard
 will offer a choice of whether you want to use vCenter in the cluster or
 not. Depending on that, the nova-network service will be run in
 'singlenode' or 'multinode' mode. Maybe, if we have enough resources, we'll
 also implement Neutron + vDS support.

 * We are going to move all VMware-specific settings to a separate UI tab.
 On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
 VMware) and a libvirt_type switch (KVM, qemu). In the cluster creation
 wizard there will be a checkbox called 'add VMware vCenter support to your
 cloud'. When it's enabled, a user can choose nova-network only.

 * The OSTF test suite will be extended to support separate sets of tests
 for each HV.

 [1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/

 Links to blueprints:
 https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
 https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
 https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor


 I would appreciate your thoughts on all of this.



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake





-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev