Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-04-04 Thread Padmanabhan Krishnan
The blueprint is updated with more information on the requirements and 
interaction with VDP

https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support


On Monday, March 31, 2014 12:50 PM, Padmanabhan Krishnan  
wrote:
 
Hi Mathieu,
Thanks for the link. There are some similarities for sure. I see that Nova's
libvirt support is being used; I had looked at libvirt earlier.

Firstly, the libvirt support that Nova uses to communicate with lldpad doesn't
cover the latest 2.2 standard. The support is also only for the VEPA mode and
not for the VEB mode. It's also not quite clear how the VLAN provided by VDP
is used by libvirt and communicated back to OpenStack.
There's already a blueprint where I can add more details
(https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support)

Even for a single physical network, you need more parameters in the ini file. I
was thinking of a host or network overlay, with or without VDP, for the tunnel
mode. I will add more to the blueprint.

Thanks,
Paddu

On Friday, March 28, 2014 8:42 AM, Mathieu Rohon  
wrote:
 
Hi,


The more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might map those physical interfaces with the
physical_interface_mappings parameter in your ml2_conf.ini. Then you
could create flat networks backed by those interfaces.
SR-IOV use cases also talk about using vif_type 802.1qbg:
https://wiki.openstack.org/wiki/Nova-neutron-sriov



Mathieu


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-31 Thread Padmanabhan Krishnan
Hi Mathieu,
Thanks for the link. There are some similarities for sure. I see that Nova's
libvirt support is being used; I had looked at libvirt earlier.

Firstly, the libvirt support that Nova uses to communicate with lldpad doesn't
cover the latest 2.2 standard. The support is also only for the VEPA mode and
not for the VEB mode. It's also not quite clear how the VLAN provided by VDP
is used by libvirt and communicated back to OpenStack.
There's already a blueprint where I can add more details
(https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support)

Even for a single physical network, you need more parameters in the ini file. I
was thinking of a host or network overlay, with or without VDP, for the tunnel
mode. I will add more to the blueprint.
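Just to give an idea of the kind of extra knobs I mean, something along these
lines is roughly the shape of it (purely illustrative -- none of these sections
or options exist today, and the names are made up for the blueprint discussion):

[ml2_type_fabric]
# hypothetical: physical networks whose overlay is terminated by the fabric
fabric_networks = physnet1

[fabric_physnet1]
# hypothetical: "host" = VXLAN/GRE in the hypervisor, "network" = fabric overlay
overlay_mode = network
# hypothetical: learn the access VLAN from the switch via 802.1Qbg VDP
use_vdp = True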

Thanks,
Paddu

On Friday, March 28, 2014 8:42 AM, Mathieu Rohon  
wrote:
 
Hi,


The more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might map those physical interfaces with the
physical_interface_mappings parameter in your ml2_conf.ini. Then you
could create flat networks backed by those interfaces.
SR-IOV use cases also talk about using vif_type 802.1qbg:
https://wiki.openstack.org/wiki/Nova-neutron-sriov



Mathieu


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-28 Thread Mathieu Rohon
Hi,


The more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might map those physical interfaces with the
physical_interface_mappings parameter in your ml2_conf.ini. Then you
could create flat networks backed by those interfaces.
SR-IOV use cases also talk about using vif_type 802.1qbg:
https://wiki.openstack.org/wiki/Nova-neutron-sriov
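For example, a minimal configuration along those lines might look like the
snippet below (the physnet name is a placeholder, and eth1.100 just stands for
whatever VLAN interface VDP ends up creating; the mapping itself goes in the
linuxbridge agent section):

[ml2]
type_drivers = flat,vlan
tenant_network_types = flat
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = physnet1

[linux_bridge]
physical_interface_mappings = physnet1:eth1.100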


Mathieu



Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-27 Thread Padmanabhan Krishnan
Hi Mathieu,
Thanks for your reply.
Yes, even I think the type driver code for tunnels can remain the same, since
the segment/tunnel allocation is not going to change. But some distinction has
to be made in the naming, or by adding another tunnel parameter, to signify a
network overlay.
For the tunnel types, br-tun is created. For regular VLAN, br-ex/br-eth also
has the uplink as a member port. For this case, I was thinking it's easier if
we don't even create br-tun or the VXLAN/GRE endpoints, since the compute nodes
(the data network in OpenStack) are connected through the external fabric. We
will just have br-eth/br-ex and its port connecting to the fabric, just as if
the type were VLAN. If we had to do this, the changes would have to be in the
Neutron agent code.
Is this the right way to go, or are there any suggestions?
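To make the naming distinction concrete, below is a rough, untested sketch of
the type driver piece I have in mind. The "fabric_vxlan" type name and the
class are purely hypothetical, and it assumes the existing VxlanTypeDriver can
simply be subclassed:

# Untested sketch only: reuse the existing VXLAN VNI allocation, but expose it
# under a distinct network type so the mechanism driver/agent can tell that the
# overlay is terminated by the external fabric and skip the br-tun setup.
from neutron.plugins.ml2.drivers import type_vxlan

TYPE_FABRIC_VXLAN = 'fabric_vxlan'  # made-up type name


class FabricVxlanTypeDriver(type_vxlan.VxlanTypeDriver):
    """Same segment/VNI allocation as VXLAN, different network_type string."""

    def get_type(self):
        # A distinct type lets the agent avoid creating br-tun/VXLAN endpoints
        # for these networks and rely on the fabric (via VDP) for encapsulation.
        return TYPE_FABRIC_VXLAN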

Thanks,
Paddu




On Wednesday, March 26, 2014 11:28 AM, Padmanabhan Krishnan  
wrote:
 
Hi Mathieu,
Thanks for your reply.
Yes, even I think the type driver code for tunnels can remain the same, since
the segment/tunnel allocation is not going to change. But some distinction has
to be made in the naming, or by adding another tunnel parameter, to signify a
network overlay.
For the tunnel types, br-tun is created. For regular VLAN, br-ex/br-eth also
has the uplink as a member port. For this case, I was thinking it's easier if
we don't even create br-tun or the VXLAN/GRE endpoints, since the compute nodes
(the data network in OpenStack) are connected through the external fabric. We
will just have br-eth/br-ex and its port connecting to the fabric. If we had to
do this, the changes would have to be in the Neutron agent code.
Is this the right way to go, or are there any suggestions?

Thanks,
Paddu





On Wednesday, March 26, 2014 1:53 AM, Mathieu Rohon  
wrote:
 
Hi,

Thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks, to bypass
the 4K limit of VLANs. Then you would have to send packets to the
VLAN-tagged interface, with the tag assigned by the VDP protocol, and this
traffic would be encapsulated inside the segment to be carried across
the network fabric. Of course you will have to take care of the MTU.
The only thing you have to consider is to make sure that the default
route between the VXLAN endpoints goes through your VLAN-tagged interface.



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan  wrote:
> Hello,
> I have a topology where my OpenStack compute nodes are connected to
> external switches. The fabric comprising the switches supports more than
> 4K segments, so I should be able to create more than 4K networks in
> OpenStack. But the VLAN to be used for communication with the switches is
> assigned by the switches using the 802.1Qbg (VDP) protocol. This can be
> thought of as a network overlay. The VMs send 802.1Q-tagged frames to the
> switches, and the switches associate them with a segment (a VNI in the case
> of VXLAN).
> My questions are:
> 1. I cannot use the VLAN type driver because of the 4K limitation. I cannot
> use the VXLAN or GRE type driver because that would mean a host-based
> overlay. Is there an integrated type driver I can use, like an "external
> network", for achieving the above?
> 2. The OpenStack module running on the compute node should communicate with
> the VDP module (lldpad) running there.
> On the compute nodes, I see that ovs_neutron_agent.py is the one programming
> the flows. Here, for the new type driver, should I add a special case to
> provision_local_vlan() to communicate with lldpad and retrieve the provider
> VLAN? If there were a type driver component running on each compute node, I
> would have added another one for my purpose. Since the ML2 architecture has
> its mechanism/type driver modules in the controller only, I can only make
> changes here.
>
> Please let me know if there's already an implementation for my above
> requirements. If not, should I create a blueprint?
>
> Thanks,
> Paddu
>


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-26 Thread Mathieu Rohon
Hi,

Thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks, to bypass
the 4K limit of VLANs. Then you would have to send packets to the
VLAN-tagged interface, with the tag assigned by the VDP protocol, and this
traffic would be encapsulated inside the segment to be carried across
the network fabric. Of course you will have to take care of the MTU.
The only thing you have to consider is to make sure that the default
route between the VXLAN endpoints goes through your VLAN-tagged interface.
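Concretely, on the OVS agent side that would roughly mean something like the
following (option names as I remember them from ovs_neutron_plugin.ini of that
release, so please double check; the address and the 1450 value are examples
only, assuming a 1500-byte fabric MTU):

[ovs]
# tunnel endpoint address configured on the VLAN-tagged interface
# that VDP negotiated (e.g. 192.0.2.11 on eth1.100)
local_ip = 192.0.2.11
enable_tunneling = True

[agent]
tunnel_types = vxlan
# leave headroom for the extra VXLAN/UDP/IP headers
veth_mtu = 1450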



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan  wrote:
> Hello,
> I have a topology where my OpenStack compute nodes are connected to
> external switches. The fabric comprising the switches supports more than
> 4K segments, so I should be able to create more than 4K networks in
> OpenStack. But the VLAN to be used for communication with the switches is
> assigned by the switches using the 802.1Qbg (VDP) protocol. This can be
> thought of as a network overlay. The VMs send 802.1Q-tagged frames to the
> switches, and the switches associate them with a segment (a VNI in the case
> of VXLAN).
> My questions are:
> 1. I cannot use the VLAN type driver because of the 4K limitation. I cannot
> use the VXLAN or GRE type driver because that would mean a host-based
> overlay. Is there an integrated type driver I can use, like an "external
> network", for achieving the above?
> 2. The OpenStack module running on the compute node should communicate with
> the VDP module (lldpad) running there.
> On the compute nodes, I see that ovs_neutron_agent.py is the one programming
> the flows. Here, for the new type driver, should I add a special case to
> provision_local_vlan() to communicate with lldpad and retrieve the provider
> VLAN? If there were a type driver component running on each compute node, I
> would have added another one for my purpose. Since the ML2 architecture has
> its mechanism/type driver modules in the controller only, I can only make
> changes here.
>
> Please let me know if there's already an implementation for my above
> requirements. If not, should I create a blueprint?
>
> Thanks,
> Paddu
>


[openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-24 Thread Padmanabhan Krishnan
Hello,
I have a topology where my OpenStack compute nodes are connected to
external switches. The fabric comprising the switches supports more than 4K
segments, so I should be able to create more than 4K networks in OpenStack.
But the VLAN to be used for communication with the switches is assigned by the
switches using the 802.1Qbg (VDP) protocol. This can be thought of as a network
overlay. The VMs send 802.1Q-tagged frames to the switches, and the switches
associate them with a segment (a VNI in the case of VXLAN).
My questions are:
1. I cannot use the VLAN type driver because of the 4K limitation. I cannot
use the VXLAN or GRE type driver because that would mean a host-based overlay.
Is there an integrated type driver I can use, like an "external network", for
achieving the above?
2. The OpenStack module running on the compute node should communicate with the
VDP module (lldpad) running there.

On the compute nodes, I see that ovs_neutron_agent.py is the one programming
the flows. Here, for the new type driver, should I add a special case to
provision_local_vlan() to communicate with lldpad and retrieve the provider
VLAN? If there were a type driver component running on each compute node, I
would have added another one for my purpose. Since the ML2 architecture has
its mechanism/type driver modules in the controller only, I can only make
changes here.
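For illustration, the kind of hook I'm imagining on the agent side is sketched
below. Nothing like this exists in the tree, and the lldptool invocation is
schematic (the exact arguments depend on the lldpad/VDP version in use):

# Hypothetical sketch only: ask lldpad which VLAN the fabric assigned for a
# VDP association, instead of picking a tag from the agent's free local-VLAN
# pool the way provision_local_vlan() does today.
import re
import subprocess


def get_fabric_assigned_vlan(interface):
    """Return the VLAN the switch handed back via VDP on this interface."""
    out = subprocess.check_output(
        ['lldptool', '-t', '-i', interface, '-V', 'vdp'])
    match = re.search(r'vlan\s*=?\s*(\d+)', out.decode('utf-8'), re.IGNORECASE)
    if match is None:
        raise RuntimeError('no VDP-assigned VLAN found on %s' % interface)
    return int(match.group(1))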

Please let me know if there's already an implementation for my above
requirements. If not, should I create a blueprint?

Thanks,
Paddu