The blueprint has been updated with more information on the requirements and
the interaction with VDP:
https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support
On Monday, March 31, 2014 12:50 PM, Padmanabhan Krishnan wrote:
Hi Mathieu,
Thanks for the link. Some similarities for sure. I see Nova's libvirt path being
used. I had looked at libvirt earlier.
Firstly, the libvirt support that Nova uses to communicate with LLDPAD doesn't
support the latest 2.2 standard. The support is also limited to the VEPA
mode and no
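For context, the libvirt hookup being discussed is the 802.1Qbg virtualport element in a guest interface definition. A minimal sketch, assuming a VEPA-mode direct interface; the managerid/typeid/instanceid values here are placeholders, not values from this thread:

```xml
<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
  <virtualport type='802.1Qbg'>
    <!-- placeholder VSI parameters; real values come from the network admin -->
    <parameters managerid='1' typeid='2' typeidversion='1'
                instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
  </virtualport>
</interface>
```

With this definition, libvirt drives the VDP association for the guest's port through lldpad on the host.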
Hi,
the more I think about your use case, the more I think you should
create a blueprint to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might match those physical interfaces with the
physical_interface_mapp
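The option alluded to here is presumably the Linux bridge agent's physical_interface_mappings setting, which ties a Neutron physical network name to a host NIC. A hedged sketch; the physnet name and interface are example values, not from this deployment:

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (example values)
[linux_bridge]
# Map the Neutron physical network "physnet1" to the host interface eth1;
# VLAN-tagged subinterfaces would then be created on top of eth1.
physical_interface_mappings = physnet1:eth1
```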
Hi Mathieu,
Thanks for your reply.
Yes, I also think the type driver code for tunnels can remain the same, since
the segment/tunnel allocation is not going to change. But some
distinction has to be made, either in the naming or by adding another tunnel
parameter, to signify a network overlay.
For tunne
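The point above is that the same segmentation-ID allocation can back either a host-level tunnel overlay or a fabric ("network") overlay, with only an extra distinguishing attribute. A minimal sketch of that idea; make_segment and the overlay-scope key are hypothetical names for illustration, not Neutron API:

```python
def make_segment(seg_id, overlay="host"):
    """Build a provider-network description (field names are illustrative)."""
    return {
        "provider:network_type": "vxlan",
        "provider:segmentation_id": seg_id,
        "overlay-scope": overlay,  # hypothetical extra tunnel parameter
    }

# Allocation logic is unchanged: same ID space, different overlay scope.
host_seg = make_segment(5000)
fabric_seg = make_segment(5000, overlay="fabric")
assert host_seg["provider:segmentation_id"] == fabric_seg["provider:segmentation_id"]
print(host_seg["overlay-scope"], fabric_seg["overlay-scope"])
```

The design choice being debated is exactly where this distinction lives: in the type name versus in a separate parameter.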
Hi,
thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks to bypass
the 4K limit of VLANs. Then you would have to send packets to the
VLAN-tagged interface, with the tag assigned by the VDP protocol, and this
traffic would be encapsulated inside the
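The 4K figure comes from the 12-bit VLAN ID field, while VXLAN carries a 24-bit VNI, which is why the suggestion above sidesteps the limit: the VDP-assigned VLAN tag is only link-local, and the real segment ID travels in the overlay header. A quick back-of-the-envelope check:

```python
# VLAN IDs are 12 bits; VXLAN VNIs are 24 bits.
vlan_ids = 2 ** 12    # 4096 possible tags (IDs 0 and 4095 are reserved)
vxlan_vnis = 2 ** 24  # 16777216 possible segment IDs

# The per-link VLAN tag is local to each compute-node-to-switch link, so the
# same small tag space can be reused while VNIs distinguish tenant segments.
print(vlan_ids, vxlan_vnis)
```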
Hello,
I have a topology where my OpenStack compute nodes are connected to
external switches. The fabric comprising the switches supports more than 4K
segments, so I should be able to create more than 4K networks in OpenStack.
But the VLAN to be used for communication with the switches i