I believe the main reason is to decouple the underlay L2 technology from the overlay VM ports themselves.



The overlay VM ports can continue to be plugged in with a classic VLAN tag, while the underlay L2 technology that carries the VMs' traffic to the rest of the cloud can be changed dynamically.

If the underlay is VLAN, the physical bridges do the traffic management.

If the underlay is VXLAN (or GRE), the tunnel bridge (br-tun) does the traffic management.
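
As a rough illustration (a minimal sketch only; the bridge, patch-port, and tunnel-port names below follow the usual Neutron OVS agent conventions, and the remote IP is a placeholder for a peer compute node), the agent wires the two bridges together so that br-int never has to know which tunnel type is in use:

    # Integration bridge (VM ports) and tunnel bridge
    ovs-vsctl add-br br-int
    ovs-vsctl add-br br-tun

    # A patch-port pair links the two bridges
    ovs-vsctl add-port br-int patch-tun \
        -- set Interface patch-tun type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int \
        -- set Interface patch-int type=patch options:peer=patch-tun

    # Tunnel endpoints (GRE here) live only on br-tun
    ovs-vsctl add-port br-tun gre-1 \
        -- set Interface gre-1 type=gre options:remote_ip=<peer-node-IP>

Swapping the underlay then only changes what is attached to br-tun (or to the physical bridges); the VM side of br-int stays untouched.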



Also, on a single compute node you can have both underlay technologies running: some tenant VMs on a VLAN network-type underlay, and other tenant VMs on the same node on a VXLAN (or GRE) network-type underlay.
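
To make the mapping concrete (illustrative values only: the local VLAN tag, tunnel ID, and OpenFlow port numbers below are assumptions, and the real agent installs a more elaborate set of flow tables), br-tun essentially rewrites between the local VLAN used on br-int and the tunnel ID used on the wire:

    # Assume port 1 = patch-int, port 2 = the GRE port
    # br-int -> wire: strip the local VLAN tag, set the tunnel ID
    ovs-ofctl add-flow br-tun \
        "in_port=1,dl_vlan=1,actions=strip_vlan,set_tunnel:0x3e9,output:2"

    # wire -> br-int: map the tunnel ID back to the local VLAN tag
    ovs-ofctl add-flow br-tun \
        "tun_id=0x3e9,actions=mod_vlan_vid:1,output:1"

Because this translation happens entirely on br-tun, VMs on VLAN-backed and tunnel-backed networks can share the same br-int side by side.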



--

Thanks,



Vivek



From: HS [mailto:hyuns...@ieee.org]
Sent: Thursday, April 24, 2014 8:20 AM
To: openstack@lists.openstack.org
Subject: [Openstack] br-tun and br-int bridges in Neutron OVS



Hi,

When the OVS plugin is used with the GRE option in Neutron, I see that each compute node has br-tun and br-int bridges created.

I'm trying to understand why we need the additional br-tun bridge here. Can't we create the tunneling ports in the br-int bridge and have br-int relay traffic between VM ports and tunneling ports directly? Why do we have to introduce another br-tun bridge in between?



Thanks,

-hs

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack