Hello,

Today we are facing a big scalability problem with our virtual network,
since our physical switches can't handle more than 1024 VLANs
simultaneously.

What we have:

Lots of hypervisors (blade servers), each running one ovs-vswitchd. On
each hypervisor, lots of VMs. Each VM needs to use at least 3 VLANs
(could be 10 or more). This can add up to hundreds of VLANs on one
hypervisor, and thousands inside an enclosure of 14 hypervisors.
We have a 10Gbps backbone linking all our ovs-vswitchd instances. But
when all these VLANs reach the backbone, we could hit the physical
limit of 1024 VLANs on our switches.

How can we deal with that? Is there a way to do some kind of VLAN
translation (like NAT)? We really need to keep the VLAN-tagged
interface inside the VM, but we don't really care about the VLAN IDs
on the backbone (it's a dedicated one). Do we have to use
OpenFlow/NOX? We would surely need some centralized controller to keep
a translation table or something like that.

To sum up, the ideal solution we are looking for:
Source VM => sends packets in VLAN XYZ => OVS on hypervisor1 => one
outbound VLAN on the backbone => OVS on hypervisor2 => packets back in
VLAN XYZ => Destination VM
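A rough sketch of what we imagine, using OpenFlow flows on each OVS
bridge (the bridge name br0, port numbers, and VLAN IDs below are
hypothetical, just for illustration):

```shell
# On hypervisor1: VM traffic arrives on port 1 tagged with VLAN 100;
# rewrite the tag to backbone VLAN 10 and send it out the uplink (port 2).
ovs-ofctl add-flow br0 "in_port=1,dl_vlan=100,actions=mod_vlan_vid:10,output:2"

# Reverse direction: traffic coming from the backbone tagged VLAN 10
# gets rewritten back to VLAN 100 before reaching the VM.
ovs-ofctl add-flow br0 "in_port=2,dl_vlan=10,actions=mod_vlan_vid:100,output:1"
```

Presumably a controller would have to install matching pairs of flows
on every hypervisor so the mapping stays consistent.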

Any ideas?

Best regards.

-- 
Edouard Bourguignon
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss