Hi all,

I've got an OpenStack setup using XenServer 6 as the hypervisor platform. Each XS6 server has two bonds, one for management traffic and the other for VM traffic:
[root@xen1 ~]# ovs-appctl bond/list
bridge  bond   type         slaves
xapi2   bond1  balance-slb  eth3, eth2
xapi1   bond0  balance-slb  eth1, eth0

While the management traffic on bond1 does not require VLANs, the virtual machines' traffic on bond0 does.

If I measure the traffic rate between the XS6 hosts and the network host on the management LAN, the result is near gigabit, as expected:

------------------------------------------------------------
Client connecting to 10.1.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.1.1.34 port 56030 connected with 10.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec  1.10 GBytes  943 Mbits/sec

Likewise, if I run the same test from a virtual machine to a physical host on the same DMZ, I get similar results.

However, the inter-VM traffic rate is much lower. Running iperf between virtual machines on the same VLAN but on different XS6 hosts, I get:

------------------------------------------------------------
Client connecting to 10.12.0.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.12.0.8 port 47465 connected with 10.12.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-10.0 sec   200 MBytes   168 Mbits/sec

The Open vSwitch version on all the hosts is:

ovs-vswitchd (Open vSwitch) 1.0.99
Compiled Aug 2 2011 11:50:44
OpenFlow versions 0x1:0x1

Does anyone have any suggestions on how to address this problem?

Thanks a lot,
Giuseppe
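P.S. In case it helps with diagnosis: as far as I understand, balance-slb pins each source MAC (per VLAN) to a single slave, so I can check which slave the VMs' traffic is landing on with something like the following (I believe bond/show is available alongside bond/list, though I have not verified it in this 1.0.99 build):

[root@xen1 ~]# ovs-appctl bond/show bond0

That should list each slave together with the source MACs currently hashed onto it, so it would at least show whether both test VMs end up on the same physical NIC.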

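P.P.S. To see whether the limit is per-flow or aggregate, I can also rerun the inter-VM test with several parallel streams; a sketch of the command, using iperf's -P option to open multiple client threads for 10 seconds:

iperf -c 10.12.0.10 -P 4 -t 10

If the aggregate of the four streams gets close to gigabit while a single stream stays around 168 Mbits/sec, the bottleneck is per-flow rather than in the bond itself.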