Hi all,
I am running a test OpenStack environment with 2 compute nodes, each of them
with an MTU of 9000.
Compute nodes:
* hercules-21 (10.0.32.21): 64 CPUs, 512GB RAM and 2x 25Gbps bond network
* hercules-22 (10.0.32.22): 64 CPUs, 512GB RAM and 2x 25Gbps bond network
VMs:
* centos (192.168.1.110): 8 vCPUs, 16GB RAM
* centos2 (192.168.1.109): 8 vCPUs, 16GB RAM
Network bandwidth test, physical host to physical host, using iperf:
[root@hercules-21 ~]# iperf -c 10.0.32.22 -P 4
------------------------------------------------------------
Client connecting to 10.0.32.22, TCP port 5001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 5] local 10.0.32.21 port 59014 connected with 10.0.32.22 port 5001
[ 3] local 10.0.32.21 port 59008 connected with 10.0.32.22 port 5001
[ 4] local 10.0.32.21 port 59010 connected with 10.0.32.22 port 5001
[ 6] local 10.0.32.21 port 59012 connected with 10.0.32.22 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 6.91 GBytes 5.94 Gbits/sec
[ 3] 0.0-10.0 sec 6.97 GBytes 5.98 Gbits/sec
[ 4] 0.0-10.0 sec 6.96 GBytes 5.98 Gbits/sec
[ 6] 0.0-10.0 sec 6.77 GBytes 5.82 Gbits/sec
[SUM] 0.0-10.0 sec 27.6 GBytes 23.7 Gbits/sec
Network bandwidth test, VM to VM, using iperf (each VM is running on a
different host):
[centos@centos2 ~]$ iperf -c 192.168.1.110 -P 4
------------------------------------------------------------
Client connecting to 192.168.1.110, TCP port 5001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 6] local 192.168.1.109 port 60244 connected with 192.168.1.110 port 5001
[ 3] local 192.168.1.109 port 60238 connected with 192.168.1.110 port 5001
[ 4] local 192.168.1.109 port 60240 connected with 192.168.1.110 port 5001
[ 5] local 192.168.1.109 port 60242 connected with 192.168.1.110 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 2.11 GBytes 1.81 Gbits/sec
[ 4] 0.0-10.0 sec 2.12 GBytes 1.82 Gbits/sec
[ 5] 0.0-10.0 sec 2.10 GBytes 1.80 Gbits/sec
[ 6] 0.0-10.0 sec 2.13 GBytes 1.83 Gbits/sec
[SUM] 0.0-10.0 sec 8.45 GBytes 7.25 Gbits/sec
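Since the physical hosts can push ~24 Gbits/sec between each other, the gap
seems to be on the virtualisation path. As a first check I want to confirm
vhost-net is actually in use on the compute nodes rather than the slow
userspace emulation (a minimal sketch; the grep patterns are just
illustrative):
[root@hercules-21 ~]# lsmod | grep vhost_net   # module should be loaded
[root@hercules-21 ~]# ps -ef | grep '\[vhost'  # one vhost kernel thread per virtio queue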
I am using jumbo frames on the physical machines, so I did the same on OpenStack.
MTU on physical host:
[root@hercules-21 ~]# ip a
...
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
link/ether 7c:fe:90:12:23:ec brd ff:ff:ff:ff:ff:ff
inet 10.0.32.21/16 brd 10.0.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::b1b0:74dd:8a3:705e/64 scope link
valid_lft forever preferred_lft forever
...
MTU on VM:
[centos@centos ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:39:41:08 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.110/24 brd 192.168.1.255 scope global dynamic eth0
valid_lft 85658sec preferred_lft 85658sec
inet6 fe80::f816:3eff:fe39:4108/64 scope link
valid_lft forever preferred_lft forever
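To double-check that jumbo frames really work end to end between the VMs
(8950-byte MTU = 8922 bytes of ICMP payload + 28 bytes of IP/ICMP headers), a
ping with fragmentation prohibited should go through; if it does not, the
overlay is dropping or fragmenting large frames:
[centos@centos2 ~]$ ping -M do -s 8922 -c 3 192.168.1.110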
NOTES:
* I am only running these 2 VMs on the hosts, so I have plenty of
resources
* I monitored the CPUs on the VMs during the tests and they are not
throttling the network test (see the multiqueue check below)
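The multiqueue check mentioned above: with a single virtio queue, all of a
VM's traffic is serialised through one vhost thread, which can cap throughput
well below line rate even when the guest CPUs look idle. A sketch of the
check inside the guest (as far as I understand, raising the queue count also
requires the hw_vif_multiqueue_enabled=true image property on the OpenStack
side):
[centos@centos ~]$ ethtool -l eth0                  # current vs maximum queue ('channel') counts
[centos@centos ~]$ sudo ethtool -L eth0 combined 4  # only works if the tap was created with multiple queues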
NOTE 2: I am not sure whether this is important to mention, but according to
OVS, the ports are 10Gbps:
[root@hercules-21 ~]# docker exec -itu 0 openvswitch_vswitchd ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6d41e15d246
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(patch-tun): addr:2e:f1:69:9c:6b:01
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
2(qvodaf83835-28): addr:96:47:72:b2:4d:12
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max
LOCAL(br-int): addr:b6:d4:1e:15:d2:46
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
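As far as I know, the "10GB-FD COPPER" line on the qvo port is just the fixed
default speed Linux reports for veth pairs (qvo is one end of a veth pair in
the hybrid plug setup), so I don't think it is a real 10Gbps cap. Since the
VM traffic is VXLAN-encapsulated, I also want to check whether the NIC
offloads tunnel segmentation on the bond (a sketch; exact feature names vary
per driver):
[root@hercules-21 ~]# ethtool -k bond0 | grep -E 'tso|gro|tnl'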
QUESTION: I would like to know why my VMs are not fully utilising the network
and what I can do to fix it.
Environment details:
* OpenStack version: Pike
* Deployment: kolla-ansible
* Hypervisor: KVM
* Network setup: neutron + ovs + vxlan
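For reference, the 8950 guest MTU matches the 9000 physical MTU minus the
50-byte VXLAN overhead, which neutron derives from its MTU settings. A sketch
of how I would verify it (the "controller" host and the config path are my
assumptions about where kolla puts neutron.conf):
[root@controller ~]# grep global_physnet_mtu /etc/kolla/neutron-server/neutron.conf   # expecting 9000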
Thank you very much
Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: [email protected]