Hi all,
We're trying OVS DPDK in an OpenStack cloud, but a big warning makes us hesitate.
Floating IPs and qrouter use tap interfaces that are attached to br-int, and
SNAT presumably works the same way, so OVS DPDK will significantly impact VM
network performance. I believe many cloud providers have deployed OVS DPDK,
so my questions are:
1. Are there known ways to improve this? (For VM traffic itself, see the
vhost-user sketch below.)
2. Is there any existing effort addressing this? Veth pairs in Kubernetes
should hit the same performance issue in the OVS DPDK case.
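As I understand it, with OVS DPDK the VM vNICs themselves are plugged into
br-int as vhost-user ports rather than taps, so the tap bottleneck should
mainly affect the L3 agent's qrouter/FIP/SNAT namespaces. A minimal sketch of
such a port (port name and socket path are placeholders):
$ sudo ovs-vsctl add-port br-int vhu0 -- set Interface vhu0 \
      type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhu0.sock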
I also found a very weird issue. I added two veth pairs to an OVS bridge and
to an OVS DPDK bridge; in the plain OVS case iperf3 works well, but in the
OVS DPDK case it does not. What's wrong?
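For reference, the namespaces and veth pairs were created along these lines
(from memory, so exact flags may differ; the same steps were repeated for
ns2/veth2 with 20.1.1.2/24, plus veth3/veth4):
$ sudo ip netns add ns1
$ sudo ip link add veth1 type veth peer name veth1-br
$ sudo ip link set veth1 netns ns1
$ sudo ip netns exec ns1 ip addr add 20.1.1.1/24 dev veth1
$ sudo ip netns exec ns1 ip link set veth1 up
$ sudo ip link set veth1-br up
$ sudo ./my-ovs-vsctl add-port br-int veth1-br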
$ sudo ./my-ovs-vsctl show
2a67c1d9-51dc-4728-bb3e-405f2f49e2b1
    Bridge br-int
        Port "veth3-br"
            Interface "veth3-br"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:00:08.0"}
        Port br-int
            Interface br-int
                type: internal
        Port "veth2-br"
            Interface "veth2-br"
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:00:09.0"}
        Port "veth4-br"
            Interface "veth4-br"
        Port "veth1-br"
            Interface "veth1-br"
$ sudo ip netns exec ns1 ifconfig veth1
veth1 Link encap:Ethernet HWaddr 26:32:e8:f3:1e:2a
inet addr:20.1.1.1 Bcast:20.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::2432:e8ff:fef3:1e2a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:809 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:66050 (66.0 KB) TX bytes:1580 (1.5 KB)
$ sudo ip netns exec ns2 ifconfig veth2
veth2 Link encap:Ethernet HWaddr 82:71:3b:41:d1:ec
inet addr:20.1.1.2 Bcast:20.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::8071:3bff:fe41:d1ec/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:862 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:70436 (70.4 KB) TX bytes:2024 (2.0 KB)
$ sudo ip netns exec ns2 ping 20.1.1.1
PING 20.1.1.1 (20.1.1.1) 56(84) bytes of data.
64 bytes from 20.1.1.1: icmp_seq=1 ttl=64 time=0.353 ms
64 bytes from 20.1.1.1: icmp_seq=2 ttl=64 time=0.322 ms
64 bytes from 20.1.1.1: icmp_seq=3 ttl=64 time=0.333 ms
64 bytes from 20.1.1.1: icmp_seq=4 ttl=64 time=0.329 ms
64 bytes from 20.1.1.1: icmp_seq=5 ttl=64 time=0.340 ms
^C
--- 20.1.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4099ms
rtt min/avg/max/mdev = 0.322/0.335/0.353/0.019 ms
$ sudo ip netns exec ns1 iperf3 -s -i 10 &
[2] 2851
[1] Exit 1 sudo ip netns exec ns1 iperf3 -s -i 10
$ -----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
$ sudo ip netns exec ns2 iperf3 -t 60 -i 10 -c 20.1.1.1
iperf3: error - unable to connect to server: Connection timed out
$
iperf3 just hangs there and then exits with a timeout. What's wrong here?
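One possible explanation (just a guess): ping only sends small ICMP packets,
while iperf3 needs TCP, and with veth the kernel may leave the TCP checksum to
be filled in by offload, which the userspace datapath won't do, so the segments
arrive with bad checksums and get dropped. If that's the case, disabling
checksum offload on the namespace ends and watching the traffic should show it:
$ sudo ip netns exec ns1 ethtool -K veth1 tx off rx off
$ sudo ip netns exec ns2 ethtool -K veth2 tx off rx off
$ sudo ip netns exec ns1 tcpdump -nvi veth1 tcp port 5201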
$ sudo ./my-ovs-ofctl -Oopenflow13 dump-flows br-int
cookie=0x0, duration=1076.396s, table=0, n_packets=1522, n_bytes=124264,
priority=0 actions=NORMAL
$
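If the SYN leaves one veth but never shows up on the other, looking below the
OpenFlow table at the datapath flows and PMD stats might also narrow it down
(assuming ovs-appctl from the same build as the my-ovs-* wrappers above):
$ sudo ovs-appctl dpctl/dump-flows
$ sudo ovs-appctl dpif-netdev/pmd-stats-show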
Below is the relevant Red Hat OSP documentation for your reference:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure
8.8. Known limitations
There are certain limitations when configuring OVS-DPDK with Red Hat
OpenStack Platform for the NFV use case:
* Use Linux bonds for control plane networks. Ensure both PCI devices
used in the bond are on the same NUMA node for optimum performance. Neutron
Linux bridge configuration is not supported by Red Hat.
* Huge pages are required for every instance running on the hosts with
OVS-DPDK. If huge pages are not present in the guest, the interface appears
but does not function.
* There is a performance degradation of services that use tap devices,
because these devices do not support DPDK. For example, services such as
DVR, FWaaS, and LBaaS use tap devices.
* With OVS-DPDK, you can enable DVR with netdev datapath, but this has
poor performance and is not suitable for a production environment. DVR uses
kernel namespace and tap devices to perform the routing.
* To ensure the DVR routing performs well with OVS-DPDK, you need to
use a controller such as ODL which implements routing as OpenFlow rules.
With OVS-DPDK, OpenFlow routing removes the bottleneck introduced by the
Linux kernel interfaces so that the full performance of datapath is
maintained.
