In the same environment, but using tap instead of veth, the retransmission (Retr) count is 0 for the case without this patch (of course, I applied Flavio's tap enable patch).
vagrant@ubuntu1804:~$ sudo ./run-iperf3.sh
Connecting to host 10.15.1.3, port 5201
[  4] local 10.15.1.2 port 54572 connected to 10.15.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  12.6 GBytes  10.9 Gbits/sec    0   3.14 MBytes
[  4]  10.00-20.00  sec  12.8 GBytes  11.0 Gbits/sec    0   3.14 MBytes
[  4]  20.00-30.00  sec  10.2 GBytes  8.76 Gbits/sec    0   3.14 MBytes
[  4]  30.00-40.00  sec  10.0 GBytes  8.63 Gbits/sec    0   3.14 MBytes
[  4]  40.00-50.00  sec  10.4 GBytes  8.94 Gbits/sec    0   3.14 MBytes
[  4]  50.00-60.00  sec  10.8 GBytes  9.31 Gbits/sec    0   3.14 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  67.0 GBytes  9.59 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  67.0 GBytes  9.59 Gbits/sec                  receiver

Server output:
Accepted connection from 10.15.1.2, port 54570
[  5] local 10.15.1.3 port 5201 connected to 10.15.1.2 port 54572
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  12.6 GBytes  10.9 Gbits/sec
[  5]  10.00-20.00  sec  12.8 GBytes  11.0 Gbits/sec
[  5]  20.00-30.00  sec  10.2 GBytes  8.76 Gbits/sec
[  5]  30.00-40.00  sec  10.0 GBytes  8.63 Gbits/sec
[  5]  40.00-50.00  sec  10.4 GBytes  8.94 Gbits/sec
[  5]  50.00-60.00  sec  10.8 GBytes  9.31 Gbits/sec
[  5]  60.00-60.00  sec  1.75 MBytes  9.25 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.00  sec  0.00 Bytes   0.00 bits/sec   sender
[  5]   0.00-60.00  sec  67.0 GBytes  9.59 Gbits/sec  receiver

iperf Done.
vagrant@ubuntu1804:~$

-----Original Message-----
From: dev [mailto:ovs-dev-boun...@openvswitch.org] On Behalf Of William Tu
Sent: February 26, 2020 6:32
To: yang_y...@126.com
Cc: yang_y_yi <yang_y...@163.com>; ovs-dev <ovs-dev@openvswitch.org>
Subject: Re: [ovs-dev] [PATCH v5] Use TPACKET_V3 to accelerate veth for userspace datapath

On Mon, Feb 24, 2020 at 5:01 AM <yang_y...@126.com> wrote:
>
> From: Yi Yang <yangy...@inspur.com>
>
> We can avoid high system call overhead by using TPACKET_V3 with a
> DPDK-like poll to receive and send packets (note: send still needs to
> call sendto to trigger the final packet transmission).
>
> TPACKET_V3 has been supported since Linux kernel 3.10, so all the
> Linux kernels OVS currently supports can run TPACKET_V3 without any
> problem.
>
> With TPACKET_V3 I see about a 30% performance improvement for veth
> compared to the previous recvmmsg optimization: about 1.98 Gbps now,
> versus 1.47 Gbps before.
>
> TPACKET_V3 can support TSO, but only if your kernel supports it; this
> has been verified on Ubuntu 18.04 with 5.3.0-40-generic. If you find
> the performance is very poor, please turn off TSO for veth interfaces
> when userspace-tso-enable is set to true.

Did you test the performance with TSO enabled? Using veth (like your
run-iperf3.sh) and kernel 5.3, without your patch and with TSO enabled
I can get around 6 Gbps, but with this patch and TSO enabled the
performance drops to 1.9 Gbps.

Regards,
William
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev