Hi, folks

I reached my goal on Ubuntu kernel 4.15.0-92.93; here is my performance
data with tpacket_v3 and TSO enabled. So now I'm very sure tpacket_v3 can do better.
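
For anyone who has not read the patch itself: what is being benchmarked
here is the standard AF_PACKET TPACKET_V3 ring, i.e. a raw socket whose
receive ring is mmap'ed into user space so that frames can be read in
batches without a syscall per packet. Below is a minimal setup sketch;
the ring geometry is purely illustrative, not the values the patch uses.

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>

/* Create a raw AF_PACKET socket with an mmap'ed TPACKET_V3 RX ring.
 * Returns the socket fd and stores the mapped area in *ring.
 * Block/frame sizes here are illustrative only. */
static int
tpacket_v3_rx_socket(void **ring, size_t *ring_size)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        exit(1);
    }

    int ver = TPACKET_V3;
    if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof ver) < 0) {
        perror("PACKET_VERSION");
        exit(1);
    }

    struct tpacket_req3 req;
    memset(&req, 0, sizeof req);
    req.tp_block_size = 1 << 22;      /* 4 MB per block */
    req.tp_block_nr = 8;              /* 32 MB ring in total */
    req.tp_frame_size = 1 << 11;      /* 2 KB frame slots */
    req.tp_frame_nr = req.tp_block_size / req.tp_frame_size * req.tp_block_nr;
    req.tp_retire_blk_tov = 60;       /* retire a block after at most 60 ms */
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof req) < 0) {
        perror("PACKET_RX_RING");
        exit(1);
    }

    *ring_size = (size_t) req.tp_block_size * req.tp_block_nr;
    *ring = mmap(NULL, *ring_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (*ring == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    return fd;
}

The TX side is created the same way with PACKET_TX_RING, and the socket
is then bound to the target interface via bind() with a struct sockaddr_ll.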

[yangyi@localhost ovs-master]$ sudo ../run-iperf3.sh
iperf3: no process found
Connecting to host 10.15.1.3, port 5201
[  4] local 10.15.1.2 port 44976 connected to 10.15.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  19.6 GBytes  16.8 Gbits/sec  106586    307 KBytes
[  4]  10.00-20.00  sec  19.5 GBytes  16.7 Gbits/sec  104625    215 KBytes
[  4]  20.00-30.00  sec  20.0 GBytes  17.2 Gbits/sec  106962    301 KBytes
[  4]  30.00-40.00  sec  19.9 GBytes  17.1 Gbits/sec  102262    346 KBytes
[  4]  40.00-50.00  sec  19.8 GBytes  17.0 Gbits/sec  105383    225 KBytes
[  4]  50.00-60.00  sec  19.9 GBytes  17.1 Gbits/sec  103177    294 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec   119 GBytes  17.0 Gbits/sec  628995             sender
[  4]   0.00-60.00  sec   119 GBytes  17.0 Gbits/sec                  receiver

Server output:
Accepted connection from 10.15.1.2, port 44974
[  5] local 10.15.1.3 port 5201 connected to 10.15.1.2 port 44976
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  19.5 GBytes  16.7 Gbits/sec
[  5]  10.00-20.00  sec  19.5 GBytes  16.7 Gbits/sec
[  5]  20.00-30.00  sec  20.0 GBytes  17.2 Gbits/sec
[  5]  30.00-40.00  sec  19.9 GBytes  17.1 Gbits/sec
[  5]  40.00-50.00  sec  19.8 GBytes  17.0 Gbits/sec
[  5]  50.00-60.00  sec  19.9 GBytes  17.1 Gbits/sec
[  5]  60.00-60.04  sec  89.1 MBytes  17.5 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.04  sec   119 GBytes  17.0 Gbits/sec                  receiver


iperf Done.
[yangyi@localhost ovs-master]$

-----Original Message-----
From: Yi Yang (杨燚)-云服务集团
Sent: March 19, 2020 11:12
To: 'u9012...@gmail.com' <u9012...@gmail.com>
Cc: 'i.maxim...@ovn.org' <i.maxim...@ovn.org>; 'yang_y...@163.com'
<yang_y...@163.com>; 'ovs-dev@openvswitch.org' <ovs-dev@openvswitch.org>
Subject: Re: [ovs-dev] Re: [PATCH v7] Use TPACKET_V3 to accelerate veth for
userspace datapath
Importance: High

Hi, folks

As I said, TPACKET_V3 does have a kernel implementation issue. I tried to fix
it in Linux kernel 5.5.9; here is my test data with tpacket_v3 and TSO enabled.
On my low-end server, my goal is to reach at least 16 Gbps, and I still have
another idea for improving it.
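
For context on the TSO side: offloaded (GSO/TSO) packets can only cross an
AF_PACKET socket if the socket opts in to a virtio_net_hdr in front of each
frame, which carries the segmentation and checksum metadata. The opt-in
itself is a one-line setsockopt; the sketch below is only meant to make the
mechanism concrete, the actual wiring (and the kernel-side fix referred to
above) lives in the patch and the kernel, not here.

#include <linux/if_packet.h>
#include <linux/virtio_net.h>   /* struct virtio_net_hdr layout */
#include <sys/socket.h>

/* Ask the kernel to prepend/consume a struct virtio_net_hdr on every
 * frame crossing this AF_PACKET socket, so TSO/GSO packets travel with
 * their offload metadata instead of being segmented first. */
static int
enable_vnet_hdr(int fd)
{
    int on = 1;
    return setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR, &on, sizeof on);
}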

[yangyi@localhost ovs-master]$ sudo ../run-iperf3.sh
Connecting to host 10.15.1.3, port 5201
[  4] local 10.15.1.2 port 42336 connected to 10.15.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  12.9 GBytes  11.1 Gbits/sec    1   3.09 MBytes
[  4]  10.00-20.00  sec  12.9 GBytes  11.1 Gbits/sec    0   3.09 MBytes
[  4]  20.00-30.00  sec  12.9 GBytes  11.1 Gbits/sec    3   3.09 MBytes
[  4]  30.00-40.00  sec  12.8 GBytes  11.0 Gbits/sec    0   3.09 MBytes
[  4]  40.00-50.00  sec  12.8 GBytes  11.0 Gbits/sec    0   3.09 MBytes
[  4]  50.00-60.00  sec  12.8 GBytes  11.0 Gbits/sec    0   3.09 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec    4             sender
[  4]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec                  receiver

Server output:
Accepted connection from 10.15.1.2, port 42334
[  5] local 10.15.1.3 port 5201 connected to 10.15.1.2 port 42336
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  12.9 GBytes  11.1 Gbits/sec
[  5]  10.00-20.00  sec  12.9 GBytes  11.1 Gbits/sec
[  5]  20.00-30.00  sec  12.9 GBytes  11.1 Gbits/sec
[  5]  30.00-40.00  sec  12.8 GBytes  11.0 Gbits/sec
[  5]  40.00-50.00  sec  12.8 GBytes  11.0 Gbits/sec
[  5]  50.00-60.00  sec  12.8 GBytes  11.0 Gbits/sec
[  5]  60.00-60.01  sec  14.3 MBytes  12.4 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.01  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.01  sec  77.2 GBytes  11.0 Gbits/sec                  receiver


iperf Done.
[yangyi@localhost ovs-master]$

-----Original Message-----
From: William Tu [mailto:u9012...@gmail.com]
Sent: March 19, 2020 5:42
To: Yi Yang (杨燚)-云服务集团 <yangy...@inspur.com>
Cc: i.maxim...@ovn.org; yang_y...@163.com; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] Re: [PATCH v7] Use TPACKET_V3 to accelerate veth for
userspace datapath

On Wed, Mar 18, 2020 at 6:22 AM Yi Yang (杨燚)-云服务集团 <yangy...@inspur.com> wrote:
>
> Ilya, the raw socket for interfaces whose type is "system" has been
> set to non-blocking mode, so can you explain which syscall would lead
> to a sleep? Yes, a pmd thread consumes CPU even when it has nothing to
> do, but all type=dpdk ports are already handled by pmd threads; here
> we just make system interfaces look like a DPDK interface. I didn't
> see any problem in my tests; it would be better if you could tell me
> what would cause a problem and how I can reproduce it. By the way,
> type=tap/internal interfaces are still handled by the ovs-vswitchd thread.
>
> In addition, the change is only one line: ".is_pmd = true,". Setting
> ".is_pmd = false," keeps the handling in ovs-vswitchd if there is any
> other concern. We can change the non-thread-safe parts to support pmd.
>

Hi Yiyang and Ilya,

How about making tpacket_v3 a new netdev class with type="tpacket"?
Like my original patch:
https://mail.openvswitch.org/pipermail/ovs-dev/2019-December/366229.html

Users would have to create it explicitly with type="tpacket", e.g.:
  $ ovs-vsctl add-port br0 enp2s0 -- set interface enp2s0 type="tpacket"
And we can set is_pmd=true for this particular type.
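
To illustrate why is_pmd suits this kind of port: once the ring is mapped,
a pmd thread can poll it with no syscall at all, simply by checking whether
the kernel has handed the next block over to user space. A rough sketch of
one polling step follows (the function and callback names are made up for
the example; this is not the patch's code):

#include <linux/if_packet.h>
#include <stddef.h>
#include <stdint.h>

/* One polling iteration over a TPACKET_V3 RX ring: if the kernel has
 * retired block 'blk_idx', hand every frame in it to 'cb' and give the
 * block back.  Returns the number of frames consumed, or 0 if the block
 * is still owned by the kernel, in which case a pmd thread simply tries
 * again on its next loop (which is why it burns CPU even when idle). */
static unsigned int
tpacket_v3_poll_block(void *ring, unsigned int blk_idx, unsigned int blk_size,
                      void (*cb)(const uint8_t *frame, unsigned int len))
{
    struct tpacket_block_desc *bd = (struct tpacket_block_desc *)
        ((uint8_t *) ring + (size_t) blk_idx * blk_size);

    if (!(bd->hdr.bh1.block_status & TP_STATUS_USER)) {
        return 0;                              /* kernel still filling it */
    }

    unsigned int n_pkts = bd->hdr.bh1.num_pkts;
    struct tpacket3_hdr *ppd = (struct tpacket3_hdr *)
        ((uint8_t *) bd + bd->hdr.bh1.offset_to_first_pkt);

    for (unsigned int i = 0; i < n_pkts; i++) {
        cb((const uint8_t *) ppd + ppd->tp_mac, ppd->tp_snaplen);
        ppd = (struct tpacket3_hdr *) ((uint8_t *) ppd + ppd->tp_next_offset);
    }

    bd->hdr.bh1.block_status = TP_STATUS_KERNEL;   /* give the block back */
    return n_pkts;
}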

Regards
William
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
