Ilya, the raw socket for interfaces whose type is "system" has been set to
non-blocking mode, so can you explain which syscall will lead to a sleep?
Yes, a pmd thread will consume CPU even if it has nothing to do, but all
type=dpdk ports are already handled by pmd threads; here we just let system
interfaces look like DPDK interfaces.  I didn't see any problem in my
tests; it would be better if you could tell me what will result in a
problem and how I can reproduce it.  By the way, type=tap/internal
interfaces are still handled by the ovs-vswitchd thread.

In addition, it is only a one-line change, ".is_pmd = true,"; setting
".is_pmd = false," will keep these interfaces in ovs-vswitchd if there is
any other concern.  We can change the non-thread-safe parts to support pmd.
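
For orientation, the abbreviated fragment below (not compilable on its own;
every other member of the real initializer in lib/netdev-linux.c is omitted
here) shows where that one line sits:

/* Abbreviated, for orientation only: the "system" netdev class with the
 * single field under discussion flipped. */
const struct netdev_class netdev_linux_class = {
    .type = "system",
    .is_pmd = true,     /* Set back to false to keep rx in ovs-vswitchd. */
    /* ... all other netdev_class members unchanged ... */
};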

-----Original Message-----
From: dev [mailto:[email protected]] On Behalf Of Ilya Maximets
Sent: March 18, 2020 19:45
To: [email protected]; [email protected]
Cc: [email protected]
Subject: Re: [ovs-dev] [PATCH v7] Use TPACKET_V3 to accelerate veth for userspace datapath

On 3/18/20 10:02 AM, [email protected] wrote:
> From: Yi Yang <[email protected]>
> 
> We can avoid high system call overhead by using TPACKET_V3 and a
> DPDK-like poll to receive and send packets (note: send still needs to
> call sendto to trigger the final packet transmission).
> 
> TPACKET_V3 has been supported since Linux kernel 3.10, so all the
> Linux kernels that current OVS supports can run TPACKET_V3 without
> any problem.
> 
> With TPACKET_V3 I see about a 50% performance improvement for veth
> compared to the previous recvmmsg optimization: about 2.21 Gbps,
> versus 1.47 Gbps before.
> 
> After is_pmd is set to true, performance improves much more: about a
> 180% performance improvement.
> 
> TPACKET_V3 can support TSO, but its performance isn't good because of
> a TPACKET_V3 kernel implementation issue, so it falls back to
> recvmmsg in case userspace-tso-enable is set to true.  When
> userspace-tso-enable is set to false, TPACKET_V3 performs better than
> recvmmsg, so TPACKET_V3 is used in that case.
> 
> Note: how much performance improves depends on your platform; some
> platforms see a huge improvement, on others it is less noticeable.
> However, if is_pmd is set to true, you can see a big performance
> improvement, provided that the tested veth interfaces are attached to
> different pmd threads.
> 
> Signed-off-by: Yi Yang <[email protected]>
> Co-authored-by: William Tu <[email protected]>
> Signed-off-by: William Tu <[email protected]>
> ---
>  acinclude.m4                     |  12 ++
>  configure.ac                     |   1 +
>  include/sparse/linux/if_packet.h | 111 +++++++++++
>  lib/dp-packet.c                  |  18 ++
>  lib/dp-packet.h                  |   9 +
>  lib/netdev-linux-private.h       |  26 +++
>  lib/netdev-linux.c               | 419 +++++++++++++++++++++++++++++++++++++--
>  7 files changed, 579 insertions(+), 17 deletions(-)
> 
> Changelog:
> - v6->v7
>  * is_pmd is set to true for system interfaces

This cannot be done that simply and should not be done unconditionally
anyway.  netdev-linux is not thread safe in many ways; at the least, stats
accounting will be messed up.  The second issue is that this change will
harm all the usual DPDK-based setups, since PMD threads will start making a
lot of syscalls and sleeping inside the kernel, missing packets from the
fast DPDK interfaces.  The third issue is that this change will fire up at
least one PMD thread constantly consuming 100% CPU, even on setups where it
is not needed.  So, this version is definitely not acceptable.

Best regards, Ilya Maximets.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
