Re: [ovs-dev] iperf tcp issue on veth using afxdp

2019-12-23 Thread Yifeng Sun
Hi Yi,

I don't have an OVS DPDK setup yet; I need to set it up first.

On my machine, afxdp can reach 4.6 Gbps:

[  3]  0.0- 1.0 sec   564 MBytes  4.73 Gbits/sec
[  3]  1.0- 2.0 sec   553 MBytes  4.64 Gbits/sec
[  3]  2.0- 3.0 sec   558 MBytes  4.68 Gbits/sec
[  3]  3.0- 4.0 sec   556 MBytes  4.66 Gbits/sec
[  3]  4.0- 5.0 sec   545 MBytes  4.57 Gbits/sec
[  3]  5.0- 6.0 sec   554 MBytes  4.64 Gbits/sec
[  3]  6.0- 7.0 sec   548 MBytes  4.60 Gbits/sec
[  3]  7.0- 8.0 sec   548 MBytes  4.60 Gbits/sec
[  3]  8.0- 9.0 sec   550 MBytes  4.62 Gbits/sec
[  3]  9.0-10.0 sec   548 MBytes  4.60 Gbits/sec

Thanks,
Yifeng

On Sun, Dec 22, 2019 at 4:40 PM Yi Yang (杨燚)-云服务集团  wrote:
>
> Hi, Yifeng
>
> I'll try it again. By the way, did you try af_packet for veth in OVS DPDK? On
> my machine it can reach 4 Gbps; do you think af_xdp can reach that number?
>
> -----Original Message-----
> From: Yifeng Sun [mailto:pkusunyif...@gmail.com]
> Sent: December 21, 2019, 9:11
> To: William Tu
> Cc:  ; Ilya Maximets ; Eelco Chaudron ; Yi Yang (杨燚)-云服务集团
> Subject: Re: [ovs-dev] iperf tcp issue on veth using afxdp
>
> This seems to be related to netdev-afxdp's batch size being bigger than the
> kernel's XDP batch size. I created a patch to fix it.
>
> https://patchwork.ozlabs.org/patch/1214397/
>
> Could anyone take a look at this patch?
>
> Thanks,
> Yifeng
>
> On Fri, Nov 22, 2019 at 9:52 AM William Tu  wrote:
> >
> > Hi Ilya and Eelco,
> >
> > Yiyang reports very poor TCP performance on his setup, and I can also
> > reproduce it on my machine. I suspect this might be a kernel issue, but
> > I don't know where to start debugging. I need your suggestions on how
> > to debug it.
> >
> > The setup is like the system-traffic tests: create two namespaces with
> > veth devices and attach them to OVS. I do remember to turn off tx
> > offload, and ping, UDP, and nc (TCP mode) all work fine.
> >
> > TCP throughput with iperf drops to 0 Mbps after 4 seconds.
> > At server side:
> > root@osboxes:~/ovs# ip netns exec at_ns0 iperf -s
> > 
> > Server listening on TCP port 5001
> > TCP window size:  128 KByte (default)
> > 
> > [  4] local 10.1.1.1 port 5001 connected with 10.1.1.2 port 40384
> > Waiting for server threads to complete. Interrupt again to force quit.
> >
> > At client side
> > root@osboxes:~/bpf-next# ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10
> >
> > Client connecting to 10.1.1.1, TCP port 5001
> > TCP window size: 85.0 KByte (default)
> >
> > [  3] local 10.1.1.2 port 40384 connected with 10.1.1.1 port 5001
> > [ ID] Interval   Transfer Bandwidth
> > [  3]  0.0- 1.0 sec  17.0 MBytes   143 Mbits/sec
> > [  3]  1.0- 2.0 sec  9.62 MBytes  80.7 Mbits/sec
> > [  3]  2.0- 3.0 sec  6.75 MBytes  56.6 Mbits/sec
> > [  3]  3.0- 4.0 sec  11.0 MBytes  92.3 Mbits/sec
> > [  3]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3] 10.0-11.0 sec  0.00 Bytes  0.00 bits/sec
> >
> > (after this, even ping stops working)
> >
> > Script to reproduce
> > -
> > ovs-vsctl -- add-br br0 -- set Bridge br0 datapath_type=netdev
> >
> > ip netns add at_ns0
> > ip link add p0 type veth peer name afxdp-p0
> > ip link set p0 netns at_ns0
> > ip link set dev afxdp-p0 up
> > ovs-vsctl add-port br0 afxdp-p0
> >
> > ovs-vsctl -- set interface afxdp-p0 options:n_rxq=1 type="afxdp"
> > options:xdp-mode=native
> > ip netns exec at_ns0 sh << NS_EXEC_HEREDOC
> > ip addr add "10.1.1.1/24" dev p0
> > ip link set dev p0 up
> > NS_EXEC_HEREDOC
> >
> > ip netns add at_ns1
> > ip link add p1 type veth peer name afxdp-p1
> > ip link set p1 netns at_ns1
> > ip link set dev afxdp-p1 up
> > ovs-vsctl add-port br0 afxdp-p1 -- \
> >    set interface afxdp-p1 options:n_rxq=1 type="afxdp"
> > options:xdp-mode=native
> >
> > ip netns exec at_ns1 sh << NS_EXEC_HEREDOC
> > ip addr add "10.1.1.2/24" dev p1
> > ip link set dev p1 up
> > NS_EXEC_HEREDOC
> >
> > ethtool -K afxdp-p0 tx off
> > ethtool -K afxdp-p1 tx off
> > ip netns exec at_ns0 ethtool -K p0 tx off
> > ip netns exec at_ns1 ethtool -K p1 tx off
> >
> > ip netns exec at_ns0 ping -c 10 -i .2 10.1.1.2
> > echo "ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10"
> > ip netns exec at_ns0 iperf -s
> >
> > Thank you
> > William


Re: [ovs-dev] iperf tcp issue on veth using afxdp

2019-12-20 Thread Yifeng Sun
This seems to be related to netdev-afxdp's batch size being bigger than
the kernel's XDP batch size. I created a patch to fix it.

https://patchwork.ozlabs.org/patch/1214397/

Could anyone take a look at this patch?
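
If anyone wants to try it out locally, something like the following should
work (untested sketch on my side; the /mbox/ URL form and the directory
layout are assumptions, so adjust to however you normally apply patchwork
patches):

  # grab the patch as an mbox and apply it to a local OVS source tree
  wget -O afxdp-batch.patch https://patchwork.ozlabs.org/patch/1214397/mbox/
  cd ovs && git am ../afxdp-batch.patch
  make -j"$(nproc)" && sudo make install
  # then restart ovs-vswitchd to pick up the new netdev-afxdp code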

Thanks,
Yifeng

On Fri, Nov 22, 2019 at 9:52 AM William Tu  wrote:
>
> Hi Ilya and Eelco,
>
> Yiyang reports very poor TCP performance on his setup, and I can also
> reproduce it on my machine. I suspect this might be a kernel issue, but
> I don't know where to start debugging. I need your suggestions on how
> to debug it.
>
> The setup is like the system-traffic tests: create two namespaces with
> veth devices and attach them to OVS. I do remember to turn off tx
> offload, and ping, UDP, and nc (TCP mode) all work fine.
>
> TCP throughput with iperf drops to 0 Mbps after 4 seconds.
> At server side:
> root@osboxes:~/ovs# ip netns exec at_ns0 iperf -s
> 
> Server listening on TCP port 5001
> TCP window size:  128 KByte (default)
> 
> [  4] local 10.1.1.1 port 5001 connected with 10.1.1.2 port 40384
> Waiting for server threads to complete. Interrupt again to force quit.
>
> At client side
> root@osboxes:~/bpf-next# ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10
> 
> Client connecting to 10.1.1.1, TCP port 5001
> TCP window size: 85.0 KByte (default)
> 
> [  3] local 10.1.1.2 port 40384 connected with 10.1.1.1 port 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0- 1.0 sec  17.0 MBytes   143 Mbits/sec
> [  3]  1.0- 2.0 sec  9.62 MBytes  80.7 Mbits/sec
> [  3]  2.0- 3.0 sec  6.75 MBytes  56.6 Mbits/sec
> [  3]  3.0- 4.0 sec  11.0 MBytes  92.3 Mbits/sec
> [  3]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
> [  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
> [  3]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
> [  3]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
> [  3]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
> [  3] 10.0-11.0 sec  0.00 Bytes  0.00 bits/sec
>
> (after this, even ping stops working)
>
> Script to reproduce
> -
> ovs-vsctl -- add-br br0 -- set Bridge br0 datapath_type=netdev
>
> ip netns add at_ns0
> ip link add p0 type veth peer name afxdp-p0
> ip link set p0 netns at_ns0
> ip link set dev afxdp-p0 up
> ovs-vsctl add-port br0 afxdp-p0
>
> ovs-vsctl -- set interface afxdp-p0 options:n_rxq=1 type="afxdp"
> options:xdp-mode=native
> ip netns exec at_ns0 sh << NS_EXEC_HEREDOC
> ip addr add "10.1.1.1/24" dev p0
> ip link set dev p0 up
> NS_EXEC_HEREDOC
>
> ip netns add at_ns1
> ip link add p1 type veth peer name afxdp-p1
> ip link set p1 netns at_ns1
> ip link set dev afxdp-p1 up
> ovs-vsctl add-port br0 afxdp-p1 -- \
>set interface afxdp-p1 options:n_rxq=1 type="afxdp"
> options:xdp-mode=native
>
> ip netns exec at_ns1 sh << NS_EXEC_HEREDOC
> ip addr add "10.1.1.2/24" dev p1
> ip link set dev p1 up
> NS_EXEC_HEREDOC
>
> ethtool -K afxdp-p0 tx off
> ethtool -K afxdp-p1 tx off
> ip netns exec at_ns0 ethtool -K p0 tx off
> ip netns exec at_ns1 ethtool -K p1 tx off
>
> ip netns exec at_ns0 ping  -c 10 -i .2 10.1.1.2
> echo "ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10"
> ip netns exec at_ns0 iperf -s
>
> Thank you
> William


Re: [ovs-dev] iperf tcp issue on veth using afxdp

2019-11-26 Thread William Tu
On Fri, Nov 22, 2019 at 10:50 AM Ilya Maximets  wrote:
>
> On 22.11.2019 18:51, William Tu wrote:
> > Hi Ilya and Eelco,
> >
> > Yiyang reports very poor TCP performance on his setup, and I can also
> > reproduce it on my machine. I suspect this might be a kernel issue, but
> > I don't know where to start debugging. I need your suggestions on how
> > to debug it.
> >
> > The setup is like the system-traffic tests: create two namespaces with
> > veth devices and attach them to OVS. I do remember to turn off tx
> > offload, and ping, UDP, and nc (TCP mode) all work fine.
> >
> > TCP throughput with iperf drops to 0 Mbps after 4 seconds.
>

Hi Ilya and Eelco,
Thank you for your suggestion.

> The key questions are:
> * What is your kernel version?
5.4.0-rc5+

> * Does it work in generic mode?
TCP does not work (not a single packet gets through, and that is expected)
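
For reference, I switch modes with the interface option below ("generic" as
the value is my assumption for this build; port names are the ones from the
script in the original mail):

  ovs-vsctl set interface afxdp-p0 options:xdp-mode=generic
  ovs-vsctl set interface afxdp-p1 options:xdp-mode=generic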

> * Does it work if iperf generates UDP traffic?
>   ex. iperf3 -c 10.1.1.1 -t 3600 -u -b 10G/64 -l 1460

Yes, works fine. (I did -t 360)
[  5]   0.00-360.01 sec   128 GBytes  3.05 Gbits/sec  0.002 ms  4203436/93962933 (4.5%)

coverage/show looks OK
afxdp_cq_empty   0.0/sec    0.000/sec    0.3953/sec   total: 1423
afxdp_tx_full    0.0/sec    0.000/sec    0.3947/sec   total: 1421

> * It seems like a ring breakage or a umem memory leak.
>   So, what are the afxdp related coverage counters?

Running TCP iperf3
 ip netns exec at_ns0 iperf3 -c 10.1.1.2 -t 360  -i 1
[ ID] Interval   Transfer Bandwidth
[  5]   0.00-1.00   sec  20.4 MBytes   171 Mbits/sec
[  5]   1.00-2.00   sec  12.1 MBytes   101 Mbits/sec
[  5]   2.00-3.00   sec  8.59 MBytes  72.1 Mbits/sec
[  5]   3.00-4.00   sec  11.6 MBytes  97.1 Mbits/sec
[  5]   4.00-5.00   sec  2.21 MBytes  18.5 Mbits/sec
[  5]   5.00-6.00   sec  10.1 MBytes  85.1 Mbits/sec
[  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
[  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
 ...all zero after this.

coverage/show also looks OK:
afxdp_cq_empty   0.0/sec    0.033/sec    0.0006/sec   total: 2
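
For reference, those counters come from the usual appctl command, and the
per-port stats can be cross-checked from the database (port names as in the
reproduction script):

  ovs-appctl coverage/show | grep afxdp
  ovs-vsctl get interface afxdp-p0 statistics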

> * Does OVS forward packets, i.e. is there something received/sent?
>
No packets arrive at OVS, and no packets are seen on the veth devices
attached to OVS.

Based on this debugging:
1) UDP works and TCP doesn't, and
2) the coverage counters look correct,
I think this is not related to umem ring management in OVS.
I should look at why the veth driver's XDP path doesn't rx/tx packets.
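
As a first step I'll probably poke at the XDP side of the veths with
something like the commands below (assuming bpftool is installed; exact
output and counter names differ per kernel):

  ip -details link show afxdp-p0   # should show the attached xdp prog id in native mode
  bpftool net show                 # lists xdp attachments per device
  ethtool -S afxdp-p0              # veth xdp/queue counters, if this kernel exposes them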

William


Re: [ovs-dev] iperf tcp issue on veth using afxdp

2019-11-25 Thread Eelco Chaudron
On 22 Nov 2019, at 19:50, Ilya Maximets wrote:

> On 22.11.2019 18:51, William Tu wrote:
>
> > Hi Ilya and Eelco,
> >
> > Yiyang reports very poor TCP performance on his setup, and I can also
> > reproduce it on my machine. I suspect this might be a kernel issue, but
> > I don't know where to start debugging. I need your suggestions on how
> > to debug it.
> >
> > The setup is like the system-traffic tests: create two namespaces with
> > veth devices and attach them to OVS. I do remember to turn off tx
> > offload, and ping, UDP, and nc (TCP mode) all work fine.
> >
> > TCP throughput with iperf drops to 0 Mbps after 4 seconds.
>
> The key questions are:
> * What is your kernel version?
> * Does it work in generic mode?
> * Does it work if iperf generates UDP traffic?
>   ex. iperf3 -c 10.1.1.1 -t 3600 -u -b 10G/64 -l 1460
> * It seems like a ring breakage or a umem memory leak.
>   So, what are the afxdp related coverage counters?
> * Does OVS forward packets, i.e. is there something received/sent?
>
> Best regards, Ilya Maximets.


This looks very much like the issues we had earlier with tap interfaces;
that issue was related to OVS not managing the umem ring correctly (and
hence running out of buffers). I guess the coverage counters suggested
above should give you a good starting point for debugging.


//Eelco



Re: [ovs-dev] iperf tcp issue on veth using afxdp

2019-11-22 Thread Ilya Maximets
On 22.11.2019 18:51, William Tu wrote:
> Hi Ilya and Eelco,
> 
> Yiyang reports very poor TCP performance on his setup, and I can also
> reproduce it on my machine. I suspect this might be a kernel issue, but
> I don't know where to start debugging. I need your suggestions on how
> to debug it.
>
> The setup is like the system-traffic tests: create two namespaces with
> veth devices and attach them to OVS. I do remember to turn off tx
> offload, and ping, UDP, and nc (TCP mode) all work fine.
>
> TCP throughput with iperf drops to 0 Mbps after 4 seconds.

The key questions are:
* What is your kernel version?
* Does it work in generic mode?
* Does it work if iperf generates UDP traffic?
  ex. iperf3 -c 10.1.1.1 -t 3600 -u -b 10G/64 -l 1460
* It seems like a ring breakage or a umem memory leak.
  So, what are the afxdp related coverage counters?
* Does OVS forward packets, i.e. is there something received/sent?

Best regards, Ilya Maximets.


[ovs-dev] iperf tcp issue on veth using afxdp

2019-11-22 Thread William Tu
Hi Ilya and Eelco,

Yiyang reports very poor TCP performance on his setup, and I can also
reproduce it on my machine. I suspect this might be a kernel issue, but
I don't know where to start debugging. I need your suggestions on how
to debug it.

The setup is like the system-traffic tests: create two namespaces with
veth devices and attach them to OVS. I do remember to turn off tx offload,
and ping, UDP, and nc (TCP mode) all work fine.

TCP throughput with iperf drops to 0 Mbps after 4 seconds.
At server side:
root@osboxes:~/ovs# ip netns exec at_ns0 iperf -s

Server listening on TCP port 5001
TCP window size:  128 KByte (default)

[  4] local 10.1.1.1 port 5001 connected with 10.1.1.2 port 40384
Waiting for server threads to complete. Interrupt again to force quit.

At client side
root@osboxes:~/bpf-next# ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10

Client connecting to 10.1.1.1, TCP port 5001
TCP window size: 85.0 KByte (default)

[  3] local 10.1.1.2 port 40384 connected with 10.1.1.1 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0- 1.0 sec  17.0 MBytes   143 Mbits/sec
[  3]  1.0- 2.0 sec  9.62 MBytes  80.7 Mbits/sec
[  3]  2.0- 3.0 sec  6.75 MBytes  56.6 Mbits/sec
[  3]  3.0- 4.0 sec  11.0 MBytes  92.3 Mbits/sec
[  3]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 10.0-11.0 sec  0.00 Bytes  0.00 bits/sec

(after this, even ping stops working)

Script to reproduce
-
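# Assumed prerequisites on my side, in case it matters (exact configure
# flags and install prefix may differ on your build): ovs-vswitchd is built
# with AF_XDP support and already running before the script below, roughly:
#   ./configure --enable-afxdp && make && sudo make install
#   /usr/local/share/openvswitch/scripts/ovs-ctl start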
ovs-vsctl -- add-br br0 -- set Bridge br0 datapath_type=netdev

ip netns add at_ns0
ip link add p0 type veth peer name afxdp-p0
ip link set p0 netns at_ns0
ip link set dev afxdp-p0 up
ovs-vsctl add-port br0 afxdp-p0

ovs-vsctl -- set interface afxdp-p0 options:n_rxq=1 type="afxdp"
options:xdp-mode=native
ip netns exec at_ns0 sh << NS_EXEC_HEREDOC
ip addr add "10.1.1.1/24" dev p0
ip link set dev p0 up
NS_EXEC_HEREDOC

ip netns add at_ns1
ip link add p1 type veth peer name afxdp-p1
ip link set p1 netns at_ns1
ip link set dev afxdp-p1 up
ovs-vsctl add-port br0 afxdp-p1 -- \
   set interface afxdp-p1 options:n_rxq=1 type="afxdp"
options:xdp-mode=native

ip netns exec at_ns1 sh << NS_EXEC_HEREDOC
ip addr add "10.1.1.2/24" dev p1
ip link set dev p1 up
NS_EXEC_HEREDOC

ethtool -K afxdp-p0 tx off
ethtool -K afxdp-p1 tx off
ip netns exec at_ns0 ethtool -K p0 tx off
ip netns exec at_ns1 ethtool -K p1 tx off

ip netns exec at_ns0 ping  -c 10 -i .2 10.1.1.2
echo "ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10"
ip netns exec at_ns0 iperf -s

Thank you
William