Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-28 Thread Mike Jakubik
Hi, So basically this is my conclusion: if I cpuset iperf on at least the receiving end, I get great performance. Anything outside of that is random. I've tried just about every network tuning knob in FreeBSD as well as what Mellanox recommends in their driver manual; none of these make any ...
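
A minimal sketch of the pinning described above, assuming db-01 is the receiving end and that cores 0-5 sit on the NUMA domain nearest the NIC (the core list and host roles are assumptions, not confirmed in the thread):

[root@db-01 ~]# cpuset -l 0-5 iperf3 -s          # pin the receiving iperf3 server to a fixed core set
[root@db-02 ~]# iperf3 -i 1 -t 30 -c db-01       # client side unchanged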

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-17 Thread Dave Cottlehuber
On Fri, 17 Jun 2022, at 02:38, Mike Jakubik wrote:
> Hi,
>
> I believe you hit the nail on the head! I am now getting consistent
> high speeds, even higher than on Linux! Is this a problem with the
> scheduler? Should someone in that area of expertise be made aware of
> this? More importantly, I ...

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-16 Thread Mike Jakubik
Hi, I believe you hit the nail on the head! I am now getting consistent high speeds, even higher than on Linux! Is this a problem with the scheduler? Should someone in that area of expertise be made aware of this? More importantly, I guess: would this affect real-world performance? These ...

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-16 Thread Alexander V. Chernikov
On 16 Jun 2022, at 21:48, Mike Jakubik wrote:
> After multiple tests and tweaks I believe the issue is not with the HW or NUMA-related (Infinity Fabric should do around 32 GB/s) but rather with the FreeBSD TCP/IP stack. It's like it can't figure itself out properly for the speed that ...

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-16 Thread Mike Jakubik
After multiple tests and tweaks I believe the issue is not with the HW or NUMA-related (Infinity Fabric should do around 32 GB/s) but rather with the FreeBSD TCP/IP stack. It's like it can't figure itself out properly for the speed that the HW can do; I keep getting widely varying results when testing.
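
For context, the stack-level knobs usually tried on a 25 Gb/s link look like the sketch below; the values are illustrative assumptions, not settings confirmed in the thread:

[root@db-02 ~]# sysctl kern.ipc.maxsockbuf=16777216        # raise the socket-buffer ceiling
[root@db-02 ~]# sysctl net.inet.tcp.sendbuf_max=16777216   # let TCP send buffers auto-grow further
[root@db-02 ~]# sysctl net.inet.tcp.recvbuf_max=16777216   # same for receive buffers
[root@db-02 ~]# kldload cc_cubic
[root@db-02 ~]# sysctl net.inet.tcp.cc.algorithm=cubic     # try an alternative congestion-control module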

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Actually, I believe it's the disabling of HW LRO that makes the difference (I disabled it and rx/tx pause previously). With rx/tx pause on and LRO off I get similar results. The throughput is still very sporadic, though.

Connecting to host db-01, port 5201
[  5] local 192.168.10.31 port 59055 ...
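
A sketch of the LRO toggles in question, assuming the mlx5en(4) interface is mce0; the sysctl path and unit number are assumptions and may vary by driver version:

[root@db-01 ~]# sysctl dev.mce.0.conf.hw_lro=0   # turn off LRO in the NIC hardware
[root@db-01 ~]# ifconfig mce0 -lro               # turn off software LRO in the stack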

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Disabling rx/tx pause seems to produce higher peaks.

[root@db-02 ~]# iperf3 -i 1 -t 30 -c db-01
Connecting to host db-01, port 5201
[  5] local 192.168.10.31 port 10146 connected to 192.168.10.30 port 5201
[ ID] Interval       Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00 ...
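
The pause-frame toggles themselves would look roughly like this; the rx_pauseframe/tx_pauseframe node names are an assumption based on mlx5en(4)'s sysctl tree and may differ by driver version:

[root@db-01 ~]# sysctl dev.mce.0.conf.rx_pauseframe=0   # stop honoring received pause frames
[root@db-01 ~]# sysctl dev.mce.0.conf.tx_pauseframe=0   # stop sending pause frames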

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Yes, it is the default of 1500. If I set it to 9000 I get some bizarre network behavior.

On Tue, 14 Jun 2022 09:45:10 -0400 Andrey V. Elsukov wrote:
> Hi,
> Do you have the same MTU size on the Linux machine?

Mike Jakubik
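
For reference, the jumbo-frame change is a one-liner, but it only behaves if every hop agrees; the interface name is assumed:

[root@db-01 ~]# ifconfig mce0 mtu 9000   # both hosts and any switch ports in between must carry the same MTU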

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Andrey V. Elsukov
13.06.2022 21:25, Mike Jakubik wrote:
> Hello, I have two new servers with a Mellanox ConnectX-6 card linked at 25 Gb/s; however, I am unable to get much more than 6 Gb/s when testing with iperf3. The servers are Lenovo SR665 (2 x AMD EPYC 7443 24-Core Processor, 256 GB RAM, Mellanox ConnectX-6 ...

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-13 Thread Hans Petter Selasky
On 6/13/22 20:25, Mike Jakubik wrote:
> Hello, I have two new servers with a Mellanox ConnectX-6 card linked at 25 Gb/s; however, I am unable to get much more than 6 Gb/s when testing with iperf3. The servers are Lenovo SR665 (2 x AMD EPYC 7443 24-Core Processor, 256 GB RAM, Mellanox ConnectX-6 ...

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-13 Thread Mike Jakubik
Hi, No, I do not see any retransmissions in Linux (see the forum URL for screenshots), so I do not think this is a hardware issue. I don't think these cards have flow control on them. I also do not see any errors, drops, or collisions in netstat -i. It's like the network stack doesn't know what ...
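
The counter checks referenced here are the usual ones; the grep pattern is just an illustrative filter:

[root@db-01 ~]# netstat -i                            # per-interface errors, drops, collisions
[root@db-01 ~]# netstat -s -p tcp | grep -i retrans   # TCP retransmit statistics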

Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-13 Thread Mike Jakubik
Hello, I have two new servers with a Mellanox ConnectX-6 card linked at 25 Gb/s; however, I am unable to get much more than 6 Gb/s when testing with iperf3. The servers are Lenovo SR665 (2 x AMD EPYC 7443 24-Core Processor, 256 GB RAM, Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-port OCP Ethernet ...
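
A quick way to separate single-flow limits from link limits, assuming the same hosts as above:

[root@db-02 ~]# iperf3 -c db-01 -t 30 -i 1 -P 4   # -P 4 runs four parallel streams; if the aggregate reaches line rate, the bottleneck is per-flow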