Hi Sachin,

By link capacity do you mean the bandwidth of the NIC port? If so, the link 
capacity is 100Gb/s.


Cheers,

Lei

________________________________
From: sachin gupta <[email protected]>
Sent: Monday, March 16, 2020 8:44:53 AM
To: Yan Lei; [email protected]
Subject: Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only 
generate 53Gb/s with 64B packets

Cool Yan
Thanks for letting me know as well. Can you also let me know the link 
capacity?

Sachin


Sent from Yahoo Mail for iPhone


On Sunday, March 15, 2020, 12:07 AM, Yan Lei <[email protected]> wrote:

Hi Sachin,

Thanks a lot for the answer. The issue is resolved: I was able to get 98Gb/s 
with 64B packets after setting the PCIe MaxReadReq to 1024 and turning off NIC 
flow control. These optimization settings are actually documented in the mlx5 
PMD guide; my bad for having ignored them...
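For reference, the two tweaks above can be sketched roughly as below. This is a sketch, not taken from the thread: it assumes the NIC sits at PCI address 02:00.0 (as in the testpmd invocation later in the thread) and uses a hypothetical netdev name enp2s0f0 -- substitute your own.

```shell
# 1. Raise the PCIe Max Read Request size to 1024B.
#    Offset 0x68 is the PCIe Device Control register; its leading hex digit
#    encodes the max read request size (3 => 1024B).
setpci -s 02:00.0 68.w            # read the current value, e.g. 2936
setpci -s 02:00.0 68.w=3936       # keep the low digits, set the first to 3
#                  ^^^^ example value only -- derive it from your own read

# 2. Turn off Ethernet flow control on the port.
#    enp2s0f0 is a placeholder interface name.
ethtool -A enp2s0f0 rx off tx off
```

Both changes are lost on reboot/driver reload, so they are typically reapplied from a startup script.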

Cheers,
Lei
________________________________
From: sachin gupta <[email protected]>
Sent: Thursday, March 12, 2020 7:45:31 AM
To: [email protected]; Yan Lei
Subject: Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only 
generate 53Gb/s with 64B packets

Hi Lei,

The smaller the packet size, the higher the number of packets per second. I 
believe this is an inherent limitation in all systems, even ones with 
proprietary hardware.
In general, applications that use such small packets are rare, and you will see 
a mix of traffic in the system.
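Sachin's point can be quantified: at a fixed line rate, the per-packet work grows as packets shrink. A quick back-of-the-envelope sketch (not from the thread), assuming standard Ethernet L1 overhead of 20B per frame (8B preamble/SFD + 12B inter-frame gap):

```shell
# Packets per second needed to saturate a 100Gb/s link at two frame sizes.
awk 'BEGIN {
  split("64 256", sizes, " ")
  for (i = 1; i <= 2; i++) {
    size = sizes[i]
    wire_bits = (size + 20) * 8   # frame + preamble/SFD + inter-frame gap
    printf "%3dB frames: %.1f Mpps at 100Gb/s\n", size, 100e9 / wire_bits / 1e6
  }
}'
# 64B frames: 148.8 Mpps at 100Gb/s
# 256B frames: 45.3 Mpps at 100Gb/s
```

So 64B frames need roughly 3.3x the packet rate of 256B frames, which is why per-packet costs (PCIe transactions, doorbells, descriptor handling) dominate at the smallest sizes.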

Regards
Sachin

On Thursday, March 12, 2020, 5:10:14 AM GMT+5:30, Yan Lei <[email protected]> wrote:


Hi, I am currently struggling to get more than 53Gb/s with 64B packets on 
both the MCX515A-CCAT and MCX516A-CCAT adapters when running a DPDK app that 
generates and transmits packets. With 256B packets I can get 98Gb/s.

Has anyone seen the same performance on these NICs? I checked the performance 
reports on https://core.dpdk.org/perf-reports/ but there are no numbers for 
these NICs.



Is this an inherent limitation of these NICs (only reaching 100Gb/s with larger 
packets)? If not, which firmware/driver/DPDK/system configurations could I tune 
to get 100Gb/s with 64B packets? My setup is as follows:

- CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
- NIC: Mellanox MCX515A-CCAT / MCX516A-CCAT (using only one port for TX, 
  installed on PCIe Gen3 x16)
- DPDK: 19.05
- RDMA-CORE: v28.0
- Kernel: 5.3.0
- OS: Ubuntu 18.04
- Firmware: 16.26.1040

I measured the TX rate with DPDK's testpmd:

$ ./testpmd -l 3-13 -n 4 -w 02:00.0 -- -i --port-topology=chained --nb-ports=1 
--rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 
--forward-mode=txonly

So that is 10 cores generating and transmitting 64B packets on 10 NIC queues. 
Your feedback will be much appreciated.

Thanks, Lei
