Hi Filip,

I have not seen the same issues.
Are you aware of this tuning guide? I applied it and had no issues with an
Intel 100G NIC.

HPC Tuning Guide for AMD EPYC Processors
http://developer.amd.com/wp-content/resources/56420.pdf
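
Besides the BIOS/OS settings in that guide, the first thing I'd rule out on
EPYC is NIC-to-NUMA locality. A quick sketch of the checks I'd run (the
interface name below is a placeholder, not from your setup):

```shell
# IFACE is a placeholder - replace with your capture interface name.
IFACE=${IFACE:-lo}

# NUMA node the NIC's PCIe slot hangs off (-1 or missing = unknown).
# On EPYC, lcores on the wrong node cross the Infinity Fabric per packet.
cat /sys/class/net/"$IFACE"/device/numa_node 2>/dev/null \
  || echo "no NUMA info for $IFACE"

# Nodes and their CPU lists, so DPDK lcores can be picked from the NIC's node.
numactl --hardware 2>/dev/null || true

# Physical-level drops (packets discarded before reaching host memory).
ethtool -S "$IFACE" 2>/dev/null | grep -i discard || true
```

If the NIC reports a different node than the one your lcores run on, passing
lcores from the NIC's node (EAL `-l` list) and reserving hugepages on that
node (`--socket-mem`) is usually the first fix to try.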

Hope it helps.

Cheers,
Steffen Weise


> On 11.09.2021 at 10:56, Filip Janiszewski 
> <cont...@filipjaniszewski.com> wrote:
> 
> I ran more tests,
> 
> This AMD server is a bit confusing: I can tune it to capture 28 Mpps
> (64-byte frames) from a single core, so I would expect that adding one
> more core would increase the capture rate at least somewhat, but it does
> not - I get about 1% more speed, and it drops packets regardless of how
> many queues are configured. I have not observed this on the Intel server,
> where adding more queues/cores scales to higher throughput.
> 
> This issue has now been verified with both Mellanox and Intel (810
> series, 100GbE) NICs.
> 
> Has anybody encountered anything similar?
> 
> Thanks
> 
> On 9/10/21 3:34 PM, Filip Janiszewski wrote:
>> Hi,
>> 
>> I've switched a 100GbE MLX ConnectX-4 card from an Intel Xeon server to
>> an AMD EPYC server (75F3 CPU, 256 GiB of RAM, PCIe 4 lanes), and with
>> the same capture software we can't get any faster than 10 Gbps. When we
>> exceed that speed, regardless of the number of queues configured, the
>> rx_discards_phy counter starts to rise and packets are lost in huge
>> numbers.
>> 
>> On the Xeon machine, I was able to get easily to 50Gbps with 4 queues.
>> 
>> Is there any specific DPDK configuration that we might want to set up
>> for these AMD servers? The software is DPDK based, so I wonder if some
>> build option is missing somewhere.
>> 
>> What else might I look at to investigate this issue?
>> 
>> Thanks
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823
