Great, thanks for the update!
On 12/09/2021 11:32, Filip Janiszewski wrote:
> Alright, nailed it down to a wrong preferred PCIe device in the BIOS
> configuration; it was not changed after the NIC was moved to another
> PCIe slot.
>
> Now the EPYC is doing really well, easily reaching the 100Gbps rate.
Alright, nailed it down to a wrong preferred PCIe device in the BIOS
configuration; it was not changed after the NIC was moved to another
PCIe slot.

Now the EPYC is doing really well, easily reaching the 100Gbps rate.
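For anyone hitting something similar: once traffic is flowing, the port
stats in testpmd give a quick confirmation, since RX-missed counts the
packets the hardware had to drop:

testpmd> show port stats 0

If RX-missed stays at 0 under full load, the card is keeping up.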
Thanks,
On 9/11/21 4:34 PM, Filip Janiszewski wrote:
> I wanted
I just wanted to add: running the same exact testpmd on the other
machine, I don't get a single miss with the same traffic pattern:
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
--- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 ---
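(For reference, the run is along these lines; the core list, PCI address
and queue counts below are placeholders, not the exact values we use:

dpdk-testpmd -l 0-4 -n 4 -a 0000:41:00.0 -- -i --rxq=4 --txq=4 --nb-cores=4

i.e. interactive mode with one forwarding lcore per queue.)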
Thanks,
I knew that document and we've implemented many of those settings/rules,
but perhaps there's a crucial one I've forgotten? I wonder which one.

Anyway, increasing the number of queues hurts the performance while
sending 250M packets over a 100GbE link to an Intel E810-CQDA2 NIC
mounted on the EPYC server.
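One thing I still want to rule out (just a guess at this point): whether
the generated traffic actually spreads across the queues at all, since a
single flow always hashes to a single RSS queue. If the PMD exposes
per-queue xstats, testpmd shows this directly:

testpmd> show port xstats 0

If only rx_q0_packets grows while the other rx_qN_packets stay flat, the
extra queues are idle and adding more of them can't help.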
Hi Filip,
I have not seen the same issues.

Are you aware of this tuning guide? I applied it and had no issues with
an Intel 100G NIC.
HPC Tuning Guide for AMD EPYC Processors
http://developer.amd.com/wp-content/resources/56420.pdf
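For example, the CPU frequency part of the guide boils down to something
like this (my shorthand, the guide has the full steps):

cpupower frequency-set -g performance
cpupower idle-set -D 0

The first pins the governor to performance; the second keeps the cores
out of deep C-states while testing.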
Hope it helps.
Cheers,
Steffen Weise
> On 11.09.2021 at 10:56, Filip Janiszewski wrote:
I ran more tests.

This AMD server is a bit confusing: I can tune it to capture 28Mpps
(64-byte frames) from one single core, so I would assume that using one
more core would at least increase the capture capability a bit, but it
doesn't; I get 1% more speed, and it drops regardless of how many queues
are configured.
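One classic reason for this on EPYC (an assumption here, not something
confirmed): the extra cores sit on a different NUMA node than the NIC,
and a remote node can flatten the scaling completely. The node is
visible in sysfs (the PCI address below is a placeholder):

cat /sys/bus/pci/devices/0000:41:00.0/numa_node
lscpu | grep NUMA

and then only cores from that node should go into the -l core list.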
Hi,
I've switched a 100GbE MLX ConnectX-4 card from an Intel Xeon server to
an AMD EPYC server (running a 75F3 CPU, 256GiB of RAM and PCIe4 lanes),
and using the same capture software we can't get any faster than 10Gbps;
when exceeding that speed we see drops regardless of the number of
queues configured.
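A ceiling like that often points at a degraded PCIe link: 100GbE line
rate is roughly 12.5 GB/s, so the card needs at least a Gen3 x16 or
Gen4 x8 link to keep up. The negotiated link is easy to check (the PCI
address below is an example):

lspci -s 0000:41:00.0 -vv | grep -E 'LnkCap|LnkSta'

If LnkSta reports fewer lanes or a lower speed than LnkCap, the slot is
running degraded.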