Hi,
We are using DPDK 25.11 with an Intel E810 (firmware 4.90, comms DDP
1.3.55) configured as 8x10G.
We generate traffic and send it to one of the server's NICs, which is
configured to forward it back to the generator via l2fwd:
./build/l2fwd -l 14,16 -a 0000:4b:00.4 -a 0000:4b:00.5 -- -p 0x3 -P
Everything works fine if we leave all defaults.
Problems arise when we reduce the number of mbufs. l2fwd has a nice
formula to compute it:
nb_mbufs = RTE_MAX(nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST +
                   nb_lcores * MEMPOOL_CACHE_SIZE), 8192U);
(in our example, this amounts to 8192)
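For reference, this is how l2fwd uses that value to create its single mbuf
pool (a minimal sketch, cut down from examples/l2fwd/main.c):

#include <stdlib.h>

#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MEMPOOL_CACHE_SIZE 256  /* same value as in the l2fwd example */

/* Create the single pktmbuf pool shared by all ports/queues; nb_mbufs is
 * the count computed by the formula above (8192 in our case). */
static struct rte_mempool *
create_pktmbuf_pool(unsigned int nb_mbufs)
{
        struct rte_mempool *pool;

        pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs,
                        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                        rte_socket_id());
        if (pool == NULL)
                rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
        return pool;
}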
If we then divide this by 4:
nb_mbufs /= 4;
recompile, and run it again, only about 2k packets get forwarded and then
everything blocks: in the l2fwd statistics output, the "Total packets
sent" counter stays at 2032.
Using dpdk-telemetry we see that the "rx_mbuf_allocation_errors" counter
grows indefinitely.
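For completeness, the same counter can also be read from inside the
application through the ethdev xstats API (a minimal sketch, not something
l2fwd does itself):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Print the rx_mbuf_allocation_errors xstat for one port. */
static int
print_mbuf_alloc_errors(uint16_t port_id)
{
        uint64_t id, value;

        if (rte_eth_xstats_get_id_by_name(port_id,
                        "rx_mbuf_allocation_errors", &id) != 0)
                return -1;
        if (rte_eth_xstats_get_by_id(port_id, &id, &value, 1) != 1)
                return -1;
        printf("port %u rx_mbuf_allocation_errors: %" PRIu64 "\n",
                        port_id, value);
        return 0;
}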
Our understanding is that reducing the number of mbufs could degrade
performance but should not block forwarding, since mbufs would eventually
be freed back to the pool by the driver once the packets are transmitted.
This is in fact the behavior we see with the mlx5 driver, for example.
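To make the question more concrete, this is the kind of knob we wondered
about: an explicit tx_free_thresh at TX queue setup, or an explicit
rte_eth_tx_done_cleanup() call, so that transmitted mbufs are returned to
the pool earlier. A minimal sketch using the generic ethdev API follows;
we do not know whether this is actually relevant for the ice PMD, and the
values are only illustrative:

#include <stdint.h>

#include <rte_ethdev.h>

/* Set up a TX queue with an explicit (low) tx_free_thresh so that
 * completed mbufs are freed back to the pool more often. */
static int
setup_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        txconf = dev_info.default_txconf;
        txconf.tx_free_thresh = 32;  /* illustrative value only */

        return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                        rte_eth_dev_socket_id(port_id), &txconf);
}

/* Alternatively, explicitly ask the driver to free completed TX mbufs
 * (not supported by every PMD). */
static inline void
reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
{
        rte_eth_tx_done_cleanup(port_id, queue_id, 0 /* free all done */);
}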
Are we missing some specific configuration, or is this unexpected behavior?
Thanks