On 23/07/2025 21.07, Dipayaan Roy wrote:
> This patch enhances RX buffer handling in the mana driver by allocating
> pages from a page pool and slicing them into MTU-sized fragments, rather
> than dedicating a full page per packet. This approach is especially
> beneficial on systems with large page sizes such as 64KB.
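
For readers less familiar with the frag API: the slicing described above
typically looks something like the sketch below. Note this is my own
illustration with made-up names (rx_frag_alloc, buf_size), not the actual
mana patch code:

#include <net/page_pool/helpers.h>

/* Sketch only: carve one MTU-sized RX buffer out of a shared
 * page_pool page via the fragment API.
 */
static void *rx_frag_alloc(struct page_pool *pool, unsigned int buf_size,
			   dma_addr_t *dma)
{
	unsigned int offset;
	struct page *page;

	/* The pool hands out a buf_size slice of its current page and
	 * only moves on to a fresh page once that one is used up, so
	 * many RX buffers can share a single (e.g. 64KB) page.
	 */
	page = page_pool_dev_alloc_frag(pool, &offset, buf_size);
	if (!page)
		return NULL;

	*dma = page_pool_get_dma_addr(page) + offset;
	return page_address(page) + offset;
}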

> Key improvements:
>
> - Proper integration of the page pool for RX buffer allocation.
> - MTU-sized buffer slicing to improve memory utilization.
> - Reduced overall per-RX-queue memory footprint.
> - Automatic fallback to full-page buffers (see the sketch below) when:
>     * Jumbo frames are enabled (MTU > PAGE_SIZE / 2).
>     * The XDP path is active, to avoid complexities with fragment reuse.
> - Removal of the redundant pre-allocated RX buffers used in scenarios
>   such as MTU changes, ensuring consistent RX buffer allocation.
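
If I read the fallback rules above correctly, the buffer-mode decision
boils down to something like this (again a sketch with hypothetical
names, not the patch's actual code):

static bool rx_use_page_frags(unsigned int mtu, bool xdp_active)
{
	/* Jumbo frames: two MTU-sized slices no longer fit in one
	 * page, so use a full page per RX buffer instead.
	 */
	if (mtu > PAGE_SIZE / 2)
		return false;

	/* XDP loaded: full pages avoid fragment-reuse complexity. */
	if (xdp_active)
		return false;

	return true;
}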

> Testing on VMs with 64KB pages shows around a 200% throughput improvement.
> Memory efficiency is significantly improved due to reduced wastage in page
> allocations. For example, with an MTU of 1500 we can now fit 35 RX buffers
> in a single 64KB page (65536 / 35 ≈ 1872 bytes per slice, leaving room for
> headroom and alignment on top of the 1500-byte MTU), instead of one RX
> buffer per page previously.

> Tested:
>
> - iperf3, iperf2, and nttcp benchmarks.
> - Jumbo frames with MTU 9000.
> - Native XDP programs (XDP_PASS, XDP_DROP, XDP_TX, XDP_REDIRECT) to
>   exercise the XDP path in the driver.
> - Page leak detection (kmemleak).
> - Driver load/unload, reboot, and stress scenarios.

Chris (Cc'ed) discovered a crash/bug[1] with page pool fragments used by
the mlx5 driver. He put together a BPF program that reproduces the issue:
- [2] https://github.com/arges/xdp-redirector

Can I ask you to test your driver against this reproducer?
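
For context, an XDP_REDIRECT program has roughly the shape below. This is
a generic sketch (hypothetical map/program names), not Chris's actual
reproducer from [2]:

// SPDX-License-Identifier: GPL-2.0
/* Generic sketch: redirect every packet to the ifindex stored in
 * slot 0 of a devmap, falling back to XDP_PASS if unset.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_prog(struct xdp_md *ctx)
{
	__u32 key = 0;

	/* The lower bits of the flags argument select the fallback
	 * action taken when the map lookup fails.
	 */
	return bpf_redirect_map(&tx_port, key, XDP_PASS);
}

char _license[] SEC("license") = "GPL";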


[1] https://lore.kernel.org/all/aIEuZy6fUj_4wtQ6@861G6M3/

--Jesper

