From: Dmitry Kozlyuk
> Sent: Thursday, June 30, 2022 1:08 AM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasl...@nvidia.com>; sta...@dpdk.org; Matan
> Azrad <ma...@nvidia.com>; Slava Ovsiienko <viachesl...@nvidia.com>
> Subject: [PATCH v2] common/mlx5: fix non-expandable global MR cache
> 
> The number of memory regions (MR) that the MLX5 PMD can use was limited
> to 512 per IB device, the size of the global MR cache, which was fixed
> at compile time.
> The cache allows searching for an MR LKey by address efficiently, so it
> is the last place searched on the data path (only the global MR database,
> which would be slow, is skipped).
> If the application logic caused the PMD to create more than 512 MRs,
> which can be the case with external memory, those MRs would never be
> found on the data path and would later cause an HW failure.
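
For readers less familiar with the MR cache, here is a minimal sketch of
what an address-to-LKey lookup over a sorted cache can look like; the entry
layout and function names are illustrative only, not the actual mlx5 code:

#include <stdint.h>

/* Illustrative cache entry: one registered memory region (MR). */
struct mr_cache_entry {
	uintptr_t start; /* first byte covered by the MR */
	uintptr_t end;   /* one past the last covered byte */
	uint32_t lkey;   /* key the HW needs for the request */
};

/* Binary search over entries sorted by start address;
 * returns UINT32_MAX on a miss. */
static uint32_t
mr_cache_lookup(const struct mr_cache_entry *tbl, uint32_t n, uintptr_t addr)
{
	uint32_t lo = 0, hi = n;

	while (lo < hi) {
		uint32_t mid = lo + (hi - lo) / 2;

		if (addr < tbl[mid].start)
			hi = mid;
		else if (addr >= tbl[mid].end)
			lo = mid + 1;
		else
			return tbl[mid].lkey;
	}
	return UINT32_MAX; /* miss */
}

Because the global MR database is skipped on the data path, an MR that
never makes it into this cache is effectively invisible there, which is
what led to the HW failure described above.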
> 
> The cache size was fixed because at the time of overflow the EAL memory
> hotplug lock may be held, preventing allocation of a larger cache (it
> must reside in DPDK memory for multi-process support).
> This patch adds logic to release the necessary locks, extend the cache, and
> repeat the attempt to insert new entries.
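
A rough, self-contained sketch of that expand-and-retry pattern follows
(simplified types and hypothetical helpers, not the PMD code; the real
implementation additionally drops and re-takes the EAL memory hotplug lock
around the allocation and keeps the table in DPDK shared memory):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative flat table standing in for the MR cache. */
struct mr_table {
	uint32_t len;    /* entries in use */
	uint32_t size;   /* allocated capacity */
	uint32_t *lkeys; /* stand-in for the real entry array */
};

/* Grow the table; in the PMD this happens only after releasing the
 * locks that forbid allocation. */
static int
mr_table_expand(struct mr_table *t, uint32_t new_size)
{
	uint32_t *p = realloc(t->lkeys, new_size * sizeof(*p));

	if (p == NULL)
		return -ENOMEM;
	t->lkeys = p;
	t->size = new_size;
	return 0;
}

/* Insert one entry, expanding instead of failing when the table is full. */
static int
mr_table_insert(struct mr_table *t, uint32_t lkey)
{
	if (t->len == t->size) {
		/* <-- release locks here, expand, re-take locks, retry */
		int ret = mr_table_expand(t, t->size ? t->size * 2 : 16);

		if (ret != 0)
			return ret;
	}
	t->lkeys[t->len++] = lkey;
	return 0;
}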
> 
> The `mlx5_mr_btree` structure had an `overflow` field that was set when
> a cache (not only the global one) could not accept new entries.
> However, it was only checked for the global cache, because the caches of
> the upper layers were dynamically expandable.
> With the global cache size limitation removed, this field is no longer
> needed.
> The cache size was previously limited by 16-bit indices.
> Use the space in the structure previously filled by the `overflow` field
> to extend the indices to 32 bits.
> With this patch, it is the HW and RAM that limit the number of MRs.
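
The index widening can be pictured with before/after layouts along the
lines below (approximate; field names follow the commit message, not
necessarily the exact header):

#include <stdint.h>

/* Before: 16-bit indices capped the capacity at 65535 entries and a
 * separate flag recorded that an insertion did not fit. */
struct mr_btree_before {
	uint16_t len;  /* number of entries */
	uint16_t size; /* table capacity */
	int overflow;  /* set when the cache could not accept an entry */
	/* entry table pointer follows */
};

/* After: the overflow flag is gone and its space widens the indices to
 * 32 bits, so capacity is bounded only by HW limits and available RAM. */
struct mr_btree_after {
	uint32_t len;
	uint32_t size;
	/* entry table pointer follows */
};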
> 
> Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Dmitry Kozlyuk <dkozl...@nvidia.com>
Acked-by: Matan Azrad <ma...@nvidia.com>
