On 8/14/2025 11:32 PM, Jake Freeland wrote:
> Use rte_fbarray_is_used() to check if the previous fbarray entry is
> already in use.
>
> Using prev_ms_idx to do this is flawed in cases where we loop through
> multiple memseg lists. Each memseg list has its own count and length,
> so using a prev_ms_idx from one memseg list to check for used entries
> in another non-empty memseg list can lead to incorrect hole placement.
>
> Signed-off-by: Jake Freeland <[email protected]>
> ---
>  lib/eal/freebsd/eal_memory.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
> index 6d3d46a390..be3bde2cb9 100644
> --- a/lib/eal/freebsd/eal_memory.c
> +++ b/lib/eal/freebsd/eal_memory.c
> @@ -103,7 +103,6 @@ rte_eal_hugepage_init(void)
>  	for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
>  		struct hugepage_info *hpi;
>  		rte_iova_t prev_end = 0;
> -		int prev_ms_idx = -1;
>  		uint64_t page_sz, mem_needed;
>  		unsigned int n_pages, max_pages;
>
> @@ -167,9 +166,9 @@ rte_eal_hugepage_init(void)
>  			if (ms_idx < 0)
>  				continue;
>
> -			if (need_hole && prev_ms_idx == ms_idx - 1)
> +			if (need_hole &&
> +					rte_fbarray_is_used(arr, ms_idx - 1))
>  				ms_idx++;
> -			prev_ms_idx = ms_idx;
This is not a bug in practice, since the surrounding logic won't allow it to
happen, but some static analysis tools might flag this:

Earlier we only check for ms_idx < 0, so with ms_idx == 0 we would pass
(ms_idx - 1), i.e. -1, to rte_fbarray_is_used(). This won't actually happen
because ms_idx will never be 0 when `need_hole` is true, but *technically* it
is not impossible, so it should probably be addressed somehow to avoid false
positives from static analysis.
--
Thanks,
Anatoly