On Fri, Mar 06, 2026 at 07:35:01PM +0000, Catalin Marinas wrote:

[...snip...]

> I wonder whether some early kmem_cache_node allocations like the ones in
> early_kmem_cache_node_alloc() are not tracked and then kmemleak cannot
> find n->barn. I got lost in the slub code, but something like this:

This sounds plausible. Before sheaves, kmem_cache_node just maintained
a list of slabs. Because struct page (and struct slab, which overlays it)
is not tracked by kmemleak (as Vlastimil pointed out off-list),
not calling kmemleak_alloc() for kmem_cache_node objects was not a problem.

But now it also maintains barns and sheaves,
and those are tracked by kmemleak...

> -----------8<-----------------------------------
> diff --git a/mm/slub.c b/mm/slub.c
> index 0c906fefc31b..401557ff5487 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -7513,6 +7513,7 @@ static void early_kmem_cache_node_alloc(int node)
>       slab->freelist = get_freepointer(kmem_cache_node, n);
>       slab->inuse = 1;
>       kmem_cache_node->node[node] = n;
> +     kmemleak_alloc(n, sizeof(*n), 1, GFP_NOWAIT);
>       init_kmem_cache_node(n, NULL);
>       inc_slabs_node(kmem_cache_node, node, slab->objects);

But this function is called for the kmem_cache_node cache
(in kmem_cache_init()), which runs even before kmemleak_init()?

kmem_cache and the kmalloc caches should call kmemleak_alloc() when
allocating kmem_cache_node structures, but since they too are created
before kmemleak_init(), I doubt that's actually doing its job...

I think we should probably introduce a slab function for kmemleak_init()
to call, one that iterates over all slab caches and calls kmemleak_alloc()
for their kmem_cache_node structures?
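Something like the following sketch is what I have in mind (the function
name is invented; for_each_kmem_cache_node(), slab_mutex and slab_caches
are the existing slub internals):

```c
/*
 * Sketch only: a hypothetical hook for kmemleak_init() to register
 * the kmem_cache_node objects of caches created before kmemleak was
 * ready. The function name is made up.
 */
void __init kmem_cache_kmemleak_init(void)
{
	struct kmem_cache *s;

	mutex_lock(&slab_mutex);
	list_for_each_entry(s, &slab_caches, list) {
		struct kmem_cache_node *n;
		int node;

		/* min_count = 1, matching early_kmem_cache_node_alloc() */
		for_each_kmem_cache_node(s, node, n)
			kmemleak_alloc(n, sizeof(*n), 1, GFP_KERNEL);
	}
	mutex_unlock(&slab_mutex);
}
```

(kmemleak_alloc() on an already-tracked pointer should be harmless at
worst, but that would need checking.)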

> -------------8<----------------------------------------
> 
> Another thing I noticed, not sure it's related but we should probably
> ignore an object once it has been passed to kvfree_call_rcu(), similar
> to what we do on the main path in this function. Also see commit
> 5f98fd034ca6 ("rcu: kmemleak: Ignore kmemleak false positives when
> RCU-freeing objects") when we added this kmemleak_ignore().
> 
> ---------8<-----------------------------------
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index d5a70a831a2a..73f4668d870d 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1954,8 +1954,14 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
>       if (!head)
>               might_sleep();
>  
> -     if (!IS_ENABLED(CONFIG_PREEMPT_RT) && kfree_rcu_sheaf(ptr))
> +     if (!IS_ENABLED(CONFIG_PREEMPT_RT) && kfree_rcu_sheaf(ptr)) {
> +             /*
> +              * The object is now queued for deferred freeing via an RCU
> +              * sheaf. Tell kmemleak to ignore it.
> +              */
> +             kmemleak_ignore(ptr);

As Vlastimil pointed out off-list, we need to let kmemleak ignore
sheaves when they are submitted to call_rcu(), and ideally undo the
kmemleak_ignore() in __kfree_rcu_sheaf() when they are about to be reused.

But looking at mm/kmemleak.c, undoing kmemleak_ignore() doesn't seem to
be a thing.
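For the first half, I imagine something along these lines at the point
where the sheaf is handed to call_rcu() (a sketch with assumed names;
I haven't checked the actual field and callback names in the sheaf
submission path):

```c
/*
 * Sketch only: assumes a struct slab_sheaf with an embedded rcu_head
 * and an rcu_free_sheaf() callback; the real names may differ.
 */
static void submit_sheaf_rcu(struct slab_sheaf *sheaf)
{
	/*
	 * Until the RCU callback runs, the sheaf is only reachable
	 * through RCU internals, which kmemleak does not scan, so it
	 * would be reported as a leak. Tell kmemleak to ignore it.
	 */
	kmemleak_ignore(sheaf);
	call_rcu(&sheaf->rcu_head, rcu_free_sheaf);
}
```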

We could probably send it as a hotfix and fix potential false negatives
later?

I thought this was the more plausible theory and asked syzbot to test
it [1], but it still complains :)

[1] https://lore.kernel.org/linux-mm/aa6lBQDAVnqjz_lk@hyeyoo

>               return;
> +     }
>  
>       // Queue the object but don't yet schedule the batch.
>       if (debug_rcu_head_queue(ptr)) {

-- 
Cheers,
Harry / Hyeonggon

