On Wed, Sep 17, 2025 at 03:21:31PM +0200, Vlastimil Babka wrote:
> On 9/17/25 15:07, Harry Yoo wrote:
> > On Wed, Sep 17, 2025 at 02:05:49PM +0200, Vlastimil Babka wrote:
> >> On 9/17/25 13:32, Harry Yoo wrote:
> >> > On Wed, Sep 17, 2025 at 11:55:10AM +0200, Vlastimil Babka wrote:
> >> >> On 9/17/25 10:30, Harry Yoo wrote:
> >> >> > On Wed, Sep 10, 2025 at 10:01:06AM +0200, Vlastimil Babka wrote:
> >> >> >> +				sfw->skip = true;
> >> >> >> +				continue;
> >> >> >> +			}
> >> >> >> +
> >> >> >> +			INIT_WORK(&sfw->work, flush_rcu_sheaf);
> >> >> >> +			sfw->skip = false;
> >> >> >> +			sfw->s = s;
> >> >> >> +			queue_work_on(cpu, flushwq, &sfw->work);
> >> >> >> +			flushed = true;
> >> >> >> +		}
> >> >> >> +
> >> >> >> +		for_each_online_cpu(cpu) {
> >> >> >> +			sfw = &per_cpu(slub_flush, cpu);
> >> >> >> +			if (sfw->skip)
> >> >> >> +				continue;
> >> >> >> +			flush_work(&sfw->work);
> >> >> >> +		}
> >> >> >> +
> >> >> >> +		mutex_unlock(&flush_lock);
> >> >> >> +	}
> >> >> >> +
> >> >> >> +	mutex_unlock(&slab_mutex);
> >> >> >> +	cpus_read_unlock();
> >> >> >> +
> >> >> >> +	if (flushed)
> >> >> >> +		rcu_barrier();
> >> >> >
> >> >> > I think we need to call rcu_barrier() even if flushed == false?
> >> >> >
> >> >> > Maybe a kvfree_rcu()'d object was already waiting for the rcu
> >> >> > callback to be processed before flush_all_rcu_sheaves() is
> >> >> > called, and in flush_all_rcu_sheaves() we skipped all
> >> >> > (cache, cpu) pairs, so flushed == false but the rcu callback
> >> >> > isn't processed yet by the end of the function?
> >> >> >
> >> >> > That sounds very unlikely to happen in a realistic scenario,
> >> >> > but it's still possible...
> >> >>
> >> >> Yes, also a good point, will flush unconditionally.
> >> >>
> >> >> Maybe in __kfree_rcu_sheaf() I should also move the call_rcu(...)
> >> >> before local_unlock().
> >> >>
> >> >> So we don't end up seeing a NULL pcs->rcu_free in
> >> >> flush_all_rcu_sheaves() because __kfree_rcu_sheaf() already set
> >> >> it to NULL, but didn't yet do the call_rcu() as it got preempted
> >> >> after local_unlock().
> >> >
> >> > Makes sense to me.
> >
> > Wait, I'm confused.
> >
> > I think the caller of kvfree_rcu_barrier() should make sure that it
> > is invoked only after a kvfree_rcu(X, rhs) call has returned, if the
> > caller expects the object X to be freed before kvfree_rcu_barrier()
> > returns?
>
> Hmm, the caller of kvfree_rcu(X, rhs) might have returned without
> filling up the rcu_sheaf fully, and thus without submitting it to
> call_rcu(), then migrated to another cpu. Then it calls
> kvfree_rcu_barrier() while an unrelated kvfree_rcu(Y, rhs) call for
> the same kmem_cache (kvfree_rcu_barrier() is not only for cache
> destruction) on the previous cpu fills up the rcu_sheaf and is about
> to call_rcu() on it. And since that sheaf also contains the object X,
> we should make sure it is flushed.
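If I spell out that interleaving (hypothetical timeline, just to check
that I follow; Y is the unrelated object):

	CPU0                                CPU1 (task T after migrating)
	----                                ------------------------------
	T: kvfree_rcu(X, rhs)
	   /* X buffered in CPU0's
	      pcs->rcu_free; sheaf not
	      full, so no call_rcu() yet */
	   ... T migrates to CPU1 ...
	other task:                         T: kvfree_rcu_barrier()
	kvfree_rcu(Y, rhs) /* same cache */      flush_all_rcu_sheaves()
	  __kfree_rcu_sheaf():
	    sheaf (still holding X) is
	    now full, pcs->rcu_free = NULL,
	    local_unlock(), preempted
	    before call_rcu()
	                                           sees pcs->rcu_free == NULL
	                                           on CPU0 -> skips it
	                                         rcu_barrier()
	                                         /* the callback that would
	                                            free X was never queued,
	                                            so nothing waits for it */

...so kvfree_rcu_barrier() can return while X still sits in a full
sheaf that nobody has handed to call_rcu().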
I was going to say "but we queue and wait for the flushing work to
complete, so the sheaf containing object X should be flushed?" But
nah, that's true only if we see pcs->rcu_free != NULL in
flush_all_rcu_sheaves(). You are right...

Hmm, maybe it's simpler to fix this by never skipping the queueing of
the work, even when pcs->rcu_free == NULL? (rough sketch at the end of
this mail)

> > IOW if flush_all_rcu_sheaves() is called while __kfree_rcu_sheaf(s, X)
> > was running on another CPU, we don't have to guarantee that
> > flush_all_rcu_sheaves() returns only after the object X is freed?
> >
> >> >> But then rcu_barrier() itself probably won't mean we make sure
> >> >> such cpus finished the local_locked section, if we didn't queue
> >> >> work on them. So maybe we need synchronize_rcu()?
>
> So... we don't need a synchronize_rcu() then?
>
> Or my brain started malfunctioning again :D

--
Cheers,
Harry / Hyeonggon
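P.S. To make the "never skip the queueing" idea concrete, the first
loop in flush_all_rcu_sheaves() would become something like this
(rough, untested sketch against this patch; it assumes call_rcu() is
also moved before local_unlock() in __kfree_rcu_sheaf() as you
suggested above, so that executing the work on the cpu serializes with
any in-flight __kfree_rcu_sheaf()):

	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);

		/*
		 * Deliberately don't check pcs->rcu_free here: a racing
		 * __kfree_rcu_sheaf() on that cpu may have just set it
		 * to NULL but not yet done the call_rcu(). Since
		 * flush_rcu_sheaf() runs on that cpu and takes the same
		 * local_lock, flush_work() can only return after such a
		 * __kfree_rcu_sheaf() has finished its call_rcu().
		 */
		INIT_WORK(&sfw->work, flush_rcu_sheaf);
		sfw->s = s;
		queue_work_on(cpu, flushwq, &sfw->work);
	}

Then sfw->skip and the flushed flag can go away: the second loop just
flush_work()s every cpu, and the rcu_barrier() at the end is called
unconditionally, as discussed above.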