On 8/25/25 07:22, Harry Yoo wrote:
> On Wed, Jul 23, 2025 at 03:34:42PM +0200, Vlastimil Babka wrote:
>> Since we don't control the NUMA locality of objects in percpu sheaves,
>> allocations with node restrictions bypass them. Allocations without
>> restrictions may however still expect to get local objects with high
>> probability, and the introduction of sheaves can decrease it due to
>> freed objects from a remote node ending up in percpu sheaves.
>> 
>> The fraction of such remote frees seems low (5% on an 8-node machine)
>> but it can be expected that some cache or workload specific corner cases
>> exist. We can either conclude that this is not a problem due to the low
>> fraction, or we can make remote frees bypass percpu sheaves and go
>> directly to their slabs. This will make the remote frees more expensive,
>> but if it's only a small fraction, most frees will still benefit from
>> the lower overhead of percpu sheaves.
>> 
>> This patch thus makes remote object freeing bypass percpu sheaves,
>> including bulk freeing, and kfree_rcu() via the rcu_free sheaf.
>> However, it's not intended as a 100% guarantee that percpu sheaves
>> will only contain local objects. The refill from slabs does not
>> provide that guarantee in the first place, and there might be cpu
>> migrations happening when we need to unlock the local_lock. Avoiding
>> all that could be possible but complicated, so we can leave it for
>> later investigation of whether it would be worth it. It can be
>> expected that the more selective freeing will itself prevent
>> accumulation of remote objects in percpu sheaves, so any such
>> violations would have only short-term effects.
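
(Side note, since the hunks below are terse: the check the free paths
gain boils down to the sketch below. can_free_to_pcs() is a made-up
name for illustration only, not the actual patch code; slab_nid(),
numa_mem_id() and IS_ENABLED() are the existing kernel helpers.)

	static inline bool can_free_to_pcs(struct slab *slab)
	{
		/* Without NUMA there are no remote nodes; every free is local. */
		if (!IS_ENABLED(CONFIG_NUMA))
			return true;

		/*
		 * Only objects whose slab sits on the local node (strictly,
		 * the nearest node with memory) may enter the percpu sheaves;
		 * remote objects go directly to their slabs.
		 */
		return slab_nid(slab) == numa_mem_id();
	}
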
>> 
>> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
>> ---
>>  mm/slab_common.c |  7 +++++--
>>  mm/slub.c        | 42 ++++++++++++++++++++++++++++++++++++------
>>  2 files changed, 41 insertions(+), 8 deletions(-)
>> 
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index 2d806e02568532a1000fd3912db6978e945dcfa8..f466f68a5bd82030a987baf849a98154cd48ef23 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -1623,8 +1623,11 @@ static bool kfree_rcu_sheaf(void *obj)
>>  
>>      slab = folio_slab(folio);
>>      s = slab->slab_cache;
>> -    if (s->cpu_sheaves)
>> -            return __kfree_rcu_sheaf(s, obj);
>> +    if (s->cpu_sheaves) {
>> +            if (likely(!IS_ENABLED(CONFIG_NUMA) ||
>> +                       slab_nid(slab) == numa_node_id()))
>> +                    return __kfree_rcu_sheaf(s, obj);
>> +    }
> 
> This should be numa_mem_id() to handle memory-less NUMA nodes, as
> Christoph mentioned [1]?
> 
> I saw you addressed this in most places, but not this one.

Oops, right.
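
For the record, the fixed hunk would presumably read as follows (only
the node id lookup changes; numa_mem_id() returns the nearest node
that actually has memory, so memory-less nodes are handled):

	if (s->cpu_sheaves) {
		/* numa_mem_id(), not numa_node_id(): handle memory-less nodes */
		if (likely(!IS_ENABLED(CONFIG_NUMA) ||
			   slab_nid(slab) == numa_mem_id()))
			return __kfree_rcu_sheaf(s, obj);
	}
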
> With that addressed, please feel free to add:
> Reviewed-by: Harry Yoo <harry....@oracle.com>

Thanks!

> [1] 
> https://lore.kernel.org/linux-mm/c60ae681-6027-0626-8d4e-5833982bf...@gentwo.org
> 
>>  
>>      return false;
>>  }
> 

