On Fri 01-04-16 15:30:17, Vladimir Davydov wrote:
> When we call __kmem_cache_shrink on memory cgroup removal, we need to
> synchronize kmem_cache->cpu_partial update with put_cpu_partial that
> might be running on other cpus. Currently, we achieve that by using
> kick_all_cpus_sync, which works as a system wide memory barrier. Fast
> though it is, this method has a flaw - it issues a lot of IPIs, which
> might hurt high performance or real-time workloads.
> 
> To fix this, let's replace kick_all_cpus_sync with synchronize_sched.
> Although the latter may take much longer to finish, that shouldn't be a
> problem in this particular case, because memory cgroups are destroyed
> asynchronously from a workqueue, so no user visible delay is
> introduced. OTOH, it will save us from excessive IPIs when someone
> removes a cgroup.
> 
> Anyway, even if using synchronize_sched turns out to take too long, we
> can always introduce a kind of __kmem_cache_shrink batching so that
> this method would only be called once per cgroup destruction (not once
> per each per-memcg kmem cache, as it is now).
> 
> Reported-and-suggested-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Vladimir Davydov <[email protected]>

Acked-by: Michal Hocko <[email protected]>
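
For reference, the lockless pattern the barrier protects looks roughly
like this (a simplified sketch of the mm/slub.c code paths around that
time, not the exact source; function arguments are approximate):

	/* put_cpu_partial(), running on any CPU, reads the limit
	 * locklessly: */
	if (oldpage && oldpage->pobjects > s->cpu_partial)
		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));

	/* __kmem_cache_shrink() with deactivate zeroes the limit and
	 * must wait until every such lockless reader has finished
	 * before flushing: */
	s->cpu_partial = 0;
	s->min_partial = 0;
	synchronize_sched();	/* was kick_all_cpus_sync() */
	flush_all(s);

Since put_cpu_partial runs with preemption disabled, waiting for a
sched grace period with synchronize_sched guarantees all in-flight
readers have observed the new value, without interrupting every CPU.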

> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 279e773d80d3..03067f43dcf4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3697,7 +3697,7 @@ int __kmem_cache_shrink(struct kmem_cache *s, bool deactivate)
>                * s->cpu_partial is checked locklessly (see put_cpu_partial),
>                * so we have to make sure the change is visible.
>                */
> -             kick_all_cpus_sync();
> +             synchronize_sched();
>       }
>  
>       flush_all(s);
> -- 
> 2.1.4
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to [email protected].  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: [email protected]

-- 
Michal Hocko
SUSE Labs
