On Wed, Dec 02, 2020 at 10:27:21AM -0800, Yang Shi wrote:
> @@ -504,6 +577,34 @@ int memcg_expand_shrinker_maps(int new_id)
>       return ret;
>  }
>  
> +int memcg_expand_shrinker_deferred(int new_id)
> +{
> +     int size, old_size, ret = 0;
> +     struct mem_cgroup *memcg;
> +
> +     size = (new_id + 1) * sizeof(atomic_long_t);
> +     old_size = memcg_shrinker_deferred_size;
> +     if (size <= old_size)
> +             return 0;
> +
> +     mutex_lock(&memcg_shrinker_mutex);

The locking is somewhat confusing. I was wondering why we first read
memcg_shrinker_deferred_size "locklessly", then change it while
holding memcg_shrinker_mutex.

memcg_shrinker_deferred_size only changes under shrinker_rwsem(write),
correct? This should be documented in a comment, IMO.
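
Something along these lines, right above the variable's definition,
would do (a sketch; exact wording and placement are up to you):

/*
 * Updated only with shrinker_rwsem held for write during shrinker
 * (de)registration. Readers (the memcg allocation path) must hold
 * shrinker_rwsem for read to get a stable value.
 */
static int memcg_shrinker_deferred_size;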

memcg_shrinker_mutex looks superfluous then. The memcg allocation path
is the read-side of memcg_shrinker_deferred_size, and so simply needs
to take shrinker_rwsem(read) to lock out shrinker (de)registration.
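
I.e. roughly this (a sketch only; memcg_alloc_shrinker_deferred() and
the memcg->shrinker_deferred field are illustrative names, not
necessarily what the patch uses):

static int memcg_alloc_shrinker_deferred(struct mem_cgroup *memcg)
{
	int size;

	/*
	 * shrinker_rwsem(read) locks out shrinker (de)registration,
	 * so memcg_shrinker_deferred_size cannot change under us.
	 */
	down_read(&shrinker_rwsem);
	size = memcg_shrinker_deferred_size;
	memcg->shrinker_deferred = kvzalloc(size, GFP_KERNEL);
	up_read(&shrinker_rwsem);

	return memcg->shrinker_deferred ? 0 : -ENOMEM;
}

No mutex needed at all.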

Also, isn't memcg_shrinker_deferred_size just shrinker_nr_max times
sizeof(atomic_long_t)? And isn't memcg_expand_shrinker_deferred()
only called when size > old_size in the first place (because
id >= shrinker_nr_max), which makes the size <= old_size early
return dead code?
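
Then the whole thing could key off shrinker_nr_max and drop both the
separate size variable and the check. Sketch below;
expand_one_shrinker_deferred() is a made-up helper standing in for
the per-memcg reallocation:

static int memcg_expand_shrinker_deferred(int new_id)
{
	int size = (new_id + 1) * sizeof(atomic_long_t);
	int old_size = shrinker_nr_max * sizeof(atomic_long_t);
	struct mem_cgroup *memcg;
	int ret = 0;

	/*
	 * Called with shrinker_rwsem held for write and
	 * new_id >= shrinker_nr_max, so size > old_size always
	 * holds here.
	 */
	for_each_mem_cgroup(memcg) {
		ret = expand_one_shrinker_deferred(memcg, size, old_size);
		if (ret) {
			mem_cgroup_iter_break(NULL, memcg);
			break;
		}
	}
	return ret;
}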
