Re: [PATCH v5 13/13] mm: Clear shrinker bit if there are no objects related to memcg

2018-05-16 Thread Vladimir Davydov
On Tue, May 15, 2018 at 11:55:04AM +0300, Kirill Tkhai wrote:
> >> @@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
> >>  			continue;
> >>  
> >>  		ret = do_shrink_slab(&sc, shrinker, priority);
> >> -		if (ret == SHRINK_EMPTY)
> >> -			ret = 0;
> >> +		if (ret == SHRINK_EMPTY) {
> >> +			clear_bit(i, map->map);
> >> +			/*
> >> +			 * Pairs with mb in memcg_set_shrinker_bit():
> >> +			 *
> >> +			 * list_lru_add()     shrink_slab_memcg()
> >> +			 *   list_add_tail()    clear_bit()
> >> +			 *   <MB>               <MB>
> >> +			 *   set_bit()          do_shrink_slab()
> >> +			 */
> > 
> > Please improve the comment so that it isn't just a diagram.
> 
> Please say which comment you want to see here.

I want the reader to understand why we need to invoke the shrinker twice
if it returns SHRINK_EMPTY. The diagram doesn't really help here IMO. So
I'd write something like this:

		ret = do_shrink_slab(&sc, shrinker, priority);
		if (ret == SHRINK_EMPTY) {
			clear_bit(i, map->map);
			/*
			 * After the shrinker reported that it had no objects to
			 * free, but before we cleared the corresponding bit in
			 * the memcg shrinker map, a new object might have been
			 * added. To make sure we have the bit set in this case,
			 * we invoke the shrinker one more time and re-set the
			 * bit if it reports that it is not empty anymore. The
			 * memory barrier here pairs with the barrier in
			 * memcg_set_shrinker_bit():
			 *
			 * list_lru_add()     shrink_slab_memcg()
			 *   list_add_tail()    clear_bit()
			 *   <MB>               <MB>
			 *   set_bit()          do_shrink_slab()
			 */
			smp_mb__after_atomic();
			ret = do_shrink_slab(&sc, shrinker, priority);
			if (ret == SHRINK_EMPTY)
				ret = 0;
			else
				memcg_set_shrinker_bit(memcg, nid, i);
		}
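
For reference, the barrier this comment pairs with is the one the patch adds on
the set side, in memcg_set_shrinker_bit() (quoted further down in the thread):

	rcu_read_lock();
	map = MEMCG_SHRINKER_MAP(memcg, nid);
	/* Pairs with smp mb in shrink_slab() */
	smp_mb__before_atomic();
	set_bit(nr, map->map);
	rcu_read_unlock();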


Re: [PATCH v5 13/13] mm: Clear shrinker bit if there are no objects related to memcg

2018-05-15 Thread Kirill Tkhai
On 15.05.2018 08:59, Vladimir Davydov wrote:
> On Thu, May 10, 2018 at 12:54:15PM +0300, Kirill Tkhai wrote:
>> To avoid further unneeded calls of do_shrink_slab()
>> for shrinkers that no longer have any charged
>> objects in a memcg, their bits have to be cleared.
>>
>> This patch introduces a lockless mechanism to do that
>> without races with parallel list lru adds. After
>> do_shrink_slab() returns SHRINK_EMPTY the first time,
>> we clear the bit and call it once again. Then we restore
>> the bit if the new return value is different.
>>
>> Note that the single smp_mb__after_atomic() in shrink_slab_memcg()
>> covers two situations:
>>
>> 1) list_lru_add()          shrink_slab_memcg()
>>      list_add_tail()         for_each_set_bit() <--- read bit
>>                                do_shrink_slab() <--- missed list update (no barrier)
>>      <MB>                      <MB>
>>      set_bit()                 do_shrink_slab() <--- seen list update
>>
>> This situation, when the first do_shrink_slab() sees the set bit
>> but doesn't see the list update (i.e., it races with the first element
>> queueing), is rare. So we don't add an <MB> before the first call
>> of do_shrink_slab() and instead rely on the second call, so as not
>> to slow down the generic case. Also, the second call is needed
>> anyway, as seen below in (2).
>>
>> 2) list_lru_add()          shrink_slab_memcg()
>>      list_add_tail()         ...
>>      set_bit()               ...
>>      ...                     for_each_set_bit()
>>      do_shrink_slab()        do_shrink_slab()
>>        clear_bit()           ...
>>      ...                     ...
>>      list_lru_add()          ...
>>        list_add_tail()       clear_bit()
>>        <MB>                  <MB>
>>        set_bit()             do_shrink_slab()
>>
>> The barriers guarantee that the second do_shrink_slab()
>> in the right-side task sees the list update if it really
>> cleared the bit. This case is drawn in the code comment.
>>
>> [Results/performance of the patchset]
>>
>> After the whole patchset is applied, the test below shows a significant
>> increase in performance:
>>
>> $echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
>> $mkdir /sys/fs/cgroup/memory/ct
>> $echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
>> $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i; echo $$ > 
>> /sys/fs/cgroup/memory/ct/$i/cgroup.procs; mkdir -p s/$i; mount -t tmpfs $i 
>> s/$i; touch s/$i/file; done
>>
>> Then, 5 sequential calls of drop caches:
>> $time echo 3 > /proc/sys/vm/drop_caches
>>
>> 1)Before:
>> 0.00user 13.78system 0:13.78elapsed 99%CPU
>> 0.00user 5.59system 0:05.60elapsed 99%CPU
>> 0.00user 5.48system 0:05.48elapsed 99%CPU
>> 0.00user 8.35system 0:08.35elapsed 99%CPU
>> 0.00user 8.34system 0:08.35elapsed 99%CPU
>>
>> 2)After
>> 0.00user 1.10system 0:01.10elapsed 99%CPU
>> 0.00user 0.00system 0:00.01elapsed 64%CPU
>> 0.00user 0.01system 0:00.01elapsed 82%CPU
>> 0.00user 0.00system 0:00.01elapsed 64%CPU
>> 0.00user 0.01system 0:00.01elapsed 82%CPU
>>
>> The results show the performance increases at least 548 times.
>>
>> Signed-off-by: Kirill Tkhai 
>> ---
>>  include/linux/memcontrol.h |    2 ++
>>  mm/vmscan.c                |   19 +++++++++++++++++--
>>  2 files changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>> index 436691a66500..82c0bf2d0579 100644
>> --- a/include/linux/memcontrol.h
>> +++ b/include/linux/memcontrol.h
>> @@ -1283,6 +1283,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int
>>  
>>  	rcu_read_lock();
>>  	map = MEMCG_SHRINKER_MAP(memcg, nid);
>> +	/* Pairs with smp mb in shrink_slab() */
>> +	smp_mb__before_atomic();
>>  	set_bit(nr, map->map);
>>  	rcu_read_unlock();
>>  }
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 7b0075612d73..189b163bef4a 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>>  			continue;
>>  
>>  		ret = do_shrink_slab(&sc, shrinker, priority);
>> -		if (ret == SHRINK_EMPTY)
>> -			ret = 0;
>> +		if (ret == SHRINK_EMPTY) {
>> +			clear_bit(i, map->map);
>> +			/*
>> +			 * Pairs with mb in memcg_set_shrinker_bit():
>> +			 *
>> +			 * list_lru_add()     shrink_slab_memcg()
>> +			 *   list_add_tail()    clear_bit()
>> +			 *   <MB>               <MB>
>> +			 *   set_bit()          do_shrink_slab()
>> +			 */
> 
> Please improve the comment so that it isn't just a diagram.

Please say which comment you want to see here.

>> +			smp_mb__after_atomic();
>> +			ret = do_shrink_slab(&sc, shrinker, priority);
>> +			if (ret == SHRINK_EMPTY)
>> +				ret = 0;
>> +			else

Re: [PATCH v5 13/13] mm: Clear shrinker bit if there are no objects related to memcg

2018-05-14 Thread Vladimir Davydov
On Thu, May 10, 2018 at 12:54:15PM +0300, Kirill Tkhai wrote:
> To avoid further unneeded calls of do_shrink_slab()
> for shrinkers that no longer have any charged
> objects in a memcg, their bits have to be cleared.
> 
> This patch introduces a lockless mechanism to do that
> without races with parallel list lru adds. After
> do_shrink_slab() returns SHRINK_EMPTY the first time,
> we clear the bit and call it once again. Then we restore
> the bit if the new return value is different.
> 
> Note that the single smp_mb__after_atomic() in shrink_slab_memcg()
> covers two situations:
> 
> 1) list_lru_add()          shrink_slab_memcg()
>      list_add_tail()         for_each_set_bit() <--- read bit
>                                do_shrink_slab() <--- missed list update (no barrier)
>      <MB>                      <MB>
>      set_bit()                 do_shrink_slab() <--- seen list update
> 
> This situation, when the first do_shrink_slab() sees the set bit
> but doesn't see the list update (i.e., it races with the first element
> queueing), is rare. So we don't add an <MB> before the first call
> of do_shrink_slab() and instead rely on the second call, so as not
> to slow down the generic case. Also, the second call is needed
> anyway, as seen below in (2).
> 
> 2) list_lru_add()          shrink_slab_memcg()
>      list_add_tail()         ...
>      set_bit()               ...
>      ...                     for_each_set_bit()
>      do_shrink_slab()        do_shrink_slab()
>        clear_bit()           ...
>      ...                     ...
>      list_lru_add()          ...
>        list_add_tail()       clear_bit()
>        <MB>                  <MB>
>        set_bit()             do_shrink_slab()
> 
> The barriers guarantee that the second do_shrink_slab()
> in the right-side task sees the list update if it really
> cleared the bit. This case is drawn in the code comment.
> 
> [Results/performance of the patchset]
> 
> After the whole patchset is applied, the test below shows a significant
> increase in performance:
> 
> $echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
> $mkdir /sys/fs/cgroup/memory/ct
> $echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
> $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i; echo $$ > 
> /sys/fs/cgroup/memory/ct/$i/cgroup.procs; mkdir -p s/$i; mount -t tmpfs $i 
> s/$i; touch s/$i/file; done
> 
> Then, 5 sequential calls of drop caches:
> $time echo 3 > /proc/sys/vm/drop_caches
> 
> 1)Before:
> 0.00user 13.78system 0:13.78elapsed 99%CPU
> 0.00user 5.59system 0:05.60elapsed 99%CPU
> 0.00user 5.48system 0:05.48elapsed 99%CPU
> 0.00user 8.35system 0:08.35elapsed 99%CPU
> 0.00user 8.34system 0:08.35elapsed 99%CPU
> 
> 2)After
> 0.00user 1.10system 0:01.10elapsed 99%CPU
> 0.00user 0.00system 0:00.01elapsed 64%CPU
> 0.00user 0.01system 0:00.01elapsed 82%CPU
> 0.00user 0.00system 0:00.01elapsed 64%CPU
> 0.00user 0.01system 0:00.01elapsed 82%CPU
> 
> The results show the performance increases at least 548 times.
> 
> Signed-off-by: Kirill Tkhai 
> ---
>  include/linux/memcontrol.h |    2 ++
>  mm/vmscan.c                |   19 +++++++++++++++++--
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 436691a66500..82c0bf2d0579 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1283,6 +1283,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int
>  
>  	rcu_read_lock();
>  	map = MEMCG_SHRINKER_MAP(memcg, nid);
> +	/* Pairs with smp mb in shrink_slab() */
> +	smp_mb__before_atomic();
>  	set_bit(nr, map->map);
>  	rcu_read_unlock();
>  }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7b0075612d73..189b163bef4a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>  			continue;
>  
>  		ret = do_shrink_slab(&sc, shrinker, priority);
> -		if (ret == SHRINK_EMPTY)
> -			ret = 0;
> +		if (ret == SHRINK_EMPTY) {
> +			clear_bit(i, map->map);
> +			/*
> +			 * Pairs with mb in memcg_set_shrinker_bit():
> +			 *
> +			 * list_lru_add()     shrink_slab_memcg()
> +			 *   list_add_tail()    clear_bit()
> +			 *   <MB>               <MB>
> +			 *   set_bit()          do_shrink_slab()
> +			 */

Please improve the comment so that it isn't just a diagram.

> +			smp_mb__after_atomic();
> +			ret = do_shrink_slab(&sc, shrinker, priority);
> +			if (ret == SHRINK_EMPTY)
> +				ret = 0;
> +			else
> +				memcg_set_shrinker_bit(memcg, nid, i);
> +		}
>  		freed += ret;
>  
>  		if (rwsem_is_contended(&shrinker_rwsem)) {
> 


[PATCH v5 13/13] mm: Clear shrinker bit if there are no objects related to memcg

2018-05-10 Thread Kirill Tkhai
To avoid further unneeded calls of do_shrink_slab()
for shrinkers that no longer have any charged
objects in a memcg, their bits have to be cleared.

This patch introduces a lockless mechanism to do that
without races with parallel list lru adds. After
do_shrink_slab() returns SHRINK_EMPTY the first time,
we clear the bit and call it once again. Then we restore
the bit if the new return value is different.

Note that the single smp_mb__after_atomic() in shrink_slab_memcg()
covers two situations:

1) list_lru_add()          shrink_slab_memcg()
     list_add_tail()         for_each_set_bit() <--- read bit
                               do_shrink_slab() <--- missed list update (no barrier)
     <MB>                      <MB>
     set_bit()                 do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the set bit
but doesn't see the list update (i.e., it races with the first element
queueing), is rare. So we don't add an <MB> before the first call
of do_shrink_slab() and instead rely on the second call, so as not
to slow down the generic case. Also, the second call is needed
anyway, as seen below in (2).

2) list_lru_add()          shrink_slab_memcg()
     list_add_tail()         ...
     set_bit()               ...
     ...                     for_each_set_bit()
     do_shrink_slab()        do_shrink_slab()
       clear_bit()           ...
     ...                     ...
     list_lru_add()          ...
       list_add_tail()       clear_bit()
       <MB>                  <MB>
       set_bit()             do_shrink_slab()

The barriers guarantee that the second do_shrink_slab()
in the right-side task sees the list update if it really
cleared the bit. This case is drawn in the code comment.
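
For illustration only, the same clear-and-recheck protocol can be written as a
small self-contained userspace sketch with C11 atomics. The names below
(lru_add(), shrink_empty(), shrink_one_bit(), shrinker_bit, nr_objects) are
made-up stand-ins rather than the kernel API, and the seq_cst fences stand in
for the <MB> barriers (smp_mb__before_atomic()/smp_mb__after_atomic() in the
patch):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int shrinker_bit;  /* stand-in for the memcg shrinker-map bit */
static atomic_int nr_objects;    /* stand-in for the list_lru contents */

/* list_lru_add() side: publish the object first, then advertise it. */
static void lru_add(void)
{
	atomic_fetch_add_explicit(&nr_objects, 1, memory_order_relaxed); /* list_add_tail() */
	atomic_thread_fence(memory_order_seq_cst);                       /* <MB> */
	atomic_store_explicit(&shrinker_bit, 1, memory_order_relaxed);   /* set_bit() */
}

/* do_shrink_slab() stand-in: true means "nothing to reclaim" (SHRINK_EMPTY). */
static bool shrink_empty(void)
{
	return atomic_load_explicit(&nr_objects, memory_order_relaxed) == 0;
}

/* shrink_slab_memcg() side: clear the bit, then look one more time. */
static void shrink_one_bit(void)
{
	if (!shrink_empty())
		return;                /* there was work to do; the bit stays set */

	atomic_store_explicit(&shrinker_bit, 0, memory_order_relaxed);   /* clear_bit() */
	atomic_thread_fence(memory_order_seq_cst);                       /* <MB> */
	if (!shrink_empty())           /* a racing lru_add() slipped in between */
		atomic_store_explicit(&shrinker_bit, 1, memory_order_relaxed); /* re-set the bit */
}

Either the second emptiness check observes the racing add and the bit is set
again, or the adder's set of the bit lands after the clear and the bit ends up
set anyway; in both cases the newly added object is not lost.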

[Results/performance of the patchset]

After the whole patchset is applied, the test below shows a significant
increase in performance:

$echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
$mkdir /sys/fs/cgroup/memory/ct
$echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
$for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i; echo $$ > 
/sys/fs/cgroup/memory/ct/$i/cgroup.procs; mkdir -p s/$i; mount -t tmpfs $i 
s/$i; touch s/$i/file; done

Then, 5 sequential calls of drop caches:
$time echo 3 > /proc/sys/vm/drop_caches

1)Before:
0.00user 13.78system 0:13.78elapsed 99%CPU
0.00user 5.59system 0:05.60elapsed 99%CPU
0.00user 5.48system 0:05.48elapsed 99%CPU
0.00user 8.35system 0:08.35elapsed 99%CPU
0.00user 8.34system 0:08.35elapsed 99%CPU

2)After
0.00user 1.10system 0:01.10elapsed 99%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU

The results show the performance increases at least 548 times.

Signed-off-by: Kirill Tkhai 
---
 include/linux/memcontrol.h |    2 ++
 mm/vmscan.c                |   19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 436691a66500..82c0bf2d0579 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1283,6 +1283,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int
 
 	rcu_read_lock();
 	map = MEMCG_SHRINKER_MAP(memcg, nid);
+	/* Pairs with smp mb in shrink_slab() */
+	smp_mb__before_atomic();
 	set_bit(nr, map->map);
 	rcu_read_unlock();
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b0075612d73..189b163bef4a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			continue;
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY)
-			ret = 0;
+		if (ret == SHRINK_EMPTY) {
+			clear_bit(i, map->map);
+			/*
+			 * Pairs with mb in memcg_set_shrinker_bit():
+			 *
+			 * list_lru_add()     shrink_slab_memcg()
+			 *   list_add_tail()    clear_bit()
+			 *   <MB>               <MB>
+			 *   set_bit()          do_shrink_slab()
+			 */
+			smp_mb__after_atomic();
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY)
+				ret = 0;
+			else
+				memcg_set_shrinker_bit(memcg, nid, i);
+		}
 		freed += ret;
 
 		if (rwsem_is_contended(&shrinker_rwsem)) {


