From: "Steven Rostedt (VMware)" <>

The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:

   refill_stock()
      get_cpu_var()
      drain_stock()
         res_counter_uncharge()
            res_counter_uncharge_until()
               spin_lock() <== boom

But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters") replaced
the calls to res_counter_uncharge() in drain_stock() with the lockless
function page_counter_uncharge(). There is no spin lock left in that path,
and therefore no more reason to keep the local lock.

Cc: <>
Reported-by: Haiyang HY1 Tan <>
Signed-off-by: Steven Rostedt (VMware) <>
[bigeasy: That upstream commit appeared in v3.19 and the patch in
  question in v3.18.7-rt2, and v3.18 still seems to be maintained. So I
  guess that v3.18 would need the local locks that we are about to remove
  here. I am not sure if any earlier versions have the patch.
  The stable tag here is because Haiyang reported (and debugged) a crash
  in 4.4-RT with this patch applied (which has get_cpu_light() instead of
  the local locks it gained in v4.9-RT).]
Signed-off-by: Sebastian Andrzej Siewior <>
 mm/memcontrol.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 493b4986d5dc..56f67a15937b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1925,17 +1925,14 @@ static void drain_local_stock(struct work_struct *dummy)
  */
 static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
-	struct memcg_stock_pcp *stock;
-	int cpu = get_cpu_light();
-
-	stock = &per_cpu(memcg_stock, cpu);
+	struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock);
 
 	if (stock->cached != memcg) { /* reset if necessary */
 		drain_stock(stock);
 		stock->cached = memcg;
 	}
 	stock->nr_pages += nr_pages;
-	put_cpu_light();
+	put_cpu_var(memcg_stock);
 }
