On Tue, Jul 31, 2018 at 04:39:08PM -0700, Andrew Morton wrote:
> On Mon, 30 Jul 2018 11:31:13 -0400 Johannes Weiner <han...@cmpxchg.org> wrote:
> 
> > Subject: [PATCH] mm: memcontrol: simplify memcg idr allocation and error
> >  unwinding
> > 
> > The memcg ID is allocated early in the multi-step memcg creation
> > process, which requires a two-step ID allocation and IDR publication,
> > as well as two separate IDR cleanup/unwind sites for error handling.
> > 
> > Defer the ID allocation and IDR publication until the last second,
> > during onlining, to eliminate all this complexity. There is no
> > requirement to have the ID and IDR entry any earlier than that, and
> > since the root reference to the ID is put in the offline path, the
> > two ends pair up nicely.
> 
> This patch isn't aware of Kirill's later "mm, memcg: assign memcg-aware
> shrinkers bitmap to memcg", which altered mem_cgroup_css_online():
> 
> @@ -4356,6 +4470,11 @@ static int mem_cgroup_css_online(struct
>  {
>       struct mem_cgroup *memcg = mem_cgroup_from_css(css);
>  
> +     if (memcg_alloc_shrinker_maps(memcg)) {
> +             mem_cgroup_id_remove(memcg);
> +             return -ENOMEM;
> +     }
> +
>       /* Online state pins memcg ID, memcg ID pins CSS */
>       atomic_set(&memcg->id.ref, 1);
>       css_get(css);
> 

Hm, that looks out of place too. The bitmaps are allocated for the
entire lifetime of the css, not just while it's online.

Any objections to the following fixup to that patch?

From bbb785f1daca74024232aa34ba29a8a108556ace Mon Sep 17 00:00:00 2001
From: Johannes Weiner <han...@cmpxchg.org>
Date: Wed, 1 Aug 2018 11:42:55 -0400
Subject: [PATCH] mm, memcg: assign memcg-aware shrinkers bitmap to memcg fix

The shrinker bitmap allocation is a bit out of place in the css
onlining path.

Allocate and free those bitmaps as part of the memcg alloc and free
procedures.

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 mm/memcontrol.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c9098200326f..82eb40b715da 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4342,6 +4342,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 {
        int node;
 
+       memcg_free_shrinker_maps(memcg);
        for_each_node(node)
                free_mem_cgroup_per_node_info(memcg, node);
        free_percpu(memcg->stat_cpu);
@@ -4381,6 +4382,9 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
                if (alloc_mem_cgroup_per_node_info(memcg, node))
                        goto fail;
 
+       if (memcg_alloc_shrinker_maps(memcg))
+               goto fail;
+
        if (memcg_wb_domain_init(memcg, GFP_KERNEL))
                goto fail;
 
@@ -4470,11 +4474,6 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
        struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
-       if (memcg_alloc_shrinker_maps(memcg)) {
-               mem_cgroup_id_remove(memcg);
-               return -ENOMEM;
-       }
-
        /* Online state pins memcg ID, memcg ID pins CSS */
        atomic_set(&memcg->id.ref, 1);
        css_get(css);
@@ -4527,7 +4526,6 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
        vmpressure_cleanup(&memcg->vmpressure);
        cancel_work_sync(&memcg->high_work);
        mem_cgroup_remove_from_trees(memcg);
-       memcg_free_shrinker_maps(memcg);
        memcg_free_kmem(memcg);
        mem_cgroup_free(memcg);
 }
-- 
2.18.0