On Tue 17-12-13 11:13:39, Li Zefan wrote:
[...]
> From: Li Zefan <lize...@huawei.com>
> Date: Tue, 17 Dec 2013 10:45:09 +0800
> Subject: [PATCH] cgroup: don't recycle cgroup id until all csses have been destroyed
> 
> Hugh reported this bug:
> 
> > CONFIG_MEMCG_SWAP is broken in 3.13-rc.  Try something like this:
> >
> > mkdir -p /tmp/tmpfs /tmp/memcg
> > mount -t tmpfs -o size=1G tmpfs /tmp/tmpfs
> > mount -t cgroup -o memory memcg /tmp/memcg
> > mkdir /tmp/memcg/old
> > echo 512M >/tmp/memcg/old/memory.limit_in_bytes
> > echo $$ >/tmp/memcg/old/tasks
> > cp /dev/zero /tmp/tmpfs/zero 2>/dev/null
> > echo $$ >/tmp/memcg/tasks
> > rmdir /tmp/memcg/old
> > sleep 1     # let rmdir work complete
> > mkdir /tmp/memcg/new
> > umount /tmp/tmpfs
> > dmesg | grep WARNING
> > rmdir /tmp/memcg/new
> > umount /tmp/memcg
> >
> > Shows lots of WARNING: CPU: 1 PID: 1006 at kernel/res_counter.c:91
> >                            res_counter_uncharge_locked+0x1f/0x2f()
> >
> > Breakage comes from 34c00c319ce7 ("memcg: convert to use cgroup id").
> >
> > The lifetime of a cgroup id is different from the lifetime of the
> > css id it replaced: memsw's css_get()s do nothing to hold on to the
> > old cgroup id, it soon gets recycled to a new cgroup, which then
> > mysteriously inherits the old's swap, without any charge for it.
> 
> Instead of removing the cgroup id right after all the csses have been
> offlined, remove it only after the csses have been destroyed.
> 
> To avoid returning a dangling css pointer after the css has been
> destroyed, make css_from_id() return NULL in that case.

OK, so this postpones the idr_remove until the css is finally freed, and
until then mem_cgroup_lookup finds the correct memcg. This will work as
well. It is basically the same lifetime rule we had with css_id, AFAIR.

For some reason I originally thought this wouldn't be possible because
of the comment above idr_remove.

> Reported-by: Hugh Dickins <hu...@google.com>
> Signed-off-by: Li Zefan <lize...@huawei.com>

Reviewed-by: Michal Hocko <mho...@suse.cz>

> ---
>  kernel/cgroup.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index c36d906..769b5bb 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -868,6 +868,15 @@ static void cgroup_diput(struct dentry *dentry, struct inode *inode)
>               struct cgroup *cgrp = dentry->d_fsdata;
>  
>               BUG_ON(!(cgroup_is_dead(cgrp)));
> +
> +             /*
> +              * We should remove the cgroup object from idr before its
> +              * grace period starts, so we won't be looking up a cgroup
> +              * while the cgroup is being freed.
> +              */
> +             idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
> +             cgrp->id = -1;
> +
>               call_rcu(&cgrp->rcu_head, cgroup_free_rcu);
>       } else {
>               struct cfent *cfe = __d_cfe(dentry);
> @@ -4104,6 +4113,7 @@ static void css_release(struct percpu_ref *ref)
>       struct cgroup_subsys_state *css =
>               container_of(ref, struct cgroup_subsys_state, refcnt);
>  
> +     rcu_assign_pointer(css->cgroup->subsys[css->ss->subsys_id], NULL);
>       call_rcu(&css->rcu_head, css_free_rcu_fn);
>  }
>  
> @@ -4545,14 +4555,6 @@ static void cgroup_destroy_css_killed(struct cgroup *cgrp)
>       /* delete this cgroup from parent->children */
>       list_del_rcu(&cgrp->sibling);
>  
> -     /*
> -      * We should remove the cgroup object from idr before its grace
> -      * period starts, so we won't be looking up a cgroup while the
> -      * cgroup is being freed.
> -      */
> -     idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
> -     cgrp->id = -1;
> -
>       dput(d);
>  
>       set_bit(CGRP_RELEASABLE, &parent->flags);
> -- 
> 1.8.0.2

-- 
Michal Hocko
SUSE Labs