On Wed 26-08-20 12:43:32, Johannes Weiner wrote:
> On Wed, Aug 26, 2020 at 09:47:02PM +0800, Xunlei Pang wrote:
> > We've hit a soft lockup with "CONFIG_PREEMPT_NONE=y" when the
> > target memcg doesn't have any reclaimable memory.
> > 
> > It can be easily reproduced as below:
> >  watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
> >  CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
> >  Call Trace:
> >   shrink_lruvec+0x49f/0x640
> >   shrink_node+0x2a6/0x6f0
> >   do_try_to_free_pages+0xe9/0x3e0
> >   try_to_free_mem_cgroup_pages+0xef/0x1f0
> >   try_charge+0x2c1/0x750
> >   mem_cgroup_charge+0xd7/0x240
> >   __add_to_page_cache_locked+0x2fd/0x370
> >   add_to_page_cache_lru+0x4a/0xc0
> >   pagecache_get_page+0x10b/0x2f0
> >   filemap_fault+0x661/0xad0
> >   ext4_filemap_fault+0x2c/0x40
> >   __do_fault+0x4d/0xf9
> >   handle_mm_fault+0x1080/0x1790
> > 
> > It only happens on our 1-vcpu instances, because there's no chance
> > for the oom reaper to run and reclaim the to-be-killed process.
> > 
> > Add a cond_resched() to the upper-level shrink_node_memcgs() loop to
> > solve this issue, and any other possible issue like memory.min
> > protection.
> > 
> > Suggested-by: Michal Hocko <mho...@suse.com>
> > Signed-off-by: Xunlei Pang <xlp...@linux.alibaba.com>
> 
> This generally makes sense to me, but it really should have a comment:
> 
>       /*
>        * This loop can become CPU-bound when there are thousands
>        * of cgroups that aren't eligible for reclaim - either
>        * because they don't have any pages, or because their
>        * memory is explicitly protected. Avoid soft lockups.
>        */
>        cond_resched();
> 
> The placement in the middle of the multi-part protection checks is a
> bit odd too. It would be better to have it either at the top of the
> loop, or at the end, by replacing the continues with goto next.

Yes, makes sense. I would stick it at the beginning of the loop to make
it stand out and make it obvious wrt the code flow.
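
For reference, a minimal sketch of how that placement could look in
shrink_node_memcgs() (based on the mm/vmscan.c loop structure around
v5.9; the surrounding reclaim details are elided and approximate):

	static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
	{
		struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
		do {
			struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

			/*
			 * This loop can become CPU-bound when there are
			 * thousands of cgroups that aren't eligible for
			 * reclaim - either because they don't have any
			 * pages, or because their memory is explicitly
			 * protected. Avoid soft lockups.
			 */
			cond_resched();

			mem_cgroup_calculate_protection(target_memcg, memcg);

			if (mem_cgroup_below_min(memcg)) {
				/* Hard protection, skip this memcg. */
				continue;
			}

			/* ... memory.low checks, shrink_lruvec(), etc. ... */
		} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
	}

Putting the cond_resched() first means even iterations that bail out
early via continue (empty or fully protected memcgs) still hit a
scheduling point, which covers the continue paths mentioned above.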

-- 
Michal Hocko
SUSE Labs
