Re: [PATCH v2] fs/dcache.c: fix spin lockup issue on nlru->lock

2017-06-22 Thread Vladimir Davydov
On Thu, Jun 22, 2017 at 10:01:39PM +0530, Sahitya Tummala wrote:
> 
> 
> On 6/21/2017 10:01 PM, Vladimir Davydov wrote:
> >
> >>index cddf397..c8ca150 100644
> >>--- a/fs/dcache.c
> >>+++ b/fs/dcache.c
> >>@@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
> >> 		LIST_HEAD(dispose);
> >> 		freed = list_lru_walk(&sb->s_dentry_lru,
> >>-			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> >>+			dentry_lru_isolate_shrink, &dispose, 1024);
> >> 		this_cpu_sub(nr_dentry_unused, freed);
> >> 		shrink_dentry_list(&dispose);
> >>+		cond_resched();
> >> 	} while (freed > 0);
> >In an extreme case, a single invocation of list_lru_walk() can skip all
> >1024 dentries, in which case 'freed' will be 0 forcing us to break the
> >loop prematurely. I think we should loop until there's at least one
> >dentry left on the LRU, i.e.
> >
> > while (list_lru_count(&sb->s_dentry_lru) > 0)
> >
> >However, even that wouldn't be quite correct, because list_lru_count()
> >iterates over all memory cgroups to sum list_lru_one->nr_items, which
> >can race with memcg offlining code migrating dentries off a dead cgroup
> >(see memcg_drain_all_list_lrus()). So it looks like to make this check
> >race-free, we need to account the number of entries on the LRU not only
> >per memcg, but also per node, i.e. add list_lru_node->nr_items.
> >Fortunately, list_lru entries can't be migrated between NUMA nodes.
> It looks like list_lru_count() is iterating per node before iterating over
> all memory cgroups, as below -
> 
> unsigned long list_lru_count_node(struct list_lru *lru, int nid)
> {
> 	long count = 0;
> 	int memcg_idx;
> 
> 	count += __list_lru_count_one(lru, nid, -1);
> 	if (list_lru_memcg_aware(lru)) {
> 		for_each_memcg_cache_index(memcg_idx)
> 			count += __list_lru_count_one(lru, nid, memcg_idx);
> 	}
> 	return count;
> }
> 
> The first call to __list_lru_count_one() is iterating over all the items per
> node, i.e., nlru->lru->nr_items.

lru->node[nid].lru.nr_items returned by __list_lru_count_one(lru, nid, -1)
only counts items accounted to the root cgroup, not the total number of
entries on the node.

> Is my understanding correct? If not, could you please clarify on how to get
> the lru items per node?

What I mean is that iterating over list_lru_node->memcg_lrus to count the
number of entries on the node is racy. For example, suppose you have
three cgroups with the following values of list_lru_one->nr_items:

  0   0   10

While list_lru_count_node() is at #1, cgroup #2 is offlined and its
list_lru_one is drained, i.e. its entries are migrated to the parent
cgroup, which happens to be #0, i.e. we see the following picture:

 10   0   0
      ^^^
      memcg_idx points here in list_lru_count_node()

Then the count returned by list_lru_count_node() will be 0, although
there are still 10 entries on the list.
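
Here is a standalone userspace illustration of that interleaving (hypothetical,
pthread-based, not kernel code): the reader sums per-index counters one at a
time while a concurrent drain moves everything from an index it has not reached
yet to one it has already passed, so the sum comes out 0 even though 10 items
exist throughout:

#include <pthread.h>
#include <stdio.h>

static long nr_items[3] = { 0, 0, 10 };	/* three "cgroups", 10 items total */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *drain(void *arg)
{
	/* offline "cgroup" #2: migrate its items to parent #0 */
	pthread_mutex_lock(&lock);
	nr_items[0] += nr_items[2];
	nr_items[2] = 0;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;
	long sum = 0;

	for (int i = 0; i < 3; i++) {
		pthread_mutex_lock(&lock);
		sum += nr_items[i];	/* each read is individually locked */
		pthread_mutex_unlock(&lock);
		if (i == 0) {		/* force the drain right after #0 was read */
			pthread_create(&t, NULL, drain, NULL);
			pthread_join(t, NULL);
		}
	}
	printf("sum = %ld, actual items = 10\n", sum);	/* prints sum = 0 */
	return 0;
}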

To avoid this race, we could keep list_lru_node->lock locked while
walking over list_lru_node->memcg_lrus, but that's too heavy. I'd prefer
adding list_lru_node->nr_items, which would be equal to the total number
of list_lru entries on the node, i.e. the sum of list_lru_node->lru.nr_items
and list_lru_node->memcg_lrus->lru[]->nr_items.
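
To make that concrete, here is a hypothetical sketch of the direction (field
name and hook placement are assumptions, not a tested patch): keep a per-node
total next to the lock, update it under nlru->lock wherever an entry is linked
or unlinked, and let list_lru_count_node() return it without walking
memcg_lrus:

/* Hypothetical sketch only. */
struct list_lru_node {
	spinlock_t		lock;		/* protects all lists on the node */
	struct list_lru_one	lru;		/* root-cgroup entries */
	struct list_lru_memcg	*memcg_lrus;	/* per-memcg entries, if memcg aware */
	long			nr_items;	/* total entries on this node */
} ____cacheline_aligned_in_smp;

/*
 * Every path that links or unlinks an entry already holds nlru->lock,
 * so it can also maintain the per-node total, e.g. in list_lru_add():
 *
 *	l->nr_items++;
 *	nlru->nr_items++;
 *
 * and the reverse on isolation/removal.  memcg offlining only moves
 * entries between lists of the same node, so the total is unaffected.
 */

unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
	return lru->node[nid].nr_items;	/* no memcg_lrus walk, no race */
}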


Re: [PATCH v2] fs/dcache.c: fix spin lockup issue on nlru->lock

2017-06-21 Thread Vladimir Davydov
On Wed, Jun 21, 2017 at 12:09:15PM +0530, Sahitya Tummala wrote:
> __list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a
> long duration if there are many items in the lru list. As per the
> current code, it can hold the spin lock while processing up to
> UINT_MAX entries at a time. So if there are many items in the lru
> list, "BUG: spinlock lockup suspected" is observed in the path
> below -
> 
> [] spin_bug+0x90
> [] do_raw_spin_lock+0xfc
> [] _raw_spin_lock+0x28
> [] list_lru_add+0x28
> [] dput+0x1c8
> [] path_put+0x20
> [] terminate_walk+0x3c
> [] path_lookupat+0x100
> [] filename_lookup+0x6c
> [] user_path_at_empty+0x54
> [] SyS_faccessat+0xd0
> [] el0_svc_naked+0x24
> 
> This nlru->lock is acquired by another CPU in this path -
> 
> [] d_lru_shrink_move+0x34
> [] dentry_lru_isolate_shrink+0x48
> [] __list_lru_walk_one.isra.10+0x94
> [] list_lru_walk_node+0x40
> [] shrink_dcache_sb+0x60
> [] do_remount_sb+0xbc
> [] do_emergency_remount+0xb0
> [] process_one_work+0x228
> [] worker_thread+0x2e0
> [] kthread+0xf4
> [] ret_from_fork+0x10
> 
> Fix this lockup by reducing the number of entries shrunk from the
> lru list to 1024 at a time. Also, add cond_resched() before
> processing the lru list again.
> 
> Link: http://marc.info/?t=14972286491&r=1&w=2
> Fix-suggested-by: Jan Kara 
> Fix-suggested-by: Vladimir Davydov 
> Signed-off-by: Sahitya Tummala 
> ---
> v2: patch shrink_dcache_sb() instead of list_lru_walk()
> ---
>  fs/dcache.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/dcache.c b/fs/dcache.c
> index cddf397..c8ca150 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
>  		LIST_HEAD(dispose);
>  
>  		freed = list_lru_walk(&sb->s_dentry_lru,
> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> +			dentry_lru_isolate_shrink, &dispose, 1024);
>  
>  		this_cpu_sub(nr_dentry_unused, freed);
>  		shrink_dentry_list(&dispose);
> +		cond_resched();
>  	} while (freed > 0);

In an extreme case, a single invocation of list_lru_walk() can skip all
1024 dentries, in which case 'freed' will be 0 forcing us to break the
loop prematurely. I think we should loop until there's at least one
dentry left on the LRU, i.e.

while (list_lru_count(&sb->s_dentry_lru) > 0)

However, even that wouldn't be quite correct, because list_lru_count()
iterates over all memory cgroups to sum list_lru_one->nr_items, which
can race with memcg offlining code migrating dentries off a dead cgroup
(see memcg_drain_all_list_lrus()). So it looks like to make this check
race-free, we need to account the number of entries on the LRU not only
per memcg, but also per node, i.e. add list_lru_node->nr_items.
Fortunately, list_lru entries can't be migrated between NUMA nodes.
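
For clarity, a sketch of the loop shape suggested above (not a tested patch; it
assumes a race-free list_lru_count(), as discussed elsewhere in the thread):

void shrink_dcache_sb(struct super_block *sb)
{
	do {
		LIST_HEAD(dispose);
		long freed;

		freed = list_lru_walk(&sb->s_dentry_lru,
			dentry_lru_isolate_shrink, &dispose, 1024);

		this_cpu_sub(nr_dentry_unused, freed);
		shrink_dentry_list(&dispose);
		cond_resched();
	} while (list_lru_count(&sb->s_dentry_lru) > 0);
}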

>  }
>  EXPORT_SYMBOL(shrink_dcache_sb);