Hi Ross,

Comments below. Sorry if this is a bit incoherent; it's early and I'm
not properly caffeinated yet.

----- Original Message -----
> Under certain conditions, lru_count may drop below zero resulting in
> a large amount of log spam like this:
> 
> vmscan: shrink_slab: gfs2_dump_glock+0x3b0/0x630 [gfs2] \
>     negative objects to delete nr=-1
> 
> This happens as follows:
> 1) A glock is moved from lru_list to the dispose list and lru_count is
>    decremented.
> 2) The dispose function calls cond_resched() and drops the lru lock.
> 3) Another thread takes the lru lock and tries to add the same glock to
>    lru_list, checking if the glock is on an lru list.
> 4) It is on a list (actually the dispose list) and so it avoids
>    incrementing lru_count.
> 5) The glock is moved to lru_list.
> 6) The original thread doesn't dispose it because it has been re-added
>    to the lru list, but the lru_count has still decreased by one.
> 
> Fix by checking whether the LRU flag is set on the glock rather than
> checking whether the glock is on some list, and rearrange the code so
> that the LRU flag is set/cleared precisely when the glock is
> added to/removed from lru_list.
> 
> Signed-off-by: Ross Lagerwall <ross.lagerw...@citrix.com>
> ---
>  fs/gfs2/glock.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index b92740edc416..53e6c7e0c1b3 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -185,13 +185,14 @@ void gfs2_glock_add_to_lru(struct gfs2_glock *gl)
>  {
>       spin_lock(&lru_lock);
>  
> -     if (!list_empty(&gl->gl_lru))
> -             list_del_init(&gl->gl_lru);
> -     else
> +     list_del(&gl->gl_lru);
> +     list_add_tail(&gl->gl_lru, &lru_list);

This looks like a bug, and I like your idea of using the GLF_LRU bit
to determine whether or not to do the manipulation, but I have some
concerns. First, does it work with kernel list debugging turned on?

To me it looks like the list_del() (as opposed to list_del_init())
above will set entry->next and entry->prev to the LIST_POISON values,
and then list_add_tail() calls __list_add(), which checks:

        if (!__list_add_valid(new, prev, next))
                return;

Without list debugging, __list_add_valid() always returns true, but
with list debugging it sanity-checks the list's prev and next values,
which, since they're LIST_POISON, ought to make it fail.
So it seems like the original list_del_init() is correct.
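
For reference, here is roughly what the two helpers look like in
include/linux/list.h (quoting from memory, so double-check against
your tree):

        static inline void list_del(struct list_head *entry)
        {
                __list_del_entry(entry);
                /* poison the removed entry; reuse trips the debug checks */
                entry->next = LIST_POISON1;
                entry->prev = LIST_POISON2;
        }

        static inline void list_del_init(struct list_head *entry)
        {
                __list_del_entry(entry);
                /* reinitialize so the entry can safely be tested/re-added */
                INIT_LIST_HEAD(entry);
        }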

The intent of the original code was: if the glock is already on the
lru_list, take it off before re-adding it, and the count ought to be
okay, because if it's on the lru_list, it's already been incremented.
So taking it off and adding it back on is a net zero on the count.
But that's only true if the GLF_LRU bit is set. If it's on a different
list (the dispose list), as you noted, the count still needs to be
incremented.

If the glock is on the dispose list rather than the lru_list, we want
to take it off the dispose list and move it to the lru_list, but in
that case we need to increment lru_count and must not poison the
list_head.

So to me it seems like we should keep the list_del_init(), only doing
it if the list isn't empty, but key off the GLF_LRU flag for managing
the count. The lru_lock ought to prevent races.
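
In other words, keep the body of gfs2_glock_add_to_lru() along these
lines (untested, just to illustrate the idea):

        spin_lock(&lru_lock);

        /* Net zero on the count if it was already on the lru_list
         * (GLF_LRU set); off the dispose list it still needs counting. */
        if (!list_empty(&gl->gl_lru))
                list_del_init(&gl->gl_lru);
        list_add_tail(&gl->gl_lru, &lru_list);

        if (!test_bit(GLF_LRU, &gl->gl_flags)) {
                set_bit(GLF_LRU, &gl->gl_flags);
                atomic_inc(&lru_count);
        }

        spin_unlock(&lru_lock);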

> +
> +     if (!test_bit(GLF_LRU, &gl->gl_flags)) {
> +             set_bit(GLF_LRU, &gl->gl_flags);
>               atomic_inc(&lru_count);
> +     }

The above may be simplified to something like:

+       if (!test_and_set_bit(GLF_LRU, &gl->gl_flags))
+               atomic_inc(&lru_count);
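
(test_and_set_bit() sets the bit and returns its old value atomically,
so the behavior is the same as the separate test_bit()/set_bit() pair.)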

>  
> -     list_add_tail(&gl->gl_lru, &lru_list);
> -     set_bit(GLF_LRU, &gl->gl_flags);
>       spin_unlock(&lru_lock);
>  }
>  
> @@ -201,7 +202,7 @@ static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl)
>               return;
>  
>       spin_lock(&lru_lock);
> -     if (!list_empty(&gl->gl_lru)) {
> +     if (test_bit(GLF_LRU, &gl->gl_flags)) {
>               list_del_init(&gl->gl_lru);
>               atomic_dec(&lru_count);
>               clear_bit(GLF_LRU, &gl->gl_flags);

Here again, we could simplify with test_and_clear_bit above.
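
I.e., something like (again untested):

+       if (test_and_clear_bit(GLF_LRU, &gl->gl_flags)) {
+               list_del_init(&gl->gl_lru);
+               atomic_dec(&lru_count);
+       }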

> @@ -1456,6 +1457,7 @@ __acquires(&lru_lock)
>               if (!spin_trylock(&gl->gl_lockref.lock)) {
>  add_back_to_lru:
>                       list_add(&gl->gl_lru, &lru_list);
> +                     set_bit(GLF_LRU, &gl->gl_flags);
>                       atomic_inc(&lru_count);
>                       continue;
>               }
> @@ -1463,7 +1465,6 @@ __acquires(&lru_lock)
>                       spin_unlock(&gl->gl_lockref.lock);
>                       goto add_back_to_lru;
>               }
> -             clear_bit(GLF_LRU, &gl->gl_flags);
>               gl->gl_lockref.count++;
>               if (demote_ok(gl))
>                       handle_callback(gl, LM_ST_UNLOCKED, 0, false);
> @@ -1498,6 +1499,7 @@ static long gfs2_scan_glock_lru(int nr)
>               if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
>                       list_move(&gl->gl_lru, &dispose);
>                       atomic_dec(&lru_count);
> +                     clear_bit(GLF_LRU, &gl->gl_flags);
>                       freed++;
>                       continue;
>               }
> --
> 2.17.2
> 
> 
