In its current implementation, shrink_slab() won't scan caches that have
fewer than batch_size objects. If there are only a few shrinkers
registered, this behavior causes no problems, because batch_size is
usually small. But if there are a lot of slab shrinkers, which is
perfectly possible since FS shrinkers are now per-superblock, we can end
up with hundreds of megabytes of practically unreclaimable kmem objects.
For instance, mounting a thousand ext2 FS images with a hundred files in
each and iterating over all the files using du(1) results in about
200 MB of FS caches that cannot be dropped even with the aid of the
vm.drop_caches sysctl! Fix this.
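
For illustration only (this snippet is not part of the patch), here is a
minimal userspace sketch of the old and fixed scan loops; the batch size
(assumed here to be 128, as with the default SHRINK_BATCH) and the
per-cache numbers are made up for the example:

	#include <stdio.h>

	#define BATCH_SIZE	128UL	/* assumed default batch size */

	/* old behaviour: only whole batches are ever scanned */
	static unsigned long old_scan(unsigned long total_scan)
	{
		unsigned long scanned = 0;

		while (total_scan >= BATCH_SIZE) {
			scanned += BATCH_SIZE;
			total_scan -= BATCH_SIZE;
		}
		return scanned;
	}

	/* fixed behaviour: a small cache may be scanned in one partial batch */
	static unsigned long new_scan(unsigned long total_scan,
				      unsigned long freeable)
	{
		unsigned long scanned = 0;

		if (total_scan < BATCH_SIZE &&
		    !(freeable < BATCH_SIZE && total_scan >= freeable))
			return 0;
		do {
			unsigned long nr = total_scan < BATCH_SIZE ?
						total_scan : BATCH_SIZE;

			scanned += nr;
			total_scan -= nr;
		} while (total_scan >= BATCH_SIZE);
		return scanned;
	}

	int main(void)
	{
		/* a small cache, e.g. one of a thousand per-sb FS caches */
		unsigned long freeable = 100, total_scan = 100;

		printf("old: %lu objects scanned\n", old_scan(total_scan));
		printf("new: %lu objects scanned\n",
		       new_scan(total_scan, freeable));
		return 0;
	}

With the old loop the 100 objects are never touched (0 scanned); with the
fixed loop all 100 are scanned once total_scan has caught up with
freeable, which is what makes such small caches reclaimable again.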

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Glauber Costa <[email protected]>
---
 mm/vmscan.c |   25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f6d716d..2e710d4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -275,7 +275,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
         * a large delta change is calculated directly.
         */
        if (delta < freeable / 4)
-               total_scan = min(total_scan, freeable / 2);
+               total_scan = min(total_scan, max(freeable / 2, batch_size));
 
        /*
         * Avoid risking looping forever due to too large nr value:
@@ -289,21 +289,34 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
                                nr_pages_scanned, lru_pages,
                                freeable, delta, total_scan);
 
-       while (total_scan >= batch_size) {
+       /*
+        * To avoid CPU cache thrashing, we should not scan fewer than
+        * batch_size objects in one pass.  However, if the cache has
+        * fewer than batch_size objects in total and we really want to
+        * shrink them all, go ahead and scan what we have; otherwise
+        * the last batch_size objects would never get reclaimed.
+        */
+       if (total_scan < batch_size &&
+           !(freeable < batch_size && total_scan >= freeable))
+               goto out;
+
+       do {
                unsigned long ret;
+               unsigned long nr_to_scan = min(total_scan, batch_size);
 
-               shrinkctl->nr_to_scan = batch_size;
+               shrinkctl->nr_to_scan = nr_to_scan;
                ret = shrinker->scan_objects(shrinker, shrinkctl);
                if (ret == SHRINK_STOP)
                        break;
                freed += ret;
 
-               count_vm_events(SLABS_SCANNED, batch_size);
-               total_scan -= batch_size;
+               count_vm_events(SLABS_SCANNED, nr_to_scan);
+               total_scan -= nr_to_scan;
 
                cond_resched();
-       }
+       } while (total_scan >= batch_size);
 
+out:
        /*
         * move the unused scan count back into the shrinker in a
         * manner that handles concurrent updates. If we exhausted the
-- 
1.7.10.4
