Deferred split THPs may accumulate with some workloads; they get shrunk
when memory pressure is hit.  Currently we use DEFAULT_SEEKS to determine
how many objects get scanned and then split if possible, but deferred
split THPs are not like other system cache objects, e.g. the inode cache,
which would incur extra I/O if over-reclaimed.  The unmapped pages will
not be accessed anymore, so we could shrink them more aggressively.

We could shrink THPs proactively even when memory pressure is not hit;
however, IMHO waiting for memory pressure is still a good compromise and
trade-off.  And we do have a simpler way to shrink these objects harder
until we have to take other means to proactively drain them.

Change shrinker->seeks to 0 to shrink deferred split THPs harder.
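
For context, a zero-seek shrinker is scanned much harder because
do_shrink_slab() special-cases seeks == 0.  A simplified sketch of that
logic (paraphrased from mm/vmscan.c, not the verbatim code):

        /* simplified from do_shrink_slab() in mm/vmscan.c */
        freeable = shrinker->count_objects(shrinker, shrinkctl);
        if (shrinker->seeks) {
                /* scan target scaled by reclaim priority and seek cost */
                delta = freeable >> priority;
                delta *= 4;
                do_div(delta, shrinker->seeks);
        } else {
                /*
                 * Zero-seek objects are cheap to recreate, so scan
                 * half of them regardless of reclaim priority.
                 */
                delta = freeable / 2;
        }

With seeks == 0 the scan target no longer scales down with the reclaim
priority, so the deferred split queue gets drained much faster once
reclaim kicks in.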

Cc: Kirill A. Shutemov <[email protected]>
Cc: Kirill Tkhai <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3b78910..1d6b1f1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2955,7 +2955,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 static struct shrinker deferred_split_shrinker = {
        .count_objects = deferred_split_count,
        .scan_objects = deferred_split_scan,
-       .seeks = DEFAULT_SEEKS,
+       .seeks = 0,
        .flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
                 SHRINKER_NONSLAB,
 };
-- 
1.8.3.1
