On Mon 16-01-17 11:09:34, Mel Gorman wrote:
[...]
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 532a2a750952..46aac487b89a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2684,6 +2684,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>                               continue;
>  
>                       if (sc->priority != DEF_PRIORITY &&
> +                         !buffer_heads_over_limit &&
>                           !pgdat_reclaimable(zone->zone_pgdat))
>                               continue;       /* Let kswapd poll it */

I think we should rather remove pgdat_reclaimable here. This sounds
like the wrong layer at which to decide whether we want to reclaim and
how much.

But even that won't help very much, I am afraid. As I've noted in the
other response, as long as we scale the slab shrinking based on
nr_scanned, we will have a problem in situations where slab outnumbers
the LRU lists by too much. I do not have a good idea how to fix that,
though...
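
For the record, here is a minimal userspace sketch of the scaling I
mean. It only paraphrases the delta computation done in
do_shrink_slab() (it is not the kernel code), and all the numbers in
main() are made up purely for illustration:

#include <stdio.h>

/*
 * Sketch of how scan pressure on a shrinker is derived from LRU
 * scanning progress: delta ~ freeable * nr_scanned / lru_pages.
 */
static unsigned long shrink_delta(unsigned long freeable,  /* slab objects */
                                  unsigned long nr_scanned, /* LRU pages scanned */
                                  unsigned long lru_pages,  /* LRU size */
                                  unsigned int seeks)
{
        unsigned long delta = (4UL * nr_scanned) / seeks;

        delta *= freeable;
        return delta / (lru_pages + 1);
}

int main(void)
{
        /* hypothetical: slab outnumbers the LRU lists 100:1 */
        unsigned long freeable = 100UL * 1000 * 1000;
        unsigned long lru_pages = 1000 * 1000;

        /* zones skipped (e.g. !pgdat_reclaimable) -> zero slab pressure */
        printf("nr_scanned=0:    delta=%lu\n",
               shrink_delta(freeable, 0, lru_pages, 2));

        /* modest LRU scanning -> only proportional slab pressure */
        printf("nr_scanned=4096: delta=%lu\n",
               shrink_delta(freeable, 4096, lru_pages, 2));
        return 0;
}

With nr_scanned pinned to 0 (e.g. because every zone is skipped), the
shrinker sees no pressure at all, no matter how large freeable grows.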

-- 
Michal Hocko
SUSE Labs
