Re: [PATCH 6/7] mm: free swap cache aggressively if memcg swap is full
On Fri, Dec 11, 2015 at 02:33:58PM -0500, Johannes Weiner wrote:
> On Thu, Dec 10, 2015 at 02:39:19PM +0300, Vladimir Davydov wrote:
> > Swap cache pages are freed aggressively if swap is nearly full (>50%
> > currently), because otherwise we are likely to stop scanning anonymous
> > when we near the swap limit even if there is plenty of freeable swap
> > cache pages. We should follow the same trend in case of memory cgroup,
> > which has its own swap limit.
> >
> > Signed-off-by: Vladimir Davydov
>
> Acked-by: Johannes Weiner
>
> One note:
>
> > @@ -5839,6 +5839,29 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
> > 	return nr_swap_pages;
> > }
> >
> > +bool mem_cgroup_swap_full(struct page *page)
> > +{
> > +	struct mem_cgroup *memcg;
> > +
> > +	VM_BUG_ON_PAGE(!PageLocked(page), page);
> > +
> > +	if (vm_swap_full())
> > +		return true;
> > +	if (!do_swap_account || !PageSwapCache(page))
> > +		return false;
>
> The callers establish PageSwapCache() under the page lock, which makes
> sense since they only inquire about the swap state when deciding what
> to do with a swapcache page at hand. So this check seems unnecessary.

Yeah, you're right, we don't need it here. Will remove it in v2.

Besides, I think I should have inserted a cgroup_subsys_on_dfl check in
this function so that it wouldn't check the memcg swap limit in case the
legacy hierarchy is used. Will do.

Thanks,
Vladimir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 6/7] mm: free swap cache aggressively if memcg swap is full
On Thu, Dec 10, 2015 at 02:39:19PM +0300, Vladimir Davydov wrote:
> Swap cache pages are freed aggressively if swap is nearly full (>50%
> currently), because otherwise we are likely to stop scanning anonymous
> when we near the swap limit even if there is plenty of freeable swap
> cache pages. We should follow the same trend in case of memory cgroup,
> which has its own swap limit.
>
> Signed-off-by: Vladimir Davydov

Acked-by: Johannes Weiner

One note:

> @@ -5839,6 +5839,29 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
> 	return nr_swap_pages;
> }
>
> +bool mem_cgroup_swap_full(struct page *page)
> +{
> +	struct mem_cgroup *memcg;
> +
> +	VM_BUG_ON_PAGE(!PageLocked(page), page);
> +
> +	if (vm_swap_full())
> +		return true;
> +	if (!do_swap_account || !PageSwapCache(page))
> +		return false;

The callers establish PageSwapCache() under the page lock, which makes
sense since they only inquire about the swap state when deciding what
to do with a swapcache page at hand. So this check seems unnecessary.