Re: [patch 7/7] mm: reduce rmap overhead for ex-KSM page copies created on swap faults

2012-12-19 Thread Johannes Weiner
On Wed, Dec 19, 2012 at 02:01:19AM -0500, Simon Jeons wrote:
> On Mon, 2012-12-17 at 13:12 -0500, Johannes Weiner wrote:
> > When ex-KSM pages are faulted from swap cache, the fault handler is
> > not capable of re-establishing anon_vma-spanning KSM pages.  In this
> > case, a copy of the page is created instead, just like during a COW
> > break.
> > 
> > These freshly made copies are known to be exclusive to the faulting
> > VMA and there is no reason to go look for this page in parent and
> > sibling processes during rmap operations.
> > 
> > Use page_add_new_anon_rmap() for these copies.  This also puts them on
> > the proper LRU lists and marks them SwapBacked, so we can get rid of
> > doing this ad-hoc in the KSM copy code.
> 
> Is it just a code cleanup, or does it also reduce rmap overhead?

Both.
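
To unpack "both": the cleanup is that page_add_new_anon_rmap() already
sets SwapBacked and puts the page on the proper LRU list, so the ad-hoc
version of that in ksm_does_need_to_copy() can be deleted; the overhead
reduction is that the fresh copy is attached to the faulting VMA's own
anon_vma rather than to one spanning parent and sibling processes, so
later rmap walks like page_referenced() and try_to_unmap() have fewer
VMAs to check.  A minimal userspace model of the second effect (names
and numbers purely illustrative, not kernel code):

/*
 * Model of an rmap walk: finding all mappings of a page means visiting
 * every VMA attached to the page's anon_vma.  An ex-KSM copy installed
 * with page_add_new_anon_rmap() hangs off the faulting VMA's own
 * anon_vma, so the walk stays local to that process.
 */
#include <stdio.h>

struct vma { const char *owner; };

/* anon_vma spanning a fork family: parent plus two children */
static struct vma spanning_anon_vma[] = {
	{ "parent" }, { "child-1" }, { "child-2" },
};

/* the faulting VMA's own anon_vma: one VMA attached */
static struct vma own_anon_vma[] = {
	{ "faulting task" },
};

/* stand-in for a walk like page_referenced()/try_to_unmap() */
static int rmap_walk(const struct vma *vmas, int nr)
{
	int visited;

	for (visited = 0; visited < nr; visited++)
		printf("  checking page tables of %s\n", vmas[visited].owner);
	return visited;
}

int main(void)
{
	printf("page on spanning anon_vma:\n");
	printf("  -> %d VMAs visited\n", rmap_walk(spanning_anon_vma, 3));

	printf("ex-KSM copy after page_add_new_anon_rmap():\n");
	printf("  -> %d VMA visited\n", rmap_walk(own_anon_vma, 1));
	return 0;
}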


Re: [patch 7/7] mm: reduce rmap overhead for ex-KSM page copies created on swap faults

2012-12-18 Thread Simon Jeons
On Mon, 2012-12-17 at 13:12 -0500, Johannes Weiner wrote:
> When ex-KSM pages are faulted from swap cache, the fault handler is
> not capable of re-establishing anon_vma-spanning KSM pages.  In this
> case, a copy of the page is created instead, just like during a COW
> break.
> 
> These freshly made copies are known to be exclusive to the faulting
> VMA and there is no reason to go look for this page in parent and
> sibling processes during rmap operations.
> 
> Use page_add_new_anon_rmap() for these copies.  This also puts them on
> the proper LRU lists and marks them SwapBacked, so we can get rid of
> doing this ad-hoc in the KSM copy code.

Is it just a code cleanup, or does it also reduce rmap overhead?

> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Reviewed-by: Rik van Riel <riel@redhat.com>
> ---
>  mm/ksm.c    | 6 ------
>  mm/memory.c | 5 ++++-
>  2 files changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 382d930..7275c74 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1590,13 +1590,7 @@ struct page *ksm_does_need_to_copy(struct page *page,
>  
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
> -		SetPageSwapBacked(new_page);
>  		__set_page_locked(new_page);
> -
> -		if (!mlocked_vma_newpage(vma, new_page))
> -			lru_cache_add_lru(new_page, LRU_ACTIVE_ANON);
> -		else
> -			add_page_to_unevictable_list(new_page);
>  	}
>  
>  	return new_page;
> diff --git a/mm/memory.c b/mm/memory.c
> index db2e9e7..7e17eb0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3020,7 +3020,10 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	}
>  	flush_icache_page(vma, page);
>  	set_pte_at(mm, address, page_table, pte);
> -	do_page_add_anon_rmap(page, vma, address, exclusive);
> +	if (swapcache) /* ksm created a completely new copy */
> +		page_add_new_anon_rmap(page, vma, address);
> +	else
> +		do_page_add_anon_rmap(page, vma, address, exclusive);
>  	/* It's better to call commit-charge after rmap is established */
>  	mem_cgroup_commit_charge_swapin(page, ptr);
>  

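The "if (swapcache)" test in the hunk above fires exactly for the
ex-KSM copies the changelog describes: earlier in do_swap_page(), the
local swapcache pointer starts out NULL and is set to the original
swap-cache page only when KSM hands back a private copy.  A toy model
of that control flow (types and stubs fabricated for illustration; the
real logic lives in mm/memory.c):

/*
 * Toy userspace model of the do_swap_page() flow around the hunk
 * above (simplified, not kernel source): swapcache is non-NULL exactly
 * when the KSM path produced a fresh, still-unmapped copy, so only
 * those pages take the page_add_new_anon_rmap() path.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct page { bool is_exksm; };

/* stand-in for ksm_might_need_to_copy(): does this fault need a copy? */
static bool might_need_copy(const struct page *page)
{
	return page->is_exksm;
}

/* stand-in for ksm_does_need_to_copy(): returns a fresh private copy */
static struct page *make_copy(void)
{
	static struct page copy;
	return &copy;
}

static void swap_fault(struct page *page)
{
	struct page *swapcache = NULL;

	if (might_need_copy(page)) {
		swapcache = page;	/* remember the original */
		page = make_copy();
	}

	if (swapcache)	/* ksm created a completely new copy */
		printf("page_add_new_anon_rmap(): new, exclusive page\n");
	else
		printf("do_page_add_anon_rmap(): page may be shared\n");
}

int main(void)
{
	struct page plain = { .is_exksm = false };
	struct page exksm = { .is_exksm = true };

	swap_fault(&plain);	/* old path, mapcount incremented */
	swap_fault(&exksm);	/* new path for the fresh copy */
	return 0;
}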



Re: [patch 7/7] mm: reduce rmap overhead for ex-KSM page copies created on swap faults

2012-12-17 Thread Hugh Dickins
On Mon, 17 Dec 2012, Johannes Weiner wrote:

> When ex-KSM pages are faulted from swap cache, the fault handler is
> not capable of re-establishing anon_vma-spanning KSM pages.  In this
> case, a copy of the page is created instead, just like during a COW
> break.
> 
> These freshly made copies are known to be exclusive to the faulting
> VMA and there is no reason to go look for this page in parent and
> sibling processes during rmap operations.
> 
> Use page_add_new_anon_rmap() for these copies.  This also puts them on
> the proper LRU lists and marks them SwapBacked, so we can get rid of
> doing this ad-hoc in the KSM copy code.
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Reviewed-by: Rik van Riel <riel@redhat.com>

Yes, that's good, thanks Hannes:
Acked-by: Hugh Dickins <hughd@google.com>

> ---
>  mm/ksm.c    | 6 ------
>  mm/memory.c | 5 ++++-
>  2 files changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 382d930..7275c74 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1590,13 +1590,7 @@ struct page *ksm_does_need_to_copy(struct page *page,
>  
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
> -		SetPageSwapBacked(new_page);
>  		__set_page_locked(new_page);
> -
> -		if (!mlocked_vma_newpage(vma, new_page))
> -			lru_cache_add_lru(new_page, LRU_ACTIVE_ANON);
> -		else
> -			add_page_to_unevictable_list(new_page);
>  	}
>  
>  	return new_page;
> diff --git a/mm/memory.c b/mm/memory.c
> index db2e9e7..7e17eb0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3020,7 +3020,10 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	}
>  	flush_icache_page(vma, page);
>  	set_pte_at(mm, address, page_table, pte);
> -	do_page_add_anon_rmap(page, vma, address, exclusive);
> +	if (swapcache) /* ksm created a completely new copy */
> +		page_add_new_anon_rmap(page, vma, address);
> +	else
> +		do_page_add_anon_rmap(page, vma, address, exclusive);
>  	/* It's better to call commit-charge after rmap is established */
>  	mem_cgroup_commit_charge_swapin(page, ptr);
>  
> -- 
> 1.7.11.7
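
A note on why page_add_new_anon_rmap() is safe here: it may only be
used on a page that nobody maps yet, because it initializes the
mapcount instead of incrementing it; the fresh, still-locked KSM copy
satisfies that by construction.  A simplified model of the contract
(not the kernel implementation, which uses a biased atomic _mapcount
starting at -1):

/*
 * Contract difference between the two rmap calls, as a toy model:
 * the "new" variant asserts the page is unmapped and sets the count,
 * the plain variant only bumps an existing count.
 */
#include <assert.h>
#include <stdio.h>

struct page { int mapcount; };	/* 0 here means "not mapped" */

static void page_add_new_anon_rmap(struct page *page)
{
	assert(page->mapcount == 0);	/* must not be mapped anywhere */
	page->mapcount = 1;		/* first and only mapping */
}

static void do_page_add_anon_rmap(struct page *page)
{
	page->mapcount++;		/* may already be mapped elsewhere */
}

int main(void)
{
	struct page fresh_copy = { 0 };
	struct page shared = { 2 };	/* mapped by two processes */

	page_add_new_anon_rmap(&fresh_copy);
	do_page_add_anon_rmap(&shared);
	printf("copy: %d, shared: %d\n", fresh_copy.mapcount, shared.mapcount);
	return 0;
}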

