On Fri, May 15, 2015 at 01:35:49PM +0200, Vlastimil Babka wrote:
> On 05/15/2015 01:21 PM, Kirill A. Shutemov wrote:
> >On Fri, May 15, 2015 at 11:15:00AM +0200, Vlastimil Babka wrote:
> >>On 04/23/2015 11:03 PM, Kirill A. Shutemov wrote:
> >>>With the new refcounting we will be able to map the same compound page
> >>>with both PTEs and PMDs. This requires adjusting the conditions under
> >>>which we can reuse the page on a write-protection fault.
> >>>
> >>>For a PTE fault we can't reuse the page if it's part of a huge page.
> >>>
> >>>For a PMD fault we can only reuse the page if nobody else maps the huge
> >>>page or any of its subpages. We could do that by checking page_mapcount()
> >>>on each subpage, but that's expensive.
> >>>
> >>>The cheaper way is to check that page_count() is equal to 1: every
> >>>mapcount takes a page reference, so this way we can guarantee that the
> >>>PMD is the only mapping.
> >>>
> >>>This approach can give a false negative if somebody has pinned the page,
> >>>but that doesn't affect correctness.
> >>>
> >>>Signed-off-by: Kirill A. Shutemov <[email protected]>
> >>>Tested-by: Sasha Levin <[email protected]>
> >>
> >>Acked-by: Vlastimil Babka <[email protected]>
> >>
> >>So couldn't the same trick be used in Patch 1 to avoid counting individual
> >>order-0 pages?
> >
> >Hm. You're right, we could. But is smaps performance-sensitive enough to
> >bother?
> 
> Well, I was nudged to optimize it when doing the shmem swap accounting
> changes there :) The user may not care about the latency of obtaining the
> smaps file contents, but since mmap_sem is held for that, the process might
> care...

Something like this?

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e04399e53965..5bc3d2b1176e 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -462,6 +462,19 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
        if (young || PageReferenced(page))
                mss->referenced += size;
 
+       /*
+        * page_count(page) == 1 guarantees the page is mapped exactly once.
+        * If any subpage of the compound page were mapped with a PTE, it
+        * would elevate page_count().
+        */
+       if (page_count(page) == 1) {
+               if (dirty || PageDirty(page))
+                       mss->private_dirty += size;
+               else
+                       mss->private_clean += size;
+               return;
+       }
+
        for (i = 0; i < nr; i++, page++) {
                int mapcount = page_mapcount(page);
 
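For completeness, the wp-fault side check described in the quoted commit
message could look roughly like the sketch below. This is just an
illustration, not the actual patch: the helper name is made up, and it
assumes the caller already has the compound head page locked so the
refcount is stable.

static bool can_reuse_pmd_mapped_page(struct page *page)
{
	VM_BUG_ON_PAGE(!PageHead(page), page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);

	/*
	 * Every mapping of the huge page or of any of its subpages takes a
	 * reference on the head page, so page_count() == 1 means this PMD
	 * is the only mapping. A pin gives a false negative here, which
	 * costs an unnecessary copy but not correctness.
	 */
	return page_count(page) == 1;
}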
-- 
 Kirill A. Shutemov