On 2/20/21 1:18 AM, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 9:32 AM Joao Martins <[email protected]> wrote:
>> Patches 6 - 8: Optimize grabbing/releasing a page refcount. Given that we
>> are working with compound pages, we do 1 increment/decrement on the head
>> page for a given set of N subpages, as opposed to N individual writes.
>> {get,pin}_user_pages_fast() for zone_device with a compound pagemap
>> consequently improves considerably, and unpin_user_pages() improves as
>> well when passed a set of consecutive pages:
>>
>>                                            before           after
>>     (get_user_pages_fast 1G;2M page size)  ~75k us  -> ~3.2k ; ~5.2k us
>>     (pin_user_pages_fast 1G;2M page size)  ~125k us -> ~3.4k ; ~5.5k us
>
> Compelling!
>
BTW, is there any reason why we don't support pin_user_pages_fast() with
FOLL_LONGTERM for device-dax?

Looking at the history, I understand that fsdax can't support it atm, but I am
not sure that the same holds for device-dax. I have this small chunk (see below
the scissors mark) which relaxes this for a pgmap of type
MEMORY_DEVICE_GENERIC, albeit I am not sure whether there is a fundamental
issue for the other types that makes this an unwelcome change.

	Joao
--------------------->8---------------------
Subject: [PATCH] mm/gup: allow FOLL_LONGTERM pin-fast for MEMORY_DEVICE_GENERIC

The downside would be one extra lookup in the dev_pagemap tree
for other pgmap->types (P2P, FSDAX, PRIVATE), but just one
per gup-fast() call.

Signed-off-by: Joao Martins <[email protected]>
---
 include/linux/mm.h |  5 +++++
 mm/gup.c           | 24 +++++++++++++-----------
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 32f0c3986d4f..c89a049bbd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1171,6 +1171,11 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
+static inline bool devmap_longterm_available(const struct dev_pagemap *pgmap)
+{
+	return pgmap->type == MEMORY_DEVICE_GENERIC;
+}
+
 /* 127: arbitrary random number, small enough to assemble well */
 #define page_ref_zero_or_close_to_overflow(page) \
 	((unsigned int) page_ref_count(page) + 127u <= 127u)
diff --git a/mm/gup.c b/mm/gup.c
index 222d1fdc5cfa..03e370d360e6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2092,14 +2092,18 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			goto pte_unmap;
 
 		if (pte_devmap(pte)) {
-			if (unlikely(flags & FOLL_LONGTERM))
-				goto pte_unmap;
-
 			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
 			if (unlikely(!pgmap)) {
 				undo_dev_pagemap(nr, nr_start, flags, pages);
 				goto pte_unmap;
 			}
+
+			if (unlikely(flags & FOLL_LONGTERM) &&
+			    !devmap_longterm_available(pgmap)) {
+				undo_dev_pagemap(nr, nr_start, flags, pages);
+				goto pte_unmap;
+			}
+
 		} else if (pte_special(pte))
 			goto pte_unmap;
@@ -2195,6 +2199,10 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 			return 0;
 		}
+
+		if (unlikely(flags & FOLL_LONGTERM) &&
+		    !devmap_longterm_available(pgmap))
+			return 0;
@@ -2356,12 +2364,9 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	if (!pmd_access_permitted(orig, flags & FOLL_WRITE))
 		return 0;
 
-	if (pmd_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
+	if (pmd_devmap(orig))
 		return __gup_device_huge_pmd(orig, pmdp, addr, end, flags,
 					     pages, nr);
-	}
 
 	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
@@ -2390,12 +2395,9 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (!pud_access_permitted(orig, flags & FOLL_WRITE))
 		return 0;
 
-	if (pud_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
+	if (pud_devmap(orig))
 		return __gup_device_huge_pud(orig, pudp, addr, end, flags,
 					     pages, nr);
-	}
 
 	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
--
2.17.1
_______________________________________________
Linux-nvdimm mailing list -- [email protected]
To unsubscribe send an email to [email protected]