Popple
CC: Alexey Kardashevskiy
CC: Mark Hairgrove
CC: Balbir Singh
CC: David Gibson
CC: Andrea Arcangeli
CC: Jerome Glisse
CC: Jason Wang
CC: linuxppc-dev@lists.ozlabs.org
CC: linux-ker...@vger.kernel.org
Signed-off-by: Peter Xu
---
arch/powerpc/platforms/powernv/npu-dma.c | 10
r_warn("HugeTLB: hugepagesz %s specified twice, ignoring\n",
> s);
> return 0;
> }
>
> + parsed_valid_hugepagesz = true;
> hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
> return 1;
> }
> __setup("hugepagesz=", hugepagesz_setup);
>
> +/*
> + * default_hugepagesz command line input
> + * Only one instance of default_hugepagesz allowed on command line. Do not
> + * add hstate here as that will confuse hugepagesz/hugepages processing.
> + */
> static int __init default_hugepagesz_setup(char *s)
> {
> unsigned long size;
>
> + if (!hugepages_supported()) {
> + pr_warn("HugeTLB: huge pages not supported, ignoring default_hugepagesz = %s\n", s);
> + return 0;
> + }
> +
> size = (unsigned long)memparse(s, NULL);
>
> if (!arch_hugetlb_valid_size(size)) {
> @@ -3349,6 +3400,11 @@ static int __init default_hugepagesz_setup(char *s)
> return 0;
> }
>
> + if (default_hstate_size) {
> + pr_err("HugeTLB: default_hugepagesz previously specified, ignoring %s\n", s);
> + return 0;
> + }
Nitpick: ideally this can be moved before memparse().
Thanks,
> +
> default_hstate_size = size;
> return 1;
> }
> --
> 2.25.1
>
>
--
Peter Xu
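The nitpick above suggests rejecting a duplicate default_hugepagesz before parsing the size at all. Below is a user-space sketch of that ordering; everything suffixed _stub, and the simplified memparse logic, are stand-ins assumed for this sketch, not the kernel's API:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the kernel's default_hstate_size. */
static unsigned long default_hstate_size;

/* Simplified stand-in for memparse(): number with optional K/M/G suffix. */
static unsigned long memparse_stub(const char *s)
{
	char *end;
	unsigned long v = strtoul(s, &end, 0);

	switch (*end) {
	case 'G': case 'g': v <<= 30; break;
	case 'M': case 'm': v <<= 20; break;
	case 'K': case 'k': v <<= 10; break;
	}
	return v;
}

/* Stand-in arch check: accept 2M and 1G only. */
static int arch_hugetlb_valid_size_stub(unsigned long size)
{
	return size == (2UL << 20) || size == (1UL << 30);
}

/* Reordered per the nitpick: the duplicate check runs before memparse(),
 * so a second "default_hugepagesz=" is rejected without parsing it. */
static int default_hugepagesz_setup_sketch(const char *s)
{
	unsigned long size;

	if (default_hstate_size) {
		fprintf(stderr,
			"HugeTLB: default_hugepagesz previously specified, ignoring %s\n", s);
		return 0;
	}

	size = memparse_stub(s);
	if (!arch_hugetlb_valid_size_stub(size))
		return 0;

	default_hstate_size = size;
	return 1;
}
```

The first valid size wins; any later instance returns 0 immediately, matching the one-instance-only rule stated in the comment above the function.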
pecified twice, ignoring\n");
> return;
> }
Nitpick: I think the brackets need to be removed to follow the Linux
coding style. With that:
Reviewed-by: Peter Xu
--
Peter Xu
ld that be slightly
cleaner?
Thanks,
--
Peter Xu
it's not a big deal; even to capture errors, people will
mostly still look for error lines in general.
Reviewed-by: Peter Xu
--
Peter Xu
On Mon, Apr 13, 2020 at 10:59:26AM -0700, Mike Kravetz wrote:
> On 4/10/20 1:37 PM, Peter Xu wrote:
> > On Wed, Apr 01, 2020 at 11:38:19AM -0700, Mike Kravetz wrote:
> >> With all hugetlb page processing done in a single file clean up code.
> >> - Make code match desire
Use the general page fault accounting by passing regs into handle_mm_fault().
CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Peter Xu
---
arch/powerpc/mm/fault.c | 11 +++
1 file changed
Use the general page fault accounting by passing regs into handle_mm_fault().
CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
arch/powerpc/mm/fault.c | 11 +++
1 file changed, 3 insertions(+), 8
Use the new mm_fault_accounting() helper for page fault accounting.
cmo_account_page_fault() is special. Keep that.
CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
arch/powerpc/mm/fault.c | 13 -
1
Use the general page fault accounting by passing regs into handle_mm_fault().
CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
arch/powerpc/mm/fault.c | 11 +++
1 file changed, 3 insertions(+), 8
ever I don't
fully understand the commit message [1] on: How do we guarantee we're not
moving a thp pte?
--
Peter Xu
>
> move_page_tables() checks for pmd_trans_huge() and ends up calling
> move_huge_pmd if it is a THP entry.
Sorry to be unclear: what about a huge pud thp?
--
Peter Xu
On Thu, May 20, 2021 at 03:06:30PM -0400, Zi Yan wrote:
> On 20 May 2021, at 10:57, Peter Xu wrote:
>
> > On Thu, May 20, 2021 at 07:07:57PM +0530, Aneesh Kumar K.V wrote:
> >> "Aneesh Kumar K.V" writes:
> >>
> >>> On 5/20/21 6:16 PM, Peter
On Thu, May 20, 2021 at 07:07:57PM +0530, Aneesh Kumar K.V wrote:
> "Aneesh Kumar K.V" writes:
>
> > On 5/20/21 6:16 PM, Peter Xu wrote:
> >> On Thu, May 20, 2021 at 01:56:54PM +0530, Aneesh Kumar K.V wrote:
> >>>> This seems to work at lea
p_read_lock()")
> Signed-off-by: Hugh Dickins
The locking is indeed slightly complicated.. but I didn't spot anything
wrong.
Acked-by: Peter Xu
Thanks,
--
Peter Xu
if (userfaultfd_armed(vma) &&
> + !(vma->vm_flags & VM_SHARED))
> + goto recheck;
> + }
> + }
>
> - /* Huge page lock is still held, so page table must remain empty */
> - pml = pmd_lock(mm, pmd);
> - if (ptl != pml)
> - spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd);
> pmdp_get_lockless_sync();
> if (ptl != pml)
> @@ -1648,6 +1665,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> }
> if (start_pte)
> pte_unmap_unlock(start_pte, ptl);
> + if (pml && pml != ptl)
> + spin_unlock(pml);
> if (notified)
> mmu_notifier_invalidate_range_end();
> drop_hpage:
> --
> 2.35.3
--
Peter Xu
> > -*/
> > - mmap_read_lock(mm);
> > - goto out_gmap;
> > + if (gmap) {
> > + mmap_read_lock(mm);
> > + goto out_gmap;
> > + }
> > + goto out;
>
> Yes, that makes sense. With that
>
> Acked-by: Christian Borntraeger
Looks sane, thanks Heiko, Christian. I'll cook another one.
--
Peter Xu
Morton , linuxppc-dev@lists.ozlabs.org,
>"David S . Miller"
Errors-To: linuxppc-dev-bounces+archive=mail-archive@lists.ozlabs.org
Sender: "Linuxppc-dev"
On Mon, May 30, 2022 at 11:52:54AM -0400, Peter Xu wrote:
> On Mon, May 30, 2022 at 11:35:10AM +0200, Christian Borntraeger wr
Peter Zijlstra (Intel)
Acked-by: Johannes Weiner
Acked-by: Vineet Gupta
Acked-by: Guo Ren
Acked-by: Max Filippov
Acked-by: Christian Borntraeger
Acked-by: Michael Ellerman (powerpc)
Acked-by: Catalin Marinas
Reviewed-by: Alistair Popple
Reviewed-by: Ingo Molnar
Signed-off-by: Peter Xu
---
On Mon, May 30, 2022 at 07:03:31PM +0200, Heiko Carstens wrote:
> On Mon, May 30, 2022 at 12:00:52PM -0400, Peter Xu wrote:
> > On
sm() because they do
not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
them as-is.
Signed-off-by: Peter Xu
---
v3:
- Rebase to akpm/mm-unstable
- Copy arch maintainers
---
arch/alpha/mm/fault.c | 4
arch/arc/mm/fault.c | 4
arch/arm/mm/fa
ewed-by: Ingo Molnar
Signed-off-by: Peter Xu
---
v4:
- Picked up a-bs and r-bs
- Fix grammar in the comment of faultin_page() [Ingo]
- Fix s390 for gmap since gmap needs the mmap lock [Heiko]
v3:
- Rebase to akpm/mm-unstable
- Copy arch maintainers
---
arch/alpha/mm/fault.c | 4
Hi, Heiko,
On Fri, May 27, 2022 at 02:23:42PM +0200, Heiko Carstens wrote:
> On Tue, May 24, 2022 at 07:45:31PM -0400, Peter Xu wrote:
> > I observed that for each of the shared fil
On Fri, May 27, 2022 at 12:46:31PM +0200, Ingo Molnar wrote:
>
> * Peter Xu wrote:
>
> > This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> > pr
tectures can provide their own version. */
> +__weak unsigned long hugetlb_mask_last_page(struct hstate *h)
> +{
> + return ~(0UL);
I'm wondering whether it's better to return 0 rather than ~0 by default.
Could an arch with !CONFIG_ARCH_WANT_GENERAL_HUGETLB wrongly skip some
valid address ranges with ~0, or perhaps I misread?
Thanks,
--
Peter Xu
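On the mask's use: in the hugetlb walkers the returned mask is OR-ed into the address to fast-forward past a missing page-table page (addr |= last_addr_mask, then the loop advances by the huge page size). Here is a toy sketch of that arithmetic, using x86-style 2M/1G sizes purely as assumptions, showing why a ~0UL default fast-forwards past any valid end while 0 merely degrades to no skipping:

```c
#include <assert.h>

#define MB		(1UL << 20)
#define HUGE_SZ		(2 * MB)	/* 2M huge pages (assumed) */
#define PUD_SIZE	(1UL << 30)

/*
 * Model of the walker's fast-forward when huge_pte_offset() finds no
 * page-table page:  addr |= last_addr_mask;  then the loop steps addr += sz.
 */
static unsigned long skip_to_next(unsigned long addr, unsigned long mask,
				  unsigned long sz)
{
	return (addr | mask) + sz;
}
```

With the arch mask, a hole at 2M jumps straight to the next 1G region; with a 0 default nothing is skipped (just the normal single step); with ~0UL the address is pushed to the very top of the address space, i.e. every remaining valid address would be skipped, which is the concern raised above.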
if it's not really anything urgently
needed. I assume that won't need to block this patchset since we need the
pteval for pte_dirty() check anyway and uffd-wp definitely needs it too.
Thanks,
--
Peter Xu
ta
> won't be written back to swap storage as it is considered uptodate,
> resulting in data loss if the page is subsequently accessed.
>
> Prevent this by copying the dirty bit to the page when removing the pte
> to match what try_to_migrate_one() does.
>
> Signed-off-by: A
also based on this "cpages", not "npages":
if (args->cpages)
migrate_vma_unmap(args);
So I never figured out how this code really works. It'll be great if you
could shed some light on it.
Thanks,
--
Peter Xu
On Wed, Aug 24, 2022 at 04:25:44PM -0400, Peter Xu wrote:
> On Wed, Aug 24, 2022 at 11:56:25AM +1000, Alistair Popple wrote:
> > >> Still I don't know whether there'll be any side effect of having stale tlbs
> > >> in !present ptes because I'm n
be changed if explicitly did so (e.g. fork() plus
mremap() for anonymous here) but I just want to make sure I get the whole
point of it.
Thanks,
--
Peter Xu
On Thu, Aug 25, 2022 at 10:42:41AM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > On Wed, Aug 24, 2022 at 04:25:44PM -0400, Peter Xu wrote:
> >> On Wed, Aug 24, 2022 at 11:56:25AM +1000, Alistair Popple wrote:
> >> > >> Still I don't know whe
s the magic bit, we have to make sure that we won't see new
> GUP pins, thus the TLB flush.
>
> include/linux/mm.h:gup_must_unshare() contains documentation.
Hmm.. Shouldn't ptep_get_and_clear() (e.g., xchg() on x86_64) already
guarantees that no other process/thread will see this pte anymore
afterwards?
--
Peter Xu
On Fri, Aug 26, 2022 at 11:02:58AM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > On Fri, Aug 26, 2022 at 08:21:44AM +1000, Alistair Popple wrote:
> >>
> >> Peter Xu writes:
> >>
> >> > On Wed, Aug 24, 2022 at 01:03:38PM +1000,
On Fri, Aug 26, 2022 at 06:46:02PM +0200, David Hildenbrand wrote:
> On 26.08.22 17:55, Peter Xu wrote:
> > On Fri, Aug 26, 2022 at 04:47:22PM +0200, David Hildenbrand wrote:
> >>> To me anon exclusive only shows this mm exclusively owns this page. I
> >>&
ct(), there's a strong barrier of not allowing further write
after mprotect() returns.
Still I don't know whether there'll be any side effect of having stall tlbs
in !present ptes because I'm not familiar enough with the private dev swap
migration code. But I think having them will be safe, even if redundant.
Thanks,
--
Peter Xu
)
> vm_fault_t ret = 0;
> void *shadow = NULL;
>
> + if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> + ret = VM_FAULT_RETRY;
> + goto out;
> + }
> +
We may want to fail early similarly for handle_userfault() too, for a
similar reason. Thanks,
--
Peter Xu
On Tue, Sep 06, 2022 at 01:08:10PM -0700, Suren Baghdasaryan wrote:
> On Tue, Sep 6, 2022 at 12:39 PM Peter Xu wrote:
> >
> > On Thu, Sep 01, 2022 at 10:35:07AM -0700, Suren Baghdasaryan wrote:
> > > Due to the possibility of do_swap_page dropping mmap_lock, abort fault
diff after rebase, though.. I'm not sure how
the ordering would be at last, but anyway I think this patch stands as its
own too..
Acked-by: Peter Xu
Thanks for tolerating my nitpicking,
>
> ---
>
> New for v4
> ---
> mm/migrate_device.c | 2 +-
> 1 file changed
try
> after madvise returns. Fix this by flushing the TLB while holding the
> PTL.
>
> Signed-off-by: Alistair Popple
> Reported-by: Nadav Amit
> Reviewed-by: "Huang, Ying"
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Cc: sta...@vger.kernel.org
Acked-by: Peter Xu
--
Peter Xu
mar K.V
> Signed-off-by: Yang Shi
Acked-by: Peter Xu
--
Peter Xu
(or have
> swap-cache allocated to it, but I'm hoping to at least get that fixed).
If so I'd suggest even more straightforward documentation for either this
trylock() or the APIs (e.g. migrate_vma_setup()). This behavior is IMHO
deeply hidden and many people may not realize it. I'll comment on the
comment update patch.
Thanks.
--
Peter Xu
On Fri, Aug 26, 2022 at 08:21:44AM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > On Wed, Aug 24, 2022 at 01:03:38PM +1000, Alistair Popple wrote:
> >> migrate_vma_setup() has a fast path in migrate_vma_collect_pmd() that
> >> installs migratio
because the worst case
is that the caller will fetch a wrong page, but then it should be invalidated
very soon via mmu notifiers. One thing worth mentioning is that pmd unshare
should never free a pgtable page.
IIUC it's also the same as fast-gup - afaiu we don't take the vma read lock
in fast-gup either, but I also think it's safe. But I hope I didn't miss
something.
--
Peter Xu
f-work on Mon & Tue,
but maybe I'll still try).
--
Peter Xu
migration_entry_wait_huge(pte, ptl);
> + goto retry;
> + }
> + /*
> + * hwpoisoned entry is treated as no_page_table in
> + * follow_page_mask().
> + */
> + }
> +out:
> + spin_unlock(ptl);
> + return page;
> +}
--
Peter Xu
On Wed, Oct 26, 2022 at 05:34:04PM -0700, Mike Kravetz wrote:
> On 10/26/22 17:59, Peter Xu wrote:
> > Hi, Mike,
> >
> > On Sun, Sep 18, 2022 at 07:13:48PM -0700, Mike Kravetz wrote:
> > > +struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> >
On Fri, Oct 28, 2022 at 08:27:57AM -0700, Mike Kravetz wrote:
> On 10/27/22 15:34, Peter Xu wrote:
> > On Wed, Oct 26, 2022 at 05:34:04PM -0700, Mike Kravetz wrote:
> > > On 10/26/22 17:59, Peter Xu wrote:
> >
> > If we want to use the vma read lock to pro
On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
> >> > @@ -193,11 +194,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >> >
rch_leave_lazy_mmu_mode();
pte_unmap_unlock();
I may be missing something, but even if not, it already doesn't look pretty.
Thanks,
--
Peter Xu
than using per-pte
ptep_clear_flush(). It may enlarge the race window but fundamentally
(iiuc) they're the same thing here as long as there's no atomic way to both
"clear pte and flush tlb".
[1] https://lore.kernel.org/lkml/e37036e0-566e-40c7-ad15-720cdb003...@gmail.com/
--
Peter Xu
p_get_and_clear() afaiu but keep "pte"
updated.
Thanks,
--
Peter Xu
_range_single().
> - Remove zap_page_range.
>
> [1]
> https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.krav...@oracle.com/
> Suggested-by: Peter Xu
> Signed-off-by: Mike Kravetz
Acked-by: Peter Xu
--
Peter Xu
t;
> [1]
> https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.krav...@oracle.com/
> Suggested-by: Peter Xu
> Signed-off-by: Mike Kravetz
Acked-by: Peter Xu
Thanks!
--
Peter Xu
t; + page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> + pte_free_defer(mm, pmd_pgtable(pgt_pmd));
> }
> - i_mmap_unlock_write(mapping);
> - return target_result;
> + i_mmap_unlock_read(mapping);
> }
>
> /**
> @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>
> /*
>* Remove pte page tables, so we can re-fault the page as huge.
> + * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
>*/
> - result = retract_page_tables(mapping, start, mm, addr, hpage,
> - cc);
> + retract_page_tables(mapping, start);
> + if (cc && !cc->is_khugepaged)
> + result = SCAN_PTE_MAPPED_HUGEPAGE;
> unlock_page(hpage);
>
> /*
> --
> 2.35.3
>
--
Peter Xu
> detail in responses to you there - thanks for your patience :)
Not a problem at all here!
>
> On Mon, 29 May 2023, Peter Xu wrote:
> > On Sun, May 28, 2023 at 11:25:15PM -0700, Hugh Dickins wrote:
> ...
> > > @@ -1748,123 +1747,73 @@ static void
> > &g
d in pgtable_pte_page_dtor(),
in Hugh's series IIUC we need the spinlock to stay around for the rcu section
alongside the page itself. So even to do that we'd also need to rcu-call
pgtable_pte_page_dtor() when needed.
--
Peter Xu
or either cpu or iommu hardware.
However OTOH, maybe it'll also be safer to just have the mmu notifiers like
before (e.g., no idea whether anything can cache invalid tlb
translations from the empty pgtable)? That doesn't seem to defeat the
purpose of the patchset, as notifiers shouldn't fail.
>
> (FWIW, last I looked, there also seemed to be some other issues with
> MMU notifier usage wrt IOMMUv2, see the thread
> <https://lore.kernel.org/linux-mm/yzbaf9hw1%2frek...@nvidia.com/>.)
>
>
> > + if (ptl != pml)
> > + spin_unlock(ptl);
> > + spin_unlock(pml);
> > +
> > + mm_dec_nr_ptes(mm);
> > + page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> > + pte_free_defer(mm, pmd_pgtable(pgt_pmd));
> > }
> > - i_mmap_unlock_write(mapping);
> > - return target_result;
> > + i_mmap_unlock_read(mapping);
> > }
> >
> > /**
> > @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> >
> > /*
> > * Remove pte page tables, so we can re-fault the page as huge.
> > +* If MADV_COLLAPSE, adjust result to call
> > collapse_pte_mapped_thp().
> > */
> > - result = retract_page_tables(mapping, start, mm, addr, hpage,
> > -cc);
> > + retract_page_tables(mapping, start);
> > + if (cc && !cc->is_khugepaged)
> > + result = SCAN_PTE_MAPPED_HUGEPAGE;
> > unlock_page(hpage);
> >
> > /*
> > --
> > 2.35.3
> >
>
--
Peter Xu
+#else
> + spinlock_t ptl;
> +#endif
> + };
> + unsigned int __page_type;
> + atomic_t _refcount;
> +#ifdef CONFIG_MEMCG
> + unsigned long pt_memcg_data;
> +#endif
> +};
--
Peter Xu
On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:13PM +0800, pet...@redhat.com wrote:
> > From: Peter Xu
> >
> > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD. It
> > can be a helpful helper i
gt; > pud = READ_ONCE(*pudp);
> > - if (pud_none(pud))
> > + if (pud_none(pud) || !pud_present(pud))
> > return no_page_table(vma, flags, address);
>
> Isn't 'pud_none() || !pud_present()' redundent? A none pud is
> non-present, by definition?
Hmm yes, seems redundant. Let me drop it.
>
> > - if (pud_devmap(pud)) {
> > + if (pud_huge(pud)) {
> > ptl = pud_lock(mm, pudp);
> > - page = follow_devmap_pud(vma, address, pudp, flags,
> > >pgmap);
> > + page = follow_huge_pud(vma, address, pudp, flags, ctx);
> > spin_unlock(ptl);
> > if (page)
> > return page;
>
> Otherwise it looks OK to me
>
> Reviewed-by: Jason Gunthorpe
Thanks!
--
Peter Xu
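The redundancy noted above can be checked with a toy model; the struct and the _sketch predicates below are stand-ins for illustration, not the kernel's definitions. If pud_none() implies !pud_present(), then "pud_none(pud) || !pud_present(pud)" collapses to "!pud_present(pud)":

```c
#include <assert.h>
#include <stdbool.h>

/* Toy pud: "none" means no entry installed at all; only an installed,
 * valid entry is "present" -- so none implies !present by construction. */
struct toy_pud { bool installed; bool valid; };

static bool pud_none_sketch(struct toy_pud p)    { return !p.installed; }
static bool pud_present_sketch(struct toy_pud p) { return p.installed && p.valid; }

/* The check as originally written ... */
static bool check_with_redundancy(struct toy_pud p)
{
	return pud_none_sketch(p) || !pud_present_sketch(p);
}

/* ... and after dropping the redundant pud_none() disjunct. */
static bool check_simplified(struct toy_pud p)
{
	return !pud_present_sketch(p);
}
```

The two checks agree on all three entry states (empty, installed-but-invalid, installed-and-valid), so dropping the first disjunct changes nothing.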
function in this series? When
> does this re-use happen??
It's reused in patch 12 ("mm/gup: Handle hugepd for follow_page()").
Thanks,
--
Peter Xu
On Wed, Feb 21, 2024 at 08:57:53AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> > On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Wed, Jan 03, 2024 at 05:14:13PM +0800, pet...@redhat.com wrote:
>
On Tue, Dec 19, 2023 at 11:28:54AM -0500, James Houghton wrote:
> On Tue, Dec 19, 2023 at 2:57 AM wrote:
> >
> > From: Peter Xu
> >
> > Introduce "pud_t pud" in the function, so the code won't dereference *pudp
> > multiple time. Not only becaus
Copy Muchun, which I forgot since the start, sorry.
--
Peter Xu
/asm/pgtable.h:#define pmd_thp_or_huge(pmd) (pmd_huge(pmd)
|| pmd_trans_huge(pmd))
So far this series only touches generic code. Would you mind if I keep this
patch as-is, and leave the renaming for later?
>
> BTW, please cc me via the new email (muchun.s...@linux.dev) next edition.
Sure. Thanks for taking a look.
--
Peter Xu
On Mon, Dec 25, 2023 at 02:34:48PM +0800, Muchun Song wrote:
> Reviewed-by: Muchun Song
You're using the old email address here. Do you want me to also use the
linux.dev one that you suggested I use?
--
Peter Xu
ut I can overlook important
things here.
It'll be definitely great if hugepd can be merged into some existing forms
like a generic pgtable (IMHO cont_* is such a case: it's the same as no
cont_* entries for softwares, while hardware can accelerate with TLB hits
on larger ranges). But I can be asking a very silly question here too, as
I can overlook very important things.
Thanks,
--
Peter Xu
epd_t hugepd, unsigned long addr,
unsigned int pdshift, unsigned long end, unsigned int flags,
struct page **pages, int *nr)
--
Peter Xu
rt for gup on large folios, and whether there's any performance number
to share. It's definitely good news to me because it means Ryan's work can
then also benefit hugetlb if this series is merged; I just don't know
how much difference there will be.
Thanks,
--
Peter Xu
he above
series.
It's a matter of whether one follow_page_mask() call can fetch more than
one page* for a cont_pte entry on aarch64 for a large non-hugetlb folio
(and if this series lands, it'll be the same for hugetlb and non-hugetlb).
Currently the code can only fetch one page, I think.
Thanks,
--
Peter Xu
err = walk_hugetlb_range(start, end, walk);
} else
err = walk_pgd_range(start, end, walk);
It means to me that as long as the vma is hugetlb, it won't trigger any code in
walk_pgd_range(), only walk_hugetlb_range(). Do you perhaps mean
hugepd is used outside hugetlbfs?
Thanks,
--
Peter Xu
ed if gup is not yet touched
from your side, afaict. I'll see whether I can provide some rough numbers
instead in the next post (I'll probably only be able to test it in a VM,
though, but hopefully that should still reflect mostly the truth).
--
Peter Xu
up, it might be relatively easy
when comparing to the rest. I'm still hesitating for the long term plan.
Please let me know if you have any thoughts on any of above.
Thanks!
--
Peter Xu
so we
actually have three users indeed, if not counting potential future archs
adding support to also get that same tlb benefit.
Thanks,
--
Peter Xu
On Fri, Nov 24, 2023 at 11:07:51AM -0500, Peter Xu wrote:
> On Fri, Nov 24, 2023 at 09:06:01AM +, Ryan Roberts wrote:
> > I don't have any micro-benchmarks for GUP though, if that's your question.
> > Is
> > there an easy-to-use test I can run to get some numbers? I'd
On Mon, Jan 15, 2024 at 01:37:37PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:11PM +0800, pet...@redhat.com wrote:
> > From: Peter Xu
> >
> > Introduce a config option that will be selected as long as huge leaves are
> > involved in pgtable (t
to hugepd.
Drop that check, not only because it'll never be true for hugepd, but also
it paves way for reusing the function outside fast-gup.
Cc: Lorenzo Stoakes
Cc: Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
mm/gup.c | 5 -
1 file changed, 5 deletion
On Mon, Nov 20, 2023 at 12:26:24AM -0800, Christoph Hellwig wrote:
> On Wed, Nov 15, 2023 at 08:29:02PM -0500, Peter Xu wrote:
> > Hugepd format is only used in PowerPC with hugetlbfs. In commit
> > a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
> >
On Wed, Nov 22, 2023 at 12:00:24AM -0800, Christoph Hellwig wrote:
> On Tue, Nov 21, 2023 at 10:59:35AM -0500, Peter Xu wrote:
> > > What prevents us from ever using hugepd with file mappings? I think
> > > it would naturally fit in with how large folios for the pagecache
ition my next step; it seems at least I should not
add any more hugepd code. Should I then go with ARCH_HAS_HUGEPD checks,
or are you going to have an RFC soon that I can base on top of?
Thanks,
--
Peter Xu
On Thu, Apr 11, 2024 at 06:55:44PM +0200, Paolo Bonzini wrote:
> On Mon, Apr 8, 2024 at 3:56 PM Peter Xu wrote:
> > Paolo,
> >
> > I may miss a bunch of details here (as I still remember some change_pte
> > patches previously on the list..), however not sure wheth
On Tue, Apr 09, 2024 at 08:43:55PM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 05, 2024 at 05:42:44PM -0400, Peter Xu wrote:
> > In short, hugetlb mappings shouldn't be special comparing to other huge pXd
> > and large folio (cont-pXd) mappings for most of the walkers in my mind,
On Wed, Apr 10, 2024 at 04:30:41PM +, Christophe Leroy wrote:
>
>
> Le 10/04/2024 à 17:28, Peter Xu a écrit :
> > On Tue, Apr 09, 2024 at 08:43:55PM -0300, Jason Gunthorpe wrote:
> >> On Fri, Apr 05, 2024 at 05:42:44PM -0400, Peter Xu wrote:
> >>>
On Fri, Apr 12, 2024 at 02:08:03PM +, Christophe Leroy wrote:
>
>
> Le 11/04/2024 à 18:15, Peter Xu a écrit :
> > On Mon, Mar 25, 2024 at 01:38:40PM -0300, Jason Gunthorpe wrote:
> >> On Mon, Mar 25, 2024 at 03:55:53PM +0100, Christophe Leroy wrote:
> >>>
On Tue, Apr 16, 2024 at 10:58:33AM +, Christophe Leroy wrote:
>
>
> Le 15/04/2024 à 21:12, Christophe Leroy a écrit :
> >
> >
> > Le 12/04/2024 à 16:30, Peter Xu a écrit :
> >> On Fri, Apr 12, 2024 at 02:08:03PM +, Christophe Leroy wrote:
> >&
On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote:
> On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote:
> > On 02.04.24 14:55, David Hildenbrand wrote:
> > > Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename
gup_hugepte() -> gup_fast_hugepte()
>
> I just realized that we end up calling these from follow_hugepd() as well.
> And something seems to be off, because gup_fast_hugepd() won't have the VMA
> even in the slow-GUP case to pass it to gup_must_unshare().
>
> So these are GUP-fast fu
fix on
hugepd putting this aside.
I hope that before the end of this year, whatever I'll fix can go away, by
removing hugepd completely from Linux. For now that may or may not be as
smooth, so we'd better still fix it.
--
Peter Xu
On Fri, Apr 26, 2024 at 07:28:31PM +0200, David Hildenbrand wrote:
> On 26.04.24 18:12, Peter Xu wrote:
> > On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote:
> > > On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote:
> > > > On 02.04.24
2083d721d7 ("mm/gup: handle hugepd for follow_page()")
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
v1: https://lore.kernel.org/r/20240428190151.201002-1-pet...@redhat.com
This is v2 and dropped the 2nd test patch as a better one can come later,
this patch alone is k
2083d721d7 ("mm/gup: handle hugepd for follow_page()")
Signed-off-by: Peter Xu
---
Note: The target commit to be fixed should just been moved into mm-stable,
so no need to cc stable.
---
mm/gup.c | 64 ++--
1 file changed, 39 inserti
at least to
cover the unshare care for R/O longterm pins, in which case the first R/O
GUP attempt will fault in the page R/O first, then the 2nd will go through
the unshare path, checking whether an unshare is needed.
Cc: David Hildenbrand
Signed-off-by: Peter Xu
---
tools/testing/selftests/mm
16MB huge page.
Thanks,
[1] https://lore.kernel.org/r/20240327152332.950956-1-pet...@redhat.com
Peter Xu (2):
mm/gup: Fix hugepd handling in hugetlb rework
mm/selftests: Don't prefault in gup_longterm tests
mm/gup.c | 64 ++-
tools/testing
are? IIUC it used to be not
> > touched because of pte_write() always returns true with a write prefault.
> >
> > Then we let patch 1 go through first, and drop this one?
>
> Whatever you prefer!
Thanks!
Andrew, would you consider taking patch 1 but ignore this patch 2? Or do
you prefer me to resend?
--
Peter Xu
AULT_SET_HINDEX(hstate_index(h));
> goto out_mutex;
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index d2155ced45f8..29a833b996ae 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3910,7 +3910,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>
> /* Higher priority than uffd-wp when data corrupted */
> if (marker & PTE_MARKER_POISONED)
> - return VM_FAULT_HWPOISON;
> + return VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_SIM;
>
> if (pte_marker_entry_uffd_wp(entry))
> return pte_marker_handle_uffd_wp(vmf);
> --
> 2.45.0.118.g7fe29c98d7-goog
>
--
Peter Xu
utually
> exclusive).
>
> Reviewed-by: John Hubbard
> Signed-off-by: Axel Rasmussen
Acked-by: Peter Xu
One nitpick below.
> ---
> arch/parisc/mm/fault.c | 7 +--
> arch/powerpc/mm/fault.c | 6 --
> arch/x86/mm/fault.c | 6 --
> include/linux/mm_t
On Tue, May 14, 2024 at 10:26:49PM +0200, Oscar Salvador wrote:
> On Fri, May 10, 2024 at 03:29:48PM -0400, Peter Xu wrote:
> > IMHO we shouldn't mention that detail, but only state the effect which is
> > to not report the event to syslog.
> >
> > There's no hard r
On Mon, Apr 29, 2024 at 09:28:15AM +0200, David Hildenbrand wrote:
> On 28.04.24 21:01, Peter Xu wrote:
> > Prefault, especially with RW, makes the GUP test too easy, and may not yet
> > reach the core of the test.
> >
> > For example, R/O longterm pins will
On Thu, Mar 07, 2024 at 02:12:33PM -0400, Jason Gunthorpe wrote:
> On Wed, Mar 06, 2024 at 06:41:35PM +0800, pet...@redhat.com wrote:
> > From: Peter Xu
> >
> > Swap pud entries do not always return true for pud_huge() for all archs.
> > x86 and sparc (so far) al
On Wed, Mar 06, 2024 at 11:56:56PM +1100, Michael Ellerman wrote:
> pet...@redhat.com writes:
> > From: Peter Xu
> >
> > PowerPC book3s 4K mostly has the same definition on both, except pXd_huge()
> > constantly returns 0 for hash MMUs. AFAICT that is fine to be re