On 17/08/2021 at 16:21, Fabiano Rosas wrote:
Michael Ellerman <[email protected]> writes:

Hi, I already mentioned these things in private, but I'll post here so
everyone can see:

Because pte_update() takes the sets of PTE bits to set and to clear, we can't
use our existing helpers, e.g. pte_wrprotect(), and instead have to
open-code the set of flags. We will clean that up somehow in a future
commit.

I tested the following on P9 and it seems to work fine. Not sure if it
works for CONFIG_PPC_8xx, though.


  static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
  {
        long action = (long)data;
        pte_t pte;
        spin_lock(&init_mm.page_table_lock);
-
-       /* invalidate the PTE so it's safe to modify */
-       pte = ptep_get_and_clear(&init_mm, addr, ptep);
-       flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+       pte = *ptep;

Maybe using ptep_get() is better.

        /* modify the PTE bits as desired, then apply */
        switch (action) {
@@ -59,11 +42,9 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
                break;
        }
-       set_pte_at(&init_mm, addr, ptep, pte);
+       pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);

Good, simple idea; yes, it should work without much more effort.


+       flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-       /* See ptesync comment in radix__set_pte_at() */
-       if (radix_enabled())
-               asm volatile("ptesync": : :"memory");
        spin_unlock(&init_mm.page_table_lock);
        return 0;
---

For reference, the full patch is here:
https://github.com/farosas/linux/commit/923c95c84d7081d7be9503bf5b276dd93bd17036.patch



Fixes: 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
Reported-by: Laurent Vivier <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
---

...

-       set_pte_at(&init_mm, addr, ptep, pte);
+       pte_update(&init_mm, addr, ptep, clear, set, 0);
        /* See ptesync comment in radix__set_pte_at() */
        if (radix_enabled())
                asm volatile("ptesync": : :"memory");
+
+       flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

I think there's an optimization possible here, when relaxing access, to
skip the TLB flush. Would still need the ptesync though. Similar to what
Nick did in e5f7cb58c2b7 ("powerpc/64s/radix: do not flush TLB when
relaxing access"). It is out of scope for this patch but maybe worth
thinking about.

+
        spin_unlock(&init_mm.page_table_lock);
        return 0;

base-commit: cbc06f051c524dcfe52ef0d1f30647828e226d30
