> Date: Sun, 23 Dec 2018 19:35:02 +0100 (CET)
> From: Mark Kettenis <[email protected]>
>
> > Date: Sun, 23 Dec 2018 12:26:17 +0100 (CET)
> > From: Mark Kettenis <[email protected]>
> >
> > That's a very good find. I think there still is a potential race in
> > your diff on MP systems since you save the bits before removing the
> > PTE from the hash tables. I'll see if I can come up with a better diff.
>
> So here is the diff I propose instead. This zaps the PTE before
> unlinking. At that point the PTED_VA_MANAGED_M flag is still set so
> the MOD/REF accounting will happen. A process running on the other
> CPU can't put the PTE back into the hash as we still hold the lock on
> the pmap.
>
> I'm bootstrapping clang with this diff, and things are still running even
> though I've hit swap. I'll give this a spin on an MP machine as well.
I fear somebody else needs to test on an MP machine, as my dual G4
doesn't boot anymore :(.
> Index: arch/powerpc/powerpc/pmap.c
> ===================================================================
> RCS file: /cvs/src/sys/arch/powerpc/powerpc/pmap.c,v
> retrieving revision 1.168
> diff -u -p -r1.168 pmap.c
> --- arch/powerpc/powerpc/pmap.c 22 Oct 2018 17:31:25 -0000 1.168
> +++ arch/powerpc/powerpc/pmap.c 23 Dec 2018 18:25:32 -0000
> @@ -2067,7 +2067,9 @@ void
> pmap_page_protect(struct vm_page *pg, vm_prot_t prot)
> {
> struct pte_desc *pted;
> + void *pte;
> pmap_t pm;
> + int s;
>
> if (prot == PROT_NONE) {
> mtx_enter(&pg->mdpage.pv_mtx);
> @@ -2097,6 +2099,11 @@ pmap_page_protect(struct vm_page *pg, vm
> mtx_enter(&pg->mdpage.pv_mtx);
> continue;
> }
> +
> + PMAP_HASH_LOCK(s);
> + if ((pte = pmap_ptedinhash(pted)) != NULL)
> + pte_zap(pte, pted);
> + PMAP_HASH_UNLOCK(s);
>
> pted->pted_va &= ~PTED_VA_MANAGED_M;
> LIST_REMOVE(pted, pted_pv_list);
>
>
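
For readers less familiar with the powerpc pmap, here is a minimal sketch of
the ordering argument made above: zap the hashed PTE first, under the hash
lock and while the descriptor is still marked managed (so the MOD/REF
accounting still fires), and only then clear the flag and unlink.  All names
below (hash_lock, pted_in_hash, pte_zap_and_account, ...) are simplified
stand-ins, not the real OpenBSD interfaces.

/*
 * Sketch only.  Assumptions:
 *  - the caller holds the pmap lock, so another CPU cannot fault the
 *    PTE back into the hash table once we have zapped it;
 *  - the MOD/REF accounting done while zapping only happens while the
 *    "managed" flag (PTED_VA_MANAGED_M in the real code) is still set.
 */

#include <stdbool.h>
#include <stddef.h>

struct pte_desc {
	bool		 managed;	/* stand-in for PTED_VA_MANAGED_M */
	struct pte_desc	*next;		/* stand-in for the pv list link */
};

void	 hash_lock(int *s);				/* PMAP_HASH_LOCK() */
void	 hash_unlock(int s);				/* PMAP_HASH_UNLOCK() */
void	*pted_in_hash(struct pte_desc *);		/* pmap_ptedinhash() */
void	 pte_zap_and_account(void *, struct pte_desc *); /* pte_zap() */
void	 pv_list_remove(struct pte_desc *);		/* LIST_REMOVE() */

/*
 * Remove one managed mapping of a page; caller holds the pmap lock
 * (and, in the real code, the page's pv mutex).
 */
void
remove_managed_mapping(struct pte_desc *pted)
{
	void	*pte;
	int	 s;

	/*
	 * 1. Zap the hashed PTE under the hash lock while the descriptor
	 *    is still managed, so MOD/REF bits are folded into the page.
	 */
	hash_lock(&s);
	if ((pte = pted_in_hash(pted)) != NULL)
		pte_zap_and_account(pte, pted);
	hash_unlock(s);

	/*
	 * 2. Only now clear the managed flag and unlink.  The pmap lock
	 *    is still held, so no other CPU can reinsert the PTE into
	 *    the hash in between.
	 */
	pted->managed = false;
	pv_list_remove(pted);
}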