Module Name:	src
Committed By:	thorpej
Date:		Fri Mar 29 03:11:32 UTC 2019
Modified Files:
	src/sys/arch/alpha/alpha: pmap.c

Log Message:
Fix a couple of latent MP issues in the Alpha pmap:

- In pmap_activate(), even though we manipulate the active mask with
  atomic ops, the lev1map pointer needs to stay consistent, so we do,
  in fact, have to take the pmap lock there.

- In pmap_emulate_reference(), some of the DEBUG checks done here are
  race-prone, so don't do them.  (Leave them #if 0'd out for
  documentary purposes.)

To generate a diff of this commit:
cvs rdiff -u -r1.263 -r1.264 src/sys/arch/alpha/alpha/pmap.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
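The pattern the first two items describe -- an in-use CPU mask updated
with atomic ops, plus a lock held across the whole activation so that a
shared pointer stays consistent -- can be sketched in userland C with
pthreads and C11 atomics.  This is an illustrative toy, not the kernel
code: struct fake_pmap, fake_activate(), and fake_deactivate() are
hypothetical stand-ins for the pmap, pmap_activate(), and
pmap_deactivate().

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/*
 * Toy stand-in for the pmap: a CPU in-use mask that is updated with
 * atomic ops, and a pointer that must not change mid-activation.
 */
struct fake_pmap {
	pthread_mutex_t	lock;		/* plays the role of PMAP_LOCK() */
	atomic_ulong	cpus;		/* in-use mask, one bit per CPU */
	void		*lev1map;	/* must stay consistent during activate */
};

static void
fake_activate(struct fake_pmap *pm, unsigned long cpu_id, void **seen)
{
	/*
	 * Even though the mask update below is a single atomic op,
	 * hold the lock so that lev1map cannot change out from under
	 * us between here and the end of the call.
	 */
	pthread_mutex_lock(&pm->lock);
	atomic_fetch_or(&pm->cpus, 1UL << cpu_id);
	*seen = pm->lev1map;		/* consistent snapshot */
	pthread_mutex_unlock(&pm->lock);
}

static void
fake_deactivate(struct fake_pmap *pm, unsigned long cpu_id)
{
	/* A single atomic op is all we do here, so no lock is needed. */
	atomic_fetch_and(&pm->cpus, ~(1UL << cpu_id));
}
```

The asymmetry mirrors the commit: activation touches two pieces of
state that must agree, so it takes the lock; deactivation is one atomic
store and can stay lock-free.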
Modified files:

Index: src/sys/arch/alpha/alpha/pmap.c
diff -u src/sys/arch/alpha/alpha/pmap.c:1.263 src/sys/arch/alpha/alpha/pmap.c:1.264
--- src/sys/arch/alpha/alpha/pmap.c:1.263	Fri Mar 1 04:29:20 2019
+++ src/sys/arch/alpha/alpha/pmap.c	Fri Mar 29 03:11:32 2019
@@ -1,4 +1,4 @@
-/* $NetBSD: pmap.c,v 1.263 2019/03/01 04:29:20 mrg Exp $ */
+/* $NetBSD: pmap.c,v 1.264 2019/03/29 03:11:32 thorpej Exp $ */
 
 /*-
  * Copyright (c) 1998, 1999, 2000, 2001, 2007, 2008 The NetBSD Foundation, Inc.
@@ -140,7 +140,7 @@
 
 #include <sys/cdefs.h>			/* RCS ID & Copyright macro defns */
 
-__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.263 2019/03/01 04:29:20 mrg Exp $");
+__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.264 2019/03/29 03:11:32 thorpej Exp $");
 
 #include <sys/param.h>
 #include <sys/systm.h>
@@ -2126,6 +2126,14 @@ pmap_activate(struct lwp *l)
 		printf("pmap_activate(%p)\n", l);
 #endif
 
+	/*
+	 * Lock the pmap across the work we do here; although the
+	 * in-use mask is manipulated with an atomic op and the
+	 * ASN info is per-cpu, the lev1map pointer needs to remain
+	 * consistent across the entire call.
+	 */
+	PMAP_LOCK(pmap);
+
 	/* Mark the pmap in use by this processor. */
 	atomic_or_ulong(&pmap->pm_cpus, (1UL << cpu_id));
 
@@ -2133,6 +2141,8 @@ pmap_activate(struct lwp *l)
 	pmap_asn_alloc(pmap, cpu_id);
 
 	PMAP_ACTIVATE(pmap, l, cpu_id);
+
+	PMAP_UNLOCK(pmap);
 }
 
 /*
@@ -2140,10 +2150,6 @@ pmap_activate(struct lwp *l)
  *
  *	Mark that the pmap used by the specified process is no longer
  *	in use by the processor.
- *
- *	The comment above pmap_activate() wrt. locking applies here,
- *	as well.  Note that we use only a single `atomic' operation,
- *	so no locking is necessary.
  */
 void
 pmap_deactivate(struct lwp *l)
@@ -2156,7 +2162,8 @@ pmap_deactivate(struct lwp *l)
 #endif
 
 	/*
-	 * Mark the pmap no longer in use by this processor.
+	 * Mark the pmap no longer in use by this processor.  Because
+	 * this is all we're doing, no need to take the pmap lock.
 	 */
 	atomic_and_ulong(&pmap->pm_cpus, ~(1UL << cpu_number()));
 }
 
@@ -2635,7 +2642,7 @@ pmap_emulate_reference(struct lwp *l, va
 		printf("*pte = 0x%lx\n", *pte);
 	}
 #endif
-#ifdef DEBUG			/* These checks are more expensive */
+#if 0/*DEBUG*/			/* These checks are expensive, racy, and unreliable. */
 	if (!pmap_pte_v(pte))
 		panic("pmap_emulate_reference: invalid pte");
 
 	if (type == ALPHA_MMCSR_FOW) {
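The last hunk uses the `#if 0` idiom the log message mentions: the
race-prone checks are compiled out but left in the source as
documentation.  A minimal illustration of the idiom (the names
shared_value and checked_get() are hypothetical, not from pmap.c):

```c
#include <assert.h>

static int shared_value = 42;

static int
checked_get(void)
{
#if 0	/* DEBUG: racy without the owner's lock; kept for documentation. */
	assert(shared_value >= 0);
#endif
	/* The check above is never compiled, but remains readable. */
	return shared_value;
}
```

Unlike deleting the checks outright, `#if 0` preserves the intent for
future readers while guaranteeing the racy code can never run.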