Module Name:	src
Committed By:	chs
Date:		Sat Jan 27 23:07:36 UTC 2018
Modified Files:
	src/sys/arch/alpha/alpha: pmap.c
	src/sys/arch/m68k/m68k: pmap_motorola.c
	src/sys/arch/powerpc/oea: pmap.c
	src/sys/arch/sparc64/sparc64: pmap.c

Log Message:
Apply the change from arch/x86/x86/pmap.c rev. 1.266 commitid vZRjvmxG7YTHLOfA:

In pmap_enter_ma(), only try to allocate pves if we might need them,
and even if that fails, only fail the operation if we later discover
that we really do need them.  If we are replacing an existing mapping,
reuse the pv structure where possible.

This implements the requirement that pmap_enter(PMAP_CANFAIL) must not
fail when replacing an existing mapping with the first mapping of a new
page, which is an unintended consequence of the changes from the
rmind-uvmplock branch in 2011.

The problem arises when pmap_enter(PMAP_CANFAIL) is used to replace an
existing pmap mapping with a mapping of a different page (eg. to resolve
a copy-on-write).  If that fails and leaves the old pmap entry in place,
then UVM won't hold the right locks when it eventually retries.  This
entanglement of the UVM and pmap locking was done in rmind-uvmplock in
order to improve performance, but it also means that the UVM state and
pmap state need to be kept in sync more than they did before.  It would
be possible to handle this in the UVM code instead of in the pmap code,
but these pmap changes improve the handling of low memory situations in
general, and handling this in UVM would be clunky, so this seemed like
the better way to go.

This somewhat indirectly fixes PR 52706 on the remaining platforms where
this problem existed.


To generate a diff of this commit:
cvs rdiff -u -r1.261 -r1.262 src/sys/arch/alpha/alpha/pmap.c
cvs rdiff -u -r1.69 -r1.70 src/sys/arch/m68k/m68k/pmap_motorola.c
cvs rdiff -u -r1.94 -r1.95 src/sys/arch/powerpc/oea/pmap.c
cvs rdiff -u -r1.307 -r1.308 src/sys/arch/sparc64/sparc64/pmap.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.