Module Name:	src
Committed By:	rin
Date:		Mon Oct  5 04:48:24 UTC 2020
Modified Files:
	src/sys/uvm: uvm_bio.c

Log Message:
PR kern/55658

ubc_fault_page(): Ignore the PG_RDONLY flag and always pmap_enter() the
page with the permissions of the original access_type.

It is the file system's responsibility to allocate blocks that are being
modified by write() before calling into UBC to fill the pages for that
range. A KASSERT() is added there to confirm that no clean page is
mapped writable.

This fixes an infinite loop in uvm_fault_internal(), observed on
16KB-page systems, where it continued trying to make a partially-backed
page writable.

No regression in ATF, and the KASSERT() does not fire on several
architectures, as far as I can see.

Fix suggested by chs. Thanks!


To generate a diff of this commit:
cvs rdiff -u -r1.121 -r1.122 src/sys/uvm/uvm_bio.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
Modified files:

Index: src/sys/uvm/uvm_bio.c
diff -u src/sys/uvm/uvm_bio.c:1.121 src/sys/uvm/uvm_bio.c:1.122
--- src/sys/uvm/uvm_bio.c:1.121	Thu Jul  9 09:24:32 2020
+++ src/sys/uvm/uvm_bio.c	Mon Oct  5 04:48:23 2020
@@ -1,4 +1,4 @@
-/*	$NetBSD: uvm_bio.c,v 1.121 2020/07/09 09:24:32 rin Exp $	*/
+/*	$NetBSD: uvm_bio.c,v 1.122 2020/10/05 04:48:23 rin Exp $	*/
 
 /*
  * Copyright (c) 1998 Chuck Silvers.
@@ -34,7 +34,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: uvm_bio.c,v 1.121 2020/07/09 09:24:32 rin Exp $");
+__KERNEL_RCSID(0, "$NetBSD: uvm_bio.c,v 1.122 2020/10/05 04:48:23 rin Exp $");
 
 #include "opt_uvmhist.h"
 #include "opt_ubc.h"
@@ -235,9 +235,7 @@ static inline int
 ubc_fault_page(const struct uvm_faultinfo *ufi, const struct ubc_map *umap,
     struct vm_page *pg, vm_prot_t prot, vm_prot_t access_type, vaddr_t va)
 {
-	vm_prot_t mask;
 	int error;
-	bool rdonly;
 
 	KASSERT(rw_write_held(pg->uobject->vmobjlock));
 
@@ -280,11 +278,11 @@ ubc_fault_page(const struct uvm_faultinf
 	    pg->offset < umap->writeoff ||
 	    pg->offset + PAGE_SIZE > umap->writeoff + umap->writelen);
 
-	rdonly = uvm_pagereadonly_p(pg);
-	mask = rdonly ? ~VM_PROT_WRITE : VM_PROT_ALL;
+	KASSERT((access_type & VM_PROT_WRITE) == 0 ||
+	    uvm_pagegetdirty(pg) != UVM_PAGE_STATUS_CLEAN);
 
 	error = pmap_enter(ufi->orig_map->pmap, va, VM_PAGE_TO_PHYS(pg),
-	    prot & mask, PMAP_CANFAIL | (access_type & mask));
+	    prot, PMAP_CANFAIL | access_type);
 
 	uvm_pagelock(pg);
 	uvm_pageactivate(pg);