Re: [PATCH v4 0/7] introduce vm_flags modifier functions
On Tue, Mar 14, 2023 at 02:11:44PM -0600, Alex Williamson wrote:
> On Thu, 26 Jan 2023 11:37:45 -0800
> Suren Baghdasaryan wrote:
>
> > This patchset was originally published as a part of per-VMA locking [1]
> > and was split after a suggestion that it's viable on its own and to
> > facilitate the review process. It is now a prerequisite for the next
> > version of the per-VMA lock patchset, which reuses vm_flags modifier
> > functions to lock the VMA when vm_flags are being updated.
> >
> > VMA vm_flags modifications are usually done under exclusive mmap_lock
> > protection because this attribute affects other decisions like VMA
> > merging or splitting, and races should be prevented. Introduce vm_flags
> > modifier functions to enforce correct locking.
> >
> > The patchset applies cleanly over the mm-unstable branch of the mm tree.
>
> With this series, vfio-pci developed a bunch of warnings around not
> holding the mmap_lock write semaphore while calling io_remap_pfn_range()
> from our fault handler, vfio_pci_mmap_fault().
>
> I suspect vdpa has the same issue for their use of remap_pfn_range()
> from their fault handler, JasonW, MST, FYI.

Yeah, IMHO this whole approach has always been a bit sketchy; it was never
intended that remap_pfn_range() would be called from a fault handler. You
are supposed to use a vmf_insert_pfn()-type API from fault handlers.

> The reason for using remap_pfn_range() on fault in vfio-pci is that
> we're mapping device MMIO to userspace, where that MMIO can be disabled
> and we'd rather zap the mapping when that occurs so that we can sigbus
> the user rather than allow the user to trigger potentially fatal bus
> errors on the host.
>
> Peter Xu has suggested offline that a non-lazy approach to reinsert the
> mappings might be more in line with mm expectations relative to touching
> vm_flags during fault.

Yes, I feel the same, along with completing the address_space conversion
you had started. IIRC that was part of the reason we needed this design
in vfio.
Jason
Re: [PATCH v4 0/7] introduce vm_flags modifier functions
On Fri, Mar 17, 2023 at 3:41 PM Alex Williamson wrote:
>
> On Fri, 17 Mar 2023 12:08:32 -0700
> Suren Baghdasaryan wrote:
>
> > On Tue, Mar 14, 2023 at 1:11 PM Alex Williamson wrote:
> > >
> > > On Thu, 26 Jan 2023 11:37:45 -0800
> > > Suren Baghdasaryan wrote:
> > >
> > > > This patchset was originally published as a part of per-VMA locking
> > > > [1] and was split after a suggestion that it's viable on its own
> > > > and to facilitate the review process. It is now a prerequisite for
> > > > the next version of the per-VMA lock patchset, which reuses
> > > > vm_flags modifier functions to lock the VMA when vm_flags are being
> > > > updated.
> > > >
> > > > VMA vm_flags modifications are usually done under exclusive
> > > > mmap_lock protection because this attribute affects other decisions
> > > > like VMA merging or splitting, and races should be prevented.
> > > > Introduce vm_flags modifier functions to enforce correct locking.
> > > >
> > > > The patchset applies cleanly over the mm-unstable branch of the mm
> > > > tree.
> > >
> > > With this series, vfio-pci developed a bunch of warnings around not
> > > holding the mmap_lock write semaphore while calling
> > > io_remap_pfn_range() from our fault handler, vfio_pci_mmap_fault().
> > >
> > > I suspect vdpa has the same issue for their use of remap_pfn_range()
> > > from their fault handler, JasonW, MST, FYI.
> > >
> > > It also looks like gru_fault() would have the same issue, Dimitri.
> > >
> > > In all cases, we're preemptively setting vm_flags to what
> > > remap_pfn_range_notrack() uses, so I thought we were safe here as I
> > > specifically remember trying to avoid changing vm_flags from the
> > > fault handler. But apparently that doesn't take into account
> > > track_pfn_remap() where VM_PAT comes into play.
> > >
> > > The reason for using remap_pfn_range() on fault in vfio-pci is that
> > > we're mapping device MMIO to userspace, where that MMIO can be
> > > disabled and we'd rather zap the mapping when that occurs so that we
> > > can sigbus the user rather than allow the user to trigger potentially
> > > fatal bus errors on the host.
> > >
> > > Peter Xu has suggested offline that a non-lazy approach to reinsert
> > > the mappings might be more in line with mm expectations relative to
> > > touching vm_flags during fault. What's the right solution here? Can
> > > the fault handling be salvaged, is proactive remapping the right
> > > approach, or is there something better? Thanks,
> >
> > Hi Alex,
> > If in your case it's safe to change vm_flags without holding the
> > exclusive mmap_lock, maybe you can use __vm_flags_mod() the way I used
> > it in https://lore.kernel.org/all/20230126193752.297968-7-sur...@google.com,
> > while explaining why this should be safe?
>
> Hi Suren,
>
> Thanks for the reply, but I'm not sure I'm following. Are you
> suggesting a bool arg added to io_remap_pfn_range(), or some new
> variant of that function to conditionally use __vm_flags_mod() in place
> of vm_flags_set() across the call chain? Thanks,

I think either way could work, but after taking a closer look, both ways
would be quite ugly. If we could somehow identify that we are handling a
page fault and use __vm_flags_mod() without additional parameters, it
would be more palatable IMHO. Peter's suggestion to avoid touching
vm_flags during fault would be much cleaner, but I'm not sure how easily
that can be done.

> Alex
>
> --
> To unsubscribe from this group and stop receiving emails from it, send
> an email to kernel-team+unsubscr...@android.com.
Re: [PATCH v4 0/7] introduce vm_flags modifier functions
On Fri, 17 Mar 2023 12:08:32 -0700
Suren Baghdasaryan wrote:

> On Tue, Mar 14, 2023 at 1:11 PM Alex Williamson wrote:
> >
> > On Thu, 26 Jan 2023 11:37:45 -0800
> > Suren Baghdasaryan wrote:
> >
> > > This patchset was originally published as a part of per-VMA locking
> > > [1] and was split after a suggestion that it's viable on its own and
> > > to facilitate the review process. It is now a prerequisite for the
> > > next version of the per-VMA lock patchset, which reuses vm_flags
> > > modifier functions to lock the VMA when vm_flags are being updated.
> > >
> > > VMA vm_flags modifications are usually done under exclusive mmap_lock
> > > protection because this attribute affects other decisions like VMA
> > > merging or splitting, and races should be prevented. Introduce
> > > vm_flags modifier functions to enforce correct locking.
> > >
> > > The patchset applies cleanly over the mm-unstable branch of the mm
> > > tree.
> >
> > With this series, vfio-pci developed a bunch of warnings around not
> > holding the mmap_lock write semaphore while calling
> > io_remap_pfn_range() from our fault handler, vfio_pci_mmap_fault().
> >
> > I suspect vdpa has the same issue for their use of remap_pfn_range()
> > from their fault handler, JasonW, MST, FYI.
> >
> > It also looks like gru_fault() would have the same issue, Dimitri.
> >
> > In all cases, we're preemptively setting vm_flags to what
> > remap_pfn_range_notrack() uses, so I thought we were safe here as I
> > specifically remember trying to avoid changing vm_flags from the fault
> > handler. But apparently that doesn't take into account
> > track_pfn_remap() where VM_PAT comes into play.
> >
> > The reason for using remap_pfn_range() on fault in vfio-pci is that
> > we're mapping device MMIO to userspace, where that MMIO can be
> > disabled and we'd rather zap the mapping when that occurs so that we
> > can sigbus the user rather than allow the user to trigger potentially
> > fatal bus errors on the host.
> >
> > Peter Xu has suggested offline that a non-lazy approach to reinsert
> > the mappings might be more in line with mm expectations relative to
> > touching vm_flags during fault. What's the right solution here? Can
> > the fault handling be salvaged, is proactive remapping the right
> > approach, or is there something better? Thanks,
>
> Hi Alex,
> If in your case it's safe to change vm_flags without holding the
> exclusive mmap_lock, maybe you can use __vm_flags_mod() the way I used
> it in https://lore.kernel.org/all/20230126193752.297968-7-sur...@google.com,
> while explaining why this should be safe?

Hi Suren,

Thanks for the reply, but I'm not sure I'm following. Are you
suggesting a bool arg added to io_remap_pfn_range(), or some new variant
of that function to conditionally use __vm_flags_mod() in place of
vm_flags_set() across the call chain? Thanks,

Alex
Re: [PATCH v4 0/7] introduce vm_flags modifier functions
On Tue, Mar 14, 2023 at 1:11 PM Alex Williamson wrote:
>
> On Thu, 26 Jan 2023 11:37:45 -0800
> Suren Baghdasaryan wrote:
>
> > This patchset was originally published as a part of per-VMA locking [1]
> > and was split after a suggestion that it's viable on its own and to
> > facilitate the review process. It is now a prerequisite for the next
> > version of the per-VMA lock patchset, which reuses vm_flags modifier
> > functions to lock the VMA when vm_flags are being updated.
> >
> > VMA vm_flags modifications are usually done under exclusive mmap_lock
> > protection because this attribute affects other decisions like VMA
> > merging or splitting, and races should be prevented. Introduce vm_flags
> > modifier functions to enforce correct locking.
> >
> > The patchset applies cleanly over the mm-unstable branch of the mm tree.
>
> With this series, vfio-pci developed a bunch of warnings around not
> holding the mmap_lock write semaphore while calling io_remap_pfn_range()
> from our fault handler, vfio_pci_mmap_fault().
>
> I suspect vdpa has the same issue for their use of remap_pfn_range()
> from their fault handler, JasonW, MST, FYI.
>
> It also looks like gru_fault() would have the same issue, Dimitri.
>
> In all cases, we're preemptively setting vm_flags to what
> remap_pfn_range_notrack() uses, so I thought we were safe here as I
> specifically remember trying to avoid changing vm_flags from the fault
> handler. But apparently that doesn't take into account track_pfn_remap()
> where VM_PAT comes into play.
>
> The reason for using remap_pfn_range() on fault in vfio-pci is that
> we're mapping device MMIO to userspace, where that MMIO can be disabled
> and we'd rather zap the mapping when that occurs so that we can sigbus
> the user rather than allow the user to trigger potentially fatal bus
> errors on the host.
>
> Peter Xu has suggested offline that a non-lazy approach to reinsert the
> mappings might be more in line with mm expectations relative to touching
> vm_flags during fault. What's the right solution here? Can the fault
> handling be salvaged, is proactive remapping the right approach, or is
> there something better? Thanks,

Hi Alex,
If in your case it's safe to change vm_flags without holding the
exclusive mmap_lock, maybe you can use __vm_flags_mod() the way I used it
in https://lore.kernel.org/all/20230126193752.297968-7-sur...@google.com,
while explaining why this should be safe?

>
> Alex
>
Re: [PATCH v4 0/7] introduce vm_flags modifier functions
On Thu, 26 Jan 2023 11:37:45 -0800
Suren Baghdasaryan wrote:

> This patchset was originally published as a part of per-VMA locking [1]
> and was split after a suggestion that it's viable on its own and to
> facilitate the review process. It is now a prerequisite for the next
> version of the per-VMA lock patchset, which reuses vm_flags modifier
> functions to lock the VMA when vm_flags are being updated.
>
> VMA vm_flags modifications are usually done under exclusive mmap_lock
> protection because this attribute affects other decisions like VMA
> merging or splitting, and races should be prevented. Introduce vm_flags
> modifier functions to enforce correct locking.
>
> The patchset applies cleanly over the mm-unstable branch of the mm tree.

With this series, vfio-pci developed a bunch of warnings around not
holding the mmap_lock write semaphore while calling io_remap_pfn_range()
from our fault handler, vfio_pci_mmap_fault().

I suspect vdpa has the same issue for their use of remap_pfn_range()
from their fault handler, JasonW, MST, FYI.

It also looks like gru_fault() would have the same issue, Dimitri.

In all cases, we're preemptively setting vm_flags to what
remap_pfn_range_notrack() uses, so I thought we were safe here as I
specifically remember trying to avoid changing vm_flags from the fault
handler. But apparently that doesn't take into account track_pfn_remap()
where VM_PAT comes into play.

The reason for using remap_pfn_range() on fault in vfio-pci is that
we're mapping device MMIO to userspace, where that MMIO can be disabled
and we'd rather zap the mapping when that occurs so that we can sigbus
the user rather than allow the user to trigger potentially fatal bus
errors on the host.

Peter Xu has suggested offline that a non-lazy approach to reinsert the
mappings might be more in line with mm expectations relative to touching
vm_flags during fault. What's the right solution here? Can the fault
handling be salvaged, is proactive remapping the right approach, or is
there something better? Thanks,

Alex
[PATCH v4 0/7] introduce vm_flags modifier functions
This patchset was originally published as a part of per-VMA locking [1] and
was split after a suggestion that it's viable on its own and to facilitate
the review process. It is now a prerequisite for the next version of the
per-VMA lock patchset, which reuses vm_flags modifier functions to lock the
VMA when vm_flags are being updated.

VMA vm_flags modifications are usually done under exclusive mmap_lock
protection because this attribute affects other decisions like VMA merging
or splitting, and races should be prevented. Introduce vm_flags modifier
functions to enforce correct locking.

The patchset applies cleanly over the mm-unstable branch of the mm tree.

Changes since v3 [2]:
- Fixed build breakage in nommu.c introduced in the previous version, per
  Andrew Morton
- Added back data_race() hint in vm_area_dup, per Mel Gorman
- Renamed vm_flags modifiers, per Andrew Morton, Mike Rapoport and Mel
  Gorman
- Changed vm_flags_mod to reset vm_flags with one assignment, per Mel
  Gorman
- Added comments about the need to copy vm_flags before ksm_madvise, per
  Mel Gorman
- Added clarifications for __vm_flags_mod usage, per Mel Gorman

[1] https://lore.kernel.org/all/20230109205336.3665937-1-sur...@google.com/
[2] https://lore.kernel.org/lkml/20230125233554.153109-1-sur...@google.com/

Suren Baghdasaryan (7):
  kernel/fork: convert vma assignment to a memcpy
  mm: introduce vma->vm_flags wrapper functions
  mm: replace VM_LOCKED_CLEAR_MASK with VM_LOCKED_MASK
  mm: replace vma->vm_flags direct modifications with modifier calls
  mm: replace vma->vm_flags indirect modification in ksm_madvise
  mm: introduce __vm_flags_mod and use it in untrack_pfn
  mm: export dump_mm()

 arch/arm/kernel/process.c                     |  2 +-
 arch/ia64/mm/init.c                           |  8 +--
 arch/loongarch/include/asm/tlb.h              |  2 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c            |  6 +-
 arch/powerpc/kvm/book3s_xive_native.c         |  2 +-
 arch/powerpc/mm/book3s64/subpage_prot.c       |  2 +-
 arch/powerpc/platforms/book3s/vas-api.c       |  2 +-
 arch/powerpc/platforms/cell/spufs/file.c      | 14 ++---
 arch/s390/mm/gmap.c                           |  9 ++-
 arch/x86/entry/vsyscall/vsyscall_64.c         |  2 +-
 arch/x86/kernel/cpu/sgx/driver.c              |  2 +-
 arch/x86/kernel/cpu/sgx/virt.c                |  2 +-
 arch/x86/mm/pat/memtype.c                     | 14 +++--
 arch/x86/um/mem_32.c                          |  2 +-
 drivers/acpi/pfr_telemetry.c                  |  2 +-
 drivers/android/binder.c                      |  3 +-
 drivers/char/mspec.c                          |  2 +-
 drivers/crypto/hisilicon/qm.c                 |  2 +-
 drivers/dax/device.c                          |  2 +-
 drivers/dma/idxd/cdev.c                       |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c       |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |  4 +-
 drivers/gpu/drm/drm_gem.c                     |  2 +-
 drivers/gpu/drm/drm_gem_dma_helper.c          |  3 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c        |  2 +-
 drivers/gpu/drm/drm_vm.c                      |  8 +--
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c       |  4 +-
 drivers/gpu/drm/gma500/framebuffer.c          |  2 +-
 drivers/gpu/drm/i810/i810_dma.c               |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  4 +-
 drivers/gpu/drm/mediatek/mtk_drm_gem.c        |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                 |  2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c            |  3 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c   |  3 +-
 drivers/gpu/drm/tegra/gem.c                   |  5 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c               |  3 +-
 drivers/gpu/drm/virtio/virtgpu_vram.c         |  2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c      |  2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c       |  3 +-
 drivers/hsi/clients/cmt_speech.c              |  2 +-
 drivers/hwtracing/intel_th/msu.c              |  2 +-
 drivers/hwtracing/stm/core.c                  |  2 +-
 drivers/infiniband/hw/hfi1/file_ops.c         |  4 +-
 drivers/infiniband/hw/mlx5/main.c             |  4 +-
 drivers/infiniband/hw/qib/qib_file_ops.c      | 13 ++---
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c  |  2 +-
 .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.c   |  2 +-
 .../common/videobuf2/videobuf2-dma-contig.c   |  2 +-
 .../common/videobuf2/videobuf2-vmalloc.c      |  2 +-
 drivers/media/v4l2-core/videobuf-dma-contig.c |  2 +-
 drivers/media/v4l2-core/videobuf-dma-sg.c     |  4 +-
 drivers/media/v4l2-core/videobuf-vmalloc.c    |  2 +-
 drivers/misc/cxl/context.c                    |  2 +-
 drivers/misc/habanalabs/common/memory.c       |  2 +-
 drivers/misc/habanalabs/gaudi/gaudi.c         |  4 +-
 drivers/misc/habanalabs/gaudi2/gaudi2.c       |  8 +---