On Thu, Jan 31, 2019 at 06:30:22PM +0800, Peter Xu wrote:
> The change_pte() notifier was designed to use as a quick path to
> update secondary MMU PTEs on write permission changes or PFN changes.
> For KVM, it could reduce the vm-exits when vcpu faults on the pages
> that was touched up by KSM. It's not used to do cache invalidations,
> for example, if we see the notifier will be called before the real PTE
> update after all (please see set_pte_at_notify that set_pte_at was
> called later).
>
> All the necessary cache invalidation should all be done in
> invalidate_range() already.
>
> CC: Benjamin Herrenschmidt <[email protected]>
> CC: Paul Mackerras <[email protected]>
> CC: Michael Ellerman <[email protected]>
> CC: Alistair Popple <[email protected]>
> CC: Alexey Kardashevskiy <[email protected]>
> CC: Mark Hairgrove <[email protected]>
> CC: Balbir Singh <[email protected]>
> CC: David Gibson <[email protected]>
> CC: Andrea Arcangeli <[email protected]>
> CC: Jerome Glisse <[email protected]>
> CC: Jason Wang <[email protected]>
> CC: [email protected]
> CC: [email protected]
> Signed-off-by: Peter Xu <[email protected]>
> ---
>  arch/powerpc/platforms/powernv/npu-dma.c | 10 ----------
>  1 file changed, 10 deletions(-)
Reviewed-by: Andrea Arcangeli <[email protected]>

It doesn't make sense to implement change_pte as an invalidate: change_pte is not compulsory to implement, so if one wants invalidates only, the change_pte method shouldn't be implemented in the first place and the common code will guarantee to invoke the range invalidates instead.

Currently the whole change_pte optimization is effectively disabled, as noted in past discussions with Jerome (because of the range invalidates that always surround it), so we need to revisit the whole change_pte logic and decide whether to re-enable it or to drop it as a whole. In the meantime it's good to clean up spots like the one below, which should leave change_pte alone.

There are several examples of mmu_notifier_ops in the kernel that don't implement change_pte; in fact they are the majority. Of all mmu notifier users, only nv_nmmu_notifier_ops, intel_mmuops_change and kvm_mmu_notifier_ops implement change_pte, and as Peter found out by source review, nv_nmmu_notifier_ops and intel_mmuops_change are wrong about it and should stop implementing it as an invalidate.

In short, change_pte is only implemented correctly by KVM, which really updates the spte and flushes the TLB; the spte update remains and could avoid a vmexit if we figure out how to re-enable the optimization safely (the TLB fill after change_pte in the KVM EPT/shadow secondary MMU will be looked up by the CPU in hardware). If change_pte is implemented, it should update the mapping like KVM does, not do an invalidate.
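The rule above (implement change_pte as a real mapping update, or don't implement it at all and rely on the surrounding range invalidates) can be sketched with a small standalone userspace model of the notifier dispatch. All names here (toy_notifier_ops, change_pte_notify, the counters) are hypothetical simplifications, not the real kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy userspace model of mmu_notifier dispatch -- hypothetical names,
 * not the real kernel structures. */
struct toy_notifier_ops {
	/* Optional fast path: update the secondary PTE in place. */
	void (*change_pte)(unsigned long address);
	/* Drop the secondary mapping for a range. */
	void (*invalidate_range)(unsigned long start, unsigned long end);
};

static int pte_updates;
static int range_invalidates;

static void kvm_like_change_pte(unsigned long address)
{
	/* A correct change_pte really updates the secondary PTE, so a
	 * later access can be served from it without a secondary fault. */
	(void)address;
	pte_updates++;
}

static void toy_invalidate_range(unsigned long start, unsigned long end)
{
	(void)start; (void)end;
	range_invalidates++;
}

/* Models the set_pte_at_notify() path: notifiers that implement
 * change_pte get the fast-path callback; notifiers without it are
 * simply skipped, because the invalidate_range() calls that surround
 * every primary PTE update already cover them.  Implementing
 * change_pte as another invalidate therefore adds nothing. */
static void change_pte_notify(const struct toy_notifier_ops *ops,
			      unsigned long address)
{
	if (ops->change_pte)
		ops->change_pte(address);
}

/* KVM-style notifier: really updates the mapping on change_pte. */
static const struct toy_notifier_ops kvm_like = {
	.change_pte = kvm_like_change_pte,
	.invalidate_range = toy_invalidate_range,
};

/* NPU-style notifier after this patch: invalidate-only, no change_pte. */
static const struct toy_notifier_ops npu_like = {
	.change_pte = NULL,
	.invalidate_range = toy_invalidate_range,
};
```

The point of the model: leaving change_pte NULL costs nothing, because the range invalidates that bracket the PTE update still fire for invalidate-only users.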
Thanks,
Andrea

>
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> index 3f58c7dbd581..c003b29d870e 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -917,15 +917,6 @@ static void pnv_npu2_mn_release(struct mmu_notifier *mn,
>  	mmio_invalidate(npu_context, 0, ~0UL);
>  }
>
> -static void pnv_npu2_mn_change_pte(struct mmu_notifier *mn,
> -					struct mm_struct *mm,
> -					unsigned long address,
> -					pte_t pte)
> -{
> -	struct npu_context *npu_context = mn_to_npu_context(mn);
> -	mmio_invalidate(npu_context, address, PAGE_SIZE);
> -}
> -
>  static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn,
>  					struct mm_struct *mm,
>  					unsigned long start, unsigned long end)
> @@ -936,7 +927,6 @@ static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn,
>
>  static const struct mmu_notifier_ops nv_nmmu_notifier_ops = {
>  	.release = pnv_npu2_mn_release,
> -	.change_pte = pnv_npu2_mn_change_pte,
>  	.invalidate_range = pnv_npu2_mn_invalidate_range,
>  };
>
> --
> 2.17.1
>

