On Wed, Mar 10, 2021, Paolo Bonzini wrote:
> On 10/03/21 01:30, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 50ef757c5586..f0c99fa04ef2 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -323,7 +323,18 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
> >                             cpu_relax();
> >                     }
> >             } else {
> > +                   /*
> > +                    * If the SPTE is not MMU-present, there is no backing
> > +                    * page associated with the SPTE and so no side effects
> > +                    * that need to be recorded, and exclusive ownership of
> > +                    * mmu_lock ensures the SPTE can't be made present.
> > +                    * Note, zapping MMIO SPTEs is also unnecessary as they
> > +                    * are guarded by the memslots generation, not by being
> > +                    * unreachable.
> > +                    */
> >                     old_child_spte = READ_ONCE(*sptep);
> > +                   if (!is_shadow_present_pte(old_child_spte))
> > +                           continue;
> >                     /*
> >                      * Marking the SPTE as a removed SPTE is not
> 
> Ben, do you plan to make this path take mmu_lock for read?  If so, this
> wouldn't be too useful IIUC.

I can see kvm_mmu_zap_all_fast()->kvm_tdp_mmu_zap_all() moving to a shared-mode
flow, but I don't think we'll ever want to move away from exclusive-mode zapping
for kvm_arch_flush_shadow_all()->kvm_mmu_zap_all()->kvm_tdp_mmu_zap_all().  In
that case, the VM is dead or dying; freeing memory should be done as quickly as
possible.
