On Fri, Apr 2, 2021 at 12:53 AM Paolo Bonzini wrote:
>
> On 02/04/21 01:37, Ben Gardon wrote:
> > +void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > +			  bool shared)
> > {
> > 	gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
> >
> > -	lockdep_assert_held_write(&kvm->mmu_lock);
> > +	kvm_lockdep_assert_mmu_l
To reduce lock contention and interference with page fault handlers,
allow the TDP MMU function that zaps a GFN range to operate under the
MMU read lock.
Signed-off-by: Ben Gardon
---
arch/x86/kvm/mmu/mmu.c | 22 +---
arch/x86/kvm/mmu/tdp_mmu.c | 111 ++---