From: Yang Weijiang <weijiang.y...@intel.com>

Host page swapping/migration may change the translation in an EPT
leaf entry. If the target page is SPP protected, re-enable SPP
protection for it in the MMU notifier path. When an SPPT shadow page
is reclaimed, its level-1 entries have no rmaps to clear, so skip the
rmap handling for them.

Signed-off-by: Yang Weijiang <weijiang.y...@intel.com>
Message-Id: <20190717133751.12910-10-weijiang.y...@intel.com>
Signed-off-by: Adalbert Lazăr <ala...@bitdefender.com>
---
 arch/x86/kvm/mmu.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
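
As a reviewer's aid, here is a minimal sketch of the re-protection
logic added to kvm_set_pte_rmapp() below, factored into a standalone
helper. It assumes the SPP infrastructure introduced earlier in this
series (kvm_mmu_get_subpages(), struct kvm_subpage, FULL_SPP_ACCESS,
PT_SPP_MASK); the helper name itself is hypothetical and only meant
to illustrate the flow.

/*
 * Sketch: when the MMU notifier rewrites a last-level SPTE for a gfn,
 * re-apply the sub-page protection bit if the gfn's access map is not
 * full access.
 */
static u64 spp_reprotect_spte(struct kvm *kvm, gfn_t gfn, u64 new_spte)
{
	struct kvm_subpage spp_info = {0};

	if (!kvm->arch.spp_active)
		return new_spte;

	spp_info.base_gfn = gfn;
	spp_info.npages = 1;

	/* One subpage entry queried; leave SPP clear on lookup failure. */
	if (kvm_mmu_get_subpages(kvm, &spp_info, true) == 1 &&
	    spp_info.access_map[0] != FULL_SPP_ACCESS)
		new_spte |= PT_SPP_MASK;

	return new_spte;
}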

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24222e3add91..0b859b1797f6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2004,6 +2004,24 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
                        new_spte &= ~PT_WRITABLE_MASK;
                        new_spte &= ~SPTE_HOST_WRITEABLE;
 
+                       /*
+                        * If this is an EPT leaf entry and the physical page
+                        * is SPP protected, re-enable SPP protection for
+                        * the page.
+                        */
+                       if (kvm->arch.spp_active &&
+                           level == PT_PAGE_TABLE_LEVEL) {
+                               struct kvm_subpage spp_info = {0};
+                               int i;
+
+                               spp_info.base_gfn = gfn;
+                               spp_info.npages = 1;
+                               i = kvm_mmu_get_subpages(kvm, &spp_info, true);
+                               if (i == 1 &&
+                                   spp_info.access_map[0] != FULL_SPP_ACCESS)
+                                       new_spte |= PT_SPP_MASK;
+                       }
+
                        new_spte = mark_spte_for_access_track(new_spte);
 
                        mmu_spte_clear_track_bits(sptep);
@@ -2905,6 +2923,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
        pte = *spte;
        if (is_shadow_present_pte(pte)) {
                if (is_last_spte(pte, sp->role.level)) {
+                       /* SPPT leaf entries don't have rmaps */
+                       if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+                           is_spp_spte(sp))
+                               return true;
                        drop_spte(kvm, spte);
                        if (is_large_pte(pte))
                                --kvm->stat.lpages;