On 17/06/2020 15:38, Catalin Marinas wrote:
On Wed, Jun 17, 2020 at 01:38:44PM +0100, Steven Price wrote:
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..040a7fffaa93 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1783,6 +1783,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                        vma_pagesize = PMD_SIZE;
        }
+ if (system_supports_mte() && kvm->arch.vcpu_has_mte) {
+               /*
+                * VM will be able to see the page's tags, so we must ensure
+                * they have been initialised.
+                */
+               struct page *page = pfn_to_page(pfn);
+
+               if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+                       mte_clear_page_tags(page_address(page), page_size(page));
+       }

Are all the guest pages always mapped via a Stage 2 fault? It may be
better if we did that via kvm_set_spte_hva().


I was under the impression that pages are always faulted into stage 2, but I have to admit I'm not 100% sure about that.

kvm_set_spte_hva() may be more appropriate, although at first glance I don't understand how that function deals with huge pages. Is it actually called for normal mappings, or only for changes due to the likes of KSM?

Steve
