On Fri, 19 Mar 2021 16:17:07 +0000,
Yoan Picchi <[email protected]> wrote:
> 
> Add a counter for when a dcache page gets invalidated. The counter
> isn't incremented in the function that actually does the invalidation,
> because that function takes neither a kvm nor a vcpu argument, so it
> has no way to reach the counters. For this reason, the counter is
> incremented in the calling functions instead.
> 
> Signed-off-by: Yoan Picchi <[email protected]>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/guest.c            | 1 +
>  arch/arm64/kvm/mmu.c              | 5 ++++-
>  3 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 863603285..3609aa89d 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -547,6 +547,7 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
>  
>  struct kvm_vm_stat {
>       ulong remote_tlb_flush;
> +     ulong cached_page_invalidated;
>  };
>  
>  struct kvm_vcpu_stat {
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 1029976ca..f6b1f0b63 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -41,6 +41,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>       VCPU_STAT("regular_page_mapped", regular_page_mapped),
>       VCPU_STAT("huge_page_mapped", huge_page_mapped),
>       VM_STAT("remote_tlb_flush", remote_tlb_flush),
> +     VM_STAT("cached_page_invalidated", cached_page_invalidated),
>       VCPU_STAT("exits", exits),
>       VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
>       VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 55d7fe63b..d6ddf5ab8 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -893,8 +893,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>       if (writable)
>               prot |= KVM_PGTABLE_PROT_W;
>  
> -     if (fault_status != FSC_PERM && !device)
> +     if (fault_status != FSC_PERM && !device) {
>               clean_dcache_guest_page(pfn, vma_pagesize);
> +             kvm->stat.cached_page_invalidated++;
> +     }
>  
>       if (exec_fault) {
>               prot |= KVM_PGTABLE_PROT_X;
> @@ -1166,6 +1168,7 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
>        * just like a translation fault and clean the cache to the PoC.
>        */
>       clean_dcache_guest_page(pfn, PAGE_SIZE);
> +     kvm->stat.cached_page_invalidated++;
>       handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pfn);
>       return 0;
>  }

Given that PoC flushing is only done on translation fault, what are
the odds that this would report a different number than that of
translation faults (assuming it was actually implemented)?

It is also interesting that you attribute the same cost to flushing a
4kB page or a 1GB block. And what does this mean when either FWB or
IDC are available?

        M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
[email protected]
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm