On 2/7/2026 6:53 AM, Jim Mattson wrote:
> Update amd_pmu_set_eventsel_hw() to clear the event selector's hardware
> enable bit when the PMC should not count based on the guest's Host-Only and
> Guest-Only event selector bits and the current vCPU state.
> 
> Signed-off-by: Jim Mattson <[email protected]>
> ---
>  arch/x86/include/asm/perf_event.h |  2 ++
>  arch/x86/kvm/svm/pmu.c            | 18 ++++++++++++++++++
>  2 files changed, 20 insertions(+)
> 
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 0d9af4135e0a..4dfe12053c09 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -58,6 +58,8 @@
>  #define AMD64_EVENTSEL_INT_CORE_ENABLE                       (1ULL << 36)
>  #define AMD64_EVENTSEL_GUESTONLY                     (1ULL << 40)
>  #define AMD64_EVENTSEL_HOSTONLY                              (1ULL << 41)
> +#define AMD64_EVENTSEL_HOST_GUEST_MASK                       \
> +     (AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
>  
>  #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT            37
>  #define AMD64_EVENTSEL_INT_CORE_SEL_MASK             \
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index d9ca633f9f49..8d451110a94d 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -149,8 +149,26 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  
>  static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
>  {
> +     struct kvm_vcpu *vcpu = pmc->vcpu;
> +     u64 host_guest_bits;
> +
>       pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
>                          AMD64_EVENTSEL_GUESTONLY;
> +
> +     if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
> +             return;
> +
> +     if (!(vcpu->arch.efer & EFER_SVME))
> +             return;
> +
> +     host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
> +     if (!host_guest_bits || host_guest_bits == AMD64_EVENTSEL_HOST_GUEST_MASK)
> +             return;
> +
> +     if (!!(host_guest_bits & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu))
> +             return;

This seems to disable the PMCs after exits from an L2 guest to the L0
hypervisor. For such transitions, the corresponding L1 vCPU's PMC has
GuestOnly set, but is_guest_mode() is false, since this function is
called at the very end of leave_guest_mode(), after HF_GUEST_MASK has
been cleared from vcpu->arch.hflags and vcpu->stat.guest_mode has been
set to 0.

Is this a correct interpretation of the condition above?
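
To make my reading concrete, here is a minimal standalone userspace
sketch of how I understand the gating decision (the constants and the
counter_enabled() helper are local stand-ins for the kernel macros and
for amd_pmu_set_eventsel_hw(), not the actual kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bit positions mirror perf_event.h: ENABLE is bit 22, GO/HO are bits 40/41. */
#define EVENTSEL_ENABLE		(1ULL << 22)
#define EVENTSEL_GUESTONLY	(1ULL << 40)
#define EVENTSEL_HOSTONLY	(1ULL << 41)
#define HOST_GUEST_MASK		(EVENTSEL_HOSTONLY | EVENTSEL_GUESTONLY)

/* Returns true if the hardware ENABLE bit would be left set in eventsel_hw. */
static bool counter_enabled(uint64_t eventsel, bool svme, bool guest_mode)
{
	uint64_t hg = eventsel & HOST_GUEST_MASK;

	if (!(eventsel & EVENTSEL_ENABLE))
		return false;	/* the guest never enabled the PMC */
	if (!svme)
		return true;	/* EFER.SVME clear: HO/GO have no effect */
	if (!hg || hg == HOST_GUEST_MASK)
		return true;	/* neither or both bits set: count always */
	/* Otherwise the selected mode must match the current vCPU state. */
	return !!(hg & EVENTSEL_GUESTONLY) == guest_mode;
}

int main(void)
{
	uint64_t sel = EVENTSEL_ENABLE | EVENTSEL_GUESTONLY;

	/* L1 selector with only GuestOnly set, right after leave_guest_mode(): */
	printf("GuestOnly, is_guest_mode()==false -> %d\n",
	       counter_enabled(sel, true, false));	/* prints 0: ENABLE cleared */
	printf("GuestOnly, is_guest_mode()==true  -> %d\n",
	       counter_enabled(sel, true, true));	/* prints 1: ENABLE kept */
	return 0;
}

If that model is right, the first case is exactly the transition I
describe above, and the counter comes back disabled.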

> +
> +     pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
>  }
>  
>  static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)

