On Fri, Feb 06, 2026, Jim Mattson wrote:
> Add amd_pmu_refresh_host_guest_eventsel_hw() to recalculate eventsel_hw for
> all PMCs based on the current vCPU state. This is needed because Host-Only
> and Guest-Only counters must be enabled/disabled at:
> 
>   - SVME changes: When EFER.SVME is modified, counters with Guest-Only bits
>     need their hardware enable state updated.
> 
>   - Nested transitions: When entering or leaving guest mode, Host-Only
>     counters should be disabled/enabled and Guest-Only counters should be
>     enabled/disabled accordingly.
> 
> Add a nested_transition() callback to kvm_x86_ops and call it from
> enter_guest_mode() and leave_guest_mode() to ensure the PMU state stays
> synchronized with guest mode transitions.

Blech, I'm not a fan of this kvm_x86_ops hook.  I especially don't like calling
out to vendor code from {enter,leave}_guest_mode().  The subtle dependency on
vcpu->arch.efer being up-to-date in svm_set_efer() is a little nasty too.
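
E.g. the refresh helper only does the right thing if it runs after the new
EFER value has been committed (illustrative sketch, not the actual patch;
placement and body paraphrased):

	static int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
	{
		...
		vcpu->arch.efer = efer;

		/*
		 * Relies on vcpu->arch.efer already holding the new value:
		 * the refresh consults EFER.SVME to decide whether the
		 * Host-Only/Guest-Only bits have any effect.  Hoist this
		 * above the EFER write and eventsel_hw gets computed from
		 * stale state.
		 */
		amd_pmu_refresh_host_guest_eventsel_hw(vcpu);
		...
	}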

More importantly, I think this series is actively buggy, as I don't see anything
in amd_pmu_refresh_host_guest_eventsel_hw() that restricts it to the mediated
PMU.  And I'm pretty sure that path will bypass the PMU event filter.  And I
believe kvm_pmu_recalc_pmc_emulation() also needs to be invoked so that emulated
instructions are counted correctly.

To avoid ordering issues and bugs where event filtering and guest/host handling
clobber each other, I think we should funnel all processing through KVM_REQ_PMU,
and then do something like this:

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 14e2cbab8312..a2a9492063f7 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -227,7 +227,8 @@ static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
 {
        vcpu->arch.hflags |= HF_GUEST_MASK;
        vcpu->stat.guest_mode = 1;
-       kvm_x86_call(nested_transition)(vcpu);
+
+       kvm_pmu_handle_nested_transition(vcpu);
 }
 
 static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
@@ -240,7 +241,8 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
        }
 
        vcpu->stat.guest_mode = 0;
-       kvm_x86_call(nested_transition)(vcpu);
+
+       kvm_pmu_handle_nested_transition(vcpu);
 }
 
 static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0925246731cb..098dae2d45b4 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -244,6 +244,18 @@ static inline bool kvm_pmu_is_fastpath_emulation_allowed(struct kvm_vcpu *vcpu)
                                  X86_PMC_IDX_MAX);
 }
 
+static inline void kvm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu)
+{
+       if (!kvm_vcpu_has_mediated_pmu(vcpu))
+               return;
+
+       if (vcpu_to_pmu(vcpu)->reserved_bits & AMD64_EVENTSEL_HOST_GUEST_MASK)
+               return;
+
+       atomic64_set(&vcpu_to_pmu(vcpu)->__reprogram_pmi, -1ull);
+       kvm_make_request(KVM_REQ_PMU, vcpu);
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
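
For posterity, the reason funneling through KVM_REQ_PMU picks up the event
filter for free is that kvm_pmu_handle_event() reprograms every counter whose
reprogram_pmi bit is set, and reprogram_counter() is where
check_pmu_event_filter() lives.  Very roughly (simplified sketch of the
existing handler, not verbatim):

	void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
	{
		struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
		struct kvm_pmc *pmc;
		int bit;

		/*
		 * Simplified: the real code snapshots and clears the bitmap
		 * up front, and re-sets bits for counters that fail to
		 * reprogram.
		 */
		kvm_for_each_pmc(pmu, pmc, bit, pmu->reprogram_pmi)
			reprogram_counter(pmc);
	}

Setting __reprogram_pmi to -1ull thus forces a full re-evaluation of every
counter the next time KVM_REQ_PMU is serviced, and that path is also the
natural home for the kvm_pmu_recalc_pmc_emulation() call.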
