On Thu, Jan 22, 2026 at 8:49 AM Sean Christopherson <[email protected]> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > Add pmc_hostonly and pmc_guestonly bitmaps to struct kvm_pmu to track which
> > guest-enabled performance counters have just one of the Host-Only and
> > Guest-Only event selector bits set. PMCs that are disabled, have neither
> > HG_ONLY bit set, or have both HG_ONLY bits set are not tracked, because
> > they don't require special handling at vCPU state transitions.
>
> Why bother with bitmaps? The bitmaps are basically just eliding the checks in
> amd_pmc_is_active() (my name), and those checks are super fast compared to
> emulating transitions between L1 and L2.
>
> Can't we simply do:
>
> 	void amd_pmu_refresh_host_guest_eventsels(struct kvm_vcpu *vcpu)
> 	{
> 		struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> 		struct kvm_pmc *pmc;
> 		int i;
>
> 		kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx)
> 			amd_pmu_set_eventsel_hw(pmc);
> 	}
>
> And then call that helper on all transitions?
> > +static void amd_pmu_update_hg_bitmaps(struct kvm_pmc *pmc)
> > +{
> > +	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> > +	u64 eventsel = pmc->eventsel;
> > +
> > +	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
> > +		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > +		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > +		return;
> > +	}
> > +
> > +	switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
> > +	case AMD64_EVENTSEL_HOSTONLY:
> > +		bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
> > +		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > +		break;
> > +	case AMD64_EVENTSEL_GUESTONLY:
> > +		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > +		bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
> > +		break;
> > +	default:
> > +		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > +		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > +		break;
> > +	}
> > +}
> > +
> >  static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
> >  {
> >  	u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> > @@ -196,6 +223,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		if (data != pmc->eventsel) {
> >  			pmc->eventsel = data;
> >  			amd_pmu_set_eventsel_hw(pmc);
> > +			amd_pmu_update_hg_bitmaps(pmc);
>
> If we're going to bother adding amd_pmu_set_eventsel_hw(), and not reuse it as
> suggested above, then amd_pmu_set_eventsel_hw() should be renamed to just
> amd_pmu_set_eventsel() and it should be the one configuring the bitmaps,
> because KVM should never write to an eventsel without updating the bitmaps.
> That would also better capture the relationship between the bitmaps and
> eventsel_hw, e.g.
>
> 	pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> 			   AMD64_EVENTSEL_GUESTONLY;
>
> 	if (!amd_pmc_is_active(pmc))
> 		pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
>
> 	/*
> 	 * Update the host/guest bitmaps used to reconfigure eventsel_hw on
> 	 * transitions to/from an L2 guest, so that KVM can quickly refresh
> 	 * the event selectors programmed into hardware, e.g. without having
> 	 * to
> 	 */
> 	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
> 		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> 		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> 		return;
> 	}
>
> 	switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
> 	case AMD64_EVENTSEL_HOSTONLY:
> 		bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
> 		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> 		break;
> 	case AMD64_EVENTSEL_GUESTONLY:
> 		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> 		bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
> 		break;
> 	default:
> 		bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> 		bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> 		break;
> 	}
>
> But I still don't see any point in the bitmaps.
No problem. I will drop them in the next version.

