Oliver Upton <[email protected]> writes:

On Mon, May 04, 2026 at 09:18:03PM +0000, Colton Lewis wrote:
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in the range HPMN..N-1.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+       u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+       if (kvm_pmu_is_partitioned(pmu))
+               return GENMASK(nr_counters - 1, pmu->max_guest_counters);
+
+       return ARMV8_PMU_CNT_MASK_ALL;
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in the range 0..HPMN-1, plus the cycle and instruction counters.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+       if (kvm_pmu_is_partitioned(pmu))
+               return ARMV8_PMU_CNT_MASK_C | GENMASK(pmu->max_guest_counters - 1, 0);
+
+       return 0;
+}
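As a sanity check, the partitioned-case mask arithmetic in the two helpers above can be exercised in plain userspace C. This is only an illustrative sketch: GENMASK() is reimplemented locally rather than taken from include/linux/bits.h, and ARMV8_PMU_CNT_MASK_C is assumed to be bit 31 (the PMCNTENSET_EL0.C position). With N = 10 counters and HPMN = 6, the host and guest masks are disjoint and together cover all event counters:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's GENMASK() (include/linux/bits.h). */
#define GENMASK(h, l) ((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* Assumption: the cycle counter bit is bit 31, as in PMCNTENSET_EL0.C. */
#define ARMV8_PMU_CNT_MASK_C (1ULL << 31)

/* Host-reserved counters in the partitioned case: HPMN..N-1. */
static uint64_t host_counter_mask(uint8_t nr_counters, uint8_t max_guest_counters)
{
	return GENMASK(nr_counters - 1, max_guest_counters);
}

/* Guest-reserved counters in the partitioned case: 0..HPMN-1 plus cycle. */
static uint64_t guest_counter_mask(uint8_t max_guest_counters)
{
	return ARMV8_PMU_CNT_MASK_C | GENMASK(max_guest_counters - 1, 0);
}
```

With these illustrative values, host_counter_mask(10, 6) selects bits 6..9 (0x3C0) and guest_counter_mask(6) selects bits 0..5 plus bit 31 (0x8000003F); their intersection is empty, as required for the partitioning.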
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+       struct arm_pmu *pmu;
+       unsigned long guest_counters;
+       u64 mask;
+       u8 i;
+       u64 val;
+
+       /*
+        * If we aren't guest-owned then we know the guest isn't using
+        * the PMU anyway, so no need to bother with the swap.
+        */
+       if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+               return;
+
+       preempt_disable();
+
+       pmu = vcpu->kvm->arch.arm_pmu;
+       guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+       for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+               val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+
+               if (i == ARMV8_PMU_CYCLE_IDX) {
+                       write_sysreg(val, pmccntr_el0);
+               } else {
+                       write_sysreg(i, pmselr_el0);
+                       write_sysreg(val, pmxevcntr_el0);

This is wrong, you would need an intervening ISB. It'd be better to
avoid the ISB altogether and just use {read,write}_pmevcntrn().

Good catch. I was using {read,write}_pmevcntrn() here before, but changed
it after your feedback that:

I'd prefer KVM directly accessed the PMU registers to
avoid the possibility of taking some instrumented codepath in the
future.

https://lore.kernel.org/kvm/[email protected]/

I assumed the direct PMSELR_EL0/PMXEVCNTR_EL0 accesses were a compromise
with that.
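For reference, if the indirect route were kept, the select-then-access
sequence would need a context synchronization event between the two
writes; sketch only, since the {read,write}_pmevcntrn() route avoids the
ISB entirely:

```c
/* The write to PMSELR_EL0 is not guaranteed to be visible to a
 * subsequent PMXEVCNTR_EL0 access without a context synchronization
 * event in between.
 */
write_sysreg(i, pmselr_el0);
isb();
write_sysreg(val, pmxevcntr_el0);
```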
