From: Andi Kleen <[email protected]>

The topdown metrics and slots events are mapped to fixed counters,
but should keep their normal weight for the scheduler.
So special-case them.

Signed-off-by: Andi Kleen <[email protected]>
Signed-off-by: Kan Liang <[email protected]>
---
 arch/x86/events/intel/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2eec172765f4..6de9249acb28 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5281,6 +5281,15 @@ __init int intel_pmu_init(void)
                 * counter, so do not extend mask to generic counters
                 */
                for_each_event_constraint(c, x86_pmu.event_constraints) {
+                       /*
+                        * Don't limit the event mask for TopDown
+                        * metrics and slots events.
+                        */
+                       if (x86_pmu.num_counters_fixed >= 3 &&
+                           c->idxmsk64 & INTEL_PMC_MSK_ANY_SLOTS) {
+                               c->weight = hweight64(c->idxmsk64);
+                               continue;
+                       }
                        if (c->cmask == FIXED_EVENT_FLAGS
                            && c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
                                c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
-- 
2.14.5
