On Wed, 1 Mar 2017, Thomas Gleixner wrote:

        WARN_ON(c->x86_cache_occ_scale != cqm_l3_scale);

@@ -1585,12 +1580,17 @@ static int intel_cqm_cpu_starting(unsigned int cpu)

 static int intel_cqm_cpu_exit(unsigned int cpu)
 {
+       struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);

Can be this_cpu_ptr() because the callback is guaranteed to run on the
outgoing CPU.

Will fix this. I had assumed the callbacks are set up the cache alloc way - cpuhp_setup_state(CPUHP_AP_ONLINE_DYN ..
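
i.e. the state lookup would just become something like this (a quick sketch, relying on the callback running on the outgoing CPU as you point out):

        /* The hotplug exit callback runs on the outgoing CPU itself */
        struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);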


        int target;

        /* Is @cpu the current cqm reader for this package ? */
        if (!cpumask_test_and_clear_cpu(cpu, &cqm_cpumask))
                return 0;

So if the CPU is not the current cqm reader then the per cpu state of this
CPU is left stale. Great improvement.

+       state->rmid = 0;
+       state->rmid_usecnt = 0;
+       wrmsr(MSR_IA32_PQR_ASSOC, 0, state->closid);

What clears state->closid? And what guarantees that state->rmid is not
updated before the CPU has really gone away?

- The rdt code takes care of clearing the closid state now. Will update the comment.
- The cqm code, however, was never writing a zero to PQR_ASSOC.

So the update needs to:
- remove state->closid = 0 from the cqm code, since the rdt code takes care of the closid state in clear_closid(), which is called from both the offline and online cpu paths, and
- also write rmid = 0 to PQR_ASSOC.
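
Roughly along these lines - a sketch of the intended exit path, where the reader hand-over at the end follows the existing logic and the exact shape may still change when folded into the cqm series:

static int intel_cqm_cpu_exit(unsigned int cpu)
{
	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
	int target;

	/*
	 * Clear the RMID state unconditionally, before the reader
	 * check, so it is not left stale on CPUs which are not the
	 * current cqm reader for the package.
	 */
	state->rmid = 0;
	state->rmid_usecnt = 0;

	/*
	 * Write rmid = 0 back to the MSR. The closid is not touched
	 * here; the rdt code clears it in clear_closid() on both the
	 * offline and online paths.
	 */
	wrmsr(MSR_IA32_PQR_ASSOC, 0, state->closid);

	/* Is @cpu the current cqm reader for this package ? */
	if (!cpumask_test_and_clear_cpu(cpu, &cqm_cpumask))
		return 0;

	/* Pick another online CPU in the package as the cqm reader */
	target = cpumask_any_but(topology_core_cpumask(cpu), cpu);
	if (target < nr_cpu_ids)
		cpumask_set_cpu(target, &cqm_cpumask);

	return 0;
}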

We can integrate these two hotplug callbacks (from CAT and cqm) so that PQR_ASSOC is written only once.
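
Purely illustrative, with a made-up helper name (nothing like this exists yet), the combined path could look like:

/*
 * Hypothetical shared helper, called once from a combined CAT/cqm
 * hotplug exit path so that PQR_ASSOC is written exactly once.
 */
static void pqr_cpu_exit_clear(void)
{
	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

	state->rmid = 0;
	state->rmid_usecnt = 0;
	state->closid = 0;

	/* One write covers both the RMID (cqm) and CLOSID (CAT) fields */
	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
}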

I guess I can skip all of this here and send it as part of the cqm changes we planned anyway, since this is really a cqm change.

Thanks,
Vikas
