On Thu, Aug 08, 2013 at 06:15:43PM -0700, Andi Kleen wrote:
> +++ b/arch/x86/kernel/cpu/perf_event_intel.c

> @@ -1141,6 +1146,17 @@ static void intel_pmu_enable_event(struct perf_event *event)
>  int intel_pmu_save_and_restart(struct perf_event *event)
>  {
>       x86_perf_event_update(event);
> +     /*
> +      * For a checkpointed counter always reset back to 0.  This
> +      * avoids a situation where the counter overflows, aborts the
> +      * transaction and is then set back to shortly before the
> +      * overflow, and overflows and aborts again.
> +      */
> +     if (unlikely(event_is_checkpointed(event))) {
> +             /* No race with NMIs because the counter should not be armed */
> +             wrmsrl(event->hw.event_base, 0);
> +             local64_set(&event->hw.prev_count, 0);
> +     }

Right, if it weren't for KVM you could've done a smaller special-case
handler for checkpointed events, but as it stands I suppose it makes
sense to use the normal paths.
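
For reference, something like this is what I had in mind (untested
sketch; the helper name is made up, the body is lifted straight from
your patch):

	/*
	 * A checkpointed event always restarts from 0, so the generic
	 * update/set_period dance could be skipped entirely.
	 */
	static void intel_pmu_restart_checkpointed(struct perf_event *event)
	{
		/* the counter is not armed here, so no race with NMIs */
		wrmsrl(event->hw.event_base, 0);
		local64_set(&event->hw.prev_count, 0);
	}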

>       return x86_perf_event_set_period(event);
>  }
>  
> @@ -1224,6 +1240,15 @@ again:
>               x86_pmu.drain_pebs(regs);
>       }
>  
> +     /*
> +      * To avoid spurious interrupts with perf stat always reset checkpointed
> +      * counters.
> +      *
> +      * XXX move somewhere else.

Like where? AFAICT it needs to be here. You could write it prettier, I
suppose, and I guess we'll eventually need to assume all events can be
checkpointed, but I don't see how it could be done elsewhere; a rough
sketch of what I mean is below the hunk.

> +      */
> +     if (cpuc->events[2] && event_is_checkpointed(cpuc->events[2]))
> +             status |= (1ULL << 2);
> +
>       for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
>               struct perf_event *event = cpuc->events[bit];
>  
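
Something like the below is roughly what I mean by prettier (untested
sketch; it assumes cpuc->active_mask covers all armed counters and that
any of them may be checkpointed, not just counter 2):

	int bit;

	for_each_set_bit(bit, cpuc->active_mask, X86_PMC_IDX_MAX) {
		struct perf_event *event = cpuc->events[bit];

		/*
		 * Fold every checkpointed counter into the status mask
		 * so it is always probed, whether or not its overflow
		 * bit is actually set.
		 */
		if (event && event_is_checkpointed(event))
			status |= (1ULL << bit);
	}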

