On Thu, Mar 21, 2019 at 01:56:44PM -0700, [email protected] wrote:
> +static inline void *next_pebs_record(void *p)
> +{
> +     struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +     unsigned int size;
> +
> +     if (x86_pmu.intel_cap.pebs_format < 4)
> +             size = x86_pmu.pebs_record_size;
> +     else
> +             size = cpuc->pebs_record_size;
> +     return p + size;
> +}

> @@ -1323,19 +1580,19 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
>       if (base == NULL)
>               return NULL;
>  
> -     for (at = base; at < top; at += x86_pmu.pebs_record_size) {
> -             struct pebs_record_nhm *p = at;
> +     for (at = base; at < top; at = next_pebs_record(at)) {
> +             unsigned long status = get_pebs_status(at);

afaict we do not mix base and adaptive records, and thus the above
really could use cpuc->pebs_record_size unconditionally, right?
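
Something like the below is what I have in mind; a sketch only, and it
assumes cpuc->pebs_record_size is also set up for pebs_format < 4 (e.g.
copied from x86_pmu.pebs_record_size when the event is programmed),
completely untested:

	/*
	 * Sketch: assumes cpuc->pebs_record_size is valid for both the
	 * basic (pebs_format < 4) and adaptive record layouts.
	 */
	static inline void *next_pebs_record(void *p)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

		return p + cpuc->pebs_record_size;
	}

That keeps the loop in get_next_pebs_record_by_bit() as a single
unconditional stride.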

