On Mon, Feb 11, 2019 at 03:20:18PM +0100, Greg Kroah-Hartman wrote:
> 4.9-stable review patch.  If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Andi Kleen <a...@linux.intel.com>
> 
> commit a7e3ed1e470116c9d12c2f778431a481a6be8ab6 upstream.

The patch doesn't seem to match the commit log.

Did something get mixed up?

> Unfortunately this event requires programming a mask in a separate
> register. And worse this separate register is per core, not per
> CPU thread.
> 
> This patch:
> 
> - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
>   The extra parameters are passed by user space in the
>   perf_event_attr::config1 field.
> 
> - Adds support to the Intel perf_event core to schedule per
>   core resources. This adds fairly generic infrastructure that
>   can be also used for other per core resources.
>   The basic code has is patterned after the similar AMD northbridge
>   constraints code.
> 
> Thanks to Stephane Eranian who pointed out some problems
> in the original version and suggested improvements.
> 
> Signed-off-by: Andi Kleen <a...@linux.intel.com>
> Signed-off-by: Lin Ming <ming.m....@intel.com>
> Signed-off-by: Peter Zijlstra <a.p.zijls...@chello.nl>
> LKML-Reference: <1299119690-13991-2-git-send-email-ming.m....@intel.com>
> Signed-off-by: Ingo Molnar <mi...@elte.hu>
> [ He Zhe: Fixes a conflict caused by the missing disable_counter_freeze,
>  which was introduced in v4.20 by commit af3bdb991a5cb. ]
> Signed-off-by: He Zhe <zhe...@windriver.com>
> Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
> 
> ---
>  arch/x86/events/intel/core.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3235,6 +3235,11 @@ static void free_excl_cntrs(int cpu)
>  
>  static void intel_pmu_cpu_dying(int cpu)
>  {
> +     fini_debug_store_on_cpu(cpu);
> +}
> +
> +static void intel_pmu_cpu_dead(int cpu)
> +{
>       struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
>       struct intel_shared_regs *pc;
>  
> @@ -3246,8 +3251,6 @@ static void intel_pmu_cpu_dying(int cpu)
>       }
>  
>       free_excl_cntrs(cpu);
> -
> -     fini_debug_store_on_cpu(cpu);
>  }
>  
>  static void intel_pmu_sched_task(struct perf_event_context *ctx,
> @@ -3324,6 +3327,7 @@ static __initconst const struct x86_pmu
>       .cpu_prepare            = intel_pmu_cpu_prepare,
>       .cpu_starting           = intel_pmu_cpu_starting,
>       .cpu_dying              = intel_pmu_cpu_dying,
> +     .cpu_dead               = intel_pmu_cpu_dead,
>  };
>  
>  static __initconst const struct x86_pmu intel_pmu = {
> @@ -3359,6 +3363,8 @@ static __initconst const struct x86_pmu
>       .cpu_prepare            = intel_pmu_cpu_prepare,
>       .cpu_starting           = intel_pmu_cpu_starting,
>       .cpu_dying              = intel_pmu_cpu_dying,
> +     .cpu_dead               = intel_pmu_cpu_dead,
> +
>       .guest_get_msrs         = intel_guest_get_msrs,
>       .sched_task             = intel_pmu_sched_task,
>  };
> 
> 
