On Wed, 26 Mar 2014 23:01:07 +0100 Stephane Eranian <[email protected]> wrote:

> On Wed, Mar 26, 2014 at 9:31 PM, Andrew Morton
> <[email protected]> wrote:
> > On Tue, 25 Mar 2014 01:59:10 +0300 Artem Fetishev <[email protected]> 
> > wrote:
> >
> >> On x86 uniprocessor systems topology_physical_package_id() returns -1,
> >> which causes rapl_cpu_prepare() to leave the rapl_pmu variable
> >> uninitialized, leading to a GPF in rapl_pmu_init(). See
> >> arch/x86/kernel/cpu/perf_event_intel_rapl.c.
> >>
> >> It turns out that physical_package_id and core_id can actually be
> >> retrieved on uniprocessor systems too. Enabling them also fixes the
> >> rapl_pmu code.
> >>
> >> Signed-off-by: Artem Fetishev <[email protected]>
> >> ---
> >> diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
> >> index d35f24e..1306d11 100644
> >> --- a/arch/x86/include/asm/topology.h
> >> +++ b/arch/x86/include/asm/topology.h
> >> @@ -119,9 +119,10 @@ static inline void setup_node_to_cpumask_map(void) { }
> >>
> >>  extern const struct cpumask *cpu_coregroup_mask(int cpu);
> >>
> >> -#ifdef ENABLE_TOPO_DEFINES
> >>  #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
> >>  #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
> >> +
> >> +#ifdef ENABLE_TOPO_DEFINES
> >>  #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
> >>  #define topology_thread_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
> >>  #endif
> >
> > The patch applies to 3.13 and perhaps earlier kernels.  Is it needed in
> > those kernel versions?
> 
> Before 3.13 there was no RAPL support.
> But it seems the patch is still useful regardless.

Is that an ack?  If so I'll squirt it Linuswards right now for 3.14.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/