If the cpumask is allocated before the thread-group search and the search fails, the cpumask has to be freed on the error path. However, cpu_l1_cache_map can instead be allocated after the thread-group search has succeeded, so the error path needs no cleanup.
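To illustrate the ordering, here is a minimal user-space C sketch of the same pattern (find_group() and init_map() are hypothetical stand-ins for illustration only, not the kernel code): the resource is allocated only after the search succeeds, so the failure path is a plain return with nothing to free.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for get_cpu_thread_group_start(): -1 on failure. */
static int find_group(int cpu)
{
	return (cpu >= 0 && cpu < 8) ? cpu / 4 : -1;
}

static int init_map(int cpu, char **map)
{
	int group = find_group(cpu);

	/*
	 * Search first: if it fails, nothing has been allocated yet,
	 * so the error path does not need a free().
	 */
	if (group == -1)
		return -1;

	/* Allocate only after the search has succeeded. */
	*map = calloc(64, 1);
	if (!*map)
		return -1;

	snprintf(*map, 64, "cpu %d -> group %d", cpu, group);
	return 0;
}

int main(void)
{
	char *map = NULL;

	if (init_map(2, &map) == 0) {
		puts(map);
		free(map);
	}

	if (init_map(42, &map) != 0)
		puts("search failed, no allocation to free");

	return 0;
}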
Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Oliver O'Halloran <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Michael Neuling <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Jordan Niethe <[email protected]>
Reviewed-by: Gautham R. Shenoy <[email protected]>
Signed-off-by: Srikar Dronamraju <[email protected]>
---
 arch/powerpc/kernel/smp.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index cbca4a8c3314..7d8d44cbab11 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -797,10 +797,6 @@ static int init_cpu_l1_cache_map(int cpu)
 	if (err)
 		goto out;
 
-	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
-				GFP_KERNEL,
-				cpu_to_node(cpu));
-
 	cpu_group_start = get_cpu_thread_group_start(cpu, &tg);
 
 	if (unlikely(cpu_group_start == -1)) {
@@ -809,6 +805,9 @@
 		goto out;
 	}
 
+	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
+				GFP_KERNEL, cpu_to_node(cpu));
+
 	for (i = first_thread; i < first_thread + threads_per_core; i++) {
 		int i_group_start = get_cpu_thread_group_start(i, &tg);
 
-- 
2.18.2

