Michael Ellerman debugged an issue with the workqueue changes
(see https://lkml.org/lkml/2016/10/17/352) down to the fact
that we don't set up our per-cpu (cpu to node) binding early
enough (in setup_per_cpu_areas(), as x86 does).

This led to a problem with the workqueue changes where the
set of nodes seen by for_each_node() in workqueue_init_early()
differed from the cpu-to-node binding seen later in:

for_each_possible_cpu(cpu) {
        node = cpu_to_node(cpu);
        ...
}

In setup_arch()->initmem_init() we already have access to the
binding in the numa_cpu_lookup_table[] array.

This patch implements Michael's suggestion of setting up
the per-cpu node binding inside setup_per_cpu_areas().

I did not remove the original setting of these values
from smp_prepare_cpus(). I've also not set up per-cpu
mems via set_cpu_numa_mem(), since zonelists are not
yet built by the time we do the per-cpu setup.

Reported-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Balbir Singh <bsinghar...@gmail.com>
 arch/powerpc/kernel/setup_64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index c3e1290..842415a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -625,6 +625,8 @@ void __init setup_per_cpu_areas(void)
        for_each_possible_cpu(cpu) {
                 __per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
                paca[cpu].data_offset = __per_cpu_offset[cpu];
+               set_cpu_numa_node(cpu, numa_cpu_lookup_table[cpu]);
