Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-07-15 Thread Tejun Heo
On Fri, Jul 15, 2016 at 03:30:41PM +1000, Michael Ellerman wrote:
> It looks like this still hasn't gone to Linus for 4.7?
> 
> Could it please, it's a pretty nasty regression on our boxes.

Sorry about that.  Just sent out the pull request.

Thanks.

-- 
tejun
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-07-14 Thread Michael Ellerman
Tejun Heo  writes:

> On Thu, Jun 16, 2016 at 02:45:48PM +0200, Peter Zijlstra wrote:
>> Subject: workqueue: Fix setting affinity of unbound worker threads
>> From: Peter Zijlstra 
>> Date: Thu Jun 16 14:38:42 CEST 2016
>> 
>> With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
>> run on online && !active"), __set_cpus_allowed_ptr() expects that only
>> strict per-cpu kernel threads can have affinity to an online CPU which
>> is not yet active.
>> 
>> This assumption is currently broken in the CPU_ONLINE notification
>> handler for the workqueues where restore_unbound_workers_cpumask()
>> calls set_cpus_allowed_ptr() when the first cpu in the unbound
>> worker's pool->attr->cpumask comes online. Since
>> set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
>> only one CPU is online which is not yet active, we get the following
>> WARN_ON during a CPU online operation.
>
> Applied to wq/for-4.7-fixes.

It looks like this still hasn't gone to Linus for 4.7?

Could it please, it's a pretty nasty regression on our boxes.

cheers

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-07-14 Thread Gautham R Shenoy
Hi Tejun,

On Thu, Jun 16, 2016 at 03:39:05PM -0400, Tejun Heo wrote:
> On Thu, Jun 16, 2016 at 02:45:48PM +0200, Peter Zijlstra wrote:
> > Subject: workqueue: Fix setting affinity of unbound worker threads
> > From: Peter Zijlstra 
> > Date: Thu Jun 16 14:38:42 CEST 2016
> > 
> > With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
> > run on online && !active"), __set_cpus_allowed_ptr() expects that only
> > strict per-cpu kernel threads can have affinity to an online CPU which
> > is not yet active.
> > 
> > This assumption is currently broken in the CPU_ONLINE notification
> > handler for the workqueues where restore_unbound_workers_cpumask()
> > calls set_cpus_allowed_ptr() when the first cpu in the unbound
> > worker's pool->attr->cpumask comes online. Since
> > set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
> > only one CPU is online which is not yet active, we get the following
> > WARN_ON during a CPU online operation.
> 
> Applied to wq/for-4.7-fixes.

Did this patch get missed by any chance? It is not in
the master branch of
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git yet.

We're still hitting the WARN_ON() during boot-up on the mainline kernel.
> Thanks.
> 
> -- 
> tejun
> 

--
Thanks and Regards
gautham.


Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-16 Thread Michael Ellerman
On Thu, 2016-06-16 at 15:39 -0400, Tejun Heo wrote:
> On Thu, Jun 16, 2016 at 02:45:48PM +0200, Peter Zijlstra wrote:
> > Subject: workqueue: Fix setting affinity of unbound worker threads
> > From: Peter Zijlstra 
> > Date: Thu Jun 16 14:38:42 CEST 2016
> > 
> > With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
> > run on online && !active"), __set_cpus_allowed_ptr() expects that only
> > strict per-cpu kernel threads can have affinity to an online CPU which
> > is not yet active.
> > 
> > This assumption is currently broken in the CPU_ONLINE notification
> > handler for the workqueues where restore_unbound_workers_cpumask()
> > calls set_cpus_allowed_ptr() when the first cpu in the unbound
> > worker's pool->attr->cpumask comes online. Since
> > set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
> > only one CPU is online which is not yet active, we get the following
> > WARN_ON during a CPU online operation.
> 
> Applied to wq/for-4.7-fixes.

Thanks all.

cheers


Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-16 Thread Tejun Heo
On Thu, Jun 16, 2016 at 02:45:48PM +0200, Peter Zijlstra wrote:
> Subject: workqueue: Fix setting affinity of unbound worker threads
> From: Peter Zijlstra 
> Date: Thu Jun 16 14:38:42 CEST 2016
> 
> With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
> run on online && !active"), __set_cpus_allowed_ptr() expects that only
> strict per-cpu kernel threads can have affinity to an online CPU which
> is not yet active.
> 
> This assumption is currently broken in the CPU_ONLINE notification
> handler for the workqueues where restore_unbound_workers_cpumask()
> calls set_cpus_allowed_ptr() when the first cpu in the unbound
> worker's pool->attr->cpumask comes online. Since
> set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
> only one CPU is online which is not yet active, we get the following
> WARN_ON during a CPU online operation.

Applied to wq/for-4.7-fixes.

Thanks.

-- 
tejun

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-16 Thread Peter Zijlstra
On Thu, Jun 16, 2016 at 10:11:24PM +1000, Michael Ellerman wrote:
> Peterz do you want to send a SOB'ed patch, or can we take what you posted and
> add your SOB?

So I took Ego's first patch (so as to not steal his credits); take that
one, and then see the patch below.

---
Subject: workqueue: Fix setting affinity of unbound worker threads
From: Peter Zijlstra 
Date: Thu Jun 16 14:38:42 CEST 2016

With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
run on online && !active"), __set_cpus_allowed_ptr() expects that only
strict per-cpu kernel threads can have affinity to an online CPU which
is not yet active.

This assumption is currently broken in the CPU_ONLINE notification
handler for the workqueues where restore_unbound_workers_cpumask()
calls set_cpus_allowed_ptr() when the first cpu in the unbound
worker's pool->attr->cpumask comes online. Since
set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
only one CPU is online which is not yet active, we get the following
WARN_ON during a CPU online operation.

[ cut here ]
WARNING: CPU: 40 PID: 248 at kernel/sched/core.c:1166
__set_cpus_allowed_ptr+0x228/0x2e0
Modules linked in:
CPU: 40 PID: 248 Comm: cpuhp/40 Not tainted 4.6.0-autotest+ #4
<..snip..>
Call Trace:
[c00f273ff920] [c010493c] __set_cpus_allowed_ptr+0x2cc/0x2e0 (unreliable)
[c00f273ffac0] [c00ed4b0] workqueue_cpu_up_callback+0x2c0/0x470
[c00f273ffb70] [c00f5c58] notifier_call_chain+0x98/0x100
[c00f273ffbc0] [c00c5ed0] __cpu_notify+0x70/0xe0
[c00f273ffc00] [c00c6028] notify_online+0x38/0x50
[c00f273ffc30] [c00c5214] cpuhp_invoke_callback+0x84/0x250
[c00f273ffc90] [c00c562c] cpuhp_up_callbacks+0x5c/0x120
[c00f273ffce0] [c00c64d4] cpuhp_thread_fun+0x184/0x1c0
[c00f273ffd20] [c00fa050] smpboot_thread_fn+0x290/0x2a0
[c00f273ffd80] [c00f45b0] kthread+0x110/0x130
[c00f273ffe30] [c0009570] ret_from_kernel_thread+0x5c/0x6c
---[ end trace 00f1456578b2a3b2 ]---

This patch fixes this by limiting the mask to the intersection of
the pool affinity and online CPUs.

Changelog-cribbed-from: Gautham R. Shenoy 
Reported-by: Abdul Haleem 
Signed-off-by: Peter Zijlstra (Intel) 
---
 kernel/workqueue.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4600,15 +4600,11 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 	if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
 		return;
 
-	/* is @cpu the only online CPU? */
 	cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
-	if (cpumask_weight(&cpumask) != 1)
-		return;
 
 	/* as we're called from CPU_ONLINE, the following shouldn't fail */
 	for_each_pool_worker(worker, pool)
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-						  pool->attrs->cpumask) < 0);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, &cpumask) < 0);
 }
 
 /*

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-16 Thread Michael Ellerman
On Wed, 2016-06-15 at 12:01 -0400, Tejun Heo wrote:
> On Wed, Jun 15, 2016 at 03:14:15PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 15, 2016 at 06:20:33PM +0530, Gautham R Shenoy wrote:
> > > We will no longer have the optimization in
> > > restore_unbound_workers_cpumask() but I suppose we don't lose much by
> > > resetting the affinity every time a CPU in the pool->attr->cpumask
> > > comes online.
> > 
> > Right; optimizing hotplug really isn't worth it. The code needs to be
> > simple and robust (ha! funny).
> 
> The only case it might matter is CPU hotplug being used aggressively
> for power saving.  No idea how popular that is now tho.
> set_cpus_allowed isn't that expensive and phones don't tend to have
> massive number of kworkers, so hopefully it won't show up.
> 
> > In any case, Tejun, does this work for you?
> 
> I'm not sure about the reordering part but for setting affinity on
> each onlining, no objection.  If it ever shows up as performance /
> power regression, we can revisit it later.

Peterz do you want to send a SOB'ed patch, or can we take what you posted and
add your SOB?

And Tejun are you happy to merge this for rc4?

cheers


Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-15 Thread Tejun Heo
Hello,

On Wed, Jun 15, 2016 at 03:14:15PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 15, 2016 at 06:20:33PM +0530, Gautham R Shenoy wrote:
> > We will no longer have the optimization in
> > restore_unbound_workers_cpumask() but I suppose we don't lose much by
> > resetting the affinity every time a CPU in the pool->attr->cpumask
> > comes online.
> 
> Right; optimizing hotplug really isn't worth it. The code needs to be
> simple and robust (ha! funny).

The only case it might matter is CPU hotplug being used aggressively
for power saving.  No idea how popular that is now tho.
set_cpus_allowed isn't that expensive and phones don't tend to have
massive number of kworkers, so hopefully it won't show up.

> In any case, Tejun, does this work for you?

I'm not sure about the reordering part but for setting affinity on
each onlining, no objection.  If it ever shows up as performance /
power regression, we can revisit it later.

Thanks.

-- 
tejun

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-15 Thread Peter Zijlstra
On Wed, Jun 15, 2016 at 06:20:33PM +0530, Gautham R Shenoy wrote:
> We will no longer have the optimization in
> restore_unbound_workers_cpumask() but I suppose we don't lose much by
> resetting the affinity every time a CPU in the pool->attr->cpumask
> comes online.

Right; optimizing hotplug really isn't worth it. The code needs to be
simple and robust (ha! funny).

In any case, Tejun, does this work for you?

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-15 Thread Gautham R Shenoy
On Wed, Jun 15, 2016 at 01:32:49PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 15, 2016 at 03:49:36PM +0530, Gautham R Shenoy wrote:
> 
> > Also, with the first patch in the series (which ensures that
> > restore_unbound_workers are called *after* the new workers for the
> > newly onlined CPUs are created) and without this one, you can
> > reproduce this WARN_ON on both x86 and PPC by offlining all the CPUs
> > of a node and bringing just one of them online.
> 
> Ah good.
> 
> > I am not sure about that. The workqueue creates unbound workers for a
> > node via wq_update_unbound_numa() whenever the first CPU of every node
> > comes online. So that seems legitimate. It then tries to affine these
> > workers to the cpumask of that node. Again this seems right. As an
> > optimization, it does this only when the first CPU of the node comes
> > online. Since this online CPU is not yet active, and since
> > nr_cpus_allowed > 1, we will hit the WARN_ON().
> 
> So I had another look and isn't the below a much simpler solution?
> 
> It seems to work on my x86 with:
> 
>   for i in /sys/devices/system/cpu/cpu*/online ; do echo 0 > $i ; done
>   for i in /sys/devices/system/cpu/cpu*/online ; do echo 1 > $i ; done
> 
> without complaint.

Yup. This will work on PPC as well. 

We will no longer have the optimization in
restore_unbound_workers_cpumask() but I suppose we don't lose much by
resetting the affinity every time a CPU in the pool->attr->cpumask
comes online.

--
Thanks and Regards
gautham.


Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-15 Thread Peter Zijlstra
On Wed, Jun 15, 2016 at 03:49:36PM +0530, Gautham R Shenoy wrote:

> Also, with the first patch in the series (which ensures that
> restore_unbound_workers are called *after* the new workers for the
> newly onlined CPUs are created) and without this one, you can
> reproduce this WARN_ON on both x86 and PPC by offlining all the CPUs
> of a node and bringing just one of them online.

Ah good.

> I am not sure about that. The workqueue creates unbound workers for a
> node via wq_update_unbound_numa() whenever the first CPU of every node
> comes online. So that seems legitimate. It then tries to affine these
> workers to the cpumask of that node. Again this seems right. As an
> optimization, it does this only when the first CPU of the node comes
> online. Since this online CPU is not yet active, and since
> nr_cpus_allowed > 1, we will hit the WARN_ON().

So I had another look and isn't the below a much simpler solution?

It seems to work on my x86 with:

  for i in /sys/devices/system/cpu/cpu*/online ; do echo 0 > $i ; done
  for i in /sys/devices/system/cpu/cpu*/online ; do echo 1 > $i ; done

without complaint.

---
 kernel/workqueue.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e1c0e99..09c9160 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4600,15 +4600,11 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 	if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
 		return;
 
-	/* is @cpu the only online CPU? */
 	cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
-	if (cpumask_weight(&cpumask) != 1)
-		return;
 
 	/* as we're called from CPU_ONLINE, the following shouldn't fail */
 	for_each_pool_worker(worker, pool)
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-						  pool->attrs->cpumask) < 0);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, &cpumask) < 0);
 }
 
 /*
@@ -4638,6 +4634,10 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 	case CPU_ONLINE:
 		mutex_lock(&wq_pool_mutex);
 
+		/* update NUMA affinity of unbound workqueues */
+		list_for_each_entry(wq, &workqueues, list)
+			wq_update_unbound_numa(wq, cpu, true);
+
 		for_each_pool(pool, pi) {
 			mutex_lock(&pool->attach_mutex);
 
@@ -4649,10 +4649,6 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 			mutex_unlock(&pool->attach_mutex);
 		}
 
-		/* update NUMA affinity of unbound workqueues */
-		list_for_each_entry(wq, &workqueues, list)
-			wq_update_unbound_numa(wq, cpu, true);
-
 		mutex_unlock(&wq_pool_mutex);
 		break;
 	}

Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-15 Thread Gautham R Shenoy
Hi Peter,

On Tue, Jun 14, 2016 at 01:22:34PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 07, 2016 at 08:44:03PM +0530, Gautham R. Shenoy wrote:
> 
> I'm still puzzled why we don't see this on x86. Afaict there's nothing
> PPC specific about this.

You are right. On PPC, at boot time we hit the WARN_ON roughly once in
5 boots. Using some debug prints, I have verified that these are
instances where the workqueue subsystem gets initialized before all the
CPUs come online. On x86, I have never been able to hit this, since the
workqueues there always seem to get initialized only after all the CPUs
have come online.

PPC doesn't use any specific unbound workqueue early in boot. The
unbound workqueue causing the WARN_ON() was the "events_unbound"
workqueue created by workqueue_init().

=
[WQ] Creating Unbound workers for WQ events_unbound, cpumask 0-127, online mask 0
[WQ] Creating Unbound workers for WQ events_unbound, cpumask 0-31, online mask 0
[WQ] Creating Unbound workers for WQ events_unbound, cpumask 32-63, online mask 0
[WQ] Creating Unbound workers for WQ events_unbound, cpumask 64-95, online mask 0
[WQ] Creating Unbound workers for WQ events_unbound, cpumask 96-127, online mask 0
=

Also, with the first patch in the series (which ensures that
restore_unbound_workers are called *after* the new workers for the
newly onlined CPUs are created) and without this one, you can
reproduce this WARN_ON on both x86 and PPC by offlining all the CPUs
of a node and bringing just one of them online. So essentially the BUG
fixed by the previous patch is currently hiding this BUG which is why
we are not able to reproduce this WARN_ON() with CPU-hotplug once the
system has booted.

> 
> > This patch sets the affinity of the worker to
> > a) the only online CPU in the cpumask of the worker pool when it comes
> >online.
> > b) the cpumask of the worker pool when the second CPU in the pool's
> >cpumask comes online.
> 
> This basically works around the WARN conditions, which I suppose is fair
> enough, but I would like a note here to revisit this once the whole cpu
> hotplug rework has settled.
> 

Sure.

> The real problem is that workqueues seem to want to create worker
> threads before there's anybody who would use them or something like
> that.

I am not sure about that. The workqueue creates unbound workers for a
node via wq_update_unbound_numa() whenever the first CPU of every node
comes online. So that seems legitimate. It then tries to affine these
workers to the cpumask of that node. Again this seems right. As an
optimization, it does this only when the first CPU of the node comes
online. Since this online CPU is not yet active, and since
nr_cpus_allowed > 1, we will hit the WARN_ON().

However, I agree with you that during boot-up, the workqueue subsystem
needs to create unbound worker threads for only the online CPUs
(instead of all possible CPUs as it currently does!) and let the
CPU_ONLINE notification take care of creating the remaining workers
when they are really required.

> 
> Or is that what PPC does funny? Use an unbound workqueue this early in
> cpu bringup?

Like I pointed out above, PPC doesn't use an unbound workqueue early
in the CPU bring up.

--
Thanks and Regards
gautham.


Re: [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU

2016-06-08 Thread Abdul Haleem

Hi Gautham,

Thanks a lot for the fix.

With your patches applied, 4.7.0-rc2 builds fine on ppc64le bare metal.
Boot was successful with no call traces.

Thanks for all your support!

Regards,
Abdul

On Tuesday 07 June 2016 08:44 PM, Gautham R. Shenoy wrote:


With commit e9d867a67fd03ccc ("sched: Allow per-cpu kernel threads to
run on online && !active"), __set_cpus_allowed_ptr() expects that only
strict per-cpu kernel threads can have affinity to an online CPU which
is not yet active.

This assumption is currently broken in the CPU_ONLINE notification
handler for the workqueues where restore_unbound_workers_cpumask()
calls set_cpus_allowed_ptr() when the first cpu in the unbound
worker's pool->attr->cpumask comes online. Since
set_cpus_allowed_ptr() is called with pool->attr->cpumask in which
only one CPU is online which is not yet active, we get the following
WARN_ON during a CPU online operation.

[ cut here ]
WARNING: CPU: 40 PID: 248 at kernel/sched/core.c:1166
__set_cpus_allowed_ptr+0x228/0x2e0
Modules linked in:
CPU: 40 PID: 248 Comm: cpuhp/40 Not tainted 4.6.0-autotest+ #4
<..snip..>
Call Trace:
[c00f273ff920] [c010493c] __set_cpus_allowed_ptr+0x2cc/0x2e0 (unreliable)
[c00f273ffac0] [c00ed4b0] workqueue_cpu_up_callback+0x2c0/0x470
[c00f273ffb70] [c00f5c58] notifier_call_chain+0x98/0x100
[c00f273ffbc0] [c00c5ed0] __cpu_notify+0x70/0xe0
[c00f273ffc00] [c00c6028] notify_online+0x38/0x50
[c00f273ffc30] [c00c5214] cpuhp_invoke_callback+0x84/0x250
[c00f273ffc90] [c00c562c] cpuhp_up_callbacks+0x5c/0x120
[c00f273ffce0] [c00c64d4] cpuhp_thread_fun+0x184/0x1c0
[c00f273ffd20] [c00fa050] smpboot_thread_fn+0x290/0x2a0
[c00f273ffd80] [c00f45b0] kthread+0x110/0x130
[c00f273ffe30] [c0009570] ret_from_kernel_thread+0x5c/0x6c
---[ end trace 00f1456578b2a3b2 ]---

This patch sets the affinity of the worker to
a) the only online CPU in the cpumask of the worker pool when it comes
online.
b) the cpumask of the worker pool when the second CPU in the pool's
cpumask comes online.

Reported-by: Abdul Haleem 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Tejun Heo 
Cc: Michael Ellerman 
Signed-off-by: Gautham R. Shenoy 
---

