On 04/11/2018 11:57 AM, Sebastian Andrzej Siewior wrote:
> This patch reverts commit c7f26ccfb2c3 ("mm/vmstat.c: fix
> vmstat_update() preemption BUG").
> Steven saw a "using smp_processor_id() in preemptible" message and
> added a preempt_disable() section around it to keep it quiet. This is
> not the right thing to do because it does not fix the real problem.
> vmstat_update() is invoked by a kworker on a specific CPU. This worker
> is bound to this CPU. The name of the worker was "kworker/1:1", so it
> should have been a worker which was bound to CPU1. A worker which can
> run on any CPU would have a `u' before the first digit.
Oh my, and I have just been assured by Tejun that this cannot happen :)
And yet, in the original report I see:
CPU: 0 PID: 269 Comm: kworker/1:1 Not tainted
So is this perhaps related to the cpu hotplug mentioned in the report? e.g. is
the cpu being hotplugged cpu 1, the worker started too early before
stuff can be scheduled on that CPU, so it has to run on a different one?
> smp_processor_id() can be used in a preempt-enabled region as long as
> the task is bound to a single CPU which is the case here. If it could
> run on an arbitrary CPU then this is the problem we have and should seek
> to resolve.
> It is not only this smp_processor_id() caller that must not be migrated
> to another CPU, but also refresh_cpu_vm_stats(), which might otherwise
> access the wrong per-CPU variables.
> Not to mention that other code relies on the fact that such a worker
> runs on one specific CPU only.
> Therefore I revert that commit and we should look instead what broke the
> affinity mask of the kworker.
> Cc: Steven J. Hill <steven.h...@cavium.com>
> Cc: Tejun Heo <hte...@gmail.com>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
> mm/vmstat.c | 2 --
> 1 file changed, 2 deletions(-)
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 33581be705f0..40b2db6db6b1 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1839,11 +1839,9 @@ static void vmstat_update(struct work_struct *w)
>  		 * Counters were updated so we expect more updates
>  		 * to occur in the future. Keep on running the
>  		 * update worker thread.
>  		 */
> -		preempt_disable();
>  		queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
>  				this_cpu_ptr(&vmstat_work),
>  				round_jiffies_relative(sysctl_stat_interval));
> -		preempt_enable();
>  	}
>  }