On Thu, Nov 30, 2017 at 01:39:27PM +0100, Sebastian Andrzej Siewior wrote:
> mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
> access to it by disabling preemption. Then it schedules a worker on the
> same CPU. The worker, mcryptd_queue_worker(), guards access to the same
> per-CPU variable by disabling preemption as well.
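> 
> A rough sketch of that pattern before this patch (simplified from
> crypto/mcryptd.c; error handling and debug output omitted):
> 
> 	static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
> 					   struct crypto_async_request *request,
> 					   struct mcryptd_hash_request_ctx *rctx)
> 	{
> 		struct mcryptd_cpu_queue *cpu_queue;
> 		int cpu, err;
> 
> 		cpu = get_cpu();		/* disables preemption */
> 		cpu_queue = this_cpu_ptr(queue->cpu_queue);
> 		rctx->tag.cpu = cpu;
> 
> 		err = crypto_enqueue_request(&cpu_queue->queue, request);
> 		/* run the worker on the same CPU */
> 		queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
> 		put_cpu();			/* re-enables preemption */
> 
> 		return err;
> 	}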
> 
> If we take CPU hotplug into account, it is possible that the CPU goes
> down between queue_work_on() and the actual invocation of the worker,
> and that the worker is then scheduled on _another_ CPU. At that point
> the preempt_disable() protection no longer works: the enqueue path and
> the worker may touch the same per-CPU queue from two different CPUs at
> the same time. The easiest fix is to add a spin_lock() guarding access
> to the list.
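> 
> A sketch of the fix (simplified; q_lock is the new per-queue spinlock):
> 
> 	/* enqueue side: the list is now guarded by the lock */
> 	spin_lock(&cpu_queue->q_lock);
> 	err = crypto_enqueue_request(&cpu_queue->queue, request);
> 	spin_unlock(&cpu_queue->q_lock);
> 	queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
> 
> 	/* worker side: same lock instead of preempt_disable()/preempt_enable() */
> 	spin_lock_bh(&cpu_queue->q_lock);
> 	backlog = crypto_get_backlog(&cpu_queue->queue);
> 	req = crypto_dequeue_request(&cpu_queue->queue);
> 	spin_unlock_bh(&cpu_queue->q_lock);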
> 
> Another detail: mcryptd_queue_worker() does not process more than
> MCRYPTD_BATCH requests in a row. If there are still items left, it
> invokes queue_work() to continue later. *I* would suggest simply
> dropping that check, because it does not use a system workqueue and
> the workqueue is already marked CPU_INTENSIVE; if preemption is
> required, the scheduler should take care of it.
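> 
> For reference, the check in question looks roughly like this:
> 
> 	i = 0;
> 	while (i < MCRYPTD_BATCH || single_task_running()) {
> 		/* dequeue and complete one request */
> 		...
> 		i++;
> 	}
> 	if (cpu_queue->queue.qlen)
> 		queue_work(kcrypto_wq, &cpu_queue->work);
> 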
> However, if queue_work() is used, the work item is queued as
> CPU-unbound. That means it will usually run on the local CPU, but it
> may run on another CPU as well, especially with
> CONFIG_DEBUG_WQ_FORCE_RR_CPU=y. Again, preempt_disable() does not help
> here, but the lock which was introduced does. In order to keep the
> work item on the local CPU (and avoid the round-robin), I changed the
> re-queueing to queue_work_on().
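> 
> I.e. roughly:
> 
> 	if (cpu_queue->queue.qlen)
> -		queue_work(kcrypto_wq, &cpu_queue->work);
> +		queue_work_on(smp_processor_id(), kcrypto_wq, &cpu_queue->work);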
> 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>

Patch applied.  Thanks.
-- 
Email: Herbert Xu <herb...@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
