__raise_softirq_irqoff() updates the per-CPU mask of pending softirqs.
It must be called with hard interrupts disabled so that the
read-modify-write of the mask stays atomic; otherwise a hardware
interrupt can preempt it and corrupt the per-CPU pending mask, causing
raised softirqs to be lost. For example, a lost hrtimer softirq means
the corresponding soft hrtimer will never expire and be handled.
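
For reference, a minimal sketch of the intended calling pattern
(illustrative only; real call sites vary, and HRTIMER_SOFTIRQ is just
one example of a softirq raised through this helper):

	unsigned long flags;

	/* Hard interrupts must be off around the non-atomic
	 * read-modify-write of the per-CPU pending mask.
	 */
	local_irq_save(flags);
	__raise_softirq_irqoff(HRTIMER_SOFTIRQ);
	local_irq_restore(flags);

	/* Callers that cannot guarantee interrupts are already disabled
	 * should use raise_softirq(), which does the irq save/restore
	 * itself.
	 */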

Add lockdep_assert_irqs_disabled() so that, with CONFIG_PROVE_LOCKING
enabled, the irq state is checked and a warning is emitted if the
function is called with interrupts enabled.

Signed-off-by: Jiafei Pan <jiafei....@nxp.com>
---
Changes in v2:
- use lockdep_assert_irqs_disabled()
- removed extra comments
- changed commit message

 kernel/softirq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index bf88d7f62433..09229ad82209 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -481,6 +481,7 @@ void raise_softirq(unsigned int nr)
 
 void __raise_softirq_irqoff(unsigned int nr)
 {
+       lockdep_assert_irqs_disabled();
        trace_softirq_raise(nr);
        or_softirq_pending(1UL << nr);
 }
-- 
2.17.1
