prepare_percpu_nmi() first takes the interrupt descriptor lock via irq_get_desc_lock(). Whether or not preemption is enabled at the call site, acquiring that lock forces preemption off.
This simplifies the usage of prepare_percpu_nmi(): callers no longer need
to take an extra lock or explicitly call preempt_[disable,enable]().

Signed-off-by: Lecopzer Chen <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Julien Thierry <[email protected]>
Cc: YJ Chiang <[email protected]>
Cc: Lecopzer Chen <[email protected]>
---
 kernel/irq/manage.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 78f3ddeb7fe4..aa03640cd7fb 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -2509,9 +2509,6 @@ int request_percpu_nmi(unsigned int irq, irq_handler_t handler,
  * This call prepares an interrupt line to deliver NMI on the current CPU,
  * before that interrupt line gets enabled with enable_percpu_nmi().
  *
- * As a CPU local operation, this should be called from non-preemptible
- * context.
- *
  * If the interrupt line cannot be used to deliver NMIs, function
  * will fail returning a negative value.
  */
@@ -2521,8 +2518,6 @@ int prepare_percpu_nmi(unsigned int irq)
 	struct irq_desc *desc;
 	int ret = 0;
 
-	WARN_ON(preemptible());
-
 	desc = irq_get_desc_lock(irq, &flags,
 				 IRQ_GET_DESC_CHECK_PERCPU);
 	if (!desc)
@@ -2554,17 +2549,12 @@ int prepare_percpu_nmi(unsigned int irq)
  * This call undoes the setup done by prepare_percpu_nmi().
  *
  * IRQ line should not be enabled for the current CPU.
- *
- * As a CPU local operation, this should be called from non-preemptible
- * context.
  */
 void teardown_percpu_nmi(unsigned int irq)
 {
 	unsigned long flags;
 	struct irq_desc *desc;
 
-	WARN_ON(preemptible());
-
 	desc = irq_get_desc_lock(irq, &flags,
 				 IRQ_GET_DESC_CHECK_PERCPU);
 	if (!desc)
-- 
2.18.0
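
For illustration only, not part of the patch: a minimal sketch of what a caller
can look like after this change. It assumes a hypothetical driver whose per-CPU
NMI line "my_nmi_irq" was obtained earlier with request_percpu_nmi(), and that
the two callbacks are registered as CPU hotplug AP callbacks (e.g. via
cpuhp_setup_state()), so they already run on the target CPU, in a preemptible
context. The point is that no preempt_[disable,enable]() pair is needed around
prepare_percpu_nmi()/teardown_percpu_nmi(); the caller still has to be on the
CPU it is setting up.

#include <linux/cpuhotplug.h>
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Hypothetical: set up elsewhere by request_percpu_nmi(). */
static unsigned int my_nmi_irq;

/* Runs on the CPU coming online, preemptible; no explicit preempt_disable(). */
static int my_nmi_cpu_online(unsigned int cpu)
{
	int ret;

	ret = prepare_percpu_nmi(my_nmi_irq);
	if (ret)
		return ret;

	enable_percpu_nmi(my_nmi_irq, IRQ_TYPE_NONE);
	return 0;
}

/* Runs on the CPU going offline; again no preempt_[disable,enable]() needed. */
static int my_nmi_cpu_offline(unsigned int cpu)
{
	disable_percpu_nmi(my_nmi_irq);
	teardown_percpu_nmi(my_nmi_irq);
	return 0;
}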

