NMIs can happen at any time. This patch makes sure that the NMI-safe printk() schedules IRQ work only once the related structs have been initialized.
All pending messages are flushed once the IRQ work has been initialized.

Signed-off-by: Petr Mladek <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Russell King <[email protected]>
Cc: Daniel Thompson <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
 kernel/printk/nmi.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
index 479e0764203c..303cf0d15e57 100644
--- a/kernel/printk/nmi.c
+++ b/kernel/printk/nmi.c
@@ -38,6 +38,7 @@
  * were handled or when IRQs are blocked.
  */
 DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
+static int printk_nmi_irq_ready;
 
 #define NMI_LOG_BUF_LEN	(4096 - sizeof(atomic_t) - sizeof(struct irq_work))
 
@@ -84,8 +85,11 @@ again:
 		goto again;
 
 	/* Get flushed in a more safe context. */
-	if (add)
+	if (add && printk_nmi_irq_ready) {
+		/* Make sure that IRQ work is really initialized. */
+		smp_rmb();
 		irq_work_queue(&s->work);
+	}
 
 	return add;
 }
@@ -195,6 +199,13 @@ void __init printk_nmi_init(void)
 
 		init_irq_work(&s->work, __printk_nmi_flush);
 	}
+
+	/* Make sure that IRQ works are initialized before enabling. */
+	smp_wmb();
+	printk_nmi_irq_ready = 1;
+
+	/* Flush pending messages that did not have scheduled IRQ works. */
+	printk_nmi_flush();
 }
 
 void printk_nmi_enter(void)
-- 
1.8.5.6
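[Editor's note] The barrier pairing above is the usual publish-after-init pattern: the init path sets up the per-CPU irq_work structs, issues smp_wmb(), and only then sets printk_nmi_irq_ready, while the NMI path checks the flag and issues smp_rmb() before queueing the work. The following is a minimal, self-contained userspace sketch of the same idea, assuming C11 release/acquire atomics in place of the kernel barriers; the names (ready, init_consumer, try_notify) are illustrative only and do not appear in the patch.

/*
 * Userspace analogue of the "publish after init" pattern in this patch.
 * An acquire load / release store pair stands in for smp_rmb()/smp_wmb().
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct consumer {
	int initialized_state;		/* stands in for the per-CPU irq_work */
};

static struct consumer consumer;
static atomic_int ready;		/* analogue of printk_nmi_irq_ready */

/* "NMI" side: mirrors the check added to vprintk_nmi(). */
static bool try_notify(void)
{
	/* Acquire pairs with the release store in init_consumer(). */
	if (!atomic_load_explicit(&ready, memory_order_acquire))
		return false;		/* too early; gets flushed later */

	/* Safe to use the struct: initialization is guaranteed visible. */
	printf("queueing work, state=%d\n", consumer.initialized_state);
	return true;
}

/* Init side: mirrors printk_nmi_init(). */
static void init_consumer(void)
{
	consumer.initialized_state = 42;

	/* Release publishes the initialization before the flag is seen. */
	atomic_store_explicit(&ready, 1, memory_order_release);

	/* Flush anything that arrived before the flag was set. */
	try_notify();
}

int main(void)
{
	try_notify();		/* flag not set yet: nothing is queued */
	init_consumer();	/* publishes the struct and flushes */
	return 0;
}

The release store guarantees that any reader observing ready == 1 through the acquire load also observes the completed initialization, which is what the smp_wmb()/smp_rmb() pair provides for printk_nmi_irq_ready; the final printk_nmi_flush() in printk_nmi_init() covers messages stored before the flag went up.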

