On (05/17/18 16:39), Petr Mladek wrote:
> 
> CPU0                  CPU1                    CPU2
> 
> printk()
>   vprintk_emit()
>     spin_lock(&logbuf_lock)
> 
>                                               trigger_all_cpu_backtrace()
>                                                 raise()
> 
>                       nmi_enter()
>                         printk_nmi_enter()
>                           if (this_cpu_read(printk_context)
>                             & PRINTK_SAFE_CONTEXT_MASK)
>                             // false
>                           else
>                             // looks safe to use printk_deferred()
>                             this_cpu_or(printk_context,
>                               PRINTK_NMI_DEFERRED_CONTEXT_MASK);
> 
>                         nmi_cpu_backtrace()
>                           arch_spin_lock(&lock);
>                             show_regs()
> 
> nmi_enter()
>   nmi_cpu_backtrace()
>     arch_spin_lock(&lock);
> 
>                             printk()
>                               vprintk_func()
>                                 vprintk_deferred()
>                                   vprintk_emit()
>                                     spin_lock(&logbuf_lock)
> 
> DEADLOCK: between &logbuf_lock from vprintk_emit() and
>                 &lock from nmi_cpu_backtrace().
> 
> CPU0                  CPU1
> lock(logbuf_lock)     lock(lock)
>   lock(lock)            lock(logbuf_lock)
> 
[..]
> Signed-off-by: Petr Mladek <[email protected]>

This is a pretty cool find!

Acked-by: Sergey Senozhatsky <[email protected]>

> -     if ((this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK) &&
> -         raw_spin_is_locked(&logbuf_lock)) {
> +     if (raw_spin_is_locked(&logbuf_lock))
>               this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
> -     } else {
> +     else
>               this_cpu_or(printk_context, PRINTK_NMI_DEFERRED_CONTEXT_MASK);
> -     }

A question - can we switch to a logical OR?

	if ((this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK) ||
	    raw_spin_is_locked(&logbuf_lock))

just to check per-CPU `printk_context' first and only afterwards
access the global `logbuf_lock'. printk_nmi_enter() happens on
every CPU, so maybe we can avoid some overhead by checking the
local per-CPU data first.

        -ss
