Hi,

On Mon,  9 Nov 2020 22:55:38 -0500
Sasha Levin <[email protected]> wrote:

> From: "Steven Rostedt (VMware)" <[email protected]>
> 
> [ Upstream commit 645f224e7ba2f4200bf163153d384ceb0de5462e ]
> 
> The kprobe handlers have protection that prohibits other handlers from
> executing in other contexts (for example, if an NMI comes in while
> processing a kprobe and executes the same kprobe, it will fail with a
> "busy" return), but lockdep is unaware of this protection. Use lockdep's
> nesting API to differentiate between locks taken in INT3 context and other
> contexts, and suppress the false warnings.
> 
> Link: https://lore.kernel.org/r/[email protected]
> 

This fixes a lockdep false-positive warning that comes from commit e03b4a084ea6
("kprobes: Remove NMI context check"). Has anyone reported it happening on a
stable kernel?

If not, you do not need this patch for stable kernels.
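
Just for reference, here is a minimal sketch of the lockdep subclass trick the
patch relies on (the lock and handler names below are made up for illustration;
only the locking pattern mirrors the patch): passing !!in_nmi() as the subclass
to raw_spin_lock_irqsave_nested() makes lockdep track the NMI-context
acquisition as a separate class, so it stops reporting the recursive
acquisition that the kprobe "busy" protection already prevents in practice.

#include <linux/spinlock.h>
#include <linux/hardirq.h>

/* Hypothetical lock and handler, for illustration only. */
static DEFINE_RAW_SPINLOCK(example_lock);

static void example_handler(void)
{
	unsigned long flags;

	/*
	 * Subclass 0 in normal context, subclass 1 in NMI context.
	 * Lockdep treats the two acquisitions as different classes,
	 * so the (already impossible) recursive acquisition is not
	 * reported as a deadlock.
	 */
	raw_spin_lock_irqsave_nested(&example_lock, flags, !!in_nmi());
	/* ... touch the data protected by example_lock ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}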

Thank you,


> Cc: Peter Zijlstra <[email protected]>
> Acked-by: Masami Hiramatsu <[email protected]>
> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
> Signed-off-by: Sasha Levin <[email protected]>
> ---
>  kernel/kprobes.c | 25 +++++++++++++++++++++----
>  1 file changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 2161f519d4812..2ce9053de6ae4 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1204,7 +1204,13 @@ __acquires(hlist_lock)
>  
>       *head = &kretprobe_inst_table[hash];
>       hlist_lock = kretprobe_table_lock_ptr(hash);
> -     raw_spin_lock_irqsave(hlist_lock, *flags);
> +     /*
> +      * Nested is a workaround that will soon not be needed.
> +      * There's other protections that make sure the same lock
> +      * is not taken on the same CPU that lockdep is unaware of.
> +      * Differentiate when it is taken in NMI context.
> +      */
> +     raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi());
>  }
>  NOKPROBE_SYMBOL(kretprobe_hash_lock);
>  
> @@ -1213,7 +1219,13 @@ static void kretprobe_table_lock(unsigned long hash,
>  __acquires(hlist_lock)
>  {
>       raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
> -     raw_spin_lock_irqsave(hlist_lock, *flags);
> +     /*
> +      * Nested is a workaround that will soon not be needed.
> +      * There's other protections that make sure the same lock
> +      * is not taken on the same CPU that lockdep is unaware of.
> +      * Differentiate when it is taken in NMI context.
> +      */
> +     raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi());
>  }
>  NOKPROBE_SYMBOL(kretprobe_table_lock);
>  
> @@ -1884,7 +1896,12 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
>  
>       /* TODO: consider to only swap the RA after the last pre_handler fired */
>       hash = hash_ptr(current, KPROBE_HASH_BITS);
> -     raw_spin_lock_irqsave(&rp->lock, flags);
> +     /*
> +      * Nested is a workaround that will soon not be needed.
> +      * There's other protections that make sure the same lock
> +      * is not taken on the same CPU that lockdep is unaware of.
> +      */
> +     raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);
>       if (!hlist_empty(&rp->free_instances)) {
>               ri = hlist_entry(rp->free_instances.first,
>                               struct kretprobe_instance, hlist);
> @@ -1895,7 +1912,7 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
>               ri->task = current;
>  
>               if (rp->entry_handler && rp->entry_handler(ri, regs)) {
> -                     raw_spin_lock_irqsave(&rp->lock, flags);
> +                     raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);
>                       hlist_add_head(&ri->hlist, &rp->free_instances);
>                       raw_spin_unlock_irqrestore(&rp->lock, flags);
>                       return 0;
> -- 
> 2.27.0
> 


-- 
Masami Hiramatsu <[email protected]>
