On Sun,  5 Oct 2025 19:46:36 +0530
[email protected] wrote:

> From: Shaurya Rane <[email protected]>
> 
> The kernel's lockdep validator detected a circular locking dependency
> in ring_buffer_map(). The function was acquiring the per-CPU
> 'cpu_buffer->mapping_lock' before the global 'buffer->mutex'.
> 

You should either have a link to the email reporting the lockdep splat, or
post it in the change log. I'd like to know exactly what the race was.

> This violates the established locking hierarchy where 'buffer->mutex'
> should be acquired first, leading to a potential deadlock.
> 
> Fix this by reordering the mutex acquisition to lock 'buffer->mutex'
> before 'cpu_buffer->mapping_lock', satisfying the lockdep requirements
> and preventing the deadlock.
> 
> Reported-by: [email protected]
> 
> Signed-off-by: Shaurya Rane <[email protected]>
> ---
>  kernel/trace/ring_buffer.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 43460949ad3f..82c3d5d2dcf6 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -7222,9 +7222,10 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
>  
>       if (!cpumask_test_cpu(cpu, buffer->cpumask))
>               return -EINVAL;
> -
> +     

Added white space?

>       cpu_buffer = buffer->buffers[cpu];
> -
> +     

More added white space?

> +     guard(mutex)(&buffer->mutex);
>       guard(mutex)(&cpu_buffer->mapping_lock);

You state that you are reversing the order here, but all I see is you added
taking the buffer->mutex lock. If there was a reverse order, then I'm
assuming that later on in this function the buffer->mutex is taken again.
That would cause a deadlock.

What exactly are you reversing?

-- Steve


>  
>       if (cpu_buffer->user_mapped) {

