Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-12-01 Thread Akinobu Mita
On Fri, Nov 29, 2024 at 9:09 PM Peter Zijlstra wrote:
>
> On Fri, Nov 29, 2024 at 05:35:54PM +0900, Masami Hiramatsu wrote:
> > On Sat, 23 Nov 2024 03:39:45 +
> > Ruan Bonan  wrote:
> >
> > >
> > >vprintk_emit+0x414/0xb90 kernel/printk/printk.c:2406
> > >_printk+0x7a/0xa0 kernel/printk/printk.c:2432
> > >fail_dump lib/fault-inject.c:46 [inline]
> > >should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
> > >strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
> > >strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
> > >bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 
> > > [inline]
> > >bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]
> >
> > Hmm, this is a combination issue of BPF and fault injection.
> >
> > static void fail_dump(struct fault_attr *attr)
> > {
> > if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) {
> > printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n"
> >"name %pd, interval %lu, probability %lu, "
> >"space %d, times %d\n", attr->dname,
> >attr->interval, attr->probability,
> >atomic_read(&attr->space),
> >atomic_read(&attr->times));
> >
> > This printk() acquires the console lock while rq->lock is already held.
> >
> > This can also happen if we use fault injection together with a trace
> > event, because the fault injection emits a printk() warning.
>
> Ah indeed. Same difference though, if you don't know the context, most
> things are unsafe to do.
>
> > I think this should be considered a bug in the fault injection, not in
> > tracing/BPF. And to solve this issue, we may be able to check the context,
> > and if it is tracing/NMI etc., fault injection should NOT force a failure.
>
> Well, it should be okay to cause the failure, but it must be very
> careful how it goes about doing that. Tripping printk() definitely is
> out.
>
> But there's a much bigger problem there: get_random*() is not wait-free;
> in fact it takes a spinlock_t, which makes it unusable from most contexts,
> and it's definitely out for tracing.
>
> Notably, this spinlock_t makes it unsafe to use from anything that
> holds a raw_spinlock_t, runs in hardirq context, or has
> preempt_disable() in effect -- which is a TON of code.
>
> On this alone I would currently label the whole of fault-injection
> broken. The should_fail() call itself is unsafe even where many of its
> callsites are otherwise perfectly fine -- e.g. usercopy per the above.
>
> Perhaps it should use a simple PRNG; a simple LFSR should be plenty good
> enough to provide failure conditions.

Sounds good.

> And yeah, I would just completely rip out the printk. Trying to figure
> out where and when it's safe to call printk() is non-trivial and just
> not worth the effort imo.

Instead of removing the printk completely, how about setting the default
value of the verbose option to zero so it doesn't call printk, and emitting
a loud warning when the verbose option is changed?
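To visualize the proposal, here is a minimal userspace sketch. The names `fault_attr_sketch` and `fail_dump_sketch` are illustrative stand-ins, not the actual lib/fault-inject.c API, and a counter stands in for the real printk() call:

```c
#include <assert.h>

/* Sketch of the proposal: with verbose defaulting to 0,
 * fail_dump() becomes a no-op unless the user explicitly opts in
 * (the current default is nonzero). Hypothetical names throughout.
 */
struct fault_attr_sketch {
	int verbose;		/* proposed default: 0 */
};

static int printk_calls;	/* stands in for the real printk() */

static void fail_dump_sketch(struct fault_attr_sketch *attr)
{
	if (attr->verbose > 0)
		printk_calls++;	/* would printk() the failure details */
}
```

With the default of 0, no printk() can ever be reached from an unsafe context unless the user explicitly asked for it.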



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-29 Thread Peter Zijlstra
On Fri, Nov 29, 2024 at 05:35:54PM +0900, Masami Hiramatsu wrote:
> On Sat, 23 Nov 2024 03:39:45 +
> Ruan Bonan  wrote:
> 
> > 
> >vprintk_emit+0x414/0xb90 kernel/printk/printk.c:2406
> >_printk+0x7a/0xa0 kernel/printk/printk.c:2432
> >fail_dump lib/fault-inject.c:46 [inline]
> >should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
> >strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
> >strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
> >bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 [inline]
> >bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]
> 
> Hmm, this is a combination issue of BPF and fault injection.
> 
> static void fail_dump(struct fault_attr *attr)
> {
> if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) {
> printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n"
>"name %pd, interval %lu, probability %lu, "
>"space %d, times %d\n", attr->dname,
>attr->interval, attr->probability,
>atomic_read(&attr->space),
>atomic_read(&attr->times));
> 
> This printk() acquires the console lock while rq->lock is already held.
> 
> This can also happen if we use fault injection together with a trace
> event, because the fault injection emits a printk() warning.

Ah indeed. Same difference though, if you don't know the context, most
things are unsafe to do.

> I think this should be considered a bug in the fault injection, not in
> tracing/BPF. And to solve this issue, we may be able to check the context,
> and if it is tracing/NMI etc., fault injection should NOT force a failure.

Well, it should be okay to cause the failure, but it must be very
careful how it goes about doing that. Tripping printk() definitely is
out.

But there's a much bigger problem there: get_random*() is not wait-free;
in fact it takes a spinlock_t, which makes it unusable from most contexts,
and it's definitely out for tracing.

Notably, this spinlock_t makes it unsafe to use from anything that
holds a raw_spinlock_t, runs in hardirq context, or has
preempt_disable() in effect -- which is a TON of code.

On this alone I would currently label the whole of fault-injection
broken. The should_fail() call itself is unsafe even where many of its
callsites are otherwise perfectly fine -- e.g. usercopy per the above.

Perhaps it should use a simple PRNG; a simple LFSR should be plenty good
enough to provide failure conditions.
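For what it's worth, a wait-free 32-bit Galois LFSR along those lines is only a few lines of code. This is a userspace sketch under assumed names (`lfsr32` and `should_fail_fast` are illustrative, not proposed kernel API); the tap mask 0x80200003 (taps 32, 22, 2, 1) gives a maximal-length sequence:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a Galois LFSR as a wait-free replacement for
 * get_random*() in should_fail(). No locks, no allocation -- safe to
 * call from any context. Names are illustrative, not kernel API.
 */
static uint32_t lfsr32(uint32_t *state)
{
	uint32_t s = *state;

	/* shift right; if the bit that fell out was set, xor in the taps */
	s = (s >> 1) ^ (-(s & 1u) & 0x80200003u);
	*state = s;
	return s;
}

/* e.g. fail when the low bits fall under a probability threshold */
static int should_fail_fast(uint32_t *state, uint32_t prob_percent)
{
	return (lfsr32(state) % 100) < prob_percent;
}
```

In the kernel this would presumably be per-CPU state seeded once at boot; the only invariant to maintain is that the seed is nonzero, since a Galois LFSR never leaves (or reaches) the all-zero state.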

And yeah, I would just completely rip out the printk. Trying to figure
out where and when it's safe to call printk() is non-trivial and just
not worth the effort imo.



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-29 Thread Google
On Sat, 23 Nov 2024 03:39:45 +
Ruan Bonan  wrote:

> 
>vprintk_emit+0x414/0xb90 kernel/printk/printk.c:2406
>_printk+0x7a/0xa0 kernel/printk/printk.c:2432
>fail_dump lib/fault-inject.c:46 [inline]
>should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
>strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
>strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
>bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 [inline]
>bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]

Hmm, this is a combination issue of BPF and fault injection.

static void fail_dump(struct fault_attr *attr)
{
if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) {
printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n"
   "name %pd, interval %lu, probability %lu, "
   "space %d, times %d\n", attr->dname,
   attr->interval, attr->probability,
   atomic_read(&attr->space),
   atomic_read(&attr->times));

This printk() acquires the console lock while rq->lock is already held.

This can also happen if we use fault injection together with a trace
event, because the fault injection emits a printk() warning.
I think this should be considered a bug in the fault injection, not in
tracing/BPF. And to solve this issue, we may be able to check the context,
and if it is tracing/NMI etc., fault injection should NOT force a failure.
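A rough userspace sketch of that idea, with `in_unsafe_context()` standing in for a real check such as `in_nmi()` or `in_hardirq()` (all names here are illustrative stubs, not actual kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the suggestion: bail out of fault injection entirely when
 * called from a context where taking locks / calling printk() is
 * unsafe. current_in_nmi and in_unsafe_context() are stubs standing in
 * for real context checks; illustrative only.
 */
static bool current_in_nmi;

static bool in_unsafe_context(void)
{
	return current_in_nmi;
}

static bool should_fail_guarded(bool would_fail)
{
	if (in_unsafe_context())
		return false;	/* never inject from NMI/tracing context */
	return would_fail;
}
```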

Thank you,

-- 
Masami Hiramatsu (Google) 



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-26 Thread Andrii Nakryiko
On Mon, Nov 25, 2024 at 1:44 AM Peter Zijlstra  wrote:
>
> On Mon, Nov 25, 2024 at 05:24:05AM +, Ruan Bonan wrote:
>
> > From the discussion, it appears that the root cause might involve
> > specific printk or BPF operations in the given context. To clarify and
> > possibly avoid similar issues in the future, are there guidelines or
> > best practices for writing BPF programs/hooks that interact with
> > tracepoints, especially those related to scheduler events, to prevent
> > such deadlocks?
>
> The general guideline and recommendation for all tracepoints is to be
> wait-free. Typically all tracer code should be.
>
> Now, BPF (users) (ab)uses tracepoints to do all sorts and takes certain
> liberties with them, but it is very much at the discretion of the BPF
> user.

We do assume that tracepoints are just like kprobes and can run in
NMI. And in this case BPF is just a vehicle to trigger a
promised-to-be-wait-free strncpy_from_user_nofault(). That's as far as
BPF involvement goes, we should stop discussing BPF in this context,
it's misleading.

As Alexei mentioned, this is the problem with printk code, not in BPF.
I'll just copy-paste the relevant parts of stack trace to make this
clear:

   console_trylock_spinning kernel/printk/printk.c:1990 [inline]
   vprintk_emit+0x414/0xb90 kernel/printk/printk.c:2406
   _printk+0x7a/0xa0 kernel/printk/printk.c:2432
   fail_dump lib/fault-inject.c:46 [inline]
   should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
   strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
   strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
   bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 [inline]

>
> Slightly relaxed guideline would perhaps be to consider the context of
> the tracepoint, notably one of: NMI, IRQ, SoftIRQ or Task context -- and
> to not exceed the bounds of the given context.
>
> More specifically, when the tracepoint is inside critical sections of
> any sort (as is the case here) then it very much is on the BPF user to
> not cause inversions.
>
> At this point there really is no substitute for knowing what you're
> doing. Knowledge is key.
>
> In short; tracepoints should be wait-free, if you know what you're doing
> you can perhaps get away with a little more.

From a BPF perspective, tracepoints are wait-free and we don't allow any
sleepable code to be called (until sleepable tracepoints are properly
supported, which is a separate "blessed" case of tracepoints).



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-25 Thread Peter Zijlstra
On Mon, Nov 25, 2024 at 05:24:05AM +, Ruan Bonan wrote:

> From the discussion, it appears that the root cause might involve
> specific printk or BPF operations in the given context. To clarify and
> possibly avoid similar issues in the future, are there guidelines or
> best practices for writing BPF programs/hooks that interact with
> tracepoints, especially those related to scheduler events, to prevent
> such deadlocks?

The general guideline and recommendation for all tracepoints is to be
wait-free. Typically all tracer code should be.

Now, BPF (users) (ab)uses tracepoints to do all sorts and takes certain
liberties with them, but it is very much at the discretion of the BPF
user.

Slightly relaxed guideline would perhaps be to consider the context of
the tracepoint, notably one of: NMI, IRQ, SoftIRQ or Task context -- and
to not exceed the bounds of the given context.

More specifically, when the tracepoint is inside critical sections of
any sort (as is the case here) then it very much is on the BPF user to
not cause inversions.

At this point there really is no substitute for knowing what you're
doing. Knowledge is key.

In short; tracepoints should be wait-free, if you know what you're doing
you can perhaps get away with a little more.



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-24 Thread Ruan Bonan
Hi Alexei, Steven, and Peter,

Thank you for the detailed feedback. I really appreciate it. I understand your 
point regarding the responsibilities when attaching code to tracepoints and the 
complexities involved in such contexts. My intent was to highlight a 
reproducible scenario where this deadlock might occur, rather than to assign 
blame to the scheduler code itself. Also, I found that there are some similar 
cases reported, such as 
https://lore.kernel.org/bpf/611d0b3b-18bd-8564-4c8d-de7522ada...@fb.com/T/.

Regarding the bug report, I tried to follow the report routine at 
https://www.kernel.org/doc/html/v4.19/admin-guide/reporting-bugs.html. However, 
in this case it is not very clear to me which single subsystem should be
involved in this report based on the local call trace. I apologize for
bothering you, and I will try to identify and only involve the directly related 
subsystem in future bug reports.

From the discussion, it appears that the root cause might involve specific 
printk or BPF operations in the given context. To clarify and possibly avoid 
similar issues in the future, are there guidelines or best practices for 
writing BPF programs/hooks that interact with tracepoints, especially those 
related to scheduler events, to prevent such deadlocks?

P.S. I found a prior discussion here: 
https://lore.kernel.org/bpf/CANpmjNPrHv56Wvc_NbwhoGEU1ZnOepWXT2AmDVVjuY=r8n2...@mail.gmail.com/T/.
 However, there are no more updates.

Thanks,
Bonan

On 2024/11/25, 11:45, "Steven Rostedt" <rost...@goodmis.org> wrote:

On Sun, 24 Nov 2024 22:30:45 -0500
Steven Rostedt <rost...@goodmis.org> wrote:

> > > Ack. BPF should not be causing deadlocks by doing code called from
> > > tracepoints.
> >
> > I sense so much BPF love here that it diminishes the ability to read
> > stack traces :)
>
> You know I love BPF ;-) I do recommend it when I feel it's the right
> tool for the job.

BTW, I want to apologize if my email sounded like an attack on BPF.
That wasn't my intention. It was more about Peter's response being
so short, where the submitter may not understand his response. It's not
up to Peter to explain himself. As I said, this isn't his problem.

I figured I would fill in the gap. As I fear with more people using BPF,
when some bug happens when they attach a BPF program somewhere, they
then blame the code that they attached to. If this was titled "Possible
deadlock when attaching BPF program to scheduler" and was sent to the
BPF folks, I would not have any issue with it. But it was sent to the
scheduler maintainers.

We need to teach people that if a bug happens because they attach a BPF
program somewhere, they first notify the BPF folks. Then if it really
ends up being a bug where the BPF program was attached, it should be
the BPF folks that inform that subsystem maintainers. Not the original
submitter.

Cheers,

-- Steve





Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-24 Thread Steven Rostedt
On Sun, 24 Nov 2024 22:30:45 -0500
Steven Rostedt  wrote:

> > > Ack. BPF should not be causing deadlocks by doing code called from
> > > tracepoints.
> > 
> > I sense so much BPF love here that it diminishes the ability to read
> > stack traces :)  
> 
> You know I love BPF ;-)  I do recommend it when I feel it's the right
> tool for the job.

BTW, I want to apologize if my email sounded like an attack on BPF.
That wasn't my intention. It was more about Peter's response being
so short, where the submitter may not understand his response. It's not
up to Peter to explain himself. As I said, this isn't his problem.

I figured I would fill in the gap. As I fear with more people using BPF,
when some bug happens when they attach a BPF program somewhere, they
then blame the code that they attached to. If this was titled "Possible
deadlock when attaching BPF program to scheduler" and was sent to the
BPF folks, I would not have any issue with it. But it was sent to the
scheduler maintainers.

We need to teach people that if a bug happens because they attach a BPF
program somewhere, they first notify the BPF folks. Then if it really
ends up being a bug where the BPF program was attached, it should be
the BPF folks that inform that subsystem maintainers. Not the original
submitter.

Cheers,

-- Steve



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-24 Thread Steven Rostedt
On Sun, 24 Nov 2024 18:02:35 -0800
Alexei Starovoitov  wrote:

> > > -EWONTFIX. Don't do stupid.  
> >
> > Ack. BPF should not be causing deadlocks by doing code called from
> > tracepoints.  
> 
> I sense so much BPF love here that it diminishes the ability to read
> stack traces :)

You know I love BPF ;-)  I do recommend it when I feel it's the right
tool for the job.

> Above is one of many printk related splats that syzbot keeps finding.
> This is not a new issue and it has nothing to do with bpf.

I had to fight printk related splats too. But when that happens, it's not
considered a bug in the code that is being attached to. Note, my
response is more about the subject title, which sounds like it's
blaming the scheduler code, which is not the issue.

> 
> > Tracepoints have a special context similar to NMIs. If you add
> > a hook into an NMI handler that causes a deadlock, it's a bug in the hook,
> > not the NMI code. If you add code that causes a deadlock when attaching to a
> > tracepoint, it's a bug in the hook, not the tracepoint.  
> 
> trace events call strncpy_from_user_nofault() just as well.
> kernel/trace/trace_events_filter.c:830

Well, in some cases you could do that from NMI as well. The point is,
tracepoints are a different context, and things need to be careful when
using it. If any deadlock occurs by attaching to a tracepoint (and this
isn't just BPF, I have code too that needs to be very careful about
this as well), then the bug is with the attached callback.

I agree with Peter. This isn't his problem. Hence my Ack.

-- Steve



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-24 Thread Alexei Starovoitov
On Sat, Nov 23, 2024 at 2:59 PM Steven Rostedt  wrote:
>
> On Sat, 23 Nov 2024 21:27:44 +0100
> Peter Zijlstra  wrote:
>
> > On Sat, Nov 23, 2024 at 03:39:45AM +, Ruan Bonan wrote:
> >
> > >  
> > > FAULT_INJECTION: forcing a failure.
> > > name fail_usercopy, interval 1, probability 0, space 0, times 0
> > > ==
> > > WARNING: possible circular locking dependency detected
> > > 6.12.0-rc7-00144-g66418447d27b #8 Not tainted
> > > --
> > > syz-executor144/330 is trying to acquire lock:
> > > bcd2da38 ((console_sem).lock){}-{2:2}, at: 
> > > down_trylock+0x20/0xa0 kernel/locking/semaphore.c:139
> > >
> > > but task is already holding lock:
> > > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested 
> > > kernel/sched/core.c:598 [inline]
> > > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock 
> > > kernel/sched/sched.h:1506 [inline]
> > > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: rq_lock 
> > > kernel/sched/sched.h:1805 [inline]
> > > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x140/0x1e70 
> > > kernel/sched/core.c:6592
> > >
> > > which lock already depends on the new lock.
> > >
> > >_printk+0x7a/0xa0 kernel/printk/printk.c:2432
> > >fail_dump lib/fault-inject.c:46 [inline]
> > >should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
> > >strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
> > >strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
> > >bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 
> > > [inline]
> > >bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]
> > >bpf_probe_read_user_str+0x2a/0x70 kernel/trace/bpf_trace.c:221
> > >bpf_prog_bc7c5c6b9645592f+0x3e/0x40
> > >bpf_dispatcher_nop_func include/linux/bpf.h:1265 [inline]
> > >__bpf_prog_run include/linux/filter.h:701 [inline]
> > >bpf_prog_run include/linux/filter.h:708 [inline]
> > >__bpf_trace_run kernel/trace/bpf_trace.c:2316 [inline]
> > >bpf_trace_run4+0x30b/0x4d0 kernel/trace/bpf_trace.c:2359
> > >__bpf_trace_sched_switch+0x1c6/0x2c0 
> > > include/trace/events/sched.h:222
> > >trace_sched_switch+0x12a/0x190 include/trace/events/sched.h:222
> >
> > -EWONTFIX. Don't do stupid.
>
> Ack. BPF should not be causing deadlocks by doing code called from
> tracepoints.

I sense so much BPF love here that it diminishes the ability to read
stack traces :)
Above is one of many printk related splats that syzbot keeps finding.
This is not a new issue and it has nothing to do with bpf.

> Tracepoints have a special context similar to NMIs. If you add
> a hook into an NMI handler that causes a deadlock, it's a bug in the hook,
> not the NMI code. If you add code that causes a deadlock when attaching to a
> tracepoint, it's a bug in the hook, not the tracepoint.

trace events call strncpy_from_user_nofault() just as well.
kernel/trace/trace_events_filter.c:830



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-23 Thread Steven Rostedt
On Sat, 23 Nov 2024 21:27:44 +0100
Peter Zijlstra  wrote:

> On Sat, Nov 23, 2024 at 03:39:45AM +, Ruan Bonan wrote:
> 
> >  
> > FAULT_INJECTION: forcing a failure.
> > name fail_usercopy, interval 1, probability 0, space 0, times 0
> > ==
> > WARNING: possible circular locking dependency detected
> > 6.12.0-rc7-00144-g66418447d27b #8 Not tainted
> > --
> > syz-executor144/330 is trying to acquire lock:
> > bcd2da38 ((console_sem).lock){}-{2:2}, at: 
> > down_trylock+0x20/0xa0 kernel/locking/semaphore.c:139
> > 
> > but task is already holding lock:
> > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested 
> > kernel/sched/core.c:598 [inline]
> > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock 
> > kernel/sched/sched.h:1506 [inline]
> > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: rq_lock 
> > kernel/sched/sched.h:1805 [inline]
> > 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x140/0x1e70 
> > kernel/sched/core.c:6592
> > 
> > which lock already depends on the new lock.
> > 
> >_printk+0x7a/0xa0 kernel/printk/printk.c:2432
> >fail_dump lib/fault-inject.c:46 [inline]
> >should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
> >strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
> >strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
> >bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 [inline]
> >bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]
> >bpf_probe_read_user_str+0x2a/0x70 kernel/trace/bpf_trace.c:221
> >bpf_prog_bc7c5c6b9645592f+0x3e/0x40
> >bpf_dispatcher_nop_func include/linux/bpf.h:1265 [inline]
> >__bpf_prog_run include/linux/filter.h:701 [inline]
> >bpf_prog_run include/linux/filter.h:708 [inline]
> >__bpf_trace_run kernel/trace/bpf_trace.c:2316 [inline]
> >bpf_trace_run4+0x30b/0x4d0 kernel/trace/bpf_trace.c:2359
> >__bpf_trace_sched_switch+0x1c6/0x2c0 include/trace/events/sched.h:222
> >trace_sched_switch+0x12a/0x190 include/trace/events/sched.h:222  
> 
> -EWONTFIX. Don't do stupid.

Ack. BPF should not be causing deadlocks by doing code called from
tracepoints. Tracepoints have a special context similar to NMIs. If you add
a hook into an NMI handler that causes a deadlock, it's a bug in the hook,
not the NMI code. If you add code that causes a deadlock when attaching to a
tracepoint, it's a bug in the hook, not the tracepoint.

-- Steve



Re: [BUG] possible deadlock in __schedule (with reproducer available)

2024-11-23 Thread Peter Zijlstra
On Sat, Nov 23, 2024 at 03:39:45AM +, Ruan Bonan wrote:

>  
> FAULT_INJECTION: forcing a failure.
> name fail_usercopy, interval 1, probability 0, space 0, times 0
> ==
> WARNING: possible circular locking dependency detected
> 6.12.0-rc7-00144-g66418447d27b #8 Not tainted
> --
> syz-executor144/330 is trying to acquire lock:
> bcd2da38 ((console_sem).lock){}-{2:2}, at: down_trylock+0x20/0xa0 
> kernel/locking/semaphore.c:139
> 
> but task is already holding lock:
> 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested 
> kernel/sched/core.c:598 [inline]
> 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock 
> kernel/sched/sched.h:1506 [inline]
> 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: rq_lock 
> kernel/sched/sched.h:1805 [inline]
> 888065cbd718 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x140/0x1e70 
> kernel/sched/core.c:6592
> 
> which lock already depends on the new lock.
> 
>_printk+0x7a/0xa0 kernel/printk/printk.c:2432
>fail_dump lib/fault-inject.c:46 [inline]
>should_fail_ex+0x3be/0x570 lib/fault-inject.c:154
>strncpy_from_user+0x36/0x230 lib/strncpy_from_user.c:118
>strncpy_from_user_nofault+0x71/0x140 mm/maccess.c:186
>bpf_probe_read_user_str_common kernel/trace/bpf_trace.c:215 [inline]
>bpf_probe_read_user_str kernel/trace/bpf_trace.c:224 [inline]
>bpf_probe_read_user_str+0x2a/0x70 kernel/trace/bpf_trace.c:221
>bpf_prog_bc7c5c6b9645592f+0x3e/0x40
>bpf_dispatcher_nop_func include/linux/bpf.h:1265 [inline]
>__bpf_prog_run include/linux/filter.h:701 [inline]
>bpf_prog_run include/linux/filter.h:708 [inline]
>__bpf_trace_run kernel/trace/bpf_trace.c:2316 [inline]
>bpf_trace_run4+0x30b/0x4d0 kernel/trace/bpf_trace.c:2359
>__bpf_trace_sched_switch+0x1c6/0x2c0 include/trace/events/sched.h:222
>trace_sched_switch+0x12a/0x190 include/trace/events/sched.h:222

-EWONTFIX. Don't do stupid.