Re: RCU stall leading to deadlock warning
On Wed, Dec 16, 2020 at 09:54:42AM -0800, Paul E. McKenney wrote:
> On Wed, Dec 16, 2020 at 05:29:39PM +, Qais Yousef wrote:
> > Hi Paul
> >
> > We hit the below splat a couple of days ago in our testing. Sadly I can't
> > reproduce it. And it was on android-mainline branch..
> >
> > It's the deadlock message that bothers me. I can't see how we could have
> > ended there. We detect a stall and when trying to dump the stack LOCKDEP
> > spits the warning.
> >
> > Maybe we should take this report with a pinch of salt since it wasn't on
> > mainline. I just thought it might be something worth sharing in case you
> > can actually spot something obvious that I can't see. If I get more info
> > or a reproducer I will share them.
> >
> > The failure was triggered twice on that day running 2 different tests.
>
> This looks like the same problem that Mark Rutland's recent patch series
> was fixing. Do you have this series applied?
>
> lore.kernel.org/lkml/20201126123602.23454-1-mark.rutl...@arm.com

I would not expect the patch below to help given what your RCU CPU stall
warning looks like, but just in case...

(Full disclosure: Peter fixed a bug of mine, filenames notwithstanding.)

							Thanx, Paul

commit f355d19f94bf4361d641fb3dbb9ece0fbac766f8
Author: Peter Zijlstra
Date:   Sat Aug 29 10:22:24 2020 -0700

    sched/core: Allow try_invoke_on_locked_down_task() with irqs disabled

    The try_invoke_on_locked_down_task() function currently requires
    that interrupts be enabled, but it is called with interrupts
    disabled from rcu_print_task_stall(), resulting in an "IRQs not
    enabled as expected" diagnostic. This commit therefore updates
    try_invoke_on_locked_down_task() to use raw_spin_lock_irqsave()
    instead of raw_spin_lock_irq(), thus allowing use from either context.

    Link: https://lore.kernel.org/lkml/903d5805ab908...@google.com/
    Link: https://lore.kernel.org/lkml/20200928075729.gc2...@hirez.programming.kicks-ass.net/
    Reported-by: syzbot+cb3b69ae80afd6535...@syzkaller.appspotmail.com
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b2d6898..4abf041 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2989,7 +2989,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 
 /**
  * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
- * @p: Process for which the function is to be invoked.
+ * @p: Process for which the function is to be invoked, can be @current.
  * @func: Function to invoke.
  * @arg: Argument to function.
  *
@@ -3007,12 +3007,11 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
  */
 bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
 {
-	bool ret = false;
 	struct rq_flags rf;
+	bool ret = false;
 	struct rq *rq;
 
-	lockdep_assert_irqs_enabled();
-	raw_spin_lock_irq(&p->pi_lock);
+	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
 	if (p->on_rq) {
 		rq = __task_rq_lock(p, &rf);
 		if (task_rq(p) == rq)
@@ -3029,7 +3028,7 @@ bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct t
 			ret = func(p, arg);
 		}
 	}
-	raw_spin_unlock_irq(&p->pi_lock);
+	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
 	return ret;
 }
Re: RCU stall leading to deadlock warning
On Wed, Dec 16, 2020 at 05:29:39PM +, Qais Yousef wrote:
> Hi Paul
>
> We hit the below splat a couple of days ago in our testing. Sadly I can't
> reproduce it. And it was on android-mainline branch..
>
> It's the deadlock message that bothers me. I can't see how we could have
> ended there. We detect a stall and when trying to dump the stack LOCKDEP
> spits the warning.
>
> Maybe we should take this report with a pinch of salt since it wasn't on
> mainline. I just thought it might be something worth sharing in case you
> can actually spot something obvious that I can't see. If I get more info
> or a reproducer I will share them.
>
> The failure was triggered twice on that day running 2 different tests.

This looks like the same problem that Mark Rutland's recent patch series
was fixing. Do you have this series applied?

lore.kernel.org/lkml/20201126123602.23454-1-mark.rutl...@arm.com

							Thanx, Paul

> [  310.073379] LTP: starting leapsec01
> [  345.070123] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [  345.076717] rcu: 	0-...!: (1 ticks this GP) idle=798/0/0x0 softirq=19187/19187 fqs=0  (false positive?)
> [  345.087533] 	(detected by 2, t=6502 jiffies, g=57249, q=184)
> [  345.093284]
> [  345.094797]
> [  345.100139] WARNING: possible recursive locking detected
> [  345.105485] 5.10.0-rc7-02296-g3f43bd6f2c3b-ab89 #1 Not tainted
> [  345.111349]
> [  345.116693] swapper/2/0 is trying to acquire lock:
> [  345.121515] a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x7c/0x14c
> [  345.129813]
> [  345.129813] but task is already holding lock:
> [  345.135678] a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq+0x68c/0x11c0
> [  345.144143]
> [  345.144143] other info that might help us debug this:
> [  345.150702]  Possible unsafe locking scenario:
> [  345.150702]
> [  345.156651]        CPU0
> [  345.159119]
> [  345.161585]   lock(rcu_node_0);
> [  345.164779]   lock(rcu_node_0);
> [  345.167973]
> [  345.167973]  *** DEADLOCK ***
> [  345.167973]
> [  345.173923]  May be due to missing lock nesting notation
> [  345.173923]
> [  345.180746] 1 lock held by swapper/2/0:
> [  345.184607]  #0: a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq+0x68c/0x11c0
> [  345.193517]
> [  345.193517] stack backtrace:
> [  345.197910] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 5.10.0-rc7-02296-g3f43bd6f2c3b-ab89 #1
> [  345.206389] Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno Development Platform, BIOS EDK II Oct 19 2019
> [  345.217215] Call trace:
> [  345.219691]  dump_backtrace+0x0/0x2b8
> [  345.223383]  show_stack+0x18/0x68
> [  345.226731]  dump_stack+0x110/0x188
> [  345.230255]  __lock_acquire+0x23f0/0x2410
> [  345.234300]  lock_acquire+0x3b8/0x730
> [  345.237997]  _raw_spin_lock_irqsave+0x80/0x168
> [  345.242476]  rcu_dump_cpu_stacks+0x7c/0x14c
> [  345.246693]  rcu_sched_clock_irq+0xfd4/0x11c0
> [  345.251087]  update_process_times+0x84/0xe0
> [  345.255306]  tick_sched_handle.isra.0+0x68/0x98
> [  345.259871]  tick_sched_timer+0x60/0xd8
> [  345.263742]  __hrtimer_run_queues+0x534/0x9e0
> [  345.268134]  hrtimer_interrupt+0x1a8/0x398
> [  345.272264]  tick_receive_broadcast+0x60/0x88
> [  345.276657]  ipi_handler+0x250/0x4b8
> [  345.280270]  handle_percpu_devid_fasteoi_ipi+0x138/0x4f0
> [  345.285619]  generic_handle_irq+0x4c/0x68
> [  345.289661]  __handle_domain_irq+0x9c/0x118
> [  345.293880]  gic_handle_irq+0xdc/0x118
> [  345.297661]  el1_irq+0xc8/0x180
> [  345.300835]  cpuidle_enter_state+0x16c/0x810
> [  345.305139]  cpuidle_enter+0x4c/0x78
> [  345.308749]  call_cpuidle+0x44/0x88
> [  345.312271]  do_idle+0x2d4/0x338
> [  345.315532]  cpu_startup_entry+0x24/0x68
> [  345.319491]  secondary_start_kernel+0x1d4/0x2d8
>
> Thanks
>
> --
> Qais Yousef
Re: RCU stall leading to deadlock warning
On 12/16/20 10:00, Paul E. McKenney wrote:
> On Wed, Dec 16, 2020 at 09:54:42AM -0800, Paul E. McKenney wrote:
> > On Wed, Dec 16, 2020 at 05:29:39PM +, Qais Yousef wrote:
> > > Hi Paul
> > >
> > > We hit the below splat a couple of days ago in our testing. Sadly I
> > > can't reproduce it. And it was on android-mainline branch..
> > >
> > > It's the deadlock message that bothers me. I can't see how we could
> > > have ended there. We detect a stall and when trying to dump the stack
> > > LOCKDEP spits the warning.
> > >
> > > Maybe we should take this report with a pinch of salt since it wasn't
> > > on mainline. I just thought it might be something worth sharing in
> > > case you can actually spot something obvious that I can't see. If I
> > > get more info or a reproducer I will share them.
> > >
> > > The failure was triggered twice on that day running 2 different tests.
> >
> > This looks like the same problem that Mark Rutland's recent patch series
> > was fixing. Do you have this series applied?
> >
> > lore.kernel.org/lkml/20201126123602.23454-1-mark.rutl...@arm.com

Oh yeah I remember this one. Yes it could be relevant. I don't see the
series in the tree; if it wasn't merged (and AFAICS it wasn't), it won't
appear there.

> I would not expect the patch below to help given what your RCU CPU stall
> warning looks like, but just in case...

Thanks. If I manage to find a reproducer I will give it a go. Still no
luck in triggering it in my test env :(

Thanks

--
Qais Yousef

> (Full disclosure: Peter fixed a bug of mine, filenames notwithstanding.)
>
> 							Thanx, Paul
>
> commit f355d19f94bf4361d641fb3dbb9ece0fbac766f8
> Author: Peter Zijlstra
> Date:   Sat Aug 29 10:22:24 2020 -0700
>
>     sched/core: Allow try_invoke_on_locked_down_task() with irqs disabled
>
>     The try_invoke_on_locked_down_task() function currently requires
>     that interrupts be enabled, but it is called with interrupts
>     disabled from rcu_print_task_stall(), resulting in an "IRQs not
>     enabled as expected" diagnostic. This commit therefore updates
>     try_invoke_on_locked_down_task() to use raw_spin_lock_irqsave()
>     instead of raw_spin_lock_irq(), thus allowing use from either context.
>
>     Link: https://lore.kernel.org/lkml/903d5805ab908...@google.com/
>     Link: https://lore.kernel.org/lkml/20200928075729.gc2...@hirez.programming.kicks-ass.net/
>     Reported-by: syzbot+cb3b69ae80afd6535...@syzkaller.appspotmail.com
>     Signed-off-by: Peter Zijlstra
>     Signed-off-by: Paul E. McKenney
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b2d6898..4abf041 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2989,7 +2989,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>  
>  /**
>   * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
> - * @p: Process for which the function is to be invoked.
> + * @p: Process for which the function is to be invoked, can be @current.
>   * @func: Function to invoke.
>   * @arg: Argument to function.
>   *
> @@ -3007,12 +3007,11 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>   */
>  bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
>  {
> -	bool ret = false;
>  	struct rq_flags rf;
> +	bool ret = false;
>  	struct rq *rq;
>  
> -	lockdep_assert_irqs_enabled();
> -	raw_spin_lock_irq(&p->pi_lock);
> +	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>  	if (p->on_rq) {
>  		rq = __task_rq_lock(p, &rf);
>  		if (task_rq(p) == rq)
> @@ -3029,7 +3028,7 @@ bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct t
>  			ret = func(p, arg);
>  		}
>  	}
> -	raw_spin_unlock_irq(&p->pi_lock);
> +	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
>  	return ret;
>  }
RCU stall leading to deadlock warning
Hi Paul

We hit the below splat a couple of days ago in our testing. Sadly I can't
reproduce it. And it was on android-mainline branch..

It's the deadlock message that bothers me. I can't see how we could have
ended there. We detect a stall and when trying to dump the stack LOCKDEP
spits the warning.

Maybe we should take this report with a pinch of salt since it wasn't on
mainline. I just thought it might be something worth sharing in case you
can actually spot something obvious that I can't see. If I get more info
or a reproducer I will share them.

The failure was triggered twice on that day running 2 different tests.

[  310.073379] LTP: starting leapsec01
[  345.070123] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[  345.076717] rcu: 	0-...!: (1 ticks this GP) idle=798/0/0x0 softirq=19187/19187 fqs=0  (false positive?)
[  345.087533] 	(detected by 2, t=6502 jiffies, g=57249, q=184)
[  345.093284]
[  345.094797]
[  345.100139] WARNING: possible recursive locking detected
[  345.105485] 5.10.0-rc7-02296-g3f43bd6f2c3b-ab89 #1 Not tainted
[  345.111349]
[  345.116693] swapper/2/0 is trying to acquire lock:
[  345.121515] a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x7c/0x14c
[  345.129813]
[  345.129813] but task is already holding lock:
[  345.135678] a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq+0x68c/0x11c0
[  345.144143]
[  345.144143] other info that might help us debug this:
[  345.150702]  Possible unsafe locking scenario:
[  345.150702]
[  345.156651]        CPU0
[  345.159119]
[  345.161585]   lock(rcu_node_0);
[  345.164779]   lock(rcu_node_0);
[  345.167973]
[  345.167973]  *** DEADLOCK ***
[  345.167973]
[  345.173923]  May be due to missing lock nesting notation
[  345.173923]
[  345.180746] 1 lock held by swapper/2/0:
[  345.184607]  #0: a00013b50c58 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq+0x68c/0x11c0
[  345.193517]
[  345.193517] stack backtrace:
[  345.197910] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 5.10.0-rc7-02296-g3f43bd6f2c3b-ab89 #1
[  345.206389] Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno Development Platform, BIOS EDK II Oct 19 2019
[  345.217215] Call trace:
[  345.219691]  dump_backtrace+0x0/0x2b8
[  345.223383]  show_stack+0x18/0x68
[  345.226731]  dump_stack+0x110/0x188
[  345.230255]  __lock_acquire+0x23f0/0x2410
[  345.234300]  lock_acquire+0x3b8/0x730
[  345.237997]  _raw_spin_lock_irqsave+0x80/0x168
[  345.242476]  rcu_dump_cpu_stacks+0x7c/0x14c
[  345.246693]  rcu_sched_clock_irq+0xfd4/0x11c0
[  345.251087]  update_process_times+0x84/0xe0
[  345.255306]  tick_sched_handle.isra.0+0x68/0x98
[  345.259871]  tick_sched_timer+0x60/0xd8
[  345.263742]  __hrtimer_run_queues+0x534/0x9e0
[  345.268134]  hrtimer_interrupt+0x1a8/0x398
[  345.272264]  tick_receive_broadcast+0x60/0x88
[  345.276657]  ipi_handler+0x250/0x4b8
[  345.280270]  handle_percpu_devid_fasteoi_ipi+0x138/0x4f0
[  345.285619]  generic_handle_irq+0x4c/0x68
[  345.289661]  __handle_domain_irq+0x9c/0x118
[  345.293880]  gic_handle_irq+0xdc/0x118
[  345.297661]  el1_irq+0xc8/0x180
[  345.300835]  cpuidle_enter_state+0x16c/0x810
[  345.305139]  cpuidle_enter+0x4c/0x78
[  345.308749]  call_cpuidle+0x44/0x88
[  345.312271]  do_idle+0x2d4/0x338
[  345.315532]  cpu_startup_entry+0x24/0x68
[  345.319491]  secondary_start_kernel+0x1d4/0x2d8

Thanks

--
Qais Yousef