Re: [PATCH RFC v5 1/6] softirq: reorder trace_softirqs_on to prevent lockdep splat

2018-05-01 Thread Joel Fernandes
On Tue, May 1, 2018 at 7:02 AM Steven Rostedt  wrote:

> On Mon, 30 Apr 2018 18:41:59 -0700
> Joel Fernandes  wrote:

> > I'm able to reproduce a lockdep splat when CONFIG_PROVE_LOCKING=y and
> > CONFIG_PREEMPTIRQ_EVENTS=y.

> Needs more info in the change log. It also requires that
> CONFIG_DEBUG_LOCKDEP=y is set.

> Add this to the change log:

> The issue was this:

> Start with: preempt_count = 1 << SOFTIRQ_SHIFT

>  __local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
>      if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
>          trace_softirqs_on() {
>              current->softirqs_enabled = 1;
>          }
>      }
>      preempt_count_sub(cnt) {
>          trace_preempt_on() {
>              tracepoint() {
>                  rcu_read_lock_sched() {
>                      // jumps into lockdep

> Where preempt_count still has softirqs disabled, but
> current->softirqs_enabled is true, and we get a splat.

> This patch should be separate (as you had it before), and needs to be
> submitted now because it already causes issues. We can add:

> Cc: sta...@vger.kernel.org
> Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")

Ok, I'll split and resubmit with your reasoning in the changelog.

Just to clarify, my changelog was based on older patches, which is why the
reasoning appears different even though the root cause is the same. In an
earlier series, I wrapped the rcu_read_lock_sched call in lockdep_{off,on},
so that stage wasn't splatting; but once the read lock was held, tracepoint
probes such as those registered by the preemptoff tracer were acquiring
locks and causing the same splat.

Anyway, that's just some justification for why the changelog appears to
present a different scenario with the same fix. I'll change it to yours
and resubmit with the tags. Thanks a lot for looking into it,

thanks,

- Joel


Re: [PATCH RFC v5 1/6] softirq: reorder trace_softirqs_on to prevent lockdep splat

2018-05-01 Thread Steven Rostedt
On Mon, 30 Apr 2018 18:41:59 -0700
Joel Fernandes  wrote:

> I'm able to reproduce a lockdep splat when CONFIG_PROVE_LOCKING=y and
> CONFIG_PREEMPTIRQ_EVENTS=y.

Needs more info in the change log. It also requires that
CONFIG_DEBUG_LOCKDEP=y is set.

Add this to the change log:

The issue was this:

Start with: preempt_count = 1 << SOFTIRQ_SHIFT

__local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
    if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
        trace_softirqs_on() {
            current->softirqs_enabled = 1;
        }
    }
    preempt_count_sub(cnt) {
        trace_preempt_on() {
            tracepoint() {
                rcu_read_lock_sched() {
                    // jumps into lockdep

Where preempt_count still has softirqs disabled, but
current->softirqs_enabled is true, and we get a splat.

This patch should be separate (as you had it before), and needs to be
submitted now because it already causes issues. We can add:

Cc: sta...@vger.kernel.org
Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")

-- Steve


> 
> $ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable
> 
> Cc: Steven Rostedt 
> Cc: Peter Zijlstra 
> Cc: Ingo Molnar 
> Cc: Mathieu Desnoyers 
> Cc: Tom Zanussi 
> Cc: Namhyung Kim 
> Cc: Thomas Gleixner 
> Cc: Boqun Feng 
> Cc: Paul McKenney 
> Cc: Frederic Weisbecker 
> Cc: Randy Dunlap 
> Cc: Masami Hiramatsu 
> Cc: Fengguang Wu 
> Cc: Baohong Liu 
> Cc: Vedang Patel 
> Cc: kernel-t...@android.com
> Signed-off-by: Joel Fernandes 
> ---
>  kernel/softirq.c | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index 24d243ef8e71..47e2f61938c0 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
>  {
>   lockdep_assert_irqs_disabled();
>  
> + if (preempt_count() == cnt)
> + trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
> +
>   if (softirq_count() == (cnt & SOFTIRQ_MASK))
>   trace_softirqs_on(_RET_IP_);
> - preempt_count_sub(cnt);
> +
> + __preempt_count_sub(cnt);
>  }
>  
>  /*



[PATCH RFC v5 1/6] softirq: reorder trace_softirqs_on to prevent lockdep splat

2018-04-30 Thread Joel Fernandes
I'm able to reproduce a lockdep splat when CONFIG_PROVE_LOCKING=y and
CONFIG_PREEMPTIRQ_EVENTS=y.

$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable

Cc: Steven Rostedt 
Cc: Peter Zijlstra 
Cc: Ingo Molnar 
Cc: Mathieu Desnoyers 
Cc: Tom Zanussi 
Cc: Namhyung Kim 
Cc: Thomas Gleixner 
Cc: Boqun Feng 
Cc: Paul McKenney 
Cc: Frederic Weisbecker 
Cc: Randy Dunlap 
Cc: Masami Hiramatsu 
Cc: Fengguang Wu 
Cc: Baohong Liu 
Cc: Vedang Patel 
Cc: kernel-t...@android.com
Signed-off-by: Joel Fernandes 
---
 kernel/softirq.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 24d243ef8e71..47e2f61938c0 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
lockdep_assert_irqs_disabled();
 
+   if (preempt_count() == cnt)
+   trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
if (softirq_count() == (cnt & SOFTIRQ_MASK))
trace_softirqs_on(_RET_IP_);
-   preempt_count_sub(cnt);
+
+   __preempt_count_sub(cnt);
 }
 
 /*
-- 
2.17.0.441.gb46fe60e1d-goog