[tip:timers/core] timekeeping: Use proper seqcount initializer

2018-12-05 Thread tip-bot for Bart Van Assche
Commit-ID:  ce10a5b3954f2514af726beb78ed8d7350c5e41c
Gitweb: https://git.kernel.org/tip/ce10a5b3954f2514af726beb78ed8d7350c5e41c
Author: Bart Van Assche 
AuthorDate: Wed, 28 Nov 2018 15:43:09 -0800
Committer:  Thomas Gleixner 
CommitDate: Wed, 5 Dec 2018 11:00:09 +0100

timekeeping: Use proper seqcount initializer

tk_core.seq is initialized open coded, but that misses to initialize the
lockdep map when lockdep is enabled. Lockdep splats involving tk_core seq
consequently lack a name and are hard to read.

Use the proper initializer which takes care of the lockdep map
initialization.

[ tglx: Massaged changelog ]

Signed-off-by: Bart Van Assche 
Signed-off-by: Thomas Gleixner 
Cc: pet...@infradead.org
Cc: t...@kernel.org
Cc: johannes.b...@intel.com
Link: https://lkml.kernel.org/r/20181128234325.110011-12-bvanass...@acm.org

---
 kernel/time/timekeeping.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index cd02bd38cf2d..c801e25875a3 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -45,7 +45,9 @@ enum timekeeping_adv_mode {
 static struct {
seqcount_t  seq;
struct timekeeper   timekeeper;
-} tk_core cacheline_aligned;
+} tk_core cacheline_aligned = {
+   .seq = SEQCNT_ZERO(tk_core.seq),
+};
 
 static DEFINE_RAW_SPINLOCK(timekeeper_lock);
 static struct timekeeper shadow_timekeeper;
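
The hunk above is the whole fix. As a minimal sketch of the same pattern, assuming a hypothetical statically allocated 'demo' structure: SEQCNT_ZERO() is what wires up the lockdep map that the open-coded zero initializer misses, so lockdep splats involving the seqcount get a usable name.

	/* Illustrative only; mirrors the tk_core change above. */
	static struct {
		seqcount_t	seq;
		u64		value;
	} demo = {
		.seq = SEQCNT_ZERO(demo.seq),
	};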


[tip:timers/core] timer: Fix jiffies wrap behavior of round_jiffies_common()

2013-06-28 Thread tip-bot for Bart Van Assche
Commit-ID:  9e04d3804d3ac97d8c03a41d78d0f0674b5d01e1
Gitweb: http://git.kernel.org/tip/9e04d3804d3ac97d8c03a41d78d0f0674b5d01e1
Author: Bart Van Assche 
AuthorDate: Tue, 21 May 2013 20:43:50 +0200
Committer:  Thomas Gleixner 
CommitDate: Fri, 28 Jun 2013 17:10:11 +0200

timer: Fix jiffies wrap behavior of round_jiffies_common()

Direct compare of jiffies related values does not work in the wrap
around case. Replace it with time_is_after_jiffies().

Signed-off-by: Bart Van Assche 
Cc: Arjan van de Ven 
Cc: Stephen Rothwell 
Link: http://lkml.kernel.org/r/519bc066.5080...@acm.org
Cc: sta...@vger.kernel.org
Signed-off-by: Thomas Gleixner 
---
 kernel/timer.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/timer.c b/kernel/timer.c
index 15ffdb3..15bc1b4 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -149,9 +149,11 @@ static unsigned long round_jiffies_common(unsigned long j, int cpu,
/* now that we have rounded, subtract the extra skew again */
j -= cpu * 3;
 
-   if (j <= jiffies) /* rounding ate our timeout entirely; */
-   return original;
-   return j;
+   /*
+* Make sure j is still in the future. Otherwise return the
+* unmodified value.
+*/
+   return time_is_after_jiffies(j) ? j : original;
 }
 
 /**
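The underlying idiom deserves a standalone illustration. Below is a minimal userspace sketch of the signed-subtraction trick behind time_after() and the time_is_after_jiffies() wrapper the patch uses; the names are local stand-ins, not the kernel's, and the comparison stays correct as long as the two values are less than LONG_MAX apart:

	#include <stdio.h>

	/* Userspace stand-in for the kernel's time_after(a, b). */
	static int time_after_ul(unsigned long a, unsigned long b)
	{
		return (long)(b - a) < 0;	/* true iff a is later than b */
	}

	int main(void)
	{
		unsigned long jiffies = (unsigned long)-6;	/* just before the wrap */
		unsigned long deadline = jiffies + 10;		/* wraps around to 4 */

		printf("direct compare: %d\n", deadline > jiffies);		   /* 0: wrong */
		printf("wrap-safe:      %d\n", time_after_ul(deadline, jiffies)); /* 1 */
		return 0;
	}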


[tip:locking/core] locking/spinlocks: Always evaluate the second argument of spin_lock_nested()

2014-08-13 Thread tip-bot for Bart Van Assche
Commit-ID:  4999201a59ef555f9105d2bb2459ed895627f7aa
Gitweb: http://git.kernel.org/tip/4999201a59ef555f9105d2bb2459ed895627f7aa
Author: Bart Van Assche 
AuthorDate: Fri, 8 Aug 2014 12:35:36 +0200
Committer:  Ingo Molnar 
CommitDate: Wed, 13 Aug 2014 10:32:38 +0200

locking/spinlocks: Always evaluate the second argument of spin_lock_nested()

Evaluating a macro argument only if certain configuration options
have been selected is confusing and error-prone. Hence always
evaluate the second argument of spin_lock_nested().

An intentional side effect of this patch is that it avoids that
the following warning is reported for netif_addr_lock_nested()
when building with CONFIG_DEBUG_LOCK_ALLOC=n and with W=1:

  include/linux/netdevice.h: In function 'netif_addr_lock_nested':
  include/linux/netdevice.h:2865:6: warning: variable 'subclass' set but not used [-Wunused-but-set-variable]
int subclass = SINGLE_DEPTH_NESTING;
^

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra 
Cc: David Rientjes 
Cc: David S. Miller 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Oleg Nesterov 
Cc: Paul E. McKenney 
Link: http://lkml.kernel.org/r/53e4a7f8.1040...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/spinlock.h | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3f2867f..262ba4e 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -197,7 +197,13 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 _raw_spin_lock_nest_lock(lock, &(nest_lock)->dep_map); \
 } while (0)
 #else
-# define raw_spin_lock_nested(lock, subclass)  _raw_spin_lock(lock)
+/*
+ * Always evaluate the 'subclass' argument to avoid that the compiler
+ * warns about set-but-not-used variables when building with
+ * CONFIG_DEBUG_LOCK_ALLOC=n and with W=1.
+ */
+# define raw_spin_lock_nested(lock, subclass)  \
+   _raw_spin_lock(((void)(subclass), (lock)))
 # define raw_spin_lock_nest_lock(lock, nest_lock)  _raw_spin_lock(lock)
 #endif
 
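The replacement macro relies on the C comma operator. A small self-contained sketch of that idiom, with illustrative names:

	#include <stdio.h>

	/* Evaluate 'a' for its side effects, discard it, yield 'b'. */
	#define eval_first_return_second(a, b)	(((void)(a), (b)))

	int main(void)
	{
		int subclass = 1;	/* would otherwise be "set but not used" */
		int lock = 42;

		printf("%d\n", eval_first_return_second(subclass, lock)); /* prints 42 */
		return 0;
	}

The (void) cast both suppresses -Wunused-value for the discarded operand and keeps the macro from accidentally yielding 'subclass' as its result.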


[tip:locking/core] locking/lockdep: Make it clear that what lock_class::key points at is not modified

2019-07-25 Thread tip-bot for Bart Van Assche
Commit-ID:  364f6afc4f5537b79cf454eb35cae92920676075
Gitweb: https://git.kernel.org/tip/364f6afc4f5537b79cf454eb35cae92920676075
Author: Bart Van Assche 
AuthorDate: Mon, 22 Jul 2019 11:24:40 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 25 Jul 2019 15:43:26 +0200

locking/lockdep: Make it clear that what lock_class::key points at is not modified

This patch does not change the behavior of the lockdep code.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Link: https://lkml.kernel.org/r/20190722182443.216015-2-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h| 2 +-
 kernel/locking/lockdep.c   | 2 +-
 kernel/locking/lockdep_internals.h | 3 ++-
 kernel/locking/lockdep_proc.c  | 2 +-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0b0d7259276d..cdb3c2f06092 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -97,7 +97,7 @@ struct lock_class {
 */
struct list_headlocks_after, locks_before;
 
-   struct lockdep_subclass_key *key;
+   const struct lockdep_subclass_key *key;
unsigned intsubclass;
unsigned intdep_gen_id;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4861cf8e274b..af6627866191 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -511,7 +511,7 @@ static const char *usage_str[] =
 };
 #endif
 
-const char * __get_key_name(struct lockdep_subclass_key *key, char *str)
+const char *__get_key_name(const struct lockdep_subclass_key *key, char *str)
 {
return kallsyms_lookup((unsigned long)key, NULL, NULL, NULL, str);
 }
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index cc83568d5012..2e518369add4 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -116,7 +116,8 @@ extern struct lock_chain lock_chains[];
 extern void get_usage_chars(struct lock_class *class,
char usage[LOCK_USAGE_CHARS]);
 
-extern const char * __get_key_name(struct lockdep_subclass_key *key, char *str);
+extern const char *__get_key_name(const struct lockdep_subclass_key *key,
+ char *str);
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
 
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index bda006f8a88b..ed9842425cac 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -399,7 +399,7 @@ static void seq_lock_time(struct seq_file *m, struct lock_time *lt)
 
 static void seq_stats(struct seq_file *m, struct lock_stat_data *data)
 {
-   struct lockdep_subclass_key *ckey;
+   const struct lockdep_subclass_key *ckey;
struct lock_class_stats *stats;
struct lock_class *class;
const char *cname;


[tip:locking/core] stacktrace: Constify 'entries' arguments

2019-07-25 Thread tip-bot for Bart Van Assche
Commit-ID:  a2970421640bd9b6a78f2685d7750a791abdfd4e
Gitweb: https://git.kernel.org/tip/a2970421640bd9b6a78f2685d7750a791abdfd4e
Author: Bart Van Assche 
AuthorDate: Mon, 22 Jul 2019 11:24:41 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 25 Jul 2019 15:43:26 +0200

stacktrace: Constify 'entries' arguments

Make it clear to humans and to the compiler that the stack trace
('entries') arguments are not modified.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Link: https://lkml.kernel.org/r/20190722182443.216015-3-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/stacktrace.h | 4 ++--
 kernel/stacktrace.c| 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index f0cfd12cb45e..83bd8cb475d7 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -9,9 +9,9 @@ struct task_struct;
 struct pt_regs;
 
 #ifdef CONFIG_STACKTRACE
-void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
+void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
   int spaces);
-int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
unsigned int nr_entries, int spaces);
 unsigned int stack_trace_save(unsigned long *store, unsigned int size,
  unsigned int skipnr);
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index f5440abb7532..6d1f68b7e528 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -20,7 +20,7 @@
  * @nr_entries:Number of entries in the storage array
  * @spaces:Number of leading spaces to print
  */
-void stack_trace_print(unsigned long *entries, unsigned int nr_entries,
+void stack_trace_print(const unsigned long *entries, unsigned int nr_entries,
   int spaces)
 {
unsigned int i;
@@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(stack_trace_print);
  *
  * Return: Number of bytes printed.
  */
-int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
unsigned int nr_entries, int spaces)
 {
unsigned int generated, i, total = 0;
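
For context, a hedged kernel-side fragment (not a standalone program) showing how the two constified helpers are typically paired; the signatures are exactly those in the hunks above, the buffer size is arbitrary:

	unsigned long entries[8];
	unsigned int nr;

	/* Capture the current call chain, then print it verbatim. */
	nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0 /* skipnr */);
	stack_trace_print(entries, nr, 0 /* spaces */);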


[tip:locking/core] locking/lockdep: Reduce space occupied by stack traces

2019-07-25 Thread tip-bot for Bart Van Assche
Commit-ID:  12593b7467f9130b64a6d4b6a26ed4ec217b6784
Gitweb: https://git.kernel.org/tip/12593b7467f9130b64a6d4b6a26ed4ec217b6784
Author: Bart Van Assche 
AuthorDate: Mon, 22 Jul 2019 11:24:42 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 25 Jul 2019 15:43:27 +0200

locking/lockdep: Reduce space occupied by stack traces

Although commit 669de8bda87b ("kernel/workqueue: Use dynamic lockdep keys
for workqueues") unregisters dynamic lockdep keys when a workqueue is
destroyed, a side effect of that commit is that all stack traces
associated with the lockdep key are leaked when a workqueue is destroyed.
Fix this by storing each unique stack trace once. Other changes in this
patch are:

- Use NULL instead of { .nr_entries = 0 } to represent 'no trace'.
- Store a pointer to a stack trace in struct lock_class and struct
  lock_list instead of storing 'nr_entries' and 'offset'.

This patch avoids that the following program triggers the "BUG: MAX_STACK_TRACE_ENTRIES too low!" complaint:

#include <fcntl.h>
#include <unistd.h>

int main()
{
for (;;) {
int fd = open("/dev/infiniband/rdma_cm", O_RDWR);
close(fd);
}
}

Suggested-by: Peter Zijlstra 
Reported-by: Eric Biggers 
Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: Yuyang Du 
Link: https://lkml.kernel.org/r/20190722182443.216015-4-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h|   9 +--
 kernel/locking/lockdep.c   | 128 ++---
 kernel/locking/lockdep_internals.h |   2 +
 3 files changed, 95 insertions(+), 44 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index cdb3c2f06092..b8a835fd611b 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -66,10 +66,7 @@ struct lock_class_key {
 
 extern struct lock_class_key __lockdep_no_validate__;
 
-struct lock_trace {
-   unsigned intnr_entries;
-   unsigned intoffset;
-};
+struct lock_trace;
 
 #define LOCKSTAT_POINTS4
 
@@ -105,7 +102,7 @@ struct lock_class {
 * IRQ/softirq usage tracking bits:
 */
unsigned long   usage_mask;
-   struct lock_trace   usage_traces[XXX_LOCK_USAGE_STATES];
+   const struct lock_trace *usage_traces[XXX_LOCK_USAGE_STATES];
 
/*
 * Generation counter, when doing certain classes of graph walking,
@@ -193,7 +190,7 @@ struct lock_list {
struct list_headentry;
struct lock_class   *class;
struct lock_class   *links_to;
-   struct lock_trace   trace;
+   const struct lock_trace *trace;
int distance;
 
/*
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index af6627866191..1a96869cb2f0 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -449,33 +449,72 @@ static void print_lockdep_off(const char *bug_msg)
 unsigned long nr_stack_trace_entries;
 
 #ifdef CONFIG_PROVE_LOCKING
+/**
+ * struct lock_trace - single stack backtrace
+ * @hash_entry:Entry in a stack_trace_hash[] list.
+ * @hash:  jhash() of @entries.
+ * @nr_entries:Number of entries in @entries.
+ * @entries:   Actual stack backtrace.
+ */
+struct lock_trace {
+   struct hlist_node   hash_entry;
+   u32 hash;
+   u32 nr_entries;
+   unsigned long   entries[0] __aligned(sizeof(unsigned long));
+};
+#define LOCK_TRACE_SIZE_IN_LONGS   \
+   (sizeof(struct lock_trace) / sizeof(unsigned long))
 /*
- * Stack-trace: tightly packed array of stack backtrace
- * addresses. Protected by the graph_lock.
+ * Stack-trace: sequence of lock_trace structures. Protected by the graph_lock.
  */
 static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
+static struct hlist_head stack_trace_hash[STACK_TRACE_HASH_SIZE];
+
+static bool traces_identical(struct lock_trace *t1, struct lock_trace *t2)
+{
+   return t1->hash == t2->hash && t1->nr_entries == t2->nr_entries &&
+   memcmp(t1->entries, t2->entries,
+  t1->nr_entries * sizeof(t1->entries[0])) == 0;
+}
 
-static int save_trace(struct lock_trace *trace)
+static struct lock_trace *save_trace(void)
 {
-   unsigned long *entries = stack_trace + nr_stack_trace_entries;
+   struct lock_trace *trace, *t2;
+   struct hlist_head *hash_head;
+   u32 hash;
unsigned int max_entries;
 
-   trace->offset = nr_stack_trace_entries;
-   max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
-   trace->nr_entries = stack_trace_save(entries, max_entries, 3);
-   nr_stack_trace_entries += 
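
The save_trace() hunk is cut off above, so here is a self-contained userspace sketch of the deduplication scheme it implements: hash the entries (the kernel uses jhash(); FNV-1a below is a stand-in), look the hash up in a chained table, and allocate only when no identical trace already exists. All names and sizes here are illustrative.

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>

	struct trace {
		struct trace *next;		/* hash chain */
		uint32_t hash;
		uint32_t nr_entries;
		unsigned long entries[];	/* flexible array member */
	};

	#define HASH_SIZE 16384
	static struct trace *hash_tbl[HASH_SIZE];

	static uint32_t hash_trace(const unsigned long *e, uint32_t n)
	{
		uint32_t h = 2166136261u;	/* FNV-1a over the raw bytes */
		for (size_t i = 0; i < (size_t)n * sizeof(*e); i++)
			h = (h ^ ((const unsigned char *)e)[i]) * 16777619u;
		return h;
	}

	/* Return an existing identical trace, or store and return a new one. */
	static struct trace *save_trace(const unsigned long *e, uint32_t n)
	{
		uint32_t h = hash_trace(e, n);
		struct trace **head = &hash_tbl[h % HASH_SIZE], *t;

		for (t = *head; t; t = t->next)
			if (t->hash == h && t->nr_entries == n &&
			    !memcmp(t->entries, e, n * sizeof(*e)))
				return t;		/* deduplicated */

		t = malloc(sizeof(*t) + n * sizeof(*e));
		if (!t)
			return NULL;
		t->hash = h;
		t->nr_entries = n;
		memcpy(t->entries, e, n * sizeof(*e));
		t->next = *head;
		*head = t;
		return t;
	}

	int main(void)
	{
		unsigned long bt[3] = { 0x1234, 0x5678, 0x9abc };

		/* Saving the same backtrace twice yields one shared object. */
		return save_trace(bt, 3) == save_trace(bt, 3) ? 0 : 1;
	}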

[tip:locking/core] locking/lockdep: Report more stack trace statistics

2019-07-25 Thread tip-bot for Bart Van Assche
Commit-ID:  8c779229d0f4fe83ead90bdcbbf08b02989aa200
Gitweb: https://git.kernel.org/tip/8c779229d0f4fe83ead90bdcbbf08b02989aa200
Author: Bart Van Assche 
AuthorDate: Mon, 22 Jul 2019 11:24:43 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 25 Jul 2019 15:43:28 +0200

locking/lockdep: Report more stack trace statistics

Report the number of stack traces and the number of stack trace hash
chains. These two numbers are useful because these allow to estimate
the number of stack trace hash collisions.
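
As an illustrative back-of-the-envelope example (numbers hypothetical): 10,000 stored traces spread over 4,000 occupied hash chains means the average occupied chain holds 10,000 / 4,000 = 2.5 traces, so a save_trace() lookup only has to compare a new trace against a handful of candidates.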

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Link: https://lkml.kernel.org/r/20190722182443.216015-5-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c   | 29 +
 kernel/locking/lockdep_internals.h |  4 
 kernel/locking/lockdep_proc.c  |  6 ++
 3 files changed, 39 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1a96869cb2f0..3c3902c40a0e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -516,6 +516,35 @@ static struct lock_trace *save_trace(void)
 
return trace;
 }
+
+/* Return the number of stack traces in the stack_trace[] array. */
+u64 lockdep_stack_trace_count(void)
+{
+   struct lock_trace *trace;
+   u64 c = 0;
+   int i;
+
+   for (i = 0; i < ARRAY_SIZE(stack_trace_hash); i++) {
+   hlist_for_each_entry(trace, &stack_trace_hash[i], hash_entry) {
+   c++;
+   }
+   }
+
+   return c;
+}
+
+/* Return the number of stack hash chains that have at least one stack trace. */
+u64 lockdep_stack_hash_count(void)
+{
+   u64 c = 0;
+   int i;
+
+   for (i = 0; i < ARRAY_SIZE(stack_trace_hash); i++)
+       if (!hlist_empty(&stack_trace_hash[i]))
+   c++;
+
+   return c;
+}
 #endif
 
 unsigned int nr_hardirq_chains;
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 93a008bf77db..18d85aebbb57 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -140,6 +140,10 @@ extern unsigned int max_bfs_queue_depth;
 #ifdef CONFIG_PROVE_LOCKING
 extern unsigned long lockdep_count_forward_deps(struct lock_class *);
 extern unsigned long lockdep_count_backward_deps(struct lock_class *);
+#ifdef CONFIG_TRACE_IRQFLAGS
+u64 lockdep_stack_trace_count(void);
+u64 lockdep_stack_hash_count(void);
+#endif
 #else
 static inline unsigned long
 lockdep_count_forward_deps(struct lock_class *class)
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index ed9842425cac..dadb7b7fba37 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -285,6 +285,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
nr_process_chains);
seq_printf(m, " stack-trace entries:   %11lu [max: %lu]\n",
nr_stack_trace_entries, MAX_STACK_TRACE_ENTRIES);
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+   seq_printf(m, " number of stack traces:%llu\n",
+  lockdep_stack_trace_count());
+   seq_printf(m, " number of stack hash chains:   %llu\n",
+  lockdep_stack_hash_count());
+#endif
seq_printf(m, " combined max dependencies: %11u\n",
(nr_hardirq_chains + 1) *
(nr_softirq_chains + 1) *


[tip:locking/core] tools/lib/lockdep/tests: Display compiler warning and error messages

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  da087b2229618f78ecea5c203fed8ba2245de636
Gitweb: https://git.kernel.org/tip/da087b2229618f78ecea5c203fed8ba2245de636
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:25 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:47 +0100

tools/lib/lockdep/tests: Display compiler warning and error messages

If compilation of liblockdep fails, display an error message and exit
immediately. Display compiler warning and error messages that are
generated while building a test. Only run a test if compilation of it
succeeded.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-2-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/run_tests.sh | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 2e570a188f16..9f31f84e7fac 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -1,13 +1,17 @@
 #! /bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-make &> /dev/null
+if ! make >/dev/null; then
+echo "Building liblockdep failed."
+echo "FAILED!"
+exit 1
+fi
 
 for i in `ls tests/*.c`; do
testname=$(basename "$i" .c)
-   gcc -o tests/$testname -pthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null
echo -ne "$testname... "
-   if [ $(timeout 1 ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+   if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+   [ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
echo "PASSED!"
else
echo "FAILED!"
@@ -19,9 +23,9 @@ done
 
 for i in `ls tests/*.c`; do
testname=$(basename "$i" .c)
-   gcc -o tests/$testname -pthread -Iinclude $i &> /dev/null
echo -ne "(PRELOAD) $testname... "
-   if [ $(timeout 1 ./lockdep ./tests/$testname 2>&1 | wc -l) -gt 0 ]; then
+   if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+   [ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
echo "PASSED!"
else
echo "FAILED!"


[tip:locking/core] tools/lib/lockdep/tests: Fix shellcheck warnings

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  7e9798871a9186cb831cf693d7ff58085384ccbd
Gitweb: https://git.kernel.org/tip/7e9798871a9186cb831cf693d7ff58085384ccbd
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:26 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:48 +0100

tools/lib/lockdep/tests: Fix shellcheck warnings

Use find instead of ls to avoid splitting filenames that contain spaces.
Use rm -f instead of if ... then rm ...; fi. This patch addresses all
shellcheck complaints about the run_tests.sh shell script.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-3-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/run_tests.sh | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 9f31f84e7fac..253719ee6377 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -7,7 +7,7 @@ if ! make >/dev/null; then
 exit 1
 fi
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "$testname... "
	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
@@ -16,12 +16,10 @@ for i in `ls tests/*.c`; do
else
echo "FAILED!"
fi
-   if [ -f "tests/$testname" ]; then
-   rm tests/$testname
-   fi
+   rm -f "tests/$testname"
 done
 
-for i in `ls tests/*.c`; do
+find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "(PRELOAD) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
@@ -30,7 +28,5 @@ for i in `ls tests/*.c`; do
else
echo "FAILED!"
fi
-   if [ -f "tests/$testname" ]; then
-   rm tests/$testname
-   fi
+   rm -f "tests/$testname"
 done


[tip:locking/core] tools/lib/lockdep/tests: Improve testing accuracy

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  5ecb8e94b494af0df8de4ca9b9ef88d87b30a9c1
Gitweb: https://git.kernel.org/tip/5ecb8e94b494af0df8de4ca9b9ef88d87b30a9c1
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:27 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:49 +0100

tools/lib/lockdep/tests: Improve testing accuracy

Instead of checking whether the tests produced any output, check the
output itself. This patch avoids that e.g. debug output causes the
message "PASSED!" to be reported for failed tests.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-4-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/run_tests.sh| 5 +++--
 tools/lib/lockdep/tests/AA.sh | 2 ++
 tools/lib/lockdep/tests/ABA.sh| 2 ++
 tools/lib/lockdep/tests/ABBA.sh   | 2 ++
 tools/lib/lockdep/tests/ABBA_2threads.sh  | 2 ++
 tools/lib/lockdep/tests/ABBCCA.sh | 2 ++
 tools/lib/lockdep/tests/ABBCCDDA.sh   | 2 ++
 tools/lib/lockdep/tests/ABCABC.sh | 2 ++
 tools/lib/lockdep/tests/ABCDBCDA.sh   | 2 ++
 tools/lib/lockdep/tests/ABCDBDDA.sh   | 2 ++
 tools/lib/lockdep/tests/WW.sh | 2 ++
 tools/lib/lockdep/tests/unlock_balance.sh | 2 ++
 12 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index 253719ee6377..bc36178329a8 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "$testname... "
	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
-   [ "$(timeout 1 "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+   timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
@@ -23,7 +23,8 @@ find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "(PRELOAD) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
-   [ "$(timeout 1 ./lockdep "./tests/$testname" 2>&1 | wc -l)" -gt 0 ]; then
+   timeout 1 ./lockdep "tests/$testname" 2>&1 |
+   "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
diff --git a/tools/lib/lockdep/tests/AA.sh b/tools/lib/lockdep/tests/AA.sh
new file mode 100644
index ..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/AA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABA.sh b/tools/lib/lockdep/tests/ABA.sh
new file mode 100644
index ..f39b32865074
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible recursive locking detected'
diff --git a/tools/lib/lockdep/tests/ABBA.sh b/tools/lib/lockdep/tests/ABBA.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBA_2threads.sh b/tools/lib/lockdep/tests/ABBA_2threads.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBA_2threads.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCA.sh b/tools/lib/lockdep/tests/ABBCCA.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.sh b/tools/lib/lockdep/tests/ABBCCDDA.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABBCCDDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCABC.sh b/tools/lib/lockdep/tests/ABCABC.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCABC.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.sh b/tools/lib/lockdep/tests/ABCDBCDA.sh
new file mode 100644
index ..fc31c607a5a8
--- /dev/null
+++ b/tools/lib/lockdep/tests/ABCDBCDA.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+grep -q 'WARNING: possible circular locking dependency detected'
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.sh 

[tip:locking/core] tools/lib/lockdep/tests: Run lockdep tests a second time under Valgrind

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  2b28a8609ec9891e37607ae20688b4ab34f2778c
Gitweb: https://git.kernel.org/tip/2b28a8609ec9891e37607ae20688b4ab34f2778c
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:28 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:49 +0100

tools/lib/lockdep/tests: Run lockdep tests a second time under Valgrind

This improves test coverage.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-5-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/run_tests.sh | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index bc36178329a8..c8fbd0306960 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -31,3 +31,17 @@ find tests -name '*.c' | sort | while read -r i; do
fi
rm -f "tests/$testname"
 done
+
+find tests -name '*.c' | sort | while read -r i; do
+   testname=$(basename "$i" .c)
+   echo -ne "(PRELOAD + Valgrind) $testname... "
+   if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+   { timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+   "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+   ! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+   echo "PASSED!"
+   else
+   echo "FAILED!"
+   fi
+   rm -f "tests/$testname"
+done


[tip:locking/core] tools/lib/lockdep: Rename "trywlock" into "trywrlock"

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  7f3c7952d111ac93573fb86f4d5aeff527a07fcc
Gitweb: https://git.kernel.org/tip/7f3c7952d111ac93573fb86f4d5aeff527a07fcc
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:29 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:50 +0100

tools/lib/lockdep: Rename "trywlock" into "trywrlock"

This patch avoids that the following compiler warning is reported while
compiling the lockdep unit tests:

include/liblockdep/rwlock.h: In function 'liblockdep_pthread_rwlock_trywlock':
include/liblockdep/rwlock.h:66:9: warning: implicit declaration of function 'pthread_rwlock_trywlock'; did you mean 'pthread_rwlock_trywrlock'? [-Wimplicit-function-declaration]
  return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
 ^~~
 pthread_rwlock_trywrlock

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Fixes: 5a52c9b480e0 ("liblockdep: Add public headers for pthread_rwlock_t 
implementation")
Link: https://lkml.kernel.org/r/20181207011148.251812-6-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/include/liblockdep/rwlock.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h
index a96c3bf0fef1..365762e3a1ea 100644
--- a/tools/lib/lockdep/include/liblockdep/rwlock.h
+++ b/tools/lib/lockdep/include/liblockdep/rwlock.h
@@ -60,10 +60,10 @@ static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_t *lock)
	return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
-static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock)
+static inline int liblockdep_pthread_rwlock_trywrlock(liblockdep_pthread_rwlock_t *lock)
 {
	lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_);
-	return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0;
+	return pthread_rwlock_trywrlock(&lock->rwlock) == 0 ? 1 : 0;
 }
 
 static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
@@ -79,7 +79,7 @@ static inline int liblockdep_rwlock_destroy(liblockdep_pthread_rwlock_t *lock)
 #define pthread_rwlock_unlock  liblockdep_pthread_rwlock_unlock
 #define pthread_rwlock_wrlock  liblockdep_pthread_rwlock_wrlock
 #define pthread_rwlock_tryrdlock   liblockdep_pthread_rwlock_tryrdlock
-#define pthread_rwlock_trywlock	liblockdep_pthread_rwlock_trywlock
+#define pthread_rwlock_trywrlock   liblockdep_pthread_rwlock_trywrlock
 #define pthread_rwlock_destroy liblockdep_rwlock_destroy
 
 #endif


[tip:locking/core] tools/lib/lockdep: Add dummy print_irqtrace_events() implementation

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  ac862d9b2fd084b50ee7a332a35d8d8d3228ce09
Gitweb: https://git.kernel.org/tip/ac862d9b2fd084b50ee7a332a35d8d8d3228ce09
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:30 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:50 +0100

tools/lib/lockdep: Add dummy print_irqtrace_events() implementation

This patch avoids that linking against liblockdep fails due to no
print_irqtrace_events() definition being available.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-7-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/lockdep.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/tools/lib/lockdep/lockdep.c b/tools/lib/lockdep/lockdep.c
index 6002fcf2f9bc..348a9d0fb766 100644
--- a/tools/lib/lockdep/lockdep.c
+++ b/tools/lib/lockdep/lockdep.c
@@ -15,6 +15,11 @@ u32 prandom_u32(void)
abort();
 }
 
+void print_irqtrace_events(struct task_struct *curr)
+{
+   abort();
+}
+
 static struct new_utsname *init_utsname(void)
 {
static struct new_utsname n = (struct new_utsname) {


[tip:locking/core] tools/lib/lockdep/tests: Test the lockdep_reset_lock() implementation

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  886adbed7ac19352315e9f1dd880360c7544d25c
Gitweb: https://git.kernel.org/tip/886adbed7ac19352315e9f1dd880360c7544d25c
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:31 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

tools/lib/lockdep/tests: Test the lockdep_reset_lock() implementation

This patch makes sure that the lockdep_reset_lock() function gets
tested.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-8-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/include/liblockdep/common.h | 1 +
 tools/lib/lockdep/include/liblockdep/mutex.h  | 1 +
 tools/lib/lockdep/tests/ABBA.c| 3 +++
 tools/lib/lockdep/tests/ABBCCA.c  | 4 
 tools/lib/lockdep/tests/ABBCCDDA.c| 5 +
 tools/lib/lockdep/tests/ABCABC.c  | 4 
 tools/lib/lockdep/tests/ABCDBCDA.c| 5 +
 tools/lib/lockdep/tests/ABCDBDDA.c| 5 +
 tools/lib/lockdep/tests/unlock_balance.c  | 2 ++
 9 files changed, 30 insertions(+)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index 8862da80995a..d640a9761f09 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -44,6 +44,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip);
 void lock_release(struct lockdep_map *lock, int nested,
unsigned long ip);
+void lockdep_reset_lock(struct lockdep_map *lock);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index a80ac39f966e..2073d4e1f2f0 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -54,6 +54,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *lock)
 
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
+	lockdep_reset_lock(&lock->dep_map);
	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 1460afd33d71..623313f54720 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -11,4 +11,7 @@ void main(void)
 
LOCK_UNLOCK_2(a, b);
LOCK_UNLOCK_2(b, a);
+
+   pthread_mutex_destroy(&b);
+   pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCA.c b/tools/lib/lockdep/tests/ABBCCA.c
index a54c1b2af118..48446129d496 100644
--- a/tools/lib/lockdep/tests/ABBCCA.c
+++ b/tools/lib/lockdep/tests/ABBCCA.c
@@ -13,4 +13,8 @@ void main(void)
LOCK_UNLOCK_2(a, b);
LOCK_UNLOCK_2(b, c);
LOCK_UNLOCK_2(c, a);
+
+   pthread_mutex_destroy(&c);
+   pthread_mutex_destroy(&b);
+   pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABBCCDDA.c b/tools/lib/lockdep/tests/ABBCCDDA.c
index aa5d194e8869..3570bf7b3804 100644
--- a/tools/lib/lockdep/tests/ABBCCDDA.c
+++ b/tools/lib/lockdep/tests/ABBCCDDA.c
@@ -15,4 +15,9 @@ void main(void)
LOCK_UNLOCK_2(b, c);
LOCK_UNLOCK_2(c, d);
LOCK_UNLOCK_2(d, a);
+
+   pthread_mutex_destroy(&d);
+   pthread_mutex_destroy(&c);
+   pthread_mutex_destroy(&b);
+   pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCABC.c b/tools/lib/lockdep/tests/ABCABC.c
index b54a08e60416..a1c4659894cd 100644
--- a/tools/lib/lockdep/tests/ABCABC.c
+++ b/tools/lib/lockdep/tests/ABCABC.c
@@ -13,4 +13,8 @@ void main(void)
LOCK_UNLOCK_2(a, b);
LOCK_UNLOCK_2(c, a);
LOCK_UNLOCK_2(b, c);
+
+   pthread_mutex_destroy(&c);
+   pthread_mutex_destroy(&b);
+   pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBCDA.c b/tools/lib/lockdep/tests/ABCDBCDA.c
index a56742250d86..335af1c90ab5 100644
--- a/tools/lib/lockdep/tests/ABCDBCDA.c
+++ b/tools/lib/lockdep/tests/ABCDBCDA.c
@@ -15,4 +15,9 @@ void main(void)
LOCK_UNLOCK_2(c, d);
LOCK_UNLOCK_2(b, c);
LOCK_UNLOCK_2(d, a);
+
+   pthread_mutex_destroy(&d);
+   pthread_mutex_destroy(&c);
+   pthread_mutex_destroy(&b);
+   pthread_mutex_destroy(&a);
 }
diff --git a/tools/lib/lockdep/tests/ABCDBDDA.c b/tools/lib/lockdep/tests/ABCDBDDA.c
index 238a3353f3c3..3c5972863049 100644
--- a/tools/lib/lockdep/tests/ABCDBDDA.c
+++ b/tools/lib/lockdep/tests/ABCDBDDA.c
@@ -15,4 +15,9 @@ void main(void)
LOCK_UNLOCK_2(c, d);
LOCK_UNLOCK_2(b, d);
LOCK_UNLOCK_2(d, a);
+
+   

[tip:locking/core] locking/lockdep: Declare local symbols static

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  1431a5d2cfa18d7006d9b0e7ab4548d9bb19ce55
Gitweb: https://git.kernel.org/tip/1431a5d2cfa18d7006d9b0e7ab4548d9bb19ce55
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:32 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

locking/lockdep: Declare local symbols static

This patch avoids that sparse complains about a missing declaration for
the lock_classes array when building with CONFIG_DEBUG_LOCKDEP=n.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-9-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2dd9dd..7434a00b2b2f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -138,6 +138,9 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
  * get freed - this significantly simplifies the debugging code.
  */
 unsigned long nr_lock_classes;
+#ifndef CONFIG_DEBUG_LOCKDEP
+static
+#endif
 struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
 
 static inline struct lock_class *hlock_class(struct held_lock *hlock)
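
The construct above generalizes: give a symbol internal linkage unless a config option means another translation unit (here, the lockdep debug code) needs to see it. A generic sketch, with the config knob and array both hypothetical:

	/* Internal linkage unless debug code elsewhere needs the symbol. */
	#ifndef CONFIG_DEBUG_FOO
	static
	#endif
	int foo_table[32];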


[tip:locking/core] locking/lockdep: Inline __lockdep_init_map()

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  d35568bdb6ce4be3f885f8f189bbde5adc7e0160
Gitweb: https://git.kernel.org/tip/d35568bdb6ce4be3f885f8f189bbde5adc7e0160
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:33 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:51 +0100

locking/lockdep: Inline __lockdep_init_map()

Since the function __lockdep_init_map() only has one caller, inline it
into its caller. This patch does not change any functionality.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-10-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7434a00b2b2f..b5c8fcb6c070 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3091,7 +3091,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
-static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+void lockdep_init_map(struct lockdep_map *lock, const char *name,
  struct lock_class_key *key, int subclass)
 {
int i;
@@ -3147,12 +3147,6 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
raw_local_irq_restore(flags);
}
 }
-
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
- struct lock_class_key *key, int subclass)
-{
-   __lockdep_init_map(lock, name, key, subclass);
-}
 EXPORT_SYMBOL_GPL(lockdep_init_map);
 
 struct lock_class_key __lockdep_no_validate__;


[tip:locking/core] locking/lockdep: Introduce lock_class_cache_is_registered()

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  2904d9fa45d3ce7153f1e10d78c570ecf7f19c35
Gitweb: https://git.kernel.org/tip/2904d9fa45d3ce7153f1e10d78c570ecf7f19c35
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:34 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:53 +0100

locking/lockdep: Introduce lock_class_cache_is_registered()

This patch does not change any functionality but makes the
lockdep_reset_lock() function easier to read.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-11-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 50 +---
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b5c8fcb6c070..81388d028ac7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4201,13 +4201,33 @@ void lockdep_free_key_range(void *start, unsigned long size)
 */
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/*
+ * Check whether any element of the @lock->class_cache[] array refers to a
+ * registered lock class. The caller must hold either the graph lock or the
+ * RCU read lock.
+ */
+static bool lock_class_cache_is_registered(struct lockdep_map *lock)
 {
struct lock_class *class;
struct hlist_head *head;
-   unsigned long flags;
int i, j;
-   int locked;
+
+   for (i = 0; i < CLASSHASH_SIZE; i++) {
+   head = classhash_table + i;
+   hlist_for_each_entry_rcu(class, head, hash_entry) {
+   for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+   if (lock->class_cache[j] == class)
+   return true;
+   }
+   }
+   return false;
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+   struct lock_class *class;
+   unsigned long flags;
+   int j, locked;
 
raw_local_irq_save(flags);
 
@@ -4227,24 +4247,14 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 * be gone.
 */
locked = graph_lock();
-   for (i = 0; i < CLASSHASH_SIZE; i++) {
-   head = classhash_table + i;
-   hlist_for_each_entry_rcu(class, head, hash_entry) {
-   int match = 0;
-
-   for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
-   match |= class == lock->class_cache[j];
-
-   if (unlikely(match)) {
-   if (debug_locks_off_graph_unlock()) {
-   /*
-                                * We all just reset everything, how did it match?
-*/
-   WARN_ON(1);
-   }
-   goto out_restore;
-   }
+   if (unlikely(lock_class_cache_is_registered(lock))) {
+   if (debug_locks_off_graph_unlock()) {
+   /*
+* We all just reset everything, how did it match?
+*/
+   WARN_ON(1);
}
+   goto out_restore;
}
if (locked)
graph_unlock();


[tip:locking/core] locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  a66b6922dc6a5ece60ea9326153da3b062977a4d
Gitweb: https://git.kernel.org/tip/a66b6922dc6a5ece60ea9326153da3b062977a4d
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:35 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:54 +0100

locking/lockdep: Remove a superfluous INIT_LIST_HEAD() statement

Initializing a list entry just before it is passed to list_add_tail_rcu()
is not necessary because list_add_tail_rcu() overwrites the next and prev
pointers anyway. Hence remove the INIT_LIST_HEAD() statement.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-12-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 81388d028ac7..346b5a1fd062 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -792,7 +792,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
class->key = key;
class->name = lock->name;
class->subclass = subclass;
-	INIT_LIST_HEAD(&class->lock_entry);
	INIT_LIST_HEAD(&class->locks_before);
	INIT_LIST_HEAD(&class->locks_after);
class->name_version = count_matching_names(class);


[tip:locking/core] locking/lockdep: Make concurrent lockdep_reset_lock() calls safe

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  786fa29e9cb6810e21ab0d9c41a81d81d54d1d1b
Gitweb: https://git.kernel.org/tip/786fa29e9cb6810e21ab0d9c41a81d81d54d1d1b
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:36 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:55 +0100

locking/lockdep: Make concurrent lockdep_reset_lock() calls safe

Since zap_class() removes items from the all_lock_classes list and the
classhash_table, protect all zap_class() calls against concurrent
data structure modifications with the graph lock.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-13-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 346b5a1fd062..737d2dd3ea56 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4122,6 +4122,9 @@ void lockdep_reset(void)
raw_local_irq_restore(flags);
 }
 
+/*
+ * Remove all references to a lock class. The caller must hold the graph lock.
+ */
 static void zap_class(struct lock_class *class)
 {
int i;
@@ -4229,6 +4232,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
int j, locked;
 
raw_local_irq_save(flags);
+   locked = graph_lock();
 
/*
 * Remove all classes this lock might have:
@@ -4245,7 +4249,6 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 * Debug check: in the end all mapped classes should
 * be gone.
 */
-   locked = graph_lock();
if (unlikely(lock_class_cache_is_registered(lock))) {
if (debug_locks_off_graph_unlock()) {
/*


[tip:locking/core] locking/lockdep: Stop using RCU primitives to access 'all_lock_classes'

2018-12-11 Thread tip-bot for Bart Van Assche
Commit-ID:  fe27b0de8dfcdf8482558ce5d25e697fe74d851e
Gitweb: https://git.kernel.org/tip/fe27b0de8dfcdf8482558ce5d25e697fe74d851e
Author: Bart Van Assche 
AuthorDate: Thu, 6 Dec 2018 17:11:37 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:56 +0100

locking/lockdep: Stop using RCU primitives to access 'all_lock_classes'

Due to the previous patch all code that accesses the 'all_lock_classes'
list holds the graph lock. Hence use regular list primitives instead of
their RCU variants to access this list.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Sasha Levin 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20181207011148.251812-14-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 737d2dd3ea56..5c837a537273 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -629,7 +629,8 @@ static int static_obj(void *obj)
 
 /*
  * To make lock name printouts unique, we calculate a unique
- * class->name_version generation counter:
+ * class->name_version generation counter. The caller must hold the graph
+ * lock.
  */
 static int count_matching_names(struct lock_class *new_class)
 {
@@ -639,7 +640,7 @@ static int count_matching_names(struct lock_class *new_class)
if (!new_class->name)
return 0;
 
-	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry(class, &all_lock_classes, lock_entry) {
if (new_class->key - new_class->subclass == class->key)
return class->name_version;
if (class->name && !strcmp(class->name, new_class->name))
@@ -803,7 +804,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
/*
 * Add it to the global list of classes:
 */
-	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
+	list_add_tail(&class->lock_entry, &all_lock_classes);
 
if (verbose(class)) {
graph_unlock();
@@ -4141,7 +4142,7 @@ static void zap_class(struct lock_class *class)
 * Unhash the class and remove it from the all_lock_classes list:
 */
	hlist_del_rcu(&class->hash_entry);
-	list_del_rcu(&class->lock_entry);
+	list_del(&class->lock_entry);
 
RCU_INIT_POINTER(class->key, NULL);
RCU_INIT_POINTER(class->name, NULL);


[tip:locking/urgent] locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again

2019-04-16 Thread tip-bot for Bart Van Assche
Commit-ID:  8b39adbee805c539a461dbf208b125b096152b1c
Gitweb: https://git.kernel.org/tip/8b39adbee805c539a461dbf208b125b096152b1c
Author: Bart Van Assche 
AuthorDate: Mon, 15 Apr 2019 10:05:38 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 16 Apr 2019 08:21:51 +0200

locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again

If lockdep_register_key() and lockdep_unregister_key() are called with
debug_locks == false then the following warning is reported:

  WARNING: CPU: 2 PID: 15145 at kernel/locking/lockdep.c:4920 lockdep_unregister_key+0x1ad/0x240

That warning is reported because lockdep_unregister_key() ignores the
value of 'debug_locks' and because the behavior of lockdep_register_key()
depends on whether or not 'debug_locks' is set. Fix this inconsistency
by making lockdep_unregister_key() take 'debug_locks' again into
account.

Signed-off-by: Bart Van Assche 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: shenghui 
Fixes: 90c1cba2b3b3 ("locking/lockdep: Zap lock classes even with lock debugging disabled")
Link: http://lkml.kernel.org/r/20190415170538.23491-1-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e16766ff184b..e221be724fe8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4907,8 +4907,9 @@ void lockdep_unregister_key(struct lock_class_key *key)
return;
 
raw_local_irq_save(flags);
-   arch_spin_lock(&lockdep_lock);
-   current->lockdep_recursion = 1;
+   if (!graph_lock())
+   goto out_irq;
+
pf = get_pending_free();
hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
if (k == key) {
@@ -4920,8 +4921,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
WARN_ON_ONCE(!found);
__lockdep_free_key_range(pf, key, 1);
call_rcu_zapped(pf);
-   current->lockdep_recursion = 0;
-   arch_spin_unlock(&lockdep_lock);
+   graph_unlock();
+out_irq:
raw_local_irq_restore(flags);
 
/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */


[tip:locking/core] locking/lockdep: Fix two 32-bit compiler warnings

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  09d75ecb122d8b600d76e3b8d53a10ffbe3bcec2
Gitweb: https://git.kernel.org/tip/09d75ecb122d8b600d76e3b8d53a10ffbe3bcec2
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:36 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:38 +0100

locking/lockdep: Fix two 32-bit compiler warnings

Use %zu to format size_t instead of %lu so that the compiler does not
complain about a mismatch between format specifier and argument on
32-bit systems.
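
A user-space illustration of the warning and the fix (hypothetical example,
not from the patch; on 32-bit targets size_t is typically 'unsigned int', so
passing it for %lu trips -Wformat, while %zu matches size_t everywhere):

  #include <stdio.h>

  int main(void)
  {
          size_t n = sizeof(long);

          /* printf("size: %lu\n", n);  -- warns on 32-bit builds */
          printf("size: %zu\n", n);     /* correct on every target */
          return 0;
  }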

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-2-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7f7db23fc002..5c5283bf499c 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4266,7 +4266,7 @@ void __init lockdep_init(void)
printk("... MAX_LOCKDEP_CHAINS:  %lu\n", MAX_LOCKDEP_CHAINS);
printk("... CHAINHASH_SIZE:  %lu\n", CHAINHASH_SIZE);
 
-   printk(" memory used by lock dependency info: %lu kB\n",
+   printk(" memory used by lock dependency info: %zu kB\n",
(sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
sizeof(struct list_head) * CLASSHASH_SIZE +
sizeof(struct lock_list) * MAX_LOCKDEP_ENTRIES +
@@ -4278,7 +4278,7 @@ void __init lockdep_init(void)
) / 1024
);
 
-   printk(" per task-struct memory footprint: %lu bytes\n",
+   printk(" per task-struct memory footprint: %zu bytes\n",
sizeof(struct held_lock) * MAX_LOCK_DEPTH);
 }
 


[tip:locking/core] locking/lockdep: Fix reported required memory size (1/2)

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  7ff8517e1034f26dde03d6df4026f085480408f0
Gitweb: https://git.kernel.org/tip/7ff8517e1034f26dde03d6df4026f085480408f0
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:37 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:39 +0100

locking/lockdep: Fix reported required memory size (1/2)

Change the sizeof(array element type) * (array size) expressions into
sizeof(array). This fixes the size computations of the classhash_table[]
and chainhash_table[] arrays.

The reason is that commit:

  a63f38cc4ccf ("locking/lockdep: Convert hash tables to hlists")

changed the type of the elements of that array from 'struct list_head' into
'struct hlist_head'.
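
A user-space sketch of the failure mode (stand-in types, not the kernel's
definitions): when the element type shrinks from two pointers to one, a
hand-written sizeof(type) * count silently keeps the stale type, whereas
sizeof(array) always follows the array's actual definition:

  #include <stdio.h>

  struct list_head  { void *next, *prev; };    /* two pointers */
  struct hlist_head { void *first; };          /* one pointer  */

  #define TABLE_SIZE 4096

  /* The table was converted from struct list_head to struct hlist_head... */
  static struct hlist_head table[TABLE_SIZE];

  int main(void)
  {
          /* ...so a size computed against the old type is now 2x too big. */
          printf("stale:   %zu bytes\n", sizeof(struct list_head) * TABLE_SIZE);
          printf("correct: %zu bytes\n", sizeof(table));
          return 0;
  }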

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-3-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5c5283bf499c..57a523f0273c 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4267,19 +4267,19 @@ void __init lockdep_init(void)
printk("... CHAINHASH_SIZE:  %lu\n", CHAINHASH_SIZE);
 
printk(" memory used by lock dependency info: %zu kB\n",
-   (sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
-   sizeof(struct list_head) * CLASSHASH_SIZE +
-   sizeof(struct lock_list) * MAX_LOCKDEP_ENTRIES +
-   sizeof(struct lock_chain) * MAX_LOCKDEP_CHAINS +
-   sizeof(struct list_head) * CHAINHASH_SIZE
+  (sizeof(lock_classes) +
+   sizeof(classhash_table) +
+   sizeof(list_entries) +
+   sizeof(lock_chains) +
+   sizeof(chainhash_table)
 #ifdef CONFIG_PROVE_LOCKING
-   + sizeof(struct circular_queue)
+   + sizeof(lock_cq)
 #endif
) / 1024
);
 
printk(" per task-struct memory footprint: %zu bytes\n",
-   sizeof(struct held_lock) * MAX_LOCK_DEPTH);
+  sizeof(((struct task_struct *)NULL)->held_locks));
 }
 
 static void


[tip:locking/core] locking/lockdep: Fix reported required memory size (2/2)

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  15ea86b58c71d05e0921bebcf707aa30e43e9e25
Gitweb: https://git.kernel.org/tip/15ea86b58c71d05e0921bebcf707aa30e43e9e25
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:38 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:39 +0100

locking/lockdep: Fix reported required memory size (2/2)

Lock chains are only tracked with CONFIG_PROVE_LOCKING=y. Do not report
the memory required for the lock chain array if CONFIG_PROVE_LOCKING=n.
See also commit:

  ca58abcb4a6d ("lockdep: sanitise CONFIG_PROVE_LOCKING")

Include the size of the chain_hlocks[] array.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-4-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 57a523f0273c..ec6f6aff4d8d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4270,10 +4270,11 @@ void __init lockdep_init(void)
   (sizeof(lock_classes) +
sizeof(classhash_table) +
sizeof(list_entries) +
-   sizeof(lock_chains) +
sizeof(chainhash_table)
 #ifdef CONFIG_PROVE_LOCKING
+ sizeof(lock_cq)
+   + sizeof(lock_chains)
+   + sizeof(chain_hlocks)
 #endif
) / 1024
);


[tip:locking/core] locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  523b113bace5e64e860d8c61d7aa25057d274753
Gitweb: https://git.kernel.org/tip/523b113bace5e64e860d8c61d7aa25057d274753
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:39 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:40 +0100

locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache

Make sure that add_chain_cache() returns 0 and does not modify the
chain hash if nr_chain_hlocks == MAX_LOCKDEP_CHAIN_HLOCKS before this
function is called.
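
Schematically, the restructured function follows the usual check-before-commit
pattern (a sketch of the resulting control flow, not a literal copy):

  if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
          /* record the held locks in chain_hlocks[], then commit */
          nr_chain_hlocks += chain->depth;
  } else {
          /* out of space: disable lockdep, leave the chain hash untouched */
          if (!debug_locks_off_graph_unlock())
                  return 0;
          print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!");
          dump_stack();
          return 0;
  }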

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-5-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ec6f6aff4d8d..21d84510e28f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2195,16 +2195,8 @@ static inline int add_chain_cache(struct task_struct *curr,
chain_hlocks[chain->base + j] = lock_id;
}
chain_hlocks[chain->base + j] = class - lock_classes;
-   }
-
-   if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)
nr_chain_hlocks += chain->depth;
-
-#ifdef CONFIG_DEBUG_LOCKDEP
-   /*
-* Important for check_no_collision().
-*/
-   if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) {
+   } else {
if (!debug_locks_off_graph_unlock())
return 0;
 
@@ -2212,7 +2204,6 @@ static inline int add_chain_cache(struct task_struct *curr,
dump_stack();
return 0;
}
-#endif
 
hlist_add_head_rcu(&chain->entry, hash_head);
debug_atomic_inc(chain_lookup_misses);


[tip:locking/core] locking/lockdep: Reorder struct lock_class members

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  09329d1c2024522308ca4de977fc6bba753bab1a
Gitweb: https://git.kernel.org/tip/09329d1c2024522308ca4de977fc6bba753bab1a
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:40 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:40 +0100

locking/lockdep: Reorder struct lock_class members

This patch does not change any functionality but makes the patch that
frees lock classes that are no longer in use easier to read.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-6-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c5335df2372f..0c38bade84b7 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -76,6 +76,13 @@ struct lock_class {
 */
struct list_head    lock_entry;
 
+   /*
+* These fields represent a directed graph of lock dependencies,
+* to every node we attach a list of "forward" and a list of
+* "backward" graph nodes.
+*/
+   struct list_head    locks_after, locks_before;
+
struct lockdep_subclass_key *key;
unsigned int        subclass;
unsigned int        dep_gen_id;
@@ -86,13 +93,6 @@ struct lock_class {
unsigned long   usage_mask;
struct stack_trace  usage_traces[XXX_LOCK_USAGE_STATES];
 
-   /*
-* These fields represent a directed graph of lock dependencies,
-* to every node we attach a list of "forward" and a list of
-* "backward" graph nodes.
-*/
-   struct list_head    locks_after, locks_before;
-
/*
 * Generation counter, when doing certain classes of graph walking,
 * to ensure that we check one node only once:


[tip:locking/core] locking/lockdep: Make zap_class() remove all matching lock order entries

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  86cffb80a525f7b8f969c8c79669d383e02f17d1
Gitweb: https://git.kernel.org/tip/86cffb80a525f7b8f969c8c79669d383e02f17d1
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:41 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:40 +0100

locking/lockdep: Make zap_class() remove all matching lock order entries

Make sure that all lock order entries that refer to a class are removed
from the list_entries[] array when a kernel module is unloaded.
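
With the new links_to member every dependency records both of its endpoints;
schematically:

  /*
   * For a dependency prev -> next, two lock_list entries exist:
   *
   *   in prev->locks_after:  .class = next, .links_to = prev
   *   in next->locks_before: .class = prev, .links_to = next
   *
   * zap_class(c) can therefore match every entry in which c participates,
   * whichever end of the edge it sits on.
   */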

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-7-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h  |  1 +
 kernel/locking/lockdep.c | 19 +--
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0c38bade84b7..b5e6bfe0ae4a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -178,6 +178,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
struct list_head    entry;
struct lock_class   *class;
+   struct lock_class   *links_to;
struct stack_trace  trace;
int distance;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 21d84510e28f..28fbeb2a10cc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -859,7 +859,8 @@ static struct lock_list *alloc_list_entry(void)
 /*
  * Add a new dependency to the head of the list:
  */
-static int add_lock_to_list(struct lock_class *this, struct list_head *head,
+static int add_lock_to_list(struct lock_class *this,
+   struct lock_class *links_to, struct list_head *head,
unsigned long ip, int distance,
struct stack_trace *trace)
 {
@@ -873,6 +874,7 @@ static int add_lock_to_list(struct lock_class *this, struct list_head *head,
return 0;
 
entry->class = this;
+   entry->links_to = links_to;
entry->distance = distance;
entry->trace = *trace;
/*
@@ -1907,14 +1909,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 * Ok, all validations passed, add the new lock
 * to the previous lock's dependency list:
 */
-   ret = add_lock_to_list(hlock_class(next),
+   ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
   &hlock_class(prev)->locks_after,
   next->acquire_ip, distance, trace);
 
if (!ret)
return 0;
 
-   ret = add_lock_to_list(hlock_class(prev),
+   ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
   &hlock_class(next)->locks_before,
   next->acquire_ip, distance, trace);
if (!ret)
@@ -4107,15 +4109,20 @@ void lockdep_reset(void)
  */
 static void zap_class(struct lock_class *class)
 {
+   struct lock_list *entry;
int i;
 
/*
 * Remove all dependencies this lock is
 * involved in:
 */
-   for (i = 0; i < nr_list_entries; i++) {
-   if (list_entries[i].class == class)
-   list_del_rcu(&list_entries[i].entry);
+   for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+   if (entry->class != class && entry->links_to != class)
+   continue;
+   list_del_rcu(&entry->entry);
+   /* Clear .class and .links_to to avoid double removal. */
+   WRITE_ONCE(entry->class, NULL);
+   WRITE_ONCE(entry->links_to, NULL);
}
/*
 * Unhash the class and remove it from the all_lock_classes list:


[tip:locking/core] locking/lockdep: Initialize the locks_before and locks_after lists earlier

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  feb0a3865ed2f7d66a1f2686f7ad784422c249ad
Gitweb: https://git.kernel.org/tip/feb0a3865ed2f7d66a1f2686f7ad784422c249ad
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:42 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:41 +0100

locking/lockdep: Initialize the locks_before and locks_after lists earlier

This patch does not change any functionality. A later patch will reuse
lock classes that have been freed. In combination with that patch this
patch will have the effect of initializing lock class order lists once
instead of every time a lock class structure is reinitialized.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-8-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 29 +++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 28fbeb2a10cc..d1a6daf1f51f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -735,6 +735,25 @@ static bool assign_lock_key(struct lockdep_map *lock)
return true;
 }
 
+/*
+ * Initialize the lock_classes[] array elements.
+ */
+static void init_data_structures_once(void)
+{
+   static bool initialization_happened;
+   int i;
+
+   if (likely(initialization_happened))
+   return;
+
+   initialization_happened = true;
+
+   for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+   INIT_LIST_HEAD(&lock_classes[i].locks_after);
+   INIT_LIST_HEAD(&lock_classes[i].locks_before);
+   }
+}
+
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
@@ -775,6 +794,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
goto out_unlock_set;
}
 
+   init_data_structures_once();
+
/*
 * Allocate a new key from the static array, and add it to
 * the hash:
@@ -793,8 +814,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
class->key = key;
class->name = lock->name;
class->subclass = subclass;
-   INIT_LIST_HEAD(&class->locks_before);
-   INIT_LIST_HEAD(&class->locks_after);
+   WARN_ON_ONCE(!list_empty(&class->locks_before));
+   WARN_ON_ONCE(!list_empty(&class->locks_after));
class->name_version = count_matching_names(class);
/*
 * We use RCU's safe list-add method to make
@@ -4155,6 +4176,8 @@ void lockdep_free_key_range(void *start, unsigned long size)
int i;
int locked;
 
+   init_data_structures_once();
+
raw_local_irq_save(flags);
locked = graph_lock();
 
@@ -4218,6 +4241,8 @@ void lockdep_reset_lock(struct lockdep_map *lock)
unsigned long flags;
int j, locked;
 
+   init_data_structures_once();
+
raw_local_irq_save(flags);
locked = graph_lock();
 


[tip:locking/core] locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  956f3563a8387beb7758f2e8ee483639ef91afc6
Gitweb: https://git.kernel.org/tip/956f3563a8387beb7758f2e8ee483639ef91afc6
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:43 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:42 +0100

locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()

This patch does not change the behavior of these functions but makes the
patch that frees unused lock classes easier to read.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-9-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 72 
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d1a6daf1f51f..2d4c21a02546 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4160,6 +4160,24 @@ static inline int within(const void *addr, void *start, unsigned long size)
return addr >= start && addr < start + size;
 }
 
+static void __lockdep_free_key_range(void *start, unsigned long size)
+{
+   struct lock_class *class;
+   struct hlist_head *head;
+   int i;
+
+   /* Unhash all classes that were created by a module. */
+   for (i = 0; i < CLASSHASH_SIZE; i++) {
+   head = classhash_table + i;
+   hlist_for_each_entry_rcu(class, head, hash_entry) {
+   if (!within(class->key, start, size) &&
+   !within(class->name, start, size))
+   continue;
+   zap_class(class);
+   }
+   }
+}
+
 /*
  * Used in module.c to remove lock classes from memory that is going to be
  * freed; and possibly re-used by other modules.
@@ -4170,30 +4188,14 @@ static inline int within(const void *addr, void *start, unsigned long size)
  */
 void lockdep_free_key_range(void *start, unsigned long size)
 {
-   struct lock_class *class;
-   struct hlist_head *head;
unsigned long flags;
-   int i;
int locked;
 
init_data_structures_once();
 
raw_local_irq_save(flags);
locked = graph_lock();
-
-   /*
-* Unhash all classes that were created by this module:
-*/
-   for (i = 0; i < CLASSHASH_SIZE; i++) {
-   head = classhash_table + i;
-   hlist_for_each_entry_rcu(class, head, hash_entry) {
-   if (within(class->key, start, size))
-   zap_class(class);
-   else if (within(class->name, start, size))
-   zap_class(class);
-   }
-   }
-
+   __lockdep_free_key_range(start, size);
if (locked)
graph_unlock();
raw_local_irq_restore(flags);
@@ -4235,16 +4237,11 @@ static bool lock_class_cache_is_registered(struct lockdep_map *lock)
return false;
 }
 
-void lockdep_reset_lock(struct lockdep_map *lock)
+/* The caller must hold the graph lock. Does not sleep. */
+static void __lockdep_reset_lock(struct lockdep_map *lock)
 {
struct lock_class *class;
-   unsigned long flags;
-   int j, locked;
-
-   init_data_structures_once();
-
-   raw_local_irq_save(flags);
-   locked = graph_lock();
+   int j;
 
/*
 * Remove all classes this lock might have:
@@ -4261,19 +4258,22 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 * Debug check: in the end all mapped classes should
 * be gone.
 */
-   if (unlikely(lock_class_cache_is_registered(lock))) {
-   if (debug_locks_off_graph_unlock()) {
-   /*
-* We all just reset everything, how did it match?
-*/
-   WARN_ON(1);
-   }
-   goto out_restore;
-   }
+   if (WARN_ON_ONCE(lock_class_cache_is_registered(lock)))
+   debug_locks_off();
+}
+
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+   unsigned long flags;
+   int locked;
+
+   init_data_structures_once();
+
+   raw_local_irq_save(flags);
+   locked = graph_lock();
+   __lockdep_reset_lock(lock);
if (locked)
graph_unlock();
-
-out_restore:
raw_local_irq_restore(flags);
 }
 


[tip:locking/core] locking/lockdep: Make it easy to detect whether or not inside a selftest

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  cdc84d794947b5431c0a6916c303aee7114819d2
Gitweb: https://git.kernel.org/tip/cdc84d794947b5431c0a6916c303aee7114819d2
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:44 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:43 +0100

locking/lockdep: Make it easy to detect whether or not inside a selftest

The patch that frees unused lock classes will modify the behavior of
lockdep_free_key_range() and lockdep_reset_lock() depending on whether
or not these functions are called from the context of the lockdep
selftests. Hence make it easy to detect whether or not lockdep code
is called from the context of a lockdep selftest.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-10-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h  | 5 +
 kernel/locking/lockdep.c | 6 ++
 lib/locking-selftest.c   | 2 ++
 3 files changed, 13 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index b5e6bfe0ae4a..66eee1ba0f2a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -265,6 +265,7 @@ extern void lockdep_reset(void);
 extern void lockdep_reset_lock(struct lockdep_map *lock);
 extern void lockdep_free_key_range(void *start, unsigned long size);
 extern asmlinkage void lockdep_sys_exit(void);
+extern void lockdep_set_selftest_task(struct task_struct *task);
 
 extern void lockdep_off(void);
 extern void lockdep_on(void);
@@ -395,6 +396,10 @@ static inline void lockdep_on(void)
 {
 }
 
+static inline void lockdep_set_selftest_task(struct task_struct *task)
+{
+}
+
 # define lock_acquire(l, s, t, r, c, n, i) do { } while (0)
 # define lock_release(l, n, i) do { } while (0)
 # define lock_downgrade(l, i)  do { } while (0)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2d4c21a02546..34cd87c65f5d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -81,6 +81,7 @@ module_param(lock_stat, int, 0644);
  * code to recurse back into the lockdep code...
  */
static arch_spinlock_t lockdep_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
+static struct task_struct *lockdep_selftest_task_struct;
 
 static int graph_lock(void)
 {
@@ -331,6 +332,11 @@ void lockdep_on(void)
 }
 EXPORT_SYMBOL(lockdep_on);
 
+void lockdep_set_selftest_task(struct task_struct *task)
+{
+   lockdep_selftest_task_struct = task;
+}
+
 /*
  * Debugging switches:
  */
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 1e1bbf171eca..a1705545e6ac 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1989,6 +1989,7 @@ void locking_selftest(void)
 
init_shared_classes();
debug_locks_silent = !debug_locks_verbose;
+   lockdep_set_selftest_task(current);
 
DO_TESTCASE_6R("A-A deadlock", AA);
DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
@@ -2097,5 +2098,6 @@ void locking_selftest(void)
printk("-\n");
debug_locks = 1;
}
+   lockdep_set_selftest_task(NULL);
debug_locks_silent = 0;
 }


[tip:locking/core] locking/lockdep: Update two outdated comments

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  29fc33fb7283970701355dc89badba4ed21c7092
Gitweb: https://git.kernel.org/tip/29fc33fb7283970701355dc89badba4ed21c7092
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:45 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:43 +0100

locking/lockdep: Update two outdated comments

synchronize_sched() has been removed recently. Update the comments that
refer to synchronize_sched().

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Fixes: 51959d85f32d ("lockdep: Replace synchronize_sched() with synchronize_rcu()") # v5.0-rc1
Link: https://lkml.kernel.org/r/20190214230058.196511-11-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 34cd87c65f5d..c7ca3a4def7e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4188,9 +4188,9 @@ static void __lockdep_free_key_range(void *start, unsigned long size)
  * Used in module.c to remove lock classes from memory that is going to be
  * freed; and possibly re-used by other modules.
  *
- * We will have had one sync_sched() before getting here, so we're guaranteed
- * nobody will look up these exact classes -- they're properly dead but still
- * allocated.
+ * We will have had one synchronize_rcu() before getting here, so we're
+ * guaranteed nobody will look up these exact classes -- they're properly dead
+ * but still allocated.
  */
 void lockdep_free_key_range(void *start, unsigned long size)
 {
@@ -4209,8 +4209,6 @@ void lockdep_free_key_range(void *start, unsigned long size)
/*
 * Wait for any possible iterators from look_up_lock_class() to pass
 * before continuing to free the memory they refer to.
-*
-* sync_sched() is sufficient because the read-side is IRQ disable.
 */
synchronize_rcu();
 


[tip:locking/core] locking/lockdep: Free lock classes that are no longer in use

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  a0b0fd53e1e67639b303b15939b9c653dbe7a8c4
Gitweb: https://git.kernel.org/tip/a0b0fd53e1e67639b303b15939b9c653dbe7a8c4
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:46 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:43 +0100

locking/lockdep: Free lock classes that are no longer in use

Instead of leaving lock classes that are no longer in use in the
lock_classes array, reuse entries from that array that are no longer in
use. Maintain a linked list of free lock classes with list head
'free_lock_classes'. Only add freed lock classes to the free_lock_classes
list after a grace period, so that a lock_classes[] element is never
reused while an RCU reader is still accessing it. Since the lockdep
selftests run in a context where sleeping is not allowed and since the
selftests require that lock resetting/zapping works with debug_locks
off, make the behavior of lockdep_free_key_range() and
lockdep_reset_lock() depend on whether or not these are called from
the context of the lockdep selftests.

Thanks to Peter for having shown how to modify get_pending_free()
such that that function does not have to sleep.
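
In outline, the deferred reuse works roughly like this (a simplified sketch of
the RCU callback; the real free_zapped_rcu() also re-arms call_rcu() for the
other pf[] element and checks debug_locks):

  static void free_zapped_rcu(struct rcu_head *ch)
  {
          struct pending_free *pf = delayed_free.pf + delayed_free.index;

          graph_lock();
          /* A grace period has elapsed: no RCU reader can still see these
           * classes, so their lock_classes[] slots may be handed out again. */
          list_splice_init(&pf->zapped, &free_lock_classes);
          delayed_free.index ^= 1;        /* switch to the other batch */
          delayed_free.scheduled = false;
          graph_unlock();
  }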

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-12-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h  |   9 +-
 kernel/locking/lockdep.c | 396 +--
 2 files changed, 354 insertions(+), 51 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 66eee1ba0f2a..619ec3f26cdc 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -63,7 +63,8 @@ extern struct lock_class_key __lockdep_no_validate__;
#define LOCKSTAT_POINTS     4
 
 /*
- * The lock-class itself:
+ * The lock-class itself. The order of the structure members matters.
+ * reinit_class() zeroes the key member and all subsequent members.
  */
 struct lock_class {
/*
@@ -72,7 +73,9 @@ struct lock_class {
struct hlist_node   hash_entry;
 
/*
-* global list of all lock-classes:
+* Entry in all_lock_classes when in use. Entry in free_lock_classes
+* when not in use. Instances that are being freed are on one of the
+* zapped_classes lists.
 */
struct list_head    lock_entry;
 
@@ -104,7 +107,7 @@ struct lock_class {
unsigned long   contention_point[LOCKSTAT_POINTS];
unsigned long   contending_point[LOCKSTAT_POINTS];
 #endif
-};
+} __no_randomize_layout;
 
 #ifdef CONFIG_LOCK_STAT
 struct lock_time {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index c7ca3a4def7e..8ecf355dd163 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -50,6 +50,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -135,8 +136,8 @@ static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
 /*
  * All data structures here are protected by the global debug_lock.
  *
- * Mutex key structs only get allocated, once during bootup, and never
- * get freed - this significantly simplifies the debugging code.
+ * nr_lock_classes is the number of elements of lock_classes[] that is
+ * in use.
  */
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
@@ -278,11 +279,39 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
 #endif
 
 /*
- * We keep a global list of all lock classes. The list only grows,
- * never shrinks. The list is only accessed with the lockdep
- * spinlock lock held.
+ * We keep a global list of all lock classes. The list is only accessed with
+ * the lockdep spinlock lock held. free_lock_classes is a list with free
+ * elements. These elements are linked together by the lock_entry member in
+ * struct lock_class.
  */
 LIST_HEAD(all_lock_classes);
+static LIST_HEAD(free_lock_classes);
+
+/**
+ * struct pending_free - information about data structures about to be freed
+ * @zapped: Head of a list with struct lock_class elements.
+ */
+struct pending_free {
+   struct list_head zapped;
+};
+
+/**
+ * struct delayed_free - data structures used for delayed freeing
+ *
+ * A data structure for delayed freeing of data structures that may be
+ * accessed by RCU readers at the time these were freed.
+ *
+ * @rcu_head:  Used to schedule an RCU callback for freeing data structures.
+ * @index: Index of @pf to which freed data structures are added.
+ * @scheduled: Whether or not an RCU callback has been scheduled.
+ * @pf:Array with information about data structures about to be freed.
+ */
+static struct delayed_free {
+   struct rcu_head rcu_head;
+   int index;
+   int scheduled;
+   struct pending_free pf[2];
+} delayed_free;

[tip:locking/core] locking/lockdep: Reuse list entries that are no longer in use

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  ace35a7ac493d4284a57ad807579011bebba891c
Gitweb: https://git.kernel.org/tip/ace35a7ac493d4284a57ad807579011bebba891c
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:47 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:44 +0100

locking/lockdep: Reuse list entries that are no longer in use

Instead of abandoning elements of list_entries[] that are no longer in
use, make alloc_list_entry() reuse array elements that have been freed.
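
A self-contained user-space sketch of the allocation scheme (one 64-entry pool
and a plain integer bitmap instead of the kernel's bitmap helpers):

  #include <stdint.h>
  #include <stdio.h>

  #define POOL_SIZE 64

  static uint64_t in_use;         /* bit i set => slot i is allocated */

  /* Return the index of a free slot, or -1 when the pool is exhausted. */
  static int alloc_slot(void)
  {
          for (int i = 0; i < POOL_SIZE; i++) {
                  if (!(in_use & (UINT64_C(1) << i))) {
                          in_use |= UINT64_C(1) << i;
                          return i;
                  }
          }
          return -1;
  }

  /* Freeing only clears the bit, so the slot can be handed out again. */
  static void free_slot(int i)
  {
          in_use &= ~(UINT64_C(1) << i);
  }

  int main(void)
  {
          int a = alloc_slot(), b = alloc_slot();

          free_slot(a);
          printf("%d %d %d\n", a, b, alloc_slot());  /* prints: 0 1 0 */
          return 0;
  }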

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-13-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 24 
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 8ecf355dd163..2c6d0b67e7b6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -132,6 +133,7 @@ static inline int debug_locks_off_graph_unlock(void)
 
 unsigned long nr_list_entries;
 static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
+static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
 
 /*
  * All data structures here are protected by the global debug_lock.
@@ -907,7 +909,10 @@ out_set_class_cache:
  */
 static struct lock_list *alloc_list_entry(void)
 {
-   if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
+   int idx = find_first_zero_bit(list_entries_in_use,
+ ARRAY_SIZE(list_entries));
+
+   if (idx >= ARRAY_SIZE(list_entries)) {
if (!debug_locks_off_graph_unlock())
return NULL;
 
@@ -915,7 +920,9 @@ static struct lock_list *alloc_list_entry(void)
dump_stack();
return NULL;
}
-   return list_entries + nr_list_entries++;
+   nr_list_entries++;
+   __set_bit(idx, list_entries_in_use);
+   return list_entries + idx;
 }
 
 /*
@@ -1019,7 +1026,7 @@ static inline void mark_lock_accessed(struct lock_list *lock,
unsigned long nr;
 
nr = lock - list_entries;
-   WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+   WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
lock->parent = parent;
lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
@@ -1029,7 +1036,7 @@ static inline unsigned long lock_accessed(struct lock_list *lock)
unsigned long nr;
 
nr = lock - list_entries;
-   WARN_ON(nr >= nr_list_entries); /* Out-of-bounds, input fail */
+   WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
return lock->class->dep_gen_id == lockdep_dependency_gen_id;
 }
 
@@ -4276,13 +4283,13 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 * Remove all dependencies this lock is
 * involved in:
 */
-   for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
+   for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
+   entry = list_entries + i;
if (entry->class != class && entry->links_to != class)
continue;
+   __clear_bit(i, list_entries_in_use);
+   nr_list_entries--;
list_del_rcu(&entry->entry);
-   /* Clear .class and .links_to to avoid double removal. */
-   WRITE_ONCE(entry->class, NULL);
-   WRITE_ONCE(entry->links_to, NULL);
}
if (list_empty(&class->locks_after) &&
list_empty(&class->locks_before)) {
@@ -4596,6 +4603,7 @@ void __init lockdep_init(void)
   (sizeof(lock_classes) +
sizeof(classhash_table) +
sizeof(list_entries) +
+   sizeof(list_entries_in_use) +
sizeof(chainhash_table) +
sizeof(delayed_free)
 #ifdef CONFIG_PROVE_LOCKING


[tip:locking/core] locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  2212684adff79e2704a2792ff46682afb9246fc8
Gitweb: https://git.kernel.org/tip/2212684adff79e2704a2792ff46682afb9246fc8
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:48 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:44 +0100

locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()

This patch does not change any functionality but makes the next patch in
this series easier to read.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-14-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c   | 16 +++-
 kernel/locking/lockdep_internals.h |  3 ++-
 kernel/locking/lockdep_proc.c  | 12 ++--
 3 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2c6d0b67e7b6..753a9b758266 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2096,7 +2096,7 @@ out_bug:
return 0;
 }
 
-unsigned long nr_lock_chains;
+static unsigned long nr_lock_chains;
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
 int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
@@ -2230,6 +2230,20 @@ static int check_no_collision(struct task_struct *curr,
return 1;
 }
 
+/*
+ * Given an index that is >= -1, return the index of the next lock chain.
+ * Return -2 if there is no next lock chain.
+ */
+long lockdep_next_lockchain(long i)
+{
+   return i + 1 < nr_lock_chains ? i + 1 : -2;
+}
+
+unsigned long lock_chain_count(void)
+{
+   return nr_lock_chains;
+}
+
 /*
  * Adds a dependency chain into chain hashtable. And must be called with
  * graph_lock held.
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 2ebb9d0ea91c..d4c197425f68 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -100,7 +100,8 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
 
 extern unsigned long nr_lock_classes;
 extern unsigned long nr_list_entries;
-extern unsigned long nr_lock_chains;
+long lockdep_next_lockchain(long i);
+unsigned long lock_chain_count(void);
 extern int nr_chain_hlocks;
 extern unsigned long nr_stack_trace_entries;
 
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 3d31f9b0059e..9c49ec645d8b 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -104,18 +104,18 @@ static const struct seq_operations lockdep_ops = {
 #ifdef CONFIG_PROVE_LOCKING
 static void *lc_start(struct seq_file *m, loff_t *pos)
 {
+   if (*pos < 0)
+   return NULL;
+
if (*pos == 0)
return SEQ_START_TOKEN;
 
-   if (*pos - 1 < nr_lock_chains)
-   return lock_chains + (*pos - 1);
-
-   return NULL;
+   return lock_chains + (*pos - 1);
 }
 
 static void *lc_next(struct seq_file *m, void *v, loff_t *pos)
 {
-   (*pos)++;
+   *pos = lockdep_next_lockchain(*pos - 1) + 1;
return lc_start(m, pos);
 }
 
@@ -268,7 +268,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 
 #ifdef CONFIG_PROVE_LOCKING
seq_printf(m, " dependency chains: %11lu [max: %lu]\n",
-   nr_lock_chains, MAX_LOCKDEP_CHAINS);
+   lock_chain_count(), MAX_LOCKDEP_CHAINS);
seq_printf(m, " dependency chain hlocks:   %11d [max: %lu]\n",
nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
 #endif
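
The -1/-2 convention keeps callers simple; a hypothetical user of the new
interface iterates like this (sketch, not from the patch):

  /* lockdep_next_lockchain(-1) yields the first chain index and a negative
   * return value (-2) marks the end of the iteration. */
  for (long i = lockdep_next_lockchain(-1); i >= 0;
       i = lockdep_next_lockchain(i))
          seq_printf(m, "chain %ld\n", i);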


[tip:locking/core] locking/lockdep: Fix a comment in add_chain_cache()

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  527af3ea273b2cf0c017a2c90090b3c94af8aba4
Gitweb: https://git.kernel.org/tip/527af3ea273b2cf0c017a2c90090b3c94af8aba4
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:49 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:45 +0100

locking/lockdep: Fix a comment in add_chain_cache()

Reflect that add_chain_cache() is always called with the graph lock held.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-15-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 753a9b758266..ec0cb794f70d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2266,7 +2266,7 @@ static inline int add_chain_cache(struct task_struct *curr,
 */
 
/*
-* We might need to take the graph lock, ensure we've got IRQs
+* The caller must hold the graph lock, ensure we've got IRQs
 * disabled to make this an IRQ-safe lock.. for recursion reasons
 * lockdep won't complain about its own locking errors.
 */


[tip:locking/core] locking/lockdep: Reuse lock chains that have been freed

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  de4643a77356a77bce73f64275b125b4b71a69cf
Gitweb: https://git.kernel.org/tip/de4643a77356a77bce73f64275b125b4b71a69cf
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:50 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:45 +0100

locking/lockdep: Reuse lock chains that have been freed

A previous patch introduced a lock chain leak. Fix that leak by reusing
lock chains that have been freed.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-16-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 57 +++-
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ec0cb794f70d..0bb204464afe 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -292,9 +292,12 @@ static LIST_HEAD(free_lock_classes);
 /**
  * struct pending_free - information about data structures about to be freed
  * @zapped: Head of a list with struct lock_class elements.
+ * @lock_chains_being_freed: Bitmap that indicates which lock_chains[] elements
+ * are about to be freed.
  */
 struct pending_free {
struct list_head zapped;
+   DECLARE_BITMAP(lock_chains_being_freed, MAX_LOCKDEP_CHAINS);
 };
 
 /**
@@ -2096,8 +2099,8 @@ out_bug:
return 0;
 }
 
-static unsigned long nr_lock_chains;
 struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
+static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
 int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
 
@@ -2236,12 +2239,25 @@ static int check_no_collision(struct task_struct *curr,
  */
 long lockdep_next_lockchain(long i)
 {
-   return i + 1 < nr_lock_chains ? i + 1 : -2;
+   i = find_next_bit(lock_chains_in_use, ARRAY_SIZE(lock_chains), i + 1);
+   return i < ARRAY_SIZE(lock_chains) ? i : -2;
 }
 
 unsigned long lock_chain_count(void)
 {
-   return nr_lock_chains;
+   return bitmap_weight(lock_chains_in_use, ARRAY_SIZE(lock_chains));
+}
+
+/* Must be called with the graph lock held. */
+static struct lock_chain *alloc_lock_chain(void)
+{
+   int idx = find_first_zero_bit(lock_chains_in_use,
+ ARRAY_SIZE(lock_chains));
+
+   if (unlikely(idx >= ARRAY_SIZE(lock_chains)))
+   return NULL;
+   __set_bit(idx, lock_chains_in_use);
+   return lock_chains + idx;
 }
 
 /*
@@ -2260,11 +2276,6 @@ static inline int add_chain_cache(struct task_struct *curr,
struct lock_chain *chain;
int i, j;
 
-   /*
-* Allocate a new chain entry from the static array, and add
-* it to the hash:
-*/
-
/*
 * The caller must hold the graph lock, ensure we've got IRQs
 * disabled to make this an IRQ-safe lock.. for recursion reasons
@@ -2273,7 +2284,8 @@ static inline int add_chain_cache(struct task_struct *curr,
if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
return 0;
 
-   if (unlikely(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+   chain = alloc_lock_chain();
+   if (!chain) {
if (!debug_locks_off_graph_unlock())
return 0;
 
@@ -2281,7 +2293,6 @@ static inline int add_chain_cache(struct task_struct *curr,
dump_stack();
return 0;
}
-   chain = lock_chains + nr_lock_chains++;
chain->chain_key = chain_key;
chain->irq_context = hlock->irq_context;
i = get_first_held_lock(curr, hlock);
@@ -4208,7 +4219,8 @@ void lockdep_reset(void)
 }
 
/* Remove a class from a lock chain. Must be called with the graph lock held. */
-static void remove_class_from_lock_chain(struct lock_chain *chain,
+static void remove_class_from_lock_chain(struct pending_free *pf,
+struct lock_chain *chain,
 struct lock_class *class)
 {
 #ifdef CONFIG_PROVE_LOCKING
@@ -4246,6 +4258,7 @@ recalc:
 * hlist_for_each_entry_rcu() loop is safe.
 */
hlist_del_rcu(&chain->entry);
+   __set_bit(chain - lock_chains, pf->lock_chains_being_freed);
if (chain->depth == 0)
return;
/*
@@ -4254,22 +4267,19 @@ recalc:
 */
if (lookup_chain_cache(chain_key))
return;
-   if (WARN_ON_ONCE(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+   new_chain = alloc_lock_chain();
+   if (WARN_ON_ONCE(!new_chain)) {
debug_locks_off();
return;
}
-   /*
-* Leak *chain because it is not safe to reinsert it before an RCU
-* grace period has expired.
-*/
- 

[tip:locking/core] locking/lockdep: Check data structure consistency

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  b526b2e39a53b312f5a6867ce57824247aa0ce8b
Gitweb: https://git.kernel.org/tip/b526b2e39a53b312f5a6867ce57824247aa0ce8b
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:51 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:45 +0100

locking/lockdep: Check data structure consistency

Debugging lockdep data structure inconsistencies is challenging. Add
code that verifies data structure consistency at runtime. That code is
disabled by default because it is very CPU intensive.
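
A self-contained user-space sketch of the core membership check (a circular
doubly-linked list in the style of the kernel's list.h, simplified):

  #include <stdbool.h>
  #include <stdio.h>

  struct list_head { struct list_head *next, *prev; };

  static void list_init(struct list_head *h) { h->next = h->prev = h; }

  static void list_add_tail(struct list_head *e, struct list_head *h)
  {
          e->prev = h->prev;
          e->next = h;
          h->prev->next = e;
          h->prev = e;
  }

  /* Walk the list once; true iff element e is linked into list h. */
  static bool in_list(struct list_head *e, struct list_head *h)
  {
          for (struct list_head *f = h->next; f != h; f = f->next)
                  if (f == e)
                          return true;
          return false;
  }

  int main(void)
  {
          struct list_head head, a, b;

          list_init(&head);
          list_add_tail(&a, &head);
          printf("%d %d\n", in_list(&a, &head), in_list(&b, &head)); /* 1 0 */
          return 0;
  }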

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-17-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 167 +++
 1 file changed, 167 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0bb204464afe..630be9ac6253 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -74,6 +74,8 @@ module_param(lock_stat, int, 0644);
 #define lock_stat 0
 #endif
 
+static bool check_data_structure_consistency;
+
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
  *   class/list/hash allocators.
@@ -775,6 +777,168 @@ static bool assign_lock_key(struct lockdep_map *lock)
return true;
 }
 
+/* Check whether element @e occurs in list @h */
+static bool in_list(struct list_head *e, struct list_head *h)
+{
+   struct list_head *f;
+
+   list_for_each(f, h) {
+   if (e == f)
+   return true;
+   }
+
+   return false;
+}
+
+/*
+ * Check whether entry @e occurs in any of the locks_after or locks_before
+ * lists.
+ */
+static bool in_any_class_list(struct list_head *e)
+{
+   struct lock_class *class;
+   int i;
+
+   for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+   class = &lock_classes[i];
+   if (in_list(e, &class->locks_after) ||
+   in_list(e, &class->locks_before))
+   return true;
+   }
+   return false;
+}
+
+static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
+{
+   struct lock_list *e;
+
+   list_for_each_entry(e, h, entry) {
+   if (e->links_to != c) {
+   printk(KERN_INFO "class %s: mismatch for lock entry 
%ld; class %s <> %s",
+  c->name ? : "(?)",
+  (unsigned long)(e - list_entries),
+  e->links_to && e->links_to->name ?
+  e->links_to->name : "(?)",
+  e->class && e->class->name ? e->class->name :
+  "(?)");
+   return false;
+   }
+   }
+   return true;
+}
+
+static u16 chain_hlocks[];
+
+static bool check_lock_chain_key(struct lock_chain *chain)
+{
+#ifdef CONFIG_PROVE_LOCKING
+   u64 chain_key = 0;
+   int i;
+
+   for (i = chain->base; i < chain->base + chain->depth; i++)
+   chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
+   /*
+* The 'unsigned long long' casts avoid that a compiler warning
+* is reported when building tools/lib/lockdep.
+*/
+   if (chain->chain_key != chain_key)
+   printk(KERN_INFO "chain %lld: key %#llx <> %#llx\n",
+  (unsigned long long)(chain - lock_chains),
+  (unsigned long long)chain->chain_key,
+  (unsigned long long)chain_key);
+   return chain->chain_key == chain_key;
+#else
+   return true;
+#endif
+}
+
+static bool in_any_zapped_class_list(struct lock_class *class)
+{
+   struct pending_free *pf;
+   int i;
+
+   for (i = 0, pf = delayed_free.pf; i < ARRAY_SIZE(delayed_free.pf);
+i++, pf++)
+   if (in_list(&class->lock_entry, &pf->zapped))
+   return true;
+
+   return false;
+}
+
+static bool check_data_structures(void)
+{
+   struct lock_class *class;
+   struct lock_chain *chain;
+   struct hlist_head *head;
+   struct lock_list *e;
+   int i;
+
+   /* Check whether all classes occur in a lock list. */
+   for (i = 0; i < ARRAY_SIZE(lock_classes); i++) {
+   class = &lock_classes[i];
+   if (!in_list(&class->lock_entry, &all_lock_classes) &&
+   !in_list(&class->lock_entry, &free_lock_classes) &&
+   !in_any_zapped_class_list(class)) {
+   printk(KERN_INFO "class %px/%s is not in any class 
list\n",
+  class, class->name ? : "(?)");
+   return false;
+   return false;
+   }
+   }
+
+   /* Check 

[tip:locking/core] locking/lockdep: Verify whether lock objects are small enough to be used as class keys

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  4bf508621855613ca2ac782f70c3171e0e8bb011
Gitweb: https://git.kernel.org/tip/4bf508621855613ca2ac782f70c3171e0e8bb011
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:52 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:46 +0100

locking/lockdep: Verify whether lock objects are small enough to be used as class keys

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-18-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 630be9ac6253..84427441824e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -758,6 +758,17 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
unsigned long can_addr, addr = (unsigned long)lock;
 
+#ifdef __KERNEL__
+   /*
+* lockdep_free_key_range() assumes that struct lock_class_key
+* objects do not overlap. Since we use the address of lock
+* objects as class key for static objects, check whether the
+* size of lock_class_key objects does not exceed the size of
+* the smallest lock object.
+*/
+   BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+#endif
+
if (__is_kernel_percpu_address(addr, &can_addr))
lock->key = (void *)can_addr;
else if (__is_module_percpu_address(addr, &can_addr))
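
BUILD_BUG_ON() is the kernel's compile-time assertion; the same size check can
be sketched in user space with C11 _Static_assert (stand-in struct layouts):

  struct lock_class_key { char subkeys[8]; };          /* stand-in */
  struct raw_spinlock   { unsigned int slock, pad; };  /* stand-in */

  /* Using a lock's address as its key is only safe if keys never overlap,
   * i.e. a key is no larger than the smallest lock object. */
  _Static_assert(sizeof(struct lock_class_key) <= sizeof(struct raw_spinlock),
                 "lock_class_key must fit in the smallest lock object");

  int main(void) { return 0; }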


[tip:locking/core] locking/lockdep: Add support for dynamic keys

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  108c14858b9ea224686e476c8f5ec345a0df9e27
Gitweb: https://git.kernel.org/tip/108c14858b9ea224686e476c8f5ec345a0df9e27
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:53 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:47 +0100

locking/lockdep: Add support for dynamic keys

A shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. That forces all instances of lock
objects that occur in a given data structure to share a lock key. Since
lock dependency analysis groups lock objects per key, sharing lock keys
can cause false positive lockdep reports. Make it possible to avoid
such false positive reports by allowing lock keys to be allocated
dynamically. Require that dynamically allocated lock keys are
registered before use by calling lockdep_register_key(). Complain about
attempts to register the same lock key pointer twice without calling
lockdep_unregister_key() between successive registration calls.

The purpose of the new lock_keys_hash[] data structure that keeps
track of all dynamic keys is twofold:

  - Verify whether the lockdep_register_key() and lockdep_unregister_key()
functions are used correctly.

  - Avoid that lockdep_init_map() complains when encountering a dynamically
allocated key.
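
A self-contained user-space sketch of the registration scheme (a tiny
open-chaining hash over key pointers; sizes and helper names are invented for
illustration):

  #include <stdint.h>
  #include <stdio.h>

  #define KEYHASH_SIZE 16         /* must be a power of two */

  struct key { struct key *next; /* chain within one bucket */ };

  static struct key *buckets[KEYHASH_SIZE];

  static unsigned int hash_ptr(const void *p)
  {
          return ((uintptr_t)p >> 4) & (KEYHASH_SIZE - 1);
  }

  /* Registration prepends the key to its bucket; later lookups can verify
   * that a key seen at lock-init time was registered first. */
  static void register_key(struct key *k)
  {
          struct key **b = &buckets[hash_ptr(k)];

          k->next = *b;
          *b = k;
  }

  static int is_registered(const struct key *k)
  {
          for (const struct key *p = buckets[hash_ptr(k)]; p; p = p->next)
                  if (p == k)
                          return 1;
          return 0;
  }

  int main(void)
  {
          struct key k1, k2;

          register_key(&k1);
          printf("%d %d\n", is_registered(&k1), is_registered(&k2)); /* 1 0 */
          return 0;
  }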

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-19-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 include/linux/lockdep.h  |  21 ++--
 kernel/locking/lockdep.c | 121 +++
 2 files changed, 131 insertions(+), 11 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 619ec3f26cdc..43fb35bd7baf 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -46,15 +46,19 @@ extern int lock_stat;
 #define NR_LOCKDEP_CACHING_CLASSES 2
 
 /*
- * Lock-classes are keyed via unique addresses, by embedding the
- * lockclass-key into the kernel (or module) .data section. (For
- * static locks we use the lock address itself as the key.)
+ * A lockdep key is associated with each lock object. For static locks we use
+ * the lock address itself as the key. Dynamically allocated lock objects can
+ * have a statically or dynamically allocated key. Dynamically allocated lock
+ * keys must be registered before being used and must be unregistered before
+ * the key memory is freed.
  */
 struct lockdep_subclass_key {
char __one_byte;
 } __attribute__ ((__packed__));
 
+/* hash_entry is used to keep track of dynamically allocated keys. */
 struct lock_class_key {
+   struct hlist_node   hash_entry;
struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
 };
 
@@ -273,6 +277,9 @@ extern void lockdep_set_selftest_task(struct task_struct *task);
 extern void lockdep_off(void);
 extern void lockdep_on(void);
 
+extern void lockdep_register_key(struct lock_class_key *key);
+extern void lockdep_unregister_key(struct lock_class_key *key);
+
 /*
  * These methods are used by specific locking variants (spinlocks,
  * rwlocks, mutexes and rwsems) to pass init/acquire/release events
@@ -434,6 +441,14 @@ static inline void lockdep_set_selftest_task(struct task_struct *task)
  */
 struct lock_class_key { };
 
+static inline void lockdep_register_key(struct lock_class_key *key)
+{
+}
+
+static inline void lockdep_unregister_key(struct lock_class_key *key)
+{
+}
+
 /*
  * The lockdep_map takes no space if lockdep is disabled:
  */
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 84427441824e..c73bc4334bee 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -143,6 +143,9 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
  * nr_lock_classes is the number of elements of lock_classes[] that is
  * in use.
  */
+#define KEYHASH_BITS   (MAX_LOCKDEP_KEYS_BITS - 1)
+#define KEYHASH_SIZE   (1UL << KEYHASH_BITS)
+static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
 unsigned long nr_lock_classes;
 #ifndef CONFIG_DEBUG_LOCKDEP
 static
@@ -641,7 +644,7 @@ static int very_verbose(struct lock_class *class)
  * Is this the address of a static object:
  */
 #ifdef __KERNEL__
-static int static_obj(void *obj)
+static int static_obj(const void *obj)
 {
unsigned long start = (unsigned long) &_stext,
  end   = (unsigned long) &_end,
@@ -975,6 +978,71 @@ static void init_data_structures_once(void)
}
 }
 
+static inline struct hlist_head *keyhashentry(const struct lock_class_key *key)
+{
+   unsigned long hash = hash_long((uintptr_t)key, KEYHASH_BITS);
+
+   return lock_keys_hash + hash;
+}
+
+/* Register a dynamically allocated key. */
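
A simplified sketch of the registration logic that follows in the original
patch (not the verbatim hunk; assumes the keyhashentry() helper above and
lockdep's internal graph_lock()/graph_unlock()):

  void lockdep_register_key(struct lock_class_key *key)
  {
          struct hlist_head *hash_head;
          struct lock_class_key *k;
          unsigned long flags;

          if (WARN_ON_ONCE(static_obj(key)))
                  return;         /* static keys need no registration */
          hash_head = keyhashentry(key);

          raw_local_irq_save(flags);
          if (!graph_lock())
                  goto restore_irqs;
          /* Complain about double registration of the same key pointer. */
          hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
                  if (WARN_ON_ONCE(k == key))
                          goto out_unlock;
          }
          hlist_add_head_rcu(&key->hash_entry, hash_head);
  out_unlock:
          graph_unlock();
  restore_irqs:
          raw_local_irq_restore(flags);
  }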

[tip:locking/core] kernel/workqueue: Use dynamic lockdep keys for workqueues

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  669de8bda87b92ab9a2fc663b3f5743c2ad1ae9f
Gitweb: https://git.kernel.org/tip/669de8bda87b92ab9a2fc663b3f5743c2ad1ae9f
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:54 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:47 +0100

kernel/workqueue: Use dynamic lockdep keys for workqueues

The following commit:

  87915adc3f0a ("workqueue: re-add lockdep dependencies for flushing")

improved deadlock checking in the workqueue implementation. Unfortunately
that patch also introduced a few false positive lockdep complaints.

This patch suppresses these false positives by allocating the workqueue mutex
lockdep key dynamically.

An example of a false positive lockdep complaint suppressed by this patch
can be found below. The root cause of the lockdep complaint shown below
is that the direct I/O code can call alloc_workqueue() from inside a work
item created by another alloc_workqueue() call and that both workqueues
share the same lockdep key. This patch prevents that lockdep complaint
from being triggered by allocating the workqueue lockdep keys dynamically.

In other words, this patch guarantees that a unique lockdep key is
associated with each workqueue.
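
In outline (a simplified sketch of the approach, close to but not verbatim
the patch): alloc_workqueue() registers a fresh key per workqueue and
initializes the workqueue's lockdep map with it:

  static void wq_init_lockdep(struct workqueue_struct *wq)
  {
          char *lock_name;

          lockdep_register_key(&wq->key);
          lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
          if (!lock_name)
                  lock_name = wq->name;

          wq->lock_name = lock_name;
          lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
  }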

  ======================================================
  WARNING: possible circular locking dependency detected
  4.19.0-dbg+ #1 Not tainted
  fio/4129 is trying to acquire lock:
  a01cfe1a ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: flush_workqueue+0xd0/0x970

  but task is already holding lock:
  a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #2 (&sb->s_type->i_mutex_key#14){+.+.}:
 down_write+0x3d/0x80
 __generic_file_fsync+0x77/0xf0
 ext4_sync_file+0x3c9/0x780
 vfs_fsync_range+0x66/0x100
 dio_complete+0x2f5/0x360
 dio_aio_complete_work+0x1c/0x20
 process_one_work+0x481/0x9f0
 worker_thread+0x63/0x5a0
 kthread+0x1cf/0x1f0
 ret_from_fork+0x24/0x30

  -> #1 ((work_completion)(&dio->complete_work)){+.+.}:
 process_one_work+0x447/0x9f0
 worker_thread+0x63/0x5a0
 kthread+0x1cf/0x1f0
 ret_from_fork+0x24/0x30

  -> #0 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
 lock_acquire+0xc5/0x200
 flush_workqueue+0xf3/0x970
 drain_workqueue+0xec/0x220
 destroy_workqueue+0x23/0x350
 sb_init_dio_done_wq+0x6a/0x80
 do_blockdev_direct_IO+0x1f33/0x4be0
 __blockdev_direct_IO+0x79/0x86
 ext4_direct_IO+0x5df/0xbb0
 generic_file_direct_write+0x119/0x220
 __generic_file_write_iter+0x131/0x2d0
 ext4_file_write_iter+0x3fa/0x710
 aio_write+0x235/0x330
 io_submit_one+0x510/0xeb0
 __x64_sys_io_submit+0x122/0x340
 do_syscall_64+0x71/0x220
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

  other info that might help us debug this:

  Chain exists of:
    (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work) --> &sb->s_type->i_mutex_key#14

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&sb->s_type->i_mutex_key#14);
                                 lock((work_completion)(&dio->complete_work));
                                 lock(&sb->s_type->i_mutex_key#14);
    lock((wq_completion)"dio/%s"sb->s_id);

   *** DEADLOCK ***

  1 lock held by fio/4129:
   #0: a0acecf9 (&sb->s_type->i_mutex_key#14){+.+.}, at: ext4_file_write_iter+0x154/0x710

  stack backtrace:
  CPU: 3 PID: 4129 Comm: fio Not tainted 4.19.0-dbg+ #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
  Call Trace:
   dump_stack+0x86/0xc5
   print_circular_bug.isra.32+0x20a/0x218
   __lock_acquire+0x1c68/0x1cf0
   lock_acquire+0xc5/0x200
   flush_workqueue+0xf3/0x970
   drain_workqueue+0xec/0x220
   destroy_workqueue+0x23/0x350
   sb_init_dio_done_wq+0x6a/0x80
   do_blockdev_direct_IO+0x1f33/0x4be0
   __blockdev_direct_IO+0x79/0x86
   ext4_direct_IO+0x5df/0xbb0
   generic_file_direct_write+0x119/0x220
   __generic_file_write_iter+0x131/0x2d0
   ext4_file_write_iter+0x3fa/0x710
   aio_write+0x235/0x330
   io_submit_one+0x510/0xeb0
   __x64_sys_io_submit+0x122/0x340
   do_syscall_64+0x71/0x220
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Tejun Heo 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Link: https://lkml.kernel.org/r/20190214230058.196511-20-bvanass...@acm.org
[ Reworked the changelog a bit. ]
Signed-off-by: Ingo Molnar 
---
 include/linux/workqueue.h | 28 --
 kernel/workqueue.c| 59 +++
 2 files changed, 54 insertions(+), 33 deletions(-)

[tip:locking/core] lockdep/lib/tests: Fix run_tests.sh

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  d93ac78bf7b37db36fa00225f8e9a14c7ed1b2ba
Gitweb: https://git.kernel.org/tip/d93ac78bf7b37db36fa00225f8e9a14c7ed1b2ba
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:57 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:48 +0100

lockdep/lib/tests: Fix run_tests.sh

Apparently the execute bits were set for the tests/*.sh scripts on my
test setup but these are not set in the kernel tree. Fix this by adding
the interpreter path in front of the script paths.
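
For illustration (assuming a checkout where, e.g., tests/AA.sh lacks the
execute bit):

  $ ./tests/AA.sh
  bash: ./tests/AA.sh: Permission denied
  $ /bin/bash tests/AA.sh      # works regardless of the mode bits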

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Fixes: 5ecb8e94b494 ("tools/lib/lockdep/tests: Improve testing accuracy") # v5.0-rc1
Link: https://lkml.kernel.org/r/20190214230058.196511-23-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/run_tests.sh | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
index c8fbd0306960..11f425662b43 100755
--- a/tools/lib/lockdep/run_tests.sh
+++ b/tools/lib/lockdep/run_tests.sh
@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "$testname... "
 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
-	   timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
+	   timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
@@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
echo -ne "(PRELOAD) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
timeout 1 ./lockdep "tests/$testname" 2>&1 |
-   "tests/${testname}.sh"; then
+   /bin/bash "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
@@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
echo -ne "(PRELOAD + Valgrind) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
 	   { timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
-	   "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+	   /bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
 	   ! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
echo "PASSED!"
else


[tip:locking/core] lockdep/lib/tests: Test dynamic key registration

2019-02-27 Thread tip-bot for Bart Van Assche
Commit-ID:  f214737b75b0ee79763b5c058b9d5e83d711348d
Gitweb: https://git.kernel.org/tip/f214737b75b0ee79763b5c058b9d5e83d711348d
Author: Bart Van Assche 
AuthorDate: Thu, 14 Feb 2019 15:00:58 -0800
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Feb 2019 07:55:48 +0100

lockdep/lib/tests: Test dynamic key registration

Make sure that the lockdep_register_key() and lockdep_unregister_key()
code is tested when running the lockdep tests.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Johannes Berg 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Cc: johannes.b...@intel.com
Cc: t...@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-24-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 tools/lib/lockdep/include/liblockdep/common.h |  2 ++
 tools/lib/lockdep/include/liblockdep/mutex.h  | 11 ++-
 tools/lib/lockdep/tests/ABBA.c|  9 +
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/lib/lockdep/include/liblockdep/common.h b/tools/lib/lockdep/include/liblockdep/common.h
index d640a9761f09..a81d91d4fc78 100644
--- a/tools/lib/lockdep/include/liblockdep/common.h
+++ b/tools/lib/lockdep/include/liblockdep/common.h
@@ -45,6 +45,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 void lock_release(struct lockdep_map *lock, int nested,
unsigned long ip);
 void lockdep_reset_lock(struct lockdep_map *lock);
+void lockdep_register_key(struct lock_class_key *key);
+void lockdep_unregister_key(struct lock_class_key *key);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h
index 2073d4e1f2f0..783dd0df06f9 100644
--- a/tools/lib/lockdep/include/liblockdep/mutex.h
+++ b/tools/lib/lockdep/include/liblockdep/mutex.h
@@ -7,6 +7,7 @@
 
 struct liblockdep_pthread_mutex {
pthread_mutex_t mutex;
+   struct lock_class_key key;
struct lockdep_map dep_map;
 };
 
@@ -27,11 +28,10 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock,
 	return pthread_mutex_init(&lock->mutex, __mutexattr);
 }
 
-#define liblockdep_pthread_mutex_init(mutex, mutexattr)\
-({ \
-   static struct lock_class_key __key; \
-   \
-   __mutex_init((mutex), #mutex, &__key, (mutexattr)); \
+#define liblockdep_pthread_mutex_init(mutex, mutexattr)		\
+({ \
+   lockdep_register_key(&(mutex)->key);\
+   __mutex_init((mutex), #mutex, &(mutex)->key, (mutexattr));  \
 })
 
 static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock)
@@ -55,6 +55,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
 static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
 {
 	lockdep_reset_lock(&lock->dep_map);
+	lockdep_unregister_key(&lock->key);
 	return pthread_mutex_destroy(&lock->mutex);
 }
 
diff --git a/tools/lib/lockdep/tests/ABBA.c b/tools/lib/lockdep/tests/ABBA.c
index 623313f54720..543789bc3e37 100644
--- a/tools/lib/lockdep/tests/ABBA.c
+++ b/tools/lib/lockdep/tests/ABBA.c
@@ -14,4 +14,13 @@ void main(void)
 
 	pthread_mutex_destroy(&b);
 	pthread_mutex_destroy(&a);
+
+	pthread_mutex_init(&a, NULL);
+	pthread_mutex_init(&b, NULL);
+
+   LOCK_UNLOCK_2(a, b);
+   LOCK_UNLOCK_2(b, a);
+
+	pthread_mutex_destroy(&b);
+	pthread_mutex_destroy(&a);
 }


[tip:locking/urgent] locking/lockdep: Zap lock classes even with lock debugging disabled

2019-04-10 Thread tip-bot for Bart Van Assche
Commit-ID:  90c1cba2b3b3851c151229f61801919b2904d437
Gitweb: https://git.kernel.org/tip/90c1cba2b3b3851c151229f61801919b2904d437
Author: Bart Van Assche 
AuthorDate: Wed, 3 Apr 2019 16:35:52 -0700
Committer:  Ingo Molnar 
CommitDate: Wed, 10 Apr 2019 13:45:59 +0200

locking/lockdep: Zap lock classes even with lock debugging disabled

The following commit:

  a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use")

changed the behavior of lockdep_free_key_range() from
unconditionally zapping lock classes into only zapping lock classes if
debug_locks == true. Not zapping lock classes if debug_locks == false leaves
dangling pointers in several lockdep data structures, e.g. lock_class::name
in the all_lock_classes list.

The shell command "cat /proc/lockdep" causes the kernel to iterate the
all_lock_classes list. Hence the "unable to handle kernel paging request"
crash that Shenghui encountered when running "cat /proc/lockdep".

Since the new behavior can cause cat /proc/lockdep to crash, restore the
pre-v5.1 behavior.
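
For context: graph_lock() refuses the lock once debug_locks has been
cleared, which is why the zapping code was silently skipped. Roughly (a
simplified sketch, not the exact kernel source):

  static int graph_lock(void)
  {
          arch_spin_lock(&lockdep_lock);
          /*
           * If lock debugging was switched off (e.g. after a splat),
           * refuse the lock so callers skip mutating the lock graph.
           */
          if (!debug_locks) {
                  arch_spin_unlock(&lockdep_lock);
                  return 0;
          }
          /* ... */
          return 1;
  }

The fix below therefore takes lockdep_lock directly, so the zapping runs
whether or not lock debugging is still enabled.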

This patch prevents "cat /proc/lockdep" from triggering the following crash
with debug_locks == false:

  BUG: unable to handle kernel paging request at fbfff40ca448
  RIP: 0010:__asan_load1+0x28/0x50
  Call Trace:
   string+0xac/0x180
   vsnprintf+0x23e/0x820
   seq_vprintf+0x82/0xc0
   seq_printf+0x92/0xb0
   print_name+0x34/0xb0
   l_show+0x184/0x200
   seq_read+0x59e/0x6c0
   proc_reg_read+0x11f/0x170
   __vfs_read+0x4d/0x90
   vfs_read+0xc5/0x1f0
   ksys_read+0xab/0x130
   __x64_sys_read+0x43/0x50
   do_syscall_64+0x71/0x210
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Reported-by: shenghui 
Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Waiman Long 
Cc: Will Deacon 
Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use") # v5.1-rc1.
Link: https://lkml.kernel.org/r/20190403233552.124673-1-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 29 -
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 34cdcbedda49..e16766ff184b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4689,8 +4689,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
return;
 
raw_local_irq_save(flags);
-   if (!graph_lock())
-   goto out_irq;
+	arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
 
/* closed head */
pf = delayed_free.pf + (delayed_free.index ^ 1);
@@ -4702,8 +4702,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
 */
call_rcu_zapped(delayed_free.pf + delayed_free.index);
 
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 }
 
@@ -4744,21 +4744,17 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
 {
struct pending_free *pf;
unsigned long flags;
-   int locked;
 
init_data_structures_once();
 
raw_local_irq_save(flags);
-   locked = graph_lock();
-   if (!locked)
-   goto out_irq;
-
+	arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
pf = get_pending_free();
__lockdep_free_key_range(pf, start, size);
call_rcu_zapped(pf);
-
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 
/*
@@ -4911,9 +4907,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
return;
 
raw_local_irq_save(flags);
-   if (!graph_lock())
-   goto out_irq;
-
+	arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
pf = get_pending_free();
hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
if (k == key) {
@@ -4925,8 +4920,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
WARN_ON_ONCE(!found);
__lockdep_free_key_range(pf, key, 1);
call_rcu_zapped(pf);
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 
/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */


[tip:locking/urgent] workqueue: Only unregister a registered lockdep key

2019-03-21 Thread tip-bot for Bart Van Assche
Commit-ID:  82efcab3b9f3ef59e9713237c6e3c05c3a95c1ae
Gitweb: https://git.kernel.org/tip/82efcab3b9f3ef59e9713237c6e3c05c3a95c1ae
Author: Bart Van Assche 
AuthorDate: Mon, 11 Mar 2019 16:02:55 -0700
Committer:  Thomas Gleixner 
CommitDate: Thu, 21 Mar 2019 12:00:18 +0100

workqueue: Only unregister a registered lockdep key

The recent change to prevent a use-after-free and a memory leak introduced an
unconditional call to wq_unregister_lockdep() in the error handling
path. If the lockdep key had not been registered yet, then the lockdep core
emits a warning.

Only call wq_unregister_lockdep() if wq_register_lockdep() has been
called first.
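
The underlying error-path idiom (a generic sketch with hypothetical
step_a()/step_b()/undo_a() helpers, not the workqueue code): order the
labels so each one unwinds only what has already succeeded:

  static int setup(void)
  {
          int ret;

          ret = step_a();
          if (ret)
                  goto err_a;     /* nothing to undo yet */
          ret = step_b();
          if (ret)
                  goto err_b;     /* step_a() succeeded; undo it */
          return 0;

  err_b:
          undo_a();
  err_a:
          return ret;
  }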

Fixes: 009bb421b6ce ("workqueue, lockdep: Fix an alloc_workqueue() error path")
Reported-by: syzbot+be0c198232f86389c...@syzkaller.appspotmail.com
Signed-off-by: Bart Van Assche 
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Tejun Heo 
Cc: Qian Cai 
Link: https://lkml.kernel.org/r/20190311230255.176081-1-bvanass...@acm.org

---
 kernel/workqueue.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4026d1871407..ddee541ea97a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4266,7 +4266,7 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
 	INIT_LIST_HEAD(&wq->list);
 
if (alloc_and_link_pwqs(wq) < 0)
-   goto err_free_wq;
+   goto err_unreg_lockdep;
 
if (wq_online && init_rescuer(wq) < 0)
goto err_destroy;
@@ -4292,9 +4292,10 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
 
return wq;
 
-err_free_wq:
+err_unreg_lockdep:
wq_unregister_lockdep(wq);
wq_free_lockdep(wq);
+err_free_wq:
free_workqueue_attrs(wq->unbound_attrs);
kfree(wq);
return NULL;


[tip:locking/urgent] locking/lockdep: Only call init_rcu_head() after RCU has been initialized

2019-03-09 Thread tip-bot for Bart Van Assche
Commit-ID:  0126574fca2ce0f0d5beb9dade6efb823ff7407b
Gitweb: https://git.kernel.org/tip/0126574fca2ce0f0d5beb9dade6efb823ff7407b
Author: Bart Van Assche 
AuthorDate: Sun, 3 Mar 2019 10:19:01 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 9 Mar 2019 14:15:51 +0100

locking/lockdep: Only call init_rcu_head() after RCU has been initialized

init_data_structures_once() is called for the first time before RCU has
been initialized. Make sure that init_rcu_head() is called before the
RCU head is used and after RCU has been initialized.

Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: H. Peter Anvin 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: long...@redhat.com
Link: https://lkml.kernel.org/r/c20aa0f0-42ab-a884-d931-7d4ec2bf0...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/locking/lockdep.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 35a144dfddf5..34cdcbedda49 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -982,15 +982,22 @@ static inline void check_data_structures(void) { }
  */
 static void init_data_structures_once(void)
 {
-   static bool initialization_happened;
+   static bool ds_initialized, rcu_head_initialized;
int i;
 
-   if (likely(initialization_happened))
+   if (likely(rcu_head_initialized))
return;
 
-   initialization_happened = true;
+   if (system_state >= SYSTEM_SCHEDULING) {
+		init_rcu_head(&delayed_free.rcu_head);
+   rcu_head_initialized = true;
+   }
+
+   if (ds_initialized)
+   return;
+
+   ds_initialized = true;
 
-	init_rcu_head(&delayed_free.rcu_head);
 	INIT_LIST_HEAD(&delayed_free.pf[0].zapped);
 	INIT_LIST_HEAD(&delayed_free.pf[1].zapped);
 


[tip:locking/urgent] workqueue, lockdep: Fix an alloc_workqueue() error path

2019-03-09 Thread tip-bot for Bart Van Assche
Commit-ID:  009bb421b6ceb7916ce627023d0eb7ced04c8910
Gitweb: https://git.kernel.org/tip/009bb421b6ceb7916ce627023d0eb7ced04c8910
Author: Bart Van Assche 
AuthorDate: Sun, 3 Mar 2019 14:00:46 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 9 Mar 2019 14:15:52 +0100

workqueue, lockdep: Fix an alloc_workqueue() error path

This patch fixes a use-after-free and a memory leak in an alloc_workqueue()
error path.
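
The shape of the bug, reduced to a few lines (hypothetical struct obj, not
the workqueue code): freeing memory that still holds a registered key
leaves a dangling node on the lock_keys_hash chain:

  struct obj {
          struct lock_class_key key;
  };

  static void demo(void)
  {
          struct obj *o = kmalloc(sizeof(*o), GFP_KERNEL);

          if (!o)
                  return;
          lockdep_register_key(&o->key);
          kfree(o);       /* bug: o->key is still linked on lock_keys_hash */
          /*
           * The next lockdep_register_key() that walks the same hash
           * chain reads the freed o->key.hash_entry: the use-after-free
           * KASAN reports below. The entry is also never removed: a leak.
           */
  }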

Reported by syzkaller and KASAN:

  BUG: KASAN: use-after-free in __read_once_size include/linux/compiler.h:197 [inline]
  BUG: KASAN: use-after-free in lockdep_register_key+0x3b9/0x490 kernel/locking/lockdep.c:1023
  Read of size 8 at addr 888090fc2698 by task syz-executor134/7858

  CPU: 1 PID: 7858 Comm: syz-executor134 Not tainted 5.0.0-rc8-next-20190301 #1
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
  Call Trace:
   __dump_stack lib/dump_stack.c:77 [inline]
   dump_stack+0x172/0x1f0 lib/dump_stack.c:113
   print_address_description.cold+0x7c/0x20d mm/kasan/report.c:187
   kasan_report.cold+0x1b/0x40 mm/kasan/report.c:317
   __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
   __read_once_size include/linux/compiler.h:197 [inline]
   lockdep_register_key+0x3b9/0x490 kernel/locking/lockdep.c:1023
   wq_init_lockdep kernel/workqueue.c:3444 [inline]
   alloc_workqueue+0x427/0xe70 kernel/workqueue.c:4263
   ucma_open+0x76/0x290 drivers/infiniband/core/ucma.c:1732
   misc_open+0x398/0x4c0 drivers/char/misc.c:141
   chrdev_open+0x247/0x6b0 fs/char_dev.c:417
   do_dentry_open+0x488/0x1160 fs/open.c:771
   vfs_open+0xa0/0xd0 fs/open.c:880
   do_last fs/namei.c:3416 [inline]
   path_openat+0x10e9/0x46e0 fs/namei.c:3533
   do_filp_open+0x1a1/0x280 fs/namei.c:3563
   do_sys_open+0x3fe/0x5d0 fs/open.c:1063
   __do_sys_openat fs/open.c:1090 [inline]
   __se_sys_openat fs/open.c:1084 [inline]
   __x64_sys_openat+0x9d/0x100 fs/open.c:1084
   do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

  Allocated by task 7789:
   save_stack+0x45/0xd0 mm/kasan/common.c:75
   set_track mm/kasan/common.c:87 [inline]
   __kasan_kmalloc mm/kasan/common.c:497 [inline]
   __kasan_kmalloc.constprop.0+0xcf/0xe0 mm/kasan/common.c:470
   kasan_kmalloc+0x9/0x10 mm/kasan/common.c:511
   __do_kmalloc mm/slab.c:3726 [inline]
   __kmalloc+0x15c/0x740 mm/slab.c:3735
   kmalloc include/linux/slab.h:553 [inline]
   kzalloc include/linux/slab.h:743 [inline]
   alloc_workqueue+0x13c/0xe70 kernel/workqueue.c:4236
   ucma_open+0x76/0x290 drivers/infiniband/core/ucma.c:1732
   misc_open+0x398/0x4c0 drivers/char/misc.c:141
   chrdev_open+0x247/0x6b0 fs/char_dev.c:417
   do_dentry_open+0x488/0x1160 fs/open.c:771
   vfs_open+0xa0/0xd0 fs/open.c:880
   do_last fs/namei.c:3416 [inline]
   path_openat+0x10e9/0x46e0 fs/namei.c:3533
   do_filp_open+0x1a1/0x280 fs/namei.c:3563
   do_sys_open+0x3fe/0x5d0 fs/open.c:1063
   __do_sys_openat fs/open.c:1090 [inline]
   __se_sys_openat fs/open.c:1084 [inline]
   __x64_sys_openat+0x9d/0x100 fs/open.c:1084
   do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

  Freed by task 7789:
   save_stack+0x45/0xd0 mm/kasan/common.c:75
   set_track mm/kasan/common.c:87 [inline]
   __kasan_slab_free+0x102/0x150 mm/kasan/common.c:459
   kasan_slab_free+0xe/0x10 mm/kasan/common.c:467
   __cache_free mm/slab.c:3498 [inline]
   kfree+0xcf/0x230 mm/slab.c:3821
   alloc_workqueue+0xc3e/0xe70 kernel/workqueue.c:4295
   ucma_open+0x76/0x290 drivers/infiniband/core/ucma.c:1732
   misc_open+0x398/0x4c0 drivers/char/misc.c:141
   chrdev_open+0x247/0x6b0 fs/char_dev.c:417
   do_dentry_open+0x488/0x1160 fs/open.c:771
   vfs_open+0xa0/0xd0 fs/open.c:880
   do_last fs/namei.c:3416 [inline]
   path_openat+0x10e9/0x46e0 fs/namei.c:3533
   do_filp_open+0x1a1/0x280 fs/namei.c:3563
   do_sys_open+0x3fe/0x5d0 fs/open.c:1063
   __do_sys_openat fs/open.c:1090 [inline]
   __se_sys_openat fs/open.c:1084 [inline]
   __x64_sys_openat+0x9d/0x100 fs/open.c:1084
   do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

  The buggy address belongs to the object at 888090fc2580
   which belongs to the cache kmalloc-512 of size 512
  The buggy address is located 280 bytes inside of
   512-byte region [888090fc2580, 888090fc2780)

Reported-by: syzbot+17335689e239ce135...@syzkaller.appspotmail.com
Signed-off-by: Bart Van Assche 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: H. Peter Anvin 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Fixes: 669de8bda87b ("kernel/workqueue: Use dynamic lockdep keys for workqueues")
Link: https://lkml.kernel.org/r/20190303220046.29448-1-bvanass...@acm.org
Signed-off-by: Ingo Molnar 
---
 kernel/workqueue.c | 2 ++
 1 file changed, 2 insertions(+)

diff