Re: [GIT PULL] locking fix

2021-03-28 Thread pr-tracker-bot
The pull request you sent on Sun, 28 Mar 2021 12:28:43 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
> locking-urgent-2021-03-28

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/47fbbc94dab61a1385f21a0a209c61b5d6b0a215

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


[GIT PULL] locking fix

2021-03-28 Thread Ingo Molnar
Linus,

Please pull the latest locking/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-2021-03-28

   # HEAD: 291da9d4a9eb3a1cb0610b7f4480f5b52b1825e7 locking/mutex: Fix non 
debug version of mutex_lock_io_nested()

Fix the non-debug mutex_lock_io_nested() method to map to mutex_lock_io() 
instead of mutex_lock().

Right now nothing uses this API explicitly, but this is an accident waiting to 
happen.

 Thanks,

Ingo

-->
Thomas Gleixner (1):
  locking/mutex: Fix non debug version of mutex_lock_io_nested()


 include/linux/mutex.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 0cd631a19727..515cff77a4f4 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -185,7 +185,7 @@ extern void mutex_lock_io(struct mutex *lock);
 # define mutex_lock_interruptible_nested(lock, subclass) 
mutex_lock_interruptible(lock)
 # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
 # define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
-# define mutex_lock_io_nested(lock, subclass) mutex_lock(lock)
+# define mutex_lock_io_nested(lock, subclass) mutex_lock_io(lock)
 #endif
 
 /*


Re: [GIT PULL] locking fix

2019-07-14 Thread pr-tracker-bot
The pull request you sent on Sun, 14 Jul 2019 13:36:21 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
> locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/0c85ce135456a3927f552e738f730c47ac905ac3

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


[GIT PULL] locking fix

2019-07-14 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 68d41d8c94a31dfb8233ab90b9baf41a2ed2da68 locking/lockdep: Fix lock 
used or unused stats error

A single fix for a locking statistics bug.

 Thanks,

Ingo

-->
Yuyang Du (1):
  locking/lockdep: Fix lock used or unused stats error


 kernel/locking/lockdep_proc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 9c49ec645d8b..65b6a1600c8f 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -210,6 +210,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
  nr_hardirq_read_safe = 0, nr_hardirq_read_unsafe = 0,
  sum_forward_deps = 0;
 
+#ifdef CONFIG_PROVE_LOCKING
list_for_each_entry(class, &all_lock_classes, lock_entry) {
 
if (class->usage_mask == 0)
@@ -241,12 +242,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_READ)
nr_hardirq_read_unsafe++;
 
-#ifdef CONFIG_PROVE_LOCKING
sum_forward_deps += lockdep_count_forward_deps(class);
-#endif
}
 #ifdef CONFIG_DEBUG_LOCKDEP
DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);
+#endif
+
 #endif
seq_printf(m, " lock-classes:  %11lu [max: %lu]\n",
nr_lock_classes, MAX_LOCKDEP_KEYS);


Re: [GIT PULL] locking fix

2019-05-17 Thread Greg KH
On Thu, May 16, 2019 at 07:55:53PM -0400, Sasha Levin wrote:
> On Thu, May 16, 2019 at 11:42:58AM -0700, Linus Torvalds wrote:
> > On Thu, May 16, 2019 at 11:39 AM Greg KH  wrote:
> > > 
> > > Thanks, I'll work on that later tonight...
> > 
> > Note that it probably is almost entirely impossible to trigger the
> > problem in practice, so it's not like this is a particularly important
> > stable back-port.
> > 
> > I just happened to look at it and go "hmm, it's not _marked_ for stable".
> 
> I've addressed a missing a8654596f03 ("locking/rwsem: Enable lock event
> counting") and queued up a backport for 5.1-4.9.

Thanks for doing this.

greg k-h


Re: [GIT PULL] locking fix

2019-05-16 Thread Sasha Levin

On Thu, May 16, 2019 at 11:42:58AM -0700, Linus Torvalds wrote:

On Thu, May 16, 2019 at 11:39 AM Greg KH  wrote:


Thanks, I'll work on that later tonight...


Note that it probably is almost entirely impossible to trigger the
problem in practice, so it's not like this is a particularly important
stable back-port.

I just happened to look at it and go "hmm, it's not _marked_ for stable".


I've addressed a missing a8654596f03 ("locking/rwsem: Enable lock event
counting") and queued up a backport for 5.1-4.9.

--
Thanks,
Sasha


Re: [GIT PULL] locking fix

2019-05-16 Thread Linus Torvalds
On Thu, May 16, 2019 at 11:39 AM Greg KH  wrote:
>
> Thanks, I'll work on that later tonight...

Note that it probably is almost entirely impossible to trigger the
problem in practice, so it's not like this is a particularly important
stable back-port.

I just happened to look at it and go "hmm, it's not _marked_ for stable".

  Linus


Re: [GIT PULL] locking fix

2019-05-16 Thread Greg KH
On Thu, May 16, 2019 at 10:57:53AM -0700, Linus Torvalds wrote:
> On Thu, May 16, 2019 at 9:01 AM Ingo Molnar  wrote:
> >
> > A single rwsem fix.
> 
> Side note, this one isn't marked for stable, but I'm hoping stable
> picks it up just from the "Fixes" tag.
> 
> Stable people, we're talking about
> 
> a9e9bcb45b15 ("locking/rwsem: Prevent decrement of reader count
> before increment")
> 
> that I just pulled into my tree, and needs to go in 4.9 and later.

Thanks, I'll work on that later tonight...


greg k-h


Re: [GIT PULL] locking fix

2019-05-16 Thread pr-tracker-bot
The pull request you sent on Thu, 16 May 2019 18:01:35 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
> locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/f57d7715d7645b7c3d1e7b7cb79ac7690fe2d260

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


Re: [GIT PULL] locking fix

2019-05-16 Thread Linus Torvalds
On Thu, May 16, 2019 at 9:01 AM Ingo Molnar  wrote:
>
> A single rwsem fix.

Side note, this one isn't marked for stable, but I'm hoping stable
picks it up just from the "Fixes" tag.

Stable people, we're talking about

a9e9bcb45b15 ("locking/rwsem: Prevent decrement of reader count
before increment")

that I just pulled into my tree, and needs to go in 4.9 and later.

  Linus


[GIT PULL] locking fix

2019-05-16 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: a9e9bcb45b1525ba7aea26ed9441e8632aeeda58 locking/rwsem: Prevent 
decrement of reader count before increment

A single rwsem fix.

 Thanks,

Ingo

-->
Waiman Long (1):
  locking/rwsem: Prevent decrement of reader count before increment


 kernel/locking/rwsem-xadd.c | 46 ++---
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 6b3ee9948bf1..0b1f77957240 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -130,6 +130,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 {
struct rwsem_waiter *waiter, *tmp;
long oldcount, woken = 0, adjustment = 0;
+   struct list_head wlist;
 
/*
 * Take a peek at the queue head waiter such that we can determine
@@ -188,18 +189,43 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 * of the queue. We know that woken will be at least 1 as we accounted
 * for above. Note we increment the 'active part' of the count by the
 * number of readers before waking any processes up.
+*
+* We have to do wakeup in 2 passes to prevent the possibility that
+* the reader count may be decremented before it is incremented. It
+* is because the to-be-woken waiter may not have slept yet. So it
+* may see waiter->task got cleared, finish its critical section and
+* do an unlock before the reader count increment.
+*
+* 1) Collect the read-waiters in a separate list, count them and
+*fully increment the reader count in rwsem.
+* 2) For each waiters in the new list, clear waiter->task and
+*put them into wake_q to be woken up later.
 */
-   list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
-   struct task_struct *tsk;
-
+   list_for_each_entry(waiter, &sem->wait_list, list) {
if (waiter->type == RWSEM_WAITING_FOR_WRITE)
break;
 
woken++;
-   tsk = waiter->task;
+   }
+   list_cut_before(&wlist, &sem->wait_list, &waiter->list);
+
+   adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+   lockevent_cond_inc(rwsem_wake_reader, woken);
+   if (list_empty(&sem->wait_list)) {
+   /* hit end of list above */
+   adjustment -= RWSEM_WAITING_BIAS;
+   }
+
+   if (adjustment)
+   atomic_long_add(adjustment, &sem->count);
+
+   /* 2nd pass */
+   list_for_each_entry_safe(waiter, tmp, &wlist, list) {
+   struct task_struct *tsk;
 
+   tsk = waiter->task;
get_task_struct(tsk);
-   list_del(&waiter->list);
+
/*
 * Ensure calling get_task_struct() before setting the reader
 * waiter to nil such that rwsem_down_read_failed() cannot
@@ -213,16 +239,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 */
wake_q_add_safe(wake_q, tsk);
}
-
-   adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
-   lockevent_cond_inc(rwsem_wake_reader, woken);
-   if (list_empty(&sem->wait_list)) {
-   /* hit end of list above */
-   adjustment -= RWSEM_WAITING_BIAS;
-   }
-
-   if (adjustment)
-   atomic_long_add(adjustment, &sem->count);
 }
 
 /*


Re: [GIT PULL] locking fix

2019-04-12 Thread pr-tracker-bot
The pull request you sent on Fri, 12 Apr 2019 13:53:43 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
> locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/26e2b81977bb1ad70ff9b2acb4d4cb13c23facfd

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


[GIT PULL] locking fix

2019-04-12 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 90c1cba2b3b3851c151229f61801919b2904d437 locking/lockdep: Zap lock 
classes even with lock debugging disabled

Fixes a crash when accessing /proc/lockdep.

 Thanks,

Ingo

-->
Bart Van Assche (1):
  locking/lockdep: Zap lock classes even with lock debugging disabled


 kernel/locking/lockdep.c | 29 -
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 34cdcbedda49..e16766ff184b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4689,8 +4689,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
return;
 
raw_local_irq_save(flags);
-   if (!graph_lock())
-   goto out_irq;
+   arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
 
/* closed head */
pf = delayed_free.pf + (delayed_free.index ^ 1);
@@ -4702,8 +4702,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
 */
call_rcu_zapped(delayed_free.pf + delayed_free.index);
 
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+   arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 }
 
@@ -4744,21 +4744,17 @@ static void lockdep_free_key_range_reg(void *start, 
unsigned long size)
 {
struct pending_free *pf;
unsigned long flags;
-   int locked;
 
init_data_structures_once();
 
raw_local_irq_save(flags);
-   locked = graph_lock();
-   if (!locked)
-   goto out_irq;
-
+   arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
pf = get_pending_free();
__lockdep_free_key_range(pf, start, size);
call_rcu_zapped(pf);
-
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+   arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 
/*
@@ -4911,9 +4907,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
return;
 
raw_local_irq_save(flags);
-   if (!graph_lock())
-   goto out_irq;
-
+   arch_spin_lock(&lockdep_lock);
+   current->lockdep_recursion = 1;
pf = get_pending_free();
hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
if (k == key) {
@@ -4925,8 +4920,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
WARN_ON_ONCE(!found);
__lockdep_free_key_range(pf, key, 1);
call_rcu_zapped(pf);
-   graph_unlock();
-out_irq:
+   current->lockdep_recursion = 0;
+   arch_spin_unlock(&lockdep_lock);
raw_local_irq_restore(flags);
 
/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */


[GIT pull] locking fix for 4.16

2018-03-11 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

rt_mutex_futex_unlock() grew a new irq-off call site, but the function
assumes that it's always called from irq-enabled context. Use the
irqsave/irqrestore lock variants to handle the new call site correctly.

Thanks,

tglx

-->
Boqun Feng (1):
  rtmutex: Make rt_mutex_futex_unlock() safe for irq-off callsites


 kernel/locking/rtmutex.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 65cc0cb984e6..940633c63254 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1616,11 +1616,12 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex 
*lock,
 void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
 {
DEFINE_WAKE_Q(wake_q);
+   unsigned long flags;
bool postunlock;
 
-   raw_spin_lock_irq(&lock->wait_lock);
+   raw_spin_lock_irqsave(&lock->wait_lock, flags);
postunlock = __rt_mutex_futex_unlock(lock, &wake_q);
-   raw_spin_unlock_irq(&lock->wait_lock);
+   raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
if (postunlock)
rt_mutex_postunlock(&wake_q);


[GIT PULL] locking fix

2017-07-21 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 69f0d429c413fe96db2c187475cebcc6e3a8c7f5 locking/rtmutex: Remove 
unnecessary priority adjustment

Remove an unnecessary priority adjustment in the rtmutex code.

 Thanks,

Ingo

-->
Alex Shi (1):
  locking/rtmutex: Remove unnecessary priority adjustment


 kernel/locking/rtmutex.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 78069895032a..649dc9d3951a 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -963,7 +963,6 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
return -EDEADLK;
 
raw_spin_lock(&task->pi_lock);
-   rt_mutex_adjust_prio(task);
waiter->task = task;
waiter->lock = lock;
waiter->prio = task->prio;


[GIT pull] locking fix for 4.12

2017-05-27 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

A fix for a state leak which was introduced in the recent rework of
futex/rtmutex interaction.

Thanks,

tglx

-->
Peter Zijlstra (1):
  futex,rt_mutex: Fix rt_mutex_cleanup_proxy_lock()


 kernel/locking/rtmutex.c | 24 ++--
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index b95509416909..28cd09e635ed 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1785,12 +1785,14 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
int ret;
 
raw_spin_lock_irq(&lock->wait_lock);
-
-   set_current_state(TASK_INTERRUPTIBLE);
-
/* sleep on the mutex */
+   set_current_state(TASK_INTERRUPTIBLE);
ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
-
+   /*
+* try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
+* have to fix that up.
+*/
+   fixup_rt_mutex_waiters(lock);
raw_spin_unlock_irq(&lock->wait_lock);
 
return ret;
@@ -1822,15 +1824,25 @@ bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
 
raw_spin_lock_irq(&lock->wait_lock);
/*
+* Do an unconditional try-lock, this deals with the lock stealing
+* state where __rt_mutex_futex_unlock() -> mark_wakeup_next_waiter()
+* sets a NULL owner.
+*
+* We're not interested in the return value, because the subsequent
+* test on rt_mutex_owner() will infer that. If the trylock succeeded,
+* we will own the lock and it will have removed the waiter. If we
+* failed the trylock, we're still not owner and we need to remove
+* ourselves.
+*/
+   try_to_take_rt_mutex(lock, current, waiter);
+   /*
 * Unless we're the owner; we're still enqueued on the wait_list.
 * So check if we became owner, if not, take us off the wait_list.
 */
if (rt_mutex_owner(lock) != current) {
remove_waiter(lock, waiter);
-   fixup_rt_mutex_waiters(lock);
cleanup = true;
}
-
/*
 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
 * have to fix that up.


[GIT pull] locking fix for 4.10

2017-02-18 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

Move the futex init function to core_initcall so the user mode helper does
not run into an uninitialized futex syscall.

Thanks,

tglx

-->
Yang Yang (1):
  futex: Move futex_init() to core_initcall


 kernel/futex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 0842c8ca534b..cdf365036141 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -3323,4 +3323,4 @@ static int __init futex_init(void)
 
return 0;
 }
-__initcall(futex_init);
+core_initcall(futex_init);


Re: [GIT pull] locking fix for 4.9

2016-10-12 Thread Steven Rostedt
On Wed, 12 Oct 2016 10:22:59 -0700
Linus Torvalds  wrote:

> On Wed, Oct 12, 2016 at 1:28 AM, Steven Rostedt  wrote:
> >
> > But I agree. We have lived a long time without the need for this
> > warning. I'm not strongly advocating keeping the warning around and
> > just disabling it totally. But it all comes down to how much we
> > trust those that inherit this after we are gone ;-)  
> 
> I'll take that as an ack for just moving it back to being globally
> disabled, and will do a commit doing that.
> 

Fine with me:

Acked-by: Steven Rostedt 

-- Steve


Re: [GIT pull] locking fix for 4.9

2016-10-12 Thread Linus Torvalds
On Wed, Oct 12, 2016 at 1:28 AM, Steven Rostedt  wrote:
>
> But I agree. We have lived a long time without the need for this
> warning. I'm not strongly advocating keeping the warning around and
> just disabling it totally. But it all comes down to how much we
> trust those that inherit this after we are gone ;-)

I'll take that as an ack for just moving it back to being globally
disabled, and will do a commit doing that.

  Linus


Re: [GIT pull] locking fix for 4.9

2016-10-12 Thread Steven Rostedt
On Mon, 10 Oct 2016 10:29:27 -0700
Linus Torvalds  wrote:

> On Sat, Oct 8, 2016 at 5:47 AM, Thomas Gleixner  wrote:
> >
> > A single fix which prevents newer GCCs from spamming the build output with
> > overly eager warnings about __builtin_return_address() uses which are
> > correct.  
> 
> Ugh. This feels over-engineered to me.
> 
> We already disable that warning unconditionally for the trace
> subdirectory, and for mm/usercopy.c.
> 
> I feel that the simpler solution is to just disable the warning
> globally, and not worry about "when this config option is enabled we
> need to disable it".
> 
> Basically, we disable the warning every time we ever use
> __builtin_return_address(), so maybe we should just disable it once
> and for all.

The only advantage of doing this is to make it a pain to use
__builtin_return_address(n) with n > 0, so that we don't accidentally
use it without knowing what we are doing.

> 
> It's not like the __builtin_return_address() warning is so incredibly
> useful anyway.
> 

But I agree. We have lived a long time without the need for this
warning. I'm not strongly advocating keeping the warning around and
just disabling it totally. But it all comes down to how much we
trust those that inherit this after we are gone ;-)

/me is feeling his age.

-- Steve


Re: [GIT pull] locking fix for 4.9

2016-10-10 Thread Linus Torvalds
On Sat, Oct 8, 2016 at 5:47 AM, Thomas Gleixner  wrote:
>
> A single fix which prevents newer GCCs from spamming the build output with
> overly eager warnings about __builtin_return_address() uses which are
> correct.

Ugh. This feels over-engineered to me.

We already disable that warning unconditionally for the trace
subdirectory, and for mm/usercopy.c.

I feel that the simpler solution is to just disable the warning
globally, and not worry about "when this config option is enabled we
need to disable it".

Basically, we disable the warning every time we ever use
__builtin_return_address(), so maybe we should just disable it once
and for all.

It's not like the __builtin_return_address() warning is so incredibly
useful anyway.

Hmm?

Linus


[GIT pull] locking fix for 4.9

2016-10-08 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

A single fix which prevents newer GCCs from spamming the build output with
overly eager warnings about __builtin_return_address() uses which are
correct.

Thanks,

tglx

-->
[GIT pull] locking fix for 4.9

2016-10-08 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

A single fix which prevents newer GCCs from spamming the build output with
overly eager warnings about __builtin_return_address() uses which are
correct.

Thanks,

tglx

-->
Steven Rostedt (1):
  locking/lockdep: Quiet GCC about dangerous __builtin_return_address() 
operations


 include/linux/ftrace.h |  2 ++
 kernel/Makefile        |  7 +++++++
 kernel/trace/Kconfig   |  1 +
 lib/Kconfig.debug      | 10 ++++++++++
 4 files changed, 20 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 6f93ac46e7f0..1218b150a6b3 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -714,6 +714,7 @@ static inline void __ftrace_enabled_restore(int enabled)
 #define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5))
 #define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6))
 
+#ifdef CONFIG_USING_GET_LOCK_PARENT_IP
 static inline unsigned long get_lock_parent_ip(void)
 {
unsigned long addr = CALLER_ADDR0;
@@ -725,6 +726,7 @@ static inline unsigned long get_lock_parent_ip(void)
return addr;
return CALLER_ADDR2;
 }
+#endif
 
 #ifdef CONFIG_IRQSOFF_TRACER
   extern void time_hardirqs_on(unsigned long a0, unsigned long a1);
diff --git a/kernel/Makefile b/kernel/Makefile
index e2ec54e2b952..bff8214bf5f6 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -11,6 +11,13 @@ obj-y = fork.o exec_domain.o panic.o \
notifier.o ksysfs.o cred.o reboot.o \
async.o range.o smpboot.o
 
+# Tracing may do some dangerous __builtin_return_address() operations
+# We know they are dangerous, we don't need gcc telling us that.
+ifdef CONFIG_USING_GET_LOCK_PARENT_IP
+FRAME_CFLAGS := $(call cc-disable-warning,frame-address)
+KBUILD_CFLAGS += $(FRAME_CFLAGS)
+endif
+
 obj-$(CONFIG_MULTIUSER) += groups.o
 
 ifdef CONFIG_FUNCTION_TRACER
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index ba3326785ca4..ecc0bbc2ca30 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -192,6 +192,7 @@ config PREEMPT_TRACER
select RING_BUFFER_ALLOW_SWAP
select TRACER_SNAPSHOT
select TRACER_SNAPSHOT_PER_CPU_SWAP
+   select USING_GET_LOCK_PARENT_IP
help
  This option measures the time spent in preemption-off critical
  sections, with microsecond accuracy.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cab7405f48d2..dbc49c48ff53 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -977,6 +977,7 @@ config TIMER_STATS
 config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPT && TRACE_IRQFLAGS_SUPPORT
+   select USING_GET_LOCK_PARENT_IP
default y
help
  If you say Y here then the kernel will use a debug variant of the
@@ -1159,8 +1160,17 @@ config LOCK_TORTURE_TEST
 
 endmenu # lock debugging
 
+config USING_GET_LOCK_PARENT_IP
+bool
+   help
+ Enables the use of the function get_lock_parent_ip() that
+ will use __builtin_return_address(n) with n > 0 causing
+ some gcc warnings. When this is selected, those warnings
+ will be suppressed.
+
 config TRACE_IRQFLAGS
bool
+   select USING_GET_LOCK_PARENT_IP
help
  Enables hooks to interrupt enabling and disabling for
  either tracing or lock debugging.
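
The warning the patch silences only fires for __builtin_return_address(n) with n > 0; level 0 is always safe. A minimal userspace sketch of the level-0 case that get_lock_parent_ip() starts from (the function name here is illustrative, not the kernel API):

```c
#include <stddef.h>

/* Level 0 is always safe to ask for; get_lock_parent_ip() walks further
 * up (CALLER_ADDR1, CALLER_ADDR2), which is what newer GCCs flag with
 * -Wframe-address and what the Makefile hunk above silences via
 * cc-disable-warning. */
__attribute__((noinline))
static void *caller_ip(void)
{
	return __builtin_return_address(0);
}
```

Deeper levels compile too, but GCC 6+ warns that they may return garbage on frames without a proper frame pointer, hence the Kconfig-gated flag rather than a blanket disable.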


[GIT PULL] locking fix

2016-09-13 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: d7127b5e5fa0551be21b86640f1648b224e36d43 locking/barriers: Don't use 
sizeof(void) in lockless_dereference()

Another lockless_dereference() Sparse fix.

 Thanks,

Ingo

-->
Johannes Berg (1):
  locking/barriers: Don't use sizeof(void) in lockless_dereference()


 include/linux/compiler.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 436aa4e42221..668569844d37 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -527,13 +527,14 @@ static __always_inline void __write_once_size(volatile 
void *p, void *res, int s
  * object's lifetime is managed by something other than RCU.  That
  * "something other" might be reference counting or simple immortality.
  *
- * The seemingly unused size_t variable is to validate @p is indeed a pointer
- * type by making sure it can be dereferenced.
+ * The seemingly unused variable ___typecheck_p validates that @p is
+ * indeed a pointer type by using a pointer to typeof(*p) as the type.
+ * Taking a pointer to typeof(*p) again is needed in case p is void *.
  */
 #define lockless_dereference(p) \
 ({ \
typeof(p) _p1 = READ_ONCE(p); \
-   size_t __maybe_unused __size_of_ptr = sizeof(*(p)); \
+   typeof(*(p)) *___typecheck_p __maybe_unused; \
smp_read_barrier_depends(); /* Dependency order vs. p above. */ \
(_p1); \
 })
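
The comment's point can be checked in plain userspace C: declaring a `typeof(*(p)) *` variable compiles for any pointer type, including `void *` (where the old `sizeof(*(p))` trick broke under Sparse). A sketch with a volatile access standing in for READ_ONCE() and a compiler barrier standing in for smp_read_barrier_depends():

```c
/* Userspace model of the macro; only the typecheck trick is the point. */
#define lockless_dereference(p) \
({ \
	__typeof__(p) _p1 = *(volatile __typeof__(p) *)&(p); \
	__typeof__(*(p)) *___typecheck_p __attribute__((unused)); \
	__asm__ __volatile__("" ::: "memory"); /* barrier stand-in */ \
	(_p1); \
})

static int deref_int(void)
{
	int x = 42;
	int *px = &x;
	return *lockless_dereference(px);
}

static int deref_void_ok(void)
{
	int x = 1;
	void *vp = &x;	/* void *: typeof(*vp) is void, void * is legal */
	return lockless_dereference(vp) == (void *)&x;
}
```

Passing a non-pointer (e.g. a plain int) fails to compile, which is exactly the validation the kernel macro wants.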


[GIT pull] locking fix for 4.7

2016-06-25 Thread Thomas Gleixner
Linus,

please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

A single fix to address a race in the static key logic.

Thanks,

tglx

-->
Paolo Bonzini (1):
  locking/static_key: Fix concurrent static_key_slow_inc()


 include/linux/jump_label.h | 16 +++++++++++++---
 kernel/jump_label.c        | 36 +++++++++++++++++++++++++++++++++---
 2 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 0536524bb9eb..68904469fba1 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -117,13 +117,18 @@ struct module;
 
 #include <linux/atomic.h>
 
+#ifdef HAVE_JUMP_LABEL
+
 static inline int static_key_count(struct static_key *key)
 {
-   return atomic_read(&key->enabled);
+   /*
+* -1 means the first static_key_slow_inc() is in progress.
+*  static_key_enabled() must return true, so return 1 here.
+*/
+   int n = atomic_read(&key->enabled);
+   return n >= 0 ? n : 1;
 }
 
-#ifdef HAVE_JUMP_LABEL
-
 #define JUMP_TYPE_FALSE0UL
 #define JUMP_TYPE_TRUE 1UL
 #define JUMP_TYPE_MASK 1UL
@@ -162,6 +167,11 @@ extern void jump_label_apply_nops(struct module *mod);
 
 #else  /* !HAVE_JUMP_LABEL */
 
+static inline int static_key_count(struct static_key *key)
+{
+   return atomic_read(&key->enabled);
+}
+
 static __always_inline void jump_label_init(void)
 {
static_key_initialized = true;
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 05254eeb4b4e..4b353e0be121 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -58,13 +58,36 @@ static void jump_label_update(struct static_key *key);
 
 void static_key_slow_inc(struct static_key *key)
 {
+   int v, v1;
+
STATIC_KEY_CHECK_USE();
-   if (atomic_inc_not_zero(&key->enabled))
-   return;
+
+   /*
+* Careful if we get concurrent static_key_slow_inc() calls;
+* later calls must wait for the first one to _finish_ the
+* jump_label_update() process.  At the same time, however,
+* the jump_label_update() call below wants to see
+* static_key_enabled() for jumps to be updated properly.
+*
+* So give a special meaning to negative key->enabled: it sends
+* static_key_slow_inc() down the slow path, and it is non-zero
+* so it counts as "enabled" in jump_label_update().  Note that
+* atomic_inc_unless_negative() checks >= 0, so roll our own.
+*/
+   for (v = atomic_read(&key->enabled); v > 0; v = v1) {
+   v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
+   if (likely(v1 == v))
+   return;
+   }
 
jump_label_lock();
-   if (atomic_inc_return(&key->enabled) == 1)
+   if (atomic_read(&key->enabled) == 0) {
+   atomic_set(&key->enabled, -1);
jump_label_update(key);
+   atomic_set(&key->enabled, 1);
+   } else {
+   atomic_inc(&key->enabled);
+   }
jump_label_unlock();
 }
 EXPORT_SYMBOL_GPL(static_key_slow_inc);
@@ -72,6 +95,13 @@ EXPORT_SYMBOL_GPL(static_key_slow_inc);
 static void __static_key_slow_dec(struct static_key *key,
unsigned long rate_limit, struct delayed_work *work)
 {
+   /*
+* The negative count check is valid even when a negative
+* key->enabled is in use by static_key_slow_inc(); a
+* __static_key_slow_dec() before the first static_key_slow_inc()
+* returns is unbalanced, because all other static_key_slow_inc()
+* instances block while the update is in progress.
+*/
if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
WARN(atomic_read(&key->enabled) < 0,
 "jump label: negative count!\n");
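
The scheme the comments describe — negative key->enabled means "first enable in progress", positive means enabled — can be modeled in userspace with C11 atomics. This is a sketch, not the kernel API: the names are illustrative, the "update" step is a placeholder, and the kernel additionally serializes the slow path with jump_label_lock():

```c
#include <stdatomic.h>

static atomic_int enabled;		/* models key->enabled */

static int key_count(void)
{
	int n = atomic_load(&enabled);
	return n >= 0 ? n : 1;		/* -1 still counts as enabled */
}

static void slow_inc(void)
{
	int v = atomic_load(&enabled);

	/* fast path only when fully enabled (v > 0), as in the fix */
	while (v > 0)
		if (atomic_compare_exchange_weak(&enabled, &v, v + 1))
			return;

	/* slow path; the kernel holds jump_label_lock() here */
	if (atomic_load(&enabled) == 0) {
		atomic_store(&enabled, -1);	/* update in progress */
		/* jump_label_update() would patch code sites here */
		atomic_store(&enabled, 1);
	} else {
		atomic_fetch_add(&enabled, 1);
	}
}

static int count_after(int n)		/* helper for checking the model */
{
	for (int i = 0; i < n; i++)
		slow_inc();
	return key_count();
}
```

The fast-path loop is the rolled-by-hand atomic_inc_unless_negative() the comment mentions: it only increments while the value is strictly positive, so a concurrent first enabler (holding -1) forces everyone else into the slow path.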


[GIT PULL] locking fix

2016-04-16 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 5c8a010c2411729a07cb1b90c09fa978ac0ac6c0 locking/lockdep: Fix 
print_collision() unused warning

Fixes a build warning on certain Kconfig combinations.

 Thanks,

Ingo

-->
Borislav Petkov (1):
  locking/lockdep: Fix print_collision() unused warning


 kernel/locking/lockdep.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2324ba5310db..ed9410936a22 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1999,6 +1999,7 @@ static inline int get_first_held_lock(struct task_struct 
*curr,
return ++i;
 }
 
+#ifdef CONFIG_DEBUG_LOCKDEP
 /*
  * Returns the next chain_key iteration
  */
@@ -2069,6 +2070,7 @@ static void print_collision(struct task_struct *curr,
printk("\nstack backtrace:\n");
dump_stack();
 }
+#endif
 
 /*
  * Checks whether the chain and the current held locks are consistent


[GIT PULL] locking fix

2015-08-14 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: cba77f03f2c7b6cc0b0a44a3c679e0abade7da62 locking/pvqspinlock: Fix 
kernel panic in locking-selftest

A single fix for a locking self-test crash.

 Thanks,

Ingo

-->
Waiman Long (1):
  locking/pvqspinlock: Fix kernel panic in locking-selftest


 kernel/locking/qspinlock_paravirt.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock_paravirt.h 
b/kernel/locking/qspinlock_paravirt.h
index 04ab18151cc8..df19ae4debd0 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -4,6 +4,7 @@
 
 #include <linux/hash.h>
 #include <linux/bootmem.h>
+#include <linux/debug_locks.h>
 
 /*
  * Implement paravirt qspinlocks; the general idea is to halt the vcpus instead
@@ -286,15 +287,23 @@ __visible void __pv_queued_spin_unlock(struct qspinlock 
*lock)
 {
struct __qspinlock *l = (void *)lock;
struct pv_node *node;
+   u8 lockval = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
 
/*
 * We must not unlock if SLOW, because in that case we must first
 * unhash. Otherwise it would be possible to have multiple @lock
 * entries, which would be BAD.
 */
-   if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
+   if (likely(lockval == _Q_LOCKED_VAL))
return;
 
+   if (unlikely(lockval != _Q_SLOW_VAL)) {
+   if (debug_locks_silent)
+   return;
+   WARN(1, "pvqspinlock: lock %p has corrupted value 0x%x!\n", 
lock, atomic_read(&lock->val));
+   return;
+   }
+
/*
 * Since the above failed to release, this must be the SLOW path.
 * Therefore start by looking up the blocked node and unhashing it.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
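
The shape of the fix — do the cmpxchg once, keep the old value, and branch on it instead of re-reading — can be sketched in userspace. The constants and the unhash step are simplified placeholders, not the kernel's:

```c
#include <stdatomic.h>

#define MY_Q_LOCKED_VAL 1		/* placeholder values */
#define MY_Q_SLOW_VAL   3

/* Returns 0 for the fast path, 1 for the slow path, -1 for corruption. */
static int pv_unlock(atomic_int *locked)
{
	int old = MY_Q_LOCKED_VAL;

	/* one cmpxchg: on failure, 'old' holds the pre-existing value */
	if (atomic_compare_exchange_strong(locked, &old, 0))
		return 0;

	if (old != MY_Q_SLOW_VAL)
		return -1;		/* kernel: WARN(), unless silent */

	/* slow path: look up and unhash the blocked node, then release */
	atomic_store(locked, 0);
	return 1;
}

static int pv_unlock_val(int initial)	/* helper for checking the model */
{
	atomic_int locked = initial;
	return pv_unlock(&locked);
}
```

Capturing the cmpxchg result once is what lets the corrupted-value case be diagnosed (the WARN above) instead of crashing in the slow path, which is what the locking self-tests were hitting.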


[GIT PULL] locking fix

2015-03-28 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 35a9393c95b31870a74f51a3e7455f33f5657b6f lockdep: Fix the module 
unload key range freeing logic

A module unload lockdep race fix.

 Thanks,

Ingo

-->
Peter Zijlstra (1):
  lockdep: Fix the module unload key range freeing logic


 kernel/locking/lockdep.c | 81 
 kernel/module.c  |  8 ++---
 2 files changed, 59 insertions(+), 30 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 88d0d4420ad2..ba77ab5f64dd 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -633,7 +633,7 @@ static int count_matching_names(struct lock_class 
*new_class)
if (!new_class->name)
return 0;
 
-   list_for_each_entry(class, &all_lock_classes, lock_entry) {
+   list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
if (new_class->key - new_class->subclass == class->key)
return class->name_version;
if (class->name && !strcmp(class->name, new_class->name))
@@ -700,10 +700,12 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int 
subclass)
hash_head = classhashentry(key);
 
/*
-* We can walk the hash lockfree, because the hash only
-* grows, and we are careful when adding entries to the end:
+* We do an RCU walk of the hash, see lockdep_free_key_range().
 */
-   list_for_each_entry(class, hash_head, hash_entry) {
+   if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+   return NULL;
+
+   list_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key) {
/*
 * Huh! same key, different name? Did someone trample
@@ -728,7 +730,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int 
subclass, int force)
struct lockdep_subclass_key *key;
struct list_head *hash_head;
struct lock_class *class;
-   unsigned long flags;
+
+   DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
class = look_up_lock_class(lock, subclass);
if (likely(class))
@@ -750,28 +753,26 @@ register_lock_class(struct lockdep_map *lock, unsigned 
int subclass, int force)
key = lock->key->subkeys + subclass;
hash_head = classhashentry(key);
 
-   raw_local_irq_save(flags);
if (!graph_lock()) {
-   raw_local_irq_restore(flags);
return NULL;
}
/*
 * We have to do the hash-walk again, to avoid races
 * with another CPU:
 */
-   list_for_each_entry(class, hash_head, hash_entry)
+   list_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key)
goto out_unlock_set;
+   }
+
/*
 * Allocate a new key from the static array, and add it to
 * the hash:
 */
if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
if (!debug_locks_off_graph_unlock()) {
-   raw_local_irq_restore(flags);
return NULL;
}
-   raw_local_irq_restore(flags);
 
print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!");
dump_stack();
@@ -798,7 +799,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int 
subclass, int force)
 
if (verbose(class)) {
graph_unlock();
-   raw_local_irq_restore(flags);
 
printk("\nnew class %p: %s", class->key, class->name);
if (class->name_version > 1)
@@ -806,15 +806,12 @@ register_lock_class(struct lockdep_map *lock, unsigned 
int subclass, int force)
printk("\n");
dump_stack();
 
-   raw_local_irq_save(flags);
if (!graph_lock()) {
-   raw_local_irq_restore(flags);
return NULL;
}
}
 out_unlock_set:
graph_unlock();
-   raw_local_irq_restore(flags);
 
 out_set_class_cache:
if (!subclass || force)
@@ -870,11 +867,9 @@ static int add_lock_to_list(struct lock_class *class, 
struct lock_class *this,
entry->distance = distance;
entry->trace = *trace;
/*
-* Since we never remove from the dependency list, the list can
-* be walked lockless by other CPUs, it's only allocation
-* that must be protected by the spinlock. But this also means
-* we must make new entries visible only once writes to the
-* entry become visible - hence the RCU op:
+* Both allocation and removal are done under the graph lock; but
+* iteration is under RCU-sched; see look_up_lock_class() and
+* lockdep_free_key_range().
 */
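
register_lock_class() above is an instance of a classic pattern: a lockless lookup on the read side, then a re-walk under the lock before inserting, so a racing CPU's insert is noticed. A userspace sketch under simplifying assumptions (an array instead of a hash, a pthread mutex instead of the graph lock, and no RCU grace periods; all names are hypothetical):

```c
#include <pthread.h>

#define MAX_CLASSES 8

static pthread_mutex_t graph_lock = PTHREAD_MUTEX_INITIALIZER;
static int class_keys[MAX_CLASSES];
static int nr_classes;

static int look_up(int key)	/* lockless walk (RCU in the kernel) */
{
	for (int i = 0; i < nr_classes; i++)
		if (class_keys[i] == key)
			return i;
	return -1;
}

static int register_class(int key)
{
	int idx = look_up(key);

	if (idx >= 0)
		return idx;		/* fast path, no lock taken */

	pthread_mutex_lock(&graph_lock);
	/* re-walk under the lock: another CPU may have won the race */
	idx = look_up(key);
	if (idx < 0 && nr_classes < MAX_CLASSES) {
		class_keys[nr_classes] = key;
		idx = nr_classes++;
	}
	pthread_mutex_unlock(&graph_lock);
	return idx;
}
```

What the patch adds on top of this pattern is the removal side: once entries can also be freed (module unload), the lockless walk must be a proper RCU iteration and freeing must wait for a grace period.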

[GIT PULL] locking fix

2015-03-01 Thread Ingo Molnar
Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
locking-urgent-for-linus

   # HEAD: 9d3e2d02f54160725d97f4ab1e1e8de493fbf33a locking/rtmutex: Set state 
back to running on error

An rtmutex deadlock path fixlet.

 Thanks,

Ingo

-->
Sebastian Andrzej Siewior (1):
  locking/rtmutex: Set state back to running on error


 kernel/locking/rtmutex.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index e16e5542bf13..6357265a31ad 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1193,6 +1193,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
 
if (unlikely(ret)) {
+   __set_current_state(TASK_RUNNING);
if (rt_mutex_has_waiters(lock))
remove_waiter(lock, &waiter);
rt_mutex_handle_deadlock(ret, chwalk, &waiter);
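
The bug class this one-liner fixes — returning from a sleeping-lock slow path with the task still marked as sleeping — can be modeled in miniature. The states and function names below are stand-ins, not the kernel API:

```c
enum my_state { MY_RUNNING, MY_INTERRUPTIBLE };

static enum my_state task_state = MY_RUNNING;

/* Models __rt_mutex_slowlock(): on success the wake-up path has already
 * set the task back to running; on timeout/deadlock it has not. */
static int inner_slowlock(int err)
{
	if (!err)
		task_state = MY_RUNNING;
	return err;
}

/* Returns the task state observed after the slow path finishes. */
static enum my_state slowlock(int err)
{
	task_state = MY_INTERRUPTIBLE;		/* about to block */
	if (inner_slowlock(err))
		task_state = MY_RUNNING;	/* the fix: reset on error */
	return task_state;
}
```

Without the `__set_current_state(TASK_RUNNING)` equivalent on the error branch, the error return would leave the caller running while still flagged as sleeping, which is exactly the inconsistency the patch closes.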


Re: [GIT PULL] locking fix

2013-10-28 Thread Ingo Molnar

* Linus Torvalds  wrote:

> On Sun, Oct 27, 2013 at 12:56 PM, Maarten Lankhorst
>  wrote:
> >
> > And this is why ww_ctx == NULL is now passed as an inline 
> > argument. :)
> 
> Well, more than that - the "optimization" has been done at the 
> source code level, so that the behavior is no longer a matter 
> about how well the compiler optimizes it any more.
> 
> I'm not complaining about the fix. I'm complaining about how the 
> fix was claimed to be due to a compiler bug. The "documentation" 
> for the fix (ie the commit message) was actively misleading.

Agreed, there was quite a bit of back and forth and I genuinely got 
confused and thought it's purely about a compiler bug (hence the 
misleading pull request) - will watch out for that pattern better 
next time around.

Thanks,

Ingo


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:56 PM, Maarten Lankhorst
 wrote:
>
> And this is why ww_ctx == NULL is now passed as an inline argument. :)

Well, more than that - the "optimization" has been done at the source
code level, so that the behavior is no longer a matter about how well
the compiler optimizes it any more.

I'm not complaining about the fix. I'm complaining about how the fix
was claimed to be due to a compiler bug. The "documentation" for the
fix (ie the commit message) was actively misleading.

   Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
op 27-10-13 20:51, Linus Torvalds schreef:
> On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
>  wrote:
>> I would love for a compiler to become that smart though, but I do not think 
>> it's likely.
> Dammit, even if that is true, then write the conditional *correctly*.
>
> As mentioned, the conditional
>
> __builtin_constant_p(ww_ctx) && ww_ctx == NULL
>
> is actually sensible, in a way the original one was *not*. It actually
> tests what you apparently intended to test, and is more readable to
> humans to boot.
Yeah that mail arrived after I sent mine, I agree that this would have been 
more sensible.
> And no, it still isn't actually guaranteed to do what you want it to
> do. Historically, in gcc, __builtin_constant_p() really only ever
> worked in macros, because by the time you use it in inline functions,
> a constant NULL in the caller will have been turned into an argument
> variable in the inline function, and __builtin_constant_p() would be
> done before that was optimized away. Over the years, gcc has pushed
> some of the builtin evaluation deeper down, and these days it actually
> works within inline functions, but my point that
> __builtin_constant_p() is about a certain level of compiler
> optimization is very much true: you're actually testing for a compiler
> optimization detail.
>
> I know the LLVM people had similar issues with this comparison, so
> these days it's not even just about gcc versions. We may never have
> cared very much about icc, but llvm is actually an interesting target
> compiler.
>
And this is why ww_ctx == NULL is now passed as an inline argument. :)

~Maarten


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
 wrote:
>
> I would love for a compiler to become that smart though, but I do not think 
> it's likely.

Dammit, even if that is true, then write the conditional *correctly*.

As mentioned, the conditional

__builtin_constant_p(ww_ctx) && ww_ctx == NULL

is actually sensible, in a way the original one was *not*. It actually
tests what you apparently intended to test, and is more readable to
humans to boot.

And no, it still isn't actually guaranteed to do what you want it to
do. Historically, in gcc, __builtin_constant_p() really only ever
worked in macros, because by the time you use it in inline functions,
a constant NULL in the caller will have been turned into an argument
variable in the inline function, and __builtin_constant_p() would be
done before that was optimized away. Over the years, gcc has pushed
some of the builtin evaluation deeper down, and these days it actually
works within inline functions, but my point that
__builtin_constant_p() is about a certain level of compiler
optimization is very much true: you're actually testing for a compiler
optimization detail.

I know the LLVM people had similar issues with this comparison, so
these days it's not even just about gcc versions. We may never have
cared very much about icc, but llvm is actually an interesting target
compiler.

Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
op 27-10-13 20:23, Linus Torvalds schreef:
> On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
>  wrote:
>> op 27-10-13 18:28, Linus Torvalds schreef:
>>> That expression is largely equivalent to
>>> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
>>> the comparison to NULL is constant), which is actually much easier to
>>> read, while carrying a totally different semantic meaning. Making
>>> things worse, the comparison to NULL *may* be marked constant under
>>> some very random situations (ie the compiler could turn a "taking an
>>> address of a variable is never NULL" kind of knowledge and combining
>>> it with other knowledge, and turn a complicated "ctx" expression into
>>> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
>>> constant, even though ctx itself is some dynamic calculation).
>>>
>>> Whoever wrote the original should be shot. And this commit shouldn't
>>> have been marked as being somehow about gcc-version dependence, but
>>> about removing completely crap code.
>>>
>> Unfortunately gcc disagreed there, which was another compiler bug.
> Stop this idiotic "blame gcc bug" crap. Which part of my explanation
> for why it was *NOT* a compiler bug did you not understand?
>
>> __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
>> NULL), iirc.
> See my "largely equivalent" comment, with the *EXTRA* logic that gcc
> may actually find cases where the comparison is a constant even if the
> ww_ctx thing itself isn't a constant.
Sure in the theoretical case it's possible.
>>  __builtin_constant_p(ww_ctx == NULL) is equal to 
>> __builtin_constant_p(ww_ctx != NULL), but
>> the former is more readable, since it shows we expect ww_ctx to be null.
> Stop the f*cking around already! The  whole "we expect ww_ctx to be
> null" thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
> ACTUALLY IS!
>
> The expression
>
>__builtin_constant_p(ww_ctx == NULL)
>
> has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
> Christ, can you really not understand that?
I'm fully aware, I just think the compiler cannot know that the address is 
always non-null for a generic function that takes an argument and isn't inlined.

> For example, ww_ctx could be "&static_variable", and the compiler can
> - and some compilers _will_ - say that ww_ctx clearly cannot be NULL,
> so "ww_ctx == NULL" is 0, which is a constant, so the
> __builtin_constant_p() expression returns true. See? That expression
> has absolutely NOTHING to do with whether you passed in NULL or not.
> NOTHING.
but __ww_mutex_lock isn't inlined..
> That __builtin_constant_p() tests whether the comparison is
> *CONSTANT*. And "0" is just as much a constant as "1" is. Really. So
> the whole f*cking expression is total and utter crap, because it is
> entirely and utterly senseless. It lacks all meaning. It's not
> actually testing for NULL at all. Never was, never will.
>
> The *ONLY* thing it is testing for is "how much can the compiler
> optimize this", and as such the *ONLY* thing it tests for is compiler
> differences.
>
> Really. Seriously. If you start blaming the compiler for different
> compilers giving different results, the only thing *that* shows is
> that you didn't understand the expression to begin with.
>
>> But yeah I guess it was too broken in gcc after all, so that's why it had to 
>> be killed altogether.
> NO NO NO NO. No a f*cking thousand times. It's not "too broken in
> gcc". It's too broken in the source code, and the fact that you don't
> even understand that is sad. You wrote the code, and you seem to be
> unable to admit that *your* code was buggy.
>
> It's not a compiler bug. It's your bug. Stand up like a man, instead
> of trying to flail around and blame anything else but yourself.
>
> So guys, get your act together, and stop blaming the compiler already.
I never denied that my original code contained bugs, which is why I wrote that 
fix. I just don't believe gcc
will ever be smart enough to determine that ww_ctx is a non-null argument in 
all calls to __ww_mutex_lock,
and then determine for that reason ww_ctx != NULL would be an invariant.

I would love for a compiler to become that smart though, but I do not think 
it's likely.

But hey it was a bug, my code was buggy and I helped by suggesting how to write 
the correct fix.

~Maarten



Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:23 PM, Linus Torvalds
 wrote:
>
> The *ONLY* thing it is testing for is "how much can the compiler
> optimize this", and as such the *ONLY* thing it tests for is compiler
> differences.

Side note: testing "can the compiler optimize this expression at
compile time" is actually sometimes an interesting question, so it can
be a valid thing to test.

But people should understand that the question is literally about THAT
(ie visibility into compiler optimization) rather than about the value
itself.

So generally, the only thing that a __builtin_constant_p() test can be
used for is in *conjunction* with having an actual test for an actual
value, and then having special-case logic that pertains to that value.

So for example, *this* is a valid test:

if (__builtin_constant_p(ww_ctx) && ww_ctx == NULL) {
... special compile-time shortcut for the NULL value ..
} else {
... generic code that *also* handles the NULL value ..
}

and it's useful for triggering a compile-time optimized code-sequence
that is only true for NULL. But because __builtin_constant_p() is
about "how well can the compiler optimize this", that "else" statement
had better be able to handle the generic case too.

And yes, there are a few places where we do expect a certain minimal
set of optimizations. So in some cases we *might* have the rule that
the only valid use of NULL in a case like the above is when the
pointer passed in is passed in as a constant. And then we might say
"we rely on the compiler always returning true for
__builtin_constant_p(NULL)", and then we might say "so the "generic"
version of the code is guaranteed to never see NULL".

But notice how *different* that

__builtin_constant_p(ww_ctx) && ww_ctx == NULL

test is from

__builtin_constant_p(ww_ctx == NULL)

and really, the two tests are *fundamentally* really really different.
The first one can make sense. While the second one is pure and utter
garbage.

  Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
 wrote:
> op 27-10-13 18:28, Linus Torvalds schreef:
>>
>> That expression is largely equivalent to
>> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
>> the comparison to NULL is constant), which is actually much easier to
>> read, while carrying a totally different semantic meaning. Making
>> things worse, the comparison to NULL *may* be marked constant under
>> some very random situations (ie the compiler could turn a "taking an
>> address of a variable is never NULL" kind of knowledge and combining
>> it with other knowledge, and turn a complicated "ctx" expression into
>> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
>> constant, even though ctx itself is some dynamic calculation).
>>
>> Whoever wrote the original should be shot. And this commit shouldn't
>> have been marked as being somehow about gcc-version dependence, but
>> about removing completely crap code.
>>
> Unfortunately gcc disagreed there, which was another compiler bug.

Stop this idiotic "blame gcc bug" crap. Which part of my explanation
for why it was *NOT* a compiler bug did you not understand?

> __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
> NULL), iirc.

See my "largely equivalent" comment, with the *EXTRA* logic that gcc
may actually find cases where the comparison is a constant even if the
ww_ctx thing itself isn't a constant.

> __builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx 
> != NULL), but
> the former is more readable, since it shows we expect ww_ctx to be null.

Stop the f*cking around already! The  whole "we expect ww_ctx to be
null" thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
ACTUALLY IS!

The expression

   __builtin_constant_p(ww_ctx == NULL)

has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
Christ, can you really not understand that?

For example, ww_ctx could be "&static_variable", and the compiler can
- and some compilers _will_ - say that ww_ctx clearly cannot be NULL,
so "ww_ctx == NULL" is 0, which is a constant, so the
__builtin_constant_p() expression returns true. See? That expression
has absolutely NOTHING to do with whether you passed in NULL or not.
NOTHING.

That __builtin_constant_p() tests whether the comparison is
*CONSTANT*. And "0" is just as much a constant as "1" is. Really. So
the whole f*cking expression is total and utter crap, because it is
entirely and utterly senseless. It lacks all meaning. It's not
actually testing for NULL at all. Never was, never will.

The *ONLY* thing it is testing for is "how much can the compiler
optimize this", and as such the *ONLY* thing it tests for is compiler
differences.

Really. Seriously. If you start blaming the compiler for different
compilers giving different results, the only thing *that* shows is
that you didn't understand the expression to begin with.

> But yeah I guess it was too broken in gcc after all, so that's why it had to 
> be killed altogether.

NO NO NO NO. No a f*cking thousand times. It's not "too broken in
gcc". It's too broken in the source code, and the fact that you don't
even understand that is sad. You wrote the code, and you seem to be
unable to admit that *your* code was buggy.

It's not a compiler bug. It's your bug. Stand up like a man, instead
of trying to flail around and blame anything else but yourself.

So guys, get your act together, and stop blaming the compiler already.

   Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
op 27-10-13 18:28, Linus Torvalds schreef:
> On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar  wrote:
>> This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
>> kernels built with GCC 3.x. (There are still such distros.)
> Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
> (b) wrong and (c) caused problems for LLVM too.
>
> It was wrong because "__builtin_constant_p(ww_ctx == NULL)" simply
> makes no sense.
>
> Why?
>
> That expression is largely equivalent to
> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
> the comparison to NULL is constant), which is actually much easier to
> read, while carrying a totally different semantic meaning. Making
> things worse, the comparison to NULL *may* be marked constant under
> some very random situations (ie the compiler could turn a "taking an
> address of a variable is never NULL" kind of knowledge and combining
> it with other knowledge, and turn a complicated "ctx" expression into
> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
> constant, even though ctx itself is some dynamic calculation).
>
> Whoever wrote the original should be shot. And this commit shouldn't
> have been marked as being somehow about gcc-version dependence, but
> about removing completely crap code.
>
Unfortunately gcc disagreed there, which was another compiler bug.
__builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
NULL), iirc.
__builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx != 
NULL), but
the former is more readable, since it shows we expect ww_ctx to be null.

But yeah I guess it was too broken in gcc after all, so that's why it had to be 
killed altogether.



Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar  wrote:
>
> This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
> kernels built with GCC 3.x. (There are still such distros.)

Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
(b) wrong and (c) caused problems for LLVM too.

It was wrong because "__builtin_constant_p(ww_ctx == NULL)" simply
makes no sense.

Why?

That expression is largely equivalent to
"__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
the comparison to NULL is constant), which is actually much easier to
read, while carrying a totally different semantic meaning. Making
things worse, the comparison to NULL *may* be marked constant under
some very random situations (ie the compiler could turn a "taking an
address of a variable is never NULL" kind of knowledge and combining
it with other knowledge, and turn a complicated "ctx" expression into
a "I know this cannot be NULL" thing, and thus the "== NULL" is a
constant, even though ctx itself is some dynamic calculation).

Whoever wrote the original should be shot. And this commit shouldn't
have been marked as being somehow about gcc-version dependence, but
about removing completely crap code.

Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar mi...@kernel.org wrote:

 This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
 kernels built with GCC 3.x. (There are still such distros.)

Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
(b) wrong and (c) caused problems for LLVM too.

It was wrong because __builtin_constant_p(ww_ctx == NULL) simply
makes no sense.

Why?

That expression is largely equivalent to
__builtin_constant_p(ww_ctx) (because iff ww_ctx is constant, then
the comparison to NULL is constant), which is actually much easier to
read, while carrying a totally different semantic meaning. Making
things worse, the comparison to NULL *may* be marked constant under
some very random situations (ie the compiler could turn a taking an
address of a variable is never NULL kind of knowledge and combining
it with other knowledge, and turn a complicated ctx expression into
a I know this cannot be NULL thing, and thus the == NULL is a
constant, even though ctx itself is some dynamic calculation).

Whoever wrote the original should be shot. And this commit shouldn't
have been marked as being somehow about gcc-version dependence, but
about removing completely crap code.

Linus
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
op 27-10-13 18:28, Linus Torvalds schreef:
 On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar mi...@kernel.org wrote:
 This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
 kernels built with GCC 3.x. (There are still such distros.)
 Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
 (b) wrong and (c) caused problems for LLVM too.

 It was wrong because __builtin_constant_p(ww_ctx == NULL) simply
 makes no sense.

 Why?

 That expression is largely equivalent to
 __builtin_constant_p(ww_ctx) (because iff ww_ctx is constant, then
 the comparison to NULL is constant), which is actually much easier to
 read, while carrying a totally different semantic meaning. Making
 things worse, the comparison to NULL *may* be marked constant under
 some very random situations (ie the compiler could turn a taking an
 address of a variable is never NULL kind of knowledge and combining
 it with other knowledge, and turn a complicated ctx expression into
 a I know this cannot be NULL thing, and thus the == NULL is a
 constant, even though ctx itself is some dynamic calculation).

 Whoever wrote the original should be shot. And this commit shouldn't
 have been marked as being somehow about gcc-version dependence, but
 about removing completely crap code.

Unfortunately gcc disagreed there, which was another compiler bug.
__builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
NULL), iirc.
__builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx != 
NULL), but
the former is more readable, since it shows we expect ww_ctx to be null.

But yeah I guess it was too broken in gcc after all, so that's why it had to be 
killed altogether.

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
maarten.lankho...@canonical.com wrote:
 op 27-10-13 18:28, Linus Torvalds schreef:

 That expression is largely equivalent to
 __builtin_constant_p(ww_ctx) (because iff ww_ctx is constant, then
 the comparison to NULL is constant), which is actually much easier to
 read, while carrying a totally different semantic meaning. Making
 things worse, the comparison to NULL *may* be marked constant under
 some very random situations (ie the compiler could turn a taking an
 address of a variable is never NULL kind of knowledge and combining
 it with other knowledge, and turn a complicated ctx expression into
 a I know this cannot be NULL thing, and thus the == NULL is a
 constant, even though ctx itself is some dynamic calculation).

 Whoever wrote the original should be shot. And this commit shouldn't
 have been marked as being somehow about gcc-version dependence, but
 about removing completely crap code.

 Unfortunately gcc disagreed there, which was another compiler bug.

Stop this idiotic blame gcc bug crap. Which part of my explanation
for why it was *NOT* a compiler bug did you not understand?

 __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
 NULL), iirc.

See my largely equivalent comment, with the *EXTRA* logic that gcc
may actually find cases where the comparison is a constant even if the
ww_ctx thing itself isn't a constant.

 __builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx 
 != NULL), but
 the former is more readable, since it shows we expect ww_ctx to be null.

Stop the f*cking around already! The  whole we expect ww_ctx to be
null thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
ACTUALLY IS!

The expression

   __builtin_constant_p(ww_ctx == NULL)

has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
Christ, can you really not understand that?

For example, ww_ctx could be static_variable, and the compiler can
- and some compiles _will_ - say that ww_ctx clearly cannot be NULL,
so ww_ctx == NULL is 0, which is a constant, so the
__builtin_constant_p() expression returns true. See? That expression
has absolutely NOTHING to do with whether you passed in NULL or not.
NOTHING.

That __builtin_constant_p() tests whether the comparison is
*CONSTANT*. And 0 is just as much a constant as 1 is. Really. So
the whole f*cking expression is total and utter crap, because it is
entirely and utterly senseless. It lacks all meaning. It's not
actually testing for NULL at all. Never was, never will.

The *ONLY* thing it is testing for is how much can the compiler
optimize this, and as such the *ONLY* thing it tests for is compiler
differences.

Really. Seriously. If you start blaming the compiler for different
compilers giving different results, the only thing *that* shows is
that you didn't understand the expression to begin with.

 But yeah I guess it was too broken in gcc after all, so that's why it had to 
 be killed altogether.

NO NO NO NO. No a f*cking thousand times. It's not too broken in
gcc. It's too broken in the source code, and the fact that you don't
even understand that is sad. You wrote the code, and you seem to be
unable to admit that *your* code was buggy.

It's not a compiler bug. It's your bug. Stand up like a man, instead
of trying to flail around and blame anything else but yourself.

So guys, get your act together, and stop blaming the compiler already.

   Linus
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:23 PM, Linus Torvalds
torva...@linux-foundation.org wrote:

 The *ONLY* thing it is testing for is how much can the compiler
 optimize this, and as such the *ONLY* thing it tests for is compiler
 differences.

Side note: testing can the compiler optimize this expression at
compile time is actually sometimes an interesting question, so it can
be a valid thing to test.

But people should understand that the question is literally about THAT
(ie visibility into compiler optimization) rather than about the value
itself.

So generally, the only thing that a __builtin_constant_p() test can be
used for is in *conjunction* with having an actual test for an actual
value, and then having special-case logic that pertains to that value.

So for example, *this* is a valid test:

if (__builtin_constant_p(ww_ctx)  ww_ctx == NULL) {
... special compile-time shortcut for the NULL value ..
} else {
... generic code that *also* handles the NULL value ..
}

and it's useful for triggering a compile-time optimized code-sequence
that is only true for NULL. But because __builtin_constant_p() is
about how well can the compiler optimize this, that else statement
had better be able to handle the generic case too.

And yes, there are a few places where we do expect a certain minimal
set of optimizations. So in some cases we *might* have the rule that
the only valid use of NULL in a case like the above is when the
pointer passed in is passed in as a constant. And then we might say
we rely on the compiler always returning true for
__builtin_constant_p(NULL), and then we might say so the generic
version of the code is guaranteed to never see NULL.

But notice how *different* that

__builtin_constant_p(ww_ctx)  ww_ctx == NULL

test is from

__builtin_constant_p(ww_ctx == NULL)

and really, the two tests are *fundamentally* really really different.
The first one can make sense. While the second one is pure and utter
garbage.

  Linus
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
op 27-10-13 20:23, Linus Torvalds schreef:
 On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 op 27-10-13 18:28, Linus Torvalds schreef:
 That expression is largely equivalent to
 __builtin_constant_p(ww_ctx) (because iff ww_ctx is constant, then
 the comparison to NULL is constant), which is actually much easier to
 read, while carrying a totally different semantic meaning. Making
 things worse, the comparison to NULL *may* be marked constant under
 some very random situations (ie the compiler could turn a taking an
 address of a variable is never NULL kind of knowledge and combining
 it with other knowledge, and turn a complicated ctx expression into
 a I know this cannot be NULL thing, and thus the == NULL is a
 constant, even though ctx itself is some dynamic calculation).

 Whoever wrote the original should be shot. And this commit shouldn't
 have been marked as being somehow about gcc-version dependence, but
 about removing completely crap code.

 Unfortunately gcc disagreed there, which was another compiler bug.
 Stop this idiotic blame gcc bug crap. Which part of my explanation
 for why it was *NOT* a compiler bug did you not understand?

 __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == 
 NULL), iirc.
 See my largely equivalent comment, with the *EXTRA* logic that gcc
 may actually find cases where the comparison is a constant even if the
 ww_ctx thing itself isn't a constant.
Sure in the theoretical case it's possible.
  __builtin_constant_p(ww_ctx == NULL) is equal to 
 __builtin_constant_p(ww_ctx != NULL), but
 the former is more readable, since it shows we expect ww_ctx to be null.
 Stop the f*cking around already! The  whole we expect ww_ctx to be
 null thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
 ACTUALLY IS!

 The expression

__builtin_constant_p(ww_ctx == NULL)

 has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
 Christ, can you really not understand that?
I'm fully aware, I just think the compiler cannot know that the address is 
always non-null for a generic function that takes an argument and isn't inlined.

 For example, ww_ctx could be &static_variable, and the compiler can
 - and some compilers _will_ - say that ww_ctx clearly cannot be NULL,
 so "ww_ctx == NULL" is 0, which is a constant, so the
 __builtin_constant_p() expression returns true. See? That expression
 has absolutely NOTHING to do with whether you passed in NULL or not.
 NOTHING.
but __ww_mutex_lock isn't inlined..
 That __builtin_constant_p() tests whether the comparison is
 *CONSTANT*. And 0 is just as much a constant as 1 is. Really. So
 the whole f*cking expression is total and utter crap, because it is
 entirely and utterly senseless. It lacks all meaning. It's not
 actually testing for NULL at all. Never was, never will.

 The *ONLY* thing it is testing for is "how much can the compiler
 optimize this", and as such the *ONLY* thing it tests for is compiler
 differences.

 Really. Seriously. If you start blaming the compiler for different
 compilers giving different results, the only thing *that* shows is
 that you didn't understand the expression to begin with.

 But yeah I guess it was too broken in gcc after all, so that's why it had to 
 be killed altogether.
 NO NO NO NO. No a f*cking thousand times. It's not too broken in
 gcc. It's too broken in the source code, and the fact that you don't
 even understand that is sad. You wrote the code, and you seem to be
 unable to admit that *your* code was buggy.

 It's not a compiler bug. It's your bug. Stand up like a man, instead
 of trying to flail around and blame anything else but yourself.

 So guys, get your act together, and stop blaming the compiler already.
I never denied that my original code contained bugs, which is why I wrote that
fix. I just don't believe gcc will ever be smart enough to determine that
ww_ctx is a non-null argument in all calls to __ww_mutex_lock, and then
determine for that reason that ww_ctx != NULL would be an invariant.

I would love for a compiler to become that smart though, but I do not think 
it's likely.

But hey it was a bug, my code was buggy and I helped by suggesting how to write 
the correct fix.

~Maarten

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
maarten.lankho...@canonical.com wrote:

 I would love for a compiler to become that smart though, but I do not think 
 it's likely.

Dammit, even if that is true, then write the conditional *correctly*.

As mentioned, the conditional

__builtin_constant_p(ww_ctx) && ww_ctx == NULL

is actually sensible, in a way the original one was *not*. It actually
tests what you apparently intended to test, and is more readable to
humans to boot.

And no, it still isn't actually guaranteed to do what you want it to
do. Historically, in gcc, __builtin_constant_p() really only ever
worked in macros, because by the time you use it in inline functions,
a constant NULL in the caller will have been turned into an argument
variable in the inline function, and __builtin_constant_p() would be
done before that was optimized away. Over the years, gcc has pushed
some of the builtin evaluation deeper down, and these days it actually
works within inline functions, but my point that
__builtin_constant_p() is about a certain level of compiler
optimization is very much true: you're actually testing for a compiler
optimization detail.

I know the LLVM people had similar issues with this comparison, so
these days it's not even just about gcc versions. We may never have
cared very much about icc, but llvm is actually an interesting target
compiler.

Linus


Re: [GIT PULL] locking fix

2013-10-27 Thread Maarten Lankhorst
On 27-10-13 20:51, Linus Torvalds wrote:
 On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 I would love for a compiler to become that smart though, but I do not think 
 it's likely.
 Dammit, even if that is true, then write the conditional *correctly*.

 As mentioned, the conditional

 __builtin_constant_p(ww_ctx) && ww_ctx == NULL

 is actually sensible, in a way the original one was *not*. It actually
 tests what you apparently intended to test, and is more readable to
 humans to boot.
Yeah, that mail arrived after I sent mine; I agree that this would have been
more sensible.
 And no, it still isn't actually guaranteed to do what you want it to
 do. Historically, in gcc, __builtin_constant_p() really only ever
 worked in macros, because by the time you use it in inline functions,
 a constant NULL in the caller will have been turned into an argument
 variable in the inline function, and __builtin_constant_p() would be
 done before that was optimized away. Over the years, gcc has pushed
 some of the builtin evaluation deeper down, and these days it actually
 works within inline functions, but my point that
 __builtin_constant_p() is about a certain level of compiler
 optimization is very much true: you're actually testing for a compiler
 optimization detail.

 I know the LLVM people had similar issues with this comparison, so
 these days it's not even just about gcc versions. We may never have
 cared very much about icc, but llvm is actually an interesting target
 compiler.

And this is why ww_ctx == NULL is now passed as an inline argument. :)

~Maarten


Re: [GIT PULL] locking fix

2013-10-27 Thread Linus Torvalds
On Sun, Oct 27, 2013 at 12:56 PM, Maarten Lankhorst
maarten.lankho...@canonical.com wrote:

 And this is why ww_ctx == NULL is now passed as an inline argument. :)

Well, more than that - the optimization has been done at the source
code level, so the behavior is no longer a matter of how well the
compiler optimizes it.

I'm not complaining about the fix. I'm complaining about how the fix
was claimed to be due to a compiler bug. The documentation for the
fix (ie the commit message) was actively misleading.

   Linus


[GIT PULL] locking fix

2013-10-26 Thread Ingo Molnar
Linus,

Please pull the latest core-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 
core-urgent-for-linus

   # HEAD: b0267507dfd0187fb7840a0ec461a510a7f041c5 mutex: Avoid gcc version 
dependent __builtin_constant_p() usage

This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels
built with GCC 3.x. (There are still such distros.)

 Thanks,

Ingo

-->
Tetsuo Handa (1):
  mutex: Avoid gcc version dependent __builtin_constant_p() usage


 kernel/mutex.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/kernel/mutex.c b/kernel/mutex.c
index 6d647ae..d24105b 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -410,7 +410,7 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 static __always_inline int __sched
 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
-   struct ww_acquire_ctx *ww_ctx)
+   struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
struct task_struct *task = current;
struct mutex_waiter waiter;
@@ -450,7 +450,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
struct task_struct *owner;
struct mspin_node  node;
 
-   if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+   if (use_ww_ctx && ww_ctx->acquired > 0) {
struct ww_mutex *ww;
 
ww = container_of(lock, struct ww_mutex, base);
@@ -480,7 +480,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 if ((atomic_read(&lock->count) == 1) &&
     (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
 lock_acquired(&lock->dep_map, ip);
-   if (!__builtin_constant_p(ww_ctx == NULL)) {
+   if (use_ww_ctx) {
struct ww_mutex *ww;
ww = container_of(lock, struct ww_mutex, base);
 
@@ -551,7 +551,7 @@ slowpath:
goto err;
}
 
-   if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+   if (use_ww_ctx && ww_ctx->acquired > 0) {
ret = __mutex_lock_check_stamp(lock, ww_ctx);
if (ret)
goto err;
@@ -575,7 +575,7 @@ skip_wait:
 lock_acquired(&lock->dep_map, ip);
mutex_set_owner(lock);
 
-   if (!__builtin_constant_p(ww_ctx == NULL)) {
+   if (use_ww_ctx) {
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct mutex_waiter *cur;
 
@@ -615,7 +615,7 @@ mutex_lock_nested(struct mutex *lock, unsigned int subclass)
 {
might_sleep();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
-   subclass, NULL, _RET_IP_, NULL);
+   subclass, NULL, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_nested);
@@ -625,7 +625,7 @@ _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
 {
might_sleep();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
-   0, nest, _RET_IP_, NULL);
+   0, nest, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
@@ -635,7 +635,7 @@ mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
 {
might_sleep();
return __mutex_lock_common(lock, TASK_KILLABLE,
-  subclass, NULL, _RET_IP_, NULL);
+  subclass, NULL, _RET_IP_, NULL, 0);
 }
 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
 
@@ -644,7 +644,7 @@ mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass)
 {
might_sleep();
return __mutex_lock_common(lock, TASK_INTERRUPTIBLE,
-  subclass, NULL, _RET_IP_, NULL);
+  subclass, NULL, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
@@ -682,7 +682,7 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
might_sleep();
 ret =  __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
-  0, &ctx->dep_map, _RET_IP_, ctx);
+  0, &ctx->dep_map, _RET_IP_, ctx, 1);
if (!ret && ctx->acquired > 1)
return ww_mutex_deadlock_injection(lock, ctx);
 
@@ -697,7 +697,7 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
might_sleep();
 ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
- 0, &ctx->dep_map, _RET_IP_, ctx);
+ 0, &ctx->dep_map, _RET_IP_, ctx, 1);
 
if 
