Commit-ID: 7f56b58a92aaf2cab049f32a19af7cc57a3972f2
Gitweb: https://git.kernel.org/tip/7f56b58a92aaf2cab049f32a19af7cc57a3972f2
Author: Jason Low <jason.l...@hp.com>
AuthorDate: Thu, 26 Apr 2018 11:34:22 +0100
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 27 Apr 2018 09:48:49 +0200
locking/mcs: Use
On Wed, 2016-10-12 at 10:59 -0700, Davidlohr Bueso wrote:
> On Fri, 07 Oct 2016, Peter Zijlstra wrote:
> >+/*
> >+ * Optimistic trylock that only works in the uncontended case. Make sure to
> >+ * follow with a __mutex_trylock() before failing.
> >+ */
> >+static __always_inline bool
---
| 100 - 900   | 76,362 JPM | 76,298 JPM |
| 1000 - 1900 | 77,146 JPM | 76,061 JPM |
---
Tested-by: Jason Low <jason.l...@hpe.com>
On Wed, Oct 5, 2016 at 10:47 PM, Davidlohr Bueso wrote:
> On Wed, 05 Oct 2016, Waiman Long wrote:
>
>> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
>> index 05a3785..1e6823a 100644
>> --- a/kernel/locking/osq_lock.c
>> +++ b/kernel/locking/osq_lock.c
>> @@ -12,6 +12,23 @@
>>
On Tue, Oct 4, 2016 at 12:06 PM, Davidlohr Bueso wrote:
> On Thu, 18 Aug 2016, Waiman Long wrote:
>
>> The osq_lock() and osq_unlock() functions may not provide the necessary
>> acquire and release barriers in some cases. This patch makes sure
>> that the proper barriers are provided when
On Tue, 2016-08-23 at 09:35 -0700, Jason Low wrote:
> On Tue, 2016-08-23 at 09:17 -0700, Davidlohr Bueso wrote:
> > I have not looked at the patches yet, but are there any performance minutia
> > to be aware of?
>
> This would remove all of the mutex architecture s
On Tue, 2016-08-23 at 09:17 -0700, Davidlohr Bueso wrote:
> What's the motivation here? Is it just to unify counter and owner for
> the starvation issue? If so, is this really the path we wanna take for
> a small debug corner case?
And we thought our other patch was a bit invasive :-)
> I have
On Thu, 2016-08-18 at 17:39 -0700, Jason Low wrote:
> Imre reported an issue where threads are getting starved when trying
> to acquire a mutex. Threads acquiring a mutex can get arbitrarily delayed
> sleeping on a mutex because other threads can continually steal the lock
> in
On Thu, 2016-08-18 at 17:58 +0200, Peter Zijlstra wrote:
> On Thu, Aug 11, 2016 at 11:01:27AM -0400, Waiman Long wrote:
> > The following is the updated patch that should fix the build error in
> > non-x86 platform.
> >
>
> This patch was whitespace challenged, but I think I munged it properly.
for too long.
Reported-by: Imre Deak <imre.d...@intel.com>
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
include/linux/mutex.h | 2 +
kernel/locking/mutex.c | 122 +++--
2 files changed, 99 insertions(+), 25 deletions(-)
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
timistic_spin(struct mutex *lock,
>* Turn on the waiter spinning flag to discourage the spinner
>* from getting the lock.
Might want to update this comment to "Turn on the yield to waiter flag
to discourage optimistic spinners from stealing the lock."
Besides that:
Acked-by: Jason Low <jason.l...@hpe.com>
Hi Wanpeng,
On Wed, 2016-08-17 at 09:41 +0800, Wanpeng Li wrote:
> 2016-08-11 2:44 GMT+08:00 Jason Low <jason.l...@hpe.com>:
> > Imre reported an issue where threads are getting starved when trying
> > to acquire a mutex. Threads acquiring a mutex can get arbitrarily
;
> url:
> https://github.com/0day-ci/linux/commits/Jason-Low/locking-mutex-Prevent-lock-starvation-when-spinning-is-enabled/20160811-034327
> config: x86_64-randconfig-x013-201632 (attached as .config)
> compiler: gcc-6 (Debian 6.1.1-9) 6.1.1 20160705
> reproduce:
>
On Thu, 2016-08-11 at 11:40 -0400, Waiman Long wrote:
> On 08/10/2016 02:44 PM, Jason Low wrote:
> > +static inline void do_yield_to_waiter(struct mutex *lock, int *wakeups)
> > +{
> > + return;
> > +}
> > +
> > +static inline void clear_yield_to_waiter(st
On Wed, 2016-08-10 at 11:44 -0700, Jason Low wrote:
> @@ -917,11 +976,12 @@ EXPORT_SYMBOL(mutex_trylock);
> int __sched
> __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
> {
> - int ret;
> + int ret = 1;
>
> might_
On Wed, 2016-08-10 at 11:44 -0700, Jason Low wrote:
> Imre reported an issue where threads are getting starved when trying
> to acquire a mutex. Threads acquiring a mutex can get arbitrarily delayed
> sleeping on a mutex because other threads can continually steal the lock
> in
for too long.
Reported-by: Imre Deak <imre.d...@intel.com>
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
v1->v2:
- Addressed Waiman's suggestions of needing the yield_to_waiter
flag only in the CONFIG_SMP case.
- Make sure to only clear the flag if the thread is the top waiter.
- Refactor code to clear flag into an inline function
On Fri, 2016-07-22 at 12:34 +0300, Imre Deak wrote:
> On to, 2016-07-21 at 15:29 -0700, Jason Low wrote:
> > On Wed, 2016-07-20 at 14:37 -0400, Waiman Long wrote:
> > > On 07/20/2016 12:39 AM, Jason Low wrote:
> > > > On Tue, 2016-07-19 at 16:04 -0700, Jaso
On Wed, 2016-07-20 at 14:37 -0400, Waiman Long wrote:
> On 07/20/2016 12:39 AM, Jason Low wrote:
> > On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> >> Hi Imre,
> >>
> >> Here is a patch which prevents a thread from spending too
On Wed, 2016-07-20 at 16:29 +0300, Imre Deak wrote:
> On ti, 2016-07-19 at 21:39 -0700, Jason Low wrote:
> > On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> > > Hi Imre,
> > >
> > > Here is a patch which prevents a thread from spending too m
On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> Hi Imre,
>
> Here is a patch which prevents a thread from spending too much "time"
> waiting for a mutex in the !CONFIG_MUTEX_SPIN_ON_OWNER case.
>
> Would you like to try this out and see if this addresses the
s disabled?
Thanks.
---
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
include/linux/mutex.h | 2 ++
kernel/locking/mutex.c | 61 +-
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2cb7531..c1ca
On Tue, 2016-07-19 at 19:53 +0300, Imre Deak wrote:
> On ma, 2016-07-18 at 10:47 -0700, Jason Low wrote:
> > On Mon, 2016-07-18 at 19:15 +0200, Peter Zijlstra wrote:
> > > I think we went over this before, that will also completely destroy
> > > performance
n system
> performance.
>
> This patchset tries to address 2 issues with Peter's patch:
>
> 1) Ding Tianhong still finds that hanging tasks could happen in some cases.
> 2) Jason Low found that there was performance regression for some AIM7
> workloads.
>
> By making
On Mon, 2016-07-18 at 19:15 +0200, Peter Zijlstra wrote:
> On Mon, Jul 18, 2016 at 07:16:47PM +0300, Imre Deak wrote:
> > Currently a thread sleeping on a mutex wait queue can be delayed
> > indefinitely by other threads managing to steal the lock, that is
> > acquiring the lock out-of-order
Commit-ID: 8ee62b1870be8e630158701632a533d0378e15b8
Gitweb: http://git.kernel.org/tip/8ee62b1870be8e630158701632a533d0378e15b8
Author: Jason Low <jason.l...@hpe.com>
AuthorDate: Fri, 3 Jun 2016 22:26:02 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Wed, 8 Jun 2016 15:16:42 +0200
locking/rwsem: Convert sem
add,update} definitions across the various architectures.
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
arch/alpha/include/asm/rwsem.h | 26 +-
arch/ia64/include/asm/rwsem.h | 24
inclu
The rwsem-xadd count has been converted to an atomic variable and the
rwsem code now directly uses atomic_long_add() and
atomic_long_add_return(), so we can remove the arch implementations of
rwsem_atomic_add() and rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
riable to an atomic_long_t
since it is used as an atomic variable. This allows us to also remove
the rwsem_atomic_{add,update} abstraction and reduce 100+ lines of code.
Jason Low (2):
locking/rwsem: Convert sem->count to atomic_long_t
Remove rwsem_atomic_add() and rwsem_atomic_update()
arch/alph
On Sat, 2016-06-04 at 00:36 +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2016 at 11:09:54AM -0700, Jason Low wrote:
> > --- a/arch/alpha/include/asm/rwsem.h
> > +++ b/arch/alpha/include/asm/rwsem.h
> > @@ -25,8 +25,8 @@ static inline void __down_read(struct rw_semaphore *sem
On Fri, 2016-06-03 at 10:04 +0200, Ingo Molnar wrote:
> * Peter Zijlstra <pet...@infradead.org> wrote:
>
> > On Mon, May 16, 2016 at 06:12:25PM -0700, Linus Torvalds wrote:
> > > On Mon, May 16, 2016 at 5:37 PM, Jason Low <jason.l...@hpe.com> wrote:
> > > >
> > > > The rest of
Commit-ID: 6e2814745c67ab422b86262b05e6f23a56f28aa3
Gitweb: http://git.kernel.org/tip/6e2814745c67ab422b86262b05e6f23a56f28aa3
Author: Jason Low <jason.l...@hpe.com>
AuthorDate: Fri, 20 May 2016 15:19:36 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 3 Jun 2016 12:06:10 +0200
locking/mutex: Set and clear
Commit-ID: c0fcb6c2d332041256dc55d8a1ec3c0a2d0befb8
Gitweb: http://git.kernel.org/tip/c0fcb6c2d332041256dc55d8a1ec3c0a2d0befb8
Author: Jason Low <jason.l...@hpe.com>
AuthorDate: Mon, 16 May 2016 17:38:00 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 3 Jun 2016 09:47:13 +0200
locking/rwsem: Optimize write
and use a partially written owner value.
This is not necessary in the debug case where the owner gets
modified with the wait_lock held.
Signed-off-by: Jason Low <jason.l...@hpe.com>
Acked-by: Davidlohr Bueso <d...@stgolabs.net>
Acked-by: Waiman Long <waiman.l...@hpe.com>
---
kernel/locking/mutex-debug.h | 5 +
kernel/locking/mutex.h | 10
On Mon, 2016-05-23 at 14:31 -0700, Davidlohr Bueso wrote:
> On Mon, 23 May 2016, Jason Low wrote:
>
> >On Fri, 2016-05-20 at 18:00 -0700, Davidlohr Bueso wrote:
> >> On Fri, 20 May 2016, Waiman Long wrote:
> >>
> >> >I think mutex-debug.h a
On Fri, 2016-05-20 at 18:00 -0700, Davidlohr Bueso wrote:
> On Fri, 20 May 2016, Waiman Long wrote:
>
> >I think mutex-debug.h also needs similar changes for completeness.
>
> Maybe, but given that with debug the wait_lock is unavoidable, doesn't
> this send the wrong message?
The
On Sat, 2016-05-21 at 09:04 -0700, Peter Hurley wrote:
> On 05/18/2016 12:58 PM, Jason Low wrote:
> > It should be fine to use the standard READ_ONCE here, even if it's just
> > for documentation, as it's probably not going to cost anything in
> > practice. It would be bette
and use a partially written owner value.
Signed-off-by: Jason Low <jason.l...@hpe.com>
Acked-by: Davidlohr Bueso <d...@stgolabs.net>
---
kernel/locking/mutex-debug.h | 4 ++--
kernel/locking/mutex.h | 10 --
2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex
On Fri, 2016-05-20 at 16:27 -0400, Waiman Long wrote:
> On 05/19/2016 06:23 PM, Jason Low wrote:
> > The mutex owner can get read and written to without the wait_lock.
> > Use WRITE_ONCE when setting and clearing the owner field in order
> > to avoid optimizations
read and use a
partially written owner value.
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
kernel/locking/mutex.h | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 5cda397..469b61e 100644
--- a/kernel/locking/mutex.h
+++ b/kernel
On Wed, 2016-05-18 at 12:58 -0700, Jason Low wrote:
> On Wed, 2016-05-18 at 14:29 -0400, Waiman Long wrote:
> > On 05/18/2016 01:21 PM, Jason Low wrote:
> > > On Wed, 2016-05-18 at 07:04 -0700, Davidlohr Bueso wrote:
> > >> On Tue, 17 May 2016, Waiman Long wrote
On Wed, 2016-05-18 at 14:29 -0400, Waiman Long wrote:
> On 05/18/2016 01:21 PM, Jason Low wrote:
> > On Wed, 2016-05-18 at 07:04 -0700, Davidlohr Bueso wrote:
> >> On Tue, 17 May 2016, Waiman Long wrote:
> >>
> >>> Without using WRITE_ONCE(), the compi
READ_ONCE() may
> >not be needed for rwsem->owner as long as the value is only used for
> >comparison and not dereferencing.
> >
> >Signed-off-by: Waiman Long <waiman.l...@hpe.com>
>
> Yes, ->owner can obviously be handled locklessly during optimistic
> spinning.
>
> Acked-by: Davidlohr Bueso <d...@stgolabs.net>
Acked-by: Jason Low <jason.l...@hpe.com>
On Wed, 2016-05-18 at 07:04 -0700, Davidlohr Bueso wrote:
> On Tue, 17 May 2016, Waiman Long wrote:
>
> >Without using WRITE_ONCE(), the compiler can potentially break a
> >write into multiple smaller ones (store tearing). So a read from the
> >same data by another task concurrently may return a
On Tue, 2016-05-17 at 13:09 +0200, Peter Zijlstra wrote:
> On Mon, May 16, 2016 at 06:12:25PM -0700, Linus Torvalds wrote:
> > On Mon, May 16, 2016 at 5:37 PM, Jason Low <jason.l...@hpe.com> wrote:
> > >
> > > This rest of the series converts the rwsem count va
On Tue, 2016-05-17 at 13:09 +0200, Peter Zijlstra wrote:
> On Mon, May 16, 2016 at 06:12:25PM -0700, Linus Torvalds wrote:
> > On Mon, May 16, 2016 at 5:37 PM, Jason Low wrote:
> > >
> > > This rest of the series converts the rwsem count variable to an
> > >
The rwsem count has been converted to an atomic variable and we
now directly use atomic_long_add() and atomic_long_add_return()
on the count, so we can remove the asm-generic implementation of
rwsem_atomic_add() and rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
i
The rwsem count has been converted to an atomic variable and the rwsem
code now directly uses atomic_long_add() and atomic_long_add_return(),
so we can remove the s390 implementation of rwsem_atomic_add() and
rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
arc
The rwsem count has been converted to an atomic variable and the rwsem
code now directly uses atomic_long_add() and atomic_long_add_return(),
so we can remove the x86 implementation of rwsem_atomic_add() and
rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
ar
The rwsem count has been converted to an atomic variable and the rwsem
code now directly uses atomic_long_add() and atomic_long_add_return(),
so we can remove the alpha implementation of rwsem_atomic_add() and
rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
arch
The rwsem count has been converted to an atomic variable and the rwsem
code now directly uses atomic_long_add() and atomic_long_add_return(),
so we can remove the ia64 implementation of rwsem_atomic_add() and
rwsem_atomic_update().
Signed-off-by: Jason Low <jason.l...@hpe.com>
---
arc