commit f6ff4f67cdf8455d0a4226eeeaf5af17c37d05eb upstream.
It causes oopses:
BUG: unable to handle kernel NULL pointer dereference at 0008
IP: [] radeon_fence_ref+0xd/0x50 [radeon]
Signed-off-by: Jiri Slaby <jsl...@suse.cz>
Cc: Nicolai Hähnle <nicolai.haeh...@amd.com>
Cc: Chr
On 09.03.2016 08:56, Luis Henriques wrote:
On Mon, Mar 07, 2016 at 02:58:51PM -0800, Greg Kroah-Hartman wrote:
On Mon, Mar 07, 2016 at 10:06:47PM +0100, Christian König wrote:
Am 07.03.2016 um 21:46 schrieb Greg Kroah-Hartman:
On Sun, Mar 06, 2016 at 07:50:14PM -0700, Erik Andersen wrote:
ed to ensure that nobody frees the
fences from under us.
Based on the analogous fix for amdgpu.
Signed-off-by: Nicolai Hähnle <nicolai.haeh...@amd.com>
Reviewed-by: Christian König <christian.koe...@amd.com>
Signed-off-by: Kamal Mostafa <ka...@canonical.com>
---
drivers/gpu/drm/ra
On 29.07.2016 20:38, Deucher, Alexander wrote:
-Original Message-
From: Sean Paul [mailto:seanp...@google.com]
Sent: Friday, July 29, 2016 3:35 PM
To: Wei Yongjun
Cc: Deucher, Alexander; Koenig, Christian; Dave Airlie; Jiang, Sonny; Liu, Leo;
Nath, Arindam; Zhou, David(ChunMing); Zhou,
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Fix a race condition involving 4 threads and 2 ww_mutexes as indicated in
the following example. Acquire context stamps are ordered like the thread
numbers, i.e. thread #1 should back off when it encounters a mutex locked
by thread #0 etc.
Thr
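The back-off rule quoted above (acquire contexts are stamped in order; when a thread finds a mutex held by a context with an earlier stamp, the later-stamped thread yields) can be sketched as a tiny decision helper. This is illustrative only: `struct ctx` and `must_back_off` are hypothetical stand-ins for the kernel's `struct ww_acquire_ctx` machinery, and a plain `>` is used instead of the kernel's wraparound-safe stamp comparison.

```c
#include <stdbool.h>

/* Hypothetical stand-in for struct ww_acquire_ctx: only the stamp matters. */
struct ctx {
	unsigned long stamp;
};

/* A lower stamp means the context started acquiring earlier ("older").
 * When a younger context encounters a mutex held by an older one, it
 * must back off (drop the locks it holds and retry) to avoid deadlock. */
static bool must_back_off(const struct ctx *me, const struct ctx *holder)
{
	return me->stamp > holder->stamp;
}
```

So in the example above, thread #1 (stamp 1) backs off when it hits a mutex held by thread #0 (stamp 0), but not the other way around.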
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haeh...@amd.com>
---
Documentation/locking/00-INDEX | 2 +-
1 file ch
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haeh...@amd.com>
---
include/linux/ww_mutex.h | 2 +-
1 file changed, 1
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
When ww_mutex_set_context_slowpath runs, we are in one of two situations:
1. The current task was woken up by ww_mutex_unlock.
2. The current task is racing with ww_mutex_unlock: We entered the slow
path while lock->base.count <= 0,
On 24.11.2016 12:56, Peter Zijlstra wrote:
On Thu, Nov 24, 2016 at 12:52:25PM +0100, Daniel Vetter wrote:
On Thu, Nov 24, 2016 at 12:40 PM, Peter Zijlstra wrote:
I do believe we can win a bit by keeping the wait list sorted, if we also
make sure that waiters don't add
On 23.11.2016 15:25, Daniel Vetter wrote:
On Wed, Nov 23, 2016 at 03:03:36PM +0100, Peter Zijlstra wrote:
On Wed, Nov 23, 2016 at 12:25:22PM +0100, Nicolai Hähnle wrote:
@@ -473,7 +476,14 @@ void __sched ww_mutex_unlock(struct ww_mutex *lock)
*/
mutex_clear_owner(&lock->base);
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Lock stealing is less beneficial for w/w mutexes since we may just end up
backing off if we stole from a thread with an earlier acquire stamp that
already holds another w/w mutex that we also need. So don't spin
optimistically unless we ar
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Help catch cases where mutex_lock is used directly on w/w mutexes, which
otherwise result in the w/w tasks reading uninitialized data.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Ma
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
While adding our task as a waiter, detect if another task should back off
because of us.
With this patch, we establish the invariant that the wait list contains
at most one (sleeping) waiter with ww_ctx->acquired > 0, and
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.
The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to ba
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Add regular waiters in stamp order. Keep adding waiters that have no
context in FIFO order and take care not to starve them.
While adding our task as a waiter, back off if we detect that there is a
waiter with a lower stamp in front of us.
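The insertion policy described above can be sketched with a small doubly-linked list. This is a simplified model, not the kernel code: `struct waiter` and `add_waiter` are hypothetical stand-ins for `struct mutex_waiter` and the patch's `__ww_mutex_add_waiter`. Waiters without a context are appended in FIFO order; a waiter with a context scans backwards from the tail and stops at the first waiter that has no context (so those are never starved) or an older (lower) stamp.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative wait-list node; only the fields the policy needs. */
struct waiter {
	struct waiter *prev, *next;
	bool has_ctx;
	unsigned long stamp;
};

struct wait_list {
	struct waiter head;	/* sentinel node */
};

static void wait_list_init(struct wait_list *l)
{
	l->head.prev = l->head.next = &l->head;
}

static void insert_after(struct waiter *pos, struct waiter *w)
{
	w->prev = pos;
	w->next = pos->next;
	pos->next->prev = w;
	pos->next = w;
}

static void add_waiter(struct wait_list *l, struct waiter *w)
{
	struct waiter *cur = l->head.prev;	/* start at the tail */

	if (w->has_ctx) {
		/* Walk backwards past waiters with a newer (higher) stamp;
		 * stop at a context-less waiter or an older stamp. */
		while (cur != &l->head && cur->has_ctx && cur->stamp > w->stamp)
			cur = cur->prev;
	}
	insert_after(cur, w);
}
```

With this sketch, adding stamps 5, 3, a context-less waiter, then 4 yields the order 3, 5, context-less, 4: stamp 4 sorts after 3 and 5 but does not jump ahead of the context-less waiter.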
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffwll.ch>
Cc: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: dri-de
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Check the current owner's context once against our stamp. If our stamp is
lower, we continue to spin optimistically instead of backing off.
This is correct with respect to deadlock detection because while the
(owner, ww_ctx) pair may re-
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
In the following scenario, thread #1 should back off its attempt to lock
ww1 and unlock ww2 (assuming the acquire context stamps are ordered
accordingly).
Thread #0 Thr
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The function will be re-used in subsequent patches.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffwll.ch>
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
We will add a new field to struct mutex_waiter. This field must be
initialized for all waiters if any waiter uses the ww_use_ctx path.
So there is a trade-off: Keep ww_mutex locking without a context on the
faster non-use_ww_ctx path, at th
It turns out that the deadlock that I found last week was already implicitly
fixed during the lock->owner redesign, by checking the WAITERS bit in the
w/w lock fast path. However, since I had already started looking into
sorting the wait list, here goes.
The basic idea is to make sure that:
1.
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Document the invariants we maintain for the wait list of ww_mutexes.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffw
On 06.12.2016 16:36, Peter Zijlstra wrote:
On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
+static inline int __sched
+__ww_mutex_add_waiter(struct mutex_waiter *waiter,
+		      struct mutex *lock,
+		      struct ww_acquire_ctx *ww_ctx
On 06.12.2016 16:25, Peter Zijlstra wrote:
On Thu, Dec 01, 2016 at 03:06:47PM +0100, Nicolai Hähnle wrote:
@@ -640,10 +640,11 @@ __mutex_lock_common(struct mutex *lock, long state,
unsigned int subclass,
struct mutex_waiter waiter;
unsigned long flags;
bool first
On 01.12.2016 16:59, Chris Wilson wrote:
On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
@@ -677,15 +722,25 @@ __mutex_lock_common(struct mutex *lock, long state,
unsigned int subclass,
debug_mutex_lock_common(lock, &waiter);
debug_mutex_add_waiter(lock, &waiter, task
Hi Peter and Chris,
(trying to combine the handoff discussion here)
On 06.12.2016 17:55, Peter Zijlstra wrote:
On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
@@ -693,8 +748,12 @@ __mutex_lock_common(struct mutex *lock, long state,
unsigned int subclass
On 16.12.2016 18:20, Peter Zijlstra wrote:
On Fri, Dec 16, 2016 at 03:19:43PM +0100, Nicolai Hähnle wrote:
@@ -716,7 +775,20 @@ __mutex_lock_common(struct mutex *lock, long state,
unsigned int subclass,
spin_unlock_mutex(&lock->wait_lock, fl
Changes to patches 1 & 5 based on feedback. I've also updated the branch
at https://cgit.freedesktop.org/~nh/linux/log/?h=mutex.
There's been the question of using a balanced tree rather than a list.
Frankly, I'd say the 99% use case doesn't need it. Also, dealing with
waiters without a context
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Lock stealing is less beneficial for w/w mutexes since we may just end up
backing off if we stole from a thread with an earlier acquire stamp that
already holds another w/w mutex that we also need. So don't spin
optimistically unless we ar
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
While adding our task as a waiter, detect if another task should back off
because of us.
With this patch, we establish the invariant that the wait list contains
at most one (sleeping) waiter with ww_ctx->acquired > 0, and
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
In the following scenario, thread #1 should back off its attempt to lock
ww1 and unlock ww2 (assuming the acquire context stamps are ordered
accordingly).
Thread #0 Thr
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
We will add a new field to struct mutex_waiter. This field must be
initialized for all waiters if any waiter uses the ww_use_ctx path.
So there is a trade-off: Keep ww_mutex locking without a context on the
faster non-use_ww_ctx path, at th
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.
The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to ba
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Check the current owner's context once against our stamp. If our stamp is
lower, we continue to spin optimistically instead of backing off.
This is correct with respect to deadlock detection because while the
(owner, ww_ctx) pair may re-
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Add regular waiters in stamp order. Keep adding waiters that have no
context in FIFO order and take care not to starve them.
While adding our task as a waiter, back off if we detect that there is a
waiter with a lower stamp in front of us.
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Document the invariants we maintain for the wait list of ww_mutexes.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffw
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Help catch cases where mutex_lock is used directly on w/w mutexes, which
otherwise result in the w/w tasks reading uninitialized data.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Ma
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The function will be re-used in subsequent patches.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffwll.ch>
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
v2: use resv->lock instead of resv->lock.base (Christian König)
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan.
On 30.11.2016 01:35, Chris Wilson wrote:
Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Nicolai Hähnle <nhaeh...@gmail.com>
---
kernel/locking/Makefile|
On 30.11.2016 10:40, Chris Wilson wrote:
On Mon, Nov 28, 2016 at 01:20:01PM +0100, Nicolai Hähnle wrote:
I've included timings taken from a contention-heavy stress test to some of
the patches. The stress test performs actual GPU operations which take a
good chunk of the wall time, but even so
On 30.11.2016 13:20, Chris Wilson wrote:
On Wed, Nov 30, 2016 at 12:52:28PM +0100, Nicolai Hähnle wrote:
On 30.11.2016 10:40, Chris Wilson wrote:
On Mon, Nov 28, 2016 at 01:20:01PM +0100, Nicolai Hähnle wrote:
I've included timings taken from a contention-heavy stress test to some
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The function will be re-used in subsequent patches.
v3: rename to __ww_ctx_stamp_after (Chris Wilson)
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
v2: use resv->lock instead of resv->lock.base (Christian König)
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan.
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Keep the documentation in the header file since there is no good
place for it in mutex.c: there are two rather different
implementations with different EXPORT_SYMBOLs for each function.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ing
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Document the invariants we maintain for the wait list of ww_mutexes.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Maarten Lankhorst <d...@mblankhorst.nl>
Cc: Daniel Vetter <dan...@ffw
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
In the following scenario, thread #1 should back off its attempt to lock
ww1 and unlock ww2 (assuming the acquire context stamps are ordered
accordingly).
Thread #0 Thr
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Lock stealing is less beneficial for w/w mutexes since we may just end up
backing off if we stole from a thread with an earlier acquire stamp that
already holds another w/w mutex that we also need. So don't spin
optimistically unless we ar
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
While adding our task as a waiter, detect if another task should back off
because of us.
With this patch, we establish the invariant that the wait list contains
at most one (sleeping) waiter with ww_ctx->acquired > 0, and
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.
The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to ba
Here's a v3 of the series. Some comments:
Patch #1 is already in drm-misc, but I left it here for now for completeness.
Patch #2 is new and affects all types of locks, not just the w/w case. It's
a race that is exceedingly unlikely: basically, we have to be interrupted
right between checking our
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
There's a possible race where the waiter in front of us leaves the wait list
due to a signal, and the current owner subsequently hands the lock off to us
even though we never observed ourselves at the front of the list.
Set the task state
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
We will add a new field to struct mutex_waiter. This field must be
initialized for all waiters if any waiter uses the ww_use_ctx path.
So there is a trade-off: Keep ww_mutex locking without a context on the
faster non-use_ww_ctx path, at th
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Add regular waiters in stamp order. Keep adding waiters that have no
context in FIFO order and take care not to starve them.
While adding our task as a waiter, back off if we detect that there is a
waiter with a lower stamp in front of us.
From: Nicolai Hähnle <nicolai.haeh...@amd.com>
Help catch cases where mutex_lock is used directly on w/w mutexes, which
otherwise result in the w/w tasks reading uninitialized data.
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Ma
On 23.12.2016 11:48, Peter Zijlstra wrote:
On Wed, Dec 21, 2016 at 07:46:33PM +0100, Nicolai Hähnle wrote:
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index a5960e5..b2eaaab 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -186,11 +186,6 @@ static
On 22.12.2016 02:58, zhoucm1 wrote:
On 22.12.2016 02:46, Nicolai Hähnle wrote:
+static inline bool __sched
+__ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+{
+	return a->stamp - b->stamp <= LONG_MAX &&
+	       (a->stamp != b->stamp ||
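The comparison in the excerpt above relies on unsigned wraparound: `a->stamp - b->stamp <= LONG_MAX` is true exactly when `a` was stamped after `b`, even if the stamp counter has since wrapped around. A standalone sketch of just that idea follows; `struct acquire_ctx` is a stand-in for the kernel's `ww_acquire_ctx`, and the elided tie-break tail of the original expression is omitted here.

```c
#include <limits.h>
#include <stdbool.h>

/* Stand-in for struct ww_acquire_ctx: only the stamp matters here. */
struct acquire_ctx {
	unsigned long stamp;
};

/* True if a was stamped after b. Unsigned subtraction wraps modulo
 * ULONG_MAX + 1, so a recent stamp still compares as newer even when
 * the counter has wrapped: stamp 1 minus stamp ULONG_MAX yields 2,
 * which is <= LONG_MAX, while the reverse difference is huge. */
static bool stamp_after(const struct acquire_ctx *a,
			const struct acquire_ctx *b)
{
	return a->stamp - b->stamp <= LONG_MAX && a->stamp != b->stamp;
}
```

The extra `a->stamp != b->stamp` term is needed because a zero difference also satisfies `<= LONG_MAX`.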
On 16.12.2016 21:00, Peter Zijlstra wrote:
On Fri, Dec 16, 2016 at 07:11:41PM +0100, Nicolai Hähnle wrote:
mutex_optimistic_spin() already calls __mutex_trylock, and for the no-spin
case, __mutex_unlock_slowpath() only calls wake_up_q() after releasing the
wait_lock.
mutex_optimistic_spin
On 16.12.2016 18:15, Peter Zijlstra wrote:
On Fri, Dec 16, 2016 at 03:19:43PM +0100, Nicolai Hähnle wrote:
The concern about picking up a handoff that we didn't request is real,
though it cannot happen in the first iteration. Perhaps this __mutex_trylock
can be moved to the end of the loop? See
On 16.11.2017 21:57, Dave Airlie wrote:
On 16 November 2017 at 14:59, Linus Torvalds
wrote:
On Wed, Nov 15, 2017 at 6:34 PM, Dave Airlie wrote:
There is some code touched on sound/soc, but I think the sound tree
should have the same commits
On 17.11.2017 20:18, Christian König wrote:
The obvious alternative which we are working on for a few years now is
to improve the input data we get from the hardware people.
In other words instead of getting a flat list of registers we want the
information about where and how many times a
On 30.01.2018 12:34, Michel Dänzer wrote:
On 2018-01-30 12:28 PM, Christian König wrote:
Am 30.01.2018 um 12:02 schrieb Michel Dänzer:
On 2018-01-30 11:40 AM, Christian König wrote:
Am 30.01.2018 um 10:43 schrieb Michel Dänzer:
[SNIP]
Would it be ok to hang onto potentially arbitrary mmget
On 30.01.2018 11:48, Michel Dänzer wrote:
On 2018-01-30 11:42 AM, Daniel Vetter wrote:
On Tue, Jan 30, 2018 at 10:43:10AM +0100, Michel Dänzer wrote:
On 2018-01-30 10:31 AM, Daniel Vetter wrote:
I guess a good first order approximation would be if we simply charge any
newly allocated buffers
From: Nicolai Hähnle
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
include/linux/ww_mutex.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 760399a
From: Nicolai Hähnle
When ww_mutex_set_context_slowpath runs, we are in one of two situations:
1. The current task was woken up by ww_mutex_unlock.
2. The current task is racing with ww_mutex_unlock: We entered the slow
path while lock->base.count <= 0, but skipped th
From: Nicolai Hähnle
Fix a race condition involving 4 threads and 2 ww_mutexes as indicated in
the following example. Acquire context stamps are ordered like the thread
numbers, i.e. thread #1 should back off when it encounters a mutex locked
by thread #0 etc.
Thread #0    Thread #1    Thread
From: Nicolai Hähnle
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
Documentation/locking/00-INDEX | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/locking/00-INDEX b/Documentation/locking/00-INDEX
From: Nicolai Hähnle
Add regular waiters in stamp order. Keep adding waiters that have no
context in FIFO order and take care not to starve them.
While adding our task as a waiter, back off if we detect that there is a
waiter with a lower stamp in front of us.
Make sure to call lock_contended
From: Nicolai Hähnle
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
drivers/gpu/drm/vgem/vgem_fence.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
From: Nicolai Hähnle
Check the current owner's context once against our stamp. If our stamp is
lower, we continue to spin optimistically instead of backing off.
This is correct with respect to deadlock detection because while the
(owner, ww_ctx) pair may re-appear if the owner task manages
From: Nicolai Hähnle
In the following scenario, thread #1 should back off its attempt to lock
ww1 and unlock ww2 (assuming the acquire context stamps are ordered
accordingly).
Thread #0 Thread #1
- -
successfully
From: Nicolai Hähnle
The function will be re-used in subsequent patches.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
kernel/locking/mutex.c | 10 --
1 file
From: Nicolai Hähnle
We will add a new field to struct mutex_waiter. This field must be
initialized for all waiters if any waiter uses the ww_use_ctx path.
So there is a trade-off: Keep ww_mutex locking without a context on the
faster non-use_ww_ctx path, at the cost of adding
From: Nicolai Hähnle
Document the invariants we maintain for the wait list of ww_mutexes.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
Documentation/locking/ww-mutex
From: Nicolai Hähnle
Lock stealing is less beneficial for w/w mutexes since we may just end up
backing off if we stole from a thread with an earlier acquire stamp that
already holds another w/w mutex that we also need. So don't spin
optimistically unless we are sure that there is no other waiter
From: Nicolai Hähnle
Help catch cases where mutex_lock is used directly on w/w mutexes, which
otherwise result in the w/w tasks reading uninitialized data.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-de...@lists.freedesktop.org
Signed
From: Nicolai Hähnle
While adding our task as a waiter, detect if another task should back off
because of us.
With this patch, we establish the invariant that the wait list contains
at most one (sleeping) waiter with ww_ctx->acquired > 0, and this waiter
will be the first waiter with a c
From: Nicolai Hähnle
The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.
The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to back off were either woken up when
From: Nicolai Hähnle
Keep the documentation in the header file since there is no good
place for it in mutex.c: there are two rather different
implementations with different EXPORT_SYMBOLs for each function.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris