ecursive)
new:
LOCK_* : stands for non-recursive (write lock and non-recursive
read lock)
LOCK_*_RR: stands for recursive read lock
Such a change is needed for a future improvement on recursive read
related irq inversion deadlock detection.
Signed-off-by: Boqun Feng
---
Document
ks, four kinds of dependencies could all exist
between them, so we use 4 bits for the presence of each kind (stored in
lock_list::dep). Helper functions and macros are also introduced to
convert a pair of locks into ::dep bit and maintain the addition of
different kinds of dependencies.
Signed-off-by:
e
bit: we now mark lock_class::lockdep_dependency_gen_id to indicate _all
the dependencies_ in its lock_{after,before} have been visited in the
__bfs() (note we only take one direction in a __bfs() search). In this
way, every dependency is guaranteed to be visited until we find a match.
Signed-off-by: Boqun Feng
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 136 ---
1 file changed, 80 insertions(+), 56
Hi Ingo and Peter,
This is V4 for recursive read lock support in lockdep. I added the
explanation of the reasoning in patch #16 and would like to check that
with you first in case I made fundamental mistakes.
Changes since V3:
* Reduce the unnecessary cost in structure lock_list as suggested
On Wed, Jan 03, 2018 at 03:04:36PM +0530, afzal mohammed wrote:
> Let PDF & HTML's be created out of memory-barriers Text by
> reStructuring.
>
> reStructuring done were,
> 1. Section headers modification, lower header case except start
> 2. Removal of manual index(contents section), since it now
Hi Shoaib,
Good to see you send out a patchset ;-)
On Tue, Jan 02, 2018 at 02:49:25PM -0800, Rao Shoaib wrote:
>
>
> On 01/02/2018 02:23 PM, Matthew Wilcox wrote:
> > On Tue, Jan 02, 2018 at 12:11:37PM -0800, rao.sho...@oracle.com wrote:
> > > -#define kfree_rcu(ptr, rcu_head)
On Thu, Dec 07, 2017 at 06:56:17AM -0800, Paul E. McKenney wrote:
> On Thu, Dec 07, 2017 at 03:03:50PM +0800, Boqun Feng wrote:
> > Hi Paul,
> >
> > On Wed, Dec 06, 2017 at 02:04:21PM -0800, Paul E. McKenney wrote:
> > > On Tue, Dec 05, 2017 at 03:37:44PM -0800, Paul
Hi Paul,
On Wed, Dec 06, 2017 at 02:04:21PM -0800, Paul E. McKenney wrote:
> On Tue, Dec 05, 2017 at 03:37:44PM -0800, Paul E. McKenney wrote:
> > On Mon, Dec 04, 2017 at 09:42:08AM -0800, Paul E. McKenney wrote:
> > > On Fri, Dec 01, 2017 at 10:25:29AM -0800, Paul E. McKenney wrote:
> > > > Hello
On Thu, Nov 30, 2017 at 10:46:22AM -0500, Alan Stern wrote:
> On Thu, 30 Nov 2017, Boqun Feng wrote:
>
> > On Wed, Nov 29, 2017 at 02:44:37PM -0500, Alan Stern wrote:
> > > On Wed, 29 Nov 2017, Daniel Lustig wrote:
> > >
> > > > While we're
On Wed, Nov 29, 2017 at 02:44:37PM -0500, Alan Stern wrote:
> On Wed, 29 Nov 2017, Daniel Lustig wrote:
>
> > While we're here, let me ask about another test which isn't directly
> > about unlock/lock but which is still somewhat related to this
> > discussion:
> >
> > "MP+wmb+xchg-acq" (or some s
for direct testing. As with herd7, the klitmus7
> > > > > code is freely available from
> > > > > http://diy.inria.fr/sources/index.html
> > > > > (and via "git" at https://github.com/herd/herdtools7).
> > > > >
> &
On Thu, Nov 16, 2017 at 01:31:21AM +, Daniel Lustig wrote:
> > -Original Message-
> > From: Boqun Feng [mailto:boqun.f...@gmail.com]
> > Sent: Wednesday, November 15, 2017 5:19 PM
> > To: Daniel Lustig
> > Cc: Palmer Dabbelt ; will.dea...@arm.com; Arnd
On Wed, Nov 15, 2017 at 11:59:44PM +, Daniel Lustig wrote:
> > On Wed, 15 Nov 2017 10:06:01 PST (-0800), will.dea...@arm.com wrote:
> >> On Tue, Nov 14, 2017 at 12:30:59PM -0800, Palmer Dabbelt wrote:
> >> > On Tue, 24 Oct 2017 07:10:33 PDT (-0700), will.dea...@arm.com wrote:
> >> >>On Tue, Sep
On Tue, Nov 07, 2017 at 10:59:29AM -0700, Andreas Dilger wrote:
> On Nov 7, 2017, at 4:59 AM, Jan Kara wrote:
> > On Mon 06-11-17 10:47:08, Davidlohr Bueso wrote:
> >> + /*
> >> + * Serialize dlist->used_lists such that a 0->1 transition is not
> >> + * missed by another thread checking if an
On Tue, Nov 07, 2017 at 02:40:37AM +, Mathieu Desnoyers wrote:
> - On Nov 6, 2017, at 9:07 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Mon, Nov 06, 2017 at 03:56:38PM -0500, Mathieu Desnoyers wrote:
> > [...]
> >> +static int cpu_op_pin_pages(unsigned l
On Mon, Nov 06, 2017 at 03:56:38PM -0500, Mathieu Desnoyers wrote:
[...]
> +static int cpu_op_pin_pages(unsigned long addr, unsigned long len,
> + struct page ***pinned_pages_ptr, size_t *nr_pinned,
> + int write)
> +{
> + struct page *pages[2];
> + int ret, nr_pages
On Mon, Nov 06, 2017 at 03:56:31PM -0500, Mathieu Desnoyers wrote:
[...]
> +
> +/*
> + * struct rseq is aligned on 4 * 8 bytes to ensure it is always
> + * contained within a single cache-line.
> + *
> + * A single struct rseq per thread is allowed.
> + */
> +struct rseq {
> + /*
> + * Res
ate the use-after-unlock problem.
> >
Better than what I proposed, thanks for looking into this!
> > Reported-by: Boqun Feng
> > Signed-off-by: Waiman Long
>
> Looks good to me. You can add:
>
> Rev
On Thu, Oct 26, 2017 at 02:28:55PM -0400, Waiman Long wrote:
> On 10/05/2017 02:43 PM, Waiman Long wrote:
> >
> > This is a follow up of the following patchset:
> >
> > [PATCH v7 0/4] vfs: Use per-cpu list for SB's s_inodes list
> > https://lkml.org/lkml/2016/4/12/1009
> >
> > This patchset pro
On Thu, Oct 05, 2017 at 06:43:23PM +, Waiman Long wrote:
[...]
> +/*
> + * Find the first entry of the next available list.
> + */
> +extern struct dlock_list_node *
> +__dlock_list_next_list(struct dlock_list_iter *iter);
> +
> +/**
> + * __dlock_list_next_entry - Iterate to the next entry of
equence made of zero or more loads, one
> > speculative word-sized store, completed by a word-sized store with
> > release semantic,
> > - rseq_finish_memcpy():
> > End of restartable sequence made of zero or more loads, a
> > speculative copy of a variable length
On Wed, Oct 11, 2017 at 10:32:30PM +, Paul E. McKenney wrote:
> Hello!
>
> At Linux Plumbers Conference, we got requests for a recipes document,
> and a further request to point to actual code in the Linux kernel.
> I have pulled together some examples for various litmus-test families,
> as sh
On Thu, Oct 05, 2017 at 06:43:23PM +, Waiman Long wrote:
[...]
> +/*
> + * As all the locks in the dlock list are dynamically allocated, they need
> + * to belong to their own special lock class to avoid warning and stack
> + * trace in kernel log when lockdep is enabled. Statically allocated l
one with a pool flag. The only reason it became
> a mutex is that pool destruction path wants to exclude parallel
> managing operations.
>
> This patch replaces the mutex with a new pool flag POOL_MANAGER_ACTIVE
> and make the destruction path wait for the current manager on a wai
On Mon, Oct 09, 2017 at 09:40:43AM +, Lai Jiangshan wrote:
[...]
> > Reported-by: Josef Bacik
> > Signed-off-by: Boqun Feng
> > Cc: Peter Zijlstra
> > ---
> > kernel/workqueue.c | 35 ++-
> > 1 file changed, 34 insertions(
On Sun, Oct 08, 2017 at 07:03:47PM +, Tejun Heo wrote:
> Hello, Boqun.
>
Hi Tejun,
> On Sun, Oct 08, 2017 at 05:02:23PM +0800, Boqun Feng wrote:
> > Josef reported a HARDIRQ-safe -> HARDIRQ-unsafe lock order detected by
> > lockdep:
> >
> > | [ 1270.4722
vercome this, put the worker back to IDLE state before it drops
pool::lock in manage_workers(), and make the worker check again whether
it's DIE after it re-grabs the pool::lock. In this way, we fix the
potential deadlock reported by lockdep without introducing another.
Reported-by: Josef Bacik
zero to _QW_WAITING is left alone, since (a) this doesn't need acquire
> semantics and (b) should be fast.
>
> Cc: Peter Zijlstra
> Cc: Ingo Molnar
> Cc: Waiman Long
> Cc: Boqun Feng
> Cc: "Paul E. McKenney"
>
) is introduced,
> so that we know whether it's called from a context interrupting the
> kernel, and the parameter is set properly in all the callsites.
>
> Cc: "Paul E. McKenney"
> Cc: Peter Zijlstra
> Cc: Wanpeng Li
> Cc: sta...@vger.kernel.org
> Signed-o
oduced,
so that we know whether it's called from a context interrupting the
kernel, and we set that parameter properly in all the callsites.
Cc: "Paul E. McKenney"
Cc: Peter Zijlstra
Cc: Wanpeng Li
Signed-off-by: Boqun Feng
---
arch/x86/include/asm/kvm_para.h | 4 ++
On Mon, Oct 02, 2017 at 01:41:03PM +, Paolo Bonzini wrote:
[...]
> >
> > Wanpeng, the callsite of kvm_async_pf_task_wait() in
> > kvm_handle_page_fault() is for nested scenario, right? I take it we
> > should handle it as if the fault happens when l1 guest is running in
> > kernel mode, so @us
anpeng Li
[The explanation for async PF is contributed by Paolo Bonzini]
Signed-off-by: Boqun Feng
---
v1 --> v2:
* Add more accurate explanation of async PF from Paolo in the
commit message.
* Extend the kvm_async_pf_task_wait() to have a second parameter
@user to in
On Sat, Sep 30, 2017 at 05:15:15PM +, Paul E. McKenney wrote:
> On Sat, Sep 30, 2017 at 07:41:56AM +0800, Boqun Feng wrote:
> > On Fri, Sep 29, 2017 at 04:43:39PM +, Paul E. McKenney wrote:
> > > On Fri, Sep 29, 2017 at 04:53:57PM +0200, Paolo Bonzini wrote:
> >
On Fri, Sep 29, 2017 at 04:43:39PM +, Paul E. McKenney wrote:
> On Fri, Sep 29, 2017 at 04:53:57PM +0200, Paolo Bonzini wrote:
> > On 29/09/2017 13:01, Boqun Feng wrote:
> > > Sasha Levin reported a WARNING:
> > >
> > > | WARNING: CPU: 0 PID:
section.
Reported-by: Sasha Levin
Cc: "Paul E. McKenney"
Cc: Peter Zijlstra
Signed-off-by: Boqun Feng
---
arch/x86/kernel/kvm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index aa60a08b65b1..e675704fa6f7 100644
---
On Fri, Sep 29, 2017 at 10:01:24AM +, Paolo Bonzini wrote:
> On 29/09/2017 11:30, Boqun Feng wrote:
> > On Thu, Sep 28, 2017 at 04:05:14PM +, Paul E. McKenney wrote:
> > [...]
> >>> __schedule+0x201/0x2240 kernel/sched/core.c:3292
> >>> schedu
On Thu, Sep 28, 2017 at 04:05:14PM +, Paul E. McKenney wrote:
[...]
> > __schedule+0x201/0x2240 kernel/sched/core.c:3292
> > schedule+0x113/0x460 kernel/sched/core.c:3421
> > kvm_async_pf_task_wait+0x43f/0x940 arch/x86/kernel/kvm.c:158
>
> It is kvm_async_pf_task_wait() that calls schedule(
On Wed, Sep 27, 2017 at 01:31:45AM +, Byungchul Park wrote:
>
> Sometimes, it gives a wrong scenario. For example:
>
> lock target
> lock source
> lock parent
> lock target
> lock parent of parent
> lock paren
On Tue, Sep 19, 2017 at 12:52:06PM +, Boqun Feng wrote:
> For a potential deadlock about CROSSRELEASE as follow:
>
> P1 P2
> === =
> lock(A)
> lock(X)
> lock(A)
>
urrent check_usage() only checks 1) and 2), so this patch adds
checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
an irq-read-{,un}safe lock, the traversed path should end at a
dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
a real dependency --(N*)--
e chainkeys, the chain_hlocks
array now stores the "hlock_id"s rather than lock_class indexes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 60 ++--
1 file changed, 38 insertions(+), 22 deletions(-)
diff --git a/kernel/locking/lockdep.c
or
on detecting recursive read lock related deadlocks.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 47 +++
1 file changed, 47 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index cd0b5c964bd0..1f794bb441a9 100644
This reverts commit d82fed75294229abc9d757f08a4817febae6c4f4.
Since we can now handle mixed read-write deadlock detection correctly,
the deadlocks in the self tests are detected as expected, so there is no
need to use this work-around.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 6 --
1 file changed, 6
-(*R)-->next
To do so, we need to pass the recursive-read status of @next into
check_redundant(). This patch changes the parameter of check_redundant()
and the match function to achieve this.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 13 -
1 file changed, 8 insertions(+
not deadlock.
Those self testcases are valuable for developing support for
recursive read related deadlock detection.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 161 +
1 file changed, 161 insertions(+)
diff --git a/lib/locking
Since we have all the fundamentals to handle recursive read locks, we now
add them into the dependency graph.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 16 +++-
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking
Now that we can handle recursive read related irq inversion deadlocks
correctly, uncomment irq_read_recursion2 and add more testcases.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 59 --
1 file changed, 47 insertions(+), 12
-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9e7647e40918..a68f7df8adc5 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1342,6
if our current tail is --(*R)--> and 2) greedily pick a
--(*N)--> as hard as possible.
With this extension for __bfs(), we now need to initialize the root of
__bfs() properly (with a correct ->is_rr). To do so, we introduce some
helper functions, which also clean up a little bit for the __b
e
bit: we now mark lock_class::lockdep_dependency_gen_id to indicate _all
the dependencies_ in its lock_{after,before} have been visited in the
__bfs() (note we only take one direction in a __bfs() search). In this
way, every dependency is guaranteed to be visited until we find a match.
Signed-off-by: Boqun Feng
ks, four kinds of dependencies could all exist
between them, so we use 4 bits for the presence of each kind (stored in
lock_list::dep). Helper functions and macros are also introduced to
convert a pair of locks into ::dep bit and maintain the addition of
different kinds of dependencies.
Signed-off-by:
Hi Ingo and Peter,
This is V3 for recursive read lock support in lockdep.
Changes since V2:
* Add one revert patch for commit d82fed752942
("locking/lockdep/selftests: Fix mixed read-write ABBA tests"),
since we could handle recursive read lock correctly, so we don't
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 134 ---
1 file changed, 79 insertions(+), 55
ecursive)
new:
LOCK_* : stands for non-recursive (write lock and non-recursive
read lock)
LOCK_*_RR: stands for recursive read lock
Such a change is needed for a future improvement on recursive read
related irq inversion deadlock detection.
Signed-off-by: Boqun Feng
---
Document
On Sun, Sep 24, 2017 at 02:23:04PM +, Mathieu Desnoyers wrote:
[...]
> >>
> >> copy_mm() is performed without holding current->sighand->siglock, so
> >> it appears to be racing with concurrent membarrier register cmd.
> >
> > Speak of racing, I think we currently have a problem if we do a
> >
On Fri, Sep 22, 2017 at 03:10:10PM +, Mathieu Desnoyers wrote:
> - On Sep 22, 2017, at 4:59 AM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> > [...]
> >> +static inline void membarrier_arch_
On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
[...]
> +static inline void membarrier_arch_sched_in(struct task_struct *prev,
> + struct task_struct *next)
> +{
> + /*
> + * Only need the full barrier when switching between processes.
> + */
> + if
On Fri, Sep 22, 2017 at 10:24:41AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
>
> > The idea is in membarrier_private_expedited(), we go through all ->curr
> > on each CPU and
> >
> > 1) If it's a userspace
On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
> Hi Mathieu,
>
> On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> > Provide a new command allowing processes to register their intent to use
> > the private expedited command.
> >
> >
arch-specific membarrier_arch_sched_in(). This fixes allnoconfig
> build on PowerPC.
> - Move asm/membarrier.h include under CONFIG_MEMBARRIER, fixing
> allnoconfig build on PowerPC.
> - Build and runtime tested on PowerPC.
>
> Signed-off-by: Mathieu Desnoyers
> CC: Pe
On Tue, Sep 19, 2017 at 08:52:06PM +0800, Boqun Feng wrote:
> For a potential deadlock about CROSSRELEASE as follow:
>
> P1 P2
> === =
> lock(A)
> lock(X)
> lock(A)
>
it's better to print a proper scenario related to CROSSRELEASE to help
users find their bugs more easily, so improve this.
Cc: "Paul E. McKenney"
Cc: Byungchul Park
Cc: Steven Rostedt
Signed-off-by: Boqun Feng
---
The sample of print_circular_lock_scenario() is from Paul McKenney.
ker
On Mon, Sep 18, 2017 at 09:04:56PM -0700, Paul E. McKenney wrote:
> On Tue, Sep 19, 2017 at 11:48:22AM +0900, Byungchul Park wrote:
> > On Mon, Sep 18, 2017 at 07:33:29PM -0700, Paul E. McKenney wrote:
> > > > > Hello Paul and Steven,
> > > > >
So I think this is another false positive, and the r
On Mon, Sep 18, 2017 at 07:25:48AM -0700, Paul E. McKenney wrote:
> On Mon, Sep 18, 2017 at 03:52:42PM +0800, Boqun Feng wrote:
> > On Sun, Sep 17, 2017 at 04:05:09PM -0700, Paul E. McKenney wrote:
> > > Hello!
> > >
> >
> > Hi Paul,
> >
> > &g
On Sun, Sep 17, 2017 at 04:05:09PM -0700, Paul E. McKenney wrote:
> Hello!
>
Hi Paul,
> The topic of memory-ordering recipes came up at the Linux Plumbers
> Conference microconference on Friday, so I thought that I should summarize
> what is currently "out there":
>
> 1.memory-barriers.txt:
On Thu, Sep 07, 2017 at 09:28:48AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 07, 2017 at 11:34:12AM +0530, Prateek Sood wrote:
> > Remove circular dependency deadlock in a scenario where hotplug of CPU is
> > being done while there is updation in cgroup and cpuset triggered from
> > userspace.
> >
> > requeue.
> >
> > If I got anything wrong, feel free to educate me by adding comments to
> > clarify things ;-)
> >
> > Cc: Alan Stern
> > Cc: Will Deacon
> > Cc: Ming Lei
> > Cc: Christoph Hellwig
> > Cc: Jens Axboe
> > Cc: Andrea
On Wed, Sep 06, 2017 at 04:28:11PM +0800, Boqun Feng wrote:
> Hi Ingo and Peter,
>
> This is V2 for recursive read lock support in lockdep. I fix several
> bugs in V1 and also add irq inversion detection support for recursive
> read locks.
>
> V1: https://marc.i
Since we have all the fundamentals to handle recursive read locks, we now
add them into the dependency graph.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 16 +++-
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking
the chain_hlocks
now stores the "hlock_id"s rather than lock_class indexes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 60 ++--
1 file changed, 38 insertions(+), 22 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/lockin
Now that we can handle recursive read related irq inversion deadlocks
correctly, uncomment irq_read_recursion2 and add more testcases.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 59 --
1 file changed, 47 insertions(+), 12
development of support for
recursive read related deadlock detection.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 161 +
1 file changed, 161 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index cbdcec6a776e
or
on detecting recursive read lock related deadlocks.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 47 +++
1 file changed, 47 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index cd0b5c964bd0..1f794bb441a9 100644
urrent check_usage() only checks 1) and 2), so this patch adds
checks for 3) and 4) and makes sure when find_usage_{back,for}wards find
an irq-read-{,un}safe lock, the traversed path should end at a
dependency --(*N)-->. Note when we search backwards, --(*N)--> indicates
a real dependency --(N*)--
-(*R)-->next
To do so, we need to pass the recursive-read status of next into
check_redundant(). This patch changes the parameter of check_redundant()
and the match function to achieve this.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 13 -
1 file changed, 8 insertions(+
her add
necessary changes in next versions or leave those as TODOs)
Such a change is needed for a future improvement on recursive read
related irq inversion deadlock detection.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --
our current tail is --(*R)--> and 2) greedily pick a
--(*N)--> as hard as possible.
With this extension for __bfs(), we now only need to initialize the root
of __bfs() properly (with a correct ->is_rr). To do so, we introduce some
helper functions, which also clean up a little bit for t
-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d9959f25247a..8a09b1a02342 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1345,6
pair of two locks, four kinds of dependencies could all exist
between them, so we use 4 bits for the presence of each kind (stored in
lock_list::dep).
Signed-off-by: Boqun Feng
---
include/linux/lockdep.h | 2 ++
kernel/locking/lockdep.c | 46 +++
it, we now mark lock_class::lockdep_dependency_gen_id to indicate all
the dependencies in its lock_{after,before} have been visited in the
__bfs() (note we only take one direction in a __bfs() search). In this
way, each dependency is guaranteed to be visited until we find a match.
Signed-off-by: Boqun Feng
---
ker
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 134 ---
1 file changed, 79 insertions(+), 55
Hi Ingo and Peter,
This is V2 for recursive read lock support in lockdep. I fix several
bugs in V1 and also add irq inversion detection support for recursive
read locks.
V1: https://marc.info/?l=linux-kernel&m=150393341825453
As Peter pointed out:
https://marc.info/?l=linux-kernel&m=15
On Wed, Sep 06, 2017 at 08:52:35AM +0900, Byungchul Park wrote:
> On Tue, Sep 05, 2017 at 03:46:43PM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 05, 2017 at 07:58:38PM +0900, Byungchul Park wrote:
> > > On Tue, Sep 05, 2017 at 07:31:44PM +0900, Byungchul Park wrote:
> > > > Recursive-read and the
fits the conceptual
semantics we have been using, but also makes the implementation
requirement more accurate.
In the future, we can either make compiler writers accept our use of
'volatile', or (if that fails) find another way to provide this
guarantee.
Cc: Akira Yokosawa
Cc: Paul E. M
0.12
> Non-registered processes: 2.73 0.08
> Registered processes: 3.07 0.02
>
> Changes since v1:
> - Add missing MEMBARRIER_CMD_REGISTER_SYNC_CORE header documentation,
> - Add benchmarks to commit message.
>
> Signed-off-by: Ma
Commit-ID: ec81048cc340bb03334e6ca62661ecc0a684897a
Gitweb: http://git.kernel.org/tip/ec81048cc340bb03334e6ca62661ecc0a684897a
Author: Boqun Feng
AuthorDate: Wed, 23 Aug 2017 23:25:38 +0800
Committer: Ingo Molnar
CommitDate: Tue, 29 Aug 2017 15:14:38 +0200
sched/completion: Avoid
Commit-ID: 1c322ac06d9af7ea259098ae5dc977855207d335
Gitweb: http://git.kernel.org/tip/1c322ac06d9af7ea259098ae5dc977855207d335
Author: Boqun Feng
AuthorDate: Thu, 24 Aug 2017 22:22:36 +0800
Committer: Ingo Molnar
CommitDate: Tue, 29 Aug 2017 15:14:38 +0200
acpi/nfit: Fix
the chain_hlocks
now stores the "hlock_id"s rather than lock_class indexes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 60 ++--
1 file changed, 38 insertions(+), 22 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/lockin
--> B and B-->C). In other
words, a lock cannot be the transfer station if it only has *->R
dependencies with previous locks and R->* dependencies with following
locks.
If we can still find a cycle under this rule, a deadlock is reported.
Signed-off-by: Boqun Feng
---
include/linux/loc
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 134 ---
1 file changed, 79 insertions(+), 55
or
on detecting recursive read lock related deadlocks.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 47 +++
1 file changed, 47 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 3c7151a6cd98..747a5379aeee 100644
development of support for
recursive read related deadlock detection.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 161 +
1 file changed, 161 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 747a5379aeee
(Resend because getting weird reject, Sorry)
Hi Ingo and Peter,
As Peter pointed out:
https://marc.info/?l=linux-kernel&m=150349072023540
Lockdep currently has limited support for recursive read locks; the
deadlock case as follows could not be detected:
read_lock(A);
the chain_hlocks
now stores the "hlock_id"s rather than lock_class indexes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 60 ++--
1 file changed, 38 insertions(+), 22 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/lockin
development of support for
recursive read related deadlock detection.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 161 +
1 file changed, 161 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 747a5379aeee
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 134 ---
1 file changed, 79 insertions(+), 55
--> B and B-->C). In other
words, a lock cannot be the transfer station if it only has *->R
dependencies with previous locks and R->* dependencies with following
locks.
If we can still find a cycle under this rule, a deadlock is reported.
Signed-off-by: Boqun Feng
---
include/linux/loc
or
on detecting recursive read lock related deadlocks.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 47 +++
1 file changed, 47 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 3c7151a6cd98..747a5379aeee 100644
Hi Ingo and Peter,
As Peter pointed out:
https://marc.info/?l=linux-kernel&m=150349072023540
Lockdep currently has limited support for recursive read locks; the
deadlock case as follows could not be detected:
read_lock(A);
lock(B);
lock(B