Re: [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
On Thu, Feb 22, 2018 at 06:29:06PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 03:08:54PM +0800, Boqun Feng wrote:
> > As we have four kinds of dependencies now, check_redundant() should only
> > report redundant if we have a dependency path which is equal or
> > _stronger_ than the current dependency. For example if in
> > check_prev_add() we have:
> > 
> > 	prev->read == 2 && next->read != 2
> > 
> > , we should only report redundant if we find a path like:
> > 
> > 	prev--(RN)-->--(*N)-->next
> > 
> > and if we have:
> > 
> > 	prev->read == 2 && next->read == 2
> > 
> > , we could report redundant if we find a path like:
> > 
> > 	prev--(RN)-->--(*N)-->next
> > 
> > or
> > 
> > 	prev--(RN)-->--(*R)-->next
> > 
> > To do so, we need to pass the recursive-read status of @next into
> > check_redundant().
> 
> Very hard to read that.

Right.. and I found a bug related to this, let me explain below..

> > Signed-off-by: Boqun Feng
> > ---
> >  kernel/locking/lockdep.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index e1be088a34c4..0b0ad3db78b4 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
> >  	return 0;
> >  }
> >  
> > -static inline int class_equal(struct lock_list *entry, void *data)
> > +static inline int hlock_equal(struct lock_list *entry, void *data)
> >  {
> > -	return entry->class == data;
> > +	struct held_lock *hlock = (struct held_lock *)data;
> > +
> > +	return hlock_class(hlock) == entry->class &&
> > +	       (hlock->read == 2 || !entry->is_rr);
> >  }
> 
> So I guess @data = @next, and we're checking if @prev -> @next already
> exists.
> 
> Since we only care about forward dependencies, @next->read == 2 means *R
> and if so, any existing link is equal or stronger. If @next->read != 2, it
> means *N and we must regard *R as weaker and not match.
Yep, the idea is: if we find a path @prev -> .. -> @next that is RN, and
@prev -> @next is RR, then we treat RR as weaker and redundant.
Basically, if we find a strong path that could replace the about-to-add
dependency (i.e. the path is stronger than or equal to the dependency),
we report redundant (a match).

But I missed something here: as you may see, both the start and the end
of the path are important. So say we find a strong path RN, but the
about-to-add dependency is NR; we cannot report it as redundant, because
the path cannot replace the dependency.

To make sure we find a path whose start point is stronger than @prev, we
need a trick: we should initialize the ->only_xr (or ->have_xr) of the
root (start point) of __bfs() to be @prev->read != 2, therefore if @prev
is N, __bfs() will only pick N* for the first dependency, otherwise,
__bfs() can pick N* or R* for the first dependency.

I use a similar setup before check_noncircular(), which sets the initial
->only_xr to be @next->read == 2, because we need @prev -> @next -> ..
to be strong. But I should use an opposite setup for check_redundant(),
as I describe above.

Anyway, I will fix this and provide (hopefully) enough comments for
those tricks.

Regards,
Boqun

> OK, that seems to be fine, but again, that function _really_ could do
> with a comment.
Re: [RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
On Thu, Feb 22, 2018 at 03:08:54PM +0800, Boqun Feng wrote:
> As we have four kinds of dependencies now, check_redundant() should only
> report redundant if we have a dependency path which is equal or
> _stronger_ than the current dependency. For example if in
> check_prev_add() we have:
> 
> 	prev->read == 2 && next->read != 2
> 
> , we should only report redundant if we find a path like:
> 
> 	prev--(RN)-->--(*N)-->next
> 
> and if we have:
> 
> 	prev->read == 2 && next->read == 2
> 
> , we could report redundant if we find a path like:
> 
> 	prev--(RN)-->--(*N)-->next
> 
> or
> 
> 	prev--(RN)-->--(*R)-->next
> 
> To do so, we need to pass the recursive-read status of @next into
> check_redundant().

Very hard to read that.

> Signed-off-by: Boqun Feng
> ---
>  kernel/locking/lockdep.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index e1be088a34c4..0b0ad3db78b4 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
>  	return 0;
>  }
>  
> -static inline int class_equal(struct lock_list *entry, void *data)
> +static inline int hlock_equal(struct lock_list *entry, void *data)
>  {
> -	return entry->class == data;
> +	struct held_lock *hlock = (struct held_lock *)data;
> +
> +	return hlock_class(hlock) == entry->class &&
> +	       (hlock->read == 2 || !entry->is_rr);
>  }

So I guess @data = @next, and we're checking if @prev -> @next already
exists.

Since we only care about forward dependencies, @next->read == 2 means *R
and if so, any existing link is equal or stronger. If @next->read != 2, it
means *N and we must regard *R as weaker and not match.

OK, that seems to be fine, but again, that function _really_ could do
with a comment.
[RFC tip/locking/lockdep v5 07/17] lockdep: Adjust check_redundant() for recursive read change
As we have four kinds of dependencies now, check_redundant() should only
report redundant if we have a dependency path which is equal or
_stronger_ than the current dependency. For example if in
check_prev_add() we have:

	prev->read == 2 && next->read != 2

, we should only report redundant if we find a path like:

	prev--(RN)-->--(*N)-->next

and if we have:

	prev->read == 2 && next->read == 2

, we could report redundant if we find a path like:

	prev--(RN)-->--(*N)-->next

or

	prev--(RN)-->--(*R)-->next

To do so, we need to pass the recursive-read status of @next into
check_redundant(). This patch changes the parameter of check_redundant()
and the match function to achieve this.

Signed-off-by: Boqun Feng
---
 kernel/locking/lockdep.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e1be088a34c4..0b0ad3db78b4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1338,9 +1338,12 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
 	return 0;
 }
 
-static inline int class_equal(struct lock_list *entry, void *data)
+static inline int hlock_equal(struct lock_list *entry, void *data)
 {
-	return entry->class == data;
+	struct held_lock *hlock = (struct held_lock *)data;
+
+	return hlock_class(hlock) == entry->class &&
+	       (hlock->read == 2 || !entry->is_rr);
 }
 
 static inline int hlock_conflict(struct lock_list *entry, void *data)
@@ -1480,14 +1483,14 @@ check_noncircular(struct lock_list *root, struct held_lock *target,
 }
 
 static noinline enum bfs_result
-check_redundant(struct lock_list *root, struct lock_class *target,
+check_redundant(struct lock_list *root, struct held_lock *target,
 		struct lock_list **target_entry)
 {
 	enum bfs_result result;
 
 	debug_atomic_inc(nr_redundant_checks);
 
-	result = __bfs_forwards(root, target, class_equal, target_entry);
+	result = __bfs_forwards(root, target, hlock_equal, target_entry);
 
 	return result;
 }
@@ -2060,7 +2063,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 * Is the <prev> -> <next> link redundant?
 	 */
 	bfs_init_root(&this, prev);
-	ret = check_redundant(&this, hlock_class(next), &target_entry);
+	ret = check_redundant(&this, next, &target_entry);
 	if (ret == BFS_RMATCH) {
 		debug_atomic_inc(nr_redundant);
 		return 2;
-- 
2.16.1