Now that mark_lock() is going to handle multiple softirq vectors at
once for a given lock usage, the current fast path optimization that
simply checks whether the new usage bits are already present won't work
anymore.
Indeed if the new usage is only partially present, such as for some
softirq vectors and not for others, we may spuriously ignore all the
verifications for the new vectors.

What we must check instead is a bit different: we have to make sure
that the new usage, with all its vectors, is entirely present in the
current usage mask before skipping further checks.

Reviewed-by: David S. Miller <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Pavan Kondeti <[email protected]>
Cc: Paul E . McKenney <[email protected]>
Cc: David S . Miller <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
 kernel/locking/lockdep.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0988de06a7ed..9a5f2dbc3812 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3159,7 +3159,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	 * If already set then do not dirty the cacheline,
 	 * nor do any checks:
 	 */
-	if (likely(hlock_class(this)->usage_mask & new_mask))
+	if (likely(!(new_mask & ~hlock_class(this)->usage_mask)))
 		return 1;
 
 	if (!graph_lock())
@@ -3167,7 +3167,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	/*
 	 * Make sure we didn't race:
 	 */
-	if (unlikely(hlock_class(this)->usage_mask & new_mask)) {
+	if (unlikely(!(new_mask & ~hlock_class(this)->usage_mask))) {
 		graph_unlock();
 		return 1;
 	}
-- 
2.21.0
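
For illustration only (not part of the patch), here is a minimal
standalone C sketch contrasting the two tests. The mask values are made
up, and usage_mask/new_mask merely mirror the names used in the patch:

	#include <stdio.h>

	int main(void)
	{
		unsigned long usage_mask = 0x5;	/* vectors 0 and 2 already recorded */
		unsigned long new_mask   = 0x7;	/* new usage: vectors 0, 1 and 2 */

		/*
		 * Old fast path test: true as soon as any bit of new_mask
		 * is already set, so the checks for vector 1 would be
		 * spuriously skipped here.
		 */
		printf("old: %d\n", !!(usage_mask & new_mask));	/* prints 1 */

		/*
		 * New fast path test: true only if every bit of new_mask
		 * is already set in usage_mask, i.e. new_mask is a subset
		 * of usage_mask.
		 */
		printf("new: %d\n", !(new_mask & ~usage_mask));	/* prints 0 */

		return 0;
	}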

