This is already done for us internally by the signal machinery.
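
For reference, the branch hint already lives in the signal helpers themselves. Roughly (a sketch paraphrased from include/linux/sched/signal.h; the exact form may differ between trees):

	static inline int signal_pending(struct task_struct *p)
	{
		/* the unlikely() annotation is already applied here */
		return unlikely(test_tsk_thread_flag(p, TIF_SIGPENDING));
	}

	static inline int signal_pending_state(long state, struct task_struct *p)
	{
		if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
			return 0;
		if (!signal_pending(p))		/* already hinted unlikely */
			return 0;

		return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
	}

so wrapping the caller's check in another unlikely() adds nothing.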

Cc: pet...@infradead.org
Cc: mi...@kernel.org
Signed-off-by: Davidlohr Bueso <d...@stgolabs.net>
---
 kernel/locking/mutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 3f8a35104285..db578783dd36 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -987,7 +987,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
                 * wait_lock. This ensures the lock cancellation is ordered
                 * against mutex_unlock() and wake-ups do not go missing.
                 */
-               if (unlikely(signal_pending_state(state, current))) {
+               if (signal_pending_state(state, current)) {
                        ret = -EINTR;
                        goto err;
                }
-- 
2.16.4
