On 26.09.22 at 20:22, Peter Zijlstra wrote:
On Mon, Sep 26, 2022 at 08:06:46PM +0200, Peter Zijlstra wrote:

Let me go git-grep some to see if there's more similar fail.

I've ended up with the below...

Tested-by: Christian Borntraeger <borntrae...@de.ibm.com>

Kind of scary that nobody else has reported any regression. I guess the 
freezable variant is just not used widely.

---
  include/linux/wait.h | 2 +-
  kernel/hung_task.c   | 8 ++++++--
  kernel/sched/core.c  | 2 +-
  3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 14ad8a0e9fac..7f5a51aae0a7 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -281,7 +281,7 @@ static inline void wake_up_pollfree(struct wait_queue_head *wq_head)
 
 #define ___wait_is_interruptible(state)					\
 	(!__builtin_constant_p(state) ||				\
-		state == TASK_INTERRUPTIBLE || state == TASK_KILLABLE)	\
+	 (state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
 
 extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index f1321c03c32a..4a8a713fd67b 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -191,6 +191,8 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
        hung_task_show_lock = false;
        rcu_read_lock();
        for_each_process_thread(g, t) {
+               unsigned int state;
+
                if (!max_count--)
                        goto unlock;
                if (time_after(jiffies, last_break + HUNG_TASK_LOCK_BREAK)) {
@@ -198,8 +200,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
                                goto unlock;
                        last_break = jiffies;
                }
-               /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
-               if (READ_ONCE(t->__state) == TASK_UNINTERRUPTIBLE)
+               /* skip the TASK_KILLABLE tasks -- these can be killed */
+               state = READ_ONCE(t->__state);
+               if ((state & TASK_UNINTERRUPTIBLE) &&
+                   !(state & TASK_WAKEKILL))
                        check_hung_task(t, timeout);
        }
   unlock:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1095917ed048..12ee5b98e2c4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8885,7 +8885,7 @@ state_filter_match(unsigned long state_filter, struct task_struct *p)
         * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
         * TASK_KILLABLE).
         */
-       if (state_filter == TASK_UNINTERRUPTIBLE && state == TASK_IDLE)
+       if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
                return false;
        return true;
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization