On 2/3/26 10:27 PM, Chen Ridong wrote:

On 2026/2/3 4:11, Waiman Long wrote:
The update_isolation_cpumasks() function can be called either directly
from a regular cpuset control file write with cpuset_full_lock() called,
or via the CPU hotplug path with cpus_write_lock and cpuset_mutex held.

As we are going to enable dynamic update of the nohz_full housekeeping
cpumask (HK_TYPE_KERNEL_NOISE) soon with the help of CPU hotplug,
allowing the CPU hotplug path to call into housekeeping_update() directly
from update_isolation_cpumasks() will likely cause a deadlock. So we
have to defer any call to housekeeping_update() until after the CPU
hotplug operation has finished. This is now done via a workqueue, where
the actual housekeeping_update() call, if needed, will happen after
cpus_write_lock is released.
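
For readers unfamiliar with it, this is the standard kernel deferred-work
idiom: the hotplug path only marks the update as pending and queues a work
item while the hotplug locks are held, and the work function later takes
the needed locks itself and performs the real update. A minimal sketch of
that idiom follows; the names (example_lock, example_update_pending,
example_workfn, example_defer_update) are placeholders for illustration,
not the identifiers used in this patch:

        #include <linux/workqueue.h>
        #include <linux/mutex.h>

        static DEFINE_MUTEX(example_lock);      /* stand-in for cpuset_mutex */
        static bool example_update_pending;     /* stand-in for isolated_cpus_updating */

        /* Runs later from system_unbound_wq, after the hotplug locks are gone. */
        static void example_workfn(struct work_struct *work)
        {
                mutex_lock(&example_lock);
                if (example_update_pending) {
                        example_update_pending = false;
                        /* ... perform the expensive update here ... */
                }
                mutex_unlock(&example_lock);
        }

        static DECLARE_WORK(example_work, example_workfn);

        /* Called with the hotplug locks held; must not do the update directly. */
        static void example_defer_update(void)
        {
                example_update_pending = true;
                /* A still-pending work item is not requeued (WORK_STRUCT_PENDING_BIT). */
                queue_work(system_unbound_wq, &example_work);
        }

The key property relied upon is that queue_work() on an already-pending
work item is a no-op, so repeated hotplug events collapse into a single
deferred update.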

We can't use the synchronous task_work API as calls from the CPU hotplug
path happen in the per-cpu kthread of the CPU that is being shut down
or brought up. Because of the asynchronous nature of the workqueue, the
HK_TYPE_DOMAIN housekeeping cpumask will be updated a bit later than the
"cpuset.cpus.isolated" control file in this case.

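That asynchrony means there is a short window where the two views can
disagree: "cpuset.cpus.isolated" is updated immediately, while the
HK_TYPE_DOMAIN mask catches up once the work item runs. If a caller ever
needed to wait for the deferred update, flush_work() would provide that;
the helper below is purely illustrative and assumes the work item were
visible at file scope, which it is not in this patch:

        #include <linux/workqueue.h>
        #include <linux/cpumask.h>
        #include <linux/printk.h>
        #include <linux/sched/isolation.h>

        /* Hypothetical helper: wait until a deferred isolation update has landed. */
        static void example_wait_for_isolation_update(struct work_struct *isol_work)
        {
                /* flush_work() sleeps until the last queued instance has completed. */
                flush_work(isol_work);

                /* After the flush, HK_TYPE_DOMAIN reflects the latest isolated_cpus. */
                pr_info("housekeeping domain CPUs: %*pbl\n",
                        cpumask_pr_args(housekeeping_cpumask(HK_TYPE_DOMAIN)));
        }
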
Also add a check in test_cpuset_prs.sh and modify some existing
test cases to confirm that both "cpuset.cpus.isolated" and the
HK_TYPE_DOMAIN housekeeping cpumask are updated.

Signed-off-by: Waiman Long <[email protected]>
---
  kernel/cgroup/cpuset.c                        | 37 +++++++++++++++++--
  .../selftests/cgroup/test_cpuset_prs.sh       | 13 +++++--
  2 files changed, 44 insertions(+), 6 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d705c5ba64a7..e98a2e953392 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1302,6 +1302,17 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
        return false;
  }
+static void isolcpus_workfn(struct work_struct *work)
+{
+       cpuset_full_lock();
+       if (isolated_cpus_updating) {
+               isolated_cpus_updating = false;
+               WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
+               rebuild_sched_domains_locked();
+       }
+       cpuset_full_unlock();
+}
+
  /*
   * update_isolation_cpumasks - Update external isolation related CPU masks
   *
@@ -1310,14 +1321,34 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
   */
  static void update_isolation_cpumasks(void)
  {
-       int ret;
+       static DECLARE_WORK(isolcpus_work, isolcpus_workfn);
        if (!isolated_cpus_updating)
                return;
-       ret = housekeeping_update(isolated_cpus);
-       WARN_ON_ONCE(ret < 0);
+       /*
+        * This function can be reached either directly from regular cpuset
+        * control file write or via CPU hotplug. In the latter case, it is
+        * the per-cpu kthread that calls cpuset_handle_hotplug() on behalf
+        * of the task that initiates CPU shutdown or bringup.
+        *
+        * To have better flexibility and prevent the possibility of deadlock
+        * when calling from CPU hotplug, we defer the housekeeping_update()
+        * call to after the current cpuset critical section has finished.
+        * This is done via workqueue.
+        */
+       if (current->flags & PF_KTHREAD) {
+               /*
+                * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work
+                * item that is still pending.
+                */
+               queue_work(system_unbound_wq, &isolcpus_work);
+               /* Also defer sched domains regeneration to the work function */
+               force_sd_rebuild = false;

Eh, looking at the call path:

cpuset_hotplug_update_tasks
        update_parent_effective_cpumask
                update_isolation_cpumasks
                force_sd_rebuild = false;
        cpuset_force_rebuild();

Setting force_sd_rebuild to false here might be redundant, given that
cpuset_force_rebuild() is called immediately afterward.

Thanks for spotting that. I will try to address this.

Thanks,
Longman

