On 10/24/2025 11:52 AM, Marco Crivellari wrote:
> Currently if a user enqueue a work item using schedule_delayed_work() the
> used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use
> WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to
> schedule_work() that is using system_wq and queue_work(), that makes use
> again of WORK_CPU_UNBOUND.
>
> This lack of consistentcy cannot be addressed without refactoring the API.
>
> system_wq should be the per-cpu workqueue, yet in this name nothing makes
> that clear, so replace system_wq with system_percpu_wq.
>
> The old wq (system_wq) will be kept for a few release cycles.
>
> Suggested-by: Tejun Heo <[email protected]>
> Signed-off-by: Marco Crivellari <[email protected]>
> ---
>  drivers/accel/ivpu/ivpu_hw_btrs.c | 2 +-
>  drivers/accel/ivpu/ivpu_ipc.c     | 2 +-
>  drivers/accel/ivpu/ivpu_job.c     | 2 +-
>  drivers/accel/ivpu/ivpu_mmu.c     | 2 +-
>  drivers/accel/ivpu/ivpu_pm.c      | 2 +-
>  5 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw_btrs.c
> index afdb3b2aa72a..27a345f3befe 100644
> --- a/drivers/accel/ivpu/ivpu_hw_btrs.c
> +++ b/drivers/accel/ivpu/ivpu_hw_btrs.c
> @@ -673,7 +673,7 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq)
>
>  	if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) {
>  		ivpu_dbg(vdev, IRQ, "Survivability IRQ\n");
> -		queue_work(system_wq, &vdev->irq_dct_work);
> +		queue_work(system_percpu_wq, &vdev->irq_dct_work);
>  	}
>
>  	if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) {
> diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
> index 5f00809d448a..1f13bf95b2b3 100644
> --- a/drivers/accel/ivpu/ivpu_ipc.c
> +++ b/drivers/accel/ivpu/ivpu_ipc.c
> @@ -459,7 +459,7 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev)
>  		}
>  	}
>
> -	queue_work(system_wq, &vdev->irq_ipc_work);
> +	queue_work(system_percpu_wq, &vdev->irq_ipc_work);
>  }
>
>  void ivpu_ipc_irq_work_fn(struct work_struct *work)
> diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
> index 060f1fc031d3..7a1f78b84b09 100644
> --- a/drivers/accel/ivpu/ivpu_job.c
> +++ b/drivers/accel/ivpu/ivpu_job.c
> @@ -574,7 +574,7 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
>  	 * status and ensure both are handled in the same way
>  	 */
>  	job->file_priv->has_mmu_faults = true;
> -	queue_work(system_wq, &vdev->context_abort_work);
> +	queue_work(system_percpu_wq, &vdev->context_abort_work);
>  	return 0;
>  }
>
> diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
> index 5ea010568faa..e1baf6b64935 100644
> --- a/drivers/accel/ivpu/ivpu_mmu.c
> +++ b/drivers/accel/ivpu/ivpu_mmu.c
> @@ -970,7 +970,7 @@ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
>  		}
>  	}
>
> -	queue_work(system_wq, &vdev->context_abort_work);
> +	queue_work(system_percpu_wq, &vdev->context_abort_work);
>  }
>
>  void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)
> diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
> index ffa2ba7cafe2..0cff8f808429 100644
> --- a/drivers/accel/ivpu/ivpu_pm.c
> +++ b/drivers/accel/ivpu/ivpu_pm.c
> @@ -226,7 +226,7 @@ void ivpu_start_job_timeout_detection(struct ivpu_device *vdev)
>  	unsigned long timeout_ms = ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : vdev->timeout.tdr;
>
>  	/* No-op if already queued */
> -	queue_delayed_work(system_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms));
> +	queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms));

Thanks for the patch. Please fix the checkpatch warning:

WARNING: line length of 104 exceeds 100 columns
#90: FILE: drivers/accel/ivpu/ivpu_pm.c:229:
+	queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms));
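One way to keep that call under the limit, for example, is to break after the second argument and align the continuation with the open parenthesis. Untested, just a sketch of the wrapping; the identifiers are all from the patch:

	queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work,
			   msecs_to_jiffies(timeout_ms));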
>  }
>
>  void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev)

Also there's a typo "consistentcy" -> "consistency" that can be fixed together with that warning.

Tested-by: Karol Wachowski <[email protected]>
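As an aside for anyone skimming the thread, the mismatch the commit message describes is easy to see in the wrappers in include/linux/workqueue.h, which look roughly like this (quoted from memory, so treat it as a sketch rather than the exact current code):

static inline bool queue_work(struct workqueue_struct *wq, struct work_struct *work)
{
	/* No CPU specified; the caller only picks the workqueue */
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}

static inline bool schedule_work(struct work_struct *work)
{
	/* Hard-codes system_wq, whose name does not say it is per-cpu */
	return queue_work(system_wq, work);
}

That is why renaming system_wq to system_percpu_wq makes the pairing with WORK_CPU_UNBOUND easier to follow.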
