On Wed, Dec 24, 2025 at 05:13:01PM +0100, Marco Crivellari wrote:
> This patch continues the effort to refactor the workqueue APIs, which began
> with the changes that introduced new workqueues and a new alloc_workqueue
> flag:
> 
>    commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
>    commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
> 
> The point of the refactoring is to eventually make workqueues unbound by
> default, so that their workload placement is optimized by the scheduler.
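> 
> As an illustrative sketch (an assumption about future usage, not part of
> this patch): once the default flips to unbound, a driver whose work must
> stay on the local CPU would opt in explicitly when allocating its own
> queue, along the lines of:
> 
>    /* hypothetical driver-private queue that must remain per-CPU */
>    wq = alloc_workqueue("my_driver_wq", WQ_PERCPU, 0);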
> 
> Before that can happen, and after a careful review of each individual case,
> workqueue users must be converted to the better named new workqueues with
> no intended behaviour changes:
> 
>    system_wq -> system_percpu_wq
>    system_unbound_wq -> system_dfl_wq
> 
> This way the obsolete workqueues (system_wq, system_unbound_wq) can be
> removed in the future.
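> 
> As a minimal before/after sketch (assuming some queued work item "w", not
> taken from this patch), the conversion at a call site is mechanical:
> 
>    queue_work(system_wq, &w);          /* old, implicitly per-CPU     */
>    queue_work(system_percpu_wq, &w);   /* new, same per-CPU behaviour */
> 
>    queue_work(system_unbound_wq, &w);  /* old, unbound                */
>    queue_work(system_dfl_wq, &w);      /* new, scheduler-placed       */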

This looks good to me.

Acked-by: Corey Minyard <[email protected]>

> 
> Suggested-by: Tejun Heo <[email protected]>
> Signed-off-by: Marco Crivellari <[email protected]>
> ---
>  drivers/char/ipmi/ipmi_msghandler.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> index 3f48fc6ab596..ebdc8f683981 100644
> --- a/drivers/char/ipmi/ipmi_msghandler.c
> +++ b/drivers/char/ipmi/ipmi_msghandler.c
> @@ -973,7 +973,7 @@ static int deliver_response(struct ipmi_smi *intf, struct ipmi_recv_msg *msg)
>               mutex_lock(&intf->user_msgs_mutex);
>               list_add_tail(&msg->link, &intf->user_msgs);
>               mutex_unlock(&intf->user_msgs_mutex);
> -             queue_work(system_wq, &intf->smi_work);
> +             queue_work(system_percpu_wq, &intf->smi_work);
>       }
>  
>       return rv;
> @@ -4935,7 +4935,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
>       if (run_to_completion)
>               smi_work(&intf->smi_work);
>       else
> -             queue_work(system_wq, &intf->smi_work);
> +             queue_work(system_percpu_wq, &intf->smi_work);
>  }
>  EXPORT_SYMBOL(ipmi_smi_msg_received);
>  
> @@ -4945,7 +4945,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
>               return;
>  
>       atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
> -     queue_work(system_wq, &intf->smi_work);
> +     queue_work(system_percpu_wq, &intf->smi_work);
>  }
>  EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
>  
> @@ -5115,7 +5115,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
>                                      flags);
>       }
>  
> -     queue_work(system_wq, &intf->smi_work);
> +     queue_work(system_percpu_wq, &intf->smi_work);
>  
>       return need_timer;
>  }
> @@ -5171,7 +5171,7 @@ static void ipmi_timeout(struct timer_list *unused)
>       if (atomic_read(&stop_operation))
>               return;
>  
> -     queue_work(system_wq, &ipmi_timer_work);
> +     queue_work(system_percpu_wq, &ipmi_timer_work);
>  }
>  
>  static void need_waiter(struct ipmi_smi *intf)
> -- 
> 2.52.0
> 

