If the workqueue allocation fails, ipmi_init_msghandler() returns with the
driver marked as not initialized, but the timer and the panic notifier are
left registered.
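
For reference, before this change the failure path looks roughly like this
(abridged from the diff below; the error message printing is omitted):

        timer_setup(&ipmi_timer, ipmi_timeout, 0);
        mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);

        atomic_notifier_chain_register(&panic_notifier_list, &panic_block);

        remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
        if (!remove_work_wq) {
                rv = -ENOMEM;
                goto out;       /* timer and panic notifier remain registered */
        }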

Instead of unregistering the timer and the panic notifier when the workqueue
allocation fails, create the workqueue before registering them, and clean up
the srcu_struct if the allocation fails.
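
With this change, the resulting ordering in ipmi_init_msghandler() is roughly
(abridged from the diff below):

        remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
        if (!remove_work_wq) {
                rv = -ENOMEM;
                goto out_wq;    /* nothing registered yet, only srcu to clean */
        }

        timer_setup(&ipmi_timer, ipmi_timeout, 0);
        mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);

        atomic_notifier_chain_register(&panic_notifier_list, &panic_block);

        initialized = true;

out_wq:
        if (rv)
                cleanup_srcu_struct(&ipmi_interfaces_srcu);
out:
        mutex_unlock(&ipmi_interfaces_mutex);
        return rv;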

Fixes: 1d49eb91e86e ("ipmi: Move remove_work to dedicated workqueue")
Signed-off-by: Thadeu Lima de Souza Cascardo <casca...@canonical.com>
Cc: Corey Minyard <cminy...@mvista.com>
Cc: Ioanna Alifieraki <ioanna-maria.alifier...@canonical.com>
Cc: sta...@vger.kernel.org
---
 drivers/char/ipmi/ipmi_msghandler.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 84975b21fff2..266c7bc58dda 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -5396,20 +5396,23 @@ static int ipmi_init_msghandler(void)
        if (rv)
                goto out;
 
-       timer_setup(&ipmi_timer, ipmi_timeout, 0);
-       mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
-
-       atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
-
        remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
        if (!remove_work_wq) {
                pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
                rv = -ENOMEM;
-               goto out;
+               goto out_wq;
        }
 
+       timer_setup(&ipmi_timer, ipmi_timeout, 0);
+       mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
+
+       atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
+
        initialized = true;
 
+out_wq:
+       if (rv)
+               cleanup_srcu_struct(&ipmi_interfaces_srcu);
 out:
        mutex_unlock(&ipmi_interfaces_mutex);
        return rv;
-- 
2.32.0