[ Upstream commit 063773011d33bb36588a90385aa9eb75d13c6d80 ]

Lockdep reports the following issue on my setup:

Possible unsafe locking scenario:

CPU0                    CPU1
----                    ----
lock((work_completion)(&(&rdev->disable_work)->work));
                        lock(regulator_list_mutex);
                        lock((work_completion)(&(&rdev->disable_work)->work));
lock(regulator_list_mutex);

The problem is that regulator_unregister takes the
regulator_list_mutex and then calls flush_work on disable_work. But
regulator_disable_work calls regulator_lock_dependent, which also
takes the regulator_list_mutex. This deadlocks whenever the
flush_work call actually has to wait for the work to finish.
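
To make the lock ordering concrete, here is a minimal userspace
analogue of the deadlock (an illustration only, not kernel code): a
plain pthread mutex stands in for regulator_list_mutex, pthread_join
stands in for flush_work, and the worker thread plays the role of
regulator_disable_work.

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* Stand-in for regulator_disable_work(): the work needs the mutex. */
  static void *disable_work(void *arg)
  {
          pthread_mutex_lock(&list_mutex);  /* blocks: main holds it */
          printf("work ran\n");
          pthread_mutex_unlock(&list_mutex);
          return NULL;
  }

  int main(void)
  {
          pthread_t work;

          /* Stand-in for regulator_unregister() before this patch:
           * take the mutex, then wait for the work to finish. */
          pthread_mutex_lock(&list_mutex);
          pthread_create(&work, NULL, disable_work, NULL);
          pthread_join(work, NULL);  /* "flush_work": never returns */
          pthread_mutex_unlock(&list_mutex);

          return 0;
  }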

Fix this issue by moving the flush_work call outside of the
regulator_list_mutex. The list mutex is not used to guard the point at
which the delayed work is queued, so its use adds no additional safety.
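
In the same analogue, the fix corresponds to waiting for the work
before taking the mutex, so the worker can always make progress:

  /* Stand-in for regulator_unregister() after this patch. */
  pthread_create(&work, NULL, disable_work, NULL);
  pthread_join(work, NULL);         /* flush first, with no locks held */
  pthread_mutex_lock(&list_mutex);
  /* ... teardown that actually needs the mutex ... */
  pthread_mutex_unlock(&list_mutex);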

Fixes: f8702f9e4aa7 ("regulator: core: Use ww_mutex for regulators locking")
Signed-off-by: Charles Keepax <ckee...@opensource.cirrus.com>
Reviewed-by: Dmitry Osipenko <dig...@gmail.com>
Signed-off-by: Mark Brown <broo...@kernel.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 drivers/regulator/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index fb9fe26fd0fa1..218b9331475b7 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -5101,10 +5101,11 @@ void regulator_unregister(struct regulator_dev *rdev)
                regulator_put(rdev->supply);
        }
 
+       flush_work(&rdev->disable_work.work);
+
        mutex_lock(&regulator_list_mutex);
 
        debugfs_remove_recursive(rdev->debugfs);
-       flush_work(&rdev->disable_work.work);
        WARN_ON(rdev->open_count);
        regulator_remove_coupling(rdev);
        unset_regulator_supplies(rdev);
-- 
2.20.1
