When pwm-tiecap is built as a module, removing it with rmmod triggers
the lockdep dump below.
root@am437x-evm:/# rmmod pwm_tiecap
[ 219.539245]
[ 219.540771] ======================================================
[ 219.546936] [ INFO: possible circular locking dependency detected ]
[ 219.553192] 3.12.4-01557-g9921cde-dirty #134 Not tainted
[ 219.558471] -------------------------------------------------------
[ 219.564727] rmmod/1517 is trying to acquire lock:
[ 219.569427] (s_active#35){++++.+}, at: [<c017ab00>] sysfs_hash_and_remove+0x4c/0x8c
[ 219.577239]
[ 219.577239] but task is already holding lock:
[ 219.583068] (pwm_lock){+.+.+.}, at: [<c0303598>] pwmchip_remove+0x14/0xf8
[ 219.589996]
[ 219.589996] which lock already depends on the new lock.
[ 219.589996]
[ 219.598144]
[ 219.598144] the existing dependency chain (in reverse order) is:
[ 219.605590]
-> #1 (pwm_lock){+.+.+.}:
[ 219.609497] [<c00a2d1c>] lock_acquire+0x9c/0x128
[ 219.614746] [<c0639bc0>] mutex_lock_nested+0x50/0x3dc
[ 219.620391] [<c0303974>] pwm_request_from_chip+0x38/0x6c
[ 219.626312] [<c0303fe0>] pwm_export_store+0x50/0x140
[ 219.631896] [<c039aba8>] dev_attr_store+0x18/0x24
[ 219.637207] [<c017aff0>] sysfs_write_file+0x16c/0x1a0
[ 219.642883] [<c0119084>] vfs_write+0xb0/0x188
[ 219.647857] [<c0119478>] SyS_write+0x3c/0x70
[ 219.652770] [<c0014100>] ret_fast_syscall+0x0/0x48
[ 219.658172]
-> #0 (s_active#35){++++.+}:
[ 219.662353] [<c00a2778>] __lock_acquire+0x1b28/0x1b70
[ 219.667999] [<c00a2d1c>] lock_acquire+0x9c/0x128
[ 219.673248] [<c017c780>] sysfs_addrm_finish+0xe8/0x158
[ 219.678985] [<c017ab00>] sysfs_hash_and_remove+0x4c/0x8c
[ 219.684906] [<c017e224>] remove_files+0x38/0x74
[ 219.690063] [<c017e2a4>] sysfs_remove_group+0x44/0x108
[ 219.695800] [<c017e38c>] sysfs_remove_groups+0x24/0x34
[ 219.701538] [<c039bc2c>] device_del+0xec/0x178
[ 219.706604] [<c039bcc4>] device_unregister+0xc/0x18
[ 219.712097] [<c0303658>] pwmchip_remove+0xd4/0xf8
[ 219.717407] [<c039fdc4>] platform_drv_remove+0x18/0x1c
[ 219.723175] [<c039e6c4>] __device_release_driver+0x70/0xc8
[ 219.729248] [<c039eec8>] driver_detach+0xb4/0xb8
[ 219.734497] [<c039e4ec>] bus_remove_driver+0x8c/0xd0
[ 219.740081] [<c00abd2c>] SyS_delete_module+0x118/0x22c
[ 219.745819] [<c0014100>] ret_fast_syscall+0x0/0x48
[ 219.751220]
[ 219.751220] other info that might help us debug this:
[ 219.751220]
[ 219.759216] Possible unsafe locking scenario:
[ 219.759216]
[ 219.765106]        CPU0                    CPU1
[ 219.769622]        ----                    ----
[ 219.774139]   lock(pwm_lock);
[ 219.777130]                                lock(s_active#35);
[ 219.782897]                                lock(pwm_lock);
[ 219.788391]   lock(s_active#35);
[ 219.791656]
[ 219.791656] *** DEADLOCK ***
[ 219.791656]
[ 219.797546] 3 locks held by rmmod/1517:
[ 219.801391] #0: (&__lockdep_no_validate__){......}, at: [<c039ee58>] driver_detach+0x44/0xb8
[ 219.810028] #1: (&__lockdep_no_validate__){......}, at: [<c039ee64>] driver_detach+0x50/0xb8
[ 219.818695] #2: (pwm_lock){+.+.+.}, at: [<c0303598>] pwmchip_remove+0x14/0xf8
[ 219.826049]
[ 219.826049] stack backtrace:
[ 219.830413] CPU: 0 PID: 1517 Comm: rmmod Not tainted 3.12.4-01557-g9921cde-dirty #134
[ 219.838256] [<c001cc98>] (unwind_backtrace+0x0/0xf0) from [<c0018124>] (show_stack+0x10/0x14)
[ 219.846771] [<c0018124>] (show_stack+0x10/0x14) from [<c0636728>] (dump_stack+0x74/0xb4)
[ 219.854858] [<c0636728>] (dump_stack+0x74/0xb4) from [<c06344e4>] (print_circular_bug+0x284/0x2d8)
[ 219.863830] [<c06344e4>] (print_circular_bug+0x284/0x2d8) from [<c00a2778>] (__lock_acquire+0x1b28/0x1b70)
[ 219.873443] [<c00a2778>] (__lock_acquire+0x1b28/0x1b70) from [<c00a2d1c>] (lock_acquire+0x9c/0x128)
[ 219.882476] [<c00a2d1c>] (lock_acquire+0x9c/0x128) from [<c017c780>] (sysfs_addrm_finish+0xe8/0x158)
[ 219.891601] [<c017c780>] (sysfs_addrm_finish+0xe8/0x158) from [<c017ab00>] (sysfs_hash_and_remove+0x4c/0x8c)
[ 219.901397] [<c017ab00>] (sysfs_hash_and_remove+0x4c/0x8c) from [<c017e224>] (remove_files+0x38/0x74)
[ 219.910614] [<c017e224>] (remove_files+0x38/0x74) from [<c017e2a4>] (sysfs_remove_group+0x44/0x108)
[ 219.919647] [<c017e2a4>] (sysfs_remove_group+0x44/0x108) from [<c017e38c>] (sysfs_remove_groups+0x24/0x34)
[ 219.929260] [<c017e38c>] (sysfs_remove_groups+0x24/0x34) from [<c039bc2c>] (device_del+0xec/0x178)
[ 219.938201] [<c039bc2c>] (device_del+0xec/0x178) from [<c039bcc4>] (device_unregister+0xc/0x18)
[ 219.946899] [<c039bcc4>] (device_unregister+0xc/0x18) from [<c0303658>] (pwmchip_remove+0xd4/0xf8)
[ 219.955841] [<c0303658>] (pwmchip_remove+0xd4/0xf8) from [<c039fdc4>] (platform_drv_remove+0x18/0x1c)
[ 219.965057] [<c039fdc4>] (platform_drv_remove+0x18/0x1c) from [<c039e6c4>] (__device_release_driver+0x70/0xc8)
[ 219.975006] [<c039e6c4>] (__device_release_driver+0x70/0xc8) from [<c039eec8>] (driver_detach+0xb4/0xb8)
[ 219.984466] [<c039eec8>] (driver_detach+0xb4/0xb8) from [<c039e4ec>] (bus_remove_driver+0x8c/0xd0)
[ 219.993438] [<c039e4ec>] (bus_remove_driver+0x8c/0xd0) from [<c00abd2c>] (SyS_delete_module+0x118/0x22c)
[ 220.002899] [<c00abd2c>] (SyS_delete_module+0x118/0x22c) from [<c0014100>] (ret_fast_syscall+0x0/0x48)
It looks like the s_active sysfs lock cannot be taken while pwm_lock is
held, because the sysfs export path acquires the two locks in the
opposite order. Fix the issue by releasing pwm_lock before
pwmchip_sysfs_unexport() takes the sysfs lock.
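
To see the inversion at a glance, here is a minimal standalone sketch
(userspace, pthreads; the lock names mirror the ones in the dump, the
two thread functions are made up for illustration) of the two orderings
lockdep is warning about:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t s_active = PTHREAD_MUTEX_INITIALIZER; /* stands in for the sysfs s_active ref */
static pthread_mutex_t pwm_lock = PTHREAD_MUTEX_INITIALIZER;

/* mimics the sysfs write path: pwm_export_store() runs with s_active
 * held and then takes pwm_lock (s_active -> pwm_lock) */
static void *export_path(void *arg)
{
	pthread_mutex_lock(&s_active);
	pthread_mutex_lock(&pwm_lock);
	pthread_mutex_unlock(&pwm_lock);
	pthread_mutex_unlock(&s_active);
	return NULL;
}

/* mimics the old pwmchip_remove(): pwm_lock is held while the sysfs
 * files are removed, which needs s_active (pwm_lock -> s_active) */
static void *remove_path(void *arg)
{
	pthread_mutex_lock(&pwm_lock);
	pthread_mutex_lock(&s_active);
	pthread_mutex_unlock(&s_active);
	pthread_mutex_unlock(&pwm_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, export_path, NULL);
	pthread_create(&b, NULL, remove_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("finished without deadlocking (this time)");
	return 0;
}

Run concurrently often enough, the two threads wedge against each other
in exactly the AB-BA pattern shown in the scenario above.
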
Signed-off-by: Sourav Poddar <[email protected]>
---
drivers/pwm/core.c | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
index 2ca9504..3e1d499 100644
--- a/drivers/pwm/core.c
+++ b/drivers/pwm/core.c
@@ -300,6 +300,7 @@ int pwmchip_remove(struct pwm_chip *chip)
 		if (test_bit(PWMF_REQUESTED, &pwm->flags)) {
 			ret = -EBUSY;
+			mutex_unlock(&pwm_lock);
 			goto out;
 		}
 	}
@@ -311,10 +312,11 @@ int pwmchip_remove(struct pwm_chip *chip)
 	free_pwms(chip);
+	mutex_unlock(&pwm_lock);
+
 	pwmchip_sysfs_unexport(chip);
 out:
-	mutex_unlock(&pwm_lock);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(pwmchip_remove);
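
For reference, pwmchip_remove() ends up looking roughly like this with
the patch applied. Context lines not visible in the hunks above are
reconstructed from the v3.12 pwm core and are an approximation, not a
quote:

int pwmchip_remove(struct pwm_chip *chip)
{
	unsigned int i;
	int ret = 0;

	mutex_lock(&pwm_lock);

	for (i = 0; i < chip->npwm; i++) {
		struct pwm_device *pwm = &chip->pwms[i];

		if (test_bit(PWMF_REQUESTED, &pwm->flags)) {
			ret = -EBUSY;
			mutex_unlock(&pwm_lock);  /* new: drop the lock on the error path */
			goto out;
		}
	}

	list_del_init(&chip->list);

	if (IS_ENABLED(CONFIG_OF))
		of_pwmchip_remove(chip);

	free_pwms(chip);
	mutex_unlock(&pwm_lock);          /* new: release pwm_lock first... */

	pwmchip_sysfs_unexport(chip);     /* ...so s_active is taken without it */
out:
	return ret;
}
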
--
1.7.1