To reproduce this, mahmoh and I discovered that you seem to have to run
it from a root shell - running it under sudo produces reasonable
(i.e. immediate) umount times. I therefore suspected some obscure
capability difference, but I don't see any such distinction in the
kernel code. Further, this only seems to be a problem when there is
exactly one runnable process on the system: if you start another
program, the umount completes immediately.

Profiling turned up no obvious CPU-intensive kernel functions. I
took a few samples of a hung umount using sysrq+t, and found that we
always seem to be stuck with the following backtrace:

[ 5093.862717] umount          D c0557538     0 11174   2118 0x00000000
[ 5093.862729] Backtrace: 
[ 5093.862741] [<c0557214>] (__schedule+0x0/0x5bc) from [<c0557aec>] (schedule+0x50/0x68)
[ 5093.862756] [<c0557a9c>] (schedule+0x0/0x68) from [<c0558070>] (schedule_timeout+0x1d4/0x254)
[ 5093.862770] [<c0557e9c>] (schedule_timeout+0x0/0x254) from [<c0557938>] (wait_for_common+0xc4/0x17c)
[ 5093.862785] [<c0557874>] (wait_for_common+0x0/0x17c) from [<c0557a98>] (wait_for_completion+0x18/0x1c)
[ 5093.862802] [<c0557a80>] (wait_for_completion+0x0/0x1c) from [<c00e8140>] (_rcu_barrier.isra.31+0x98/0xb8)
[ 5093.862817] [<c00e80a8>] (_rcu_barrier.isra.31+0x0/0xb8) from [<c00e8178>] (rcu_barrier_sched+0x18/0x1c)
[ 5093.862827]  r6:c06eb748 r5:c06fb8c4 r4:ed68c000 r3:c01397b4
[ 5093.862847] [<c00e8160>] (rcu_barrier_sched+0x0/0x1c) from [<c00e818c>] (rcu_barrier+0x10/0x14)
[ 5093.862864] [<c00e817c>] (rcu_barrier+0x0/0x14) from [<c0139d58>] (deactivate_locked_super+0x54/0x68)
[ 5093.862879] [<c0139d04>] (deactivate_locked_super+0x0/0x68) from [<c013a4a8>] (deactivate_super+0x68/0x70)
[ 5093.862888]  r5:ed68c000 r4:ed68c000
[ 5093.862904] [<c013a440>] (deactivate_super+0x0/0x70) from [<c01548cc>] (mntput_no_expire+0xc8/0x11c)
[ 5093.862913]  r4:ed6c6780 r3:db28e34c
[ 5093.862927] [<c0154804>] (mntput_no_expire+0x0/0x11c) from [<c0155a74>] (sys_umount+0x68/0xc8)
[ 5093.862936]  r7:00000034 r6:00efe8b0 r5:00000000 r4:00000000
[ 5093.862955] [<c0155a0c>] (sys_umount+0x0/0xc8) from [<c0047c80>] (ret_fast_syscall+0x0/0x30)
[ 5093.862964]  r5:0001b320 r4:00efe8b0

Reading up on RCU, it looks like rcu_barrier() queues a callback on
every CPU and then blocks in wait_for_completion() until all outstanding
RCU callbacks on all CPUs have run - but I'm not sure how to identify
why that has not occurred here.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/939240

Title:
  [public] armadaxp kernel slow to umount

To manage notifications about this bug go to:
https://bugs.launchpad.net/eilt/+bug/939240/+subscriptions
