From: Wenlin Kang <[email protected]>
On PREEMPT_RT, if a high-priority task running d_alloc_parallel() preempts a
low-priority task running __d_add(), d_alloc_parallel() can get stuck in an
infinite retry loop.
low priority task:              high priority task:

__lookup_slow()
  d_add()
    __d_add()
      start_dir_add(dir)
                                __lookup_slow()
                                  d_alloc_parallel()
Once the low-priority task has completed start_dir_add(), i_dir_seq holds an
odd value. On PREEMPT_RT, spin_lock() only disables migration, not
preemption, so the low-priority task can be preempted between
start_dir_add() and end_dir_add(). The high-priority task then spins in
d_alloc_parallel() waiting forever for i_dir_seq to become even again.
Avoid the issue by disabling preemption with preempt_disable()/
preempt_enable() around the start_dir_add()/end_dir_add() section on
PREEMPT_RT_FULL.
Signed-off-by: Wenlin Kang <[email protected]>
---
fs/dcache.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/dcache.c b/fs/dcache.c
index 3c87c97..d40e9f1 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2582,6 +2582,9 @@ static inline void __d_add(struct dentry *dentry, struct inode *inode)
{
struct inode *dir = NULL;
unsigned n;
+#ifdef CONFIG_PREEMPT_RT_FULL
+ preempt_disable();
+#endif
spin_lock(&dentry->d_lock);
if (unlikely(d_in_lookup(dentry))) {
dir = dentry->d_parent->d_inode;
@@ -2600,6 +2603,9 @@ static inline void __d_add(struct dentry *dentry, struct inode *inode)
if (dir)
end_dir_add(dir, n);
spin_unlock(&dentry->d_lock);
+#ifdef CONFIG_PREEMPT_RT_FULL
+ preempt_enable();
+#endif
if (inode)
spin_unlock(&inode->i_lock);
}
--
1.9.1
View/Reply Online (#9849): https://lists.yoctoproject.org/g/linux-yocto/message/9849