From: Zheng Yejian <zhengyeji...@huawei.com>

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
cpu in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many cpus
and many buffer pages to be allocated, the loop may not give up the cpu
for a long time and finally cause a softlockup.

To avoid this, call cond_resched() after each per-cpu buffer allocation.
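
For illustration only (not part of the patch): a minimal user-space C
sketch of the same pattern, a long per-cpu allocation loop that yields
after each iteration. The CPU count, page count, and sched_yield() stand
in for the kernel's values and for the kernel-internal cond_resched().

	/*
	 * Illustrative sketch only -- not kernel code. It mimics the patched
	 * pattern: do one large "buffer page" allocation per cpu in a loop and
	 * yield the processor after each one so the loop cannot monopolize a
	 * core. sched_yield() stands in for cond_resched().
	 */
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define NR_CPUS        64	/* hypothetical cpu count */
	#define NR_PAGES       4096	/* hypothetical pages per cpu buffer */
	#define BUF_PAGE_SIZE  4096	/* hypothetical page size in bytes */

	int main(void)
	{
		void *pages[NR_CPUS];

		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			/* One big per-cpu allocation, like allocating buffer pages. */
			pages[cpu] = calloc(NR_PAGES, BUF_PAGE_SIZE);
			if (!pages[cpu]) {
				fprintf(stderr, "allocation failed for cpu %d\n", cpu);
				return 1;
			}

			/* Give other tasks a chance to run between cpus. */
			sched_yield();
		}

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			free(pages[cpu]);

		return 0;
	}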

Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: <mhira...@kernel.org>
Signed-off-by: Zheng Yejian <zhengyeji...@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rost...@goodmis.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index db7cefd196cec..b15d72284c7f7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2176,6 +2176,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
                                err = -ENOMEM;
                                goto out_err;
                        }
+
+                       cond_resched();
                }
 
                cpus_read_lock();
-- 
2.40.1

