On Sat, 25 Oct 2025 12:00:16 -0400
Sasha Levin <[email protected]> wrote:

> - The change inserts `cond_resched()` inside the inner iteration over
>   every ftrace record (`kernel/trace/ftrace.c:7538`). That loop holds
>   the ftrace mutex and, for each record, invokes heavy helpers like
>   `test_for_valid_rec()`, which in turn calls `kallsyms_lookup()`
>   (`kernel/trace/ftrace.c:4289`). On huge modules (e.g. amdgpu) this can
>   run for tens of milliseconds with preemption disabled, triggering the
>   documented soft lockup/panic during module load.

It got the "preemption disabled" wrong. Well, maybe when running
PREEMPT_NONE it is, but the description doesn't imply that.
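
To make that concrete, here is a minimal sketch of the pattern being
discussed, not the actual ftrace code (example_lock, walk_records() and
process_record() are made-up names):

#include <linux/mutex.h>
#include <linux/sched.h>

static DEFINE_MUTEX(example_lock);	/* stands in for ftrace_lock */

/* Hypothetical per-record work, standing in for test_for_valid_rec(). */
static void process_record(int index)
{
}

static void walk_records(int count)
{
	int i;

	mutex_lock(&example_lock);
	for (i = 0; i < count; i++) {
		process_record(i);

		/*
		 * Under PREEMPT_NONE this is the only point in the loop
		 * where another task can run; with CONFIG_PREEMPT the
		 * task was already preemptible here, since mutex_lock()
		 * does not disable preemption.
		 */
		cond_resched();
	}
	mutex_unlock(&example_lock);
}
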
> - `ftrace_module_enable()` runs only in process context via
>   `prepare_coming_module()` (`kernel/module/main.c:3279`), so adding a
>   voluntary reschedule point is safe; the same pattern already exists in
>   other long-running ftrace loops (see commits d0b24b4e91fc and
>   42ea22e754ba), so this brings consistency without changing control
>   flow or semantics.
> - No data structures or interfaces change, and the code still executes
>   under the same locking (`ftrace_lock`, `text_mutex` when the arch
>   overrides `ftrace_arch_code_modify_prepare()`), so the risk of
>   regression is minimal: the new call simply yields the CPU if needed while
>   keeping the locks held, preventing watchdog-induced crashes but
>   otherwise behaving identically.
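
For what it's worth, the locking reasoning in that last point only works
because ftrace_lock and text_mutex are sleeping locks: cond_resched() may
schedule, which is fine under a mutex but would be a bug under a
spinlock. A minimal sketch, again with made-up lock names:

#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(sleeping_lock);
static DEFINE_SPINLOCK(atomic_lock);

static void locking_examples(void)
{
	mutex_lock(&sleeping_lock);
	cond_resched();		/* fine: the holder of a mutex may sleep */
	mutex_unlock(&sleeping_lock);

	spin_lock(&atomic_lock);
	/*
	 * cond_resched() here would be a bug: spin_lock() enters atomic
	 * context (preemption disabled on non-RT kernels), where
	 * sleeping is not allowed.
	 */
	spin_unlock(&atomic_lock);
}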

-- Steve