On Fri, 2 Jan 2026 19:23:33 -0500
Joel Fernandes <[email protected]> wrote:
> +#ifdef CONFIG_RCU_PER_CPU_BLOCKED_LISTS
> +/*
> + * Promote blocked tasks from a single CPU's per-CPU list to the rnp list.
> + *
> + * If there are no tracked blockers (rnp->gp_tasks is NULL) and this
> + * CPU is still blocking the corresponding GP (its bit is set in
> + * rnp->qsmask), set rnp->gp_tasks so that the GP machinery knows
> + * about the blocking task.
> + * This handles late promotion during QS reporting, where tasks may have
> + * blocked after rcu_gp_init() or sync_exp_reset_tree() ran their scans.
> + */
> +static void rcu_promote_blocked_tasks_rdp(struct rcu_data *rdp,
> +					  struct rcu_node *rnp)
> +{
> + struct task_struct *t, *tmp;
> +
> + raw_lockdep_assert_held_rcu_node(rnp);
> +
> + raw_spin_lock(&rdp->blkd_lock);
> + list_for_each_entry_safe(t, tmp, &rdp->blkd_list, rcu_rdp_entry) {
How big can this list be? Walking it here under a raw spinlock (with
the rnp lock held as well) is unbounded latency as far as PREEMPT_RT
is concerned. If this promotion pass is really needed, then this
feature needs to be disabled when PREEMPT_RT is enabled.
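
Something like the following in the option's Kconfig entry would do it.
This is only a sketch; the prompt text and the PREEMPT_RCU dependency
are my assumptions, so adjust to however the entry is actually defined
earlier in the series:

	# Sketch: the prompt text and the PREEMPT_RCU dependency are
	# assumptions; the !PREEMPT_RT dependency is the point here.
	config RCU_PER_CPU_BLOCKED_LISTS
		bool "Track blocked RCU readers on per-CPU lists"
		depends on PREEMPT_RCU && !PREEMPT_RT
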
-- Steve
> + /*
> + * Skip tasks already on the rnp list. A non-NULL
> + * rcu_blocked_node indicates the task was already
> + * promoted or added directly during blocking.
> + * TODO: Should be WARN_ON_ONCE() after the last patch?
> + */
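
On the TODO above: if the last patch in the series makes it invalid for
a task still on rdp->blkd_list to already be on an rnp list, then yes,
the skip could become a warning. Roughly (just a sketch against the
loop body quoted above, using the rcu_blocked_node field the comment
mentions):

	/*
	 * Sketch: once the invariant holds, finding a task here
	 * with a non-NULL rcu_blocked_node would be a bug, so warn
	 * and skip instead of silently tolerating it.
	 */
	if (WARN_ON_ONCE(t->rcu_blocked_node))
		continue;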