On Wed, Oct 02, 2019 at 06:38:54PM -0700, [email protected] wrote:
> From: "Paul E. McKenney" <[email protected]>
> 
> Callback invocation can run for a significant time period, and within
> CONFIG_NO_HZ_FULL=y kernels, this period will be devoid of scheduler-clock
> interrupts.  In-kernel execution without such interrupts can cause all
> manner of malfunction, with RCU CPU stall warnings being but one result.
> 
> This commit therefore forces scheduling-clock interrupts on whenever more
> than a few RCU callbacks are invoked.  Because offloaded callback invocation
> can be preempted, this forcing is withdrawn on each context switch.  This
> in turn requires that the loop invoking RCU callbacks reiterate the forcing
> periodically.
> 
> [ paulmck: Apply Joel Fernandes' TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
> Signed-off-by: Paul E. McKenney <[email protected]>
> ---
>  kernel/rcu/tree.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8110514..db673ae 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2151,6 +2151,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
>       rcu_nocb_unlock_irqrestore(rdp, flags);
>  
>       /* Invoke callbacks. */
> +     if (IS_ENABLED(CONFIG_NO_HZ_FULL))

No need for the IS_ENABLED(); the tick_dep_*() API already takes care of that,
since these helpers are compiled out as empty stubs when CONFIG_NO_HZ_FULL=n.
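For reference, a rough sketch of the existing declarations/stubs in
include/linux/tick.h (paraphrased, not new code), which is why the
unconditional calls in rcu_do_batch() already compile away on
!NO_HZ_FULL builds:

	#ifdef CONFIG_NO_HZ_FULL
	extern void tick_dep_set_task(struct task_struct *tsk,
				      enum tick_dep_bits bit);
	extern void tick_dep_clear_task(struct task_struct *tsk,
					enum tick_dep_bits bit);
	#else
	/* No tick dependency tracking without NO_HZ_FULL: empty inlines. */
	static inline void tick_dep_set_task(struct task_struct *tsk,
					     enum tick_dep_bits bit) { }
	static inline void tick_dep_clear_task(struct task_struct *tsk,
					       enum tick_dep_bits bit) { }
	#endif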

> +             tick_dep_set_task(current, TICK_DEP_BIT_RCU);
>       rhp = rcu_cblist_dequeue(&rcl);
>       for (; rhp; rhp = rcu_cblist_dequeue(&rcl)) {
>               debug_rcu_head_unqueue(rhp);
> @@ -2217,6 +2219,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
>       /* Re-invoke RCU core processing if there are callbacks remaining. */
>       if (!offloaded && rcu_segcblist_ready_cbs(&rdp->cblist))
>               invoke_rcu_core();
> +     if (IS_ENABLED(CONFIG_NO_HZ_FULL))

Same here.

Thanks.

> +             tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
>  }
>  
>  /*
> -- 
> 2.9.5
> 
