On Tue, Mar 19, 2024 at 01:44:40PM -0700, Yan Zhai wrote:
> Under heavy load, cpumap kernel threads can busy-poll packets from
> redirect queues and prevent RCU tasks from reaching quiescent states.
> Just calling cond_resched() in this context is insufficient.
> Periodically raising a consolidated RCU QS before cond_resched()
> fixes the problem.
> 
> Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP")
> Reviewed-by: Jesper Dangaard Brouer <[email protected]>
> Signed-off-by: Yan Zhai <[email protected]>

Acked-by: Paul E. McKenney <[email protected]>

> ---
>  kernel/bpf/cpumap.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 9ee8da477465..a8e34416e960 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -263,6 +263,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
>  static int cpu_map_kthread_run(void *data)
>  {
>       struct bpf_cpu_map_entry *rcpu = data;
> +     unsigned long last_qs = jiffies;
>  
>       complete(&rcpu->kthread_running);
>       set_current_state(TASK_INTERRUPTIBLE);
> @@ -288,10 +289,12 @@ static int cpu_map_kthread_run(void *data)
>                       if (__ptr_ring_empty(rcpu->queue)) {
>                               schedule();
>                               sched = 1;
> +                             last_qs = jiffies;
>                       } else {
>                               __set_current_state(TASK_RUNNING);
>                       }
>               } else {
> +                     rcu_softirq_qs_periodic(last_qs);
>                       sched = cond_resched();
>               }
>  
> -- 
> 2.30.2
> 
> 
