* Simon Horman <[email protected]> wrote:

> This avoids the situation where a dump of a large number of connections
> may prevent scheduling for a long time while also avoiding excessive
> calls to rcu_read_unlock() and rcu_read_lock().
> 
> Cc: Eric Dumazet <[email protected]>
> Cc: Julian Anastasov <[email protected]>
> Signed-off-by: Simon Horman <[email protected]>
> ---
>  net/netfilter/ipvs/ip_vs_conn.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
> index a083bda..42a7b33 100644
> --- a/net/netfilter/ipvs/ip_vs_conn.c
> +++ b/net/netfilter/ipvs/ip_vs_conn.c
> @@ -975,8 +975,7 @@ static void *ip_vs_conn_array(struct seq_file *seq, loff_t pos)
>                               return cp;
>                       }
>               }
> -             rcu_read_unlock();
> -             rcu_read_lock();
> +             cond_resched_rcu_lock();
>       }
>  
>       return NULL;
> @@ -1015,8 +1014,7 @@ static void *ip_vs_conn_seq_next(struct seq_file *seq, void *v, loff_t *pos)
>                       iter->l = &ip_vs_conn_tab[idx];
>                       return cp;
>               }
> -             rcu_read_unlock();
> -             rcu_read_lock();
> +             cond_resched_rcu_lock();

Feel free to route this via the networking tree.

Note that this change isn't a pure clean-up but has functional effects as 
well: on !PREEMPT or PREEMPT_VOLUNTARY kernels it adds a potential 
cond_resched() point, whereas previously the code would only drop and 
re-acquire the RCU read lock (which by itself does not schedule).
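
For illustration, a minimal sketch of what such a helper could boil down 
to (names and config guard are assumptions here, not necessarily what the 
patch series actually implements):

	/*
	 * Sketch only: drop the RCU read-side critical section, give
	 * the scheduler a chance to run where cond_resched() is a
	 * scheduling point (!PREEMPT / PREEMPT_VOLUNTARY), then
	 * re-enter the read-side critical section.
	 */
	static inline void cond_resched_rcu_lock(void)
	{
		rcu_read_unlock();
	#ifndef CONFIG_PREEMPT_RCU
		cond_resched();
	#endif
		rcu_read_lock();
	}

The caller must of course tolerate the RCU read-side critical section 
being broken up at that point, i.e. it cannot hold references to 
RCU-protected objects across the call.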

This should probably be pointed out in the changelog.

Thanks,

        Ingo