On Thu, Dec 20, 2018 at 12:59 PM Jonathan Lemon
<jonathan.le...@gmail.com> wrote:
>
> This protects against callers like inet_diag_dump_icsk(), which may walk the
> chain on another cpu and change the refcount before the tw structure is ready.
>
> Signed-off-by: Jonathan Lemon <jonathan.le...@gmail.com>
> ---
>  net/ipv4/inet_timewait_sock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> index 88c5069b5d20..128cfcada5e6 100644
> --- a/net/ipv4/inet_timewait_sock.c
> +++ b/net/ipv4/inet_timewait_sock.c
> @@ -125,8 +125,6 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
>         if (__sk_nulls_del_node_init_rcu(sk))
>                 sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>
> -       spin_unlock(lock);
> -
>         /* tw_refcnt is set to 3 because we have :
>          * - one reference for bhash chain.
>          * - one reference for ehash chain.
> @@ -137,6 +135,8 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
>          * so we are not allowed to use tw anymore.
>          */
>         refcount_set(&tw->tw_refcnt, 3);
> +
> +       spin_unlock(lock);


Hi Jonathan

Nice catch, but this patch is not correct.

We need to make inet_diag_dump_icsk() more robust, otherwise we would have to
change other points in the stack (not only for TIMEWAIT sockets), and
that is a bit too risky in terms of locking dependencies.

Please try the following fix instead:

Fixes: 67db3e4bfbc9 ("tcp: no longer hold ehash lock while calling tcp_get_info()")

diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
index 4e5bc4b2f14e6786ceb7d63e5902f8fc17819dfa..1a4e9ff02762ed757545da13de1ee352f38c867b 100644
--- a/net/ipv4/inet_diag.c
+++ b/net/ipv4/inet_diag.c
@@ -998,7 +998,9 @@ void inet_diag_dump_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *skb,
                        if (!inet_diag_bc_sk(bc, sk))
                                goto next_normal;

-                       sock_hold(sk);
+                       if (!refcount_inc_not_zero(&sk->sk_refcnt))
+                               goto next_normal;
+
                        num_arr[accum] = num;
                        sk_arr[accum] = sk;
                        if (++accum == SKARR_SZ)
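
For reference, here is a minimal userspace sketch of the semantics this fix
relies on, written with C11 atomics; it is an illustration only, not the
kernel's refcount_t implementation (which adds saturation and warning logic).
The point is that a blind increment happily turns a still-zero count into one,
while an inc-not-zero loop simply skips the object until the writer has
published the final value:

/* Illustration only: models refcount_inc_not_zero() semantics with C11
 * atomics.  The kernel's refcount_t adds saturation/warning logic that
 * is omitted here.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	atomic_int refcnt;	/* 0 means "not fully published yet" */
};

/* Blind increment, like sock_hold(): happily turns 0 into 1. */
static void obj_hold(struct obj *o)
{
	atomic_fetch_add(&o->refcnt, 1);
}

/* Increment only if the count is already non-zero, like
 * refcount_inc_not_zero(): returns false for a half-initialized object,
 * so the caller can simply skip it.
 */
static bool obj_hold_not_zero(struct obj *o)
{
	int old = atomic_load(&o->refcnt);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&o->refcnt, &old, old + 1))
			return true;
	}
	return false;
}

int main(void)
{
	struct obj tw = { .refcnt = 0 };	/* on the chain, count not set yet */

	/* A blind hold bumps 0 -> 1, a reference that the writer's later
	 * "set to 3" silently destroys.
	 */
	obj_hold(&tw);
	printf("after blind hold: %d (bogus reference)\n",
	       atomic_load(&tw.refcnt));

	atomic_store(&tw.refcnt, 0);		/* reset for the safe variant */

	if (!obj_hold_not_zero(&tw))
		printf("inc_not_zero: skipped, object not published yet\n");

	atomic_store(&tw.refcnt, 3);		/* writer publishes final count */

	if (obj_hold_not_zero(&tw))
		printf("inc_not_zero: held, refcnt is now %d\n",
		       atomic_load(&tw.refcnt));
	return 0;
}

In the hashdance path quoted above, a timewait socket can already be visible
on the ehash chain while tw_refcnt is still zero; with the check in the diff
the dumper simply skips it, instead of taking a reference that the later
refcount_set(&tw->tw_refcnt, 3) would wipe out.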
