Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Steven Rostedt
On Thu, 30 Jun 2016 16:07:26 -0400
Steven Rostedt  wrote:

> I can reproduce this by having the client unmount and remount the
> directory.

It gets even more interesting. When I unmount the directory, the hidden
port does not go away. It is still there. But if I mount it again, it
goes away (until it times out again).

Even more info:

When I first mount it, it creates 3 sockets, one of which is immediately
closed:

tcp        0      0 192.168.23.9:892     192.168.23.22:44672  TIME_WAIT   -
tcp        0      0 192.168.23.9:2049    192.168.23.22:815    ESTABLISHED -
tcp        0      0 192.168.23.9:754     192.168.23.22:44672  ESTABLISHED -

(192.168.23.22 is the machine remotely mounting a directory from the
server 192.168.23.9)
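For reference, the socket states netstat prints here can be read straight out of /proc/net/tcp, which is the same data source. A minimal Python sketch of that parsing (the field layout is the standard /proc/net/tcp format; the helper names and the sample values are mine, chosen to mirror the output above):

```python
# Minimal parser for Linux /proc/net/tcp entries, the same data netstat reads.
# Addresses are little-endian hex "IP:PORT"; the state is a hex code.

TCP_STATES = {
    0x01: "ESTABLISHED",
    0x06: "TIME_WAIT",
    0x0A: "LISTEN",
}

def parse_tcp_line(line):
    """Parse one /proc/net/tcp entry into (local, remote, state)."""
    fields = line.split()

    def addr(field):
        hexip, hexport = field.split(":")
        # IPv4 address is stored as little-endian hex, so read octets in reverse
        octets = [int(hexip[i:i + 2], 16) for i in range(6, -2, -2)]
        return "%d.%d.%d.%d:%d" % (*octets, int(hexport, 16))

    state = TCP_STATES.get(int(fields[3], 16), "OTHER")
    return addr(fields[1]), addr(fields[2]), state
```

Running this over each data line of /proc/net/tcp reproduces the local/remote/state columns shown above.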

The trace of port 892 is this:

   kworker/u32:1-13473 [000]   4093.915114: xs_setup_tcp: RPC:   set up 
xprt to 192.168.23.22 (port 44672) via tcp
   kworker/u32:1-13473 [000]   4093.915122: xprt_create_transport: RPC: 
  created transport 8803b1c38000 with 65536 slots
kworker/0:1H-129   [000]   4093.915152: xprt_alloc_slot: RPC:47 
reserved req 88040b27ca00 xid c50ccaff
kworker/0:1H-129   [000]   4093.915157: xprt_connect: RPC:47 
xprt_connect xprt 8803b1c38000 is not connected
kworker/0:1H-129   [000]   4093.915159: xs_connect: RPC:   
xs_connect scheduled xprt 8803b1c38000
kworker/0:1H-129   [000] ..s.  4093.915170: inet_csk_get_port: snum 892
kworker/0:1H-129   [000] ..s.  4093.915177: 
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
kworker/0:1H-129   [000] ..s.  4093.915178: inet_bind_hash: add 892 
8803bb9b5cc0
kworker/0:1H-129   [000] ..s.  4093.915184: 
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
kworker/0:1H-129   [000]   4093.915185: xs_bind: RPC:   xs_bind 
4.136.255.255:892: ok (0)
kworker/0:1H-129   [000]   4093.915186: xs_tcp_setup_socket: RPC:   
worker connecting xprt 8803b1c38000 via tcp to 192.168.23.22 (port 44672)
kworker/0:1H-129   [000]   4093.915221: xs_tcp_setup_socket: RPC:   
8803b1c38000 connect status 115 connected 0 sock state 2
  <idle>-0   [003] ..s.  4093.915434: xs_tcp_state_change: RPC:   
xs_tcp_state_change client 8803b1c38000...
  <idle>-0   [003] ..s.  4093.915435: xs_tcp_state_change: RPC:   
state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/3:1H-145   [003]   4093.915558: xprt_connect_status: RPC:47 
xprt_connect_status: retrying
kworker/3:1H-145   [003]   4093.915560: xprt_prepare_transmit: RPC:
47 xprt_prepare_transmit
kworker/3:1H-145   [003]   4093.915562: xprt_transmit: RPC:47 
xprt_transmit(72)
kworker/3:1H-145   [003]   4093.915588: xs_tcp_send_request: RPC:   
xs_tcp_send_request(72) = 0
kworker/3:1H-145   [003]   4093.915589: xprt_transmit: RPC:47 xmit 
complete
  <idle>-0   [003] ..s.  4093.915969: xs_tcp_data_ready: RPC:   
xs_tcp_data_ready...
kworker/3:1H-145   [003]   4093.916081: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv started
kworker/3:1H-145   [003]   4093.916083: xs_tcp_data_recv: RPC:   
reading TCP record fragment of length 24
kworker/3:1H-145   [003]   4093.916084: xs_tcp_data_recv: RPC:   
reading XID (4 bytes)
kworker/3:1H-145   [003]   4093.916085: xs_tcp_data_recv: RPC:   
reading request with XID c50ccaff
kworker/3:1H-145   [003]   4093.916086: xs_tcp_data_recv: RPC:   
reading CALL/REPLY flag (4 bytes)
kworker/3:1H-145   [003]   4093.916087: xs_tcp_data_recv: RPC:   
read reply XID c50ccaff
kworker/3:1H-145   [003] ..s.  4093.916088: xs_tcp_data_recv: RPC:   
XID c50ccaff read 16 bytes
kworker/3:1H-145   [003] ..s.  4093.916089: xs_tcp_data_recv: RPC:   
xprt = 8803b1c38000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
kworker/3:1H-145   [003] ..s.  4093.916090: xprt_complete_rqst: RPC:47 
xid c50ccaff complete (24 bytes received)
kworker/3:1H-145   [003]   4093.916091: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv done
kworker/3:1H-145   [003]   4093.916098: xprt_release: RPC:47 
release request 88040b27ca00
   kworker/u32:1-13473 [002]   4093.976056: xprt_destroy: RPC:   
destroying transport 8803b1c38000

Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Steven Rostedt
On Thu, 30 Jun 2016 18:30:42 +
Trond Myklebust  wrote:


> Wait. So the NFS mount is still active, it’s just that the socket
> disconnected due to no traffic? That should be OK. Granted that the
> port can’t be reused by another process, but you really don’t want
> that: what if there are no other ports available and you start
> writing to a file on the NFS partition?

What would cause the port to be connected to a socket again? I copied a
large file to the NFS mount, and the hidden port is still there.

Remember, this wasn't always the case; the hidden port is a recent
issue.

I ran wireshark on this and it appears to create two ports for NFS. One
of them is canceled by the client (it sends a FIN/ACK); this is the port
that lies around, never to be used again, while the other port is used
for all connections after that.

When I unmount the NFS directory, the port is finally freed (but has no
socket attached to it). What is the purpose of keeping this port around?

I can reproduce this by having the client unmount and remount the
directory.

-- Steve


Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Steven Rostedt
On Thu, 30 Jun 2016 11:23:41 -0400
Steven Rostedt  wrote:


> I can add more trace_printk()s if it would help.

I added a trace_printk() in inet_bind_bucket_destroy() to print out
some information on the socket used by xs_bind(), and it shows that the
bind destroy is called, but the list is not empty.



/*
 * Caller must hold hashbucket lock for this tb with local BH disabled
 */
void inet_bind_bucket_destroy(struct kmem_cache *cachep,
			      struct inet_bind_bucket *tb)
{
	if (!current->mm && xs_port == tb->port) {
		trace_printk("destroy %d empty=%d %p\n",
			     tb->port, hlist_empty(&tb->owners), tb);
		trace_dump_stack(1);
	}
	if (hlist_empty(&tb->owners)) {
		__hlist_del(&tb->node);
		kmem_cache_free(cachep, tb);
	}
}

I created "xs_port" to hold the port number used by xs_bind(), and when
inet_bind_bucket_destroy() is called for it, hlist_empty(&tb->owners)
returns false.

I'll add more trace_printk()s to find out where those owners are being
added.
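The logic being traced comes down to one rule: the bind bucket for a port is freed only when its owners list is empty. A toy Python model (purely illustrative, not kernel code; all names are made up) shows how a single socket that is never unhashed pins the port indefinitely:

```python
class BindBucket:
    """Toy stand-in for struct inet_bind_bucket: a port plus its owners."""
    def __init__(self, port):
        self.port = port
        self.owners = []

bind_table = {}  # port -> BindBucket, stand-in for the kernel's bind hash

def bind_hash(sock, port):
    """Like inet_bind_hash(): add sock to the bucket owning this port."""
    tb = bind_table.setdefault(port, BindBucket(port))
    tb.owners.append(sock)

def bucket_destroy(tb):
    """Mirrors inet_bind_bucket_destroy(): only an ownerless bucket is freed."""
    if not tb.owners:
        del bind_table[tb.port]

def put_port(sock, port):
    """Like inet_put_port(): drop sock from the bucket, then try to free it."""
    tb = bind_table[port]
    tb.owners.remove(sock)
    bucket_destroy(tb)

# Two sockets share port 805; only one of them is ever released.
bind_hash("sk_leaked", 805)
bind_hash("sk_closed", 805)
put_port("sk_closed", 805)
# The bucket survives with one stale owner: port 805 stays reserved
# even though no live socket is using it -- the "hidden port".
```

In this model, `bucket_destroy()` is called but finds the owners list non-empty, which is exactly the `hlist_empty(&tb->owners) == false` observation above.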

-- Steve


Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Steven Rostedt
On Thu, 30 Jun 2016 13:17:47 +
Trond Myklebust  wrote:

> > On Jun 30, 2016, at 08:59, Steven Rostedt  wrote:
> > 
> > [ resending as a new email, as I'm assuming people do not sort their
> >  INBOX via last email on thread, thus my last email is sitting in the
> >  bottom of everyone's INBOX ]
> > 
> > I've hit this again. Not sure when it started, but I applied my old
> > debug trace_printk() patch (attached) and rebooted (4.5.7). I just
> > tested the latest kernel from Linus's tree (from last nights pull), and
> > it still gives me the problem.
> > 
> > Here's the trace I have:
> > 
> >kworker/3:1H-134   [003] ..s.61.036129: inet_csk_get_port: snum 805

Here's where the port is taken:

> >kworker/3:1H-134   [003] ..s.61.036135:   
> > => sched_clock
> > => inet_addr_type_table
> > => security_capable
> > => inet_bind
> > => xs_bind
> > => release_sock
> > => sock_setsockopt
> > => __sock_create
> > => xs_create_sock.isra.19
> > => xs_tcp_setup_socket
> > => process_one_work
> > => worker_thread
> > => worker_thread
> > => kthread
> > => ret_from_fork
> > => kthread
> >kworker/3:1H-134   [003] ..s.61.036136: inet_bind_hash: add 805
> >kworker/3:1H-134   [003] ..s.61.036138:   
> > => inet_csk_get_port
> > => sched_clock
> > => inet_addr_type_table
> > => security_capable
> > => inet_bind
> > => xs_bind
> > => release_sock
> > => sock_setsockopt
> > => __sock_create
> > => xs_create_sock.isra.19
> > => xs_tcp_setup_socket
> > => process_one_work
> > => worker_thread
> > => worker_thread
> > => kthread
> > => ret_from_fork
> > => kthread
> >kworker/3:1H-134   [003] 61.036139: xs_bind: RPC:   xs_bind 
> > 4.136.255.255:805: ok (0)

Here's where it is bound.

> >kworker/3:1H-134   [003] 61.036140: xs_tcp_setup_socket: RPC:
> >worker connecting xprt 880407eca800 via tcp to 192.168.23.22 (port 
> > 43651)
> >kworker/3:1H-134   [003] 61.036162: xs_tcp_setup_socket: RPC:
> >880407eca800 connect status 115 connected 0 sock state 2
> >  -0 [001] ..s.61.036450: xs_tcp_state_change: RPC:
> >xs_tcp_state_change client 880407eca800...
> >  -0 [001] ..s.61.036452: xs_tcp_state_change: RPC:
> >state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
> >kworker/1:1H-136   [001] 61.036476: xprt_connect_status: RPC:
> > 43 xprt_connect_status: retrying
> >kworker/1:1H-136   [001] 61.036478: xprt_prepare_transmit: RPC:  
> >   43 xprt_prepare_transmit
> >kworker/1:1H-136   [001] 61.036479: xprt_transmit: RPC:43 
> > xprt_transmit(72)
> >kworker/1:1H-136   [001] 61.036486: xs_tcp_send_request: RPC:
> >xs_tcp_send_request(72) = 0
> >kworker/1:1H-136   [001] 61.036487: xprt_transmit: RPC:43 
> > xmit complete
> >  -0 [001] ..s.61.036789: xs_tcp_data_ready: RPC:  
> >  xs_tcp_data_ready...
> >kworker/1:1H-136   [001] 61.036798: xs_tcp_data_recv: RPC:   
> > xs_tcp_data_recv started
> >kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
> > reading TCP record fragment of length 24
> >kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
> > reading XID (4 bytes)
> >kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
> > reading request with XID 2f4c3f88
> >kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
> > reading CALL/REPLY flag (4 bytes)
> >kworker/1:1H-136   [001] 61.036801: xs_tcp_data_recv: RPC:   
> > read reply XID 2f4c3f88
> >kworker/1:1H-136   [001] ..s.61.036801: xs_tcp_data_recv: RPC:   
> > XID 2f4c3f88 read 16 bytes
> >kworker/1:1H-136   [001] ..s.61.036802: xs_tcp_data_recv: RPC:   
> > xprt = 880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
> >kworker/1:1H-136   [001] ..s.61.036802: xprt_complete_rqst: RPC:
> > 43 xid 2f4c3f88 complete (24 bytes received)
> >kworker/1:1H-136   [001] 61.036803: xs_tcp_data_recv: RPC:   
> > xs_tcp_data_recv done
> >kworker/1:1H-136   [001] 61.036812: xprt_release: RPC:43 
> > release request 88040b270800
> > 
> > 
> > # unhide-tcp 
> > Unhide-tcp 20130526
> > Copyright © 2013 Yago Jesus & Patrick Gouin
> > License GPLv3+ : GNU GPL version 3 or later
> > http://www.unhide-forensics.info
> > Used options: 
> > [*]Starting TCP checking
> > 
> > Found Hidden port that not appears in ss: 805
> >   
> 
> What is a “Hidden port that not appears in ss: 805”, and what does this 
> report mean? Are we failing to close a socket?

I believe hidden ports are ports that are bound to no socket.
Basically, a "port leak": they are in limbo and can never be
reused.
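Such a port can be probed from userspace with a plain bind() attempt, which is essentially the cross-check unhide-tcp performs against the ss/netstat listing. A minimal sketch (the helper name is mine; the real tool does more, including the proc scan):

```python
import errno
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try a TCP bind() to the port.

    Returns True if the bind succeeds (port free), False on EADDRINUSE.
    A port that is not free yet appears in no socket listing is "hidden".
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            return False
        raise
    finally:
        s.close()
```

Scanning ports 1-65535 with this and diffing the failures against the ports `ss -tan` reports would flag a leaked port like 805.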

I looked at my past report, and everything is exactly like the issue
before. When I first boot my box, the port is 

Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Trond Myklebust

> On Jun 30, 2016, at 08:59, Steven Rostedt  wrote:
> 
> [ resending as a new email, as I'm assuming people do not sort their
>  INBOX via last email on thread, thus my last email is sitting in the
>  bottom of everyone's INBOX ]
> 
> I've hit this again. Not sure when it started, but I applied my old
> debug trace_printk() patch (attached) and rebooted (4.5.7). I just
> tested the latest kernel from Linus's tree (from last nights pull), and
> it still gives me the problem.
> 
> Here's the trace I have:
> 
>kworker/3:1H-134   [003] ..s.61.036129: inet_csk_get_port: snum 805
>kworker/3:1H-134   [003] ..s.61.036135: 
> => sched_clock
> => inet_addr_type_table
> => security_capable
> => inet_bind
> => xs_bind
> => release_sock
> => sock_setsockopt
> => __sock_create
> => xs_create_sock.isra.19
> => xs_tcp_setup_socket
> => process_one_work
> => worker_thread
> => worker_thread
> => kthread
> => ret_from_fork
> => kthread  
>kworker/3:1H-134   [003] ..s.61.036136: inet_bind_hash: add 805
>kworker/3:1H-134   [003] ..s.61.036138: 
> => inet_csk_get_port
> => sched_clock
> => inet_addr_type_table
> => security_capable
> => inet_bind
> => xs_bind
> => release_sock
> => sock_setsockopt
> => __sock_create
> => xs_create_sock.isra.19
> => xs_tcp_setup_socket
> => process_one_work
> => worker_thread
> => worker_thread
> => kthread
> => ret_from_fork
> => kthread  
>kworker/3:1H-134   [003] 61.036139: xs_bind: RPC:   xs_bind 
> 4.136.255.255:805: ok (0)
>kworker/3:1H-134   [003] 61.036140: xs_tcp_setup_socket: RPC:  
>  worker connecting xprt 880407eca800 via tcp to 192.168.23.22 (port 43651)
>kworker/3:1H-134   [003] 61.036162: xs_tcp_setup_socket: RPC:  
>  880407eca800 connect status 115 connected 0 sock state 2
>  -0 [001] ..s.61.036450: xs_tcp_state_change: RPC:  
>  xs_tcp_state_change client 880407eca800...
>  -0 [001] ..s.61.036452: xs_tcp_state_change: RPC:  
>  state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
>kworker/1:1H-136   [001] 61.036476: xprt_connect_status: RPC:
> 43 xprt_connect_status: retrying
>kworker/1:1H-136   [001] 61.036478: xprt_prepare_transmit: RPC:
> 43 xprt_prepare_transmit
>kworker/1:1H-136   [001] 61.036479: xprt_transmit: RPC:43 
> xprt_transmit(72)
>kworker/1:1H-136   [001] 61.036486: xs_tcp_send_request: RPC:  
>  xs_tcp_send_request(72) = 0
>kworker/1:1H-136   [001] 61.036487: xprt_transmit: RPC:43 xmit 
> complete
>  -0 [001] ..s.61.036789: xs_tcp_data_ready: RPC:   
> xs_tcp_data_ready...
>kworker/1:1H-136   [001] 61.036798: xs_tcp_data_recv: RPC:   
> xs_tcp_data_recv started
>kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
> reading TCP record fragment of length 24
>kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
> reading XID (4 bytes)
>kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
> reading request with XID 2f4c3f88
>kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
> reading CALL/REPLY flag (4 bytes)
>kworker/1:1H-136   [001] 61.036801: xs_tcp_data_recv: RPC:   
> read reply XID 2f4c3f88
>kworker/1:1H-136   [001] ..s.61.036801: xs_tcp_data_recv: RPC:   
> XID 2f4c3f88 read 16 bytes
>kworker/1:1H-136   [001] ..s.61.036802: xs_tcp_data_recv: RPC:   
> xprt = 880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
>kworker/1:1H-136   [001] ..s.61.036802: xprt_complete_rqst: RPC:43 
> xid 2f4c3f88 complete (24 bytes received)
>kworker/1:1H-136   [001] 61.036803: xs_tcp_data_recv: RPC:   
> xs_tcp_data_recv done
>kworker/1:1H-136   [001] 61.036812: xprt_release: RPC:43 
> release request 88040b270800
> 
> 
> # unhide-tcp 
> Unhide-tcp 20130526
> Copyright © 2013 Yago Jesus & Patrick Gouin
> License GPLv3+ : GNU GPL version 3 or later
> http://www.unhide-forensics.info
> Used options: 
> [*]Starting TCP checking
> 
> Found Hidden port that not appears in ss: 805
> 

What is a “Hidden port that not appears in ss: 805”, and what does this report 
mean? Are we failing to close a socket?

Cheers
  Trond



It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-30 Thread Steven Rostedt
[ resending as a new email, as I'm assuming people do not sort their
  INBOX via last email on thread, thus my last email is sitting in the
  bottom of everyone's INBOX ]

I've hit this again. Not sure when it started, but I applied my old
debug trace_printk() patch (attached) and rebooted (4.5.7). I just
tested the latest kernel from Linus's tree (from last nights pull), and
it still gives me the problem.

Here's the trace I have:

kworker/3:1H-134   [003] ..s.61.036129: inet_csk_get_port: snum 805
kworker/3:1H-134   [003] ..s.61.036135: 
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread  
kworker/3:1H-134   [003] ..s.61.036136: inet_bind_hash: add 805
kworker/3:1H-134   [003] ..s.61.036138: 
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread  
kworker/3:1H-134   [003] 61.036139: xs_bind: RPC:   xs_bind 
4.136.255.255:805: ok (0)
kworker/3:1H-134   [003] 61.036140: xs_tcp_setup_socket: RPC:   
worker connecting xprt 880407eca800 via tcp to 192.168.23.22 (port 43651)
kworker/3:1H-134   [003] 61.036162: xs_tcp_setup_socket: RPC:   
880407eca800 connect status 115 connected 0 sock state 2
  <idle>-0   [001] ..s.   61.036450: xs_tcp_state_change: RPC:   
xs_tcp_state_change client 880407eca800...
  <idle>-0   [001] ..s.   61.036452: xs_tcp_state_change: RPC:   
state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/1:1H-136   [001] 61.036476: xprt_connect_status: RPC:43 
xprt_connect_status: retrying
kworker/1:1H-136   [001] 61.036478: xprt_prepare_transmit: RPC:
43 xprt_prepare_transmit
kworker/1:1H-136   [001] 61.036479: xprt_transmit: RPC:43 
xprt_transmit(72)
kworker/1:1H-136   [001] 61.036486: xs_tcp_send_request: RPC:   
xs_tcp_send_request(72) = 0
kworker/1:1H-136   [001] 61.036487: xprt_transmit: RPC:43 xmit 
complete
  <idle>-0   [001] ..s.   61.036789: xs_tcp_data_ready: RPC:   
xs_tcp_data_ready...
kworker/1:1H-136   [001] 61.036798: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv started
kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
reading TCP record fragment of length 24
kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
reading XID (4 bytes)
kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
reading request with XID 2f4c3f88
kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
reading CALL/REPLY flag (4 bytes)
kworker/1:1H-136   [001] 61.036801: xs_tcp_data_recv: RPC:   
read reply XID 2f4c3f88
kworker/1:1H-136   [001] ..s.61.036801: xs_tcp_data_recv: RPC:   
XID 2f4c3f88 read 16 bytes
kworker/1:1H-136   [001] ..s.61.036802: xs_tcp_data_recv: RPC:   
xprt = 880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
kworker/1:1H-136   [001] ..s.61.036802: xprt_complete_rqst: RPC:43 
xid 2f4c3f88 complete (24 bytes received)
kworker/1:1H-136   [001] 61.036803: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv done
kworker/1:1H-136   [001] 61.036812: xprt_release: RPC:43 
release request 88040b270800


# unhide-tcp 
Unhide-tcp 20130526
Copyright © 2013 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options: 
[*]Starting TCP checking

Found Hidden port that not appears in ss: 805

-- Steve
---
 net/ipv4/inet_connection_sock.c |4 
 net/ipv4/inet_hashtables.c  |5 +
 net/sunrpc/xprt.c   |5 +
 net/sunrpc/xprtsock.c   |5 +
 5 files changed, 22 insertions(+)

Index: linux-build.git/net/ipv4/inet_connection_sock.c
===
--- linux-build.git.orig/net/ipv4/inet_connection_sock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_connection_sock.c	2016-06-22 11:56:20.002662092 -0400
@@ -232,6 +232,10 @@ tb_found:
 		}
 	}
 success:
+	if (!current->mm) {
+		trace_printk("snum %d\n", snum);
+		trace_dump_stack(1);
+	}
 	if (!inet_csk(sk)->icsk_bind_hash)
 		inet_bind_hash(sk, tb, port);
 	WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
Index: linux-build.git/net/ipv4/inet_hashtables.c
===
--- linux-build.git.orig/net/ipv4/inet_hashtables.c	2016-06-22 

It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))

2016-06-22 Thread Steven Rostedt
I've hit this again. Not sure when it started, but I applied my old
debug trace_printk() patch (attached) and rebooted (4.5.7). I just
tested the latest kernel from Linus's tree (from last nights pull), and
it still gives me the problem.

Here's the trace I have:

kworker/3:1H-134   [003] ..s.61.036129: inet_csk_get_port: snum 805
kworker/3:1H-134   [003] ..s.61.036135: 
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
kworker/3:1H-134   [003] ..s.61.036136: inet_bind_hash: add 805
kworker/3:1H-134   [003] ..s.61.036138: 
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
kworker/3:1H-134   [003] 61.036139: xs_bind: RPC:   xs_bind 
4.136.255.255:805: ok (0)
kworker/3:1H-134   [003] 61.036140: xs_tcp_setup_socket: RPC:   
worker connecting xprt 880407eca800 via tcp to 192.168.23.22 (port 43651)
kworker/3:1H-134   [003] 61.036162: xs_tcp_setup_socket: RPC:   
880407eca800 connect status 115 connected 0 sock state 2
  <idle>-0   [001] ..s.   61.036450: xs_tcp_state_change: RPC:   
xs_tcp_state_change client 880407eca800...
  <idle>-0   [001] ..s.   61.036452: xs_tcp_state_change: RPC:   
state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/1:1H-136   [001] 61.036476: xprt_connect_status: RPC:43 
xprt_connect_status: retrying
kworker/1:1H-136   [001] 61.036478: xprt_prepare_transmit: RPC:
43 xprt_prepare_transmit
kworker/1:1H-136   [001] 61.036479: xprt_transmit: RPC:43 
xprt_transmit(72)
kworker/1:1H-136   [001] 61.036486: xs_tcp_send_request: RPC:   
xs_tcp_send_request(72) = 0
kworker/1:1H-136   [001] 61.036487: xprt_transmit: RPC:43 xmit 
complete
  <idle>-0   [001] ..s.   61.036789: xs_tcp_data_ready: RPC:   
xs_tcp_data_ready...
kworker/1:1H-136   [001] 61.036798: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv started
kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
reading TCP record fragment of length 24
kworker/1:1H-136   [001] 61.036799: xs_tcp_data_recv: RPC:   
reading XID (4 bytes)
kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
reading request with XID 2f4c3f88
kworker/1:1H-136   [001] 61.036800: xs_tcp_data_recv: RPC:   
reading CALL/REPLY flag (4 bytes)
kworker/1:1H-136   [001] 61.036801: xs_tcp_data_recv: RPC:   
read reply XID 2f4c3f88
kworker/1:1H-136   [001] ..s.61.036801: xs_tcp_data_recv: RPC:   
XID 2f4c3f88 read 16 bytes
kworker/1:1H-136   [001] ..s.61.036802: xs_tcp_data_recv: RPC:   
xprt = 880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
kworker/1:1H-136   [001] ..s.61.036802: xprt_complete_rqst: RPC:43 
xid 2f4c3f88 complete (24 bytes received)
kworker/1:1H-136   [001] 61.036803: xs_tcp_data_recv: RPC:   
xs_tcp_data_recv done
kworker/1:1H-136   [001] 61.036812: xprt_release: RPC:43 
release request 88040b270800


# unhide-tcp 
Unhide-tcp 20130526
Copyright © 2013 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options: 
[*]Starting TCP checking

Found Hidden port that not appears in ss: 805

-- Steve
---
 net/ipv4/inet_connection_sock.c |4 
 net/ipv4/inet_hashtables.c  |5 +
 net/sunrpc/xprt.c   |5 +
 net/sunrpc/xprtsock.c   |5 +
 5 files changed, 22 insertions(+)

Index: linux-build.git/net/ipv4/inet_connection_sock.c
===
--- linux-build.git.orig/net/ipv4/inet_connection_sock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_connection_sock.c	2016-06-22 11:56:20.002662092 -0400
@@ -232,6 +232,10 @@ tb_found:
 		}
 	}
 success:
+	if (!current->mm) {
+		trace_printk("snum %d\n", snum);
+		trace_dump_stack(1);
+	}
 	if (!inet_csk(sk)->icsk_bind_hash)
 		inet_bind_hash(sk, tb, port);
 	WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
Index: linux-build.git/net/ipv4/inet_hashtables.c
===
--- linux-build.git.orig/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.948267360 -0400
@@ -93,6 +93,11 @@ void inet_bind_bucket_destroy(struct kme
 void