[jira] [Updated] (TS-3299) Inactivity timeout audit broken
[ https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sudheer Vinukonda updated TS-3299:
----------------------------------
    Affects Version/s: 5.2.0

Inactivity timeout audit broken
-------------------------------
                 Key: TS-3299
                 URL: https://issues.apache.org/jira/browse/TS-3299
             Project: Traffic Server
          Issue Type: Bug
          Components: Core
    Affects Versions: 5.2.0
            Reporter: Sudheer Vinukonda
             Fix For: 5.3.0

The patch in TS-3196 seems to result in an fd leak in our prod hosts: a bunch of sockets are hung in CLOSE_WAIT state, stuck forever (they remain leaked for days after stopping the traffic). Debugging further, it seems that the inactivity timeout audit is broken by this patch.

Some info below for the leaked sockets (in CLOSE_WAIT state):

{code}
$ ss -s ; sudo traffic_line -r proxy.process.net.connections_currently_open; sudo traffic_line -r proxy.process.http.current_client_connections; sudo traffic_line -r proxy.process.http.current_server_connections; sudo ls -l /proc/$(pidof traffic_server)/fd/ 2>/dev/null | wc -l
Total: 29367 (kernel 29437)
TCP:   78235 (estab 5064, closed 46593, orphaned 0, synrecv 0, timewait 15/0), ports 918

Transport Total     IP        IPv6
*         29437     -         -
RAW       0         0         0
UDP       16        13        3
TCP       31642     31637     5
INET      31658     31650     8
FRAG      0         0         0

Password:
27689
1
1
27939
{code}

A snippet from lsof -p $(pidof traffic_server):

{code}
[ET_NET 10024 nobody 240u IPv4 2138575429 0t0 TCP 67.195.33.62:443->66.87.139.114:29381 (CLOSE_WAIT)
[ET_NET 10024 nobody 241u IPv4 2137093945 0t0 TCP 67.195.33.62:443->201.122.209.180:49274 (CLOSE_WAIT)
[ET_NET 10024 nobody 243u IPv4 2136018789 0t0 TCP 67.195.33.62:443->173.225.111.9:38133 (CLOSE_WAIT)
[ET_NET 10024 nobody 245u IPv4 2135996293 0t0 TCP 67.195.33.62:443->172.243.79.180:52701 (CLOSE_WAIT)
[ET_NET 10024 nobody 248u IPv4 2136468896 0t0 TCP 67.195.33.62:443->173.225.111.82:42273 (CLOSE_WAIT)
[ET_NET 10024 nobody 253u IPv4 2140213864 0t0 TCP 67.195.33.62:443->174.138.185.120:34936 (CLOSE_WAIT)
[ET_NET 10024 nobody 259u IPv4 2137861176 0t0 TCP 67.195.33.62:443->76.199.250.133:60631 (CLOSE_WAIT)
[ET_NET 10024 nobody 260u IPv4 2139081493 0t0 TCP 67.195.33.62:443->187.74.154.214:58800 (CLOSE_WAIT)
[ET_NET 10024 nobody 261u IPv4 2134948565 0t0 TCP 67.195.33.62:443->23.242.49.117:4127 (CLOSE_WAIT)
[ET_NET 10024 nobody 262u IPv4 2135708046 0t0 TCP 67.195.33.62:443->66.241.71.243:50318 (CLOSE_WAIT)
[ET_NET 10024 nobody 263u IPv4 2138896897 0t0 TCP 67.195.33.62:443->73.35.151.106:52414 (CLOSE_WAIT)
[ET_NET 10024 nobody 264u IPv4 2135589029 0t0 TCP 67.195.33.62:443->96.251.12.27:62426 (CLOSE_WAIT)
[ET_NET 10024 nobody 265u IPv4 2134930235 0t0 TCP 67.195.33.62:443->207.118.3.196:50690 (CLOSE_WAIT)
[ET_NET 10024 nobody 267u IPv4 2137837515 0t0 TCP 67.195.33.62:443->98.112.195.98:52028 (CLOSE_WAIT)
[ET_NET 10024 nobody 269u IPv4 2135272855 0t0 TCP 67.195.33.62:443->24.1.230.25:57265 (CLOSE_WAIT)
[ET_NET 10024 nobody 270u IPv4 2135820802 0t0 TCP 67.195.33.62:443->24.75.122.66:14345 (CLOSE_WAIT)
[ET_NET 10024 nobody 271u IPv4 2135475042 0t0 TCP 67.195.33.62:443->65.102.35.112:49188 (CLOSE_WAIT)
[ET_NET 10024 nobody 272u IPv4 2135328974 0t0 TCP 67.195.33.62:443->209.242.195.252:54890 (CLOSE_WAIT)
[ET_NET 10024 nobody 273u IPv4 2137542791 0t0 TCP 67.195.33.62:443->76.79.183.188:47048 (CLOSE_WAIT)
[ET_NET 10024 nobody 274u IPv4 2134806135 0t0 TCP 67.195.33.62:443->189.251.149.36:58106 (CLOSE_WAIT)
[ET_NET 10024 nobody 275u IPv4 2140126017 0t0 TCP 67.195.33.62:443->68.19.173.44:1397 (CLOSE_WAIT)
[ET_NET 10024 nobody 276u IPv4 2134636089 0t0 TCP 67.195.33.62:443->67.44.192.72:22112 (CLOSE_WAIT)
[ET_NET 10024 nobody 278u IPv4 2134708339 0t0 TCP 67.195.33.62:443->107.220.216.155:51242 (CLOSE_WAIT)
[ET_NET 10024 nobody 279u IPv4 2134580888 0t0 TCP 67.195.33.62:443->50.126.116.209:59432 (CLOSE_WAIT)
[ET_NET 10024 nobody 281u IPv4 2134868131 0t0 TCP 67.195.33.62:443->108.38.255.44:4612 (CLOSE_WAIT)
[ET_NET 10024 nobody 282u IPv4 2139275601 0t0 TCP 67.195.33.62:443->168.99.78.5:63345 (CLOSE_WAIT)
{code}
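As an aside (not part of the original report), a listing like the one above can be summarized to track whether the leak keeps growing after traffic stops. The sketch below is a hypothetical helper, assuming the standard lsof column layout where the NAME field is `local->remote:port`; it tallies CLOSE_WAIT entries per remote address from a saved capture (e.g. `lsof -nP -p $(pidof traffic_server) > lsof.out`):

```shell
# Hypothetical helper: tally leaked CLOSE_WAIT sockets per remote host
# from captured lsof output. Usage: tally_close_wait lsof.out
tally_close_wait() {
    awk '/\(CLOSE_WAIT\)/ {
        for (i = 1; i <= NF; i++) {
            if ($i ~ /->/) {                 # NAME field: local->remote:port
                split($i, ep, "->")          # ep[2] = remote addr:port
                sub(/:[0-9]+$/, "", ep[2])   # keep just the remote address
                count[ep[2]]++
            }
        }
        total++
    }
    END {
        for (h in count) printf "%6d  %s\n", count[h], h
        printf "total CLOSE_WAIT: %d\n", total + 0
    }' "$1"
}
```

Running it periodically and comparing the total against proxy.process.net.connections_currently_open gives a rough measure of how many fds the audit has stopped reaping.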
[jira] [Updated] (TS-3299) Inactivity timeout audit broken
[ https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sudheer Vinukonda updated TS-3299:
----------------------------------
    Fix Version/s: 5.3.0
[jira] [Updated] (TS-3299) Inactivity timeout audit broken
[ https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sudheer Vinukonda updated TS-3299:
----------------------------------
    Description: The patch in TS-3196 seems to result in an fd leak in our prod hosts: a bunch of sockets are hung in CLOSE_WAIT state, stuck forever (they remain leaked for days after stopping the traffic). Debugging further, it seems that the inactivity timeout audit is broken by this patch.

[NOTE: We have spdy enabled in prod, but I am not entirely sure if this bug only affects spdy connections.]