[jira] [Updated] (TS-3299) Inactivity timeout audit broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3299:
--
Affects Version/s: 5.2.0

 Inactivity timeout audit broken
 ---

 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 5.2.0
Reporter: Sudheer Vinukonda
 Fix For: 5.3.0


 The patch in TS-3196 seems to result in an fd leak in our prod. There are a
 bunch of hung sockets (in CLOSE_WAIT state), stuck forever (they remain
 leaked for days after stopping the traffic). Debugging further, it seems
 that the inactivity audit is broken by this patch.
 Some info below for the leaked sockets (in CLOSE_WAIT state):
 {code}
 $ ss -s ; sudo traffic_line -r proxy.process.net.connections_currently_open; sudo traffic_line -r proxy.process.http.current_client_connections; sudo traffic_line -r proxy.process.http.current_server_connections; sudo ls -l /proc/$(pidof traffic_server)/fd/ 2>/dev/null | wc -l
 Total: 29367 (kernel 29437)
 TCP:   78235 (estab 5064, closed 46593, orphaned 0, synrecv 0, timewait 15/0), ports 918
 
 Transport Total     IP        IPv6
 *         29437     -         -
 RAW       0         0         0
 UDP       16        13        3
 TCP       31642     31637     5
 INET      31658     31650     8
 FRAG      0         0         0
 Password: 
 27689
 1
 1
 27939
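 # For reference: a shorter way to count just the CLOSE_WAIT sockets,
 # assuming an iproute2 ss that supports state filters:
 # $ ss -tn state close-wait | wc -l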
 A snippet from lsof -p $(pidof traffic_server)
 [ET_NET 10024 nobody  240u  IPv4 2138575429  0t0  TCP 67.195.33.62:443->66.87.139.114:29381 (CLOSE_WAIT)
 [ET_NET 10024 nobody  241u  IPv4 2137093945  0t0  TCP 67.195.33.62:443->201.122.209.180:49274 (CLOSE_WAIT)
 [ET_NET 10024 nobody  243u  IPv4 2136018789  0t0  TCP 67.195.33.62:443->173.225.111.9:38133 (CLOSE_WAIT)
 [ET_NET 10024 nobody  245u  IPv4 2135996293  0t0  TCP 67.195.33.62:443->172.243.79.180:52701 (CLOSE_WAIT)
 [ET_NET 10024 nobody  248u  IPv4 2136468896  0t0  TCP 67.195.33.62:443->173.225.111.82:42273 (CLOSE_WAIT)
 [ET_NET 10024 nobody  253u  IPv4 2140213864  0t0  TCP 67.195.33.62:443->174.138.185.120:34936 (CLOSE_WAIT)
 [ET_NET 10024 nobody  259u  IPv4 2137861176  0t0  TCP 67.195.33.62:443->76.199.250.133:60631 (CLOSE_WAIT)
 [ET_NET 10024 nobody  260u  IPv4 2139081493  0t0  TCP 67.195.33.62:443->187.74.154.214:58800 (CLOSE_WAIT)
 [ET_NET 10024 nobody  261u  IPv4 2134948565  0t0  TCP 67.195.33.62:443->23.242.49.117:4127 (CLOSE_WAIT)
 [ET_NET 10024 nobody  262u  IPv4 2135708046  0t0  TCP 67.195.33.62:443->66.241.71.243:50318 (CLOSE_WAIT)
 [ET_NET 10024 nobody  263u  IPv4 2138896897  0t0  TCP 67.195.33.62:443->73.35.151.106:52414 (CLOSE_WAIT)
 [ET_NET 10024 nobody  264u  IPv4 2135589029  0t0  TCP 67.195.33.62:443->96.251.12.27:62426 (CLOSE_WAIT)
 [ET_NET 10024 nobody  265u  IPv4 2134930235  0t0  TCP 67.195.33.62:443->207.118.3.196:50690 (CLOSE_WAIT)
 [ET_NET 10024 nobody  267u  IPv4 2137837515  0t0  TCP 67.195.33.62:443->98.112.195.98:52028 (CLOSE_WAIT)
 [ET_NET 10024 nobody  269u  IPv4 2135272855  0t0  TCP 67.195.33.62:443->24.1.230.25:57265 (CLOSE_WAIT)
 [ET_NET 10024 nobody  270u  IPv4 2135820802  0t0  TCP 67.195.33.62:443->24.75.122.66:14345 (CLOSE_WAIT)
 [ET_NET 10024 nobody  271u  IPv4 2135475042  0t0  TCP 67.195.33.62:443->65.102.35.112:49188 (CLOSE_WAIT)
 [ET_NET 10024 nobody  272u  IPv4 2135328974  0t0  TCP 67.195.33.62:443->209.242.195.252:54890 (CLOSE_WAIT)
 [ET_NET 10024 nobody  273u  IPv4 2137542791  0t0  TCP 67.195.33.62:443->76.79.183.188:47048 (CLOSE_WAIT)
 [ET_NET 10024 nobody  274u  IPv4 2134806135  0t0  TCP 67.195.33.62:443->189.251.149.36:58106 (CLOSE_WAIT)
 [ET_NET 10024 nobody  275u  IPv4 2140126017  0t0  TCP 67.195.33.62:443->68.19.173.44:1397 (CLOSE_WAIT)
 [ET_NET 10024 nobody  276u  IPv4 2134636089  0t0  TCP 67.195.33.62:443->67.44.192.72:22112 (CLOSE_WAIT)
 [ET_NET 10024 nobody  278u  IPv4 2134708339  0t0  TCP 67.195.33.62:443->107.220.216.155:51242 (CLOSE_WAIT)
 [ET_NET 10024 nobody  279u  IPv4 2134580888  0t0  TCP 67.195.33.62:443->50.126.116.209:59432 (CLOSE_WAIT)
 [ET_NET 10024 nobody  281u  IPv4 2134868131  0t0  TCP 67.195.33.62:443->108.38.255.44:4612 (CLOSE_WAIT)
 [ET_NET 10024 nobody  282u  IPv4 2139275601  0t0  TCP 67.195.33.62:443->168.99.78.5:63345 (CLOSE_WAIT)
 [ET_NET 10024 

[jira] [Updated] (TS-3299) Inactivity timeout audit broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3299:
--
Fix Version/s: 5.3.0


Jenkins build is back to normal : clang-analyzer #280

2015-01-15 Thread jenkins
See https://ci.trafficserver.apache.org/job/clang-analyzer/280/changes



[jira] [Updated] (TS-3299) InactivityCop broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3299:
--
Summary: InactivityCop broken  (was: Inactivity timeout audit broken)

 InactivityCop broken
 

 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 5.2.0
Reporter: Sudheer Vinukonda
 Fix For: 5.3.0


 The patch in TS-3196 seems to result in an fd leak in our prod. There are a
 bunch of hung sockets (in CLOSE_WAIT state), stuck forever (they remain
 leaked for days after stopping the traffic). Debugging further, it seems
 that the inactivity audit is broken by this patch. [NOTE: We have spdy
 enabled in prod, but I am not entirely sure whether this bug only affects
 spdy connections.]

[jira] [Updated] (TS-3299) InactivityCop broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3299:
--
Description: 
The patch in TS-3196 seems to result in an fd leak in our prod. There are a
bunch of hung sockets (in CLOSE_WAIT state), stuck forever (they remain
leaked for days after stopping the traffic). Debugging further, it seems that
the InactivityCop is broken by this patch. [NOTE: We have spdy enabled in
prod, but I am not entirely sure whether this bug only affects spdy
connections.]



[jira] [Updated] (TS-3294) 5.3.0 Coverity Fixes

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3294:
--
Assignee: Sudheer Vinukonda

 5.3.0 Coverity Fixes
 

 Key: TS-3294
 URL: https://issues.apache.org/jira/browse/TS-3294
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Quality
Reporter: Sudheer Vinukonda
Assignee: Sudheer Vinukonda
 Fix For: 5.3.0


 Tracker Jira for 5.3.0 Coverity Fixes (Sudheer Vinukonda)





[jira] [Updated] (TS-3298) simplify `make release`

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3298:
--
Fix Version/s: 5.3.0  (was: 5.2.0)

 simplify `make release`
 ---

 Key: TS-3298
 URL: https://issues.apache.org/jira/browse/TS-3298
 Project: Traffic Server
  Issue Type: Improvement
  Components: Build
Reporter: Igor Galić
 Fix For: 5.3.0

 Attachments: 
 0001-remove-confusing-make-targets-change-version-separat.patch


 Our release process is *still* confusing ;)
 I have tried to streamline it a bit.





[jira] [Updated] (TS-3298) simplify `make release`

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3298:
--
Fix Version/s: 5.2.0



[jira] [Updated] (TS-3293) Need to review various protocol accept objects and make them more widely available

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3293:
--
Fix Version/s: 5.3.0

 Need to review various protocol accept objects and make them more widely 
 available
 --

 Key: TS-3293
 URL: https://issues.apache.org/jira/browse/TS-3293
 Project: Traffic Server
  Issue Type: Bug
Reporter: Susan Hinrichs
Assignee: Susan Hinrichs
 Fix For: 5.3.0


 This came up most recently in propagating tr-pass information for TS-3292.
 The early configuration is being duplicated in too many objects, and the
 information is being propagated differently for HTTP and SSL (who knows what
 is happening with SPDY). We should take a step back to review and unify this
 information.
 Alan took a first pass at this review with his Early Intervention talk from
 the Fall 2014 summit:
 https://www.dropbox.com/s/4vw91czj41rdxjo/ATS-Early-Intervention.pptx?dl=0





[jira] [Updated] (TS-3294) 5.3.0 Coverity Fixes

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3294:
--
Fix Version/s: 5.3.0




[jira] [Created] (TS-3299) Inactivity timeout audit broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)
Sudheer Vinukonda created TS-3299:
-

 Summary: Inactivity timeout audit broken
 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Sudheer Vinukonda


The patch in TS-3196 seems to result in an fd leak in our prod. There are a
bunch of hung sockets (in CLOSE_WAIT state), stuck forever (they remain
leaked for days after stopping the traffic). Debugging further, it seems that
the inactivity audit is broken by this patch.


[jira] [Commented] (TS-3297) encapsulate parse errors

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278912#comment-14278912
 ] 

ASF subversion and git services commented on TS-3297:
-

Commit be1c8584edc8dd4eb733c2e8c75dfc9fff2fd6f6 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=be1c858 ]

TS-3297: remove dead store


 encapsulate parse errors
 

 Key: TS-3297
 URL: https://issues.apache.org/jira/browse/TS-3297
 Project: Traffic Server
  Issue Type: Improvement
  Components: Configuration, Core
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 The core configuration file parsing routines often (but not always) return an 
 allocated error message on error. Start replacing this with a parse error 
 object so that the memory management is more robust.





[jira] [Comment Edited] (TS-3299) Inactivity timeout audit broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278918#comment-14278918
 ] 

Sudheer Vinukonda edited comment on TS-3299 at 1/15/15 4:44 PM:


Below's a possible fix (yet to be tested):

{code}
diff --git a/iocore/net/UnixNetVConnection.cc b/iocore/net/UnixNetVConnection.cc
index 92cb7c4..104f0ea 100644
--- a/iocore/net/UnixNetVConnection.cc
+++ b/iocore/net/UnixNetVConnection.cc
@@ -137,6 +137,9 @@ read_signal_and_update(int event, UnixNetVConnection *vc)
   vc->recursion++;
   if (vc->read.vio._cont) {
     vc->read.vio._cont->handleEvent(event, &vc->read.vio);
+  } else {
+    ink_assert(event == VC_EVENT_INACTIVITY_TIMEOUT || event == VC_EVENT_EOS || event == VC_EVENT_ERROR || event == VC_EVENT_ACTIVE_TIMEOUT);
+    vc->closed = 1;
   }
   if (!--vc->recursion && vc->closed) {
     /* BZ  31932 */
{code}


was (Author: sudheerv):
Below's a possible fix (yet to be tested):

{code}
diff --git a/iocore/net/UnixNetVConnection.cc b/iocore/net/UnixNetVConnection.cc
index 92cb7c4..104f0ea 100644
--- a/iocore/net/UnixNetVConnection.cc
+++ b/iocore/net/UnixNetVConnection.cc
@@ -137,6 +137,9 @@ read_signal_and_update(int event, UnixNetVConnection *vc)
   vc->recursion++;
   if (vc->read.vio._cont) {
     vc->read.vio._cont->handleEvent(event, &vc->read.vio);
+  } else {
+    ink_assert(event == VC_EVENT_INACTIVITY_TIMEOUT);
+    vc->closed = 1;
   }
   if (!--vc->recursion && vc->closed) {
     /* BZ  31932 */
{code}


[jira] [Updated] (TS-3272) TS_SSL_SNI_HOOK continuously called

2015-01-15 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-3272:
---
Attachment: ts-3272-5.diff

TS-3272-5.diff contains a fix for the looping SNI callback.

The original code was calling all the plugin SNI callbacks on every master SNI 
callback.

Changed this to call every plugin SNI callback at most once.  This is 
consistent with the semantics of the preaccept callback.

 TS_SSL_SNI_HOOK continuously called 
 

 Key: TS-3272
 URL: https://issues.apache.org/jira/browse/TS-3272
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Lev Stipakov
Assignee: Susan Hinrichs
 Fix For: 5.3.0

 Attachments: SSLNetVConnection.patch, plugin.cc, ts-3272-2.diff, 
 ts-3272-3.diff, ts-3272-4.diff, ts-3272-5.diff, ts-3272-master.diff, 
 ts-3272.diff


 I have created a simple plugin with a TS_SSL_SNI_HOOK handler. The handler
 starts a thread. The thread function sleeps briefly and then calls
 TSVConnTunnel.
 The problem is that for the duration of the sleep, the TS_SSL_SNI_HOOK
 handler gets called continuously.
 I would expect the TS_SSL_SNI_HOOK handler to be called just once.
 See the plugin code in the attachment.







[jira] [Commented] (TS-3196) Core dump when handle event check_inactivity

2015-01-15 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278915#comment-14278915
 ] 

Sudheer Vinukonda commented on TS-3196:
---

This ticket seems to have broken the inactivity timeout audit. Opened TS-3299 
to track the issue.

 Core dump when handle event check_inactivity
 

 Key: TS-3196
 URL: https://issues.apache.org/jira/browse/TS-3196
 Project: Traffic Server
  Issue Type: Bug
  Components: Network
Reporter: zouyu
 Fix For: 5.2.0


 The core dump is the same as TS-2882. We hit this error when using the spdy
 protocol.
 (gdb) 
 #0  0x0070f3f7 in handleEvent (event=105, vc=0x2b5f1c27baf0) at ../../iocore/eventsystem/I_Continuation.h:146
 #1  read_signal_and_update (event=105, vc=0x2b5f1c27baf0) at UnixNetVConnection.cc:138
 #2  0x0071381f in UnixNetVConnection::mainEvent (this=0x2b5f1c27baf0, event=<value optimized out>, e=<value optimized out>) at UnixNetVConnection.cc:1066
 #3  0x007080c5 in handleEvent (this=0x1bd9130, event=<value optimized out>, e=0x13888d0) at ../../iocore/eventsystem/I_Continuation.h:146
 #4  InactivityCop::check_inactivity (this=0x1bd9130, event=<value optimized out>, e=0x13888d0) at UnixNet.cc:80
 #5  0x0073409f in handleEvent (this=0x2b5ec2eb2010, e=0x13888d0, calling_code=2) at I_Continuation.h:146
 #6  EThread::process_event (this=0x2b5ec2eb2010, e=0x13888d0, calling_code=2) at UnixEThread.cc:145
 #7  0x00734c13 in EThread::execute (this=0x2b5ec2eb2010) at UnixEThread.cc:224
 #8  0x0073344a in spawn_thread_internal (a=0x174b830) at Thread.cc:88
 #9  0x2b5c8b3de851 in start_thread () from /lib64/libpthread.so.0
 #10 0x00366a4e894d in clone () from /lib64/libc.so.6
 And actually, the UnixNetVConnection pointer is an SSLNetVConnection
 pointer, and the crashed continuation is 'ProtocolProbeTrampoline'.
 The root cause lies in 'ProtocolProbeTrampoline::ioCompletionEvent': after
 the ssl handshake finishes, this function hands processing over to the next
 upper protocol, such as http or spdy, in our case.
 From the core dump, when we use ssl and spdy, 'ProtocolProbeTrampoline::ioCompletionEvent'
 first disables the read io and then hands the continuation read.vio._cont
 over from 'ProtocolProbeTrampoline' to 'SpdyClientSession' in two steps:
 1. First disable the current io using 'netvc->do_io_read(this, 0, NULL)'.
 2. Change read.vio._cont using 'probeParent->endpoint[key]->accept(netvc, this->iobuf, reader)',
 which in our case is 'SpdySessionAccept::accept'; in the end this calls
 'SpdyClientSession::state_session_start', which calls 'this->vc->do_io_read'
 to set read.vio._cont to the SpdyClientSession.
 The main point is that 'SpdySessionAccept::accept' schedules an
 SpdyClientSession event on the net threads via
 'eventProcessor.schedule_imm(sm, ET_NET)', and each 'net' thread also puts a
 check_inactivity event into its queue. So, between
 'netvc->do_io_read(this, 0, NULL)' and the spdy handler being set,
 check_inactivity may be executed, and if it is, the crash occurs.





[jira] [Commented] (TS-3299) Inactivity timeout audit broken

2015-01-15 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278918#comment-14278918
 ] 

Sudheer Vinukonda commented on TS-3299:
---

Below's a possible fix (yet to be tested):

{code}
diff --git a/iocore/net/UnixNetVConnection.cc b/iocore/net/UnixNetVConnection.cc
index 92cb7c4..104f0ea 100644
--- a/iocore/net/UnixNetVConnection.cc
+++ b/iocore/net/UnixNetVConnection.cc
@@ -137,6 +137,9 @@ read_signal_and_update(int event, UnixNetVConnection *vc)
   vc->recursion++;
   if (vc->read.vio._cont) {
     vc->read.vio._cont->handleEvent(event, &vc->read.vio);
+  } else {
+    ink_assert(event == VC_EVENT_INACTIVITY_TIMEOUT);
+    vc->closed = 1;
   }
   if (!--vc->recursion && vc->closed) {
     /* BZ  31932 */
{code}
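
To try the hunk above on a checkout, one option (the patch file name here is
illustrative, not from the ticket):

{code}
$ git apply --check ts-3299-possible-fix.diff  # dry run against the working tree
$ git apply ts-3299-possible-fix.diff          # apply once the dry run is clean
{code}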


[jira] [Created] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)
Brian Geffon created TS-3301:


 Summary: TLS ticket rotation
 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, SSL
Reporter: Brian Geffon


We all know that it is bad security practice to use the same password/key all
the time. This project rotates TLS session ticket keys periodically. When an
admin runs traffic_line -x after a new ticket key is put in the key file
ssl_ticket.key, an event is generated and ATS reconfigures SSL. All the keys
are read in at the same time, and the first entry is the most recent key. A
new key is assumed to be added at the beginning of the ssl_ticket.key file,
and an old key is chopped off at the end of the file.
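
Operationally, the rotation could look like the sketch below. This assumes
raw 48-byte keys concatenated in ssl_ticket.key and keeps the two most recent
keys; the key size, key count, and paths are assumptions, not from this
ticket:

{code}
# Prepend a fresh key, drop the oldest, then tell ATS to reconfigure SSL.
head -c 48 /dev/urandom > /tmp/ticket_key.new
cat /tmp/ticket_key.new ssl_ticket.key > /tmp/ticket_key.rotated
head -c 96 /tmp/ticket_key.rotated > ssl_ticket.key   # keep the newest two keys
traffic_line -x                                       # have ATS re-read the SSL configuration
{code}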






[jira] [Updated] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon updated TS-3301:
-
Attachment: traffic_line_rotation_6.diff

 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: Brian Geffon
 Fix For: 5.3.0

 Attachments: traffic_line_rotation_6.diff


 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon updated TS-3301:
-
Description: 
We all know that it is bad security practice to use the same password/key all 
the time. This project rotates TLS session ticket keys periodically. When an 
admin runs traffic_line -x after a new ticket key is added to the key file 
ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
read in at the same time, and the first entry is the most recent key. A new 
key is expected to be prepended to the ssl_ticket.key file, and the oldest 
key dropped from the end of the file.

Author: Bin Zeng bz...@linkedin.com

  was:
We all know that it is bad security practice to use the same password/key all 
the time. This project rotates TLS session ticket keys periodically. When an 
admin runs traffic_line -x after a new ticket key is added to the key file 
ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
read in at the same time, and the first entry is the most recent key. A new 
key is expected to be prepended to the ssl_ticket.key file, and the oldest 
key dropped from the end of the file.



 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: Brian Geffon
 Fix For: 5.3.0

 Attachments: traffic_line_rotation_6.diff


 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.
 Author: Bin Zeng bz...@linkedin.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3301) TLS ticket rotation

2015-01-15 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach reassigned TS-3301:
---

Assignee: James Peach  (was: Brian Geffon)

Will review and land tomorrow.

 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: James Peach
 Fix For: 5.3.0

 Attachments: traffic_line_rotation_6.diff


 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.
 Author: Bin Zeng bz...@linkedin.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3300) Add an @internal ACL filter

2015-01-15 Thread James Peach (JIRA)
James Peach created TS-3300:
---

 Summary: Add an @internal ACL filter
 Key: TS-3300
 URL: https://issues.apache.org/jira/browse/TS-3300
 Project: Traffic Server
  Issue Type: New Feature
  Components: Configuration, Remap API
Reporter: James Peach


It's common to want remap rules that are only supposed to be used by plugins. 
Add explicit support for ACL matching on whether a request is internal or not. 

Here's an example of the proposed configuration:
{code}
map http://internal.jpeach.org/ http://127.0.0.1/ \
  @plugin=generator.so \
  @action=allow @internal

map http://external.jpeach.org/ http://private.jpeach.org/ \
  @plugin=authproxy.so @pparam=--auth-transform=head \
  @action=deny @internal

map http://private.jpeach.org/ http://private.jpeach.org/ \
  @action=allow @internal
{code}
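
For context, the flag this ACL would match is the one carried by requests 
that plugins originate inside ATS, e.g. through TSHttpConnect(). A minimal 
sketch (the helper name is made up; error handling omitted):
{code}
#include <ts/ts.h>
#include <cstring>
#include <netinet/in.h>

// Hypothetical helper: open a connection back into the proxy. Requests
// written to the returned vconn re-enter ATS marked as internal, which is
// what an @internal filter would match on.
static TSVConn
open_internal_connection()
{
  struct sockaddr_in addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.sin_family      = AF_INET;
  addr.sin_port        = htons(80);
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

  return TSHttpConnect(reinterpret_cast<struct sockaddr const *>(&addr));
}
{code}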



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3300) Add an @internal ACL filter

2015-01-15 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-3300:

Fix Version/s: 5.3.0
 Assignee: James Peach

 Add an @internal ACL filter
 ---

 Key: TS-3300
 URL: https://issues.apache.org/jira/browse/TS-3300
 Project: Traffic Server
  Issue Type: New Feature
  Components: Configuration, Remap API
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 It's common to want remap rules that are only supposed to be used by plugins. 
 Add explicit support for ACL matching on whether a request is internal or 
 not. 
 Here's an example of the proposed configuration:
 {code}
 map http://internal.jpeach.org/ http://127.0.0.1/ \
   @plugin=generator.so \
   @action=allow @internal
 map http://external.jpeach.org/ http://private.jpeach.org/ \
   @plugin=authproxy.so @pparam=--auth-transform=head \
   @action=deny @internal
 map http://private.jpeach.org/ http://private.jpeach.org/ \
   @action=allow @internal
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon reassigned TS-3301:


Assignee: Brian Geffon

 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: Brian Geffon

 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3302) Build failure on FreeBSD11

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3302:
--
Priority: Critical  (was: Major)

 Build failure on FreeBSD11
 --

 Key: TS-3302
 URL: https://issues.apache.org/jira/browse/TS-3302
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Leif Hedstrom
Priority: Critical
 Fix For: 5.3.0


 {code}
 gmake[2]: Entering directory '/usr/home/leif/apache/trafficserver/iocore/net'
   CXX  Connection.o
 In file included from Connection.cc:32:
 In file included from ./P_Net.h:102:
 In file included from ./P_Socks.h:30:
 In file included from ../../proxy/ParentSelection.h:37:
 In file included from ../../proxy/ControlMatcher.h:91:
 In file included from ../../lib/ts/IpMap.h:8:
 ../../lib/ts/IntrusiveDList.h:121:18: error: reference to 
 'bidirectional_iterator_tag' is ambiguous
 typedef std::bidirectional_iterator_tag iterator_category;
  ^
 /usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
 'std::__1::bidirectional_iterator_tag'
 struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
 forward_iterator_tag {};
  ^
 ../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
 'std::bidirectional_iterator_tag'
   struct bidirectional_iterator_tag;
  ^
 In file included from Connection.cc:32:
 In file included from ./P_Net.h:102:
 In file included from ./P_Socks.h:30:
 In file included from ../../proxy/ParentSelection.h:37:
 In file included from ../../proxy/ControlMatcher.h:91:
 ../../lib/ts/IpMap.h:140:18: error: reference to 'bidirectional_iterator_tag' 
 is ambiguous
 typedef std::bidirectional_iterator_tag iterator_category;
  ^
 /usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
 'std::__1::bidirectional_iterator_tag'
 struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
 forward_iterator_tag {};
  ^
 ../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
 'std::bidirectional_iterator_tag'
   struct bidirectional_iterator_tag;
  ^
 2 errors generated.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3302) Build failure on FreeBSD11

2015-01-15 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3302:
-

 Summary: Build failure on FreeBSD11
 Key: TS-3302
 URL: https://issues.apache.org/jira/browse/TS-3302
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Leif Hedstrom


{code}
gmake[2]: Entering directory '/usr/home/leif/apache/trafficserver/iocore/net'
  CXX  Connection.o
In file included from Connection.cc:32:
In file included from ./P_Net.h:102:
In file included from ./P_Socks.h:30:
In file included from ../../proxy/ParentSelection.h:37:
In file included from ../../proxy/ControlMatcher.h:91:
In file included from ../../lib/ts/IpMap.h:8:
../../lib/ts/IntrusiveDList.h:121:18: error: reference to 
'bidirectional_iterator_tag' is ambiguous
typedef std::bidirectional_iterator_tag iterator_category;
 ^
/usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
'std::__1::bidirectional_iterator_tag'
struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
forward_iterator_tag {};
 ^
../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
'std::bidirectional_iterator_tag'
  struct bidirectional_iterator_tag;
 ^
In file included from Connection.cc:32:
In file included from ./P_Net.h:102:
In file included from ./P_Socks.h:30:
In file included from ../../proxy/ParentSelection.h:37:
In file included from ../../proxy/ControlMatcher.h:91:
../../lib/ts/IpMap.h:140:18: error: reference to 'bidirectional_iterator_tag' 
is ambiguous
typedef std::bidirectional_iterator_tag iterator_category;
 ^
/usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
'std::__1::bidirectional_iterator_tag'
struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
forward_iterator_tag {};
 ^
../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
'std::bidirectional_iterator_tag'
  struct bidirectional_iterator_tag;
 ^
2 errors generated.
{code}
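
The ambiguity arises because IntrusiveDList.h forward-declares the tag in 
plain namespace std, while libc++ (the default standard library on FreeBSD 11) 
defines it inside the inline namespace std::__1, so unqualified lookup finds 
both declarations. A minimal sketch of the collision and a possible fix:
{code}
#include <iterator> // libc++ defines std::__1::bidirectional_iterator_tag

namespace std {
struct bidirectional_iterator_tag; // forward declaration, as in IntrusiveDList.h:49
}

// Ambiguous under libc++: lookup through the inline namespace finds both
// std::bidirectional_iterator_tag and std::__1::bidirectional_iterator_tag.
// typedef std::bidirectional_iterator_tag iterator_category;

// Possible fix (sketch): drop the forward declaration and #include <iterator>
// so only the single real definition is visible.
{code}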




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3302) Build failure on FreeBSD11

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3302:
--
Fix Version/s: 5.3.0

 Build failure on FreeBSD11
 --

 Key: TS-3302
 URL: https://issues.apache.org/jira/browse/TS-3302
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Leif Hedstrom
 Fix For: 5.3.0


 {code}
 gmake[2]: Entering directory '/usr/home/leif/apache/trafficserver/iocore/net'
   CXX  Connection.o
 In file included from Connection.cc:32:
 In file included from ./P_Net.h:102:
 In file included from ./P_Socks.h:30:
 In file included from ../../proxy/ParentSelection.h:37:
 In file included from ../../proxy/ControlMatcher.h:91:
 In file included from ../../lib/ts/IpMap.h:8:
 ../../lib/ts/IntrusiveDList.h:121:18: error: reference to 
 'bidirectional_iterator_tag' is ambiguous
 typedef std::bidirectional_iterator_tag iterator_category;
  ^
 /usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
 'std::__1::bidirectional_iterator_tag'
 struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
 forward_iterator_tag {};
  ^
 ../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
 'std::bidirectional_iterator_tag'
   struct bidirectional_iterator_tag;
  ^
 In file included from Connection.cc:32:
 In file included from ./P_Net.h:102:
 In file included from ./P_Socks.h:30:
 In file included from ../../proxy/ParentSelection.h:37:
 In file included from ../../proxy/ControlMatcher.h:91:
 ../../lib/ts/IpMap.h:140:18: error: reference to 'bidirectional_iterator_tag' 
 is ambiguous
 typedef std::bidirectional_iterator_tag iterator_category;
  ^
 /usr/include/c++/v1/iterator:353:30: note: candidate found by name lookup is 
 'std::__1::bidirectional_iterator_tag'
 struct _LIBCPP_TYPE_VIS_ONLY bidirectional_iterator_tag : public 
 forward_iterator_tag {};
  ^
 ../../lib/ts/IntrusiveDList.h:49:10: note: candidate found by name lookup is 
 'std::bidirectional_iterator_tag'
   struct bidirectional_iterator_tag;
  ^
 2 errors generated.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3303) tcpinfo: Log file can get closed when disk is full

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-3303:
-

Assignee: Leif Hedstrom

 tcpinfo: Log file can get closed when disk is full
 --

 Key: TS-3303
 URL: https://issues.apache.org/jira/browse/TS-3303
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 It's unfortunate, but it seems that TSTextLogObjectWrite() returns TS_ERROR 
 for both "disk full" and "write failed". I think, but am not 100% sure, that 
 this can cause us to delete the log object more than once, due to a race 
 condition in the code.
 My suggestion is to not close the log object, and instead hope that the log 
 rotation / cleanup code will free up disk space, such that future log writes 
 succeed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3303) tcpinfo: Log file can get closed when disk is full

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3303:
--
Fix Version/s: 5.3.0

 tcpinfo: Log file can get closed when disk is full
 --

 Key: TS-3303
 URL: https://issues.apache.org/jira/browse/TS-3303
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 It's unfortunate, but it seems that TSTextLogObjectWrite() returns TS_ERROR 
 for both "disk full" and "write failed". I think, but am not 100% sure, that 
 this can cause us to delete the log object more than once, due to a race 
 condition in the code.
 My suggestion is to not close the log object, and instead hope that the log 
 rotation / cleanup code will free up disk space, such that future log writes 
 succeed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: fedora_21-master » gcc,fedora_21,spdy #79

2015-01-15 Thread jenkins
See 
https://ci.trafficserver.apache.org/job/fedora_21-master/compiler=gcc,label=fedora_21,type=spdy/79/

--
[...truncated 17458 lines...]
0|0|96.36.6.0|96.36.6.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4399754033265211263|0|0|1|0
0|0|177.220.13.0|177.220.13.0|per_ip| |F|0|0|1970/01/01 
00:00:00|16875767461899985215|0|0|1|0
0|0|51.88.12.0|51.88.12.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4061969290583226943|0|0|1|0
0|0|14.60.0.0|14.60.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|239415512081474687|0|0|1|0
0|0|102.92.6.0|102.92.6.0|per_ip| |F|0|0|1970/01/01 
00:00:00|1137332129227578751|0|0|1|0
0|0|60.13.3.0|60.13.3.0|per_ip| |F|0|0|1970/01/01 
00:00:00|14197390378702809023|0|0|1|0
0|0|203.51.8.0|203.51.8.0|per_ip| |F|0|0|1970/01/01 
00:00:00|13615035985041572415|0|0|1|0
0|0|55.111.15.0|55.111.15.0|per_ip| |F|0|0|1970/01/01 
00:00:00|8432989278542744703|0|0|1|0
0|0|150.46.8.0|150.46.8.0|per_ip| |F|0|0|1970/01/01 
00:00:00|13328697346056062847|0|0|1|0
0|0|21.63.0.0|21.63.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|6952614330233815423|0|0|1|0
0|0|244.107.1.0|244.107.1.0|per_ip| |F|0|0|1970/01/01 
00:00:00|12483119336099556159|0|0|1|0
0|0|17.27.9.0|17.27.9.0|per_ip| |F|0|0|1970/01/01 
00:00:00|8404669866067743999|0|0|1|0
0|0|228.205.2.0|228.205.2.0|per_ip| |F|0|0|1970/01/01 
00:00:00|2895386713077538495|0|0|1|0
0|0|70.174.5.0|70.174.5.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4529860252146780415|0|0|1|0
0|0|225.210.0.0|225.210.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|11678389739593155327|0|0|1|0
0|0|214.17.4.0|214.17.4.0|per_ip| |F|0|0|1970/01/01 
00:00:00|2883867808774114623|0|0|1|0
0|0|83.100.11.0|83.100.11.0|per_ip| |F|0|0|1970/01/01 
00:00:00|2537790405278217983|0|0|1|0
0|0|173.104.15.0|173.104.15.0|per_ip| |F|0|0|1970/01/01 
00:00:00|911394549535940799|0|0|1|0
0|0|80.78.2.0|80.78.2.0|per_ip| |F|0|0|1970/01/01 
00:00:00|14462494887552707071|0|0|1|0
0|0|17.111.2.0|17.111.2.0|per_ip| |F|0|0|1970/01/01 
00:00:00|7267807459263132863|0|0|1|0
0|0|3.40.11.0|3.40.11.0|per_ip| |F|0|0|1970/01/01 
00:00:00|11579072041960787903|0|0|1|0
0|0|128.101.6.0|128.101.6.0|per_ip| |F|0|0|1970/01/01 
00:00:00|7709350586869578111|0|0|1|0
0|0|182.198.13.0|182.198.13.0|per_ip| |F|0|0|1970/01/01 
00:00:00|13551071601454609279|0|0|1|0
0|0|31.169.9.0|31.169.9.0|per_ip| |F|0|0|1970/01/01 
00:00:00|10107774636954373631|0|0|1|0
0|0|79.109.1.0|79.109.1.0|per_ip| |F|0|0|1970/01/01 
00:00:00|2563204556911855615|0|0|1|0
0|0|166.147.15.0|166.147.15.0|per_ip| |F|0|0|1970/01/01 
00:00:00|16690438423652680191|0|0|1|0
0|0|125.174.12.0|125.174.12.0|per_ip| |F|0|0|1970/01/01 
00:00:00|15615066061444725759|0|0|1|0
0|0|141.164.10.0|141.164.10.0|per_ip| |F|0|0|1970/01/01 
00:00:00|11615299697242551807|0|0|1|0
0|0|69.248.5.0|69.248.5.0|per_ip| |F|0|0|1970/01/01 
00:00:00|6084256814888038399|0|0|1|0
0|0|116.85.3.0|116.85.3.0|per_ip| |F|0|0|1970/01/01 
00:00:00|7242195152836692351|0|0|1|0
0|0|143.218.0.0|143.218.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|6521020107008611135|0|0|1|0
0|0|80.50.15.0|80.50.15.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4147214773346524415|0|0|1|0
0|0|57.94.10.0|57.94.10.0|per_ip| |F|0|0|1970/01/01 
00:00:00|5026071791723911167|0|0|1|0
0|0|29.85.2.0|29.85.2.0|per_ip| |F|0|0|1970/01/01 
00:00:00|14621794615183880575|0|0|1|0
0|0|194.121.12.0|194.121.12.0|per_ip| |F|0|0|1970/01/01 
00:00:00|14117829214976018943|0|0|1|0
0|0|213.51.8.0|213.51.8.0|per_ip| |F|0|0|1970/01/01 
00:00:00|5260504740229391231|0|0|1|0
0|0|230.18.4.0|230.18.4.0|per_ip| |F|0|0|1970/01/01 
00:00:00|10768572095690624319|0|0|1|0
0|0|164.219.5.0|164.219.5.0|per_ip| |F|0|0|1970/01/01 
00:00:00|3105496754250598207|0|0|1|0
0|0|52.184.6.0|52.184.6.0|per_ip| |F|0|0|1970/01/01 
00:00:00|1021934490480831|0|0|1|0
0|0|240.45.0.0|240.45.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|17270606309438257599|0|0|1|0
0|0|215.142.5.0|215.142.5.0|per_ip| |F|0|0|1970/01/01 
00:00:00|2638784259838792511|0|0|1|0
0|0|164.65.4.0|164.65.4.0|per_ip| |F|0|0|1970/01/01 
00:00:00|12219986955307756607|0|0|1|0
0|0|46.140.3.0|46.140.3.0|per_ip| |F|0|0|1970/01/01 
00:00:00|18065683639601501439|0|0|1|0
0|0|117.181.7.0|117.181.7.0|per_ip| |F|0|0|1970/01/01 
00:00:00|24547227623807|0|0|1|0
0|0|228.128.0.0|228.128.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4063706092132838015|0|0|1|0
0|0|116.215.11.0|116.215.11.0|per_ip| |F|0|0|1970/01/01 
00:00:00|9266402555104158591|0|0|1|0
0|0|22.185.4.0|22.185.4.0|per_ip| |F|0|0|1970/01/01 
00:00:00|13022231484020723007|0|0|1|0
0|0|147.225.15.0|147.225.15.0|per_ip| |F|0|0|1970/01/01 
00:00:00|4887674450441148287|0|0|1|0
0|0|5.124.14.0|5.124.14.0|per_ip| |F|0|0|1970/01/01 
00:00:00|627245539450020031|0|0|1|0
0|0|177.153.14.0|177.153.14.0|per_ip| |F|0|0|1970/01/01 
00:00:00|5668871752088226687|0|0|1|0
0|0|246.106.0.0|246.106.0.0|per_ip| |F|0|0|1970/01/01 
00:00:00|11899907850820445439|0|0|1|0
0|0|109.13.6.0|109.13.6.0|per_ip| |F|0|0|1970/01/01 
00:00:00|14117134646447333119|0|0|1|0
0|0|215.99.5.0|215.99.5.0|per_ip| |F|0|0|1970/01/01 

[jira] [Commented] (TS-3303) tcpinfo: Log file can get closed when disk is full

2015-01-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279658#comment-14279658
 ] 

ASF subversion and git services commented on TS-3303:
-

Commit 49a2c805ac0d5032de9c1fb35766e49bb18960e8 in trafficserver's branch 
refs/heads/master from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=49a2c80 ]

TS-3303 tcpinfo: we can close the log file object multiple times


 tcpinfo: Log file can get closed when disk is full
 --

 Key: TS-3303
 URL: https://issues.apache.org/jira/browse/TS-3303
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 It's unfortunate, but it seems that TSTextLogObjectWrite() returns TS_ERROR 
 for both "disk full" and "write failed". I think, but am not 100% sure, that 
 this can cause us to delete the log object more than once, due to a race 
 condition in the code.
 My suggestion is to not close the log object, and instead hope that the log 
 rotation / cleanup code will free up disk space, such that future log writes 
 succeed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-2758) unable to get file descriptor from transaction close

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-2758:
-

Assignee: Leif Hedstrom

 unable to get file descriptor from transaction close
 

 Key: TS-2758
 URL: https://issues.apache.org/jira/browse/TS-2758
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Reporter: James Peach
Assignee: Leif Hedstrom
 Fix For: sometime


 While testing the {{tcpinfo}} plugin, I found that {{TSHttpSsnClientFdGet}} 
 fails from the {{TS_HTTP_TXN_CLOSE_HOOK}}. It would be really useful for the 
 {{tcpinfo}} plugin to be able to collect statistics at the end of 
 transactions and sessions.
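
A minimal sketch of the failing pattern, using the TS API names from the 
report (the continuation wiring is assumed; plugin registration omitted):
{code}
#include <ts/ts.h>

static int
txn_close_handler(TSCont contp, TSEvent event, void *edata)
{
  TSHttpTxn txnp = static_cast<TSHttpTxn>(edata);
  TSHttpSsn ssnp = TSHttpTxnSsnGet(txnp);
  int fd         = -1;

  // Reported to fail here: at TS_HTTP_TXN_CLOSE_HOOK the client fd can no
  // longer be retrieved, so end-of-transaction stats cannot be collected.
  if (TSHttpSsnClientFdGet(ssnp, &fd) != TS_SUCCESS) {
    TSDebug("tcpinfo", "TSHttpSsnClientFdGet failed at TXN_CLOSE");
  }

  TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
  return 0;
}

void
TSPluginInit(int argc, const char *argv[])
{
  TSHttpHookAdd(TS_HTTP_TXN_CLOSE_HOOK, TSContCreate(txn_close_handler, nullptr));
}
{code}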



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-2758) unable to get file descriptor from transaction close

2015-01-15 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2758:
--
Assignee: (was: Leif Hedstrom)

 unable to get file descriptor from transaction close
 

 Key: TS-2758
 URL: https://issues.apache.org/jira/browse/TS-2758
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Reporter: James Peach
 Fix For: sometime


 While testing the {{tcpinfo}} plugin, I found that {{TSHttpSsnClientFdGet}} 
 fails from the {{TS_HTTP_TXN_CLOSE_HOOK}}. It would be really useful for the 
 {{tcpinfo}} plugin to be able to collect statistics at the end of 
 transactions and sessions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3298) simplify `make release`

2015-01-15 Thread JIRA
Igor Galić created TS-3298:
--

 Summary: simplify `make release`
 Key: TS-3298
 URL: https://issues.apache.org/jira/browse/TS-3298
 Project: Traffic Server
  Issue Type: Improvement
  Components: Build
Reporter: Igor Galić


Our release process is *still* confusing ;)
I have tried to streamline it a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3298) simplify `make release`

2015-01-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić updated TS-3298:
---
Attachment: 0001-remove-confusing-make-targets-change-version-separat.patch

 simplify `make release`
 ---

 Key: TS-3298
 URL: https://issues.apache.org/jira/browse/TS-3298
 Project: Traffic Server
  Issue Type: Improvement
  Components: Build
Reporter: Igor Galić
 Attachments: 
 0001-remove-confusing-make-targets-change-version-separat.patch


 Our release process is *still* confusing ;)
 I have tried to streamline it a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3298) simplify `make release`

2015-01-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/TS-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278537#comment-14278537
 ] 

Igor Galić commented on TS-3298:


from #launchpad:
{code}
02:40  cjwatson You can do it if you like, but you don't need to
02:40  cjwatson It's common enough to just rename for the purpose of packaging
02:40  igalic cjwatson: that's just for our RC process..
02:41  igalic cjwatson: and we do rename ourselves when the rc is final.
02:41  igalic but this would make rcs testable as packages of themselves.
02:41  igalic without much fuzz.
02:41  cjwatson Sure, I'm just saying that you don't need to switch from -rc 
to ~rc upstream in order to satisfy Debian-format packaging rules
02:41  cjwatson Nothing stopping you if that's your primary target, either
02:42  igalic yeah, it's not
{code}

 simplify `make release`
 ---

 Key: TS-3298
 URL: https://issues.apache.org/jira/browse/TS-3298
 Project: Traffic Server
  Issue Type: Improvement
  Components: Build
Reporter: Igor Galić
 Attachments: 
 0001-remove-confusing-make-targets-change-version-separat.patch


 Our release process is *still* confusing ;)
 I have tried to streamline it a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: clang-analyzer #279

2015-01-15 Thread jenkins
See https://ci.trafficserver.apache.org/job/clang-analyzer/279/

--
[...truncated 1793 lines...]
Making all in tcpinfo
make[2]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/tcpinfo'
  CXX  tcpinfo.lo
  CXXLD    tcpinfo.la
make[2]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/tcpinfo'
Making all in experimental
make[2]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental'
Making all in authproxy
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
  CXX  utils.lo
  CXX  authproxy.lo
  CXXLD    authproxy.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
Making all in background_fetch
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
  CXX  background_fetch.lo
  CXXLD    background_fetch.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
Making all in balancer
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
  CXX  roundrobin.lo
  CXX  balancer.lo
  CXX  hash.lo
  CXXLD    balancer.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
Making all in buffer_upload
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
  CXX  buffer_upload.lo
  CXXLD    buffer_upload.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
Making all in channel_stats
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
  CXX  channel_stats.lo
  CXXLD    channel_stats.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
Making all in collapsed_connection
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
  CXX  collapsed_connection.lo
  CXX  MurmurHash3.lo
  CXXLD    collapsed_connection.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
Making all in custom_redirect
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
  CXX  custom_redirect.lo
  CXXLD    custom_redirect.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
Making all in epic
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
  CXX  epic.lo
  CXXLD    epic.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
Making all in escalate
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
  CXX  escalate.lo
  CXXLD    escalate.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
Making all in esi
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
  CXX  esi.lo
  CXX  serverIntercept.lo
  CXX  combo_handler.lo
  CXX  lib/DocNode.lo
  CXX  lib/EsiParser.lo
  CXX  lib/EsiGzip.lo
  CXX  lib/EsiGunzip.lo
  CXX  lib/EsiProcessor.lo
In file included from combo_handler.cc:27:
In file included from 
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/vector:64:
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/bits/stl_vector.h:771:9:
 warning: Returning null reference
  { return *(this->_M_impl._M_start + __n); }
^~
1 warning generated.
  CXX  lib/Expression.lo
  CXX  lib/FailureInfo.lo
  CXX  lib/HandlerManager.lo
  CXX  lib/Stats.lo
  CXX  lib/Utils.lo
  CXX  lib/Variables.lo
  CXX  lib/gzip.lo
  CXX  test/print_funcs.lo
  CXX  test/HandlerMap.lo
  CXX  test/StubIncludeHandler.lo
  CXX  test/TestHandlerManager.lo
  CXX  fetcher/HttpDataFetcherImpl.lo
  CXXLD    libesicore.la
  CXXLD    libtest.la
  CXXLD    esi.la
  CXXLD    combo_handler.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
Making all in 

Build failed in Jenkins: tsqa-master #28

2015-01-15 Thread jenkins
See https://ci.trafficserver.apache.org/job/tsqa-master/28/

--
[...truncated 35916 lines...]
/lib64/libc.so.6(abort+0x148)[0x2b9eeed22cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2b9eec28ab99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2b9eec28ac52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2b9eec28acf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2b9eec28863e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
--
traffic_server - STACK TRACE: 
/opt/jenkins/tsqa-master/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0xc3)[0x504b85]
/lib64/libpthread.so.0(+0xf130)[0x2ad56b691130]
/lib64/libc.so.6(gsignal+0x39)[0x2ad56c65f5c9]
/lib64/libc.so.6(abort+0x148)[0x2ad56c660cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2ad569bc8b99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2ad569bc8c52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2ad569bc8cf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2ad569bc663e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
FAIL: detected a crash
traffic_server - STACK TRACE: 
/opt/jenkins/tsqa-master/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0xc3)[0x504b85]
/lib64/libpthread.so.0(+0xf130)[0x2b9eedd53130]
/lib64/libc.so.6(gsignal+0x39)[0x2b9eeed215c9]
/lib64/libc.so.6(abort+0x148)[0x2b9eeed22cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2b9eec28ab99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2b9eec28ac52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2b9eec28acf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2b9eec28863e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
--
traffic_server - STACK TRACE: 
/opt/jenkins/tsqa-master/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0xc3)[0x504b85]
/lib64/libpthread.so.0(+0xf130)[0x2ad56b691130]
/lib64/libc.so.6(gsignal+0x39)[0x2ad56c65f5c9]
/lib64/libc.so.6(abort+0x148)[0x2ad56c660cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2ad569bc8b99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2ad569bc8c52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2ad569bc8cf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2ad569bc663e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
FAIL: detected a crash
traffic_server - STACK TRACE: 
/opt/jenkins/tsqa-master/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0xc3)[0x504b85]
/lib64/libpthread.so.0(+0xf130)[0x2b9eedd53130]
/lib64/libc.so.6(gsignal+0x39)[0x2b9eeed215c9]
/lib64/libc.so.6(abort+0x148)[0x2b9eeed22cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2b9eec28ab99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2b9eec28ac52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2b9eec28acf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2b9eec28863e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
--
traffic_server - STACK TRACE: 
/opt/jenkins/tsqa-master/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0xc3)[0x504b85]
/lib64/libpthread.so.0(+0xf130)[0x2ad56b691130]
/lib64/libc.so.6(gsignal+0x39)[0x2ad56c65f5c9]
/lib64/libc.so.6(abort+0x148)[0x2ad56c660cd8]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0x0)[0x2ad569bc8b99]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z12ink_fatal_vaPKcP13__va_list_tag+0xb9)[0x2ad569bc8c52]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(_Z9ink_fatalPKcz+0x9f)[0x2ad569bc8cf9]
/opt/jenkins/tsqa-master/lib/libtsutil.so.5(+0x3b63e)[0x2ad569bc663e]
/opt/jenkins/tsqa-master/bin/traffic_server(_TSAssert+0x0)[0x51b384]
/opt/jenkins/tsqa-master/bin/traffic_server(TSTextLogObjectDestroy+0x30)[0x52b394]
FAIL: detected a crash
./functions: line 177: 25216 Terminated  ( tsexec traffic_cop 
--stdout > $log )
./functions: line 177: 25462 Terminated  reload
MSG: shutting down ...
Failure: test-log-refcounting
test-multicert-loading
test-multicert-loading
test-privilege-elevation
test-multicert-loading
test-privilege-elevation
-- Starting test: 

[jira] [Updated] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon updated TS-3301:
-
Fix Version/s: 5.3.0

 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: Brian Geffon
 Fix For: 5.3.0

 Attachments: traffic_line_rotation_6.diff


 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3301) TLS ticket rotation

2015-01-15 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon updated TS-3301:
-
Issue Type: New Feature  (was: Bug)

 TLS ticket rotation
 ---

 Key: TS-3301
 URL: https://issues.apache.org/jira/browse/TS-3301
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core, SSL
Reporter: Brian Geffon
Assignee: Brian Geffon
 Fix For: 5.3.0

 Attachments: traffic_line_rotation_6.diff


 We all know that it is bad security practice to use the same password/key all 
 the time. This project rotates TLS session ticket keys periodically. When an 
 admin runs traffic_line -x after a new ticket key is added to the key file 
 ssl_ticket.key, an event is generated and ATS reconfigures SSL. All keys are 
 read in at the same time, and the first entry is the most recent key. A new 
 key is expected to be prepended to the ssl_ticket.key file, and the oldest 
 key dropped from the end of the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : fedora_21-master » gcc,fedora_21,spdy #80

2015-01-15 Thread jenkins
See 
https://ci.trafficserver.apache.org/job/fedora_21-master/compiler=gcc,label=fedora_21,type=spdy/80/



Jenkins build is back to normal : tsqa-master #29

2015-01-15 Thread jenkins
See https://ci.trafficserver.apache.org/job/tsqa-master/29/changes



[jira] [Created] (TS-3303) tcpinfo: Log file can get closed when disk is full

2015-01-15 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3303:
-

 Summary: tcpinfo: Log file can get closed when disk is full
 Key: TS-3303
 URL: https://issues.apache.org/jira/browse/TS-3303
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Leif Hedstrom


It's unfortunate, but it seems that TSTextLogObjectWrite() returns TS_ERROR for 
both "disk full" and "write failed". I think, but am not 100% sure, that this 
can cause us to delete the log object more than once, due to a race condition 
in the code.

My suggestion is to not close the log object, and instead hope that the log 
rotation / cleanup code will free up disk space, such that future log writes 
succeed.
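
A minimal sketch of the suggested behaviour (the surrounding structure is an 
assumption, not the actual tcpinfo code; TSTextLogObjectWrite() and 
TSTextLogObjectDestroy() are the real TS API calls involved):
{code}
#include <ts/ts.h>

static TSTextLogObject tcpinfo_log; // created once via TSTextLogObjectCreate()

static void
log_tcpinfo(const char *msg)
{
  if (tcpinfo_log == nullptr) {
    return;
  }

  if (TSTextLogObjectWrite(tcpinfo_log, "%s", msg) != TS_SUCCESS) {
    // TS_ERROR covers both "disk full" and "write failed", so do not call
    // TSTextLogObjectDestroy() here; log rotation / cleanup may free disk
    // space and let future writes succeed.
    TSDebug("tcpinfo", "log write failed; keeping the log object");
  }
}
{code}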



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)