[jira] [Created] (TS-3164) why does the load of trafficserver rise abruptly on occasion?

2014-11-04 Thread taoyunxing (JIRA)
taoyunxing created TS-3164:
--

 Summary: why does the load of trafficserver rise abruptly on occasion?
 Key: TS-3164
 URL: https://issues.apache.org/jira/browse/TS-3164
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: taoyunxing


I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
Time                util     util   retran   bytin  bytout     util     util     util    load1
03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3164) why does the load of trafficserver rise abruptly on occasion?

2014-11-04 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-3164:
---
Description: 
I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
Time                util     util   retran   bytin  bytout     util     util     util    load1
03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.

  was:
I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
Time                util     util   retran   bytin  bytout     util     util     util    load1
03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.

Environment: CentOS 6.3 64bit, 8 cores, 128G mem 

 why does the load of trafficserver rise abruptly on occasion?
 ---

 Key: TS-3164

[jira] [Assigned] (TS-60) support writing large buffers via zero-copy

2014-11-04 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-60?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll reassigned TS-60:
-

Assignee: Alan M. Carroll  (was: John Plevyak)

 support writing large buffers via zero-copy
 ---

 Key: TS-60
 URL: https://issues.apache.org/jira/browse/TS-60
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Affects Versions: 3.0.0
 Environment: all
Reporter: John Plevyak
Assignee: Alan M. Carroll
 Fix For: sometime

   Original Estimate: 48h
  Remaining Estimate: 48h

 Currently all write data is written from the aggregation buffer.  In order to 
 support large buffer writes efficiently, it would be nice to be able to write 
 directly from page-aligned memory.  This would be both more efficient and 
 would help support large objects.
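
 As a rough illustration of the idea (not ATS code; the function name is hypothetical), a page-aligned buffer suitable for writing straight to the disk path, without first copying into the aggregation buffer, could be obtained like this:

 {code}
 #include <cstdlib>
 #include <cstring>
 #include <unistd.h>

 // Allocate a buffer aligned to the page size so it can be handed directly
 // to the write path (e.g. O_DIRECT or writev) with no intermediate copy.
 void *alloc_page_aligned(size_t len)
 {
   long page = sysconf(_SC_PAGESIZE);
   void *buf = nullptr;
   if (posix_memalign(&buf, static_cast<size_t>(page), len) != 0)
     return nullptr;
   std::memset(buf, 0, len);
   return buf;
 }
 {code}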



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-54) UnixNet cleanup, encapsulation of event subsystem

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-54?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-54:
-
Assignee: Phil Sorber  (was: John Plevyak)

 UnixNet cleanup, encapsulation of event subsystem
 -

 Key: TS-54
 URL: https://issues.apache.org/jira/browse/TS-54
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup
 Environment: all unixesque
Reporter: John Plevyak
Assignee: Phil Sorber
 Fix For: sometime

 Attachments: proposed-jp-v1-List.h, ts-List-net-cleanup-jp-v2.patch, 
 ts-List_and_net-cleanup-jp-v1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 The UnixNet subsystem was modified for epoll, but many of the old data 
 structures, data members, and code remain from the old bucket approach.
 The epoll code should also be encapsulated to simplify support for other 
 platforms and a possible move to an event library.
 The current code is complicated by limitations in Queue which require 
 specifying the link field for every use, but which can be fixed in the 
 template.
 Finally, the current code does an unnecessary allocation for the epoll struct, 
 which should instead be part of the NetVConnection etc., and it takes a lock 
 for the enable_queue which can be avoided by using the non-locking AtomicSLL.
 This work is also good preparation for evaluating libev or libevent, as it 
 will reduce the amount of code which will have to be changed.
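
 A hypothetical sketch (not the actual ATS Queue/DLL templates) of the Queue limitation mentioned above: defaulting the link member in the template removes the need to name it at every call site:

 {code}
 // Minimal intrusive-queue sketch; only the defaulted link-field parameter
 // matters here, the real templates are richer.
 template <class T> struct Link {
   T *next = nullptr;
   T *prev = nullptr;
 };

 template <class T, Link<T> T::*link_field = &T::link> struct Queue {
   T *head = nullptr;

   void push(T *item)
   {
     (item->*link_field).next = head;
     if (head)
       (head->*link_field).prev = item;
     head = item;
   }
 };

 struct NetEvent {
   int fd = -1;
   Link<NetEvent> link; // picked up by Queue<NetEvent> automatically
 };

 Queue<NetEvent> ready_queue; // no link field spelled out at the use site
 {code}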



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-61) multiple do_io_pread on a CacheVConnection

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-61?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196303#comment-14196303
 ] 

Alan M. Carroll commented on TS-61:
---

I think I'll roll this in to the partial object caching.

 multiple do_io_pread on a CacheVConnection
 --

 Key: TS-61
 URL: https://issues.apache.org/jira/browse/TS-61
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Affects Versions: 3.0.0
Reporter: John Plevyak
Assignee: Alan M. Carroll
 Fix For: sometime

 Attachments: pread-2.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 The current TS-46 patch includes do_io_pread support but allows only a single 
 do_io_pread. In order to efficiently support range requests with multiple 
 ranges, it would be helpful to be able to issue multiple do_io_pread calls 
 on a single open CacheVConnection.
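
 As a generic analogy (POSIX pread(2), not the internal CacheVC interface), serving several ranges from one open handle amounts to repeated positional reads, which is what multiple do_io_pread calls on one CacheVConnection would enable:

 {code}
 #include <sys/types.h>
 #include <unistd.h>
 #include <vector>

 struct Range { off_t offset; size_t length; };

 // Read every requested range from a single open descriptor without
 // reopening it between ranges.
 ssize_t read_ranges(int fd, const std::vector<Range> &ranges, char *out)
 {
   ssize_t total = 0;
   for (const Range &r : ranges) {
     ssize_t n = pread(fd, out + total, r.length, r.offset);
     if (n < 0)
       return -1;
     total += n;
   }
   return total;
 }
 {code}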



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-81) Have one single place to store and lookup remap rules irrespective of type

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-81:
-
Fix Version/s: (was: 5.3.0)
   sometime

 Have one single place to store and lookup remap rules irrespective of type
 --

 Key: TS-81
 URL: https://issues.apache.org/jira/browse/TS-81
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0.0a
Reporter: Manjesh Nilange
Priority: Minor
  Labels: A
 Fix For: sometime


 Currently, remap rules are stored in different structures and looked up 
 separately based on type (forward, reverse, etc.). It'd be better design and 
 more maintainable to process (store, search) all rules in one structure and 
 then use type to determine action.
 A fundamental problem with our current implementation is that the order in 
 remap.config is not honored. E.g. a map directive always takes precedence 
 over any redirect directives (if both match).
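
 A toy sketch (not the ATS remap implementation; names are illustrative) of the proposed single store: every rule lives in one ordered container with a type tag, and the first match in remap.config order wins regardless of type:

 {code}
 #include <string>
 #include <vector>

 enum class RuleType { Map, Reverse, Redirect, RedirectTemporary };

 struct RemapRule {
   RuleType type;
   std::string from_prefix;
   std::string to_prefix;
 };

 // First matching rule in config order wins, so a redirect listed before a
 // map is no longer shadowed by it.
 const RemapRule *lookup(const std::vector<RemapRule> &rules, const std::string &url)
 {
   for (const RemapRule &r : rules)
     if (url.compare(0, r.from_prefix.size(), r.from_prefix) == 0)
       return &r;
   return nullptr;
 }
 {code}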



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-110) Improve regex remap to allow substitutions in path field

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-110:
--
Fix Version/s: (was: 5.3.0)
   sometime

 Improve regex remap to allow substitutions in path field
 

 Key: TS-110
 URL: https://issues.apache.org/jira/browse/TS-110
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.0.0
Reporter: Manjesh Nilange
Assignee: Brian Geffon
Priority: Minor
 Fix For: sometime


 Currently, regex support covers only the host field of the remap rules. It'd 
 be nice to extend this to allow substitutions into the path field as well. 
 This will allow rules like:
 regex_map http://www.example-(.*).com/ http://real-example.com/$1
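
 For illustration only (std::regex is used here purely for demonstration; the remap machinery itself is PCRE-based), the substitution such a rule implies would behave like this:

 {code}
 #include <iostream>
 #include <regex>
 #include <string>

 int main()
 {
   // Mirrors: regex_map http://www.example-(.*).com/ http://real-example.com/$1
   std::regex rule(R"(http://www\.example-(.*)\.com/)");
   std::string url = "http://www.example-video.com/";
   std::cout << std::regex_replace(url, rule, "http://real-example.com/$1") << "\n";
   // prints: http://real-example.com/video
 }
 {code}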



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-81) Have one single place to store and lookup remap rules irrespective of type

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-81:
-
Labels:   (was: A)

 Have one single place to store and lookup remap rules irrespective of type
 --

 Key: TS-81
 URL: https://issues.apache.org/jira/browse/TS-81
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0.0a
Reporter: Manjesh Nilange
Priority: Minor
 Fix For: sometime


 Currently, remap rules are stored in different structures and looked up 
 separately based on type (forward, reverse, etc.). It'd be better design and 
 more maintainable to process (store, search) all rules in one structure and 
 then use type to determine action.
 A fundamental problem with our current implementation is that the order in 
 remap.config is not honored. E.g. a map directive always takes precedence 
 over any redirect directives (if both match).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-110) Improve regex remap to allow substitutions in path field

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196312#comment-14196312
 ] 

Susan Hinrichs commented on TS-110:
---

Brian, are you still working on this?  Any target for ETA?

 Improve regex remap to allow substitutions in path field
 

 Key: TS-110
 URL: https://issues.apache.org/jira/browse/TS-110
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.0.0
Reporter: Manjesh Nilange
Assignee: Brian Geffon
Priority: Minor
 Fix For: sometime


 Currently, regex support covers only the host field of the remap rules. It'd 
 be nice to extend this to allow substitutions into the path field as well. 
 This will allow rules like:
 regex_map http://www.example-(.*).com/ http://real-example.com/$1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-119) RAM only caching

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196314#comment-14196314
 ] 

Alan M. Carroll commented on TS-119:


This may end up being part of the Cache API toolkit (which is also tiered 
caching).

 RAM only caching
 

 Key: TS-119
 URL: https://issues.apache.org/jira/browse/TS-119
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: John Plevyak
Priority: Minor
 Fix For: sometime


 It would be nice if we could run with a RAM only cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-119) RAM only caching

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-119:
--
Assignee: (was: Phil Sorber)

 RAM only caching
 

 Key: TS-119
 URL: https://issues.apache.org/jira/browse/TS-119
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: John Plevyak
Priority: Minor
 Fix For: sometime


 It would be nice if we could run with a RAM only cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-203) config files ownership

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-203:
--
Fix Version/s: (was: 5.2.0)
   6.0.0

 config files ownership
 --

 Key: TS-203
 URL: https://issues.apache.org/jira/browse/TS-203
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Reporter: Leif Hedstrom
Priority: Minor
 Fix For: 6.0.0


 It's semi-odd that the admin user (nobody) is also the user to which the 
 traffic_server process changes its euid. This means that the 
 traffic_server process has write permissions on the config files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-228) Cleanup usage of long in all code

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-228:
--
Assignee: (was: James Peach)

 Cleanup usage of long in all code
 ---

 Key: TS-228
 URL: https://issues.apache.org/jira/browse/TS-228
 Project: Traffic Server
  Issue Type: Bug
  Components: Cleanup
Affects Versions: 2.1.0
Reporter: John Plevyak
Priority: Critical
 Fix For: 5.2.0


 Solaris 64-bit SunPro long is 64-bit while for g++ 64-bit long is 32-bit.
 This is a potential can of worms which at the least is making records.snap
 incompatible but at worst could be the cause of other bugs.
 In any case we should not be using long in the TS code, but instead use
 either int which is always 32-bits or inkXXX of a particular size.
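
 As a generic illustration (not a patch against the tree), replacing long in persisted or shared structures with explicitly sized types keeps the layout identical across compilers:

 {code}
 #include <cstdint>

 // 'long' differs in width between the platforms described above, so any
 // structure written to disk (e.g. records.snap) can change layout between
 // builds. Fixed-width types pin the layout down.
 struct RecordSnapEntry {
   int64_t value;    // always 64 bits
   int32_t id;       // always 32 bits
   uint32_t version; // always 32 bits
 };

 static_assert(sizeof(RecordSnapEntry) == 16, "layout must not vary by platform");
 {code}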



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-228) Cleanup usage of long in all code

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-228:
--
Fix Version/s: (was: 5.2.0)
   5.3.0

 Cleanup usage of long in all code
 ---

 Key: TS-228
 URL: https://issues.apache.org/jira/browse/TS-228
 Project: Traffic Server
  Issue Type: Bug
  Components: Cleanup
Affects Versions: 2.1.0
Reporter: John Plevyak
Priority: Critical
 Fix For: 5.3.0


 Solaris 64-bit SunPro long is 64-bit while for g++ 64-bit long is 32-bit.
 This is a potential can of worms which at the least is making records.snap
 incompatible but at worst could be the cause of other bugs.
 In any case we should not be using long in the TS code, but instead use
 either int which is always 32-bits or inkXXX of a particular size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-242) Connect timeout doesn't reset until first byte is received from server

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-242:
--
Fix Version/s: (was: 5.3.0)
   sometime

 Connect timeout doesn't reset until first byte is received from server
 --

 Key: TS-242
 URL: https://issues.apache.org/jira/browse/TS-242
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Steve Jiang
Assignee: Alan M. Carroll
Priority: Minor
 Fix For: sometime


 proxy.config.http.connect_attempts_timeout
 proxy.config.http.parent_proxy.connect_attempts_timeout
 proxy.config.http.post_connect_attempts_timeout
 These timeouts are implemented with inactivity timeout on the netvc and don't 
 behave as expected.  
 If the connect succeeds (the remote server successfully accepted) but the 
 remote server does not respond with any bytes within the timeout period, TS 
 still treats it as a connect timeout.  If retries are enabled and the origin 
 server is slow to start sending responses (but not down), it will keep 
 sending requests and closing the connection after getting no response within 
 the connect timeout.
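
 A minimal POSIX sketch (not ATS code) of the distinction the report asks for: the connect timeout should cover only the TCP handshake, while waiting for the first response byte belongs to a separate inactivity timeout:

 {code}
 #include <poll.h>
 #include <sys/socket.h>

 // Assumes fd is a non-blocking socket on which connect() returned EINPROGRESS.
 // Returns 0 once the handshake completes within connect_ms; anything after
 // that (waiting for the first byte) should be governed by a different timeout.
 int wait_for_connect(int fd, int connect_ms)
 {
   struct pollfd pfd;
   pfd.fd      = fd;
   pfd.events  = POLLOUT;
   pfd.revents = 0;

   if (poll(&pfd, 1, connect_ms) <= 0)
     return -1; // a genuine connect timeout

   int err = 0;
   socklen_t len = sizeof(err);
   getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
   return err == 0 ? 0 : -1;
 }
 {code}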



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-275) API: TSHttpTxnConnectTimeoutSet isn't documented

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-275:
--
Labels: Docs  (was: )

 API: TSHttpTxnConnectTimeoutSet isn't documented
 

 Key: TS-275
 URL: https://issues.apache.org/jira/browse/TS-275
 Project: Traffic Server
  Issue Type: Bug
  Components: Documentation, TS API
Reporter: Miles Libbey
Priority: Minor
  Labels: Docs
 Fix For: Docs


 Someone had a question about INKHttpTxnConnectTimeoutSet -- it's not 
 documented in the SDK guide.  Can someone please provide:
 - Where it should live (ie, Function Reference/HTTP Functions/  HTTP 
 Transaction Functions)
 - suggested text in a similar format to 
 http://incubator.apache.org/trafficserver/docs/sdk/HTTPFunctions.html
 (ie, what it does; prototype; description; returns, and possibly an example)
 (Bryan also mentioned that a few timeout apis were added last year, but, I 
 don't see any others with timeout in them (except for 
 https://issues.apache.org/jira/browse/TS-256 -- INKHttpSchedule -- are we 
 missing others?)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-302) -fstrict-aliasing optimizer generates bad code

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-302:
--
Assignee: (was: Phil Sorber)

 -fstrict-aliasing optimizer generates bad code
 --

 Key: TS-302
 URL: https://issues.apache.org/jira/browse/TS-302
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup
Reporter: Miles Libbey
Priority: Minor
 Fix For: sometime

 Attachments: no-no-fstrict-alias.patch, ts-302.patch


 (moving from yahoo bug 525119)
 Original description
 by Leif Hedstrom 4 years ago at 2005-12-16 08:41
 Not sure if this is a compiler bug or a code issue on our side, but enabling 
 the
 -fstrict-aliasing optimization option generates faulty code. This optimization
 technique is enabled implicitly with -Os, -O2 and -O3, so for now I'm 
 explicitly
 turning it off with
-O3 -fno-strict-aliasing
 This solves the problem where the traffic server would return data of 0 or 1
 length out of the cache. This initially looked like the cache corruption
 problem, but is completely different and unrelated. The cache corruption 
 problem
 has been fixed and closed.
 I'm opening this bug as a reminder, at some point we should isolate which code
 triggers the strict-aliasing problem, and confirm if it's a compiler bug or a
 problem in our code.
   
  
 Comment 1
  by Michael Bigby 4 years ago at 2005-12-16 09:07:40
 I'm recommending that we get Ed's input on this.  He may have some insight into 
 compiler issues...
   
  
 Comment 2
  by Leif Hedstrom  4 years ago at 2005-12-16 10:02:07
 That'd be great!
 We haven't had a chance yet to review the code that might be affecting this,
 it's obviously something with unions and how the compiler handles
 storage/alignment on the members.
   
  
 Comment 3
  by Ed Hall  4 years ago at 2006-03-03 11:46:52
 This is with gcc 2.95.3, correct?  There have been a number of complaints 
 around
 the 'net about problems with -fstrict-aliasing.  I've not really looked very
 deeply into it, though I should mention that certain C code was known at the
 time to malfunction when by-the-standard aliasing rules were enforced 
 (starting
 with the Linux kernel).
 In essence, the -fstrict-aliasing optimizations assume that any particular 
 part
 of memory accessed via a specific type of pointer won't be accessed as another
 type. There are a set of optimizations that are safe only when it can be
 guaranteed that a given bit of memory hasn't been manipulated via pointer; if
 the compiler assumes that the rather arcane C aliasing rules have been 
 followed
 (aliasing in this case meaning accessing a given bit of memory with more 
 than
 one type of pointer), there are more situations where such optimizations can 
 be
 applied.  Code which uses type casts where unions might be more appropriate is
 the most likely to break aliasing rules.
 In any case, gcc 3/4 is less aggressive (and perhaps less buggy) in applying 
 the
 C aliasing rules, and has added warnings for obvious violations.  It's never
 been clear to me if gcc 2.95.3 was actually broken or not, or if there simply
 was a lot of code out there that ran afoul of the standard.
   
  
 Comment 4
  by Leif Hedstrom  4 years ago at 2006-03-03 12:50:22
 Actually, the problem only occurred after we converted the code from gcc-2.9x 
 to
 gcc-3.4.4. We have since cleared out a *lot* of compiler warnings (thousands 
 and
 thousands), so maybe we should try again to compile without the
 -fno-strict-aliasing, and see if gcc will point us to some places where we do
 dangerous things. The code does some very scary things manipulating objects
 directly, by byte-offsets for instance.
 I think it's pretty easy to reproduce the problem, it basically renders the
 cache completely useless, returning objects of size 0 or 1.
   
  
 Comment 5
  by Ed Hall 4 years ago at 2006-03-03 16:44:04
 Ah, that makes sense.  I just checked, and the -fstrict-aliasing option wasn't
 part of the -O2 optimizations on gcc 2.95, but got added sometime during gcc 3
 development.
   
  
 Comment 6
  by Ed Hall 4 years ago at 2006-03-03 16:46:43
 (Just to be clear, -fstrict-aliasing was *available* with gcc 2.95.3, it just
 wasn't activated by the -O optimization flags.)
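
 A generic example of the kind of code that breaks under strict aliasing (not taken from the TS source): inspecting an object through an unrelated pointer type is undefined, while memcpy over the same bytes is well-defined:

 {code}
 #include <cstdint>
 #include <cstring>

 // Undefined under -fstrict-aliasing: the optimizer may assume a float and a
 // uint32_t never alias, and reorder or drop the access.
 uint32_t bits_bad(float f)
 {
   return *reinterpret_cast<uint32_t *>(&f);
 }

 // Well-defined: memcpy tells the compiler the bytes really are shared.
 uint32_t bits_ok(float f)
 {
   uint32_t u;
   std::memcpy(&u, &f, sizeof(u));
   return u;
 }
 {code}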
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-306) enable log rotation for diags.log

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-306:
--
Assignee: kang li

 enable log rotation for diags.log
 -

 Key: TS-306
 URL: https://issues.apache.org/jira/browse/TS-306
 Project: Traffic Server
  Issue Type: Improvement
  Components: Logging
Reporter: Miles Libbey
Assignee: kang li
Priority: Critical
 Fix For: 5.3.0


 (from yahoo bug 913896)
 Original description
 by Leif Hedstrom 3 years ago at 2006-12-04 12:42
 There might be reasons why this file might get filled up, e.g. libraries used 
 by plugins producing output on STDOUT/STDERR. A few suggestions have been
 made, to somehow rotate traffic.out. One possible solution (suggested by 
 Ryan) is to use cronolog (http://cronolog.org/), which seems like a fine idea.
   
  
 Comment 1
  by Joseph Rothrock  2 years ago at 2007-10-17 09:13:24
 Maybe consider rolling diags.log as well. -Feature enhancement.
   
 Comment 2
  by Kevin Dalley 13 months ago at 2009-03-04 15:32:18
 When traffic.out gets filled up, error.log stops filling up, even though 
 rotation is turned on. This is
 counter-intuitive.  Rotation does not control traffic.out, but a large 
 traffic.out will stop error.log from being
 written.
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-315) Add switch to disable config file generation/runtime behavior changing

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-315:
--
Fix Version/s: (was: 5.2.0)
   sometime

 Add switch to disable config file generation/runtime behavior changing
 --

 Key: TS-315
 URL: https://issues.apache.org/jira/browse/TS-315
 Project: Traffic Server
  Issue Type: Sub-task
  Components: Configuration
Reporter: Miles Libbey
Priority: Minor
 Fix For: sometime


 (was yahoo bug 1863676)
 Original description
 by Michael S. Fischer  2 years ago at 2008-04-09 09:52
 In production, in order to improve site stability, it is imperative that TS 
 never accidentally overwrite its own
 configuration files.  
 For this reason, we'd like to request a switch be added to TS, preferably via 
 the command line, that disables all
 automatic configuration file generation or other  runtime behavioral changes 
 initiated by any form of IPC other than
 'traffic_line -x'  (including the web interface, etc.)
   
  
 Comment 1
  by Bjornar Sandvik 2 years ago at 2008-04-09 09:57:17
 A very crucial request, in my opinion. If TS needs to be able to read 
 command-line config changes on the fly, these
 changes should be stored in another config file (for example 
 remap.config.local instead of remap.config). We have a
 patch config package that overwrites 4 of the config files under 
 /home/conf/ts/, and with all packages 
 we'd like to think that the content of these files can't change outside our 
 control.

 Comment 2
  by Bryan Call  2 years ago at 2008-04-09 11:02:46
 traffic_line -x doesn't modify the configuration, it reloads the 
 configuration files.  If we want to have an option for
 this it would be good to have it as an option in the configuration file (CONFIG 
 proxy.config.write_protect INT 1).
 It would be an equivalent of write protecting floppies (ahh the memories)...
   
  
 Comment 3
  by Michael S. Fischer  2 years ago at 2008-04-09 11:09:09
 I don't think it would be a good idea to have this in the configuration file, 
 as it would introduce a chicken/egg
 problem.
   
  
 Comment 4
  by Leif Hedstrom 19 months ago at 2008-08-27 12:43:17
 So I'm not 100% positive that this isn't just a bad interaction. Now, it's 
 only
 triggered when trafficserver is running, but usually what ends up happening 
 is that we get a records.config which
 looks like it's the default config that comes with the trafficserver package.
 It's possible it's all one and the same issue, or we might have two issues.
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-337) Cache Replacement Algorithm plug-in API

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-337:
-

Assignee: Susan Hinrichs

 Cache Replacement Algorithm plug-in API
 ---

 Key: TS-337
 URL: https://issues.apache.org/jira/browse/TS-337
 Project: Traffic Server
  Issue Type: New Feature
  Components: TS API
Affects Versions: 2.0.0
Reporter: Mark Nottingham
Assignee: Susan Hinrichs
 Fix For: sometime


 New cache replacement algorithms are often proposed, and often it's not a 
 one size fits all problem; different workloads require different approaches.
 To facilitate this, TS should have a pluggable cache replacement policy API, 
 both for the memory and disk cache.
 Squid has done this to good effect; see
   http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/repl/
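
 One possible shape for such an API (purely hypothetical, not an existing ATS interface): the cache reports accesses to the policy and asks it which entry to evict, so LRU, LFU, CLOCK, and friends become interchangeable plugins:

 {code}
 #include <cstdint>

 struct CacheKey {
   uint64_t hash;
 };

 // Hypothetical plug-in interface; each replacement algorithm implements it.
 class ReplacementPolicy
 {
 public:
   virtual ~ReplacementPolicy() {}
   virtual void on_insert(const CacheKey &key, int64_t size) = 0;
   virtual void on_hit(const CacheKey &key)                  = 0;
   virtual CacheKey pick_victim()                            = 0; // entry to evict next
 };
 {code}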



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-337) Cache Replacement Algorithm plug-in API

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-337:
--
Assignee: Alan M. Carroll  (was: Susan Hinrichs)

 Cache Replacement Algorithm plug-in API
 ---

 Key: TS-337
 URL: https://issues.apache.org/jira/browse/TS-337
 Project: Traffic Server
  Issue Type: New Feature
  Components: TS API
Affects Versions: 2.0.0
Reporter: Mark Nottingham
Assignee: Alan M. Carroll
 Fix For: sometime


 New cache replacement algorithms are often proposed, and often it's not a 
 one size fits all problem; different workloads require different approaches.
 To facilitate this, TS should have a pluggable cache replacement policy API, 
 both for the memory and disk cache.
 Squid has done this to good effect; see
   http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/repl/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3164) why does the load of trafficserver rise abruptly on occasion?

2014-11-04 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-3164:

Description: 
I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:

{code}
Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
Time                util     util   retran   bytin  bytout     util     util     util    load1
03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
{code}
   
In addition, the top command shows:

{code}
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96
{code}

Who can help me? Thanks in advance.

  was:
I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
Time                util     util   retran   bytin  bytout     util     util     util    load1
03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.


 why does the load of trafficserver rise abruptly on occasion?
 ---

 Key: TS-3164
 URL: 

[jira] [Assigned] (TS-357) Compiling with -Wconversion generates a lot of errors

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-357:
-

Assignee: Susan Hinrichs

 Compiling with -Wconversion generates a lot of errors
 -

 Key: TS-357
 URL: https://issues.apache.org/jira/browse/TS-357
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup
Affects Versions: 2.1.0
Reporter: Bryan Call
Assignee: Susan Hinrichs
 Fix For: sometime


 After adding -Wconversion to CFLAGS and CXXFLAGS I got 11k errors:
 [bcall@snowball traffic.git]$ gmake -k 2> /dev/stdout | grep -c ' error: '
 11432
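
 A generic example (not from the TS tree) of the kind of implicit conversion -Wconversion flags, and the usual fix of making the narrowing explicit:

 {code}
 #include <cstdint>

 // -Wconversion warns here: int64_t may not fit in the int return type.
 int bytes_left_bad(int64_t total, int64_t done)
 {
   return total - done; // implicit narrowing conversion
 }

 // Warning-free: the (presumed in-range) narrowing is spelled out.
 int bytes_left_ok(int64_t total, int64_t done)
 {
   return static_cast<int>(total - done);
 }
 {code}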



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-357) Compiling with -Wconversion generates a lot of errors

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196331#comment-14196331
 ] 

Susan Hinrichs commented on TS-357:
---

Bryan will check this out, because a number of these have been fixed in other 
efforts.

 Compiling with -Wconversion generates a lot of errors
 -

 Key: TS-357
 URL: https://issues.apache.org/jira/browse/TS-357
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup
Affects Versions: 2.1.0
Reporter: Bryan Call
Assignee: Bryan Call
 Fix For: sometime


 After adding -Wconversion to CFLAGS and CXXFLAGS I got 11k errors:
 [bcall@snowball traffic.git]$ gmake -k 2> /dev/stdout | grep -c ' error: '
 11432



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-357) Compiling with -Wconversion generates a lot of errors

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-357:
--
Assignee: Bryan Call  (was: Susan Hinrichs)

 Compiling with -Wconversion generates a lot of errors
 -

 Key: TS-357
 URL: https://issues.apache.org/jira/browse/TS-357
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup
Affects Versions: 2.1.0
Reporter: Bryan Call
Assignee: Bryan Call
 Fix For: sometime


 After adding -Wconversion to CFLAGS and CXXFLAGS I got 11k errors:
 [bcall@snowball traffic.git]$ gmake -k 2> /dev/stdout | grep -c ' error: '
 11432



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3164) why does the load of trafficserver rise abruptly on occasion?

2014-11-04 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196335#comment-14196335
 ] 

James Peach commented on TS-3164:
-

I suggest using standard Linux tools to investigate this. {strace} and {perf} 
could show you interesting data.

 why does the load of trafficserver rise abruptly on occasion?
 ---

 Key: TS-3164
 URL: https://issues.apache.org/jira/browse/TS-3164
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.3 64bit, 8 cores, 128G mem 
Reporter: taoyunxing

 I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
 {code}
 Time            ---cpu-- ---mem-- ---tcp-- ---traffic----- --sda--- --sdb--- --sdc--- ---load-
 Time                util     util   retran   bytin  bytout     util     util     util    load1
 03/11/14-18:20     40.67    87.19     3.36   24.5M   43.9M    13.02    94.68     0.00     5.34
 03/11/14-18:25     40.30    87.20     3.27   22.5M   42.6M    12.38    94.87     0.00     5.79
 03/11/14-18:30     40.84    84.67     3.44   21.4M   42.0M    13.29    95.37     0.00     6.28
 03/11/14-18:35     43.63    87.36     3.21   23.8M   45.0M    13.23    93.99     0.00     7.37
 03/11/14-18:40     42.25    87.37     3.09   24.2M   44.8M    12.84    95.77     0.00     7.25
 03/11/14-18:45     42.96    87.44     3.46   23.3M   46.0M    12.96    95.84     0.00     7.10
 03/11/14-18:50     44.00    87.42     3.49   22.3M   43.0M    14.17    94.99     0.00     6.57
 03/11/14-18:55     42.20    87.44     3.46   22.3M   43.6M    13.19    96.05     0.00     6.09
 03/11/14-19:00     44.90    87.53     3.60   23.6M   46.5M    13.61    96.67     0.00     8.06
 03/11/14-19:05     46.26    87.73     3.24   25.8M   49.1M    15.39    94.05     0.00     9.98
 03/11/14-19:10     43.85    87.69     3.19   25.4M   50.9M    12.88    97.80     0.00     7.99
 03/11/14-19:15     45.28    87.69     3.36   25.6M   49.6M    13.10    96.86     0.00     7.47
 03/11/14-19:20     44.11    85.20     3.29   24.1M   47.8M    14.24    96.75     0.00     5.82
 03/11/14-19:25     45.26    87.78     3.52   24.4M   47.7M    13.21    95.44     0.00     7.61
 03/11/14-19:30     44.83    87.80     3.64   25.7M   50.8M    13.27    98.02     0.00     6.85
 03/11/14-19:35     44.89    87.78     3.61   23.9M   49.0M    13.34    97.42     0.00     7.04
 03/11/14-19:40     69.21    88.88     0.55   18.3M   33.7M    11.39    71.23     0.00    65.80
 03/11/14-19:45     72.47    88.66     0.27   15.4M   31.6M    11.51    72.31     0.00    11.56
 03/11/14-19:50     44.87    88.72     4.11   22.7M   46.3M    12.99    97.33     0.00     8.29
 {code}

 In addition, the top command shows:
 {code}
 hi:0
 ni:0
 si:45.56
 st:0
 sy:13.92
 us:12.58
 wa:14.3
 id:15.96
 {code}
 Who can help me? Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-382) socket option cleanup (and bug fixes)

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-382:
--
Assignee: (was: Alan M. Carroll)

 socket option cleanup (and bug fixes)
 ---

 Key: TS-382
 URL: https://issues.apache.org/jira/browse/TS-382
 Project: Traffic Server
  Issue Type: Bug
  Components: Network
Reporter: Leif Hedstrom
 Fix For: sometime


 This is a bug moved from Y! Bugzilla, I'm posting the original bug 
 description and a few comments separately. Note that the bug description is 
 fairly limited, but while looking at this code, I noticed a lot of oddities 
 with the socket option support (lots of hardcoded stuff, and most of it is 
 not configurable).
 Note that the original bug should have been fixed already in Apache TS, but 
 the other comments are still applicable.
 From Bugzilla (posted by Leif):
 We have two socket option config options in records.config:
 proxy.config.net.sock_option_flag_in
 proxy.config.net.sock_option_flag_out
 With accept thread enabled, at least the _in option isn't honored. There are 
 possibly other cases in UnixNetAccept.cc
 that we don't honor these flags either.
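
 A minimal sketch of what honoring such a flag word means on an accepted socket (the flag bits here are illustrative, not the actual records.config encoding):

 {code}
 #include <netinet/in.h>
 #include <netinet/tcp.h>
 #include <sys/socket.h>

 // Hypothetical flag bits standing in for sock_option_flag_in / _out.
 enum SockFlags { NO_DELAY = 0x1, KEEP_ALIVE = 0x2 };

 void apply_sock_options(int fd, unsigned flags)
 {
   int one = 1;
   if (flags & NO_DELAY)
     setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
   if (flags & KEEP_ALIVE)
     setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
 }
 {code}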



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-454) Dynamic resizing API stats

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-454:
--
Assignee: (was: Phil Sorber)

 Dynamic resizing API stats
 --

 Key: TS-454
 URL: https://issues.apache.org/jira/browse/TS-454
 Project: Traffic Server
  Issue Type: Bug
  Components: Metrics
Reporter: Leif Hedstrom
 Fix For: sometime


 This is a new bug, introduced with TS-390. We do not properly allow resizing 
 of the librecords container, which prevents API users (plugins) from creating 
 the requested number of stats slots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3165) Cache Toolkit API

2014-11-04 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll reassigned TS-3165:
---

Assignee: Alan M. Carroll

 Cache Toolkit API
 -

 Key: TS-3165
 URL: https://issues.apache.org/jira/browse/TS-3165
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: Alan M. Carroll
Assignee: Alan M. Carroll

 Implement the Cache Toolkit API as described in 
 https://cwiki.apache.org/confluence/download/attachments/47384310/ApacheCon-2014-Cache-Toolkit.pdf?version=1modificationDate=1414085058000api=v2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3165) Cache Toolkit API

2014-11-04 Thread Alan M. Carroll (JIRA)
Alan M. Carroll created TS-3165:
---

 Summary: Cache Toolkit API
 Key: TS-3165
 URL: https://issues.apache.org/jira/browse/TS-3165
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: Alan M. Carroll


Implement the Cache Toolkit API as described in 
https://cwiki.apache.org/confluence/download/attachments/47384310/ApacheCon-2014-Cache-Toolkit.pdf?version=1modificationDate=1414085058000api=v2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-454) Dynamic resizing API stats

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196345#comment-14196345
 ] 

Susan Hinrichs commented on TS-454:
---

Related to the work on reducing the number of callbacks for stats. Not sure there is 
a bug created for that work yet.

[~jtblatt] is directing that effort.

 Dynamic resizing API stats
 --

 Key: TS-454
 URL: https://issues.apache.org/jira/browse/TS-454
 Project: Traffic Server
  Issue Type: Bug
  Components: Metrics
Reporter: Leif Hedstrom
 Fix For: sometime


 This is a new bug, introduced with TS-390. We do not properly allow resizing 
 of the librecords container, which prevents API users (plugins) from creating 
 the requested number of stats slots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-464) Chunked response with bad Content-Length header to HTTP/1.0 client is broken

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-464:
--
Fix Version/s: (was: sometime)
   5.3.0

 Chunked response with bad Content-Length header to HTTP/1.0 client is broken
 

 Key: TS-464
 URL: https://issues.apache.org/jira/browse/TS-464
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
  Labels: HTTP, Violation
 Fix For: 5.3.0


 A client sends an HTTP/1.0 request through ATS, which gets proxied with 
 HTTP/1.1 to the origin. The origin (which is at fault here, but nonetheless) 
 returns with both
 Content-Length: 10
 Transfer-Encoding: chunked
 and a chunked body that is not 10 bytes long. In this case, ATS should still 
 respond with an HTTP/1.0 response, undoing the chunking, and return with an 
 appropriate CL: header. We do everything except set the correct 
 Content-Length header; we simply return the erroneous CL header that the 
 origin provided. This is not allowed in the RFC.
 (Originally discovered using Coadvisor).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-464) Chunked response with bad Content-Length header to HTTP/1.0 client is broken

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-464:
--
Assignee: Leif Hedstrom

 Chunked response with bad Content-Length header to HTTP/1.0 client is broken
 

 Key: TS-464
 URL: https://issues.apache.org/jira/browse/TS-464
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
  Labels: HTTP, Violation
 Fix For: 5.3.0


 A client sends an HTTP/1.0 request through ATS, which gets proxied with 
 HTTP/1.1 to the origin. The origin (which is at fault here, but nonetheless) 
 returns with both
 Content-Length: 10
 Transfer-Encoding: chunked
 and a chunked body that is not 10 bytes long. In this case, ATS should still 
 respond with an HTTP/1.0 response, undoing the chunking, and return with an 
 appropriate CL: header. We do everything except set the correct 
 Content-Length header; we simply return the erroneous CL header that the 
 origin provided. This is not allowed in the RFC.
 (Originally discovered using Coadvisor).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-337) Cache Replacement Algorithm plug-in API

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196348#comment-14196348
 ] 

Alan M. Carroll commented on TS-337:


I will look at this as part of the Cache Toolkit work.

 Cache Replacement Algorithm plug-in API
 ---

 Key: TS-337
 URL: https://issues.apache.org/jira/browse/TS-337
 Project: Traffic Server
  Issue Type: New Feature
  Components: TS API
Affects Versions: 2.0.0
Reporter: Mark Nottingham
Assignee: Alan M. Carroll
 Fix For: sometime


 New cache replacement algorithms are often proposed, and often it's not a 
 one size fits all problem; different workloads require different approaches.
 To facilitate this, TS should have a pluggable cache replacement policy API, 
 both for the memory and disk cache.
 Squid has done this to good effect; see
   http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/repl/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-467) Proxy should not forward request headers that match a Connection token

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-467:
-

Assignee: Susan Hinrichs

 Proxy should not forward request headers that match a Connection token
 

 Key: TS-467
 URL: https://issues.apache.org/jira/browse/TS-467
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Susan Hinrichs
  Labels: CoAdvisor
 Fix For: 5.3.0


 If a client sends a request like
 X-foobar: fum
 Connection: ,,X-foobar,,
 the proxy (Traffic Server) MUST remove the header X-foobar before forwarding 
 the request. From the RFC:
  HTTP/1.1 proxies MUST parse the Connection header field before a
message is forwarded and, for each connection-token in this field,
remove any header field(s) from the message with the same name as the
connection-token.
 and
  A system receiving an HTTP/1.0 (or lower-version) message that
includes a Connection header MUST, for each connection-token in this
field, remove and ignore any header field(s) from the message with
the same name as the connection-token.
 (Originally discovered with Coadvisor).
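
 A simplified sketch of the required behaviour (not the ATS MIME header machinery; case folding and hop-by-hop defaults are ignored): collect the connection-tokens and drop any header whose name matches one:

 {code}
 #include <map>
 #include <sstream>
 #include <string>

 void drop_connection_tokens(std::map<std::string, std::string> &headers)
 {
   auto it = headers.find("Connection");
   if (it == headers.end())
     return;

   std::stringstream tokens(it->second);
   std::string tok;
   while (std::getline(tokens, tok, ',')) {
     const auto b = tok.find_first_not_of(" \t");
     if (b == std::string::npos)
       continue; // empty token, e.g. from ",,X-foobar,,"
     const auto e = tok.find_last_not_of(" \t");
     headers.erase(tok.substr(b, e - b + 1)); // remove the named header
   }
 }
 {code}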



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-467) Proxy should not forward request headers that match a Connection token

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-467:
--
Fix Version/s: (was: sometime)
   5.3.0

 Proxy should not forward request headers that match a Connection token
 

 Key: TS-467
 URL: https://issues.apache.org/jira/browse/TS-467
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Susan Hinrichs
  Labels: CoAdvisor
 Fix For: 5.3.0


 If a client sends a request like
 X-foobar: fum
 Connection: ,,X-foobar,,
 the proxy (Traffic Server) MUST remove the header X-foobar before forwarding 
 the request. From the RFC:
  HTTP/1.1 proxies MUST parse the Connection header field before a
message is forwarded and, for each connection-token in this field,
remove any header field(s) from the message with the same name as the
connection-token.
 and
  A system receiving an HTTP/1.0 (or lower-version) message that
includes a Connection header MUST, for each connection-token in this
field, remove and ignore any header field(s) from the message with
the same name as the connection-token.
 (Originally discovered with Coadvisor).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-469) A PUT request should invalidate a previously cached object with the same URI

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-469:
--
Assignee: (was: Bryan Call)

 A PUT request should invalidate a previously cached object with the same URI
 

 Key: TS-469
 URL: https://issues.apache.org/jira/browse/TS-469
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 2.1.6
Reporter: Leif Hedstrom
  Labels: CoAdvisor
 Fix For: sometime


 If the client first fetches an object with GET, and TS caches it, a 
 subsequent PUT request for the same URL should invalidate the cached object.
 Update: This is a problem with PUT/POST when the response from the origin is 
 a 201, and there is a Location: header. We then must also invalidate any 
 cached object for the URL to which the Location header refers.
 (Originally discovered with Coadvisor).
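
 A toy sketch of the expected behaviour (the real cache interface is different; the container here just stands in for it): a PUT/POST invalidates the entry for the request URL, and for a 201 response also the URL named by Location:

 {code}
 #include <string>
 #include <unordered_map>

 using Cache = std::unordered_map<std::string, std::string>; // url -> cached body

 // Hypothetical hook run when a PUT/POST response returns from the origin.
 void invalidate_on_write(Cache &cache, const std::string &request_url,
                          int status, const std::string &location)
 {
   cache.erase(request_url); // the cached GET response is now stale
   if (status == 201 && !location.empty())
     cache.erase(location); // 201 Created: also drop the Location target
 }
 {code}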



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-469) A PUT request should invalidate a previously cached object with the same URI

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-469:
--
Fix Version/s: (was: 5.2.0)
   sometime

 A PUT request should invalidate a previously cached object with the same URI
 

 Key: TS-469
 URL: https://issues.apache.org/jira/browse/TS-469
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 2.1.6
Reporter: Leif Hedstrom
  Labels: CoAdvisor
 Fix For: sometime


 If the client first fetches an object with GET, and TS caches it, a 
 subsequent PUT request for the same URL should invalidate the cached object.
 Update: This is a problem with PUT/POST when the response from the origin is 
 a 201, and there is a Location: header. We then must also invalidate any 
 cached object for the URL to which the Location header refers.
 (Originally discovered with Coadvisor).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-481) Missing remap support for two cases

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-481:
-

Assignee: Susan Hinrichs

 Missing remap support for two cases
 ---

 Key: TS-481
 URL: https://issues.apache.org/jira/browse/TS-481
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Susan Hinrichs
 Fix For: sometime


 There are two cases where the remap processor is not used, but it should be. 
 As of v2.1.4, the broken support was removed (broken code is worse than no 
 code), leaving two cases that do not use / support remap features. Both cases 
 currently call request_url_remap(), which is now set to return false. The two 
 cases are
 1) If you configure remap mode to be URL_REMAP_FOR_OS (2). This is an 
 undocumented feature, I don't even know when or why you'd want to use it. The 
 setting in RecordsConfig.cc is proxy.config.url_remap.url_remap_mode, which 
 defaults to 1.
 2) If TS uses raw connections, I think for example when using the CONNECT 
 method, we will not do remap rules. This would be in a forward proxy mode, 
 primarily for things like HTTPS afaik.
 I don't think either case is critical to get support for the remap processor 
 for v2.2, but please adjust the fix version if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-481) Missing remap support for two cases

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-481:
--
Assignee: Leif Hedstrom  (was: Susan Hinrichs)

 Missing remap support for two cases
 ---

 Key: TS-481
 URL: https://issues.apache.org/jira/browse/TS-481
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: sometime


 There are two cases where the remap processor is not used, but it should be. 
 As of v2.1.4, the broken support was removed (broken code is worse than no 
 code), leaving two cases that do not use / support remap features. Both cases 
 currently call request_url_remap(), which is now set to return false. The two 
 cases are
 1) If you configure remap mode to be URL_REMAP_FOR_OS (2). This is an 
 undocumented feature, I don't even know when or why you'd want to use it. The 
 setting in RecordsConfig.cc is proxy.config.url_remap.url_remap_mode, which 
 defaults to 1.
 2) If TS uses raw connections, I think for example when using the CONNECT 
 method, we will not do remap rules. This would be in a forward proxy mode, 
 primarily for things like HTTPS afaik.
 I don't think either case is critical to get support for the remap processor 
 for v2.2, but please adjust the fix version if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-510) API: Add documentation for HTTP Status code counters in stats (TS-509)

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-510:
-

Assignee: Susan Hinrichs

 API: Add documentation for HTTP Status code counters in stats (TS-509)
 --

 Key: TS-510
 URL: https://issues.apache.org/jira/browse/TS-510
 Project: Traffic Server
  Issue Type: Bug
  Components: Documentation, TS API
Affects Versions: 2.1.6
Reporter: Theo Schlossnagle
Assignee: Susan Hinrichs
 Fix For: Docs






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-518) Close UA connection early if the origin sent Connection: close, there's a bad Content-Length header, and the UA connection has Keep-Alive.

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-518:
-

Assignee: Susan Hinrichs  (was: James Peach)

 Close UA connection early if the origin sent Connection close:, there's a bad 
 Content-Length header, and the UA connection has Keep-Alive.
 --

 Key: TS-518
 URL: https://issues.apache.org/jira/browse/TS-518
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Susan Hinrichs
 Fix For: 5.2.0


 In a very special case, we could improve the user experience by forcefully 
 closing the connection early. The case is:
 1) The origin server sends a Content-Length: header that is wrong, where the 
 CL: value exceeds the actual body size.
 2) The origin server either sends a Connection: close, or it uses HTTP/1.0 
 without keep-alive.
 3) The client (and TS) uses Keep-Alive on the UA connection.
 In this case, we can end up stalling the UA until either the UA or the TS 
 connection times out. It might make sense to prematurely disconnect the 
 client when this case is detected.
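 A minimal sketch of that early-close test, using hypothetical structure and 
 function names rather than the real HttpSM/HttpTransact API:
 {code}
 #include <cstdint>

 // Hypothetical snapshot of what the HTTP SM knows when the origin side
 // finishes; field and function names are illustrative only.
 struct OriginState {
   bool    has_content_length;   // origin sent a CL: header
   int64_t content_length;       // value of that header
   int64_t body_bytes_received;  // body bytes actually seen from the origin
   bool    connection_closed;    // Connection: close, or HTTP/1.0 w/o keep-alive
 };

 // True when the advertised Content-Length can never be satisfied and a
 // keep-alive UA would otherwise hang until a timeout fires.
 static bool
 should_close_ua_early(const OriginState &os, bool ua_keep_alive)
 {
   bool cl_never_satisfied = os.has_content_length &&
                             os.content_length > os.body_bytes_received;
   return cl_never_satisfied && os.connection_closed && ua_keep_alive;
 }
 {code}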



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-518) Close UA connection early if the origin sent Connection close:, there's a bad Content-Length header, and the UA connection has Keep-Alive.

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196370#comment-14196370
 ] 

Alan M. Carroll commented on TS-518:


Is it presumed the origin server has sent a FIN or RST on the connection?

 Close UA connection early if the origin sent Connection close:, there's a bad 
 Content-Length header, and the UA connection has Keep-Alive.
 --

 Key: TS-518
 URL: https://issues.apache.org/jira/browse/TS-518
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Susan Hinrichs
 Fix For: 5.2.0


 In a very special case, we could improve the user experience by forcefully 
 closing the connection early. The case is:
 1) The origin server sends a Content-Length: header that is wrong, where the 
 CL: value exceeds the actual body size.
 2) The origin server either sends a Connection: close, or it uses HTTP/1.0 
 without keep-alive.
 3) The client (and TS) uses Keep-Alive on the UA connection.
 In this case, we can end up stalling the UA until either the UA or the TS 
 connection times out. It might make sense to prematurely disconnect the 
 client when this case is detected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-608) Is HttpSessionManager::purge_keepalives() too aggressive?

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-608:
--
Assignee: (was: Bin Chen)

 Is HttpSessionManager::purge_keepalives()  too aggressive?
 --

 Key: TS-608
 URL: https://issues.apache.org/jira/browse/TS-608
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
 Fix For: 5.2.0

 Attachments: TS-608.patch


 It seems that if we hit the max server connections limit, we call this purge 
 function in the session manager, which will close all currently open 
 keep-alive connections. This seems very aggressive; why not limit it to, say, 
 only removing 10% of each bucket or some such? Also, how does this work 
 together with per-origin limits? Ideally, if per-origin limits are in 
 place, we would only purge sessions for the IP we wish to connect to?
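 A minimal sketch of the less aggressive purge suggested above, with 
 hypothetical stand-ins for the session manager's buckets (not the real 
 HttpSessionManager types):
 {code}
 #include <cstddef>
 #include <vector>

 struct ServerSession {
   void do_io_close() { /* close this idle origin connection */ }
 };
 using Bucket = std::vector<ServerSession *>;

 // Close only a fraction of each bucket (e.g. 10%), rounded up so that a
 // non-empty bucket always frees at least one slot, instead of closing
 // every idle keep-alive session at once.
 static void
 purge_keepalives_partial(std::vector<Bucket> &buckets, double fraction = 0.10)
 {
   for (Bucket &bucket : buckets) {
     size_t to_close = static_cast<size_t>(bucket.size() * fraction);
     if (!bucket.empty() && to_close == 0)
       to_close = 1;
     for (size_t i = 0; i < to_close; ++i) {
       bucket.back()->do_io_close();
       bucket.pop_back();
     }
   }
 }
 {code}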



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-608) Is HttpSessionManager::purge_keepalives() too aggressive?

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-608:
--
Assignee: Bryan Call

 Is HttpSessionManager::purge_keepalives()  too aggressive?
 --

 Key: TS-608
 URL: https://issues.apache.org/jira/browse/TS-608
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Bryan Call
 Fix For: 5.2.0

 Attachments: TS-608.patch


 It seems that if we hit the max server connections limit, we call this purge 
 function in the session manager, which will close all currently open 
 keep-alive connections. This seems very aggressive; why not limit it to, say, 
 only removing 10% of each bucket or some such? Also, how does this work 
 together with per-origin limits? Ideally, if per-origin limits are in 
 place, we would only purge sessions for the IP we wish to connect to?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-641) WebUI: Kill it with fire.

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-641:
--
Assignee: Leif Hedstrom

 WebUI: Kill it with fire.
 -

 Key: TS-641
 URL: https://issues.apache.org/jira/browse/TS-641
 Project: Traffic Server
  Issue Type: Improvement
  Components: Web UI
Reporter: Igor Galić
Assignee: Leif Hedstrom
 Fix For: sometime

   Original Estimate: 14m
  Remaining Estimate: 14m

 Get rid of proxy/mgmt/html2
 Get rid of all code in proxy/mgmt/web2 that is not used
 Merge used code into proxy/mgmt/utils/WebMgmtUtils.*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-654) request for support of Layer7 http health checking for Origin Servers

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-654:
--
Assignee: Alan M. Carroll  (was: portl4t)

 request for support of Layer7 http health checking for Origin Servers
 -

 Key: TS-654
 URL: https://issues.apache.org/jira/browse/TS-654
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP
Affects Versions: 3.0.0
Reporter: Zhao Yongming
Assignee: Alan M. Carroll
 Attachments: HCUtil.cc, hc.patch


 this ticket is for the L7 health checking project: 
 https://cwiki.apache.org/confluence/display/TS/HttpHealthCheck



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-654) request for support of Layer7 http health checking for Origin Servers

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-654:
--
Fix Version/s: (was: 5.3.0)

 request for support of Layer7 http health checking for Origin Servers
 -

 Key: TS-654
 URL: https://issues.apache.org/jira/browse/TS-654
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP
Affects Versions: 3.0.0
Reporter: Zhao Yongming
Assignee: Alan M. Carroll
 Attachments: HCUtil.cc, hc.patch


 this ticket is for the L7 health checking project: 
 https://cwiki.apache.org/confluence/display/TS/HttpHealthCheck



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3166) HostDB Upgrade

2014-11-04 Thread Alan M. Carroll (JIRA)
Alan M. Carroll created TS-3166:
---

 Summary: HostDB Upgrade
 Key: TS-3166
 URL: https://issues.apache.org/jira/browse/TS-3166
 Project: Traffic Server
  Issue Type: Improvement
  Components: HostDB, DNS
Reporter: Alan M. Carroll


Implement the HostDB upgrade as described in 
https://cwiki.apache.org/confluence/download/attachments/47384310/ApacheCon-2014-Host-Resolution.pdf?version=1&modificationDate=1414085058000&api=v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-658) HTTP SM holds the cache write lock for too long

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-658:
--
Assignee: Alan M. Carroll  (was: Phil Sorber)

 HTTP SM holds the cache write lock for too long
 ---

 Key: TS-658
 URL: https://issues.apache.org/jira/browse/TS-658
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Alan M. Carroll
  Labels: A
 Fix For: 5.2.0


 It seems we open the cache for write very early on in the HTTP SM, which can 
 have a very bad effect on performance if the object is not cacheable. It's not 
 totally clear why this is done this way, but we should examine this for 
 v3.1 and try to minimize how long we hold the lock. It's possible this is 
 related to read_while_writer, but if so it should be modified IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-656) reimplement Connection Collapsing in a smooth way

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-656.
-
Resolution: Duplicate

Addressed via documentation. Remainder addressed by TS-2604.

 reimplement Connection Collapsing in a smooth way
 -

 Key: TS-656
 URL: https://issues.apache.org/jira/browse/TS-656
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP
Affects Versions: 2.1.6
 Environment: as discussed in IRC, we'd like to remove the current CC 
 code from trunk.
Reporter: Zhao Yongming
 Fix For: 5.3.0


 we should figure out how to implement Connection Collapsing in a way that is 
 not so ugly. Target for v3.1 or so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-656) reimplement Connection Collapsing in a smooth way

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-656:
--
Fix Version/s: (was: 5.3.0)

 reimplement Connection Collapsing in a smooth way
 -

 Key: TS-656
 URL: https://issues.apache.org/jira/browse/TS-656
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP
Affects Versions: 2.1.6
 Environment: as discussed in IRC, we'd like to remove the current CC 
 code from trunk.
Reporter: Zhao Yongming

 we should figure out how to implement Connection Collapsing in a way that is 
 not so ugly. Target for v3.1 or so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-658) HTTP SM holds the cache write lock for too long

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-658:
--
Fix Version/s: (was: 5.2.0)
   sometime

 HTTP SM holds the cache write lock for too long
 ---

 Key: TS-658
 URL: https://issues.apache.org/jira/browse/TS-658
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
Assignee: Alan M. Carroll
  Labels: A
 Fix For: sometime


 It seems we open the cache for write very early on in the HTTP SM, which can 
 have a very bad effect on performance if the object is not cacheable. It's not 
 totally clear why this is done this way, but we should examine this for 
 v3.1 and try to minimize how long we hold the lock. It's possible this is 
 related to read_while_writer, but if so it should be modified IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-727) Do we need support for streams partitions?

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-727.
-
Resolution: Duplicate

 Do we need support for streams partitions?
 

 Key: TS-727
 URL: https://issues.apache.org/jira/browse/TS-727
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: Leif Hedstrom
Assignee: Alan M. Carroll
  Labels: CacheToolKit
 Fix For: 6.0.0


 There's code in the cache related to MIXT streams volumes (caches). Since we 
 don't support streams, I'm thinking this code could be removed? 
 Alternatively, we should expose APIs so that someone writing a plugin who 
 wishes to store a different protocol (e.g. QT) can register that media type 
 with the API and core. The idea being that the core only contains protocols 
 that are in the core, but exposes the cache so that plugins can take 
 advantage of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-727) Do we need support for streams partitions?

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-727:
--
Fix Version/s: (was: 6.0.0)

 Do we need support for streams partitions?
 

 Key: TS-727
 URL: https://issues.apache.org/jira/browse/TS-727
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: Leif Hedstrom
Assignee: Alan M. Carroll
  Labels: CacheToolKit

 There's code in the cache related to MIXT streams volumes (caches). Since we 
 don't support streams, I'm thinking this code could be removed? 
 Alternatively, we should expose APIs so that someone writing a plugin who 
 wishes to store a different protocol (e.g. QT) can register that media type 
 with the API and core. The idea being that the core only contains protocols 
 that are in the core, but exposes the cache so that plugins can take 
 advantage of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-767) Make the cluster interface configurable

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-767.
-
   Resolution: Duplicate
Fix Version/s: (was: sometime)

Already addressed by the HTTP proxy port issues.

 Make the cluster interface configurable
 ---

 Key: TS-767
 URL: https://issues.apache.org/jira/browse/TS-767
 Project: Traffic Server
  Issue Type: Improvement
  Components: Configuration, Network
Affects Versions: 2.1.8
Reporter: Arno Toell
Priority: Minor

 I consider the way Traffic Server opens listening ports dangerous, or at 
 least riskier than necessary. Currently ATS allows configuring the port 
 numbers for the related services, but not the listening interface; instead it 
 binds to 0.0.0.0. Therefore I'd like to suggest: 
 * -Allow the user to specify a listening interface; don't assume 0.0.0.0 
 suits all setups.-
 * -Disable the autoconfiguration port (i.e. 8083 by default) unless 
 proxy.local.cluster.type is set to enable clustering (!= 3). I think 
 _traffic_shell_ and eventually _traffic_line_ use this port to configure ATS 
 locally. If so, it should at least be bound to the loopback, or use Unix 
 Domain Sockets or whatever local socket method you prefer.-
 * Disable the reliable service port (i.e. 8088 by default) unless 
 proxy.local.cluster.type enables clustering, similar to the 
 autoconfiguration port. If _traffic_cop_ (or something else on the local 
 machine) is using this port, the same suggestions apply as above. 
 * -The internal communication port (8084) should not open a public socket 
 at all. Instead use Unix Domain Sockets or something similar.-
 n.b. struck-through items are obsolete or tracked in TS-765. For the 
 remaining item it is worth adding this comment from TS-765:
 8088 is no longer a problem until clustering is enabled, so only the 
 TS-766 improvement is left there. However, if clustering is enabled, I think 
 it is still fairly useful to allow the user to bind to a specific IP. Say, 
 you run a public-facing proxy in cluster mode where you want cluster peers to 
 communicate on private IPs.
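 For illustration, a minimal sketch of the requested behaviour in plain POSIX 
 terms: bind the listening socket to a configured interface address instead of 
 the 0.0.0.0 wildcard. This is generic sockets code, not the actual ATS listen 
 path; the address and backlog are placeholders.
 {code}
 #include <arpa/inet.h>
 #include <cstring>
 #include <netinet/in.h>
 #include <sys/socket.h>
 #include <unistd.h>

 // Open a TCP listener bound to a specific local address (e.g. a private
 // cluster-facing IP) rather than INADDR_ANY. Returns the fd, or -1 on error.
 int
 open_listen_socket(const char *bind_addr, int port)
 {
   int fd = socket(AF_INET, SOCK_STREAM, 0);
   if (fd < 0)
     return -1;

   sockaddr_in sa;
   memset(&sa, 0, sizeof(sa));
   sa.sin_family = AF_INET;
   sa.sin_port   = htons(port);
   // The key difference from the current behaviour: use the configured
   // address instead of the 0.0.0.0 wildcard.
   if (inet_pton(AF_INET, bind_addr, &sa.sin_addr) != 1 ||
       bind(fd, reinterpret_cast<sockaddr *>(&sa), sizeof(sa)) < 0 ||
       listen(fd, 128) < 0) {
     close(fd);
     return -1;
   }
   return fd;
 }
 {code}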



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-777) Increasing logbuffer size makes us drop log entries

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-777:
--
Fix Version/s: (was: 5.2.0)
   5.3.0

 Increasing logbuffer size makes us drop log entries
 -

 Key: TS-777
 URL: https://issues.apache.org/jira/browse/TS-777
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 2.1.8
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
  Labels: A
 Fix For: 5.3.0


 Setting proxy.config.log.log_buffer_size higher than somewhere around 24KB 
 makes us start losing log entries. This is bad, since increasing this setting 
 could be a way to increase performance on busy systems. For now I've set the 
 default to 16KB, which seems to be stable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-795) Clean up definitions and usage of buffer size indexes vs buffer sizes

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-795:
--
Fix Version/s: (was: 5.2.0)
   sometime

 Clean up definitions and usage of buffer size indexes vs buffer sizes
 -

 Key: TS-795
 URL: https://issues.apache.org/jira/browse/TS-795
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: Leif Hedstrom
 Fix For: sometime


 Right now, we have a set of defines and APIs which can take either an 
 index to a size (used e.g. as an allocator index) or a real size. Both of 
 these are currently int64_t in all APIs and member variables, and in the 
 implementations the usage is sometimes overloaded (i.e. we do size 
 calculations on top of the indexes, making for really confusing code).
 There's also a lack of proper identification of what is a size index type 
 (or parameter) and what is a size. This makes for risky code.
 I think we should clean up all the implementations and APIs as follows:
 1) Give all APIs and macros proper names, clearly indicating whether they 
 work on a size index or a size.
 2) Keep only the size types, parameters and macros as int64_t. Do not 
 overload real sizes onto the size indexes.
 3) Make the size indexes an int (or even a short), or perhaps even better 
 an enum (like below). All APIs, parameters, and member variables that refer 
 to such size indexes would use this appropriate type.
 {code}
 enum BufferSizeIndex {
   BUFFER_SIZE_INDEX_128 = 0,
   BUFFER_SIZE_INDEX_256,   /*  1 */
   BUFFER_SIZE_INDEX_512,   /*  2 */
   BUFFER_SIZE_INDEX_1K,    /*  3 */
   BUFFER_SIZE_INDEX_2K,    /*  4 */
   BUFFER_SIZE_INDEX_4K,    /*  5 */
   BUFFER_SIZE_INDEX_8K,    /*  6 */
   BUFFER_SIZE_INDEX_16K,   /*  7 */
   BUFFER_SIZE_INDEX_32K,   /*  8 */
   BUFFER_SIZE_INDEX_64K,   /*  9 */
   BUFFER_SIZE_INDEX_128K,  /* 10 */
   BUFFER_SIZE_INDEX_256K,  /* 11 */
   BUFFER_SIZE_INDEX_512K,  /* 12 */
   BUFFER_SIZE_INDEX_1M,    /* 13 */
   BUFFER_SIZE_INDEX_2M,    /* 14 */
   /* These have special semantics */
   BUFFER_SIZE_NOT_ALLOCATED,
 };
 {code}
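 To illustrate point 1, a sketch of the kind of clearly named helper the 
 cleanup calls for, assuming the BufferSizeIndex enum above and assuming the 
 usual 128-byte base size implied by its names (index 0 is 128 bytes, each 
 step doubles):
 {code}
 #include <cstdint>

 // Convert a size index into the actual byte size, so that indexes and
 // real sizes are never passed through the same int64_t parameter.
 constexpr int64_t
 buffer_size_from_index(BufferSizeIndex idx)
 {
   return INT64_C(128) << static_cast<int>(idx);
 }

 static_assert(buffer_size_from_index(BUFFER_SIZE_INDEX_1K) == 1024,
               "index 3 maps to 1 KB");
 {code}
 A matching, distinctly named counterpart for the reverse direction would make 
 accidental mixing of the two kinds of values obvious at the call site.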



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-795) Clean up definitions and usage of buffer size indexes vs buffer sizes

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196430#comment-14196430
 ] 

Alan M. Carroll commented on TS-795:


I should contribute my NumericTypeId class for this.

 Clean up definitions and usage of buffer size indexes vs buffer sizes
 -

 Key: TS-795
 URL: https://issues.apache.org/jira/browse/TS-795
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: Leif Hedstrom
 Fix For: sometime


 Right now, we have a set of defines and APIs which can take either an 
 index to a size (used e.g. as an allocator index) or a real size. Both of 
 these are currently int64_t in all APIs and member variables, and in the 
 implementations the usage is sometimes overloaded (i.e. we do size 
 calculations on top of the indexes, making for really confusing code).
 There's also a lack of proper identification of what is a size index type 
 (or parameter) and what is a size. This makes for risky code.
 I think we should clean up all the implementations and APIs as follows:
 1) Give all APIs and macros proper names, clearly indicating whether they 
 work on a size index or a size.
 2) Keep only the size types, parameters and macros as int64_t. Do not 
 overload real sizes onto the size indexes.
 3) Make the size indexes an int (or even a short), or perhaps even better 
 an enum (like below). All APIs, parameters, and member variables that refer 
 to such size indexes would use this appropriate type.
 {code}
 enum BufferSizeIndex {
   BUFFER_SIZE_INDEX_128 = 0,
   BUFFER_SIZE_INDEX_256,   /*  1 */
   BUFFER_SIZE_INDEX_512,   /*  2 */
   BUFFER_SIZE_INDEX_1K,    /*  3 */
   BUFFER_SIZE_INDEX_2K,    /*  4 */
   BUFFER_SIZE_INDEX_4K,    /*  5 */
   BUFFER_SIZE_INDEX_8K,    /*  6 */
   BUFFER_SIZE_INDEX_16K,   /*  7 */
   BUFFER_SIZE_INDEX_32K,   /*  8 */
   BUFFER_SIZE_INDEX_64K,   /*  9 */
   BUFFER_SIZE_INDEX_128K,  /* 10 */
   BUFFER_SIZE_INDEX_256K,  /* 11 */
   BUFFER_SIZE_INDEX_512K,  /* 12 */
   BUFFER_SIZE_INDEX_1M,    /* 13 */
   BUFFER_SIZE_INDEX_2M,    /* 14 */
   /* These have special semantics */
   BUFFER_SIZE_NOT_ALLOCATED,
 };
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-802) Unique KA pools for SOCKS/src IP parameters

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-802.
-
Resolution: Duplicate

 Unique KA pools for SOCKS/src IP parameters
 ---

 Key: TS-802
 URL: https://issues.apache.org/jira/browse/TS-802
 Project: Traffic Server
  Issue Type: Wish
  Components: HTTP
Reporter: M. Nunberg
 Fix For: sometime


 From what I've observed, ATS' keepalive/connection cache only indexes by the 
 OS server or next proxy server, but not by other types of 
 connection/transport/socket parameters; specifically, in my case, negotiated 
 SOCKS connections and outgoing connections which are bound to a specific 
 source IP.
 Is it possible to have such functionality in the near future? Currently I've 
 been forced to write my own 'KA gateway' which pretends to be an HTTP server 
 (to which ATS can maintain unique connections) and have that do SOCKS/source 
 IP bind()ing for me. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-802) Unique KA pools for SOCKS/src IP parameters

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-802:
--
Fix Version/s: (was: sometime)

 Unique KA pools for SOCKS/src IP parameters
 ---

 Key: TS-802
 URL: https://issues.apache.org/jira/browse/TS-802
 Project: Traffic Server
  Issue Type: Wish
  Components: HTTP
Reporter: M. Nunberg

 From what I've observed, ATS' keepalive/connection cache only indexes by the 
 OS server or next proxy server, but not by other types of 
 connection/transport/socket parameters; specifically, in my case, negotiated 
 SOCKS connections and outgoing connections which are bound to a specific 
 source IP.
 Is it possible to have such functionality in the near future? Currently I've 
 been forced to write my own 'KA gateway' which pretends to be an HTTP server 
 (to which ATS can maintain unique connections) and have that do SOCKS/source 
 IP bind()ing for me. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-803) Fix SOCKS breakage and allow for setting next-hop SOCKS

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-803.
-
   Resolution: Won't Fix
Fix Version/s: (was: sometime)

If someone wishes to fix, please provide a patch against the current master.

 Fix SOCKS breakage and allow for setting next-hop SOCKS
 ---

 Key: TS-803
 URL: https://issues.apache.org/jira/browse/TS-803
 Project: Traffic Server
  Issue Type: New Feature
  Components: Network
Affects Versions: 3.0.0
 Environment: Wherever ATS might run
Reporter: M. Nunberg

 Here is a patch I drew up a few months ago against a snapshot of ATS/2.1.7 
 unstable/git. There are some quirks here, and I'm not that sure any more what 
 this patch does exactly. However it:
 1) Does fix SOCKS connections in general
 2) Allows setting next-hop SOCKS proxy via the API
 Problems:
 See https://issues.apache.org/jira/browse/TS-802
 This has no effect on connections which are drawn from the connection pool, 
 as it seems ATS currently doesn't maintain unique identities for peripheral 
 connection params (source IP, SOCKS etc); i.e. this only affects new TCP 
 connections to an OS.
 diff -x '*.o' -ru tsorig/iocore/net/I_NetVConnection.h tsgit217/iocore/net/I_NetVConnection.h
 --- tsorig/iocore/net/I_NetVConnection.h  2011-03-09 21:43:58.000000000 +0000
 +++ tsgit217/iocore/net/I_NetVConnection.h  2011-03-17 14:37:18.000000000 +0000
 @@ -120,6 +120,13 @@
    /// Version of SOCKS to use.
    unsigned char socks_version;
 +  struct {
 +    unsigned int ip;
 +    int port;
 +    char *username;
 +    char *password;
 +  } socks_override;
 +
    int socket_recv_bufsize;
    int socket_send_bufsize;
 Only in tsgit217/iocore/net: Makefile
 Only in tsgit217/iocore/net: Makefile.in
 diff -x '*.o' -ru tsorig/iocore/net/P_Socks.h tsgit217/iocore/net/P_Socks.h
 --- tsorig/iocore/net/P_Socks.h  2011-03-09 21:43:58.000000000 +0000
 +++ tsgit217/iocore/net/P_Socks.h  2011-03-17 13:17:20.000000000 +0000
 @@ -126,7 +126,7 @@
    unsigned char version;
    bool write_done;
 -
 +  bool manual_parent_selection;
    SocksAuthHandler auth_handler;
    unsigned char socks_cmd;
 @@ -145,7 +145,8 @@
    SocksEntry():Continuation(NULL), netVConnection(0),
      ip(0), port(0), server_ip(0), server_port(0), nattempts(0),
 -    lerrno(0), timeout(0), version(5), write_done(false), auth_handler(NULL), socks_cmd(NORMAL_SOCKS)
 +    lerrno(0), timeout(0), version(5), write_done(false), manual_parent_selection(false),
 +    auth_handler(NULL), socks_cmd(NORMAL_SOCKS)
    {
    }
  };
 diff -x '*.o' -ru tsorig/iocore/net/Socks.cc tsgit217/iocore/net/Socks.cc
 --- tsorig/iocore/net/Socks.cc  2011-03-09 21:43:58.000000000 +0000
 +++ tsgit217/iocore/net/Socks.cc  2011-03-17 13:46:07.000000000 +0000
 @@ -73,7 +73,8 @@
    nattempts = 0;
    findServer();
 -  timeout = this_ethread()->schedule_in(this, HRTIME_SECONDS(netProcessor.socks_conf_stuff->server_connect_timeout));
 +//  timeout = this_ethread()->schedule_in(this, HRTIME_SECONDS(netProcessor.socks_conf_stuff->server_connect_timeout));
 +  timeout = this_ethread()->schedule_in(this, HRTIME_SECONDS(5));
    write_done = false;
  }
 @@ -81,6 +82,15 @@
  SocksEntry::findServer()
  {
    nattempts++;
 +  if (manual_parent_selection) {
 +    if (nattempts > 1) {
 +      // Nullify IP and PORT
 +      server_ip = -1;
 +      server_port = 0;
 +    }
 +    Debug(mndebug(Socks), "findServer() is a noop with manual socks selection");
 +    return;
 +  }
  #ifdef SOCKS_WITH_TS
    if (nattempts == 1) {
 @@ -187,7 +197,6 @@
  }
  Debug("Socks", "Failed to connect to %u.%u.%u.%u:%d", PRINT_IP(server_ip), server_port);
 -
  findServer();
  if (server_ip == (uint32_t) - 1) {
 diff -x '*.o' -ru tsorig/iocore/net/UnixNetProcessor.cc tsgit217/iocore/net/UnixNetProcessor.cc
 --- tsorig/iocore/net/UnixNetProcessor.cc  2011-03-09 21:43:58.000000000 +0000
 +++ tsgit217/iocore/net/UnixNetProcessor.cc  2011-03-17 15:48:38.000000000 +0000
 @@ -228,6 +228,11 @@
    !socks_conf_stuff->ip_range.match(ip))
  #endif
    );
 +  if (opt->socks_override.ip >= 1) {
 +    using_socks = true;
 +    Debug(mndebug, "trying to set using_socks to true");
 +  }
 +
    SocksEntry *socksEntry = NULL;
  #endif
    NET_SUM_GLOBAL_DYN_STAT(net_connections_currently_open_stat, 1);
 @@ -242,6 +247,16 @@
    if (using_socks) {
      Debug("Socks", "Using Socks ip: %u.%u.%u.%u:%d\n", PRINT_IP(ip), port);
      socksEntry = socksAllocator.alloc();
 +
 +    if (opt->socks_override.ip) {
 +      // Needs to be done before socksEntry->init()
 +      socksEntry->server_ip = opt->socks_override.ip;
 +      socksEntry->server_port = opt->socks_override.port;
 +      socksEntry->manual_parent_selection 

[jira] [Closed] (TS-817) TSFetchUrl/TSFetchPages does not work with HTTP/1.1 requests

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-817.
-
   Resolution: Invalid
Fix Version/s: (was: 5.2.0)

 TSFetchUrl/TSFetchPages does not work with HTTP/1.1 requests
 

 Key: TS-817
 URL: https://issues.apache.org/jira/browse/TS-817
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.0
Reporter: Naveen
  Labels: api-change, function

 The API calls TSFetchUrl/TSFetchPages do not work with HTTP/1.1 requests. The 
 implementation seems to use the end of the connection to signal the user 
 callback function, which is not the case on persistent connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-829) socks stats cleanup - some stats are registered, but not used

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-829:
--
Fix Version/s: (was: sometime)
   5.3.0

 socks stats cleanup - some stats are registered, but not used
 -

 Key: TS-829
 URL: https://issues.apache.org/jira/browse/TS-829
 Project: Traffic Server
  Issue Type: Bug
  Components: Network
Affects Versions: 3.0.0
Reporter: Bryan Call
Priority: Minor
 Fix For: 5.3.0


 From reviewing TS-818 I noticed that the stats that were being double 
 registered are not used. Some cleanup work should be done for the socks stats.
 Stats that are registered, but not used in the code:
 [bcall@snowball traffic.git]$ grep -r proxy.process.socks iocore/net/Net.cc 
  proxy.process.socks.connections_successful,
  proxy.process.socks.connections_unsuccessful,
  proxy.process.socks.connections_currently_open,
 These stats are used in some tests, so maybe they should be added back into 
 the code.
 [bcall@snowball traffic.git]$ grep -rl --binary-files=without-match 
 proxy.process.socks.connections_ *
 iocore/net/Net.cc
 mgmt/api/remote/APITestCliRemote.cc
 test/plugin/test-mgmt/test-mgmt.c
 I did however see these stats being used:
 [bcall@snowball traffic.git]$ grep -r SOCKSPROXY_ *
 proxy/SocksProxy.cc:#define SOCKSPROXY_INC_STAT(x) \
 proxy/SocksProxy.cc:
 SOCKSPROXY_INC_STAT(socksproxy_http_connections_stat);
 proxy/SocksProxy.cc:
 SOCKSPROXY_INC_STAT(socksproxy_tunneled_connections_stat);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-829) socks stats cleanup - some stats are registered, but not used

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-829:
--
Assignee: Bryan Call

 socks stats cleanup - some stats are registered, but not used
 -

 Key: TS-829
 URL: https://issues.apache.org/jira/browse/TS-829
 Project: Traffic Server
  Issue Type: Bug
  Components: Network
Affects Versions: 3.0.0
Reporter: Bryan Call
Assignee: Bryan Call
Priority: Minor
 Fix For: 5.3.0


 From reviewing TS-818 I noticed that the stats that were being double 
 registered are not used. Some cleanup work should be done for the socks stats.
 Stats that are registered, but not used in the code:
 [bcall@snowball traffic.git]$ grep -r proxy.process.socks iocore/net/Net.cc 
  proxy.process.socks.connections_successful,
  proxy.process.socks.connections_unsuccessful,
  proxy.process.socks.connections_currently_open,
 These stats are used in some tests, so maybe they should be added back into 
 the code.
 [bcall@snowball traffic.git]$ grep -rl --binary-files=without-match 
 proxy.process.socks.connections_ *
 iocore/net/Net.cc
 mgmt/api/remote/APITestCliRemote.cc
 test/plugin/test-mgmt/test-mgmt.c
 I did however see these stats being used:
 [bcall@snowball traffic.git]$ grep -r SOCKSPROXY_ *
 proxy/SocksProxy.cc:#define SOCKSPROXY_INC_STAT(x) \
 proxy/SocksProxy.cc:
 SOCKSPROXY_INC_STAT(socksproxy_http_connections_stat);
 proxy/SocksProxy.cc:
 SOCKSPROXY_INC_STAT(socksproxy_tunneled_connections_stat);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-852) hostdb and dns system hangup

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-852:
--
Fix Version/s: (was: sometime)

 hostdb and dns system hangup
 

 Key: TS-852
 URL: https://issues.apache.org/jira/browse/TS-852
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, DNS
Affects Versions: 3.1.0
Reporter: Zhao Yongming
Assignee: Phil Sorber
Priority: Minor
  Labels: dns, hostdb
 Fix For: 3.1.0

 Attachments: TS-852.patch


 in my testing, all requests that need to go to the OS fail with a 30s timeout, 
 and the slow request log shows the DNS data is missing: 
 {code}
 [Jun 22 15:33:47.183] Server {1106880832} ERROR: [8122411] Slow Request: url: 
 http://download.im.alisoft.com/aliim/AliIM6/update/packages//2011-5-4-10-10-40/0/modulesproxy/8203.xml.wwzip
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.667 sm_finish: 30.667
 [Jun 22 15:33:47.220] Server {1099663680} ERROR: [8122422] Slow Request: url: 
 http://img.uu1001.cn/materials/original/2010-11-22/22-48/a302876929a9c40a8272ac439a16ad25e74edf71.png
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.318 sm_finish: 30.318
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122421] Slow Request: url: 
 http://img02.taobaocdn.com/bao/uploaded/i2/T1mp8QXopdXXblNIZ6_061203.jpg_b.jpg
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.597 sm_finish: 30.597
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122409] Slow Request: url: 
 http://img04.taobaocdn.com/tps/i4/T1EM9gXltt-440-135.jpg status: 0 
 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 ua_begin: 0.000 
 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 cache_open_read_end: 
 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 server_connect: -1.000 
 server_first_read: -1.000 server_read_header_done: -1.000 server_close: 
 -1.000 ua_close: 30.948 sm_finish: 30.948
 {code}
 all HTTP SMs show from {http} in DNS_LOOKUP,
 and traffic.out shows no errors when I enable debug on 
 hostdb.*|dns.*:
 {code}
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) delaying force 0 
 answer for img03.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) DNS 
 img03.taobaocdn.com
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) immediate 
 answer for 127.0.0.1
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) delaying force 0 
 answer for img08.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) DNS 
 img08.taobaocdn.com
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.470] Server {1099663680} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.470] Server {1099663680} DEBUG: (hostdb) immediate answer 
 for 127.0.0.1
 [Jun 22 

[jira] [Updated] (TS-852) hostdb and dns system hangup

2014-11-04 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll updated TS-852:
---
Fix Version/s: sometime

 hostdb and dns system hangup
 

 Key: TS-852
 URL: https://issues.apache.org/jira/browse/TS-852
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, DNS
Affects Versions: 3.1.0
Reporter: Zhao Yongming
Assignee: Phil Sorber
Priority: Minor
  Labels: dns, hostdb
 Fix For: 3.1.0, sometime

 Attachments: TS-852.patch


 in my testing, all requests that need to go to the OS fail with a 30s timeout, 
 and the slow request log shows the DNS data is missing: 
 {code}
 [Jun 22 15:33:47.183] Server {1106880832} ERROR: [8122411] Slow Request: url: 
 http://download.im.alisoft.com/aliim/AliIM6/update/packages//2011-5-4-10-10-40/0/modulesproxy/8203.xml.wwzip
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.667 sm_finish: 30.667
 [Jun 22 15:33:47.220] Server {1099663680} ERROR: [8122422] Slow Request: url: 
 http://img.uu1001.cn/materials/original/2010-11-22/22-48/a302876929a9c40a8272ac439a16ad25e74edf71.png
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.318 sm_finish: 30.318
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122421] Slow Request: url: 
 http://img02.taobaocdn.com/bao/uploaded/i2/T1mp8QXopdXXblNIZ6_061203.jpg_b.jpg
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.597 sm_finish: 30.597
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122409] Slow Request: url: 
 http://img04.taobaocdn.com/tps/i4/T1EM9gXltt-440-135.jpg status: 0 
 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 ua_begin: 0.000 
 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 cache_open_read_end: 
 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 server_connect: -1.000 
 server_first_read: -1.000 server_read_header_done: -1.000 server_close: 
 -1.000 ua_close: 30.948 sm_finish: 30.948
 {code}
 all HTTP SMs show from {http} in DNS_LOOKUP,
 and traffic.out shows no errors when I enable debug on 
 hostdb.*|dns.*:
 {code}
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) delaying force 0 
 answer for img03.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) DNS 
 img03.taobaocdn.com
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) immediate 
 answer for 127.0.0.1
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) delaying force 0 
 answer for img08.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) DNS 
 img08.taobaocdn.com
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.470] Server {1099663680} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.470] Server {1099663680} DEBUG: (hostdb) immediate answer 
 for 127.0.0.1
 [Jun 22 

[jira] [Updated] (TS-871) Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode

2014-11-04 Thread Bryan Call (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Call updated TS-871:
--
Assignee: Leif Hedstrom

 Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode
 ---

 Key: TS-871
 URL: https://issues.apache.org/jira/browse/TS-871
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
Reporter: Igor Galić
Assignee: Leif Hedstrom
 Fix For: sometime

 Attachments: TS-871-20121107.diff, TS-871.diff, 
 ats_Thttp.debug.notime.txt, ats_Thttp.debug.txt, 
 revats_Thttp.debug.notime.txt, revats_Thttp.debug.txt, serf_proxy.cap, 
 serf_revproxy.cap, stats.diff


 When accessing a remote subversion repository via http or https with svn 1.7, 
 it will currently time out:
 {noformat}
 igalic@tynix ~/src/asf % svn co 
 http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http/
 svn: E020014: Unable to connect to a repository at URL 
 'http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http'
 svn: E020014: Unspecified error message: 504 Connection Timed Out
 1 igalic@tynix ~/src/asf %
 {noformat}
 I have started traffic_server -Thttp and captured the output, which I'm 
 attaching.
 There's also a capture from the network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-871) Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode

2014-11-04 Thread Bryan Call (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196463#comment-14196463
 ] 

Bryan Call commented on TS-871:
---

@Leif Can you test again and see if it works with subversion?

 Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode
 ---

 Key: TS-871
 URL: https://issues.apache.org/jira/browse/TS-871
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
Reporter: Igor Galić
Assignee: Leif Hedstrom
 Fix For: sometime

 Attachments: TS-871-20121107.diff, TS-871.diff, 
 ats_Thttp.debug.notime.txt, ats_Thttp.debug.txt, 
 revats_Thttp.debug.notime.txt, revats_Thttp.debug.txt, serf_proxy.cap, 
 serf_revproxy.cap, stats.diff


 When accessing a remote subversion repository via http or https with svn 1.7, 
 it will currently time out:
 {noformat}
 igalic@tynix ~/src/asf % svn co 
 http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http/
 svn: E020014: Unable to connect to a repository at URL 
 'http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http'
 svn: E020014: Unspecified error message: 504 Connection Timed Out
 1 igalic@tynix ~/src/asf %
 {noformat}
 I have started traffic_server -Thttp and captured the output, which I'm 
 attaching.
 There's also a capture from the network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-871) Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode

2014-11-04 Thread Bryan Call (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196463#comment-14196463
 ] 

Bryan Call edited comment on TS-871 at 11/4/14 5:59 PM:


[~zwoop] Can you test again and see if it works with subversion?


was (Author: bcall):
@Leif Can you test again and see if it works with subversion?

 Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode
 ---

 Key: TS-871
 URL: https://issues.apache.org/jira/browse/TS-871
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
Reporter: Igor Galić
Assignee: Leif Hedstrom
 Fix For: sometime

 Attachments: TS-871-20121107.diff, TS-871.diff, 
 ats_Thttp.debug.notime.txt, ats_Thttp.debug.txt, 
 revats_Thttp.debug.notime.txt, revats_Thttp.debug.txt, serf_proxy.cap, 
 serf_revproxy.cap, stats.diff


 When accessing a remote subversion repository via http or https with svn 1.7, 
 it will currently time out:
 {noformat}
 igalic@tynix ~/src/asf % svn co 
 http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http/
 svn: E020014: Unable to connect to a repository at URL 
 'http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http'
 svn: E020014: Unspecified error message: 504 Connection Timed Out
 1 igalic@tynix ~/src/asf %
 {noformat}
 I have started traffic_server -Thttp and captured the output, which I'm 
 attaching.
 There's also a capture from the network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-879) Seek on cached file

2014-11-04 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll closed TS-879.
--
Resolution: Duplicate

 Seek on cached file
 ---

 Key: TS-879
 URL: https://issues.apache.org/jira/browse/TS-879
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache, TS API
Affects Versions: 3.0.0
Reporter: Nelson Pérez
Assignee: Alan M. Carroll
  Labels: api-addition, cache, trafficserver
 Fix For: sometime


 I want a custom written plugin to be able to seek to any point in a cached 
 file. According to John Plevyak 
 (http://www.mail-archive.com/dev@trafficserver.apache.org/msg02785.html) this 
 feature is new in the cache, but not yet available to the API. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-892) request for bulk setting in remap.config for refer filter

2014-11-04 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-892.
--
Resolution: Won't Fix

Closing in lieu of future configuration refactoring.

 request for bulk setting in remap.config for refer filter
 -

 Key: TS-892
 URL: https://issues.apache.org/jira/browse/TS-892
 Project: Traffic Server
  Issue Type: Improvement
  Components: Configuration, HTTP
Affects Versions: 3.1.0
Reporter: Zhao Yongming
  Labels: remap

 when we put our TS online, we found that handling squid's 
 default filter rules is complicated, for example:
 {code}
 acl ctoc referer_regex -i XXXa\.com\/ XXXb\.com\/ XXXc\.com\/
 http_access deny ctoc
 {code}
 we have to convert every map rule to map_with_referer, which gets very ugly. If 
 we could specify the referer filter config in bulk, like the IP-based filters 
 in remap.config, it would make the config file clear and clean.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-892) request for bulk setting in remap.config for refer filter

2014-11-04 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-892:
-
Fix Version/s: (was: sometime)

 request for bulk setting in remap.config for refer filter
 -

 Key: TS-892
 URL: https://issues.apache.org/jira/browse/TS-892
 Project: Traffic Server
  Issue Type: Improvement
  Components: Configuration, HTTP
Affects Versions: 3.1.0
Reporter: Zhao Yongming
  Labels: remap

 when we put our TS online, we found that handling squid's 
 default filter rules is complicated, for example:
 {code}
 acl ctoc referer_regex -i XXXa\.com\/ XXXb\.com\/ XXXc\.com\/
 http_access deny ctoc
 {code}
 we have to convert every map rule to map_with_referer, which gets very ugly. If 
 we could specify the referer filter config in bulk, like the IP-based filters 
 in remap.config, it would make the config file clear and clean.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-893) the prefetch function in codes need more love to show up

2014-11-04 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-893:
-
Assignee: (was: Zhao Yongming)

 the prefetch function in codes need more love to show up
 

 Key: TS-893
 URL: https://issues.apache.org/jira/browse/TS-893
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache, HTTP
Affects Versions: 3.0.0
Reporter: Zhao Yongming
 Fix For: sometime


 the prefetch function in the proxy is a good solution when you really need to 
 speed up your users' download time; it can parse any allowed plain HTML file, 
 extract all the resource tags and do batch loading from the OS. I am going to 
 preload my site before we put it online, as it would otherwise take about 1 
 month for the disk to fill and the hit rate to stabilize. It is a cool feature 
 but it has the following issues:
 1, the prefetch config file is not managed well, i.e. it is not managed by 
 the cluster
 2, it does not have any documentation in the admin guide or the old PDF file.
 3, prefetching only handles plain HTML files, without compression; should we 
 do some decompressing? is that possible?
 Hopefully this is the start of making prefetch really useful for some cutting 
 edge situations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-852) hostdb and dns system hangup

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196472#comment-14196472
 ] 

Susan Hinrichs commented on TS-852:
---

[~psudaemon] Any additional thoughts on this?

 hostdb and dns system hangup
 

 Key: TS-852
 URL: https://issues.apache.org/jira/browse/TS-852
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, DNS
Affects Versions: 3.1.0
Reporter: Zhao Yongming
Assignee: Phil Sorber
Priority: Minor
  Labels: dns, hostdb
 Fix For: 3.1.0, sometime

 Attachments: TS-852.patch


 in my testing, all requests that need to go to the OS fail with a 30s timeout, 
 and the slow request log shows the DNS data is missing: 
 {code}
 [Jun 22 15:33:47.183] Server {1106880832} ERROR: [8122411] Slow Request: url: 
 http://download.im.alisoft.com/aliim/AliIM6/update/packages//2011-5-4-10-10-40/0/modulesproxy/8203.xml.wwzip
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.667 sm_finish: 30.667
 [Jun 22 15:33:47.220] Server {1099663680} ERROR: [8122422] Slow Request: url: 
 http://img.uu1001.cn/materials/original/2010-11-22/22-48/a302876929a9c40a8272ac439a16ad25e74edf71.png
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.318 sm_finish: 30.318
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122421] Slow Request: url: 
 http://img02.taobaocdn.com/bao/uploaded/i2/T1mp8QXopdXXblNIZ6_061203.jpg_b.jpg
  status: 0 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 
 ua_begin: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 
 cache_open_read_end: 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 
 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: 
 -1.000 server_close: -1.000 ua_close: 30.597 sm_finish: 30.597
 [Jun 22 15:33:47.441] Server {1098611008} ERROR: [8122409] Slow Request: url: 
 http://img04.taobaocdn.com/tps/i4/T1EM9gXltt-440-135.jpg status: 0 
 unique id:  bytes: 0 fd: 0 client state: 6 server state: 0 ua_begin: 0.000 
 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 cache_open_read_end: 
 0.000 dns_lookup_begin: 0.000 dns_lookup_end: -1.000 server_connect: -1.000 
 server_first_read: -1.000 server_read_header_done: -1.000 server_close: 
 -1.000 ua_close: 30.948 sm_finish: 30.948
 {code}
 all HTTP SMs show from {http} in DNS_LOOKUP,
 and traffic.out shows no errors when I enable debug on 
 hostdb.*|dns.*:
 {code}
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.391] Server {1108986176} DEBUG: (hostdb) delaying force 0 
 answer for img03.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) probe 
 img03.taobaocdn.com DB357D60B8247DC7 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) timeout 64204 
 1308663404 300
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) DNS 
 img03.taobaocdn.com
 [Jun 22 15:27:42.411] Server {1108986176} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.416] Server {47177168876656} DEBUG: (hostdb) immediate 
 answer for 127.0.0.1
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.422] Server {1098611008} DEBUG: (hostdb) delaying force 0 
 answer for img08.taobaocdn.com [timeout 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) probe 
 img08.taobaocdn.com 945198AE9AE37481 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) timeout 64281 
 1308663327 300
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) DNS 
 img08.taobaocdn.com
 [Jun 22 15:27:42.441] Server {1098611008} DEBUG: (hostdb) enqueuing 
 additional request
 [Jun 22 15:27:42.470] Server {1099663680} DEBUG: (hostdb) probe 127.0.0.1 
 E9B7563422C93608 1 [ignore_timeout = 0]
 [Jun 22 15:27:42.470] Server 

[jira] [Created] (TS-3167) Remove the prefetch feature

2014-11-04 Thread Susan Hinrichs (JIRA)
Susan Hinrichs created TS-3167:
--

 Summary: Remove the prefetch feature
 Key: TS-3167
 URL: https://issues.apache.org/jira/browse/TS-3167
 Project: Traffic Server
  Issue Type: Task
Reporter: Susan Hinrichs


In discussions with [~amc], [~zwoop], and [~bcall], it was decided to remove 
the prefetch feature.  It currently does not work, so it is best to remove the 
broken code.  If someone really needs this functionality, they can build a 
working implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3167) Remove the prefetch feature

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-3167:
---
Assignee: Leif Hedstrom

 Remove the prefetch feature
 ---

 Key: TS-3167
 URL: https://issues.apache.org/jira/browse/TS-3167
 Project: Traffic Server
  Issue Type: Task
Reporter: Susan Hinrichs
Assignee: Leif Hedstrom

 In discussions with [~amc], [~zwoop], and [~bcall], it was decided to remove 
 the prefetch feature.  It currently does not work, so it is best to remove 
 the broken code.  If someone really needs this functionality, they can build 
 a working implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-898) fix potential problems from coverity scan

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-898:
--
Assignee: Leif Hedstrom

 fix potential problems from coverity scan
 -

 Key: TS-898
 URL: https://issues.apache.org/jira/browse/TS-898
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.0.0
 Environment: RHEL5
Reporter: Bryan Call
Assignee: Leif Hedstrom
 Fix For: 5.2.0


 Ran Coverity over the code and it reported 856 potential problems with the 
 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-898) fix potential problems from coverity scan

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-898:
--
Fix Version/s: (was: sometime)
   5.2.0

 fix potential problems from coverity scan
 -

 Key: TS-898
 URL: https://issues.apache.org/jira/browse/TS-898
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.0.0
 Environment: RHEL5
Reporter: Bryan Call
Assignee: Leif Hedstrom
 Fix For: 5.2.0


 Ran Coverity over the code and it reported 856 potential problems with the 
 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-904) when doing custom logging, it would be nice if we can set dedicate sampling rate for each rule

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-904.
-
   Resolution: Invalid
Fix Version/s: (was: sometime)

 when doing custom logging, it would be nice if we can set dedicate sampling 
 rate for each rule
 --

 Key: TS-904
 URL: https://issues.apache.org/jira/browse/TS-904
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Affects Versions: 3.0.0
Reporter: Zhao Yongming
Priority: Minor

 Sampling is currently a global config in logging; we want to make it 
 customizable per rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-905) Improve state machine around DNS lookups and handling of URL_REMAP_FOR_OS

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-905.
-
   Resolution: Duplicate
Fix Version/s: (was: sometime)

 Improve state machine around DNS lookups and handling of URL_REMAP_FOR_OS
 -

 Key: TS-905
 URL: https://issues.apache.org/jira/browse/TS-905
 Project: Traffic Server
  Issue Type: Improvement
  Components: DNS, HTTP
Reporter: Leif Hedstrom

 The code around DNS lookups in the HTTP state machine is somewhat odd. The 
 same state is revisited at least twice in the case of a parent proxy lookup 
 for example, and the internal state is not reset in between such lookups. 
 This caused breakage of parent proxy when we added an API to bypass origin 
 DNS lookup, as a very obscure side effect.
 Also, there's code in there for a remapping mode URL_REMAP_FOR_OS (2), which 
 I believe we have crippled, and it's unclear exactly when it would be used. 
 We should examine this, and either fix this mode, or disable it completely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3168) Remove Log Collation

2014-11-04 Thread Susan Hinrichs (JIRA)
Susan Hinrichs created TS-3168:
--

 Summary: Remove Log Collation
 Key: TS-3168
 URL: https://issues.apache.org/jira/browse/TS-3168
 Project: Traffic Server
  Issue Type: Task
  Components: Logging
Reporter: Susan Hinrichs


In discussion with [~amc], [~zwoop], and [~bcall], it was decided to remove 
this feature.  It does not work, and it is better to use newer dedicated log 
subsystems such as syslog-ng.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-910) log collation in custom log will make dedicate connection to the same collation server

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-910.
-
   Resolution: Invalid
Fix Version/s: (was: sometime)

Removing feature.

 log collation in custom log will make dedicate connection to the same 
 collation server
 --

 Key: TS-910
 URL: https://issues.apache.org/jira/browse/TS-910
 Project: Traffic Server
  Issue Type: Improvement
  Components: Logging
Affects Versions: 3.1.0
Reporter: Zhao Yongming
Assignee: Zhao Yongming
Priority: Trivial

 When you define a LogObject in logs_xml.config and set CollationHosts, it 
 will open connections for each LogObject, even if you put the same host in 
 CollationHosts.
 It affects the default squid logging too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-935) Should EVENT_INTERNAL really be the same as TS_EVENT_TIMEOUT

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196504#comment-14196504
 ] 

Susan Hinrichs commented on TS-935:
---

[~briang] unassign yourself unless you are still working on it.

 Should EVENT_INTERNAL really be the same as TS_EVENT_TIMEOUT
 

 Key: TS-935
 URL: https://issues.apache.org/jira/browse/TS-935
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.1
Reporter: Brian Geffon
Assignee: Brian Geffon
  Labels: api-change
 Fix For: sometime


 When trying to use TSContCall with event = TS_EVENT_TIMEOUT, I stumbled upon 
 the fact that the API will decrement the event count for EVENT_INTERNAL or 
 EVENT_IMMEDIATE (see INKContInternal::handle_event_count).  Shouldn't we be 
 able to do a TSContCall with TS_EVENT_IMMEDIATE or TS_EVENT_TIMEOUT?  As of 
 now, doing so causes m_event_count to become -1.  Or should these defined 
 values be something different? 
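 For concreteness, the call being discussed looks like this (a sketch; contp 
 stands for any continuation the plugin created earlier with TSContCreate):
{code}
/* Per this report, calling with TS_EVENT_TIMEOUT (or TS_EVENT_IMMEDIATE)
 * drives m_event_count negative. */
TSContCall(contp, TS_EVENT_TIMEOUT, NULL);
{code}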



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-946) Scheduling a continuation on all threads of a specific type

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-946:
--
Fix Version/s: (was: 5.2.0)
   5.3.0

 Scheduling a continuation on all threads of a specific type
 ---

 Key: TS-946
 URL: https://issues.apache.org/jira/browse/TS-946
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core
Affects Versions: 3.0.0
Reporter: Leif Hedstrom
 Fix For: 5.3.0


 It would be incredibly useful, both in the core and in plugin APIs, to be 
 able to schedule a continuation to run on all threads of a specific type. 
 E.g. in a plugin, something like
 TSAction TSContScheduleOnThreads(TSCont cont, TSThreadPool tp);
 This would be useful for e.g. setting up per-thread specifics from a plugin, 
 but quite possibly also from the core.
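 A minimal sketch of how a plugin might use the proposed call.  Note that 
 TSContScheduleOnThreads() is only the API proposed in this ticket, so this 
 would not compile against current headers; per_thread_init() is an 
 illustrative handler name.
{code}
#include <ts/ts.h>

/* Illustrative handler: would run once on every thread of the chosen pool. */
static int
per_thread_init(TSCont contp, TSEvent event, void *edata)
{
  /* e.g. set up per-thread state here */
  return 0;
}

void
TSPluginInit(int argc, const char *argv[])
{
  TSCont contp = TSContCreate(per_thread_init, NULL);

  /* Proposed API from this ticket -- not an existing function. */
  TSContScheduleOnThreads(contp, TS_THREAD_POOL_NET);
}
{code}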



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-966) cache.config dest_domain= dest_hostname= dest_ip= do not match anything

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-966:
--
Fix Version/s: (was: 5.2.0)
   5.3.0

 cache.config dest_domain= dest_hostname= dest_ip= do not match anything
 ---

 Key: TS-966
 URL: https://issues.apache.org/jira/browse/TS-966
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Affects Versions: 3.1.0, 3.0.1
Reporter: Igor Galić
  Labels: A
 Fix For: 5.3.0


 Caching policies are not applied when using these options to match targets.
 It is also not very clear *what* dest_domain= vs dest_hostname= can match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-946) Scheduling a continuation on all threads of a specific type

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-946:
--
Assignee: Leif Hedstrom

 Scheduling a continuation on all threads of a specific type
 ---

 Key: TS-946
 URL: https://issues.apache.org/jira/browse/TS-946
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core
Affects Versions: 3.0.0
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 It would be incredibly useful, both in the core and in plugin APIs, to be 
 able to schedule a continuation to run on all threads of a specific type. 
 E.g. in a plugin, something like
 TSAction TSContScheduleOnThreads(TSCont cont, TSThreadPool tp);
 This would be useful for e.g. setting up per-thread specifics from a plugin, 
 but quite possibly also from the core.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-965) cache.config can't deal with both revalidate= and ttl-in-cache= specified

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-965:
--
Fix Version/s: (was: 5.2.0)
   5.3.0

 cache.config can't deal with both revalidate= and ttl-in-cache= specified
 -

 Key: TS-965
 URL: https://issues.apache.org/jira/browse/TS-965
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Affects Versions: 3.1.0, 3.0.1
Reporter: Igor Galić
  Labels: A
 Fix For: 5.3.0


 If both of these options are specified (with the same time?), nothing is 
 cached at all.
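 For illustration, a rule of the kind being described might look like this 
 (hypothetical domain and times; per this report, such a rule results in 
 nothing being cached):
{code}
# hypothetical cache.config rule combining both options
dest_domain=example.com revalidate=2h ttl-in-cache=2h
{code}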



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-965) cache.config can't deal with both revalidate= and ttl-in-cache= specified

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-965:
--
Assignee: Alan M. Carroll

 cache.config can't deal with both revalidate= and ttl-in-cache= specified
 -

 Key: TS-965
 URL: https://issues.apache.org/jira/browse/TS-965
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Affects Versions: 3.1.0, 3.0.1
Reporter: Igor Galić
Assignee: Alan M. Carroll
  Labels: A
 Fix For: 5.3.0


 If both of these options are specified (with the same time?), nothing is 
 cached at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-974) TS should have a mode to hold partial objects in cache

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-974:
--
Assignee: Alan M. Carroll

 TS should have a mode to hold partial objects in cache
 --

 Key: TS-974
 URL: https://issues.apache.org/jira/browse/TS-974
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Affects Versions: 3.0.1
Reporter: William Bardwell
Assignee: Alan M. Carroll
  Labels: A
 Fix For: sometime


 For ATS to do an excellent job caching large files like video, it would need 
 to be able to hold partial objects for a large file.  This could be done in 
 a plugin or in the core.  It would need to be integrated with the Range 
 handling code to serve requests out of the partial objects and to fetch more 
 parts of a file to satisfy a Range request.
 An intermediate step (also doable in the core or in a plugin) would be to 
 have settings that let the Range handling code trigger a full file download, 
 either asynchronously when a Range response indicates that the file isn't 
 larger than some threshold, or synchronously when a Range request could 
 reasonably be answered quickly from a full request.  (Right now Range 
 requests are tunneled if there is no fully cached content, as far as I can 
 tell.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-974) TS should have a mode to hold partial objects in cache

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-974:
--
Fix Version/s: (was: sometime)
   5.3.0

 TS should have a mode to hold partial objects in cache
 --

 Key: TS-974
 URL: https://issues.apache.org/jira/browse/TS-974
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Affects Versions: 3.0.1
Reporter: William Bardwell
Assignee: Alan M. Carroll
  Labels: A
 Fix For: 5.3.0


 For ATS to do an excellent job caching large files like video, it would need 
 to be able to hold partial objects for a large file.  This could be done in 
 a plugin or in the core.  It would need to be integrated with the Range 
 handling code to serve requests out of the partial objects and to fetch more 
 parts of a file to satisfy a Range request.
 An intermediate step (also doable in the core or in a plugin) would be to 
 have settings that let the Range handling code trigger a full file download, 
 either asynchronously when a Range response indicates that the file isn't 
 larger than some threshold, or synchronously when a Range request could 
 reasonably be answered quickly from a full request.  (Right now Range 
 requests are tunneled if there is no fully cached content, as far as I can 
 tell.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-966) cache.config dest_domain= dest_hostname= dest_ip= do not match anything

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-966:
--
Assignee: Alan M. Carroll

 cache.config dest_domain= dest_hostname= dest_ip= do not match anything
 ---

 Key: TS-966
 URL: https://issues.apache.org/jira/browse/TS-966
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Affects Versions: 3.1.0, 3.0.1
Reporter: Igor Galić
Assignee: Alan M. Carroll
  Labels: A
 Fix For: 5.3.0


 Caching policies are not applied when using these options to match targets.
 It is also not very clear *what* dest_domain= vs dest_hostname= can match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-974) TS should have a mode to hold partial objects in cache

2014-11-04 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196518#comment-14196518
 ] 

Alan M. Carroll commented on TS-974:


This is now under active development.

 TS should have a mode to hold partial objects in cache
 --

 Key: TS-974
 URL: https://issues.apache.org/jira/browse/TS-974
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Affects Versions: 3.0.1
Reporter: William Bardwell
Assignee: Alan M. Carroll
  Labels: A
 Fix For: 5.3.0


 For ATS to do an excellent job caching large files like video, it would need 
 to be able to hold partial objects for a large file.  This could be done in 
 a plugin or in the core.  It would need to be integrated with the Range 
 handling code to serve requests out of the partial objects and to fetch more 
 parts of a file to satisfy a Range request.
 An intermediate step (also doable in the core or in a plugin) would be to 
 have settings that let the Range handling code trigger a full file download, 
 either asynchronously when a Range response indicates that the file isn't 
 larger than some threshold, or synchronously when a Range request could 
 reasonably be answered quickly from a full request.  (Right now Range 
 requests are tunneled if there is no fully cached content, as far as I can 
 tell.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-980) change client_session schedule from global to thread local, and reduce the try_locks in UnixNetVConnection::reenable

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-980.
-
   Resolution: Invalid
Fix Version/s: (was: 5.2.0)

Consensus of [~zwoop] and [~amc] is that this optimization will not work.

 change client_session schedule from global  to thread local, and reduce the 
 try_locks in UnixNetVConnection::reenable
 -

 Key: TS-980
 URL: https://issues.apache.org/jira/browse/TS-980
 Project: Traffic Server
  Issue Type: Improvement
  Components: Network, Performance
Affects Versions: 3.1.0, 3.0.0
 Environment: all
Reporter: weijin
Assignee: weijin
 Attachments: ts-980.diff


 I did some performance testing on ATS recently (cache disabled, 
 share_server_sessions set to 2, pure proxy mode).  I saw a significant 
 improvement under low load, but throughput dropped rapidly under high load, 
 and some stability problems appeared.  Through gdb I found that the 
 client_session's mutex can be acquired by two or more threads; I believe 
 some schedules happen during the SM lifetime.  We may need to find those 
 eventProcessor.schedule calls and change them to thread schedules.
 UnixNetVConnection::reenable {
   if (nh->mutex->thread_holding == t) {
     // put into ready_list
   } else {
     MUTEX_TRY_LOCK(lock, nh->mutex, t);
     if (!lock) {
       // put into enable_list
     } else {
       // put into ready_list
     }
   }
 }
 Remove the try_lock operations from UnixNetVConnection::reenable, for three 
 reasons:
 1. Each try_lock means an object allocation and deallocation, and it happens 
 frequently.
 2. The try_lock can rarely acquire the net-handler's mutex (the net-handler 
 is scheduled as soon as possible).
 3. The try_lock should not acquire the net-handler's mutex anyway.  That can 
 add network I/O latency if an epoll event needs to be processed on another 
 thread, and if it is not an epoll event (i.e. a timed event), putting the vc 
 in ready_list has no advantage over enable_list.
 Maybe we can change the reenable function like this:
 UnixNetVConnection::reenable {
   if (nh->mutex->thread_holding == t) {
     // put into ready_list
   } else {
     // put into enable_list
   }
 }
 My buddies, any advice?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-990) IPv6 support for clustering

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-990:
--
Assignee: (was: Alan M. Carroll)

 IPv6 support for clustering
 ---

 Key: TS-990
 URL: https://issues.apache.org/jira/browse/TS-990
 Project: Traffic Server
  Issue Type: Improvement
  Components: Clustering
Affects Versions: 3.1.0
Reporter: Alan M. Carroll
Priority: Minor
  Labels: clustering, ipv6
 Fix For: sometime


 Update clustering to be IPv6 compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-983) prefetch: the documents

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-983.
-
   Resolution: Invalid
Fix Version/s: (was: sometime)

 prefetch: the documents
 ---

 Key: TS-983
 URL: https://issues.apache.org/jira/browse/TS-983
 Project: Traffic Server
  Issue Type: Sub-task
  Components: HTTP
Reporter: Zhao Yongming
Assignee: Zhao Yongming





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-993) Add OpenBSD support.

2014-11-04 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196525#comment-14196525
 ] 

Susan Hinrichs commented on TS-993:
---

[~piotrsikora] Are you still interested?  If not, we will close as 'Won't fix'.

 Add OpenBSD support.
 

 Key: TS-993
 URL: https://issues.apache.org/jira/browse/TS-993
 Project: Traffic Server
  Issue Type: Improvement
  Components: Build
 Environment: OpenBSD
Reporter: Piotr Sikora
 Fix For: sometime

 Attachments: 0001-Add-OpenBSD-support.patch, freebsd.patch


 Add OpenBSD support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-935) Should EVENT_INTERNAL really be the same as TS_EVENT_TIMEOUT

2014-11-04 Thread Brian Geffon (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196531#comment-14196531
 ] 

Brian Geffon commented on TS-935:
-

[~shinrich], I think we should just close this as WONT FIX.

 Should EVENT_INTERNAL really be the same as TS_EVENT_TIMEOUT
 

 Key: TS-935
 URL: https://issues.apache.org/jira/browse/TS-935
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.1
Reporter: Brian Geffon
Assignee: Brian Geffon
  Labels: api-change
 Fix For: sometime


 When trying to use TSContCall with event = TS_EVENT_TIMEOUT, I stumbled upon 
 the fact that the API will decrement the event count for EVENT_INTERNAL or 
 EVENT_IMMEDIATE (see INKContInternal::handle_event_count).  Shouldn't we be 
 able to do a TSContCall with TS_EVENT_IMMEDIATE or TS_EVENT_TIMEOUT?  As of 
 now, doing so causes m_event_count to become -1.  Or should these defined 
 values be something different? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-1001) reload the changes in dns.resolv_conf

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-1001.
--
Resolution: Duplicate

 reload the changes in dns.resolv_conf
 -

 Key: TS-1001
 URL: https://issues.apache.org/jira/browse/TS-1001
 Project: Traffic Server
  Issue Type: Wish
  Components: DNS
Reporter: Conan Wang
Assignee: Leif Hedstrom
Priority: Trivial
  Labels: A
 Fix For: 5.2.0


 A trivial wish: ATS should reload resolv.conf (via traffic_line -x) when the 
 nameservers change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-1007) SSN Close called before TXN Close

2014-11-04 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs reassigned TS-1007:
--

Assignee: Susan Hinrichs

 SSN Close called before TXN Close
 -

 Key: TS-1007
 URL: https://issues.apache.org/jira/browse/TS-1007
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.1
Reporter: Nick Kew
Assignee: Susan Hinrichs
Priority: Critical
 Fix For: 5.3.0


 Where a plugin implements both SSN_CLOSE_HOOK and TXN_CLOSE_HOOK, the 
 SSN_CLOSE_HOOK is called first of the two.  This messes up normal cleanups!
 Details:
   Register a SSN_START event globally
   In the SSN START, add a TXN_START and a SSN_CLOSE
   In the TXN START, add a TXN_CLOSE
 Stepping through, I see the order of events actually called, for the simple 
 case of a one-off HTTP request with no keepalive:
 SSN_START
 TXN_START
 SSN_END
 TXN_END
 Whoops, SSN_END cleaned up the SSN context, leaving dangling pointers in the 
 TXN!
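 A minimal plugin sketch of that registration sequence (handler and variable 
 names are illustrative; the hook constants and reenable calls are the 
 standard TS plugin API):
{code}
#include <ts/ts.h>

static int
handle_event(TSCont contp, TSEvent event, void *edata)
{
  switch (event) {
  case TS_EVENT_HTTP_SSN_START: {
    TSHttpSsn ssnp = (TSHttpSsn)edata;
    /* In SSN_START, add TXN_START and SSN_CLOSE, as described above. */
    TSHttpSsnHookAdd(ssnp, TS_HTTP_TXN_START_HOOK, contp);
    TSHttpSsnHookAdd(ssnp, TS_HTTP_SSN_CLOSE_HOOK, contp);
    TSHttpSsnReenable(ssnp, TS_EVENT_HTTP_CONTINUE);
    break;
  }
  case TS_EVENT_HTTP_TXN_START: {
    TSHttpTxn txnp = (TSHttpTxn)edata;
    /* In TXN_START, add TXN_CLOSE. */
    TSHttpTxnHookAdd(txnp, TS_HTTP_TXN_CLOSE_HOOK, contp);
    TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
    break;
  }
  case TS_EVENT_HTTP_SSN_CLOSE:
    /* Per this report, this fires before TXN_CLOSE. */
    TSHttpSsnReenable((TSHttpSsn)edata, TS_EVENT_HTTP_CONTINUE);
    break;
  case TS_EVENT_HTTP_TXN_CLOSE:
    TSHttpTxnReenable((TSHttpTxn)edata, TS_EVENT_HTTP_CONTINUE);
    break;
  default:
    break;
  }
  return 0;
}

void
TSPluginInit(int argc, const char *argv[])
{
  /* Register SSN_START globally. */
  TSHttpHookAdd(TS_HTTP_SSN_START_HOOK, TSContCreate(handle_event, NULL));
}
{code}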



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

