[squid-dev] [PATCH] pconn_lifetime for squid-3.5.x

2015-05-25 Thread Tsantilas Christos
I am attaching a patch which implements the pconn_lifetime feature for 
squid-3.5.x, in case someone wants to use it or we decide that it can 
be included in squid-3.5.x.


This patch includes the recent fixes for the pconn_lifetime feature.

This is a Measurement Factory project

Regards,
   Christos
Add pconn_lifetime to limit the maximum lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection that
has exceeded the configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). No effect on ongoing/active
transactions. Connection lifetime is the time period from the
connection acceptance or opening time until "now".

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.
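The close-instead-of-pool decision described above can be sketched in C++; this is an illustrative model only (PconnInfo, lifetimeOf, and shouldCloseInsteadOfPool are hypothetical names, not Squid internals):

```cpp
#include <ctime>

// Hypothetical model of the decision described above; these names are
// illustrative and do not match Squid's internals.
struct PconnInfo {
    std::time_t startTime; // connection acceptance or opening time
    bool idle;             // no ongoing/active transaction right now
};

// Connection lifetime is the period from acceptance/opening until "now".
inline std::time_t lifetimeOf(const PconnInfo &c, std::time_t now) {
    return now - c.startTime;
}

// True when the connection should be closed instead of being moved into
// the idle pool. A limit of 0 disables the check (the documented default).
inline bool shouldCloseInsteadOfPool(const PconnInfo &c, std::time_t now,
                                     std::time_t pconnLifetime) {
    return pconnLifetime > 0 && c.idle && lifetimeOf(c, now) > pconnLifetime;
}
```

Note that a busy connection is never closed by this check, matching the "No effect on ongoing/active transactions" wording.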

This is a Measurement Factory project
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2015-02-18 08:50:00 +
+++ src/SquidConfig.h	2015-03-22 10:52:21 +
@@ -72,40 +72,41 @@
 #if USE_HTTP_VIOLATIONS
 time_t negativeTtl;
 #endif
 time_t maxStale;
 time_t negativeDnsTtl;
 time_t positiveDnsTtl;
 time_t shutdownLifetime;
 time_t backgroundPingRate;
 
 struct {
 time_t read;
 time_t write;
 time_t lifetime;
 time_t connect;
 time_t forward;
 time_t peer_connect;
 time_t request;
 time_t clientIdlePconn;
 time_t serverIdlePconn;
 time_t ftpClientIdle;
+time_t pconnLifetime; ///< pconn_lifetime in squid.conf
 time_t siteSelect;
 time_t deadPeer;
 int icp_query;  /* msec */
 int icp_query_max;  /* msec */
 int icp_query_min;  /* msec */
 int mcast_icp_query;/* msec */
 time_msec_t idns_retransmit;
 time_msec_t idns_query;
 } Timeout;
 size_t maxRequestHeaderSize;
 int64_t maxRequestBodySize;
 size_t maxRequestBufferSize;
 size_t maxReplyHeaderSize;
 AclSizeLimit *ReplyBodySize;
 
 struct {
 unsigned short icp;
 #if USE_HTCP
 
 unsigned short htcp;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2015-02-18 10:30:07 +
+++ src/cf.data.pre	2015-03-22 10:54:02 +
@@ -6027,40 +6027,65 @@
 TYPE: time_t
 LOC: Config.Timeout.lifetime
 DEFAULT: 1 day
 DOC_START
 	The maximum amount of time a client (browser) is allowed to
 	remain connected to the cache process.  This protects the Cache
 	from having a lot of sockets (and hence file descriptors) tied up
 	in a CLOSE_WAIT state from remote clients that go away without
 	properly shutting down (either because of a network failure or
 	because of a poor client implementation).  The default is one
 	day, 1440 minutes.
 
 	NOTE:  The default value is intended to be much larger than any
 	client would ever need to be connected to your cache.  You
 	should probably change client_lifetime only as a last resort.
 	If you seem to have many client connections tying up
 	filedescriptors, we recommend first tuning the read_timeout,
 	request_timeout, persistent_request_timeout and quick_abort values.
 DOC_END
 
+NAME: pconn_lifetime
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.Timeout.pconnLifetime
+DEFAULT: 0 seconds
+DOC_START
+	Desired maximum lifetime of a persistent connection.
+	When set, Squid will close a now-idle persistent connection that
+	exceeded configured lifetime instead of moving the connection into
+	the idle connection pool (or equivalent). No effect on ongoing/active
+	transactions. Connection lifetime is the time period from the
+	connection acceptance or opening time until "now".
+	 
+	This limit is useful in environments with long-lived connections
+	where Squid configuration or environmental factors change during a
+	single connection lifetime. If unrestricted, some connections may
+	last for hours and even days, ignoring those changes that should
+	have affected their behavior or their existence.
+	 
+	Currently, a new lifetime value supplied via Squid reconfiguration
+	has no effect on already idle connections unless they become busy.
+	 
+	When set to '0' this limit is not used.
+DOC_END
+
 NAME: half_closed_clients
 TYPE: onoff
 LOC: Config.onoff.half_closed_clients
 DEFAULT: off
 DOC_START
 	Some clients may shutdown the sending side of their TCP
 	connections, while leaving their receiving sides open.	Sometimes,
 	Squid can not tell the difference between a half-closed and a
 	fully-closed TCP connection.
 
 	By default, Squid will immediately close client connections when
 	read(2) returns "no more data to read."
 
 	Change this option to 'on' and Squid will keep open connections
 	until a read(2) or write(2) on the socket returns an error.
 	This may show some benefits for reverse proxies. But if not
 	it 

Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-05-01 Thread Alex Rousskov
On 04/28/2015 08:18 AM, Amos Jeffries wrote:
> It is worth noting that this patch means the directive is now no longer
> guaranteed to meet its primary use case:

Yes, but only if one misinterprets the primary use case. AFAICT after
rereading the original pconn_lifetime thread, this misinterpretation has
started from the very beginning and, apparently, has not quite ended
yet. The feature limits the total lifetime of a connection, but it is
applied to idle persistent connections only.


> With the reversion to the keep-alive -> close action any existing and
> ongoing pipeline of requests will cause the connection to persist past
> its lifetime indefinitely. 

Yes, the limit should affect idle persistent connections only. We tried
to expand the definition of "idle" and "persistent" to cover more use
cases (in part, to accommodate squid-dev review comments IIRC), but the
real world was against us. For example, terminating connections between
pipelined requests caused interoperability problems even though they are
not supposed to happen with compliant HTTP clients. I suspect HTTP/2
solves this problem, but we are not there yet.

I would not be surprised to see more fixes in this direction. For
example, we have an untriaged bug report about FTP control connections
being negatively affected, with an argument that FTP does not have a
concept of "idle" and so should not be subject to pconn_lifetime limits.


Cheers,

Alex.



>> On 04/27/2015 06:40 PM, Tsantilas Christos wrote:
>>> If there is not any objection I will apply this patch to trunk.
>>>
>>>
>>> On 04/15/2015 07:11 PM, Tsantilas Christos wrote:
 Hi all,
   I am attaching a patch which fixes the pconn_lifetime feature.
 We had a long discussion about this feature, which resulted in
 patch r13780, but unfortunately, Measurement Factory customers reported
 problems:

 1. Squid closed connections with partially received requests when they
 reached pconn_lifetime limit. We should only close _idle_ connections.

 2. When connecting to Squid without sending anything for longer than
 pconn_lifetime, the connection hangs if the request is sent after the
 waiting period.

 3. The connection also hangs if the initial request is starting to be
 transmitted but then there is a longer pause before the request is
 completed.

 Please read the patch preamble for more information.

 This is a Measurement Factory project.

> 
> 
> Amos
> 
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
> 



Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-28 Thread Amos Jeffries
On 28/04/2015 9:56 p.m., Tsantilas Christos wrote:
> Patch applied to trunk as r14046.
> 
> 

Sorry, I got a bit delayed today digging up the reference.

I don't have any objections, since it means reverting back towards Squid
behaviour before this directive ever existed.



It is worth noting that this patch means the directive is now no longer
guaranteed to meet its primary use case:

"
From: Tsantilas Christos
Date: Mon, 01 Sep 2014 19:49:59 +0300

This limit is useful in environments with long-lived connections where
Squid configuration or environmental factors change during a single
connection lifetime. If unrestricted, some connections may last for
hours and even days, ignoring those changes that should have affected
their behavior or their existence.
"

With the reversion to the keep-alive -> close action any existing and
ongoing pipeline of requests will cause the connection to persist past
its lifetime indefinitely. For example, a pipeline of 2+ Long-Polling
requests with Connection:keep-alive will remain alive for as long as the
polling continues to have a second request pipelined before the first ends.


> On 04/27/2015 06:40 PM, Tsantilas Christos wrote:
>> If there is not any objection I will apply this patch to trunk.
>>
>>
>> On 04/15/2015 07:11 PM, Tsantilas Christos wrote:
>>> Hi all,
>>>   I am attaching a patch which fixes the pconn_lifetime feature.
>>> We had a long discussion about this feature, which resulted in
>>> patch r13780, but unfortunately, Measurement Factory customers reported
>>> problems:
>>>
>>> 1. Squid closed connections with partially received requests when they
>>> reached pconn_lifetime limit. We should only close _idle_ connections.
>>>
>>> 2. When connecting to Squid without sending anything for longer than
>>> pconn_lifetime, the connection hangs if the request is sent after the
>>> waiting period.
>>>
>>> 3. The connection also hangs if the initial request is starting to be
>>> transmitted but then there is a longer pause before the request is
>>> completed.
>>>
>>> Please read the patch preamble for more information.
>>>
>>> This is a Measurement Factory project.
>>>


Amos



Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-28 Thread Tsantilas Christos

Patch applied to trunk as r14046.


On 04/27/2015 06:40 PM, Tsantilas Christos wrote:

If there is not any objection I will apply this patch to trunk.


On 04/15/2015 07:11 PM, Tsantilas Christos wrote:

Hi all,
  I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in
patch r13780, but unfortunately, Measurement Factory customers reported
problems:

1. Squid closed connections with partially received requests when they
reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
pconn_lifetime, the connection hangs if the request is sent after the
waiting period.

3. The connection also hangs if the initial request is starting to be
transmitted but then there is a longer pause before the request is
completed.

Please read the patch preamble for more information.

This is a Measurement Factory project.









Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-27 Thread Tsantilas Christos

If there are no objections I will apply this patch to trunk.


On 04/15/2015 07:11 PM, Tsantilas Christos wrote:

Hi all,
  I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in
patch r13780, but unfortunately, Measurement Factory customers reported
problems:

1. Squid closed connections with partially received requests when they
reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
pconn_lifetime, the connection hangs if the request is sent after the
waiting period.

3. The connection also hangs if the initial request is starting to be
transmitted but then there is a longer pause before the request is
completed.

Please read the patch preamble for more information.

This is a Measurement Factory project.







[squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-15 Thread Tsantilas Christos

Hi all,
 I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in
patch r13780, but unfortunately, Measurement Factory customers reported
problems:


1. Squid closed connections with partially received requests when they 
reached pconn_lifetime limit. We should only close _idle_ connections.


2. When connecting to Squid without sending anything for longer than 
pconn_lifetime, the connection hangs if the request is sent after the 
waiting period.


3. The connection also hangs if the initial request is starting to be 
transmitted but then there is a longer pause before the request is 
completed.


Please read the patch preamble for more information.

This is a Measurement Factory project.
pconn_lifetime robustness fixes

This patch changes pconn_lifetime (r13780) to abort only really idle
persistent connections (when they time out). It removes some "extra" features
(added to pconn_lifetime during the feature review) because they break things
when aggressive timeouts are combined with picky clients. Specifically,

1. Squid closed connections with partially received requests when they
   reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
   pconn_lifetime, the connection hangs if the request is sent after the
   waiting period.

3. The connection also hangs if the initial request is starting to be
   transmitted but then there is a longer pause before the request is
   completed.

Most of the above problems are easy to trigger only when using very aggressive
pconn_lifetime settings that the feature was not designed for, but they still
can be considered bugs from the admin's point of view. Fixes:

* Do not stop reading a partially received request when we are timing out,
  to avoid aborting that request.

* Do not set keepalive flag based on the pconn_lifetime timeout. We cannot
  predict whether some new request data is going to be read (and reset the
  idle timeout clock) before our Connection:close response is sent back.

HTTP clients are supposed to recover from such races, but some apparently
do not, especially if it is their first request on the connection.
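The fixed behaviour can be modelled roughly as follows; this is a sketch under assumed names (ConnSnapshot and reallyIdle are not Squid's actual types), showing that the lifetime timeout now closes a connection only from a really-idle state:

```cpp
#include <cstddef>

// Hypothetical model of the fixed behaviour; names are illustrative,
// not Squid's actual types.
struct ConnSnapshot {
    std::size_t unparsedBytes;  // bytes of a partially received request
    std::size_t activeRequests; // requests still being serviced
};

// A connection is "really idle" only when no request is partially
// received and nothing in the pipeline is still being serviced.
inline bool reallyIdle(const ConnSnapshot &c) {
    return c.unparsedBytes == 0 && c.activeRequests == 0;
}

// After the fixes, the lifetime timeout closes the connection only from
// this really-idle state; a partial request keeps being read, and the
// keep-alive flag is never flipped pre-emptively.
inline bool closeOnLifetimeTimeout(const ConnSnapshot &c) {
    return reallyIdle(c);
}
```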

This is a Measurement Factory project.

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-03-16 09:52:13 +
+++ src/client_side.cc	2015-03-23 09:53:55 +
@@ -3045,46 +3045,40 @@
  */
 bool
 ConnStateData::clientParseRequests()
 {
 bool parsed_req = false;
 
 debugs(33, 5, HERE << clientConnection << ": attempting to parse");
 
 // Loop while we have read bytes that are not needed for producing the body
 // On errors, bodyPipe may become nil, but readMore will be cleared
 while (!in.buf.isEmpty() && !bodyPipe && flags.readMore) {
 
 /* Don't try to parse if the buffer is empty */
 if (in.buf.isEmpty())
 break;
 
 /* Limit the number of concurrent requests */
 if (concurrentRequestQueueFilled())
 break;
 
-/*Do not read more requests if persistent connection lifetime exceeded*/
-if (Config.Timeout.pconnLifetime && clientConnection->lifeTime() > Config.Timeout.pconnLifetime) {
-flags.readMore = false;
-break;
-}
-
 // try to parse the PROXY protocol header magic bytes
 if (needProxyProtocolHeader_ && !parseProxyProtocolHeader())
 break;
 
 if (ClientSocketContext *context = parseOneRequest()) {
 debugs(33, 5, clientConnection << ": done parsing a request");
 
 AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "clientLifetimeTimeout",
  CommTimeoutCbPtrFun(clientLifetimeTimeout, context->http));
 commSetConnTimeout(clientConnection, Config.Timeout.lifetime, timeoutCall);
 
 context->registerWithConn();
 
 processParsedRequest(context);
 
 parsed_req = true; // XXX: do we really need to parse everything right NOW ?
 
 if (context->mayUseConnection()) {
 debugs(33, 3, HERE << "Not parsing new requests, as this request may need the connection");
 break;

=== modified file 'src/client_side_reply.cc'
--- src/client_side_reply.cc	2015-01-13 07:25:36 +
+++ src/client_side_reply.cc	2015-03-23 09:59:02 +
@@ -1487,44 +1487,40 @@
 } else if (reply->bodySize(request->method) < 0 && !maySendChunkedReply) {
 debugs(88, 3, "clientBuildReplyHeader: can't keep-alive, unknown body size" );
 request->flags.proxyKeepalive = false;
 } else if (fdUsageHigh()&& !request->flags.mustKeepalive) {
 debugs(88, 3, "clientBuildReplyHeader: Not many unused FDs, can't keep-alive");
 request->flags.proxyKeepalive = false;
 } else if (request->flags.sslBumped && !reply->persistent()) {
 // We do not really have to close, but we pretend we are 

Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-24 Thread Tsantilas Christos

Patch applied to trunk (revno: 13780).

On 12/23/2014 08:52 PM, Tsantilas Christos wrote:

If there is not any objection I will apply this patch to trunk.

On 12/15/2014 12:39 PM, Tsantilas Christos wrote:

Hi all,

  I am attaching a new patch for the "pconn_lifetime" feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.

This patch is similar to the old one posted, with a small fix to better
handle pipelined connections:
   1. finish interpreting the Nth request
  check whether pconn_lifetime has expired
   2. if pconn_lifetime has expired, then stop further reading and
  do not interpret any already read raw bytes of the N+1st request
   3. otherwise, read and interpret read raw bytes of the N+1st request
  and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request fails, it should be
retried on a new connection.

Let me recall the following about this patch:
- pconn_lifetime applies to any persistent connection:
server, client, or ICAP.
- This patch does not fix other problems that may exist in current Squid.
- pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.

This is a Measurement Factory project





Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-23 Thread Tsantilas Christos

If there are no objections I will apply this patch to trunk.

On 12/15/2014 12:39 PM, Tsantilas Christos wrote:

Hi all,

  I am attaching a new patch for the "pconn_lifetime" feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.

This patch is similar to the old one posted, with a small fix to better
handle pipelined connections:
   1. finish interpreting the Nth request
  check whether pconn_lifetime has expired
   2. if pconn_lifetime has expired, then stop further reading and
  do not interpret any already read raw bytes of the N+1st request
   3. otherwise, read and interpret read raw bytes of the N+1st request
  and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request fails, it should be
retried on a new connection.

Let me recall the following about this patch:
- pconn_lifetime applies to any persistent connection:
server, client, or ICAP.
- This patch does not fix other problems that may exist in current Squid.
- pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.

This is a Measurement Factory project








Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-15 Thread Tsantilas Christos

Hi all,

 I am attaching a new patch for the "pconn_lifetime" feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.


This patch is similar to the old one posted, with a small fix to better 
handle pipelined connections:

  1. finish interpreting the Nth request
 check whether pconn_lifetime has expired
  2. if pconn_lifetime has expired, then stop further reading and
 do not interpret any already read raw bytes of the N+1st request
  3. otherwise, read and interpret read raw bytes of the N+1st request
 and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request fails, it should be
retried on a new connection.


Let me recall the following about this patch:
   - pconn_lifetime applies to any persistent connection:
server, client, or ICAP.

   - This patch does not fix other problems that may exist in current Squid.
   - pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.


This is a Measurement Factory project

pconn_lifetime

This patch adds a new configuration option, 'pconn_lifetime', to allow users to
set the desired maximum lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection that
has exceeded the configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). No effect on ongoing/active
transactions. Connection lifetime is the time period from the
connection acceptance or opening time until "now".

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.
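For illustration, a hypothetical squid.conf fragment using the directive as documented above (the value is an example only, not a recommendation):

```
# close now-idle persistent connections that have been open
# longer than 30 minutes (example value)
pconn_lifetime 30 minutes

# the default, 0 seconds, disables the limit entirely
#pconn_lifetime 0 seconds
```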

This option has the following behaviour when pipelined requests arrive
on a connection whose lifetime has expired:

 1. finish interpreting the Nth request
check whether pconn_lifetime has expired
 2. if pconn_lifetime has expired, then stop further reading and
do not interpret any already read raw bytes of the N+1st request
 3. otherwise, read and interpret read raw bytes of the N+1st request
and go to #1.
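The three steps above can be sketched as a loop; this is illustrative only, with the actual parsing/serving of a request elided and hypothetical names (not real Squid APIs):

```cpp
#include <cstddef>
#include <ctime>
#include <string>
#include <vector>

// Hypothetical sketch of the pipelined-request handling steps above;
// actually parsing/serving a request is elided, and these names are
// not real Squid APIs.
struct PipelineState {
    std::vector<std::string> buffered; // already-read raw pipelined requests
    std::time_t startTime;             // connection acceptance time
};

// Returns how many buffered requests get interpreted before the
// lifetime check stops further processing. A limit of 0 disables it.
inline std::size_t interpretPipeline(const PipelineState &s, std::time_t now,
                                     std::time_t pconnLifetime) {
    std::size_t interpreted = 0;
    while (interpreted < s.buffered.size()) {
        ++interpreted; // step 1: finish interpreting the Nth request
        if (pconnLifetime > 0 && now - s.startTime > pconnLifetime)
            break;     // step 2: expired; leave the N+1st request raw
    }                  // step 3: otherwise loop to the next request
    return interpreted;
}
```

Note that the request in progress is always finished before the expiry check, so the raw bytes of the next request are never half-interpreted.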


This is a Measurement Factory project
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2014-12-04 14:00:17 +
+++ src/SquidConfig.h	2014-12-15 09:28:15 +
@@ -72,40 +72,41 @@
 #if USE_HTTP_VIOLATIONS
 time_t negativeTtl;
 #endif
 time_t maxStale;
 time_t negativeDnsTtl;
 time_t positiveDnsTtl;
 time_t shutdownLifetime;
 time_t backgroundPingRate;
 
 struct {
 time_t read;
 time_t write;
 time_t lifetime;
 time_t connect;
 time_t forward;
 time_t peer_connect;
 time_t request;
 time_t clientIdlePconn;
 time_t serverIdlePconn;
 time_t ftpClientIdle;
+time_t pconnLifetime; ///< pconn_lifetime in squid.conf
 time_t siteSelect;
 time_t deadPeer;
 int icp_query;  /* msec */
 int icp_query_max;  /* msec */
 int icp_query_min;  /* msec */
 int mcast_icp_query;/* msec */
 time_msec_t idns_retransmit;
 time_msec_t idns_query;
 time_t urlRewrite;
 } Timeout;
 size_t maxRequestHeaderSize;
 int64_t maxRequestBodySize;
 int64_t maxChunkedRequestBodySize;
 size_t maxRequestBufferSize;
 size_t maxReplyHeaderSize;
 AclSizeLimit *ReplyBodySize;
 
 struct {
 unsigned short icp;
 #if USE_HTCP

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2014-12-08 11:25:58 +
+++ src/cf.data.pre	2014-12-15 09:28:15 +
@@ -6129,40 +6129,65 @@
 TYPE: time_t
 LOC: Config.Timeout.lifetime
 DEFAULT: 1 day
 DOC_START
 	The maximum amount of time a client (browser) is allowed to
 	remain connected to the cache process.  This protects the Cache
 	from having a lot of sockets (and hence file descriptors) tied up
 	in a CLOSE_WAIT state from remote clients that go away without
 	properly shutting down (either because of a network failure or
 	because of a poor client implementation).  The default is one
 	day, 1440 minutes.
 
 	NOTE:  The default value is intended to be much larger than any
 	client would ever need to be connected to your cache.  You
 	should probably change client_lifetime only as a last resort.
 	If you seem to have many client connections tying up
 	filedescriptors, we recommend first tuning the read_timeout,
 	request_timeout, persistent_request_timeout and quick_abort values.
 DOC_END
 
+NAME: pconn_lifetime
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.Timeout.pconnLifetime
+DEFAULT: 0 seconds
+DOC_START
+	Desired maximum lifetime of a persistent connection.
+	When set, Squid will close a now-idl

Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-06 Thread Tsantilas Christos

On 10/03/2014 09:58 PM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 3/10/2014 10:21 p.m., Tsantilas Christos wrote:

On 10/02/2014 08:24 AM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE- Hash: SHA1

On 2/10/2014 5:23 a.m., Tsantilas Christos wrote:

Ping

So which is the decision? Can the patch go to trunk? Should
the patch replaced with on other solution?



We already have client_lifetime directive which is documented as
doing exactly what this new directive does.
().

However, what that directive *actually* does is very different
from the documented behaviour - it is also used for controlling
between-request idle timeout (breaking client_idle_pconn_timeout
by making it 1 week!) and also server request timout (pretending
to be a timeout for duration between sending request and starting
to read reply).



My sense is that client_lifetime is a timeout which should be
triggered when the client waits a long time for an HTTP, ICAP, or
DNS server to finish. It should probably be renamed to
"client_wait_for_activity_timeout" and the documentation adapted.
We probably also need to fix it a little.


We already have a lot of different *_timeout directives being used for
activities. Each one named for the activity being watched.

What I am hearing you requesting, and seeing in your code, is a total
since-opened timeout. Covering all actions on that connection.




However, such a timeout parameter makes sense to me. It is used in
cases like: "The web browser waits 1 minute for the dns server to
respond, 1 minute for a helper to finish, 1 minute for the ICAP server
to respond, 2 minutes for the HTTP server to send the first bytes of
the answer, please close this connection, I am not interested in the
response any more..."

I should note that pconn_lifetime affects server-side
connections too, and also tries not to halt a served transaction.
It is only triggered after the transaction has finished, or while we
are waiting for new requests.


Yes, I noted that, and agree it is a good decision.


It is a timeout used by administrators to resolve a problem
without affecting active transactions. There are major differences
in use cases.



There is a lot of similarity here with:
* request_timeout which counts down the period between accept() and
first request bytes arriving, and
* client_idle_pconn_timeout which counts down the period between requests.

Although both those count in time relative to the current idle
activity, instead of time since connection accept().
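A since-accept() lifetime can be reconciled with such idle-relative timeouts by clamping; here is a hypothetical sketch of a timeLeft()-style helper along the lines discussed in this thread (assumed names, not Squid's actual Comm::Connection API):

```cpp
#include <algorithm>
#include <ctime>

// Hypothetical sketch: an idle timeout is clamped so it never extends
// the connection past its configured total lifetime (assumed names,
// not Squid's actual API).
struct LifetimeClock {
    std::time_t startTime;     // accept()/connect() time
    std::time_t pconnLifetime; // 0 = no lifetime limit configured
};

inline std::time_t timeLeft(const LifetimeClock &c, std::time_t now,
                            std::time_t idleTimeout) {
    if (c.pconnLifetime <= 0)
        return idleTimeout; // no lifetime limit configured
    const std::time_t elapsed = now - c.startTime;
    const std::time_t remaining =
        c.pconnLifetime > elapsed ? c.pconnLifetime - elapsed : 0;
    return std::min(idleTimeout, remaining);
}
```

With this shape, an idle-relative timeout like client_idle_pconn_timeout still applies in full early in the connection's life, then shrinks as the total lifetime budget runs out.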

I note that you said this pconn_lifetime did not need coding for ICAP
or server connections since those were stored in Pconn/Idle pools. The
Idle pool timeouts only count from start of idle period, just like
client_idle_pconn_timeout.
  So either request_timeout/client_idle_pconn_timeout make
pconn_lifetime needless or pconn_lifetime needs some since-connect()
equivalent on server connections (and ICAP).


My sense is that pconn_lifetime scopes the lifetime as since-connect() in
this patch.

Am I wrong, is there any bug to this patch?



I think your choice of scoping "lifetime" as since-accept() is right.







2) There is more work to do to integrate it properly with the
existing pconn timeout and lifetime directives.

I request that we include these in this patch:

* change ConnStateData::clientParseRequests() to use
conn->timeLeft(Config.Timeout.clientIdlePconn) when setting
timeout between requests. - this is another bug in the existing
code that is retained by your patch and needs fixing.



I think this change should not be done. We may affect a valid
existing request. If the timeLeft() is very short, we may time out
while waiting for data from helpers or the ICAP server.


Your patch is already setting keepalive=false when the pipeline may
have others still awaiting responses, and in fact some of those may be
fully serviced by a server and only awaiting their reply data to be
written to the client.


However, it is not the same as aborting a served transaction (including
POST or PUT requests).




If you want lifetime to have no bad side effects on a client
connection, then you need the timeout handler for it to set the client
connection to half-closed (no more reading) and let the existing
transactions (all of them) complete.


Maybe this is a good optimization, and probably we should add it to this 
patch.



If you are doing the half-closed action, then it is better to set idle
timeout using timeLeft() in ConnStateData::clientParseRequests()
instead of playing with requests keep-alive flags.




We need a client_wait_for_activity_timeout here. This is what the
"client_lifetime" now does.



No, it waits for a random request and sets the keepalive flag to close.
Activity on other requests is *probably* pending.



Currently, client_lifetime is set on the client-side connection after
we have received the whole HTTP request. This is not a connection
lifetime timeout.


Also set t

Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Alex Rousskov
On 10/03/2014 11:57 AM, Amos Jeffries wrote:
> On 4/10/2014 3:52 a.m., Alex Rousskov wrote:
>> On 10/01/2014 11:24 PM, Amos Jeffries wrote:
> 
>>> We already have client_lifetime directive which is documented
>>> as doing exactly what this new directive does. 
>>> ().
> 
>> No, client_lifetime is not documented (and is not meant) to do
>> what we want pconn_lifetime to do. Client_lifetime is supposed to
>> kill client connections exceeding the configured lifetime,
>> _regardless_ of the client (and client connection) state. We do
>> not want that for pconn_lifetime which only affects _idle_
>> connections.
> 
> Lets have a look at that documentation then...
> 
> http://www.squid-cache.org/Doc/config/client_lifetime/ : " The
> maximum amount of time a client (browser) is allowed to remain
> connected to the cache process. "
> 
> - scope: "client (browser)"

Yes.


> And Christos patch description: " the desired maximum lifetime of a
> persistent connection. "

Yes. No "side" limitation here.


> Having read through the patch. What it does is *only* set a timer 
> running when the client connection is accepted, and limits (most) 
> timeouts applied later on that connection to be no more than the 
> configured total.

The patch should limit the _total_ pconn lifetime (but enforce that
limit only when Squid thinks the persistent connection is [going to
be] idle). Also, it should apply the same logic to origin server
connections that we pool. AFAICT, the patch does all of the above. If
it does not, there is a bug in the patch. More on that further below.

Please note that I use the term "idle from Squid point of view" as in
"may be closed without causing much harm". That is, it is an opposite
of "in the middle of handling some transaction". We could split hairs
arguing about some corner cases, but I hope we do not have to do that.
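This pool-or-close decision can be sketched in a few lines (the names here are invented for illustration and are not Squid's actual API): the lifetime check runs only at the moment an idle connection would otherwise be parked in a pool, so in-flight transactions are never cut short.

```cpp
#include <ctime>

// Hypothetical sketch, not Squid's real code: a connection remembers when
// it was accepted/opened; the pconn_lifetime check happens only when the
// connection becomes idle and is about to enter an idle pool.
struct SketchConn {
    std::time_t startTime; // set at accept() or connect()
};

// Returns true when the now-idle connection may enter the idle pool;
// false means it exceeded pconn_lifetime and should be closed instead.
bool mayPoolIdleConn(const SketchConn &c, std::time_t now,
                     std::time_t pconnLifetime)
{
    if (pconnLifetime <= 0)
        return true; // zero/unset disables the lifetime limit
    return (now - c.startTime) < pconnLifetime;
}
```

A connection in the middle of a transaction is never examined here; only the pool-or-close decision is affected, matching the "may be closed without causing much harm" notion of idle above.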


>> Moreover, pconn_lifetime is meant to apply to all connections,
>> not just the connections from a client.
> 
> Then Christos patch is seriously incomplete, because it only
> affects connections arriving from clients. The lifetime is never
> set on any of the other types of connection that make up "all".

AFAICT, the patch affects both HTTP user and origin connections. The
origin connections are affected by pconn.cc changes. Perhaps I
misinterpreted that change or you missed it?

I am not 100% sure, but I suspect those pconn.cc changes affect ICAP
connections as well.

As we discussed, the patch does not add new code to pool idle HTTP
user connections -- that big change is outside this patch scope, but
when/if that code is added, the same approach can be applied there
trivially.


>>> * update tunnel.cc client connection timeouts to use 
>>> conn->timeLeft(). - maybe with some new tunnel-specific
>>> lifetime config item. But I think your new config lifetime is
>>> appropriate for now at least.
> 
>> There is no concept of "idle persistent connection" for TCP
>> tunnels so we should not apply pconn_lifetime to them right now.
> 
> You added "idle" to that description. The code in question *does
> not* limit itself to idle connections, nor persistent connections.

I did not add that. The proposed feature should apply to "idle"
connections only, as documented: "When set, Squid will close a
now-idle persistent connection that ..."
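For illustration, enabling the feature in squid.conf could look like this (the directive name comes from the patch; the one-hour value is an arbitrary example, not a recommendation):

```
# Close persistent connections, once idle, when their total age
# (measured since accept()/connect()) exceeds one hour.
# Active transactions are not interrupted.
pconn_lifetime 1 hour
```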


> So do we at least have a consensus on terminology:
> 
> That "lifetime" for a connection means from its opening 
> accept()/connect() to its close()  ??

Yes, as documented in the patch: 'Connection lifetime is the time
period from the connection acceptance or opening time until "now"'.


> That the bits named FOO_timeout means some duration of that
> lifetime spent doing a "FOO" ??

Timeout and lifetime are very different things so I would not mix them
up like that, but yes, agents may do FOO during their lifetime (often
many times!).


Hope this clarifies,

Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Amos Jeffries

On 3/10/2014 10:21 p.m., Tsantilas Christos wrote:
> On 10/02/2014 08:24 AM, Amos Jeffries wrote:
>> 
>> On 2/10/2014 5:23 a.m., Tsantilas Christos wrote:
>>> Ping
>>> 
>>> So which is the decision? Can the patch go to trunk? Should
>>> the patch be replaced with another solution?
>>> 
>> 
>> We already have client_lifetime directive which is documented as
>> doing exactly what this new directive does. 
>> ().
>> 
>> However, what that directive *actually* does is very different
>> from the documented behaviour - it is also used for controlling 
>> between-request idle timeout (breaking client_idle_pconn_timeout
>> by making it 1 week!) and also server request timeout (pretending
>> to be a timeout for duration between sending request and starting
>> to read reply).
> 
> 
> My sense is that client_lifetime is a timeout which should be 
> triggered when the client waits a long time for an HTTP, ICAP, or
> DNS server to finish. It should probably be renamed to
> "client_wait_for_activity_timeout" and the documentation adapted.
> We probably also need to fix it a little.

We already have a lot of different *_timeout directives being used for
activities. Each one named for the activity being watched.

What I am hearing you requesting, and seeing in your code, is a total
since-opened timeout. Covering all actions on that connection.


> 
> However, such a timeout parameter makes sense to me. It is used in
> this case: "The web browser waits 1 minute for the DNS server to
> respond, 1 minute for a helper to finish, 1 minute for the ICAP
> server to respond, 2 minutes for the HTTP server to send the first
> bytes of the answer; please close this connection, I am no longer
> interested in the response..."
> 
> I should note that pconn_lifetime affects server-side 
> connections too. It also tries not to halt a transaction being
> served: it is only triggered after the transaction is finished, or
> while we are waiting for new requests.

Yes, I noted that, and agree it is a good decision.

> It is a timeout used by administrators to resolve a problem
> without affecting active transactions. There are major differences
> in use cases.
> 

There is a lot of similarity here with:
* request_timeout which counts down the period between accept() and
first request bytes arriving, and
* client_idle_pconn_timeout which counts down the period between requests.

Although both of those count time relative to the current idle
activity, instead of time since connection accept().

I note that you said this pconn_lifetime did not need coding for ICAP
or server connections since those were stored in Pconn/Idle pools. The
Idle pool timeouts only count from start of idle period, just like
client_idle_pconn_timeout.
So either request_timeout/client_idle_pconn_timeout make
pconn_lifetime needless, or pconn_lifetime needs some since-connect()
equivalent on server connections (and ICAP).


I think your choice of scoping "lifetime" as since-accept() is right.
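With since-accept() scoping, the timeLeft() idea amounts to clamping any later timeout on the connection so it cannot outlive the remaining total budget. A minimal sketch (invented signature, simplified from whatever the real Comm::Connection method does):

```cpp
#include <algorithm>
#include <ctime>

// Hypothetical sketch of timeLeft(): given a nominal timeout (e.g. the
// configured idle-pconn timeout), return it clamped to the connection's
// remaining total lifetime.
std::time_t timeLeftSketch(std::time_t startTime, std::time_t now,
                           std::time_t lifetime, std::time_t nominalTimeout)
{
    if (lifetime <= 0)
        return nominalTimeout; // no lifetime limit configured
    const std::time_t remaining = lifetime - (now - startTime);
    if (remaining <= 0)
        return 1; // budget already spent: expire almost immediately
    return std::min(remaining, nominalTimeout);
}
```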


>> 
>> I think we should extend it into a better solution for the above
>> bugs, scoping it correctly.
>> 
>> 
>> 1) In the existing client_side_reply.cc keep-alive conditions
>> there exists a line:
>> 
>> else if (http->getConn() && http->getConn()->port->listenConn ==
>> NULL)
>> 
>> your patch retains it as: if (http->getConn()->port->listenConn
>> == NULL)
>> 
>> - this check is *supposed* to be the check for when, as you put
>> it "environments with long-lived connections where Squid
>> configuration ... change during a single connection lifetime." -
>> that is buggy. A check of !Comm::IsConnOpen() is what will 
>> determine a live/dead listener socket.
> 
> This should be fixed.
> 
>> 
>> 
>> 2) There is more work to do to integrate it properly with the
>> existing pconn timeout and lifetime directives.
>> 
>> I request that we include these in this patch:
>> 
>> * change ConnStateData::clientParseRequests() to use 
>> conn->timeLeft(Config.Timeout.clientIdlePconn) when setting
>> timeout between requests. - this is another bug in the existing
>> code that is retained by your patch and needs fixing.
>> 
> 
> I think this change should not be done. We may affect a valid
> existing request. If timeLeft() is very short, we may time out
> while waiting for data from helpers or the ICAP server.

Your patch already sets keepalive=false when the pipeline may have
other requests still awaiting responses, and in fact some of those may
be fully serviced by a server and only awaiting their reply data to be
written to the client.

If you want lifetime to have no bad side effects on a client
connection, then you need the timeout handler for it to set the client
connection to half-closed (no more reading) and let the existing
transactions (all of them) complete.

If you are doing the half-closed action, then it is better to set idle
timeout using timeLeft() in ConnStateData

Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Amos Jeffries

On 4/10/2014 3:56 a.m., Alex Rousskov wrote:
> On 10/03/2014 03:21 AM, Tsantilas Christos wrote:
>> But still I am not sure that we should use different parameters
>> for server and client connections
> 
> IMO, side-specific parameters are not needed for the intended use
> cases of the pconn_lifetime feature -- long-lived connections
> getting out of sync with the environment configuration.
> 
> Please do not forget that we also have ICAP side, DNS side, etc. We
> can always add side-specific options if they become needed, but I
> hope there will be no good use cases to justify them.

We have an almost but not quite complete set of directionally specific
existing directives in the pattern below (if we believe the docs) ...

Client connections:
* Total connection lifetime
 - client_lifetime, but only between requests
 + proposed "pconn_lifetime"

* Timeout from transaction initiation to first bytes arrival (request)
 - request_timeout

* Timeouts for a transaction specific acts
 - read_timeout, write_timeout, forward_timeout

* Timeout for idle period between transactions
 - client_idle_pconn_timeout


Origin/Peer Server Connections:
* Total connection lifetime
  - none

* Timeout from transaction initiation to first bytes arrival (reply)
 - client_lifetime (undocumented, yuck)

* Timeouts for a transaction specific acts
 - read_timeout, write_timeout

* Timeout for idle period between transactions
 - server_idle_pconn_timeout


Note "pconn_lifetime" does not accurately describe the intention since
we have absolutely no idea whether a connection is "a pconn" or not
until too late in the connection's lifecycle (on receiving reply
headers). So forget the "pconn" part.


ICAP "side" OPTIONS is already controlling lifetime in the protocol.
DNS and the other UDP-based "sides" already have per-query timeouts
that apply better in UDP context.


The requests I made will give us this model for the HTTP "sides"...

Client connections:
* Total connection lifetime
 - client_lifetime (using proposed "pconn_lifetime" patch)

* Timeout from transaction initiation to first bytes arrival (request)
 - request_timeout

* Timeouts for transaction specific acts
 - action specific *_timeout

* Timeout for idle period between transactions
 - client_idle_pconn_timeout


Origin/Peer Server Connections:
* Total connection lifetime
  - server_lifetime (using optional extension to pconn_lifetime patch)

* Timeout from transaction initiation to first bytes arrival (reply)
 - TODO (server_lifetime implicitly)

* Timeouts for transaction specific acts
 - action specific *_timeout

* Timeout for idle period between transactions
 - server_idle_pconn_timeout


Amos


Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Amos Jeffries

On 4/10/2014 3:52 a.m., Alex Rousskov wrote:
> On 10/01/2014 11:24 PM, Amos Jeffries wrote:
> 
>> We already have client_lifetime directive which is documented as 
>> doing exactly what this new directive does. 
>> ().
> 
> No, client_lifetime is not documented (and is not meant) to do what
> we want pconn_lifetime to do. Client_lifetime is supposed to kill
> client connections exceeding the configured lifetime, _regardless_
> of the client (and client connection) state. We do not want that
> for pconn_lifetime which only affects _idle_ connections.

Let's have a look at that documentation then...

http://www.squid-cache.org/Doc/config/client_lifetime/ :
"
The maximum amount of time a client (browser) is allowed to
remain connected to the cache process.
"

 - scope: "client (browser)"

And Christos patch description:
"
the desired maximum lifetime of a persistent connection.
"

Having read through the patch: what it does is *only* set a timer
running when the client connection is accepted, and limit (most)
timeouts applied later on that connection to no more than the
configured total.

 - scope implied: all client (browser) connections


Christos patch is therefore implementing the _documented_ behaviour of
client_lifetime.


Sadly the actual current implementation of client_lifetime does
several undocumented and IMO wrong things.

What I asked for is that those wrong things get moved to other
directives so they can still be controlled, and that client_lifetime
configure the new logic so that people already using it will actually
start to see it working as documented.

> 
> Moreover, pconn_lifetime is meant to apply to all connections, not 
> just the connections from a client.

Then Christos patch is seriously incomplete, because it only affects
connections arriving from clients. The lifetime is never set on any of
the other types of connection that make up "all".

> 
>> * change your new directive "pconn_lifetime" to be 
>> "client_pconn_lifetime" since it is specifically scoped to client
>>  browser connections.
> 
> pconn_lifetime is not exclusive to client browser connections. 
> Ideally, it should apply to _all_ Squid connections that are in an 
> "idle persistent connection" state. We are not there yet, but we 
> already support origin connections AFAIK.

The patch code presented does none of what you say above.
 - it *is* limited only to client connections,
 - it is *not* limited to measuring idle connections,
 - it is *not* limited to persistent connections,


Ironically, the client_lifetime directive's current implementation does
what you are describing. It starts measuring from the beginning of a
client connection's idle period (violating the documented behaviour),
and gets cleared when the connection stops being idle. For server
connections it strangely starts counting from the end of the idle
period, when the request is sent, until the reply comes back.


> 
>> * update tunnel.cc client connection timeouts to use 
>> conn->timeLeft(). - maybe with some new tunnel-specific lifetime 
>> config item. But I think your new config lifetime is appropriate 
>> for now at least.
> 
> There is no concept of "idle persistent connection" for TCP tunnels
> so we should not apply pconn_lifetime to them right now.

You added "idle" to that description. The code in question *does not*
limit itself to idle connections, nor persistent connections.

Christos' description (and the patch) is applied to all client
connections. Without "idle", tunnels do indeed meet the description of
a "persistent connection", and the timeout I'm mentioning above is the
one being applied on the client end of the tunnel.

In fact Christos' patch sets the pconn_lifetime timeout on tunnel
connections to measure the parse and setup, then reverts to using
client_lifetime "maximum connection time" to cover the tunnel idle
periods *individually* (violating the documented behaviour of both).


So do we at least have a consensus on terminology:

 That "lifetime" for a connection means from its opening
accept()/connect() to its close()  ??

 That the bits named FOO_timeout means some duration of that lifetime
spent doing a "FOO" ??


Amos


Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Alex Rousskov
On 10/03/2014 03:21 AM, Tsantilas Christos wrote:
> But still I am not sure that we should use different parameters for
> server and client connections

IMO, side-specific parameters are not needed for the intended use cases
of the pconn_lifetime feature -- long-lived connections getting out of
sync with the environment configuration.

Please do not forget that we also have ICAP side, DNS side, etc. We can
always add side-specific options if they become needed, but I hope there
will be no good use cases to justify them.


Thank you,

Alex.



Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Alex Rousskov
On 10/01/2014 11:24 PM, Amos Jeffries wrote:

> We already have client_lifetime directive which is documented as
> doing exactly what this new directive does. 
> ().

No, client_lifetime is not documented (and is not meant) to do what we
want pconn_lifetime to do. Client_lifetime is supposed to kill client
connections exceeding the configured lifetime, _regardless_ of the
client (and client connection) state. We do not want that for
pconn_lifetime which only affects _idle_ connections.

Moreover, pconn_lifetime is meant to apply to all connections, not
just the connections from a client.


> * change your new directive "pconn_lifetime" to be 
> "client_pconn_lifetime" since it is specifically scoped to client 
> browser connections.

pconn_lifetime is not exclusive to client browser connections.
Ideally, it should apply to _all_ Squid connections that are in an
"idle persistent connection" state. We are not there yet, but we
already support origin connections AFAIK.



> * update tunnel.cc client connection timeouts to use
> conn->timeLeft(). - maybe with some new tunnel-specific lifetime
> config item. But I think your new config lifetime is appropriate
> for now at least.

There is no concept of "idle persistent connection" for TCP tunnels so
we should not apply pconn_lifetime to them right now.


Hope this clarifies,

Alex.


Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-03 Thread Tsantilas Christos

On 10/02/2014 08:24 AM, Amos Jeffries wrote:


On 2/10/2014 5:23 a.m., Tsantilas Christos wrote:

Ping

So which is the decision? Can the patch go to trunk? Should the
patch be replaced with another solution?



We already have client_lifetime directive which is documented as doing
exactly what this new directive does.
().

However, what that directive *actually* does is very different from
the documented behaviour - it is also used for controlling
between-request idle timeout (breaking client_idle_pconn_timeout by
making it 1 week!) and also server request timeout (pretending to be a
timeout for duration between sending request and starting to read reply).



My sense is that client_lifetime is a timeout which should be 
triggered when the client waits a long time for an HTTP, ICAP, or DNS 
server to finish.
It should probably be renamed to "client_wait_for_activity_timeout" and 
the documentation adapted. We probably also need to fix it a little.


However, such a timeout parameter makes sense to me. It is used in the 
case:
  "The web browser waits 1 minute for the DNS server to respond, 1 
minute for a helper to finish, 1 minute for the ICAP server to respond, 
2 minutes for the HTTP server to send the first bytes of the answer; 
please close this connection, I am no longer interested in the response..."


I should note that pconn_lifetime affects server-side connections too. 
It also tries not to halt a transaction being served: it is only 
triggered after the transaction is finished, or while we are waiting for 
new requests.
It is a timeout used by administrators to resolve a problem without 
affecting active transactions.

There are major differences in use cases.





I think we should extend it into a better solution for the above bugs,
scoping it correctly.


1) In the existing client_side_reply.cc keep-alive conditions there
exists a line:

  else if (http->getConn() && http->getConn()->port->listenConn == NULL)

your patch retains it as:
   if (http->getConn()->port->listenConn == NULL)

  - this check is *supposed* to be the check for when, as you put it
"environments with long-lived connections where Squid configuration
... change during a single connection lifetime."
  - that is buggy. A check of !Comm::IsConnOpen() is what will
determine a live/dead listener socket.


This should be fixed.




2) There is more work to do to integrate it properly with the existing
pconn timeout and lifetime directives.

I request that we include these in this patch:

  * change ConnStateData::clientParseRequests() to use
conn->timeLeft(Config.Timeout.clientIdlePconn) when setting timeout
between requests.
  - this is another bug in the existing code that is retained by your
patch and needs fixing.



I think this change should not be done.
We may affect a valid existing request. If timeLeft() is very short, we 
may time out while waiting for data from helpers or the ICAP server.


We need a client_wait_for_activity_timeout here.
This is what the "client_lifetime" now does.



  * change your new directive "pconn_lifetime" to be
"client_pconn_lifetime" since it is specifically scoped to client
browser connections.
  - keeping or updating the client_lifetime documentation.

  * add some new directive (or find an existing one that is supposed to
be applied) for the server waiting-for-reply timeout period.
  - use that instead of Config.Timeout.lifetime in src/http.cc

  * update tunnel.cc client connection timeouts to use conn->timeLeft().
   - maybe with some new tunnel-specific lifetime config item. But I
think your new config lifetime is appropriate for now at least.

NP: at this point Config.Timeout.lifetime should not be used anywhere
in Squid and can be dropped.

  * check the SSL code where Config.Timeout.lifetime is mentioned in
comment to see if the timeout there is still relevant. Probably is but
needs checking and the comment updating.


So you are suggesting using two configuration parameters here: one for 
client-side connections and one for the server side.


For this we need to require a lifetime value in the Comm::Connection 
constructor. It will make this patch a little bigger, but it can be 
implemented without huge changes.


But still I am not sure that we should use different parameters for 
server and client connections.







Optional / future work:

A) this depends on having a matching/symmetrical server_pconn_lifetime
directive for the server connections.

Since there would be separate lifetimes for each 'side', I think we
should follow up with a patch making Comm::Connection::startTime_ be
initialized by Comm::TcpAcceptor() and Comm::ConnOpener() with the
relevant lifetime.

Then timeLeft() method does not need dependency on SquidConfig.h and
connection lifetimes will actually be predictable across a
reconfigure. At present a reconfigure changing lifetime will change
some connections but not others depending on whether they a

Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-01 Thread Amos Jeffries

On 2/10/2014 5:23 a.m., Tsantilas Christos wrote:
> Ping
> 
> So which is the decision? Can the patch go to trunk? Should the
> patch be replaced with another solution?
> 

We already have client_lifetime directive which is documented as doing
exactly what this new directive does.
().

However, what that directive *actually* does is very different from
the documented behaviour - it is also used for controlling
between-request idle timeout (breaking client_idle_pconn_timeout by
making it 1 week!) and also server request timout (pretending to be a
timeout for duration between sending request and starting to read reply).


I think we should extend it into a better solution for the above bugs,
scoping it correctly.


1) In the existing client_side_reply.cc keep-alive conditions there
exists a line:

 else if (http->getConn() && http->getConn()->port->listenConn == NULL)

your patch retains it as:
  if (http->getConn()->port->listenConn == NULL)

 - this check is *supposed* to be the check for when, as you put it
"environments with long-lived connections where Squid configuration
... change during a single connection lifetime."
 - that is buggy. A check of !Comm::IsConnOpen() is what will
determine a live/dead listener socket.


2) There is more work to do to integrate it properly with the existing
pconn timeout and lifetime directives.

I request that we include these in this patch:

 * change ConnStateData::clientParseRequests() to use
conn->timeLeft(Config.Timeout.clientIdlePconn) when setting timeout
between requests.
 - this is another bug in the existing code that is retained by your
patch and needs fixing.

 * change your new directive "pconn_lifetime" to be
"client_pconn_lifetime" since it is specifically scoped to client
browser connections.
 - keeping or updating the client_lifetime documentation.

 * add some new directive (or find an existing one that is supposed to
be applied) for the server waiting-for-reply timeout period.
 - use that instead of Config.Timeout.lifetime in src/http.cc

 * update tunnel.cc client connection timeouts to use conn->timeLeft().
  - maybe with some new tunnel-specific lifetime config item. But I
think your new config lifetime is appropriate for now at least.

NP: at this point Config.Timeout.lifetime should not be used anywhere
in Squid and can be dropped.

 * check the SSL code where Config.Timeout.lifetime is mentioned in
comment to see if the timeout there is still relevant. Probably is but
needs checking and the comment updating.



Optional / future work:

A) this depends on having a matching/symmetrical server_pconn_lifetime
directive for the server connections.

Since there would be separate lifetimes for each 'side', I think we
should follow up with a patch making Comm::Connection::startTime_ be
initialized by Comm::TcpAcceptor() and Comm::ConnOpener() with the
relevant lifetime.

Then the timeLeft() method does not need a dependency on SquidConfig.h,
and connection lifetimes will actually be predictable across a
reconfigure. At present, a reconfigure that changes the lifetime will
change some connections but not others, depending on whether they
actually receive a request between the reconfigure and one of the two
old/new lifetimes.

B) after the above we can also de-duplicate the timeLeft(X) calls to
one inside commSetConnTimeout().
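Points (A) and (B) could look roughly like this (hypothetical names throughout): the lifetime is captured when the connection object is created by its acceptor or opener, so a later reconfigure cannot retroactively change it, and the clamp lives in one central helper instead of every call site.

```cpp
#include <algorithm>
#include <ctime>

// Hypothetical sketch: the side-specific lifetime is injected at creation
// time, so timeLeft() needs no access to global configuration and behaves
// predictably across a reconfigure.
class ConnSketch {
public:
    ConnSketch(std::time_t start, std::time_t lifetime)
        : startTime_(start), lifetime_(lifetime) {}

    // Clamp a nominal timeout to the remaining lifetime budget.
    std::time_t timeLeft(std::time_t now, std::time_t nominalTimeout) const {
        if (lifetime_ <= 0)
            return nominalTimeout; // no limit configured
        const std::time_t remaining = lifetime_ - (now - startTime_);
        return remaining <= 0 ? 1 : std::min(remaining, nominalTimeout);
    }

private:
    std::time_t startTime_; // set once, at accept()/connect()
    std::time_t lifetime_;  // fixed for the connection's whole life
};

// Sketch of point (B): a single central place applies the clamp, so
// individual call sites stop repeating the timeLeft() computation.
std::time_t setConnTimeoutSketch(const ConnSketch &c, std::time_t now,
                                 std::time_t nominalTimeout)
{
    return c.timeLeft(now, nominalTimeout);
}
```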

Amos



Re: [squid-dev] [PATCH] pconn_lifetime

2014-10-01 Thread Tsantilas Christos

Ping

So which is the decision?
Can the patch go to trunk?
Should the patch be replaced with another solution?

On 09/02/2014 07:25 PM, Alex Rousskov wrote:

On 09/02/2014 06:28 AM, Amos Jeffries wrote:

On 2/09/2014 7:38 p.m., Tsantilas Christos wrote:

On 09/02/2014 03:51 AM, Amos Jeffries wrote:


On 2/09/2014 4:49 a.m., Tsantilas Christos wrote:

Hi all,

This patch add a new configuration option the
'pconn_lifetime' to allow users set the desired maximum
lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection
that exceeded configured lifetime instead of moving the
connection into the idle connection pool (or equivalent). No
effect on ongoing/active transactions. Connection lifetime
is the time period from the connection acceptance or opening
time until "now".

This limit is useful in environments with long-lived
connections where Squid configuration or environmental
factors change during a single connection lifetime. If
unrestricted, some connections may last for hours and even
days, ignoring those changes that should have affected their
behavior or their existence.

This is a Measurement Factory project


Two problems with this.

* the directive name does not indicate whether it applies to
client, server, or ICAP connections.



It applies to any connection: server, client, or ICAP.


Those are very distinct groups of connections. With very different
lifetime properties.


Those properties are about the same as far as this option is
concerned. It would be possible to split this option into 4 or more
different ones, to allow for more precise control, but I think that
should not be done until there is a specific use case for such a split.



* the rationale for its need is a bit screwy. Any directives
which affect Squid's current running state need to ensure in
their post-reconfigure logic that state is consistent.
Preferably create a runner, or use configDoConfigure if that
is too much.


The alternative you suggest is a bit screwy on two counts:

1) It does not address environmental factors outside of Squid
knowledge. A trivial example would be typical IP table rules that
allow already accepted connections. If the acceptance criteria change,
the admin may want the already accepted connections to be nicely
closed to make sure they do not violate the new rules.

2) Given the number of squid.conf directives with side-effects (and
side-effects of side-effects), it is unlikely that a reasonable effort
would be sufficient to identify all of them. Moreover, in some cases,
it would probably be very difficult to judge whether there is a side
effect that warrants connection closure. While I agree that we should
improve reconfigure handling in this area (and have posted specific
proposals for that), I do not think what you are suggesting is a
practical alternative for the proposed new directive in the
foreseeable future.



Do you mean to run over the persistent connections list and
change the timeout?


I mean:

1) for the client connections list, set the flag forcing keep-alive
to terminate after current request. - only if the ports config
changed.


Port configuration adjustments are only one of the many changes the new
option is meant to address. Moreover, correctly determining which port
configuration changes should lead to automatic pconn closures is very
difficult!



2) For the server pconn idle pools call close() on all existing
connections. - but why? what condition requires this?


Unknown. The option is for admins that know what those conditions are.



3) for the cache_peer standby pools, calling close() - only if the
peer config changed.


Same as (1) above.



4) for the ICAP idle pools calling close(). - but why? what
condition needs this other than ICAP config changes, and thus
parent object holding them open being destructed?


Same as (2) above.



5) for the ICAP currently active connections set a flag closing
after current request. - but again why? what condition has
changed?


Same as (2) above.




To support that we need some API updates to globally access
the currently active connection lists for servers/client/icap.
But there are other things like stats and reporting also
needing that API, so we should look at adding it instead of yet
another "temporary" workaround.



We do not need to make any change to support ICAP and server-side 
connections, because we store these connections in IdleConnList or 
PconnPool structures.



Currently we cannot change the timeout for client-side keep-alive 
connections. We do not have a list of these connections available.



That is what I refer to as the needed API. Stats reporting and delay 
pool reassignment also need this std::list of active 
ConnStateData connections.


I agree that having an API to iterate idle user pconns would be
useful. Implementing that correctly is not easy because those pconns
are owned by other objects. Do you insist th