[squid-dev] [PATCH] Allow unlimited LDAP search filter for ext_ldap_group_acl helper

2015-11-02 Thread Tsantilas Christos

Hi all,

The LDAP search filter in ext_ldap_group_acl is limited to 256 
characters. In some environments the user DN or group filter can be 
larger than this limitation.


This patch uses dynamically allocated buffers for LDAP search filters.

This is a Measurement Factory project
Allow unlimited LDAP search filter for ext_ldap_group_acl helper

The LDAP search filter in ext_ldap_group_acl is limited to 256 characters.
In some environments the user DN or group filter can be larger than this
limitation. 
This patch uses dynamically allocated buffers for LDAP search filters.

This is a Measurement Factory project

=== modified file 'helpers/external_acl/LDAP_group/ext_ldap_group_acl.cc'
--- helpers/external_acl/LDAP_group/ext_ldap_group_acl.cc	2015-09-07 17:44:33 +0000
+++ helpers/external_acl/LDAP_group/ext_ldap_group_acl.cc	2015-10-28 12:25:24 +0000
@@ -19,60 +19,63 @@
  * or (at your option) any later version.
  *
  * Authors:
  *  Flavio Pescuma 
  *  Henrik Nordstrom 
  *  MARA Systems AB, Sweden 
  *
  * With contributions from others mentioned in the ChangeLog file
  *
  * In part based on squid_ldap_auth by Glen Newton and Henrik Nordstrom.
  *
  * Latest version of this program can always be found from MARA Systems
  * at http://marasystems.com/download/LDAP_Group/
  *
  * Dependencies: You need to get the OpenLDAP libraries
  * from http://www.openldap.org or use another compatible
  * LDAP C-API library.
  *
  * If you want to make a TLS enabled connection you will also need the
  * OpenSSL libraries linked into openldap. See http://www.openssl.org/
  */
 #include "squid.h"
 #include "helpers/defines.h"
 #include "rfc1738.h"
 #include "util.h"
 
 #define LDAP_DEPRECATED 1
 
 #include <cctype>
 #include <cstring>
+#include <algorithm>
+#include <iomanip>
+#include <sstream>
 
 #if _SQUID_WINDOWS_ && !_SQUID_CYGWIN_
 
 #define snprintf _snprintf
 #include <windows.h>
 #include <winldap.h>
 #ifndef LDAPAPI
 #define LDAPAPI __cdecl
 #endif
 #ifdef LDAP_VERSION3
 #ifndef LDAP_OPT_X_TLS
 #define LDAP_OPT_X_TLS 0x6000
 #endif
 /* Some tricks to allow dynamic bind with ldap_start_tls_s entry point at
  * run time.
  */
 #undef ldap_start_tls_s
 #if LDAP_UNICODE
 #define LDAP_START_TLS_S "ldap_start_tls_sW"
 typedef WINLDAPAPI ULONG(LDAPAPI * PFldap_start_tls_s) (IN PLDAP, OUT PULONG, OUT LDAPMessage **, IN PLDAPControlW *, IN PLDAPControlW *);
 #else
 #define LDAP_START_TLS_S "ldap_start_tls_sA"
 typedef WINLDAPAPI ULONG(LDAPAPI * PFldap_start_tls_s) (IN PLDAP, OUT PULONG, OUT LDAPMessage **, IN PLDAPControlA *, IN PLDAPControlA *);
 #endif /* LDAP_UNICODE */
 PFldap_start_tls_s Win32_ldap_start_tls_s;
 #define ldap_start_tls_s(l,s,c) Win32_ldap_start_tls_s(l,NULL,NULL,s,c)
 #endif /* LDAP_VERSION3 */
 
 #else
 
@@ -583,250 +586,243 @@
 break;
 } else {
 if (tryagain) {
 tryagain = 0;
 ldap_unbind(ld);
 ld = NULL;
 goto recover;
 }
 }
 }
 if (found)
 SEND_OK("");
 else {
 SEND_ERR("");
 }
 
 if (ld != NULL) {
 if (!persistent || (squid_ldap_errno(ld) != LDAP_SUCCESS && squid_ldap_errno(ld) != LDAP_INVALID_CREDENTIALS)) {
 ldap_unbind(ld);
 ld = NULL;
 } else {
 tryagain = 1;
 }
 }
 }
 if (ld)
 ldap_unbind(ld);
 return 0;
 }
 
-static int
-ldap_escape_value(char *escaped, int size, const char *src)
+static std::string
+escape_character(const char c)
 {
-int n = 0;
-while (size > 4 && *src) {
-switch (*src) {
+std::stringstream str;
+switch (c) {
 case '*':
 case '(':
 case ')':
 case '\\':
-n += 3;
-size -= 3;
-if (size > 0) {
-*escaped = '\\';
-++escaped;
-snprintf(escaped, 3, "%02x", (unsigned char) *src);
-++src;
-escaped += 2;
-}
+str << '\\' << std::setfill('0') << std::setw(2) << std::hex << (int)c;
 break;
 default:
-*escaped = *src;
-++escaped;
-++src;
-++n;
---size;
-}
+str << c;
 }
-*escaped = '\0';
-return n;
-}
-
-static int
-build_filter(char *filter, int size, const char *templ, const char *user, const char *group)
-{
-int n;
-while (*templ && size > 0) {
+return str.str();
+}
+
+static std::string
+ldap_escape_value(const std::string &src)
+{
+std::string s;
+std::for_each(src.begin(), src.end(), [&s](const char &c) { s.append(escape_character(c)); });
+return s;
+}
+
+static bool
+build_filter(std::string &filter, const char *templ, const char *user, const char *group)
+{
+std::stringstream str;
+while (*templ) {
 switch (*templ) {
 

Re: [squid-dev] [PATCH] %ssl::<cert_errors format code

2015-10-08 Thread Tsantilas Christos

Patch applied to trunk as r14343.

On 10/07/2015 06:11 PM, Tsantilas Christos wrote:

If there are no objections, I will apply this patch to trunk.



On 09/29/2015 06:11 PM, Tsantilas Christos wrote:

A new version of this patch.

On 09/24/2015 04:11 PM, Amos Jeffries wrote:

On 17/09/2015 8:08 p.m., Tsantilas Christos wrote:


Currently Squid with SSL bumping only logs SSL errors that have caused
Squid to block traffic. It does not log SSL errors that are mimicked.
Logging a list with all encountered (and ignored) errors is interesting
for debugging and statistics reasons.

The new %ssl::<cert_errors format code logs the list of encountered SSL errors.
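
For context, a hypothetical squid.conf fragment using the new code could look like this (the logformat name and log path are illustrative, not from the patch):

    # assumed usage: record the accumulated SSL errors per transaction
    logformat sslreport %ts.%03tu %>a %Ss/%03>Hs %ru %ssl::<cert_errors
    access_log /var/log/squid/ssl_report.log sslreport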

in cf.data.pre:

* Please leave a 1-line whitespace gap between these very long
descriptions. Same as you can see above the cert_issuer option
description.


in src/format/Format.cc:

* please shuffle the switch case up above the two "not implemented"
existing ones.

  * Also leave whitespace around the new case code. The existing ones
are
only squashed together since they both fall through to the same break.

* sslErrorName can be a static function local to this .cc
  - that avoids the need to touch Format.h


all of the above fixed in this new patch.





Amos



Re: [squid-dev] [PATCH] %ssl::

2015-10-07 Thread Tsantilas Christos

If there are no objections, I will apply this patch to trunk.



On 09/29/2015 06:11 PM, Tsantilas Christos wrote:

A new version of this patch.

On 09/24/2015 04:11 PM, Amos Jeffries wrote:

On 17/09/2015 8:08 p.m., Tsantilas Christos wrote:


Currently Squid with SSL bumping only logs SSL errors that have caused
Squid to block traffic. It does not log SSL errors that are mimicked.
Logging a list with all encountered (and ignored) errors is interesting
for debugging and statistics reasons.

The new %ssl::<cert_errors format code logs the list of encountered SSL errors.

in cf.data.pre:

* Please leave a 1-line whitespace gap between these very long
descriptions. Same as you can see above the cert_issuer option
description.


in src/format/Format.cc:

* please shuffle the switch case up above the two "not implemented"
existing ones.

  * Also leave whitespace around the new case code. The existing ones are
only squashed together since they both fall through to the same break.

* sslErrorName can be a static function local to this .cc
  - that avoids the need to touch Format.h


all of the above fixed in this new patch.





Amos



Re: [squid-dev] [PATCH] %ssl::<cert_errors format code

2015-09-29 Thread Tsantilas Christos

A new version of this patch.

On 09/24/2015 04:11 PM, Amos Jeffries wrote:

On 17/09/2015 8:08 p.m., Tsantilas Christos wrote:


Currently Squid with SSL bumping only logs SSL errors that have caused
Squid to block traffic. It does not log SSL errors that are mimicked.
Logging a list with all encountered (and ignored) errors is interesting
for debugging and statistics reasons.

The new %ssl::<cert_errors format code logs the list of encountered SSL errors.

in cf.data.pre:

* Please leave a 1-line whitespace gap between these very long
descriptions. Same as you can see above the cert_issuer option description.


in src/format/Format.cc:

* please shuffle the switch case up above the two "not implemented"
existing ones.

  * Also leave whitespace around the new case code. The existing ones are
only squashed together since they both fall through to the same break.

* sslErrorName can be a static function local to this .cc
  - that avoids the need to touch Format.h


all of the above fixed in this new patch.





Amos




%ssl::>cert_subject The Subject field of the received client
 SSL certificate or a dash ('-') if Squid has
 received an invalid/malformed certificate or
 no certificate at all. Consider encoding the
 logged value because Subject often has spaces.
 
 		%ssl::>cert_issuer The Issuer field of the received client
 SSL certificate or a dash ('-') if Squid has
 received an invalid/malformed certificate or
 no certificate at all. Consider encoding the
 logged value because Issuer often has spaces.
 
+		%ssl::<cert_errors The list of SSL errors encountered while
+				establishing the SSL connection, or a dash
+				('-') if no errors were detected.
 
 	logformat squid      %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
 	logformat common     %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh
 	logformat combined   %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
 logformat referrer   %ts.%03tu %>a %{Referer}>h %ru
 logformat useragent  %>a [%tl] "%{User-Agent}>h"
 
 	NOTE: When the log_mime_hdrs directive is set to ON.
 		The squid, common and combined formats have a safely encoded copy
 		of the mime headers appended to each line within a pair of brackets.
 
 	NOTE: The common and combined formats are not quite true to the Apache definition.
 		The logs from Squid contain an extra status and hierarchy code appended.
 
 DOC_END
 
 NAME: access_log cache_access_log
 TYPE: access_log
 LOC: Config.Log.accesslogs

=== modified file 'src/format/ByteCode.h'
--- src/format/ByteCode.h	2015-01-13 07:25:36 +0000
+++ src/format/ByteCode.h	2015-09-16 09:16:57 +0000
@@ -200,40 +200,41 @@
 LFT_ICAP_REQ_ALL_HEADERS,
 
 LFT_ICAP_REP_HEADER,
 LFT_ICAP_REP_HEADER_ELEM,
 LFT_ICAP_REP_ALL_HEADERS,
 
 LFT_ICAP_TR_RESPONSE_TIME,
 LFT_ICAP_IO_TIME,
 LFT_ICAP_OUTCOME,
 LFT_ICAP_STATUS_CODE,
 #endif
 LFT_CREDENTIALS,
 
 #if USE_OPENSSL
 LFT_SSL_BUMP_MODE,
 LFT_SSL_USER_CERT_SUBJECT,
 LFT_SSL_USER_CERT_ISSUER,
 LFT_SSL_CLIENT_SNI,
 LFT_SSL_SERVER_CERT_SUBJECT,
 LFT_SSL_SERVER_CERT_ISSUER,
+LFT_SSL_SERVER_CERT_ERRORS,
 #endif
 
 LFT_NOTE,
 LFT_PERCENT,/* special string cases for escaped chars */
 
 // TODO assign better bytecode names and Token strings for these
 LFT_EXT_ACL_USER_CERT_RAW,
 LFT_EXT_ACL_USER_CERTCHAIN_RAW,
 LFT_EXT_ACL_USER_CERT,
 LFT_EXT_ACL_USER_CA_CERT,
 LFT_EXT_ACL_CLIENT_EUI48,
 LFT_EXT_ACL_CLIENT_EUI64,
 LFT_EXT_ACL_NAME,
 LFT_EXT_ACL_DATA
 
 } ByteCode_t;
 
 /// Quoting style for a format output.
 enum Quoting {
 LOG_QUOTE_NONE = 0,

=== modified file 'src/format/Format.cc'
--- src/format/Format.cc	2015-07-19 13:23:01 +0000
+++ src/format/Format.cc	2015-09-29 15:00:43 +0000
@@ -292,40 +292,49 @@
 *p = '\\';
 ++p;
 *p = 't';
 ++p;
 ++str;
 break;
 
 default:
 *p = '\\';
 ++p;
 *p = *str;
 ++p;
 ++str;
 break;
 }
 }
 
 *p = '\0';
 }
 
+#if USE_OPENSSL
+static char *
+sslErrorName(Ssl::ssl_error_t err, char *buf, size_t size)
+{
+snprintf(buf, size, "SSL_ERR=%d", err);
+return buf;
+}
+#endif
+
 void
Format::Format::assemble(MemBuf &mb, const AccessLogEntry::Pointer &al, int logSequenceNumber) const
 {
 char tmp[1024];
 String sb;
 
 for (Token *fmt = format; fmt != NULL; fmt = fmt->next) {   /* for each token */
 const char *out = NULL;
 int quote = 0;
 long int outint = 0;
 int doint = 0;
 int dofree = 0;
 int64_t outoff = 0;
 int dooff = 0;
 struct timeval outtv = {0, 0};
 int doMsec = 0;
 int doSec = 0;
 
 switch (fmt->type) {
 
@@ -871,44 +880,42 @@
 outoff = al->hier.bodyBytesRead;
 dooff = 1;
 }
 // else if hier.bodyBytesRead < 0 we did not have

[squid-dev] [PATCH] %ssl::<cert_errors format code

2015-09-17 Thread Tsantilas Christos


Currently Squid with SSL bumping only logs SSL errors that have caused 
Squid to block traffic. It does not log SSL errors that are mimicked. 
Logging a list with all encountered (and ignored) errors is interesting 
for debugging and statistics reasons.


The new %ssl::<cert_errors format code logs the list of encountered SSL errors.

		%ssl::>cert_subject The Subject field of the received client
 SSL certificate or a dash ('-') if Squid has
 received an invalid/malformed certificate or
 no certificate at all. Consider encoding the
 logged value because Subject often has spaces.
 
 		%ssl::>cert_issuer The Issuer field of the received client
 SSL certificate or a dash ('-') if Squid has
 received an invalid/malformed certificate or
 no certificate at all. Consider encoding the
 logged value because Issuer often has spaces.
 
+		%ssl::<cert_errors The list of SSL errors encountered while
+				establishing the SSL connection, or a dash
+				('-') if no errors were detected.
 
 	logformat squid      %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
 	logformat common     %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh
 	logformat combined   %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
 logformat referrer   %ts.%03tu %>a %{Referer}>h %ru
 logformat useragent  %>a [%tl] "%{User-Agent}>h"
 
 	NOTE: When the log_mime_hdrs directive is set to ON.
 		The squid, common and combined formats have a safely encoded copy
 		of the mime headers appended to each line within a pair of brackets.
 
 	NOTE: The common and combined formats are not quite true to the Apache definition.
 		The logs from Squid contain an extra status and hierarchy code appended.
 
 DOC_END
 
 NAME: access_log cache_access_log
 TYPE: access_log

=== modified file 'src/format/ByteCode.h'
--- src/format/ByteCode.h	2015-01-13 07:25:36 +0000
+++ src/format/ByteCode.h	2015-09-16 09:16:57 +0000
@@ -200,40 +200,41 @@
 LFT_ICAP_REQ_ALL_HEADERS,
 
 LFT_ICAP_REP_HEADER,
 LFT_ICAP_REP_HEADER_ELEM,
 LFT_ICAP_REP_ALL_HEADERS,
 
 LFT_ICAP_TR_RESPONSE_TIME,
 LFT_ICAP_IO_TIME,
 LFT_ICAP_OUTCOME,
 LFT_ICAP_STATUS_CODE,
 #endif
 LFT_CREDENTIALS,
 
 #if USE_OPENSSL
 LFT_SSL_BUMP_MODE,
 LFT_SSL_USER_CERT_SUBJECT,
 LFT_SSL_USER_CERT_ISSUER,
 LFT_SSL_CLIENT_SNI,
 LFT_SSL_SERVER_CERT_SUBJECT,
 LFT_SSL_SERVER_CERT_ISSUER,
+LFT_SSL_SERVER_CERT_ERRORS,
 #endif
 
 LFT_NOTE,
 LFT_PERCENT,/* special string cases for escaped chars */
 
 // TODO assign better bytecode names and Token strings for these
 LFT_EXT_ACL_USER_CERT_RAW,
 LFT_EXT_ACL_USER_CERTCHAIN_RAW,
 LFT_EXT_ACL_USER_CERT,
 LFT_EXT_ACL_USER_CA_CERT,
 LFT_EXT_ACL_CLIENT_EUI48,
 LFT_EXT_ACL_CLIENT_EUI64,
 LFT_EXT_ACL_NAME,
 LFT_EXT_ACL_DATA
 
 } ByteCode_t;
 
 /// Quoting style for a format output.
 enum Quoting {
 LOG_QUOTE_NONE = 0,

=== modified file 'src/format/Format.cc'
--- src/format/Format.cc	2015-07-19 13:23:01 +0000
+++ src/format/Format.cc	2015-09-16 20:18:59 +0000
@@ -871,44 +871,42 @@
 outoff = al->hier.bodyBytesRead;
 dooff = 1;
 }
 // else if hier.bodyBytesRead < 0 we did not have any data exchange with
 // a peer server so just print a "-" (eg requests served from cache,
 // or internal error messages).
 break;
 
 case LFT_SQUID_STATUS:
 out = al->cache.code.c_str();
 break;
 
 case LFT_SQUID_ERROR:
 if (al->request && al->request->errType != ERR_NONE)
 out = errorPageName(al->request->errType);
 break;
 
 case LFT_SQUID_ERROR_DETAIL:
 #if USE_OPENSSL
 if (al->request && al->request->errType == ERR_SECURE_CONNECT_FAIL) {
-if (! (out = Ssl::GetErrorName(al->request->errDetail))) {
-snprintf(tmp, sizeof(tmp), "SSL_ERR=%d", al->request->errDetail);
-out = tmp;
-}
+if (! (out = Ssl::GetErrorName(al->request->errDetail)))
+out = sslErrorName(al->request->errDetail, tmp, sizeof(tmp));
 } else
 #endif
 if (al->request && al->request->errDetail != ERR_DETAIL_NONE) {
 if (al->request->errDetail > ERR_DETAIL_START  &&
 al->request->errDetail < ERR_DETAIL_MAX)
 out = errorDetailName(al->request->errDetail);
 else {
 if (al->request->errDetail >= ERR_DETAIL_EXCEPTION_START)
 snprintf(tmp, sizeof(tmp), "%s=0x%X",
  errorDetailName(al->request->errDetail), (uint32_t) al->request->errDetail);
 else
 snprintf(tmp, sizeof(tmp), "%s=%d",
  errorDetailName(al->request->errDetail), al->request->errDetail);
 out = tmp;
 }
 }
 break;
 
 case LFT_SQUID_HIERARCHY:
 if (al->hier.ping.timedout)
@@ -1145,40 +1143,57 @@
 if (X509 *cert = al->cache.sslClientCert.get()) {

Re: [squid-dev] cope with OPENSSL_NO_SSL3 builds of (libre|open)ssl

2015-09-11 Thread Tsantilas Christos

On 09/10/2015 11:09 PM, Amos Jeffries wrote:

On 11/09/2015 4:50 a.m., Tsantilas Christos wrote:

On 09/10/2015 04:07 PM, Stuart Henderson wrote:

LibreSSL has removed SSLv3, and it can be disabled optionally in OpenSSL
by building with no_ssl3. The patch below allows building against such a


I suppose that LibreSSL wants to force us to use TLS instead of
SSLv3, so maybe it is better to try using TLS_method() instead of
SSLv23_method().

Also, at a very quick glance it looks like the LibreSSL TLS_method() is
equivalent to the OpenSSL TLSv1_2_method()...


Yes, maybe, and no:


Yes - LibreSSL is following the SSLv2/SSLv3 deprecation RFCs very
closely. Upcoming OpenSSL versions will be too, eventually. So those
using the very latest libraries get the very latest up-to-date
specification requirements applied.



SSLv2 was removed from the OpenSSL git repository too...




Maybe - If I'm reading the OpenSSL docs right the SSLv3_method was
producing a fixed specific method to negotiate SSLv3-only protocol. The
TLS_method is negotiating any TLS version. I think use of SSLv3_method
was a bug to begin with and TLS_method()/SSLv23_method() would be
correct now.


The SSLv23_method, if SSLv2 is disabled via options, will send a
full SSLv3/TLS message without SSLv2 backward compatibility.


It looks like using TLS_method()/SSLv23_method() is the correct approach.



[ IMO we should rename parseV23Hello to parseV2Hello to clarify that it
parses a v2 syntax hello. Avoid confusing with SSL*_method() vs
TLS_method() relevance. ]


This is not exactly correct.
The parseV23Hello actually parses an SSLv3 Hello message which is 
encapsulated in an SSLv2-compatible header.

Unfortunately there are clients which are still using it.





No - TLS_method() is *not* equivalent to TLSv1_2_method(). It is
equivalent to SSLv23_method() / SSLv23_server_method(). All of those may
or may not produce TLSv1_2_method() as their output depending on the
config settings.


This is true. TLS_method is something like "support up to TLSv1.2".



Assuming the patch is correct in swapping SSLv23_method(). Then it
should actually be swapping to TLS_method() with back-compat #if
wrappers using SSLv23_method(). As seen with uses of the
SSLv23_*_method() functions.


Yes.



[ I see the parseV23Hello() is using SSLv23_method() bare. That is a bug
waiting to happen when OpenSSL v1.2-1.3 hits us. Which should also be
fixed in this patch scope. ]


True.
The SSLv23_method() is used to parse an SSLv3/TLS hello message, which 
is encapsulated in an SSLv2 compatible SSL header.

We need to replace this method.
At a very quick glance, the only use of this method is to get the size of
each cipher in the HELLO message. I suppose it can be hardcoded.
I hope we have some time before SSLv23_method() is fully removed from
OpenSSL and the forked libraries.
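
As a reference for readers, the forward-compatible way to express that choice is a version-guarded method selection; this sketch assumes only the standard OpenSSL/LibreSSL API names discussed above:

    #include <openssl/ssl.h>

    // TLS_method() is the modern name (OpenSSL 1.1.0+, LibreSSL) for the
    // "negotiate any supported version" behavior; SSLv23_method() is the
    // legacy name for the same thing on older libraries.
    static const SSL_METHOD *
    compatTlsMethod()
    {
    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
        return TLS_method();
    #else
        return SSLv23_method();
    #endif
    }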






Amos



[squid-dev] default SSL client and server methods

2015-09-10 Thread Tsantilas Christos

Hi all,

  starting from Stuart Henderson's mail about LibreSSL, I saw that in
current squid trunk (but not squid-3.5), on many Linux systems we
are always using SSLv23 as the default method while connecting to servers or
to clients, without giving any alternative to the users.


The problem I am seeing is that we are using the TLS_server_method() and
TLS_client_method(), which are available only in openSSL-1.1.0 and later.
But many OSes are still using older openSSL libraries, so the users of
these systems are forced to use the SSLv23 method without any alternative.




Re: [squid-dev] [PATCH] FtpServer.cc:1024: reply != NULL assertion

2015-08-19 Thread Tsantilas Christos

Patch applied to trunk as rev.14230

I am attaching the squid-3.5 version of the patch.

On 08/19/2015 08:56 AM, Amos Jeffries wrote:

On 19/08/2015 4:08 a.m., Tsantilas Christos wrote:


Handle nil HttpReply pointer inside various handlers called from
Ftp::Server::handleReply(). For example, when the related StoreEntry
object is aborted, the client_side_reply.cc code may call the
Ftp::Server::handleReply() method with a nil reply pointer.

The Ftp::Server::handleReply() method itself cannot handle nil replies
because they are valid in many states. Only state-specific handlers know
whether they need the reply.

The Ftp::Server::handleReply() method is called [via Store] from Client
code. Thus, exceptions in handleReply() are handled by the Ftp::Client
job. That job does not have enough information to know whether the
client-to-Squid connection should be closed; the job keeps the
connection open. When the reply is nil, that open connection becomes
unusable, leading to more problems.

This patch fixes the Ftp::Server::handleReply() to handle exceptions,
including closing the connections in the case of an exception. It also
adds Must(reply) checks to check for nil HttpReply pointers where the
reply is required. Eventually, Store should start using async calls to
protect jobs waiting for Store updates. Meanwhile, this should help.

This is a Measurement Factory project.



+1. Please apply. Thank you.

Amos




FtpServer.cc:1024: reply != NULL assertion

Handle nil HttpReply pointer inside various handlers called from
Ftp::Server::handleReply(). For example, when the related StoreEntry
object is aborted, the client_side_reply.cc code may call the
Ftp::Server::handleReply() method with a nil reply pointer.

The Ftp::Server::handleReply() method itself cannot handle nil replies
because they are valid in many states. Only state-specific handlers know
whether they need the reply.

The Ftp::Server::handleReply() method is called [via Store] from Client code.
Thus, exceptions in handleReply() are handled by the Ftp::Client job. That job
does not have enough information to know whether the client-to-Squid connection
should be closed; the job keeps the connection open. When the reply is nil,
that open connection becomes unusable, leading to more problems.

This patch fixes the Ftp::Server::handleReply() to handle exceptions,
including closing the connections in the case of an exception. It also
adds Must(reply) checks to check for nil HttpReply pointers where the
reply is required. Eventually, Store should start using async calls to
protect jobs waiting for Store updates. Meanwhile, this should help.

This is a Measurement Factory project.

=== modified file 'src/servers/FtpServer.cc'
--- src/servers/FtpServer.cc	2015-01-13 09:13:49 +0000
+++ src/servers/FtpServer.cc	2015-08-18 15:09:20 +0000
@@ -768,55 +768,61 @@
 !context->http->al->reply && reply) {
 context->http->al->reply = reply;
 HTTPMSGLOCK(context->http->al->reply);
 }
 
 static ReplyHandler handlers[] = {
 NULL, // fssBegin
 NULL, // fssConnected
 Ftp::Server::handleFeatReply, // fssHandleFeat
 Ftp::Server::handlePasvReply, // fssHandlePasv
 Ftp::Server::handlePortReply, // fssHandlePort
 Ftp::Server::handleDataReply, // fssHandleDataRequest
 Ftp::Server::handleUploadReply, // fssHandleUploadRequest
 Ftp::Server::handleEprtReply,// fssHandleEprt
 Ftp::Server::handleEpsvReply,// fssHandleEpsv
 NULL, // fssHandleCwd
 NULL, // fssHandlePass
 NULL, // fssHandleCdup
 Ftp::Server::handleErrorReply // fssError
 };
-const Server &server = dynamic_cast<const Ftp::Server&>(*context->getConn());
-if (const ReplyHandler handler = handlers[server.master->serverState])
-(this->*handler)(reply, data);
-else
-writeForwardedReply(reply);
+try {
+const Server &server = dynamic_cast<const Ftp::Server&>(*context->getConn());
+if (const ReplyHandler handler = handlers[server.master->serverState])
+(this->*handler)(reply, data);
+else
+writeForwardedReply(reply);
+} catch (const std::exception &e) {
+callException(e);
+throw TexcHere(e.what());
+}
 }
 
 void
 Ftp::Server::handleFeatReply(const HttpReply *reply, StoreIOBuffer)
 {
 if (getCurrentContext()->http->request->errType != ERR_NONE) {
 writeCustomReply(502, "Server does not support FEAT", reply);
 return;
 }
 
+Must(reply);
 HttpReply::Pointer featReply = Ftp::HttpReplyWrapper(211, "End", Http::scNoContent, 0);
 HttpHeader const &serverReplyHeader = reply->header;
 
 HttpHeaderPos pos = HttpHeaderInitPos;
 bool hasEPRT = false;
 bool hasEPSV = false;
 int prependSpaces = 1;
 
 featReply->header.putStr(HDR_FTP_PRE, "\"211-Features:\"");
 const int scode = serverReplyHeader.getInt(HDR_FTP_STATUS);
 if (scode == 211) {
 while (const

Re: [squid-dev] [PATCH] Ignore impossible SSL bumping actions, as intended and documented / bug 4237 fix

2015-08-19 Thread Tsantilas Christos

Patch applied to trunk as rev.14227

I am also attaching the squid-3.5 version of the patch. The trunk patch 
does not apply cleanly.



On 08/15/2015 03:21 AM, Amos Jeffries wrote:

On 15/08/2015 2:41 a.m., Tsantilas Christos wrote:

Hi all,
  The wiki pages are fixed.
Is it OK to commit this patch?



+1 from me.

Please remove the "ACLChecklist:" part from the new debugs in
ACLChecklist::bannedAction() during commit.

Amos
Ignore impossible SSL bumping actions, as intended and documented.

According to Squid wiki: "Some actions are not possible during
certain processing steps. During a given processing step, Squid
ignores ssl_bump lines with impossible actions." The distributed
squid.conf.documented has similar text.

Current Squid violates the above rule. Squid considers all actions,
and if an impossible action matches first, Squid guesses what the
true configuration intent was. Squid may guess wrong. For example,
depending on the transaction, Squid may guess that a matching
"stare" or "peek" action during bumping step3 means "bump", breaking
peeked connections that cannot be bumped.

This unintended but gross configuration semantics violation remained
invisible until bug 4237, probably because most configurations in
most environments either worked around the problem (where admins
experimented to make it work) or did not result in visible
errors (where Squid guesses did not lead to terminated connections).

While configuration workarounds are possible, the current
implementation is very wrong and leads to overly complex and, hence,
often wrong configurations. It is also nearly impossible to document
accurately because the guessing logic depends on too many factors.

To fix this, we add an action filtering/banning mechanism to Squid
ACL code. This mechanism is then used to:
  - ban client-first and server-first on bumping steps 2 and 3.
  - ban peek and stare actions on bumping step 3.
  - ban splice on step3 if stare is selected on step2 and
Squid cannot splice the SSL connection any more.
  - ban bump on step3 if peek is selected on step2 and
Squid cannot bump the connection any more.

The same action filtering mechanism may be useful for other
ACL-driven directives with state-dependent custom actions.

This change adds a runtime performance overhead of a single virtual
method call to all ORed ACLs that do not use banned actions.
That method itself just returns false unless the ACL represents
a whole directive rule. In the latter case, an std::vector size()
is also checked. It is possible to avoid this overhead by adding
a boolean "I may ban actions" flag to Acl::OrNode, but we decided
the small performance harm is not worth the extra code to set
that flag.

This is a Measurement Factory project.
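
To make the mechanism concrete, here is a small self-contained toy model of the idea; these are not Squid's actual classes or signatures, just an illustration of "skip rules whose action is currently banned":

    #include <string>
    #include <vector>

    struct ToyRule {
        std::string action;        // e.g. "peek", "stare", "bump"
        bool (*matches)(int step); // stand-in for the ACL match
    };

    static bool
    evaluate(const std::vector<ToyRule> &rules,
             const std::vector<std::string> &bannedActions,
             int step, std::string &selected)
    {
        for (const auto &rule : rules) {
            bool banned = false;
            for (const auto &b : bannedActions)
                banned = banned || (b == rule.action);
            if (banned)
                continue; // "ignore ssl_bump lines with impossible actions"
            if (rule.matches(step)) {
                selected = rule.action;
                return true;
            }
        }
        return false; // no applicable rule matched
    }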

=== modified file 'src/acl/Acl.h'
--- src/acl/Acl.h	2015-01-13 09:13:49 +0000
+++ src/acl/Acl.h	2015-08-19 08:42:04 +0000
@@ -150,52 +150,56 @@
 virtual bool requiresReply() const;
 };
 
 /// \ingroup ACLAPI
 typedef enum {
 // Authorization ACL result states
 ACCESS_DENIED,
 ACCESS_ALLOWED,
 ACCESS_DUNNO,
 
 // Authentication ACL result states
 ACCESS_AUTH_REQUIRED,// Missing Credentials
 } aclMatchCode;
 
 /// \ingroup ACLAPI
 /// ACL check answer; TODO: Rename to Acl::Answer
 class allow_t
 {
 public:
 // not explicit: allow aclMatchCode to allow_t conversions (for now)
-allow_t(const aclMatchCode aCode): code(aCode), kind(0) {}
+allow_t(const aclMatchCode aCode, int aKind = 0): code(aCode), kind(aKind) {}
 
 allow_t(): code(ACCESS_DUNNO), kind(0) {}
 
 bool operator ==(const aclMatchCode aCode) const {
 return code == aCode;
 }
 
 bool operator !=(const aclMatchCode aCode) const {
 return !(*this == aCode);
 }
 
+bool operator ==(const allow_t &allow) const {
+return code == allow.code && kind == allow.kind;
+}
+
 operator aclMatchCode() const {
 return code;
 }
 
 aclMatchCode code; /// ACCESS_* code
 int kind; /// which custom access list verb matched
 };
 
 inline std::ostream &
 operator <<(std::ostream &o, const allow_t a)
 {
 switch (a) {
 case ACCESS_DENIED:
 o << "DENIED";
 break;
 case ACCESS_ALLOWED:
 o << "ALLOWED";
 break;
 case ACCESS_DUNNO:
 o << "DUNNO";

=== modified file 'src/acl/BoolOps.cc'
--- src/acl/BoolOps.cc	2015-01-13 09:13:49 +0000
+++ src/acl/BoolOps.cc	2015-08-19 08:42:04 +0000
@@ -98,47 +98,55 @@
 Acl::AndNode::parse()
 {
 // Not implemented: AndNode cannot be configured directly. See Acl::AllOf.
 assert(false);
 }
 
 /* Acl::OrNode */
 
 char const *
 Acl::OrNode::typeString() const
 {
 return "any-of";
 }
 
 ACL *
 Acl::OrNode::clone() const
 {
 return new OrNode;
 }
 
+bool
+Acl::OrNode::bannedAction(ACLChecklist *, Nodes::const_iterator) const
+{
+return false;
+}
+
 int
 Acl::OrNode::doMatch(ACLChecklist *checklist, Nodes::const_iterator start) const
 {
 lastMatch_ = nodes.end();
 
 // find

[squid-dev] [PATCH] FtpServer.cc:1024: reply != NULL assertion

2015-08-18 Thread Tsantilas Christos


Handle nil HttpReply pointer inside various handlers called from 
Ftp::Server::handleReply(). For example, when the related StoreEntry 
object is aborted, the client_side_reply.cc code may call the 
Ftp::Server::handleReply() method with a nil reply pointer.


The Ftp::Server::handleReply() method itself cannot handle nil replies 
because they are valid in many states. Only state-specific handlers know 
whether they need the reply.


The Ftp::Server::handleReply() method is called [via Store] from Client 
code. Thus, exceptions in handleReply() are handled by the Ftp::Client 
job. That job does not have enough information to know whether the 
client-to-Squid connection should be closed; the job keeps the 
connection open. When the reply is nil, that open connection becomes 
unusable, leading to more problems.


This patch fixes the Ftp::Server::handleReply() to handle exceptions, 
including closing the connections in the case of an exception. It also 
adds Must(reply) checks to check for nil HttpReply pointers where the 
reply is required. Eventually, Store should start using async calls to 
protect jobs waiting for Store updates. Meanwhile, this should help.


This is a Measurement Factory project.

FtpServer.cc:1024: reply != NULL assertion

Handle nil HttpReply pointer inside various handlers called from
Ftp::Server::handleReply(). For example, when the related StoreEntry
object is aborted, the client_side_reply.cc code may call the 
Ftp::Server::handleReply() method with a nil reply pointer.

The Ftp::Server::handleReply() method itself cannot handle nil replies
because they are valid in many states. Only state-specific handlers know
whether they need the reply.

The Ftp::Server::handleReply() method is called [via Store] from Client code.
Thus, exceptions in handleReply() are handled by the Ftp::Client job. That job
does not have enough information to know whether the client-to-Squid connection
should be closed; the job keeps the connection open. When the reply is nil,
that open connection becomes unusable, leading to more problems.

This patch fixes the Ftp::Server::handleReply() to handle exceptions,
including closing the connections in the case of an exception. It also
adds Must(reply) checks to check for nil HttpReply pointers where the 
reply is required. Eventually, Store should start using async calls to
protect jobs waiting for Store updates. Meanwhile, this should help.

This is a Measurement Factory project.

=== modified file 'src/servers/FtpServer.cc'
--- src/servers/FtpServer.cc	2015-08-04 19:57:07 +0000
+++ src/servers/FtpServer.cc	2015-08-17 14:47:32 +0000
@@ -768,55 +768,61 @@
 !context->http->al->reply && reply) {
 context->http->al->reply = reply;
 HTTPMSGLOCK(context->http->al->reply);
 }
 
 static ReplyHandler handlers[] = {
 NULL, // fssBegin
 NULL, // fssConnected
 Ftp::Server::handleFeatReply, // fssHandleFeat
 Ftp::Server::handlePasvReply, // fssHandlePasv
 Ftp::Server::handlePortReply, // fssHandlePort
 Ftp::Server::handleDataReply, // fssHandleDataRequest
 Ftp::Server::handleUploadReply, // fssHandleUploadRequest
 Ftp::Server::handleEprtReply,// fssHandleEprt
 Ftp::Server::handleEpsvReply,// fssHandleEpsv
 NULL, // fssHandleCwd
 NULL, // fssHandlePass
 NULL, // fssHandleCdup
 Ftp::Server::handleErrorReply // fssError
 };
-const Server &server = dynamic_cast<const Ftp::Server&>(*context->getConn());
-if (const ReplyHandler handler = handlers[server.master->serverState])
-(this->*handler)(reply, data);
-else
-writeForwardedReply(reply);
+try {
+const Server &server = dynamic_cast<const Ftp::Server&>(*context->getConn());
+if (const ReplyHandler handler = handlers[server.master->serverState])
+(this->*handler)(reply, data);
+else
+writeForwardedReply(reply);
+} catch (const std::exception &e) {
+callException(e);
+throw TexcHere(e.what());
+}
 }
 
 void
 Ftp::Server::handleFeatReply(const HttpReply *reply, StoreIOBuffer)
 {
 if (getCurrentContext()->http->request->errType != ERR_NONE) {
 writeCustomReply(502, "Server does not support FEAT", reply);
 return;
 }
 
+Must(reply);
 HttpReply::Pointer featReply = Ftp::HttpReplyWrapper(211, "End", Http::scNoContent, 0);
 HttpHeader const &serverReplyHeader = reply->header;
 
 HttpHeaderPos pos = HttpHeaderInitPos;
 bool hasEPRT = false;
 bool hasEPSV = false;
 int prependSpaces = 1;
 
 featReply->header.putStr(Http::HdrType::FTP_PRE, "\"211-Features:\"");
 const int scode = serverReplyHeader.getInt(Http::HdrType::FTP_STATUS);
 if (scode == 211) {
 while (const HttpHeaderEntry *e = serverReplyHeader.getEntry(pos)) {
 if (e->id == Http::HdrType::FTP_PRE) {
 // assume RFC 2389 FEAT response format, quoted by Squid:
 

Re: [squid-dev] [PATCH] Ignore impossible SSL bumping actions, as intended and documented / bug 4237 fix

2015-08-11 Thread Tsantilas Christos

On 08/11/2015 07:30 AM, Amos Jeffries wrote:

On 11/08/2015 3:54 a.m., Tsantilas Christos wrote:

According to Squid wiki: "Some actions are not possible during certain
processing steps. During a given processing step, Squid ignores ssl_bump
lines with impossible actions." The distributed squid.conf.documented
has similar text.

Current Squid violates the above rule. Squid considers all actions, and
if an impossible action matches first, Squid guesses what the true
configuration intent was. Squid may guess wrong. For example, depending
on the transaction, Squid may guess that a matching "stare" or "peek"
action during bumping step3 means "bump", breaking peeked connections
that cannot be bumped.

This unintended but gross configuration semantics violation remained
invisible until bug 4237, probably because most configurations in most
environments either worked around the problem (where admins experimented
to make it work) or did not result in visible errors (where Squid
guesses did not lead to terminated connections).



... and mind this mess and admin confusion is a direct (and predicted)
result of conflating one single access control with all of the TLS
related authentication + authorization + processing control logics.

Thanks for doing this patch anyway. Adjusting allow_t like this has been
on the TODO list for auth related issues a long time before ssl-bump
existed.



While configuration workarounds are possible, the current implementation
is very wrong and leads to overly complex and, hence, often wrong
configurations. It is also nearly impossible to document accurately
because the guessing logic depends on too many factors.

To fix this, we add an action filtering/banning mechanism to Squid ACL
code. This mechanism is then used to:
   - ban client-first and server-first on bumping steps 2 and 3.


How about we just remove client-first entirely?
  It has major security issues for which CVEs already exist. The one
remaining use-case AFAICT is malware using Squid.


If we decide to remove client-first, we should implement it as a separate
patch. It requires some work.





The attempted seamless upgrade from old configs was an outright failure.
So I think we can shorten the deprecation time without additional pain.



   - ban peek and stare actions on bumping step 3.
   - ban splice on step3 if stare is selected on step2 and
 Squid cannot splice the SSL connection any more.
   - ban bump on step3 if peek is selected on step2 and
 Squid cannot bump the connection any more.



What about the other documented actions:
  * reconnect at steps 1 & 2
  * none at step 2 and 3


The reconnect is not yet implemented.
About the none action, you are right, it should be handled.



(http://wiki.squid-cache.org/Features/SslPeekAndSplice)


The same action filtering mechanism may be useful for other ACL-driven
directives with state-dependent custom actions.


So far that is only AUTH_REQUIRED on all non-http_access directives. And
DUNNO results on fast-check access controls.

The former would be a good (quick?) followup patch. The latter will
require careful testing and documentation.



This change adds a runtime performance overhead of a single virtual
method call to all ORed ACLs that do not use banned actions. That method
itself just returns false unless the ACL represents a whole directive
rule. In the latter case, an std::vector size() is also checked. It is
possible to avoid this overhead by adding a boolean "I may ban actions"
flag to Acl::OrNode, but we decided the small performance harm is not
worth the extra code to set that flag.


Agreed.


So the audit:

in src/acl/Tree.cc:

* src/Checklist.h and include/Checklist.h do not exist.
  - did you mean #include "acl/Checklist.h" ?


fixed.




in src/cache_cf.cc:

* revert all changes


oops, sorry, forgot to remove before post the patch...



NP: you may also want to test how ssl_bump works when the admin has
configured tls_outgoing_options disable or a min-version value higher
than the server will accept.


Squid does not start, with a FATAL error:
FATAL: No valid signing SSL certificate configured for HTTP_port [::]:8082




in src/client_side.cc:

* not banning other non-available options (see comment above about none
and reconnect).


ok



* looks like httpsSslBumpStep2AccessCheckDone() is still making bad
assumptions about "none" action == "splice", when it's documented as
being not available at steps 2 & 3.


fixed in this patch


  - also, the reconnect action is actually performed at step2, when it is
documented as not available until step3?


reconnect is not yet implemented.




in src/ssl/PeerConnector.cc:

* Ssl::PeekingPeerConnector::checkForPeekAndSpliceDone() is another
point doing "none" == "splice" when PeekingConnector is != step1, right?
being a server/peer job it should only be step2 or step3.


fixed.



* the Must() will throw on receiving final action reconnect which is
documented as being most relevant here on peer connections.


reconnect is not yet implemented

Re: [squid-dev] [PATCH] squid SSL subsystem is not initialized correctly

2015-08-10 Thread Tsantilas Christos

This patch looks OK

On 08/10/2015 05:12 PM, Amos Jeffries wrote:

On 10/08/2015 11:29 p.m., Tsantilas Christos wrote:

On 08/06/2015 02:55 PM, Amos Jeffries wrote:

On 6/08/2015 9:54 p.m., Tsantilas Christos wrote:

Hi all,

Currently the SSL subsystem is not initialized correctly in squid
trunk. This is because of Security::ProxyOutgoingConfig.encryptTransport,
which is always false, so the client SSL CTX object is never built. As a
result squid may not start if SSL is configured. I am attaching a small
patch I am using in my squid trees to work with SSL.


This always-enabled code is not compatible with the possible admin
configuration:

   tls_outgoing_options disable


Can you please try this instead:

   Security::PeerOptions::parse(const char *token)
   {
   if (strncmp(token, "disable", 7) == 0) {
   clear();
+return;
   } else if (strncmp(token, "cert=", 5) == 0) {
...
   } else {
   debugs(3, DBG_CRITICAL, "ERROR: Unknown TLS option '" ...
+return;
   }
+
+encryptTransport = true;
   }


If that works you can go through and also remove uses of
secure.encryptTransport = true from adaptation/ServiceConfig.cc and
cache_cf.cc where it is set next to a call to secure.parse()
... but not the other one where it is set to always-on for https_port.


This will not work, because it is not required for someone to
configure any of the sslproxy options for the SSL client to work.
Squid can always work with the default options.


Did you test it?

The default squid.conf parser always sets tls_outgoing_options
tls-min-version=1.0, which should auto-enable DIRECT outgoing; an
explicit disable is then required to turn it off again.


http_port ... protocol=HTTPS and https_port force
encryptTransport=true; explicitly based on the expected protocol. So
it is either enabled by the parse() call when TLS options are used, or
forced on anyway later when the protocol is validated.


icaps:// services also explicitly set encryptTransport=true;
based on the 's' in the service URI scheme.

The cache_peer directive requires at minimum the ssl option to be configured,
and calls parse(). I see that simple case is passing an empty token, which
gets reported as an unknown option.


With the attached patch TLS should be:
* default-on for all https_port, icaps:// services, and outgoing
https:// traffic.
* manually enabled on cache_peer and http_port.
* manually disabled on outgoing https:// traffic.
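
Illustrative squid.conf lines for the behaviors listed above (the host name is hypothetical; cache_peer used the ssl option in this era):

    # outgoing https:// traffic: TLS is default-on; disabling is explicit
    tls_outgoing_options disable

    # cache_peer: TLS must be enabled manually
    cache_peer peer.example.com parent 3128 0 ssl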




The Security::ProxyOutgoingConfig.encryptTransport must always be
true unless the SSL client is disabled.


Yes. And the default config should see to that happening. Which is why I
asked if you could try the change.




In previous squid releases it was not possible to disable the SSL client,
but now it looks like this can be done using the
   tls_outgoing_options disable


Yes, that is new in Squid-4. Along with some small non-OpenSSL HTTPS
support (not much yet, but growing).




Maybe we need to add a parameter to the Security::PeerOptions constructor,
to define whether SSL is enabled by default (for example, in the case of
ProxyOutgoingConfig) or not (for example, in HTTP ports configuration).



That would be messy because ProxyOutgoingConfig is a global and the
others are all explicitly constructed.

Amos





[squid-dev] [PATCH] Ignore impossible SSL bumping actions, as intended and documented / bug 4237 fix

2015-08-10 Thread Tsantilas Christos
According to Squid wiki: "Some actions are not possible during certain
processing steps. During a given processing step, Squid ignores ssl_bump
lines with impossible actions." The distributed squid.conf.documented
has similar text.


Current Squid violates the above rule. Squid considers all actions, and 
if an impossible action matches first, Squid guesses what the true 
configuration intent was. Squid may guess wrong. For example, depending 
on the transaction, Squid may guess that a matching "stare" or "peek"
action during bumping step3 means "bump", breaking peeked connections
that cannot be bumped.


This unintended but gross configuration semantics violation remained 
invisible until bug 4237, probably because most configurations in most 
environments either worked around the problem (where admins experimented 
to make it work) or did not result in visible errors (where Squid 
guesses did not lead to terminated connections).


While configuration workarounds are possible, the current 
implementation is very wrong and leads to overly complex and, hence, 
often wrong configurations. It is also nearly impossible to document 
accurately because the guessing logic depends on too many factors.


To fix this, we add an action filtering/banning mechanism to Squid ACL 
code. This mechanism is then used to:

  - ban client-first and server-first on bumping steps 2 and 3.
  - ban peek and stare actions on bumping step 3.
  - ban splice on step3 if stare is selected on step2 and
Squid cannot splice the SSL connection any more.
  - ban bump on step3 if peek is selected on step2 and
Squid cannot bump the connection any more.

The same action filtering mechanism may be useful for other ACL-driven 
directives with state-dependent custom actions.


This change adds a runtime performance overhead of a single virtual 
method call to all ORed ACLs that do not use banned actions. That method 
itself just returns false unless the ACL represents a whole directive 
rule. In the latter case, an std::vector size() is also checked. It is 
possible to avoid this overhead by adding a boolean "I may ban actions"
flag to Acl::OrNode, but we decided the small performance harm is not 
worth the extra code to set that flag.


This is a Measurement Factory project
Ignore impossible SSL bumping actions, as intended and documented.

According to Squid wiki: "Some actions are not possible during
certain processing steps. During a given processing step, Squid
ignores ssl_bump lines with impossible actions." The distributed
squid.conf.documented has similar text.

Current Squid violates the above rule. Squid considers all actions,
and if an impossible action matches first, Squid guesses what the
true configuration intent was. Squid may guess wrong. For example,
depending on the transaction, Squid may guess that a matching
"stare" or "peek" action during bumping step3 means "bump", breaking
peeked connections that cannot be bumped.

This unintended but gross configuration semantics violation remained
invisible until bug 4237, probably because most configurations in
most environments either worked around the problem (where admins
experimented to make it work) or did not result in visible
errors (where Squid guesses did not lead to terminated connections).

While configuration workarounds are possible, the current 
implementation is very wrong and leads to overly complex and, hence,
often wrong configurations. It is also nearly impossible to document
accurately because the guessing logic depends on too many factors.

To fix this, we add an action filtering/banning mechanism to Squid
ACL code. This mechanism is then used to:
  - ban client-first and server-first on bumping steps 2 and 3.
  - ban peek and stare actions on bumping step 3.
  - ban splice on step3 if stare is selected on step2 and
Squid cannot splice the SSL connection any more.
  - ban bump on step3 if peek is selected on step2 and
Squid cannot bump the connection any more.

The same action filtering mechanism may be useful for other
ACL-driven directives with state-dependent custom actions.

This change adds a runtime performance overhead of a single virtual
method call to all ORed ACLs that do not use banned actions.
That method itself just returns false unless the ACL represents
a whole directive rule. In the latter case, an std::vector size()
is also checked. It is possible to avoid this overhead by adding
a boolean "I may ban actions" flag to Acl::OrNode, but we decided
the small performance harm is not worth the extra code to set
that flag.

This is a Measurement Factory project.

=== modified file 'src/acl/Acl.h'
--- src/acl/Acl.h	2015-01-13 07:25:36 +0000
+++ src/acl/Acl.h	2015-08-10 15:51:03 +0000
@@ -149,52 +149,56 @@
 virtual bool requiresReply() const;
 };
 
 /// \ingroup ACLAPI
 typedef enum {
 // Authorization ACL result states
 ACCESS_DENIED,
 ACCESS_ALLOWED,
 ACCESS_DUNNO,
 
 // Authentication ACL result states
 

Re: [squid-dev] [PATCH] squid SSL subsystem is not initialized correctly

2015-08-10 Thread Tsantilas Christos

On 08/06/2015 02:55 PM, Amos Jeffries wrote:

On 6/08/2015 9:54 p.m., Tsantilas Christos wrote:

Hi all,

Currently the SSL subsystem is not initialized correctly in squid trunk.
This is because of Security::ProxyOutgoingConfig.encryptTransport,
which is always false, so the client SSL CTX object is never built. As a
result squid may not start if SSL is configured. I am attaching a small
patch I am using in my squid trees to work with SSL.


This always-enabled code is not compatible with the possible admin
configuration:

  tls_outgoing_options disable


Can you please try this instead:

  Security::PeerOptions::parse(const char *token)
  {
   if (strncmp(token, "disable", 7) == 0) {
   clear();
+return;
   } else if (strncmp(token, "cert=", 5) == 0) {
...
   } else {
   debugs(3, DBG_CRITICAL, "ERROR: Unknown TLS option '" ...
+return;
  }
+
+encryptTransport = true;
  }


If that works you can go through and also remove uses of
secure.encryptTransport = true from adaptation/ServiceConfig.cc and
cache_cf.cc where it is set next to a call to secure.parse()
... but not the other one where it is set to always-on for https_port.


This will not work, because it is not required for someone to
configure any of the sslproxy options for the SSL client to work.

Squid can always work with the default options.

The Security::ProxyOutgoingConfig.encryptTransport must always be
true unless the SSL client is disabled.


In previous squid releases it was not possible to disable the SSL client,
but now it looks like this can be done using the

  tls_outgoing_options disable

Maybe we need to add a parameter to the Security::PeerOptions constructor,
to define whether SSL is enabled by default (for example, in the case of
ProxyOutgoingConfig) or not (for example, in HTTP ports configuration).





If the final result still works, please commit.

Amos





[squid-dev] [PATCH] squid SSL subsystem is not initialized correctly

2015-08-06 Thread Tsantilas Christos

Hi all,

   Currently the SSL subsystem is not initialized correctly in squid
trunk. This is because of
Security::ProxyOutgoingConfig.encryptTransport, which is always false, so
the client SSL CTX object is never built. As a result squid may not start
if SSL is configured. I am attaching a small patch I am using in my
squid trees to work with SSL.
=== modified file 'src/cache_cf.cc'
--- src/cache_cf.cc	2015-08-04 21:04:09 +0000
+++ src/cache_cf.cc	2015-08-06 09:49:07 +0000
@@ -848,47 +848,46 @@
 #endif
 }
 } else {
 Config2.effectiveUserID = geteuid();
 Config2.effectiveGroupID = getegid();
 }
 
 if (NULL != Config.effectiveGroup) {
 
 struct group *grp = getgrnam(Config.effectiveGroup);
 
 if (NULL == grp) {
 fatalf("getgrnam failed to find groupid for effective group '%s'",
Config.effectiveGroup);
 return;
 }
 
 Config2.effectiveGroupID = grp->gr_gid;
 }
 
-if (Security::ProxyOutgoingConfig.encryptTransport) {
-debugs(3, DBG_IMPORTANT, "Initializing https:// proxy context");
-Config.ssl_client.sslContext = Security::ProxyOutgoingConfig.createClientContext(false);
-if (!Config.ssl_client.sslContext) {
-debugs(3, DBG_CRITICAL, "ERROR: Could not initialize https:// proxy context");
-self_destruct();
-}
+debugs(3, DBG_IMPORTANT, "Initializing https:// proxy context");
+Security::ProxyOutgoingConfig.encryptTransport = true;
+Config.ssl_client.sslContext = Security::ProxyOutgoingConfig.createClientContext(false);
+if (!Config.ssl_client.sslContext) {
+debugs(3, DBG_CRITICAL, "ERROR: Could not initialize https:// proxy context");
+self_destruct();
+}
 
 for (CachePeer *p = Config.peers; p != NULL; p = p->next) {
 
 // default value for ssldomain= is the peer host/IP
 if (p->secure.sslDomain.isEmpty())
 p->secure.sslDomain = p->host;
 
 if (p->secure.encryptTransport) {
 debugs(3, DBG_IMPORTANT, "Initializing cache_peer " << p->name << " TLS context");
 p->sslContext = p->secure.createClientContext(true);
 if (!p->sslContext) {
 debugs(3, DBG_CRITICAL, "ERROR: Could not initialize cache_peer " << p->name << " TLS context");
 self_destruct();
 }
 }
 }
 
 #if USE_OPENSSL
 for (AnyP::PortCfgPointer s = HttpPortList; s != NULL; s = s-next) {



Re: [squid-dev] [PATCH] received_encrypted ACL

2015-07-21 Thread Tsantilas Christos

On 07/21/2015 01:25 PM, Amos Jeffries wrote:


No. Christos wrote this:

NOTE: Currently there is not any mechanism to indicate if a cached
object came from secure source or not, so we assume that all hits for
secure requests are secure too.


The cache hits rely on the request markings to determine the HIT
matching. In this case we have a https:// (secure, TLS-received) marked
request being re-written with non-TLS URL and delivered a HIT originated
from a non-TLS server.



If the HITs are the problem, that is something that can be solved. For the
application with the ICAP services described, it is simply not needed.

Handling the HITs can be marked as a TODO in this patch.




[squid-dev] [PATCH] received_encrypted ACL

2015-07-17 Thread Tsantilas Christos

This patch adds the received_encrypted ACL.

The new received_encrypted ACL matches transactions where all HTTP 
messages were received over TLS or SSL transport connections, including 
messages received from ICAP servers.


Some eCAP services receive data from unencrypted sources. Some eCAP 
services are secure, but we assume that all are not secure until we 
add a configuration option to mark secure eCAP services.


Use case: Sending everything to Secure ICAP services increases 
adaptation performance overhead. Folks want to send received_encrypted 
transactions and only those transactions to Secure ICAP services.


NOTE: Currently there is no mechanism to indicate whether a cached
object came from a secure source or not, so we assume that all hits for
secure requests are secure too.


This is a Measurement Factory project.
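
A hypothetical squid.conf fragment for the use case above (the service name is illustrative):

    # send only all-encrypted transactions to the Secure ICAP service
    acl fromTLS received_encrypted
    adaptation_access secureIcapService allow fromTLS
    adaptation_access secureIcapService deny all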
Add received_encrypted ACL

The new received_encrypted ACL matches transactions where all HTTP
messages were received over TLS or SSL transport connections, including
messages received from ICAP servers.

Some eCAP services receive data from unencrypted sources. Some eCAP
services are secure, but we assume that all are not secure until we
add a configuration option to mark secure eCAP services.

This is a Measurement Factory project.

=== modified file 'src/AclRegs.cc'
--- src/AclRegs.cc	2015-04-10 08:54:13 +0000
+++ src/AclRegs.cc	2015-06-15 15:45:43 +0000
@@ -43,40 +43,41 @@
 #include "acl/HierCode.h"
 #include "acl/HierCodeData.h"
 #include "acl/HttpHeaderData.h"
 #include "acl/HttpRepHeader.h"
 #include "acl/HttpReqHeader.h"
 #include "acl/HttpStatus.h"
 #include "acl/IntRange.h"
 #include "acl/Ip.h"
 #include "acl/LocalIp.h"
 #include "acl/LocalPort.h"
 #include "acl/MaxConnection.h"
 #include "acl/Method.h"
 #include "acl/MethodData.h"
 #include "acl/MyPortName.h"
 #include "acl/Note.h"
 #include "acl/NoteData.h"
 #include "acl/PeerName.h"
 #include "acl/Protocol.h"
 #include "acl/ProtocolData.h"
 #include "acl/Random.h"
+#include "acl/ReceivedEncrypted.h"
 #include "acl/Referer.h"
 #include "acl/RegexData.h"
 #include "acl/ReplyHeaderStrategy.h"
 #include "acl/ReplyMimeType.h"
 #include "acl/RequestHeaderStrategy.h"
 #include "acl/RequestMimeType.h"
 #include "acl/SourceAsn.h"
 #include "acl/SourceDomain.h"
 #include "acl/SourceIp.h"
 #include "acl/SquidError.h"
 #include "acl/SquidErrorData.h"
 #if USE_OPENSSL
 #include "acl/Certificate.h"
 #include "acl/CertificateData.h"
 #include "acl/ServerName.h"
 #include "acl/SslError.h"
 #include "acl/SslErrorData.h"
 #endif
 #include "acl/Strategised.h"
 #include "acl/Strategy.h"
@@ -213,20 +214,22 @@
 ACL::Prototype ACLTag::RegistryProtoype(ACLTag::RegistryEntry_, "tag");
 ACLStrategised<const char *> ACLTag::RegistryEntry_(new ACLStringData, ACLTagStrategy::Instance(), "tag");
 
 ACL::Prototype Acl::AnyOf::RegistryProtoype(Acl::AnyOf::RegistryEntry_, "any-of");
 Acl::AnyOf Acl::AnyOf::RegistryEntry_;
 
 ACL::Prototype Acl::AllOf::RegistryProtoype(Acl::AllOf::RegistryEntry_, "all-of");
 Acl::AllOf Acl::AllOf::RegistryEntry_;
 
 ACL::Prototype ACLNote::RegistryProtoype(ACLNote::RegistryEntry_, "note");
 ACLStrategised<HttpRequest *> ACLNote::RegistryEntry_(new ACLNoteData, ACLNoteStrategy::Instance(), "note");
 
 #if USE_ADAPTATION
 ACL::Prototype ACLAdaptationService::RegistryProtoype(ACLAdaptationService::RegistryEntry_, "adaptation_service");
 ACLStrategised<const char *> ACLAdaptationService::RegistryEntry_(new ACLAdaptationServiceData, ACLAdaptationServiceStrategy::Instance(), "adaptation_service");
 #endif
 
 ACL::Prototype ACLSquidError::RegistryProtoype(ACLSquidError::RegistryEntry_, "squid_error");
 ACLStrategised<err_type> ACLSquidError::RegistryEntry_(new ACLSquidErrorData, ACLSquidErrorStrategy::Instance(), "squid_error");
 
+ACL::Prototype ACLReceivedEncrypted::RegistryProtoype(ACLReceivedEncrypted::RegistryEntry_, "received_encrypted");
+ACLReceivedEncrypted ACLReceivedEncrypted::RegistryEntry_("received_encrypted");

=== modified file 'src/HttpMsg.cc'
--- src/HttpMsg.cc	2015-04-27 05:31:56 +
+++ src/HttpMsg.cc	2015-06-26 15:49:21 +
@@ -6,41 +6,42 @@
  * Please see the COPYING and CONTRIBUTORS files for details.
  */
 
 /* DEBUG: section 74    HTTP Message */
 
 #include "squid.h"
 #include "Debug.h"
 #include "HttpHeaderTools.h"
 #include "HttpMsg.h"
 #include "MemBuf.h"
 #include "mime_header.h"
 #include "profiler/Profiler.h"
 #include "SquidConfig.h"
 
 HttpMsg::HttpMsg(http_hdr_owner_type owner):
 http_ver(Http::ProtocolVersion()),
 header(owner),
 cache_control(NULL),
 hdr_sz(0),
 content_length(0),
-pstate(psReadyToParseStartLine)
+pstate(psReadyToParseStartLine),
+sources(0)
 {}
 
 HttpMsg::~HttpMsg()
 {
 assert(!body_pipe);
 }
 
 HttpMsgParseState &operator++ (HttpMsgParseState &aState)
 {
 int tmp = (int)aState;
 aState = (HttpMsgParseState)(++tmp);
 return aState;
 }
 
 /* find end of headers */
 static int
 httpMsgIsolateHeaders(const char **parse_start, int l, const char **blk_start, const char **blk_end)
 {
 /*
  * parse_start points to the first line of HTTP message 

Re: [squid-dev] [PATCH] Avoid SSL certificate db corruption with empty index.txt as a symptom.

2015-07-09 Thread Tsantilas Christos

Applied to trunk as r14146.

On 07/09/2015 04:30 PM, Amos Jeffries wrote:

On 4/07/2015 1:48 a.m., Tsantilas Christos wrote:

I just saw that I had forgotten to attach the patch here.




Looks reasonable. +1.

Amos



Re: [squid-dev] [PATCH] Errors served using invalid certificates when dealing with SSL server errors.

2015-07-09 Thread Tsantilas Christos

The patch for squid-3.5.
I suppose it should be applied here too.

On 07/09/2015 04:13 PM, Tsantilas Christos wrote:

Applied to trunk as r14145.


On 07/07/2015 09:05 PM, Amos Jeffries wrote:

On 8/07/2015 4:28 a.m., Tsantilas Christos wrote:

Hi all,

When bumping, Squid needs to send a Squid-generated error page over a
secure connection, Squid needs to generate a certificate for that
connection. Prior to these changes, several scenarios could lead to
Squid generating a certificate that clients could not validate. In those
cases, the user would get a cryptic and misleading browser error instead
of a Squid-generated error page with useful details about the problem.

One example is a server certificate that is rejected by the certificate 
validation helper. Squid no longer uses the CN from that certificate to 
generate a fake certificate.

Another example is a user accessing an origin server using one of its
alternative names and getting a Squid-generated certificate containing
just the server common name (CN).

These changes make sure that certificate for error pages is generated
using SNI (when peeking or staring, if available) or CONNECT host name
(including server-first bumping mode). We now update the
ConnStateData::sslCommonName  field (used as CN field for generated
certificates) only _after_ the server certificate is successfully
validated.



+1.

Amos



Errors served using invalid certificates when dealing with SSL server errors.

When bumping, Squid needs to send a Squid-generated error page over a
secure connection, Squid needs to generate a certificate for that connection.
Prior to these changes, several scenarios could lead to Squid generating
a certificate that clients could not validate. In those cases, the user would
get a cryptic and misleading browser error instead of a Squid-generated
error page with useful details about the problem.

One example is a server certificate that is rejected by the certificate
validation helper. Squid no longer uses the CN from that certificate to generate
a fake certificate.

Another example is a user accessing an origin server using one of its
alternative names and getting a Squid-generated certificate containing just
the server common name (CN).

These changes make sure that certificate for error pages is generated using
SNI (when peeking or staring, if available) or CONNECT host name (including
server-first bumping mode). We now update the ConnStateData::sslCommonName 
field (used as CN field for generated certificates) only _after_ the server
certificate is successfully validated.

This is a Measurement Factory project.

=== modified file 'src/ssl/PeerConnector.cc'
--- src/ssl/PeerConnector.cc	2015-04-26 16:44:23 +
+++ src/ssl/PeerConnector.cc	2015-06-08 09:14:37 +
@@ -257,51 +257,67 @@
 return;
 
 callBack();
 }
 
 void
 Ssl::PeerConnector::handleServerCertificate()
 {
 if (serverCertificateHandled)
 return;
 
 if (ConnStateData *csd = request->clientConnectionManager.valid()) {
 const int fd = serverConnection()->fd;
 SSL *ssl = fd_table[fd].ssl;
 Ssl::X509_Pointer serverCert(SSL_get_peer_certificate(ssl));
 if (!serverCert.get())
 return;
 
 serverCertificateHandled = true;
 
-csd->resetSslCommonName(Ssl::CommonHostName(serverCert.get()));
-debugs(83, 5, "HTTPS server CN: " << csd->sslCommonName() <<
-       " bumped: " << *serverConnection());
-
 // remember the server certificate for later use
 if (Ssl::ServerBump *serverBump = csd->serverBump()) {
 serverBump->serverCert.reset(serverCert.release());
 }
 }
 }
 
+void
+Ssl::PeerConnector::serverCertificateVerified()
+{
+if (ConnStateData *csd = request->clientConnectionManager.valid()) {
+Ssl::X509_Pointer serverCert;
+if (Ssl::ServerBump *serverBump = csd->serverBump())
+serverCert.resetAndLock(serverBump->serverCert.get());
+else {
+const int fd = serverConnection()->fd;
+SSL *ssl = fd_table[fd].ssl;
+serverCert.reset(SSL_get_peer_certificate(ssl));
+}
+if (serverCert.get()) {
+csd->resetSslCommonName(Ssl::CommonHostName(serverCert.get()));
+debugs(83, 5, "HTTPS server CN: " << csd->sslCommonName() <<
+       " bumped: " << *serverConnection());
+}
+}
+}
+
 bool
 Ssl::PeerConnector::sslFinalized()
 {
 const int fd = serverConnection()->fd;
 SSL *ssl = fd_table[fd].ssl;
 
 // In the case the session is resuming, the certificates do not exist and
 // we did not do any cert validation
 if (resumingSession)
 return true;
 
 handleServerCertificate();
 
 if (ConnStateData *csd = request->clientConnectionManager.valid()) {
 if (Ssl::ServerBump *serverBump = csd->serverBump()) {
 // remember validation errors, if any
 if (Ssl::CertErrors *errs = static_cast<Ssl::CertErrors *>(SSL_get_ex_data

Re: [squid-dev] [PATCH] Errors served using invalid certificates when dealing with SSL server errors.

2015-07-09 Thread Tsantilas Christos

Applied to trunk as r14145.


On 07/07/2015 09:05 PM, Amos Jeffries wrote:

On 8/07/2015 4:28 a.m., Tsantilas Christos wrote:

Hi all,

When bumping, Squid needs to send a Squid-generated error page over a
secure connection, Squid needs to generate a certificate for that
connection. Prior to these changes, several scenarios could lead to
Squid generating a certificate that clients could not validate. In those
cases, the user would get a cryptic and misleading browser error instead
of a Squid-generated error page with useful details about the problem.

One example is a server certificate that is rejected by the certificate
validation helper. Squid no longer uses the CN from that certificate to
generate a fake certificate.

Another example is a user accessing an origin server using one of its
alternative names and getting a Squid-generated certificate containing
just the server common name (CN).

These changes make sure that certificate for error pages is generated
using SNI (when peeking or staring, if available) or CONNECT host name
(including server-first bumping mode). We now update the
ConnStateData::sslCommonName  field (used as CN field for generated
certificates) only _after_ the server certificate is successfully
validated.



+1.

Amos


http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] Errors served using invalid certificates when dealing with SSL server errors.

2015-07-07 Thread Tsantilas Christos

Hi all,

When bumping, Squid needs to send a Squid-generated error page over a 
secure connection, Squid needs to generate a certificate for that 
connection. Prior to these changes, several scenarios could lead to 
Squid generating a certificate that clients could not validate. In those 
cases, the user would get a cryptic and misleading browser error instead 
of a Squid-generated error page with useful details about the problem.


One example is a server certificate that is rejected by the certificate 
validation helper. Squid no longer uses the CN from that certificate to 
generate a fake certificate.


Another example is a user accessing an origin server using one of its 
alternative names and getting a Squid-generated certificate containing 
just the server common name (CN).


These changes make sure that certificate for error pages is generated 
using SNI (when peeking or staring, if available) or CONNECT host name 
(including server-first bumping mode). We now update the 
ConnStateData::sslCommonName  field (used as CN field for generated 
certificates) only _after_ the server certificate is successfully validated.


This is a Measurement Factory project.
Errors served using invalid certificates when dealing with SSL server errors.

When bumping, Squid needs to send a Squid-generated error page over a
secure connection, Squid needs to generate a certificate for that connection.
Prior to these changes, several scenarios could lead to Squid generating
a certificate that clients could not validate. In those cases, the user would
get a cryptic and misleading browser error instead of a Squid-generated
error page with useful details about the problem.

One example is a server certificate that is rejected by the certificate
validation helper. Squid no longer uses the CN from that certificate to generate
a fake certificate.

Another example is a user accessing an origin server using one of its
alternative names and getting a Squid-generated certificate containing just
the server common name (CN).

These changes make sure that certificate for error pages is generated using
SNI (when peeking or staring, if available) or CONNECT host name (including
server-first bumping mode). We now update the ConnStateData::sslCommonName 
field (used as CN field for generated certificates) only _after_ the server
certificate is successfully validated.

This is a Measurement Factory project.

=== modified file 'src/ssl/PeerConnector.cc'
--- src/ssl/PeerConnector.cc	2015-06-09 06:14:43 +
+++ src/ssl/PeerConnector.cc	2015-07-07 16:17:23 +
@@ -722,42 +722,45 @@
 handleServerCertificate();
 }
 }
 
 if (error) {
 // For intercepted connections, set the host name to the server
 // certificate CN. Otherwise, we just hope that CONNECT is using
 // a user-entered address (a host name or a user-entered IP).
 const bool isConnectRequest = !request->clientConnectionManager->port->flags.isIntercepted();
 if (request->flags.sslPeek && !isConnectRequest) {
 if (X509 *srvX509 = serverBump->serverCert.get()) {
 if (const char *name = Ssl::CommonHostName(srvX509)) {
 request->SetHost(name);
 debugs(83, 3, "reset request host: " << name);
 }
 }
 }
 }
 }
 
-if (!error && splice)
-switchToTunnel(request.getRaw(), clientConn, serverConn);
+if (!error) {
+serverCertificateVerified();
+if (splice)
+switchToTunnel(request.getRaw(), clientConn, serverConn);
+}
 }
 
 void
 Ssl::PeekingPeerConnector::noteWantWrite()
 {
 const int fd = serverConnection()->fd;
 SSL *ssl = fd_table[fd].ssl;
 BIO *b = SSL_get_rbio(ssl);
 Ssl::ServerBio *srvBio = static_cast<Ssl::ServerBio *>(b->ptr);
 
 if ((srvBio->bumpMode() == Ssl::bumpPeek || srvBio->bumpMode() == Ssl::bumpStare) && srvBio->holdWrite()) {
 debugs(81, DBG_IMPORTANT, "hold write on SSL connection on FD " << fd);
 checkForPeekAndSplice();
 return;
 }
 
 Ssl::PeerConnector::noteWantWrite();
 }
 
 void
@@ -803,31 +806,48 @@
 
 // else call parent noteNegotiationError to produce an error page
 Ssl::PeerConnector::noteSslNegotiationError(result, ssl_error, ssl_lib_error);
 }
 
 void
 Ssl::PeekingPeerConnector::handleServerCertificate()
 {
 if (serverCertificateHandled)
 return;
 
 if (ConnStateData *csd = request->clientConnectionManager.valid()) {
 const int fd = serverConnection()->fd;
 SSL *ssl = fd_table[fd].ssl;
 Ssl::X509_Pointer serverCert(SSL_get_peer_certificate(ssl));
 if (!serverCert.get())
 return;
 
 serverCertificateHandled = true;
 
-csd->resetSslCommonName(Ssl::CommonHostName(serverCert.get()));
-debugs(83, 5, "HTTPS server CN: " << csd->sslCommonName() <<
-       " bumped: "

Re: [squid-dev] [PATCH] Avoid SSL certificate db corruption with empty index.txt as a symptom.

2015-07-03 Thread Tsantilas Christos

I just saw that I had forgotten to attach the patch here.


On 06/23/2015 06:30 PM, Tsantilas Christos wrote:


* Detect cases where the size file is corrupted or has a clearly wrong
value. Automatically rebuild the database in such cases.

* Teach ssl_crtd to keep running if it is unable to store the generated
certificate in the database. Return the generated certificate to Squid
and log an error message in such cases.

Background:

There are cases where ssl_crtd may corrupt its certificate database. The
known cases manifest themselves with an empty db index file.  When that
happens, ssl_crtd helpers quit, SSL bumping does not work any more, and
the certificate DB has to be deleted and re-initialized.

We do not know exactly what causes corruption in deployments, but one
known trigger that is easy to reproduce in a lab is the block size
change in the ssl_crtd configuration. That change has the following
side-effects:

1. When ssl_crtd removes certificates, it computes their size using a
different block size than the one used to store the certificates. This
may result in negative database sizes.

2. Signed/unsigned conversion results in a huge number near LONG_MAX,
which is then written to the size file.

3. The ssl_crtd helper refuses to store new certificates because the
database size (as described by the size file) exceeds the configured
limit.

4. The ssl_crtd helper exits because it cannot store a new certificate
in the database. No helper response is sent to Squid in this case.

Most likely, there are other corruption triggers -- the database
management code is of overall poor quality. This change resolves some
of the underlying problems in the hope of addressing at least some of the
unknown triggers as well as the known one.

This is a Measurement Factory project.


Avoid SSL certificate db corruption with empty index.txt as a symptom.

* Detect cases where the size file is corrupted or has a clearly wrong
  value. Automatically rebuild the database in such cases.

* Teach ssl_crtd to keep running if it is unable to store the generated
  certificate in the database. Return the generated certificate to Squid
  and log an error message in such cases.

Background:

There are cases where ssl_crtd may corrupt its certificate database.
The known cases manifest themselves with an empty db index file.  When
that happens, ssl_crtd helpers quit, SSL bumping does not work any more,
and the certificate DB has to be deleted and re-initialized.

We do not know exactly what causes corruption in deployments, but one
known trigger that is easy to reproduce in a lab is the block size
change in the ssl_crtd configuration. That change has the following
side-effects:

1. When ssl_crtd removes certificates, it computes their size using a
   different block size than the one used to store the certificates.
   This may result in negative database sizes.

2. Signed/unsigned conversion results in a huge number near LONG_MAX,
   which is then written to the size file.

3. The ssl_crtd helper refuses to store new certificates because the
   database size (as described by the size file) exceeds the
   configured limit.

4. The ssl_crtd helper exits because it cannot store a new certificate
   in the database. No helper response is sent to Squid in this case.

Most likely, there are other corruption triggers -- the database
management code is of overall poor quality. This change resolves some
of the underlying problems in the hope of addressing at least some of the
unknown triggers as well as the known one.

This is a Measurement Factory project.


=== modified file 'src/ssl/certificate_db.cc'
--- src/ssl/certificate_db.cc	2015-04-14 07:26:12 +
+++ src/ssl/certificate_db.cc	2015-06-12 17:15:03 +
@@ -303,52 +303,59 @@
 // TODO: check if the stored row is valid.
 return true;
 }
 
 {
 TidyPointer<char, tidyFree> subject(X509_NAME_oneline(X509_get_subject_name(cert.get()), NULL, 0));
 Ssl::X509_Pointer findCert;
 Ssl::EVP_PKEY_Pointer findPkey;
 if (pure_find(useName.empty() ? subject.get() : useName, findCert, findPkey)) {
 // Replace with database certificate
 cert.reset(findCert.release());
 pkey.reset(findPkey.release());
 return true;
 }
 // pure_find may fail because the entry is expired, or because the
 // certs file is corrupted. Remove any entry with given hostname
 deleteByHostname(useName.empty() ? subject.get() : useName);
 }
 
 // check db size while trying to minimize calls to size()
-while (size() > max_db_size) {
-if (deleteInvalidCertificate())
-continue; // try to find another invalid certificate if needed
-
-// there are no more invalid ones, but there must be valid certificates
-do {
-if (!deleteOldestCertificate()) {
-save(); // Some entries may have been removed. Update the index file

Re: [squid-dev] [PATCH] Splice to origin cache_peer

2015-07-03 Thread Tsantilas Christos

The patch applied to trunk as rev14132.
The applied patch includes the requested fixes.

Regards,
   Christos

On 06/28/2015 03:17 PM, Amos Jeffries wrote:

On 24/06/2015 2:54 a.m., Tsantilas Christos wrote:

Currently, Squid cannot redirect intercepted connections that are
subject to SslBump rules to _originserver_ cache_peer. For example,
consider Squid that enforces safe search by redirecting clients to
forcesafesearch.example.com. Consider a TLS client that tries to connect
to www.example.com. Squid needs to send that client to
forcesafesearch.example.com (without changing the host header and SNI
information; those would still point to www.example.com for safe search
to work as intended!).

The admin may configure Squid to send intercepted clients to an
originserver cache_peer with the forcesafesearch.example.com address.
Such a configuration does not currently work together with ssl_bump
peek/splice rules.

This patch:

* Fixes src/neighbors.cc bug which prevented CONNECT requests from going
to originserver cache peers. This bug affects both true CONNECT requests
and intercepted SSL/TLS connections (with fake CONNECT requests). Squid
uses CachePeer::in_addr.port, which is not meant to be used for the
HTTP port, apparently. HTTP checks should use CachePeer::http_port instead.

* Changes Squid to not initiate SSL/TLS connection to cache_peer for
true CONNECT requests.

* Allows forwarding being-peeked-at (or stared-at) connections to
originserver cache_peers.


This is a Measurement Factory project.



General comment: remember that SSL (all versions) are now deprecated and
target is to kill all use of SSL (and references if we can). Please use
TLS for naming and documenting new things that are generic TLS/SSL and
not explicitly part of SSLv2 or SSLv3 protocols.


in src/FwdState.cc:

* Took me ages to figure out why sslToPeer contains
!userWillSslToPeerForUs. Please either rename sslToPeer as
needTlsToPeer OR add code comments to document that logic more clearly.
  - please add a comment that userWillSslToPeerForUs assumes CONNECT ==
HTTPS (which is not always true in reality).


+1. Other than that bit of polish this looks fine. The updated patch can
go in without another review.

Amos


[squid-dev] [PATCH] Splice to origin cache_peer

2015-06-23 Thread Tsantilas Christos
Currently, Squid cannot redirect intercepted connections that are 
subject to SslBump rules to _originserver_ cache_peer. For example, 
consider Squid that enforces safe search by redirecting clients to 
forcesafesearch.example.com. Consider a TLS client that tries to connect 
to www.example.com. Squid needs to send that client to 
forcesafesearch.example.com (without changing the host header and SNI 
information; those would still point to www.example.com for safe search 
to work as intended!).


The admin may configure Squid to send intercepted clients to an 
originserver cache_peer with the forcesafesearch.example.com address. 
Such a configuration does not currently work together with ssl_bump 
peek/splice rules.


This patch:

* Fixes src/neighbors.cc bug which prevented CONNECT requests from going 
to originserver cache peers. This bug affects both true CONNECT requests 
and intercepted SSL/TLS connections (with fake CONNECT requests). Squid 
uses CachePeer::in_addr.port, which is not meant to be used for the 
HTTP port, apparently. HTTP checks should use CachePeer::http_port instead.


* Changes Squid to not initiate SSL/TLS connection to cache_peer for 
true CONNECT requests.


* Allows forwarding being-peeked-at (or stared-at) connections to 
originserver cache_peers.



This is a Measurement Factory project.
Splice to origin cache_peer.

Currently, Squid cannot redirect intercepted connections that are subject to
SslBump rules to _originserver_ cache_peer. For example, consider Squid that
enforces safe search by redirecting clients to forcesafesearch.example.com.
Consider a TLS client that tries to connect to www.example.com. Squid needs to
send that client to forcesafesearch.example.com (without changing the host
header and SNI information; those would still point to www.example.com for
safe search to work as intended!).

The admin may configure Squid to send intercepted clients to an originserver
cache_peer with the forcesafesearch.example.com address. Such a configuration
does not currently work together with ssl_bump peek/splice rules.

This patch:

* Fixes src/neighbors.cc bug which prevented CONNECT requests from going
  to originserver cache peers. This bug affects both true CONNECT requests
  and intercepted SSL/TLS connections (with fake CONNECT requests). Squid
  uses CachePeer::in_addr.port, which is not meant to be used for the HTTP
  port, apparently. HTTP checks should use CachePeer::http_port instead.

* Changes Squid to not initiate SSL/TLS connection to cache_peer for
  true CONNECT requests. 

* Allows forwarding being-peeked-at (or stared-at) connections to originserver
  cache_peers.

The bug fix described in the first bullet makes the last two changes
necessary.

This is a Measurement Factory project.
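
For illustration, a hypothetical squid.conf sketch of the safe-search
scenario above (host names taken from the example; the exact cache_peer
options depend on the deployment):

 # send intercepted TLS clients to the safe-search origin server
 cache_peer forcesafesearch.example.com parent 443 0 originserver ssl name=safesearch
 cache_peer_access safesearch allow all
 # peek at the TLS client hello, then splice the connection
 ssl_bump peek all
 ssl_bump splice all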

=== modified file 'src/FwdState.cc'
--- src/FwdState.cc	2015-06-09 06:14:43 +
+++ src/FwdState.cc	2015-06-23 14:29:15 +
@@ -674,44 +674,47 @@
 peerConnectFailed(conn->getPeer());
 
 conn->close();
 }
 retryOrBail();
 return;
 }
 
 serverConn = conn;
 flags.connected_okay = true;
 
 debugs(17, 3, HERE << serverConnection() << ": '" << entry->url() << "'");
 
 comm_add_close_handler(serverConnection()-fd, fwdServerClosedWrapper, this);
 
 if (serverConnection()->getPeer())
 peerConnectSucceded(serverConnection()->getPeer());
 
 #if USE_OPENSSL
 if (!request->flags.pinned) {
-if ((serverConnection()->getPeer() && serverConnection()->getPeer()->secure.encryptTransport) ||
-(!serverConnection()->getPeer() && request->url.getScheme() == AnyP::PROTO_HTTPS) ||
-request->flags.sslPeek) {
-
+const CachePeer *p = serverConnection()->getPeer();
+const bool peerWantsSsl = p && p->secure.encryptTransport;
+const bool userWillSslToPeerForUs = p && p->options.originserver &&
+request->method == Http::METHOD_CONNECT;
+const bool sslToPeer = peerWantsSsl && !userWillSslToPeerForUs;
+const bool sslToOrigin = !p && request->url.getScheme() == AnyP::PROTO_HTTPS;
+if (sslToPeer || sslToOrigin || request->flags.sslPeek) {
 HttpRequest::Pointer requestPointer = request;
 AsyncCall::Pointer callback = asyncCall(17,4,
 "FwdState::ConnectedToPeer",
 FwdStatePeerAnswerDialer(&FwdState::connectedToPeer, this));
 // Use positive timeout when less than one second is left.
 const time_t sslNegotiationTimeout = max(static_cast<time_t>(1), timeLeft());
 Ssl::PeekingPeerConnector *connector =
 new Ssl::PeekingPeerConnector(requestPointer, serverConnection(), clientConn, callback, sslNegotiationTimeout);
 AsyncJob::Start(connector); // will call our callback
 return;
 }
 }
 #endif
 
 // if not encrypting just run the post-connect actions
 

[squid-dev] [PATCH] Avoid SSL certificate db corruption with empty index.txt as a symptom.

2015-06-23 Thread Tsantilas Christos


* Detect cases where the size file is corrupted or has a clearly wrong 
value. Automatically rebuild the database in such cases.


* Teach ssl_crtd to keep running if it is unable to store the generated 
certificate in the database. Return the generated certificate to Squid 
and log an error message in such cases.


Background:

There are cases where ssl_crtd may corrupt its certificate database. The 
known cases manifest themselves with an empty db index file.  When that 
happens, ssl_crtd helpers quit, SSL bumping does not work any more, and 
the certificate DB has to be deleted and re-initialized.


We do not know exactly what causes corruption in deployments, but one 
known trigger that is easy to reproduce in a lab is the block size 
change in the ssl_crtd configuration. That change has the following 
side-effects:


1. When ssl_crtd removes certificates, it computes their size using a 
different block size than the one used to store the certificates. This 
may result in negative database sizes.


2. Signed/unsigned conversion results in a huge number near LONG_MAX, 
which is then written to the size file.


3. The ssl_crtd helper refuses to store new certificates because the 
database size (as described by the size file) exceeds the configured 
limit.


4. The ssl_crtd helper exits because it cannot store a new certificate 
in the database. No helper response is sent to Squid in this case.


Most likely, there are other corruption triggers -- the database 
management code is of overall poor quality. This change resolves some 
of the underlying problems in the hope of addressing at least some of the 
unknown triggers as well as the known one.


This is a Measurement Factory project.


Re: [squid-dev] [PATCH] TLS: Disable client-initiated renegotiation

2015-06-19 Thread Tsantilas Christos
This patch is probably OK as a workaround, but my sense is that it is 
not the best method to fix it.  We should spend some hours of work to 
check which OpenSSL versions have the problem, and apply a better solution.




On 06/19/2015 06:39 AM, Amos Jeffries wrote:

Absent objections I have applied this to trunk as rev.14114

Amos



Re: [squid-dev] [PATCH] TLS: Disable client-initiated renegotiation

2015-06-05 Thread Tsantilas Christos

Hi Paulo,

 Which OpenSSL version is in a Debian wheezy system?

My understanding is that OpenSSL 0.9.8m and later provide protection 
against this bug by default.


OpenSSL provides the following flags to control renegotiation:
  - SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION for an OpenSSL server
  - SSL_OP_LEGACY_SERVER_CONNECT and
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION for an OpenSSL client.


This is what the OpenSSL manual says about the behaviour of an OpenSSL 
server on renegotiation:


The initial connection succeeds but client renegotiation is denied by 
the server with a no_renegotiation warning alert if TLS v1.0 is used or 
a fatal handshake_failure alert in SSL v3.0.


If the patched OpenSSL server attempts to renegotiate a fatal 
handshake_failure alert is sent. This is because the server code may be 
unaware of the unpatched nature of the client.


If the option SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is set then 
renegotiation always succeeds.


Reference:
   https://www.openssl.org/docs/ssl/SSL_CTX_set_options.html


We may have a problem in the Squid OpenSSL client.
But we can add support for these flags and allow users to set them in 
the Squid configuration file. Isn't that better?
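
For example, a minimal sketch of what such support could look like (not
Squid code; the two contexts and the configuration hook are assumptions):

#include <openssl/ssl.h>

// sketch only: apply the renegotiation flags mentioned above, if
// squid.conf ever exposes them as options
static void
allowLegacyRenegotiation(SSL_CTX *serverCtx, SSL_CTX *clientCtx)
{
    // server side: permit renegotiation with unpatched peers
    SSL_CTX_set_options(serverCtx, SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION);
    // client side: permit connections to and renegotiation with
    // unpatched servers
    SSL_CTX_set_options(clientCtx, SSL_OP_LEGACY_SERVER_CONNECT |
                        SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION);
}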




On 06/04/2015 09:51 PM, Paulo Matias wrote:

Hi all,

This patch disables client-initiated renegotiation, mitigating a DoS attack
which might be possible with some builds of the OpenSSL library.  We have been
warned about this when testing our service with the Qualys SSL Test
(https://www.ssllabs.com/ssltest) back when it was running in a Debian wheezy
system. Further information is available at:
https://community.qualys.com/blogs/securitylabs/2011/10/31/tls-renegotiation-and-denial-of-service-attacks
Our solution is similar to the one adopted in pureftpd:
https://github.com/jedisct1/pure-ftpd/blob/549e94aaa093a48622efd6d91fdfb3a4236c13f4/src/tls.c#L106

This was previously posted to squid-users, but modified since then to implement
Amos's suggestions:


* please avoid #ifdef and #ifndef in new code.
- use #if defined() style instead.
* please wrap the entire ssl_info_cb() definition in the #if
conditionals and the appropriate calling lines.


We welcome any additional suggestions or comments.

Best regards,
Paulo Matias


-- next part --
=== modified file 'src/ssl/support.cc'
--- src/ssl/support.cc  2015-06-03 10:42:08 +
+++ src/ssl/support.cc  2015-06-04 12:59:30 +
@@ -823,12 +823,28 @@
  return dh;
  }

+#if defined(SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS)
+static void
+ssl_info_cb(const SSL *ssl, int where, int ret)
+{
+(void)ret;
+if ((where & SSL_CB_HANDSHAKE_DONE) != 0) {
+// disable renegotiation (CVE-2009-3555)
+ssl->s3->flags |= SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS;
+}
+}
+#endif
+
  static bool
  configureSslContext(SSL_CTX *sslContext, AnyP::PortCfg &port)
  {
  int ssl_error;
  SSL_CTX_set_options(sslContext, port.sslOptions);

+#if defined(SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS)
+SSL_CTX_set_info_callback(sslContext, ssl_info_cb);
+#endif
+
  if (port.sslContextSessionId)
  SSL_CTX_set_session_id_context(sslContext, (const unsigned char 
*)port.sslContextSessionId, strlen(port.sslContextSessionId));

@@ -1045,6 +1061,10 @@

  SSL_CTX_set_options(sslContext, Ssl::parse_options(options));

+#if defined(SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS)
+SSL_CTX_set_info_callback(sslContext, ssl_info_cb);
+#endif
+
  if (*cipher) {
  debugs(83, 5, "Using cipher suite " << cipher << ".");







--
Tsantilas Christos
Network and Systems Engineer
email:chris...@chtsanti.net
  web:http://www.chtsanti.net
Phone:+30 6977678842


Re: [squid-dev] [PATCH] Bug3329

2015-06-02 Thread Tsantilas Christos

Hi Amos,

I applied the t5 patch to trunk.
In my first mail I included the t5 patch for squid-3.5 to save you the 
time of porting it to 3.5.


I will attach the patches to bug report too and I will close the bug.

Regards,
  Christos


On 06/02/2015 06:44 AM, Amos Jeffries wrote:

On 28/05/2015 7:41 p.m., Tsantilas Christos wrote:

I am attaching a new patch for trunk which renames noteClosure() to
noteClosureXXX().

If it is OK, I will post the squid-3.5 patch too.


It seems I mistook what Alex has been using the XXX() for.

What I'm thinking of for a long term fix can happen with either version.
So I will leave it up to you which of the patches you apply.

Amos



Re: [squid-dev] [PATCH] Bug3329

2015-05-28 Thread Tsantilas Christos
I am attaching a new patch for trunk which renames noteClosure() to 
noteClosureXXX().


If it is OK, I will post the squid-3.5 patch too.

Regards,
Christos

On 05/26/2015 10:59 AM, Amos Jeffries wrote:

On 26/05/2015 12:43 a.m., Tsantilas Christos wrote:

On 05/25/2015 02:37 PM, Amos Jeffries wrote:

On 25/05/2015 10:13 p.m., Tsantilas Christos wrote:

Hi all,

I am attaching new squid patches for bug3329.



+1 on the conversion of comm_close() to X-close()

However please name noteClosure() as noteClosureXXX() to highlight
that this function is undesirable and we still need to fix the underlying
problem for the places which find themselves having to use it


The suggestion of this patch is to start using the noteClosure method in
all comm_close handlers to avoid such problems.

The problem does not appear only when the comm_close method is used on
fds. We may face the same problem when a timeout expires but no
timeout handler is installed for the fd. In this case the Squid code will
just close the fd. Some Comm::Connection objects may still use the fd
but are not informed about the fd closure.



A better long-term way would be for the comm FD opening code to install
the first close callback that holds a reference to the Comm::Connection and
sets the FD to -1 when it is closed without close().

We can't do that until those abandoned and orphan connections are all
figured out though. So we don't end up with that close handler being a
lingering reference-count that holds all the state alive.

So I agree this method is needed right now, but only as a temporary fix.

Amos




Bug3329: The server side pinned connection is not closed properly
in ConnStateData::clientPinnedConnectionClosed CommClose handler.

Squid enters a buggy state when an idle connection pinned to a peer closes:

 - The ConnStateData::clientPinnedConnectionRead, the pinned peer
   connection read handler, is called with the io.flag set to
   Comm::ERR_CLOSING. The read handler does not close the peer
   Comm::Connection object. This is correct and expected -- the I/O
   handler must exit on ERR_CLOSING without doing anything.

 - The ConnStateData::clientPinnedConnectionClosed close handler is called,
   but it does not close the peer Comm::Connection object either. Again,
   this is correct and expected -- the close handler is not the place to
   close a being-closed connection.

 - The corresponding fde object is marked as closed (fde::flags.open
   is false), but the peer Comm::Connection object is still open
   (Comm::Connection.fd >= 0)! From this point on, we have an inconsistency
   between the peer Comm::Connection object state and the real world.

 - When the ConnStateData::pinning::serverConnection object is later
   destroyed (by refcounting), it will try to close its fd. If that fd
   is already in use (e.g., by another Comm::Connection), bad things
   happen (crashes, segfaults, etc). Otherwise (i.e., if that fd is
   not open), comm_close may cry about BUG 3556 (or worse).

To fix this problem, we must not allow Comm::Connections to get out
of sync with fd_table, even when a descriptor is closed without going
through Connection::close(). There are two ways to accomplished that:

 * Change Comm to always store Comm::Connections and similar high-level
   objects instead of fdes. This is a huge change that has been long on
   the TODO list (those other high-level objects are one of the primary
   obstacles there because not everything with a FD is a Connection).

 * Notify Comm::Connections about closure in their closing handlers
   (this change). This design relies on every Comm::Connection having
   a close handler that notifies it. It may take us some time to reach
   that goal, but this change is the first step providing the necessary
   API, a known bug fix, and a few preventive changes.

This change:

 - Adds a new Comm::Connection::noteClosure() method to inform the
   Comm::Connection object that somebody is closing its FD.

 - Uses the new method inside ConnStateData::clientPinnedConnectionClosed
   handler to inform the ConnStateData::pinning::serverConnection object
   that its FD is being closed.

 - Replaces comm_close calls which may cause bug #3329 in other places with
   Comm::Connection->close() calls.

Initially based on Nathan Hoad's research for bug 3329.

This is a Measurement Factory project.
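
A minimal sketch of the close-handler pattern this change introduces
(my illustration, not part of the patch; member names follow the
description above and the rest of the handler is elided):

// sketch only: a close handler using the new noteClosure() API
void
ConnStateData::clientPinnedConnectionClosed(const CommCloseCbParams &)
{
    // inform the Comm::Connection object that its FD is going away, so
    // that its state cannot get out of sync with fd_table and its later
    // refcounted destruction will not close somebody else's FD
    if (pinning.serverConnection != NULL)
        pinning.serverConnection->noteClosure();
    // ... existing cleanup continues here ...
}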

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-05-19 07:51:31 +
+++ src/client_side.cc	2015-05-27 14:19:17 +
@@ -3607,41 +3607,41 @@
 " (" << ssl_error << "/" << ret << ")");
 return -1;
 }
 
 /* NOTREACHED */
 }
 return 1;
 }
 
 /** negotiate an SSL connection */
 static void
 clientNegotiateSSL(int fd, void *data)
 {
 ConnStateData *conn = (ConnStateData *)data;
 X509 *client_cert;
 SSL *ssl

Re: [squid-dev] [PATCH] support custom OIDs in *_cert ACLs

2015-05-28 Thread Tsantilas Christos

If there is not any objection I will apply this patch to trunk.

On 05/26/2015 12:00 PM, Tsantilas Christos wrote:

Hi all,

This patch allows the user_cert and ca_cert ACLs to match arbitrary
stand-alone OIDs (not DN/C/O/CN/L/ST objects or their substrings). For
example, one should be able to match certificates that have the
1.3.6.1.4.1.1814.3.1.14 OID in the certificate Subject or Issuer field.
Squid configuration would look like this:

  acl User_Cert-TrustedCustomerNum user_cert 1.3.6.1.4.1.1814.3.1.14 1001

This is a Measurement Factory project




[squid-dev] [PATCH] support custom OIDs in *_cert ACLs

2015-05-26 Thread Tsantilas Christos

Hi all,

This patch allows the user_cert and ca_cert ACLs to match arbitrary 
stand-alone OIDs (not DN/C/O/CN/L/ST objects or their substrings). For 
example, one should be able to match certificates that have the 
1.3.6.1.4.1.1814.3.1.14 OID in the certificate Subject or Issuer field. 
Squid configuration would look like this:


 acl User_Cert-TrustedCustomerNum user_cert 1.3.6.1.4.1.1814.3.1.14 1001

This is a Measurement Factory project
support custom OIDs in *_cert ACLs

This patch allows the user_cert and ca_cert ACLs to match arbitrary stand-alone OIDs
(not DN/C/O/CN/L/ST objects or their substrings). For example, one should be able to
match certificates that have the 1.3.6.1.4.1.1814.3.1.14 OID in the certificate
Subject or Issuer field. Squid configuration would look like this:

 acl User_Cert-TrustedCustomerNum user_cert 1.3.6.1.4.1.1814.3.1.14 1001

This is a Measurement Factory project

=== modified file 'src/acl/CertificateData.cc'
--- src/acl/CertificateData.cc	2015-01-29 19:05:24 +
+++ src/acl/CertificateData.cc	2015-05-26 08:52:46 +
@@ -110,41 +110,62 @@
 else {
 bool valid = false;
 for (std::list<std::string>::const_iterator it = validAttributes.begin(); it != validAttributes.end(); ++it) {
 if (*it == "*" || *it == newAttribute) {
 valid = true;
 break;
 }
 }
 
 if (!valid) {
 debugs(28, DBG_CRITICAL, "FATAL: Unknown option. Supported option(s) are: " << validAttributesStr);
 self_destruct();
 }
 
 /* an acl must use consistent attributes in all config lines */
 if (attribute) {
 if (strcasecmp(newAttribute, attribute) != 0) {
 debugs(28, DBG_CRITICAL, "FATAL: An acl must use consistent attributes in all config lines (" << newAttribute << " != " << attribute << ").");
 self_destruct();
 }
-} else
+} else {
+if (strcasecmp(newAttribute, "DN") != 0) {
+int nid = OBJ_txt2nid(newAttribute);
+if (nid == 0) {
+ const size_t span = strspn(newAttribute, "0123456789.");
+ if (newAttribute[span] == '\0') { // looks like a numerical OID
+ // create a new object based on this attribute
+
+ // NOTE: Not a [bad] leak: If the same attribute
+ // has been added before, the OBJ_txt2nid call
+ // would return a valid nid value.
+ // TODO: call OBJ_cleanup() on reconfigure?
+ nid = OBJ_create(newAttribute, newAttribute, newAttribute);
+ debugs(28, 7, "New SSL certificate attribute created with name: " << newAttribute << " and nid: " << nid);
+ }
+}
+if (nid == 0) {
+debugs(28, DBG_CRITICAL, "FATAL: Not valid SSL certificate attribute name or numerical OID: " << newAttribute);
+self_destruct();
+}
+}
 attribute = xstrdup(newAttribute);
+}
 }
 }
 
 values.parse();
 }
 
 bool
 ACLCertificateData::empty() const
 {
 return values.empty();
 }
 
 ACLData<X509 *> *
 ACLCertificateData::clone() const
 {
 /* Splay trees don't clone yet. */
 return new ACLCertificateData(*this);
 }
 

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2015-05-22 09:42:55 +
+++ src/cf.data.pre	2015-05-26 08:50:23 +
@@ -,45 +,45 @@
 
 	acl aclname rep_mime_type [-i] mime-type ...
 	  # regex match against the mime type of the reply received by
 	  # squid. Can be used to detect file download or some
 	  # types HTTP tunneling requests. [fast]
 	  # NOTE: This has no effect in http_access rules. It only has
 	  # effect in rules that affect the reply data stream such as
 	  # http_reply_access.
 
 	acl aclname rep_header header-name [-i] any\.regex\.here
 	  # regex match against any of the known reply headers. May be
 	  # thought of as a superset of browser, referer and mime-type
 	  # ACLs [fast]
 
 	acl aclname external class_name [arguments...]
 	  # external ACL lookup via a helper class defined by the
 	  # external_acl_type directive [slow]
 
 	acl aclname user_cert attribute values...
 	  # match against attributes in a user SSL certificate
-	  # attribute is one of DN/C/O/CN/L/ST [fast]
+	  # attribute is one of DN/C/O/CN/L/ST or a numerical OID [fast]
 
 	acl aclname ca_cert attribute values...
 	  # match against attributes a user's issuing CA SSL certificate
-	  # attribute is one of DN/C/O/CN/L/ST [fast]
+	  # attribute is one of DN/C/O/CN/L/ST or a numerical OID [fast]
 
 	acl aclname ext_user username ...
 	acl aclname ext_user_regex [-i] pattern ...
 	  # string match on 

Re: [squid-dev] [PATCH] support custom OIDs in *_cert ACLs

2015-05-26 Thread Tsantilas Christos

On 05/26/2015 12:10 PM, Amos Jeffries wrote:

On 26/05/2015 9:00 p.m., Tsantilas Christos wrote:

Hi all,

This patch allows the user_cert and ca_cert ACLs to match arbitrary
stand-alone OIDs (not DN/C/O/CN/L/ST objects or their substrings). For
example, one should be able to match certificates that have the
1.3.6.1.4.1.1814.3.1.14 OID in the certificate Subject or Issuer field.
Squid configuration would look like this:

  acl User_Cert-TrustedCustomerNum user_cert 1.3.6.1.4.1.1814.3.1.14 1001

This is a Measurement Factory project




+1 anyway.

Dont like the extra leak-ish part though. Does TidyPointer make sense there?


No.
It is not a memory leak.
OBJ_create just adds the OID to the internal OpenSSL database of valid 
fields.  Even if the OID is not used after a reconfigure, it still remains 
in this database. This is not a real problem unless someone adds some 
thousands of these OIDs.

But I do not believe that this is a real problem...
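
A standalone sketch of that behaviour (plain OpenSSL, outside Squid):
repeated parsing of the same OID does not keep growing the table,
because later lookups find the nid created earlier.

#include <openssl/objects.h>
#include <iostream>

int main()
{
    const char *oid = "1.3.6.1.4.1.1814.3.1.14";
    int nid = OBJ_txt2nid(oid);
    if (nid == NID_undef)           // unknown: register it once
        nid = OBJ_create(oid, oid, oid);
    // a second lookup now returns the same nid
    std::cout << "nid: " << nid << ", again: " << OBJ_txt2nid(oid) << "\n";
    return 0;
}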




Amos





[squid-dev] [PATCH] pconn_lifetime for squid-3.5.x

2015-05-25 Thread Tsantilas Christos
I am attaching a patch which implements the pconn_lifetime feature for 
squid-3.5.x, in case someone wants to use it or we decide that it can 
be included in squid-3.5.x.


This patch includes the recent fixes for the pconn_lifetime feature.

This is a Measurement Factory project

Regards,
   Christos
Add pconn_lifetime to limit maximum lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection that
exceeded configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). No effect on ongoing/active
transactions. Connection lifetime is the time period from the
connection acceptance or opening time until now.

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.

This is a Measurement Factory project
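
For example, a minimal squid.conf use (the one-hour value is arbitrary):

 # close persistent connections that have lived longer than one hour
 # the next time they become idle
 pconn_lifetime 1 hour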
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2015-02-18 08:50:00 +
+++ src/SquidConfig.h	2015-03-22 10:52:21 +
@@ -72,40 +72,41 @@
 #if USE_HTTP_VIOLATIONS
 time_t negativeTtl;
 #endif
 time_t maxStale;
 time_t negativeDnsTtl;
 time_t positiveDnsTtl;
 time_t shutdownLifetime;
 time_t backgroundPingRate;
 
 struct {
 time_t read;
 time_t write;
 time_t lifetime;
 time_t connect;
 time_t forward;
 time_t peer_connect;
 time_t request;
 time_t clientIdlePconn;
 time_t serverIdlePconn;
 time_t ftpClientIdle;
+time_t pconnLifetime; ///< pconn_lifetime in squid.conf
 time_t siteSelect;
 time_t deadPeer;
 int icp_query;  /* msec */
 int icp_query_max;  /* msec */
 int icp_query_min;  /* msec */
 int mcast_icp_query;/* msec */
 time_msec_t idns_retransmit;
 time_msec_t idns_query;
 } Timeout;
 size_t maxRequestHeaderSize;
 int64_t maxRequestBodySize;
 size_t maxRequestBufferSize;
 size_t maxReplyHeaderSize;
 AclSizeLimit *ReplyBodySize;
 
 struct {
 unsigned short icp;
 #if USE_HTCP
 
 unsigned short htcp;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2015-02-18 10:30:07 +
+++ src/cf.data.pre	2015-03-22 10:54:02 +
@@ -6027,40 +6027,65 @@
 TYPE: time_t
 LOC: Config.Timeout.lifetime
 DEFAULT: 1 day
 DOC_START
 	The maximum amount of time a client (browser) is allowed to
 	remain connected to the cache process.  This protects the Cache
 	from having a lot of sockets (and hence file descriptors) tied up
 	in a CLOSE_WAIT state from remote clients that go away without
 	properly shutting down (either because of a network failure or
 	because of a poor client implementation).  The default is one
 	day, 1440 minutes.
 
 	NOTE:  The default value is intended to be much larger than any
 	client would ever need to be connected to your cache.  You
 	should probably change client_lifetime only as a last resort.
 	If you seem to have many client connections tying up
 	filedescriptors, we recommend first tuning the read_timeout,
 	request_timeout, persistent_request_timeout and quick_abort values.
 DOC_END
 
+NAME: pconn_lifetime
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.Timeout.pconnLifetime
+DEFAULT: 0 seconds
+DOC_START
+	Desired maximum lifetime of a persistent connection.
+	When set, Squid will close a now-idle persistent connection that
+	exceeded configured lifetime instead of moving the connection into
+	the idle connection pool (or equivalent). No effect on ongoing/active
+	transactions. Connection lifetime is the time period from the
+	connection acceptance or opening time until now.
+	 
+	This limit is useful in environments with long-lived connections
+	where Squid configuration or environmental factors change during a
+	single connection lifetime. If unrestricted, some connections may
+	last for hours and even days, ignoring those changes that should
+	have affected their behavior or their existence.
+	 
+	Currently, a new lifetime value supplied via Squid reconfiguration
+	has no effect on already idle connections unless they become busy.
+	 
+	When set to '0' this limit is not used.
+DOC_END
+
 NAME: half_closed_clients
 TYPE: onoff
 LOC: Config.onoff.half_closed_clients
 DEFAULT: off
 DOC_START
 	Some clients may shutdown the sending side of their TCP
 	connections, while leaving their receiving sides open.	Sometimes,
 	Squid can not tell the difference between a half-closed and a
 	fully-closed TCP connection.
 
 	By default, Squid will immediately close client connections when
 	read(2) returns no more data to read.
 
 	Change this option to 'on' and Squid will keep open connections
 	until a read(2) or write(2) on the socket returns an error.
 	This may show some benefits for reverse proxies. But if not
 	it is 

Re: [squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2015-05-23 Thread Tsantilas Christos

Hi Nathan,

 The patch works.

However, I believe it is not a good idea to configure SSL_CTX objects while 
we are setting parameters on an SSL object.

An SSL_CTX object is common to many SSL objects.

Instead of setting the SSL_CTX object from 
configureSSLUsingPkeyAndCertFromMemory, I am suggesting a new method 
configureUnconfigureCTX() which does the job:


Then inside client_side use:

 bool ret = Ssl::configureSSLUsingPkeyAndCertFromMemory(...);
 if (!ret)
     debugs(33, 5, "mpla mpla");
 SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
 ret = configureUnconfigureCTX(sslContext, ..., signAlgorithm)


OR

  Ssl::configureSSL(ssl, certProperties, *port))
  SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
  ret = configureUnconfigureCTX(sslContext,..., signAlgorithm)


Probably the above should be wrapped in a new method.
Or maybe a new function whose name says that both CTX and SSL 
objects are modified.



On 04/30/2015 08:11 AM, Nathan Hoad wrote:

Hello,

I am running Squid with SSL bump in bump and splice mode, and I've
observed that this mode does not append the signing certificate or any
chained certificates to the certificate chain presented to the client.

With old bump mode, Squid adds the signing certificate and any other
chained certificates to the SSL context. With bump and splice mode,
these certificates are not added. Attached is a patch that adds these
certificates for bump and spliced connections.

Thank you,

Nathan.





[squid-dev] [PATCH] comm_connect_addr on failures returns Comm::OK

2015-05-08 Thread Tsantilas Christos


I found the following problem in squid-trunk and squid-3.5:

  - Squid calls peer_select to retrieve server destination addresses.
  - peer_select returns two IP addresses; the first is an IPv6 
address, the second one is an IPv4 address.
  - FwdState creates a Comm::ConnOpener object which fails to 
connect to the first address, but returns Comm::OK.
  - FwdState calls Ssl::PeerConnector, which fails to establish SSL 
on an unopened connection, and returns an error page to the user.


I am attaching a small patch which fixes the problem.

I believe that this is the problem reported by some users where 
SSL bumping does not work in squid-3.5 and later.


Regards,
Christos
comm_connect_addr on failures returns Comm::OK

On connect failures, comm_connect_addr sets errno to 0 and returns
Comm::OK. This causes problems for ConnOpener class users, which believe
the connection is established and ready for use.

This is a Measurement Factory project

=== modified file 'src/comm.cc'
--- src/comm.cc	2015-04-26 16:48:02 +
+++ src/comm.cc	2015-05-08 15:51:02 +
@@ -623,40 +623,41 @@
 return Comm::ERR_PROTOCOL;
 }
 
 address.getAddrInfo(AI, F->sock_family);
 
 /* Establish connection. */
 int xerrno = 0;
 
 if (!F-flags.called_connect) {
 F-flags.called_connect = true;
 ++ statCounter.syscalls.sock.connects;
 
 x = connect(sock, AI->ai_addr, AI->ai_addrlen);
 
 // XXX: ICAP code refuses callbacks during a pending comm_ call
 // Async calls development will fix this.
 if (x == 0) {
 x = -1;
 xerrno = EINPROGRESS;
 } else if (x < 0) {
+xerrno = errno;
 debugs(5, 5, "comm_connect_addr: sock=" << sock << ", addrinfo( " <<
        " flags=" << AI->ai_flags <<
        ", family=" << AI->ai_family <<
        ", socktype=" << AI->ai_socktype <<
        ", protocol=" << AI->ai_protocol <<
        ", &addr=" << AI->ai_addr <<
        ", addrlen=" << AI->ai_addrlen << " )");
 debugs(5, 9, "connect FD " << sock << ": (" << x << ") " << xstrerr(xerrno));
 debugs(14, 9, "connecting to: " << address);
 }
 }
 
 } else {
 errno = 0;
 #if _SQUID_NEWSOS6_
 /* Makoto MATSUSHITA matus...@ics.es.osaka-u.ac.jp */
 if (connect(sock, AI->ai_addr, AI->ai_addrlen) < 0)
 xerrno = errno;
 
 if (xerrno == EINVAL) {



Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-28 Thread Tsantilas Christos

Patch applied to trunk as r14046.


On 04/27/2015 06:40 PM, Tsantilas Christos wrote:

If there is not any objection I will apply this patch to trunk.


On 04/15/2015 07:11 PM, Tsantilas Christos wrote:

Hi all,
  I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in
patch r13780, but unfortunately Measurement Factory customers reported
problems:

1. Squid closed connections with partially received requests when they
reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
pconn_lifetime, the connection hangs if the request is sent after the
waiting period.

3. The connection also hangs if the initial request is starting to be
transmitted but then there is a longer pause before the request is
completed.

Please read the patch preamble for more information.

This is a Measurement Factory project.




Re: [squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-27 Thread Tsantilas Christos

If there is not any objection I will apply this patch to trunk.


On 04/15/2015 07:11 PM, Tsantilas Christos wrote:

Hi all,
  I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in
patch r13780, but unfortunately Measurement Factory customers reported
problems:

1. Squid closed connections with partially received requests when they
reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
pconn_lifetime, the connection hangs if the request is sent after the
waiting period.

3. The connection also hangs if the initial request is starting to be
transmitted but then there is a longer pause before the request is
completed.

Please read the patch preamble for more information.

This is a Measurement Factory project.




Re: [squid-dev] [PATCH] Secure ICAP

2015-04-23 Thread Tsantilas Christos

A new patch for Secure ICAP.
This is synced with the latest trunk.

It also addresses Amos's requests.

Regards,
   Christos

On 04/10/2015 04:51 AM, Amos Jeffries wrote:

On 10/04/2015 2:43 a.m., Tsantilas Christos wrote:


This patch adds support for ICAP services that require SSL/TLS transport
connections.

To mark an ICAP service as secure, use an icaps:// service URI
scheme when listing your service via an icap_service directive.


The squid.conf parsing code contradicts that - it expects ... ssl
icap://


Inside the Adaptation::Icap::ServiceRep::finalize() method, if the protocol 
is icaps, the secure.encryptTransport flag is set to true. It 
can be used to enable Secure ICAP without any ssl option.







Squid uses port 11344 for Secure ICAP by default, following another
popular proxy convention. The old 1344 default for plain ICAP ports has
not changed.

This patch should be applied after the server_name and splicing-resumed-
sessions patches are applied to trunk, and after being re-merged with the
trunk. However, we can start the discussion if you agree.


Technical Details
==

This patch:
   - Splits Ssl::PeerConnector class into Ssl::PeerConnector parent and
two kids: Ssl::BlindPeerConnector, a basic SSL connector for
cache_peers, and Ssl::PeekingPeerConnector, a peek-and-splice SSL
connector for HTTP servers.

   - Adds a third Ssl::IcapPeerConnector kid to connect to Secure ICAP
servers.

   - Fixes ErrorState class to avoid crashes on nil ErrorState::request
member. (Ssl::IcapPeerConnector may generate an ErrorState with a nil
request).

   - Modifies the ACL peername to use the Secure ICAP server name as
value while connecting to an ICAP server. This is useful to make SSL
certificate  policies based on ICAP server name. However, this change is
undocumented until we decide whether a dedicated ACL would be better.


This is a Measurement Factory project.



Initial audit:

* Watch for HERE macros in the new, changed or moved code. I see at
least one being added in PeerConnector.cc


fixed






in src/acl/FilledChecklist.h:

* please use an SBuf for dst_peer_name
  - that will also prevent memory leaks in
Ssl::IcapPeerConnector::initializeSsl().


ok.




in src/adaptation/ServiceConfig.cc:

* the new ssl option needs some polishing.
  - for the dependency WARNING it should have:
debugs(3, DBG_PARSE_NOTE(DBG_IMPORTANT), "WARNING: ... ICAP service
option ignored.");
  - since this is all new code please use "tls-" as the TLS/SSL options
prefix.


OK
I am not documenting the tls- prefix of the options in cf.data.pre,
but it works.




in src/cf.data.pre:
* the new security options section title should have s/HTTPS/ICAPS/


OK.





in src/adaptation/icap/ServiceRep.cc:

* please create/use #define DEFAULT_ICAP_PORT and DEFAULT_ICAPS_PORT
instead of magic port numbers.




ok.
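
For reference, the suggested constants might look like this (a sketch
only; the names follow the review request above and the values come from
the patch preamble, but the committed form may differ):

  // assumed names/values, not necessarily the committed ones
  #define DEFAULT_ICAP_PORT   1344   // plain ICAP
  #define DEFAULT_ICAPS_PORT  11344  // Secure ICAP over TLS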


in src/adaptation/icap/ServiceRep.h:

* please use Security::ContextPointer instead of SSL_CTX* for sslContext.
  - and it can go outside the USE_OPENSSL wrapper


ok




in src/adaptation/icap/Xaction.cc:

* Adaptation::Icap::Xaction::noteCommClosed() does not need to wrap
securer use with USE_OPENSSL.


fixed.





in src/err_detail_type.h:
* what's with the change to ERR_DETAIL_REDIRECTOR_TIMEDOUT?
   - if that's a bug-fix you found along the way it should go in
separately for backporting.


Yes, it is a bug.
I committed it as a separate patch. It is a trunk-only bug.





NOTE: I've not done much audit on the PeerConnector changes, my brain is
too fuzzy right now. Though I am liking the class split.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



Secure ICAP

This patch adds support for ICAP services that require SSL/TLS transport
connections. The same options used for the cache_peer directive are used for
the icap_service directive, with similar certificate validation logic.

To mark an ICAP service as secure, use an icaps:// service URI scheme when
listing your service via an icap_service directive. The industry is using a
"Secure ICAP" term, and Squid follows that convention, but "icaps" seems more
appropriate for a _scheme_ name.

Squid uses port 11344 for Secure ICAP by default, following another popular
proxy convention. The old 1344 default for plain ICAP ports has not changed.
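
For example, a minimal squid.conf sketch (the service name, host, and
path below are hypothetical):

  icap_enable on
  # respmod service over TLS; 11344 is the Secure ICAP default port
  icap_service svcResp respmod_precache icaps://icap.example.com:11344/respmod
  adaptation_access svcResp allow all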


Technical Details
==

This patch:
  - Splits Ssl::PeerConnector class into Ssl::PeerConnector parent and two kids:
Ssl::BlindPeerConnector, a basic SSL connector for cache_peers, and
Ssl::PeekingPeerConnector, a peek-and-splice SSL connector for HTTP servers.

  - Adds a third Ssl::IcapPeerConnector kid to connect to Secure ICAP servers.

  - Fixes ErrorState class to avoid crashes on nil ErrorState::request member.
(Ssl::IcapPeerConnector may generate an ErrorState with a nil request).

  - Modifies the ACL peername to use the Secure ICAP server name as value while
connecting to an ICAP server. This is useful

Re: [squid-dev] [PATCH] Negotiate Kerberos authentication request size exceeds output buffer size

2015-04-16 Thread Tsantilas Christos

applied to trunk as r14021
I am also attaching the patch for squid-3.5, the trunk patch does not 
apply cleanly.




On 04/16/2015 02:32 PM, Amos Jeffries wrote:

On 16/04/2015 8:51 p.m., Tsantilas Christos wrote:

A more complete patch. It handles the cases where snprintf returns an
error.
If there are no objections I will apply this one to trunk.


+1.

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



Negotiate Kerberos authentication request size exceeds output buffer size.

Despite the "must match" comment, MAX_AUTHTOKEN_LEN in
auth/UserRequest.h got out of sync with similar constants in Negotiate helpers.
A 32KB buffer cannot fit some helper requests (e.g., those carrying Privilege
Account Certificate information in the client's Kerberos ticket). Each truncated 
request blocks the negotiate helper channel, eventually causing helper queue
overflow and possibly killing Squid.

This patch increases MAX_AUTHTOKEN_LEN in UserRequest.h to 65535 which
is also the maximum used by the negotiate helpers. The patch also adds checks
to avoid sending truncated requests, treating them as helper errors instead.

This is a Measurement Factory project.
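
The added checks boil down to inspecting the snprintf() return value; a
standalone C++ sketch of the pattern (identifiers are illustrative, not
Squid's):

  #include <cstdio>

  // snprintf() returns the length the output would have had, so a result
  // >= the buffer size means truncation; a negative result means an error.
  static bool formatToken(char *buf, size_t size, const char *blob)
  {
      const int n = snprintf(buf, size, "YR %s\n", blob);
      return n >= 0 && static_cast<size_t>(n) < size; // false => helper error
  }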

=== modified file 'src/auth/UserRequest.h'
--- src/auth/UserRequest.h	2015-01-13 09:13:49 +
+++ src/auth/UserRequest.h	2015-04-16 13:24:05 +
@@ -10,42 +10,42 @@
 #define SQUID_AUTH_USERREQUEST_H
 
 #if USE_AUTH
 
 #include "AccessLogEntry.h"
 #include "auth/AuthAclState.h"
 #include "auth/Scheme.h"
 #include "auth/User.h"
 #include "dlink.h"
 #include "helper/forward.h"
 #include "HttpHeader.h"
 #include "ip/Address.h"
 
 class ConnStateData;
 class HttpReply;
 class HttpRequest;
 
 /**
  * Maximum length (buffer size) for token strings.
  */
-// AYJ: must match re-definition in helpers/negotiate_auth/kerberos/negotiate_kerb_auth.cc
-#define MAX_AUTHTOKEN_LEN   32768
+// XXX: Keep in sync with all others: bzr grep 'define MAX_AUTHTOKEN_LEN'
+#define MAX_AUTHTOKEN_LEN   65535
 
 /**
  * Node used to link an IP address to some user credentials
  * for the max_user_ip ACL feature.
  */
 class AuthUserIP
 {
 public:
 AuthUserIP(const Ip::Address &ip, time_t t) : ipaddr(ip), ip_expiretime(t) {}
 
 dlink_node node;
 
 /// IP address this user authenticated from
 Ip::Address ipaddr;
 
 /** When this IP should be forgotten.
  * Set to the time of last request made from this
  * (user,IP) pair plus authenticate_ip_ttl seconds
  */
 time_t ip_expiretime;

=== modified file 'src/auth/negotiate/UserRequest.cc'
--- src/auth/negotiate/UserRequest.cc	2015-02-01 09:14:12 +
+++ src/auth/negotiate/UserRequest.cc	2015-04-16 13:23:20 +
@@ -51,45 +51,54 @@
 {
 return NULL;
 }
 
 int
 Auth::Negotiate::UserRequest::authenticated() const
 {
 if (user() != NULL && user()->credentials() == Auth::Ok) {
 debugs(29, 9, HERE << "user authenticated.");
 return 1;
 }
 
 debugs(29, 9, HERE << "user not fully authenticated.");
 return 0;
 }
 
 const char *
 Auth::Negotiate::UserRequest::credentialsStr()
 {
 static char buf[MAX_AUTHTOKEN_LEN];
+int printResult = 0;
 if (user()->credentials() == Auth::Pending) {
-snprintf(buf, sizeof(buf), "YR %s\n", client_blob); //CHECKME: can ever client_blob be 0 here?
+printResult = snprintf(buf, sizeof(buf), "YR %s\n", client_blob); //CHECKME: can ever client_blob be 0 here?
 } else {
-snprintf(buf, sizeof(buf), "KK %s\n", client_blob);
+printResult = snprintf(buf, sizeof(buf), "KK %s\n", client_blob);
 }
+
+// truncation is OK because we are used only for logging
+if (printResult < 0) {
+debugs(29, 2, "Can not build negotiate authentication credentials.");
+buf[0] = '\0';
+} else if (printResult >= (int)sizeof(buf))
+debugs(29, 2, "Negotiate authentication credentials truncated.");
+
 return buf;
 }
 
 Auth::Direction
 Auth::Negotiate::UserRequest::module_direction()
 {
 /* null auth_user is checked for by Auth::UserRequest::direction() */
 
 if (waiting || client_blob)
 return Auth::CRED_LOOKUP; /* need helper response to continue */
 
 if (user()->auth_type != Auth::AUTH_NEGOTIATE)
 return Auth::CRED_ERROR;
 
 switch (user()-credentials()) {
 
 case Auth::Handshake:
 assert(server_blob);
 return Auth::CRED_CHALLENGE;
 
@@ -108,50 +117,60 @@
 void
 Auth::Negotiate::UserRequest::startHelperLookup(HttpRequest *req, AccessLogEntry::Pointer al, AUTHCB * handler, void *data)
 {
 static char buf[MAX_AUTHTOKEN_LEN];
 
 assert(data);
 assert(handler);
 
 assert(user() != NULL);
 assert(user()-auth_type == Auth::AUTH_NEGOTIATE);
 
 if (static_cast<Auth::Negotiate::Config*>(Auth::Config::Find("negotiate"))->authenticateProgram == NULL) {
 debugs(29, DBG_CRITICAL, "ERROR: No Negotiate authentication program configured.");
 handler(data);
 return

[squid-dev] [PATCH] Negotiate Kerberos authentication request size exceeds output buffer size

2015-04-15 Thread Tsantilas Christos

Despite the "must match" comment, MAX_AUTHTOKEN_LEN in
auth/UserRequest.h got out of sync with similar constants in Negotiate 
helpers. A 32KB buffer cannot fit some helper requests (e.g., those 
carrying Privilege Account Certificate information in the client's 
Kerberos ticket). Each truncated  request blocks the negotiate helper 
channel, eventually causing helper queue overflow and possibly killing 
Squid.


This patch increases MAX_AUTHTOKEN_LEN in UserRequest.h to 65535 which
is also the maximum used by the negotiate helpers. The patch also adds 
checks to avoid sending truncated requests, treating them as helper 
errors instead.


This is a Measurement Factory project
Negotiate Kerberos authentication request size exceeds output buffer size.

Despite the "must match" comment, MAX_AUTHTOKEN_LEN in
auth/UserRequest.h got out of sync with similar constants in Negotiate helpers.
A 32KB buffer cannot fit some helper requests (e.g., those carrying Privilege
Account Certificate information in the client's Kerberos ticket). Each truncated 
request blocks the negotiate helper channel, eventually causing helper queue
overflow and possibly killing Squid.

This patch increases MAX_AUTHTOKEN_LEN in UserRequest.h to 65535 which
is also the maximum used by the negotiate helpers. The patch also adds checks
to avoid sending truncated requests, treating them as helper errors instead.

This is a Measurement Factory project.

=== modified file 'src/auth/UserRequest.h'
--- src/auth/UserRequest.h	2015-01-13 07:25:36 +
+++ src/auth/UserRequest.h	2015-04-15 14:15:15 +
@@ -10,42 +10,42 @@
 #define SQUID_AUTH_USERREQUEST_H
 
 #if USE_AUTH
 
 #include "AccessLogEntry.h"
 #include "auth/AuthAclState.h"
 #include "auth/Scheme.h"
 #include "auth/User.h"
 #include "dlink.h"
 #include "helper/forward.h"
 #include "HttpHeader.h"
 #include "ip/Address.h"
 
 class ConnStateData;
 class HttpReply;
 class HttpRequest;
 
 /**
  * Maximum length (buffer size) for token strings.
  */
-// AYJ: must match re-definition in helpers/negotiate_auth/kerberos/negotiate_kerb_auth.cc
-#define MAX_AUTHTOKEN_LEN   32768
+// XXX: Keep in sync with all others: bzr grep 'define MAX_AUTHTOKEN_LEN'
+#define MAX_AUTHTOKEN_LEN   65535
 
 /**
  * Node used to link an IP address to some user credentials
  * for the max_user_ip ACL feature.
  */
 class AuthUserIP
 {
 MEMPROXY_CLASS(AuthUserIP);
 
 public:
 AuthUserIP(const Ip::Address &ip, time_t t) : ipaddr(ip), ip_expiretime(t) {}
 
 dlink_node node;
 
 /// IP address this user authenticated from
 Ip::Address ipaddr;
 
 /** When this IP should be forgotten.
  * Set to the time of last request made from this
  * (user,IP) pair plus authenticate_ip_ttl seconds

=== modified file 'src/auth/negotiate/UserRequest.cc'
--- src/auth/negotiate/UserRequest.cc	2015-01-31 18:12:07 +
+++ src/auth/negotiate/UserRequest.cc	2015-04-15 14:26:56 +
@@ -52,45 +52,50 @@
 {
 return NULL;
 }
 
 int
 Auth::Negotiate::UserRequest::authenticated() const
 {
 if (user() != NULL && user()->credentials() == Auth::Ok) {
 debugs(29, 9, HERE << "user authenticated.");
 return 1;
 }
 
 debugs(29, 9, HERE << "user not fully authenticated.");
 return 0;
 }
 
 const char *
 Auth::Negotiate::UserRequest::credentialsStr()
 {
 static char buf[MAX_AUTHTOKEN_LEN];
+size_t written = 0;
 if (user()->credentials() == Auth::Pending) {
-snprintf(buf, sizeof(buf), "YR %s\n", client_blob); //CHECKME: can ever client_blob be 0 here?
+written = snprintf(buf, sizeof(buf), "YR %s\n", client_blob); //CHECKME: can ever client_blob be 0 here?
 } else {
-snprintf(buf, sizeof(buf), "KK %s\n", client_blob);
+written = snprintf(buf, sizeof(buf), "KK %s\n", client_blob);
 }
+
+if (written >= sizeof(buf))
+debugs(29, 2, "Negotiate authentication credentials truncated.");
+
 return buf;
 }
 
 Auth::Direction
 Auth::Negotiate::UserRequest::module_direction()
 {
 /* null auth_user is checked for by Auth::UserRequest::direction() */
 
 if (waiting || client_blob)
 return Auth::CRED_LOOKUP; /* need helper response to continue */
 
 if (user()->auth_type != Auth::AUTH_NEGOTIATE)
 return Auth::CRED_ERROR;
 
 switch (user()-credentials()) {
 
 case Auth::Handshake:
 assert(server_blob);
 return Auth::CRED_CHALLENGE;
 
@@ -109,50 +114,57 @@
 void
 Auth::Negotiate::UserRequest::startHelperLookup(HttpRequest *, AccessLogEntry::Pointer al, AUTHCB * handler, void *data)
 {
 static char buf[MAX_AUTHTOKEN_LEN];
 
 assert(data);
 assert(handler);
 
 assert(user() != NULL);
 assert(user()-auth_type == Auth::AUTH_NEGOTIATE);
 
 if (static_cast<Auth::Negotiate::Config*>(Auth::Config::Find("negotiate"))->authenticateProgram == NULL) {
 debugs(29, DBG_CRITICAL, "ERROR: No Negotiate authentication program configured.");
 handler(data);
 return;
 }
 
 debugs(29, 8, HERE << "credentials 

[squid-dev] [PATCH] pconn_lifetime robustness fixes

2015-04-15 Thread Tsantilas Christos

Hi all,
 I am attaching a patch which fixes the pconn_lifetime feature.
We had a long discussion about this feature, which resulted in 
patch r13780, but unfortunately Measurement Factory customers reported 
problems:


1. Squid closed connections with partially received requests when they 
reached pconn_lifetime limit. We should only close _idle_ connections.


2. When connecting to Squid without sending anything for longer than 
pconn_lifetime, the connection hangs if the request is sent after the 
waiting period.


3. The connection also hangs if the initial request is starting to be 
transmitted but then there is a longer pause before the request is 
completed.


Please read the patch preamble for more information.

This is a Measurement Factory project.
pconn_lifetime robustness fixes

This patch changes pconn_lifetime (r13780) to abort only really idle 
persistent connections (when they time out). It removes some extra features
(added to pconn_lifetime during the feature review) because they break things
when aggressive timeouts are combined with picky clients. Specifically,

1. Squid closed connections with partially received requests when they
   reached pconn_lifetime limit. We should only close _idle_ connections.

2. When connecting to Squid without sending anything for longer than
   pconn_lifetime, the connection hangs if the request is sent after the
   waiting period.

3. The connection also hangs if the initial request is starting to be
   transmitted but then there is a longer pause before the request is
   completed.

Most of the above problems are easy to trigger only when using very aggressive
pconn_lifetime settings that the feature was not designed for, but they still
can be considered bugs from the admin's point of view. Fixes:

* Do not stop reading a partially received request when we are timing out,
  to avoid aborting that request.

* Do not set keepalive flag based on the pconn_lifetime timeout. We cannot
  predict whether some new request data is going to be read (and reset the
  idle timeout clock) before our Connection:close response is sent back.

HTTP clients are supposed to recover from such races, but some apparently
do not, especially if it is their first request on the connection.

This is a Measurement Factory project.
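
For context, pconn_lifetime is a regular squid.conf timeout directive; a
hypothetical aggressive setting of the kind that exposed these problems
might look like:

  # close idle persistent connections that have lived longer than this
  pconn_lifetime 30 seconds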

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-03-16 09:52:13 +
+++ src/client_side.cc	2015-03-23 09:53:55 +
@@ -3045,46 +3045,40 @@
  */
 bool
 ConnStateData::clientParseRequests()
 {
 bool parsed_req = false;
 
 debugs(33, 5, HERE << clientConnection << ": attempting to parse");
 
 // Loop while we have read bytes that are not needed for producing the body
 // On errors, bodyPipe may become nil, but readMore will be cleared
 while (!in.buf.isEmpty() && !bodyPipe && flags.readMore) {
 
 /* Don't try to parse if the buffer is empty */
 if (in.buf.isEmpty())
 break;
 
 /* Limit the number of concurrent requests */
 if (concurrentRequestQueueFilled())
 break;
 
-/*Do not read more requests if persistent connection lifetime exceeded*/
-if (Config.Timeout.pconnLifetime && clientConnection->lifeTime() > Config.Timeout.pconnLifetime) {
-flags.readMore = false;
-break;
-}
-
 // try to parse the PROXY protocol header magic bytes
 if (needProxyProtocolHeader_  !parseProxyProtocolHeader())
 break;
 
 if (ClientSocketContext *context = parseOneRequest()) {
 debugs(33, 5, clientConnection << ": done parsing a request");
 
 AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "clientLifetimeTimeout",
  CommTimeoutCbPtrFun(clientLifetimeTimeout, context->http));
 commSetConnTimeout(clientConnection, Config.Timeout.lifetime, timeoutCall);
 
 context->registerWithConn();
 
 processParsedRequest(context);
 
 parsed_req = true; // XXX: do we really need to parse everything right NOW ?
 
 if (context->mayUseConnection()) {
 debugs(33, 3, HERE << "Not parsing new requests, as this request may need the connection");
 break;

=== modified file 'src/client_side_reply.cc'
--- src/client_side_reply.cc	2015-01-13 07:25:36 +
+++ src/client_side_reply.cc	2015-03-23 09:59:02 +
@@ -1487,44 +1487,40 @@
 } else if (reply->bodySize(request->method) < 0 && !maySendChunkedReply) {
 debugs(88, 3, "clientBuildReplyHeader: can't keep-alive, unknown body size");
 request->flags.proxyKeepalive = false;
 } else if (fdUsageHigh() && !request->flags.mustKeepalive) {
 debugs(88, 3, "clientBuildReplyHeader: Not many unused FDs, can't keep-alive");
 request->flags.proxyKeepalive = false;
 } else if (request->flags.sslBumped && !reply->persistent()) {
 // We do not really have to close, but we pretend we are a tunnel.
 debugs(88, 3, 

Re: [squid-dev] [PATCH] splicing resumed sessions

2015-04-14 Thread Tsantilas Christos

Hi Amos,

   I made a new patch for squid-3.5. Use this one; it should be OK.
It includes changes from r14013.



On 04/13/2015 03:49 PM, Amos Jeffries wrote:

On 11/04/2015 10:01 p.m., Tsantilas Christos wrote:

Patch applied as r14012.
I am attaching the t13 patch for squid-3.5 too.



I've backported the server_name ACL patch before this one and your 3.5
patch does not seem to apply well on top of it.

However the regular backport method "bzr merge -r14011..14013 trunk"
seems to have only one minor collision.

I'm unsure its doing the right things though. Comparing your patch to
the changeset bzr merge produces I see these two odd chunks (in relation
to the server_name ACL changes):

--- christos.patch  2015-04-13 04:54:19.678645218 -0700
+++ bzr_merge.patch   2015-04-13 04:33:23.514618772 -0700
@@ -599,19 +598,15 @@
  +return helloMsgSize;
  +}
  +
- bool
--Ssl::Bio::sslFeatures::get(const unsigned char *hello)
++bool
  +Ssl::Bio::sslFeatures::checkForCcsOrNst(const unsigned char *msg,
size_t size)
- {
--// The SSL handshake message should starts with a 0x16 byte
--if (hello[0] == 0x16) {
--return parseV3Hello(hello);
++{
  +while (size > 5) {
  +const int msgType = msg[0];
  +const int msgSslVersion = (msg[1] << 8) | msg[2];
  +debugs(83, 7, "SSL Message Version :" << std::hex <<
std::setw(8) << std::setfill('0') << msgSslVersion);
  +// Check for Change Cipher Spec message
-+// RFC5246 section 6.2.1
++// RFC5246 section 6.2.1
  +if (msgType == 0x14) {// Change Cipher Spec message found
  +debugs(83, 7, "SSL Change Cipher Spec message found");
  +return true;
@@ -636,9 +631,13 @@
  +return false;
  +}
  +
-+bool
+ bool
+-Ssl::Bio::sslFeatures::get(const unsigned char *hello)
  +Ssl::Bio::sslFeatures::get(const MemBuf &buf, bool record)
-+{
+ {
+-// The SSL handshake message should starts with a 0x16 byte
+-if (hello[0] == 0x16) {
+-return parseV3Hello(hello);
  +int msgSize;
  +if ((msgSize = parseMsgHead(buf)) <= 0) {
  +debugs(83, 7, "Not a known SSL handshake message");


If you can try the bzr merge and confirm the diff/changeset is correct I
will do that normal cherrypicking backport.

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



Splicing resuming sessions

This patch adds code in squid to control SslBump behavior when dealing with
resuming SSL/TLS sessions. Without these changes, SslBump usually terminates
all resuming sessions with an error because such sessions do not include
server certificates, preventing Squid from successfully validating the server
identity.

After these changes, Squid splices resuming sessions. Splicing is the right
choice because Squid most likely has spliced the original connections that the
client and server are trying to resume now.
Without SslBump, session resumption would just work, and SslBump behaviour
should approach that ideal.

Future projects may add ACL checks for allowing resuming sessions and may
add more complex algorithms, including maintaining an SMP-shared
cache of sessions that may be resumed in the future and evaluating
client/server attempts to resume a session using that cache.

This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

Also add support for the NPN (Next Protocol Negotiation) and ALPN
(Application-Layer Protocol Negotiation) TLS extensions, required to correctly
bump web clients that support these extensions.

Technical details
-

In Peek mode, the old Squid code would forward the client Hello message to the
server. If the server tries to resume the previous (spliced) SSL session with
the client, then Squid SSL code gets an ssl/PeerConnector.cc "ccs received
early" error (or similar) because the Squid SSL object expects a server
certificate and does not know anything about the session being resumed.

With this patch, Squid detects session resumption attempts and splices

Session resumption detection


There are two mechanisms in SSL/TLS for resuming sessions. The traditional
shared session IDs and the TLS ticket extensions:

* If Squid detects a shared ID in both client and server Hello messages, then
Squid decides whether the session is being resumed by comparing those client
and server shared IDs. If (and only if) the IDs are the same, then Squid
assumes that it is dealing with a resuming session (using session IDs).

* If Squid detects a TLS ticket in the client Hello message and TLS ticket
support in the server Hello message as well as a Change Cipher Spec or a New
TLS Ticket message (following the server Hello message), then (and only then)
Squid assumes that it is dealing with a resuming session (using TLS tickets).

The TLS tickets check is not performed if Squid detects a shared session ID
in both client

Re: [squid-dev] [PATCH] splicing resumed sessions

2015-04-11 Thread Tsantilas Christos

Patch applied as r14012.
I am attaching the t13 patch for squid-3.5 too.


On 04/11/2015 06:18 AM, Amos Jeffries wrote:

On 11/04/2015 1:49 a.m., Tsantilas Christos wrote:

I am attaching patch for trunk and squid-3.5



Thank you. Looks pretty good now.


On 04/09/2015 04:13 PM, Amos Jeffries wrote:


* Ssl::Bio::sslFeatures::parseV3Hello()
   - similar issues with s/Client Hello/ClientHello/ and SSL Extension
as above.


I did it only for the comments added or modified by this patch, to avoid
increasing the size of this patch.
If required we can do it as a separate patch.


There's still one s/SSL Extension/TLS Extension/ near the end of this method.


* Ssl::Bio::sslFeatures::print()
   - seems to be lacking display of ALPN received
   - missing the format details for sslVersion used elsewhere:
   std::hex << std::setw(8) << std::setfill('0')



I did not fix this. The ALPN is in an encoded form and requires some
development to correctly print it. We do not gain much by
implementing this.
Also, Ssl::Bio::sslFeatures::print() is currently unused; it is here
for debugging if required.



That means the print() will be incomplete, and needs a TODO added about
the above.





* parseMsgHead() documentation about return result 0 is wrong.
- it does not return 0 when the contents of the buffer are a TLS
(non-SSL) message.


This is what it does!



The .h comment says it will return a negative number if the Hello
message is not SSL. TLS != SSL.

For the case of head[0] == 0x16 the only SSL indicator (SSLv3) is
*exactly* {0x16, 0x03, 0x00}

For TLS versions that final 0x00 byte changes. The method correctly
accepts those and returns the Hello size (N > 0) - in contradiction to
what is documented.
The comment should either say "SSLv3 or TLS", or not mention the
protocol name at all.


OK, I made these fixes too.




+1. I don't think this needs another review; once those comments are
added/updated it can merge.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



Splicing resuming sessions

This patch adds code in squid to control SslBump behavior when dealing with
resuming SSL/TLS sessions. Without these changes, SslBump usually terminates
all resuming sessions with an error because such sessions do not include
server certificates, preventing Squid from successfully validating the server
identity.

After these changes, Squid splices resuming sessions. Splicing is the right
choice because Squid most likely has spliced the original connections that the
client and server are trying to resume now.
Without SslBump, session resumption would just work, and SslBump behaviour
should approach that ideal.

Future projects may add ACL checks for allowing resuming sessions and may
add more complex algorithms, including maintaining an SMP-shared
cache of sessions that may be resumed in the future and evaluating
client/server attempts to resume a session using that cache.

This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

Also add support for the NPN (Next Protocol Negotiation) and ALPN
(Application-Layer Protocol Negotiation) TLS extensions, required to correctly
bump web clients that support these extensions.

Technical details
-

In Peek mode, the old Squid code would forward the client Hello message to the
server. If the server tries to resume the previous (spliced) SSL session with
the client, then Squid SSL code gets an ssl/PeerConnector.cc "ccs received
early" error (or similar) because the Squid SSL object expects a server
certificate and does not know anything about the session being resumed.

With this patch, Squid detects session resumption attempts and splices


Session resumption detection


There are two mechanisms in SSL/TLS for resuming sessions. The traditional
shared session IDs and the TLS ticket extensions:

* If Squid detects a shared ID in both client and server Hello messages, then
Squid decides whether the session is being resumed by comparing those client
and server shared IDs. If (and only if) the IDs are the same, then Squid
assumes that it is dealing with a resuming session (using session IDs).

* If Squid detects a TLS ticket in the client Hello message and TLS ticket
support in the server Hello message as well as a Change Cipher Spec or a New
TLS Ticket message (following the server Hello message), then (and only then)
Squid assumes that it is dealing with a resuming session (using TLS tickets).

The TLS tickets check is not performed if Squid detects a shared session ID
in both client and server Hello messages.
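
A compact sketch of that decision logic (a standalone illustration with
made-up names, not Squid's actual parser state):

  #include <string>

  struct HelloInfo {
      std::string sessionId;   // empty if the Hello carries no session ID
      bool hasTicket;          // TLS ticket (support) seen in the Hello
      bool sawCcsOrNewTicket;  // CCS or New TLS Ticket after the server Hello
  };

  static bool looksResumed(const HelloInfo &client, const HelloInfo &server)
  {
      // Rule 1: matching shared session IDs; when both IDs are present,
      // the ticket check below is skipped entirely.
      if (!client.sessionId.empty() && !server.sessionId.empty())
          return client.sessionId == server.sessionId;
      // Rule 2: client ticket, server ticket support, and CCS/New-Ticket.
      return client.hasTicket && server.hasTicket && server.sawCcsOrNewTicket;
  }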

NPN and ALPN tls extensions
---

Even if Squid has some SSL Hello message parsing code, we rely on
OpenSSL for full parsing. OpenSSL is used in peek-and-splice mode to parse the
server Hello message, check for errors, and verify the server

[squid-dev] [PATCH] Secure ICAP

2015-04-09 Thread Tsantilas Christos


This patch adds support for ICAP services that require SSL/TLS transport
connections.

To mark an ICAP service as secure, use an icaps:// service URI 
scheme when listing your service via an icap_service directive.


Squid uses port 11344 for Secure ICAP by default, following another 
popular proxy convention. The old 1344 default for plain ICAP ports has 
not changed.


This patch should be applied after the server_name and splicing resumed 
sessions patches are applied to trunk, and after being re-merged with trunk.

However, we can start the discussion now if you agree.


Technical Details
==

This patch:
  - Splits Ssl::PeerConnector class into Ssl::PeerConnector parent and 
two kids: Ssl::BlindPeerConnector, a basic SSL connector for 
cache_peers, and Ssl::PeekingPeerConnector, a peek-and-splice SSL 
connector for HTTP servers.


  - Adds a third Ssl::IcapPeerConnector kid to connect to Secure ICAP 
servers.


  - Fixes ErrorState class to avoid crashes on nil ErrorState::request 
member. (Ssl::IcapPeerConnector may generate an ErrorState with a nil 
request).


  - Modifies the ACL peername to use the Secure ICAP server name as 
value while connecting to an ICAP server. This is useful to make SSL 
certificate  policies based on ICAP server name. However, this change is 
undocumented until we decide whether a dedicated ACL would be better.



This is a Measurement Factory project.
Secure ICAP

This patch adds support for ICAP services that require SSL/TLS transport
connections. The same options used for the cache_peer directive are used for
the icap_service directive, with similar certificate validation logic.

To mark an ICAP service as secure, use an icaps:// service URI scheme when
listing your service via an icap_service directive. The industry is using a
"Secure ICAP" term, and Squid follows that convention, but "icaps" seems more
appropriate for a _scheme_ name.

Squid uses port 11344 for Secure ICAP by default, following another popular proxy
convention. The old 1344 default for plain ICAP ports has not changed.


Technical Details
==

This patch:
  - Splits Ssl::PeerConnector class into Ssl::PeerConnector parent and two kids:
Ssl::BlindPeerConnector, a basic SSL connector for cache_peers, and
Ssl::PeekingPeerConnector, a peek-and-splice SSL connector for HTTP servers.

  - Adds a third Ssl::IcapPeerConnector kid to connect to Secure ICAP servers.

  - Fixes ErrorState class to avoid crashes on nil ErrorState::request member.
(Ssl::IcapPeerConnector may generate an ErrorState with a nil request).

  - Modifies the ACL peername to use the Secure ICAP server name as value while
connecting to an ICAP server. This is useful to make SSL certificate 
policies based on ICAP server name. However, this change is undocumented
until we decide whether a dedicated ACL would be better.


This is a Measurement Factory project.


=== modified file 'src/FwdState.cc'
--- src/FwdState.cc	2015-03-20 15:10:07 +
+++ src/FwdState.cc	2015-04-06 16:15:04 +
@@ -678,42 +678,42 @@
 
 debugs(17, 3, HERE << serverConnection() << ": '" << entry->url() << "'" );
 
 comm_add_close_handler(serverConnection()->fd, fwdServerClosedWrapper, this);
 
 if (serverConnection()->getPeer())
 peerConnectSucceded(serverConnection()->getPeer());
 
 #if USE_OPENSSL
 if (!request->flags.pinned) {
 if ((serverConnection()->getPeer() && serverConnection()->getPeer()->secure.encryptTransport) ||
 (!serverConnection()->getPeer() && request->url.getScheme() == AnyP::PROTO_HTTPS) ||
 request->flags.sslPeek) {
 
 HttpRequest::Pointer requestPointer = request;
 AsyncCall::Pointer callback = asyncCall(17,4,
 "FwdState::ConnectedToPeer",
 FwdStatePeerAnswerDialer(&FwdState::connectedToPeer, this));
 // Use positive timeout when less than one second is left.
 const time_t sslNegotiationTimeout = max(static_cast<time_t>(1), timeLeft());
-Ssl::PeerConnector *connector =
-new Ssl::PeerConnector(requestPointer, serverConnection(), clientConn, callback, sslNegotiationTimeout);
+Ssl::PeekingPeerConnector *connector =
+new Ssl::PeekingPeerConnector(requestPointer, serverConnection(), clientConn, callback, sslNegotiationTimeout);
 AsyncJob::Start(connector); // will call our callback
 return;
 }
 }
 #endif
 
 // if not encrypting just run the post-connect actions
 Security::EncryptorAnswer nil;
 connectedToPeer(nil);
 }
 
 void
 FwdState::connectedToPeer(Security::EncryptorAnswer answer)
 {
 if (ErrorState *error = answer.error.get()) {
 fail(error);
 answer.error.clear(); // preserve error for errorSendComplete()
 self = NULL;
 return;
 }
@@ -1234,41 +1234,41 @@
 if (!conn->getPeer() 

Re: [squid-dev] [PATCH] splicing resumed sessions

2015-04-09 Thread Tsantilas Christos

A new version of the patch.

This removes the ssl_bump_resuming_sessions directive and includes many 
fixes over the previous patch.
It also includes support for the NPN and ALPN TLS extensions, required to 
correctly bump SSL connections.
Please read the patch preamble carefully, especially the technical notes 
part.


The resumed sessions and NPN/ALPN extensions problems appeared in 
Squid after our decision to not allow splicing of connections for which 
we do not have access to the server certificates. Resumed sessions 
do not include server certificates, and the NPN/ALPN extensions cause 
OpenSSL to abort before retrieving and verifying server certificates.


The problem affects SSL bumping and makes it unusable in many cases. 
Many of the problems reported by users for squid-3.5 should be 
related to this.
So this patch should probably be applied to squid-3.5 too. If so, I will 
post the patch for squid-3.5 as well.


Regards,
   Christos



On 03/17/2015 07:21 PM, Tsantilas Christos wrote:

This patch adds the ssl_bump_resuming_sessions directive that controls
SslBump behavior when dealing with resuming SSL/TLS sessions. Without
these changes, SslBump usually terminates all resuming sessions with an
error because such sessions do not include server certificates,
preventing Squid from successfully validating the server identity.

After these changes, Squid either terminates or splices resuming
sessions, depending on configuration. Splicing is the right default
because Squid most likely has spliced the original connections that the
client and server are trying to resume now.  Most likely, the splicing
decision would not change now (but the lack of the server certificate
information means we cannot repeat the original ACL checks and need a
special directive to tell Squid what to do). Also, without SslBump,
session resumption would just work, and SslBump default should approach
that ideal.

In many deployment scenarios, this straightforward splice or terminate
resuming sessions implementation is exactly what the admin wants.
Future projects may add more complex algorithms, including maintaining
an SMP-shared cache of sessions that may be resumed in the future and
evaluating client/server attempts to resume a session using that cache.


Example:
   # splice all resuming sessions [this is the default]
   ssl_bump_resuming_sessions allow all

This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

This patch also prevents occasional segfaults when dealing with SSL
cache_peer negotiation failures.

The last two changes should be applied to squid-3.5 even if this patch will
not go into squid-3.5.

Regards,
Christos



Added ssl_bump_resuming_sessions to control treatment of resuming sessions
by SslBump.

This patch adds code in squid to control SslBump behavior when dealing with
resuming SSL/TLS sessions. Without these changes, SslBump usually terminates
all resuming sessions with an error because such sessions do not include
server certificates, preventing Squid from successfully validating the server
identity.

After these changes, Squid splices resuming sessions. Splicing is the right
choice because Squid most likely has spliced the original connections that the
client and server are trying to resume now.
Without SslBump, session resumption would just work, and SslBump behaviour
should approach that ideal.

Future projects may add ACL checks for allowing resuming sessions and may
add more complex algorithms, including maintaining an SMP-shared
cache of sessions that may be resumed in the future and evaluating
client/server attempts to resume a session using that cache.

This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

Also add support for the NPN (Next Protocol Negotiation) and ALPN
(Application-Layer Protocol Negotiation) TLS extensions, required to correctly
bump web clients that support these extensions.

Technical details
-

In Peek mode, the old Squid code would forward the client Hello message to the
server. If the server tries to resume the previous (spliced) SSL session with
the client, then Squid SSL code gets an ssl/PeerConnector.cc "ccs received
early" error (or similar) because the Squid SSL object expects a server
certificate and does not know anything about the session being resumed.

With this patch, Squid detects session resumption attempts and splices


Session resumption detection


There are two mechanisms in SSL/TLS for resuming sessions. The traditional
shared session IDs and the TLS ticket extensions:

* If Squid detects a shared ID in both client and server Hello messages, then
Squid decides whether the session is being resumed by comparing those client
and server shared IDs. If (and only if) the IDs are the same, then Squid
assumes that it is dealing with a resuming session (using session IDs).

* If Squid detects a TLS ticket

Re: [squid-dev] [PATCH] server_name ACL

2015-04-09 Thread Tsantilas Christos

Hi all,
 I am reposting this patch. It is updated to the latest squid-trunk.

In a discussion with Amos (during the period squid-dev was down):
  1) The server_name should be renamed to tls_server_name or 
ssl::server_name
  2) There is a bug in the Ssl::matchX509CommonNames function: the 
subjectAltName, if it exists, should be used instead of the subject name.


The (2) should be fixed as a separate issue/bug, and also applied to 
squid-3.5.


What about the (1) ?
The ssl:: prefix looks better because the new feature can be used for 
SSLv3 too; it does not depend on TLS. (However, I believe we should 
agree on and use one prefix for all of these features, to not confuse users.)



Regards,
   Christos

On 02/24/2015 10:29 PM, Tsantilas Christos wrote:

Hi all,


This patch adds server_name ACL matching server name(s) obtained from
various sources such as CONNECT request URI, client SNI, and SSL server
certificate CN.

During each SslBump step, Squid improves its understanding of a true
server name, with a bias towards server-provided (and Squid-validated)
information.

The server-provided server names are retrieved from the server
certificate CN and Subject Alternate Names. The new server_name ACL
matches any of alternate names and CN. If the CN or an alternate name is
a wildcard, then the new ACL matches any domain that matches the domain
with the wildcard.

Other than supporting many sources of server name information (including
sources that may supply Squid with multiple server name variants and
wildcards), the new ACL is similar to dstdomain.

Also added a server_name_regex ACL.


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



Add server_name ACL matching server name(s) obtained from various sources
such as CONNECT request URI, client SNI, and SSL server certificate CN.

During each SslBump step, Squid improves its understanding of a true server
name, with a bias towards server-provided (and Squid-validated) information.

The server-provided server names are retrieved from the server certificate CN
and Subject Alternate Names. The new server_name ACL matches any of alternate
names and CN. If the CN or an alternate name is a wildcard, then the new ACL
matches any domain that matches the domain with the wildcard.

Other than supporting many sources of server name information (including
sources that may supply Squid with multiple server name variants and
wildcards), the new ACL is similar to dstdomain.

Also added a server_name_regex ACL.


This is a Measurement Factory project.
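
A hypothetical squid.conf usage sketch (ACL names and domains are made up):

  acl nameIsBank server_name .mybank.example.com
  acl nameLooksVideo server_name_regex -i (youtube|vimeo)
  ssl_bump splice nameIsBank

Per the wildcard rule above, a certificate CN such as *.example.com would
let a server_name value like www.example.com match.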
=== modified file 'src/AclRegs.cc'
--- src/AclRegs.cc	2015-01-16 18:12:04 +
+++ src/AclRegs.cc	2015-02-13 11:39:50 +
@@ -57,40 +57,41 @@
 #include "acl/Note.h"
 #include "acl/NoteData.h"
 #include "acl/PeerName.h"
 #include "acl/Protocol.h"
 #include "acl/ProtocolData.h"
 #include "acl/Random.h"
 #include "acl/Referer.h"
 #include "acl/RegexData.h"
 #include "acl/ReplyHeaderStrategy.h"
 #include "acl/ReplyMimeType.h"
 #include "acl/RequestHeaderStrategy.h"
 #include "acl/RequestMimeType.h"
 #include "acl/SourceAsn.h"
 #include "acl/SourceDomain.h"
 #include "acl/SourceIp.h"
 #include "acl/SquidError.h"
 #include "acl/SquidErrorData.h"
 #if USE_OPENSSL
 #include "acl/Certificate.h"
 #include "acl/CertificateData.h"
+#include "acl/ServerName.h"
 #include "acl/SslError.h"
 #include "acl/SslErrorData.h"
 #endif
 #include "acl/Strategised.h"
 #include "acl/Strategy.h"
 #include "acl/StringData.h"
 #if USE_OPENSSL
 #include "acl/ServerCertificate.h"
 #endif
 #include "acl/Tag.h"
 #include "acl/Time.h"
 #include "acl/TimeData.h"
 #include "acl/Url.h"
 #include "acl/UrlLogin.h"
 #include "acl/UrlPath.h"
 #include "acl/UrlPort.h"
 #include "acl/UserData.h"
 #if USE_AUTH
 #include "auth/AclMaxUserIp.h"
 #include "auth/AclProxyAuth.h"
@@ -160,40 +161,46 @@
 ACL::Prototype ACLUrlLogin::RegistryProtoype(&ACLUrlLogin::RegistryEntry_, "urllogin");
 ACLStrategised<char const *> ACLUrlLogin::RegistryEntry_(new ACLRegexData, ACLUrlLoginStrategy::Instance(), "urllogin");
 ACL::Prototype ACLUrlPath::LegacyRegistryProtoype(&ACLUrlPath::RegistryEntry_, "pattern");
 ACL::Prototype ACLUrlPath::RegistryProtoype(&ACLUrlPath::RegistryEntry_, "urlpath_regex");
 ACLStrategised<char const *> ACLUrlPath::RegistryEntry_(new ACLRegexData, ACLUrlPathStrategy::Instance(), "urlpath_regex");
 ACL::Prototype ACLUrlPort::RegistryProtoype(&ACLUrlPort::RegistryEntry_, "port");
 ACLStrategised<int> ACLUrlPort::RegistryEntry_(new ACLIntRange, ACLUrlPortStrategy::Instance(), "port");
 
 #if USE_OPENSSL
 ACL::Prototype ACLSslError::RegistryProtoype(&ACLSslError::RegistryEntry_, "ssl_error");
 ACLStrategised<const Ssl::CertErrors *> ACLSslError::RegistryEntry_(new ACLSslErrorData, ACLSslErrorStrategy::Instance(), "ssl_error");
 ACL::Prototype ACLCertificate::UserRegistryProtoype(&ACLCertificate::UserRegistryEntry_, "user_cert");
 ACLStrategised<X509 *> ACLCertificate::UserRegistryEntry_(new ACLCertificateData (Ssl::GetX509UserAttribute, "*"), ACLCertificateStrategy::Instance

Re: [squid-dev] [PATCH] Fix HttpStateData::readReply to retry reads from server

2015-04-09 Thread Tsantilas Christos

Applied to trunk as r14007.

On 04/09/2015 04:07 AM, Amos Jeffries wrote:

On 9/04/2015 3:12 a.m., Tsantilas Christos wrote:

Hi all,

This patch fixes HttpStateData::readReply to retry reads from the server
in the case of EINPROGRESS, EAGAIN, or similar errors.

This bug mostly affects SSL bumped connections.
HttpStateData::readReply will not retry reading from the server in the
case of EINPROGRESS or similar comm errors, and the connection will hang
until the timeout handler is called.

The Comm::ReadNow method, used inside HttpStateData::readReply, calls the
ignoreErrno function to test whether the comm error should be ignored and,
in that case, returns the Comm::INPROGRESS value.
We then need to set flags.do_next_read to true to force the
HttpStateData::maybeReadVirginBody() method to retry the read.

This is a Measurement Factory project


+1. Please apply ASAP.

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





[squid-dev] [PATCH] Fix HttpStateData::readReply to retry reads from server

2015-04-08 Thread Tsantilas Christos

Hi all,

This patch fixes HttpStateData::readReply to retry reads from the server 
in the case of EINPROGRESS, EAGAIN, or similar errors.


This bug mostly affects SSL bumped connections. 
HttpStateData::readReply will not retry reading from the server in the 
case of EINPROGRESS or similar comm errors, and the connection will hang 
until the timeout handler is called.


The Comm::ReadNow method, used inside HttpStateData::readReply, calls the 
ignoreErrno function to test whether the comm error should be ignored and, 
in that case, returns the Comm::INPROGRESS value.
We then need to set flags.do_next_read to true to force the 
HttpStateData::maybeReadVirginBody() method to retry the read.


This is a Measurement Factory project
Fix HttpStateData::readReply to retry reads from the server in the case of EINPROGRESS, EAGAIN, or similar errors

This bug mostly affects SSL bumped connections.
HttpStateData::readReply will not retry reading from the server in the case of
EINPROGRESS or similar comm errors, and the connection will hang until the
timeout handler is called.

The Comm::ReadNow method calls the ignoreErrno function to test whether the
comm error should be ignored and, in that case, returns the Comm::INPROGRESS
value. We then need to set flags.do_next_read to true to force the
HttpStateData::maybeReadVirginBody() method to retry the read.


This is a Measurement Factory project
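
For context, a sketch of the kind of errno values ignoreErrno() treats as
ignorable (illustrative only, not the exact Squid source):

  #include <cerrno>

  // "no data right now, retry later" rather than a hard failure
  static bool ignorableErrno(const int xerrno)
  {
      return xerrno == EINPROGRESS || xerrno == EWOULDBLOCK ||
             xerrno == EAGAIN || xerrno == EINTR;
  }

Comm::ReadNow maps such errors to Comm::INPROGRESS, which is why the fix
below sets flags.do_next_read before calling maybeReadVirginBody().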

=== modified file 'src/http.cc'
--- src/http.cc	2015-03-28 13:20:21 +
+++ src/http.cc	2015-04-06 14:26:11 +
@@ -1175,40 +1175,41 @@
 assert(entry->mem_obj);
 
 /* read ahead limit */
 /* Perhaps these two calls should both live in MemObject */
 AsyncCall::Pointer nilCall;
 if (!entry->mem_obj->readAheadPolicyCanRead()) {
 entry->mem_obj->delayRead(DeferredRead(readDelayed, this, CommRead(io.conn, NULL, 0, nilCall)));
 return;
 }
 
 /* delay id limit */
 entry->mem_obj->mostBytesAllowed().delayRead(DeferredRead(readDelayed, this, CommRead(io.conn, NULL, 0, nilCall)));
 return;
 }
 #endif
 
 switch (Comm::ReadNow(rd, inBuf)) {
 case Comm::INPROGRESS:
 if (inBuf.isEmpty())
 debugs(33, 2, io.conn << ": no data to process, " << xstrerr(rd.xerrno));
+flags.do_next_read = true;
 maybeReadVirginBody();
 return;
 
 case Comm::OK:
 {
 payloadSeen += rd.size;
 #if USE_DELAY_POOLS
 DelayId delayId = entry->mem_obj->mostBytesAllowed();
 delayId.bytesIn(rd.size);
 #endif
 
 kb_incr(&(statCounter.server.all.kbytes_in), rd.size);
 kb_incr(&(statCounter.server.http.kbytes_in), rd.size);
 ++ IOStats.Http.reads;
 
 int bin = 0;
 for (int clen = rd.size - 1; clen; ++bin)
 clen >>= 1;
 
 ++ IOStats.Http.read_hist[bin];
@@ -1217,50 +1218,45 @@
 const timeval &sent = request->hier.peer_http_request_sent;
 if (sent.tv_sec)
 tvSub(request->hier.peer_response_time, sent, current_time);
 else
 request->hier.peer_response_time.tv_sec = -1;
 }
 
 /* Continue to process previously read data */
 break;
 
 case Comm::ENDFILE: // close detected by 0-byte read
 eof = 1;
 flags.do_next_read = false;
 
 /* Continue to process previously read data */
 break;
 
 // case Comm::COMM_ERROR:
 default: // no other flags should ever occur
 debugs(11, 2, io.conn << ": read failure: " << xstrerr(rd.xerrno));
-
-if (ignoreErrno(rd.xerrno)) {
-flags.do_next_read = true;
-} else {
-ErrorState *err = new ErrorState(ERR_READ_ERROR, Http::scBadGateway, fwd->request);
-err->xerrno = rd.xerrno;
-fwd->fail(err);
-flags.do_next_read = false;
-io.conn->close();
-}
+ErrorState *err = new ErrorState(ERR_READ_ERROR, Http::scBadGateway, fwd->request);
+err->xerrno = rd.xerrno;
+fwd->fail(err);
+flags.do_next_read = false;
+io.conn->close();
 
 return;
 }
 
 /* Process next response from buffer */
 processReply();
 }
 
 /// processes the already read and buffered response data, possibly after
 /// waiting for asynchronous 1xx control message processing
 void
 HttpStateData::processReply()
 {
 
 if (flags.handling1xx) { // we came back after handling a 1xx response
 debugs(11, 5, HERE << "done with 1xx handling");
 flags.handling1xx = false;
 Must(!flags.headers_parsed);
 }
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] splicing resumed sessions

2015-03-17 Thread Tsantilas Christos

This patch adds the ssl_bump_resuming_sessions directive that controls
SslBump behavior when dealing with resuming SSL/TLS sessions. Without 
these changes, SslBump usually terminates all resuming sessions with an 
error because such sessions do not include server certificates, 
preventing Squid from successfully validating the server identity.


After these changes, Squid either terminates or splices resuming 
sessions, depending on configuration. Splicing is the right default 
because Squid most likely has spliced the original connections that the 
client and server are trying to resume now.  Most likely, the splicing 
decision would not change now (but the lack of the server certificate 
information means we cannot repeat the original ACL checks and need a 
special directive to tell Squid what to do). Also, without SslBump, 
session resumption would just work, and SslBump default should approach 
that ideal.


In many deployment scenarios, this straightforward splice or terminate
resuming sessions implementation is exactly what the admin wants. 
Future projects may add more complex algorithms, including maintaining 
an SMP-shared cache of sessions that may be resumed in the future and 
evaluating client/server attempts to resume a session using that cache.



Example:
  # splice all resuming sessions [this is the default]
  ssl_bump_resuming_sessions allow all

This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

This patch also prevents occasional segfaults when dealing with SSL
cache_peer negotiation failures.

The last two changes should be applied to squid-3.5 even if this patch will 
not go into squid-3.5.


Regards,
   Christos
Added ssl_bump_resuming_sessions to control treatment of resuming sessions
by SslBump.

This patch adds the ssl_bump_resuming_sessions directive that controls
SslBump behavior when dealing with resuming SSL/TLS sessions. Without
these changes, SslBump usually terminates all resuming sessions with an
error because such sessions do not include server certificates, preventing
Squid from successfully validating the server identity.

After these changes, Squid either terminates or splices resuming sessions,
depending on configuration. Splicing is the right default because Squid
most likely has spliced the original connections that the client and server
are trying to resume now. Most likely, the splicing decision would not
change now (but the lack of the server certificate information means we cannot
repeat the original ACL checks and need a special directive to tell Squid what
to do). Also, without SslBump, session resumption would just work, and SslBump
default should approach that ideal.

In many deployment scenarios, this straightforward splice or terminate
resuming sessions implementation is exactly what the admin wants. Future
projects may add more complex algorithms, including maintaining an SMP-shared
cache of sessions that may be resumed in the future and evaluating
client/server attempts to resume a session using that cache.


Example:
  # splice all resuming sessions [this is the default]
  ssl_bump_resuming_sessions allow all


This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.

This patch also prevents occasional segfaults when dealing with SSL
cache_peer negotiation failures.


Technical details
-

In Peek mode, the old Squid code would forward the client Hello message to the
server. If the server tries to resume the previous (spliced) SSL session with
the client, then Squid SSL code gets an ssl/PeerConnector.cc "ccs received
early" error (or similar) because the Squid SSL object expects a server
certificate and does not know anything about the session being resumed.

With this patch, Squid detects session resumption attempts and consults the
ssl_bump_resuming_sessions access list to decide whether to splice or honor
the error. Honoring the error would usually terminate the client and server
connections.


Session resumption detection


There are two mechanisms in SSL/TLS for resuming sessions. The traditional
shared session IDs and the TLS ticket extensions:

* If Squid detects a shared ID in both client and server Hello messages, then
Squid decides whether the session is being resumed by comparing those client
and server shared IDs. If (and only if) the IDs are the same, then Squid
assumes that it is dealing with a resuming session (using session IDs).

* If Squid detects a TLS ticket in the client Hello message and TLS ticket
support in the server Hello message as well as a Change Cipher Spec or a New
TLS Ticket message (following the server Hello message), then (and only then)
Squid assumes that it is dealing with a resuming session (using TLS tickets).

The TLS tickets check is not performed if Squid detects a shared session ID
in both client and server Hello messages.

This is a Measurement Factory 

Re: [squid-dev] [PATCH] start workers as root

2015-03-17 Thread Tsantilas Christos

A patch which solves this problem was applied to trunk as rev13984.
The patch is different from the one I posted here:
it just adds enter_suid() calls after the writePidFile() and removePidFile()
calls inside the watch_child() function.
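
In other words, the applied change amounts to (a sketch of the idea, not
the exact rev13984 diff):

  // inside watch_child(), which runs with root privileges
  writePidFile();  // internally calls leave_suid() before returning
  enter_suid();    // restore root privileges for the rest of watch_child()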

Regards,
  Christos

On 03/09/2015 11:09 AM, Tsantilas Christos wrote:

On 03/08/2015 05:57 AM, Amos Jeffries wrote:

On 8/03/2015 6:34 a.m., Tsantilas Christos wrote:

On 03/07/2015 07:18 AM, Amos Jeffries wrote:

On 7/03/2015 12:18 a.m., Tsantilas Christos wrote:

SMP workers in trunk start without root privileges. This results in
startup  failures when workers need to use a privileged port 
(e.g., 443)

or other  root-only features such as TPROXY.

This bug was added with my "Moved PID file management from Coordinator to
Master" patch.

The problem is inside the watch_child function, which is called after an
enter_suid() call, but the writePidFile() call, inside
watch_child(), will leave suid mode before exiting.

This patch removes the enter_suid/leave_suid calls from the 
writePidFile

   and makes the caller responsible for setting the root privileges if
required.


I think this is wrong approach.

Firstly, what are processes without SUID ability doing writing to 
secure

system files?


What do you mean here?


After your patch that makes the master file the one writing the PID file
what are the workers (non-master) doing writing to it?


The workers do not write the PID file.
A single-process Squid (e.g., when started with the -N parameter) 
still needs to write the PID file.









Secondly, I thought the entire point of the earlier patch was to make
the *MASTER* process was the one writing the PID file. Not
low-privileged workers.


Yes, the master process writes the pid file, not the workers.



Thirdly, the enter/leave_suid calls mean dangerous security stuff 
about

to happen and should only be called if absolutely necessary, AND only
around the (block of) system calls which require them.


I agree.
However, watch_child, which implements the master process, is
designed to run in suid mode. This was not changed by the PID patch.
Just due to a bug added with the PID patch, this function leaves suid
mode.



I think I understand you there. You mean this process:

  enter_suid()
  ...
  watch_child()
   ...
   writePidFile() {
enter_suid() // no-op
...
leave_suid()
  }
  ... something that needs suid. oops.

Yep.






Maybe we want to fix the master process to not run in suid mode, but I 
believe:

   - This is not in the scope of this patch
   - The master process does not do a lot of things. Probably we do not
need to make it run with low privileges. Moreover we may have problems
with the kids. (Is it possible for kids to run with a different
cache_effective_user parameter?)



Yes, more than possible - probable that somebody will or is already
doing so. Just the other day I saw someone running workers with
different PID files.
That's the kind of thing that happens with these directives available
via squid.conf per-worker.




Your description sounds like some part of the code in worker scope is
using enter_suid, doing a lot of Squid stuff - plus incidentally some
root system stuff, then leave_suid. That is broken code. None of the
general Squid stuff involves security-sensitive system calls needing root
privileges.


No, please forget the workers. This patch does not change anything in a
worker.
Please take a look into the writePidFile function, and into watch_child
function (which implements the master process)



The places your patch is changing are all in the workers code.

Instead of moving the enter_suid() outside of writePidFile() in several
places the master code should call enter only to preserve the part of
code *it* needs suid. eg.

  watch_child()
...
writePidFile()
enter_suid(); // writePidFile() uses leave_suid()
...

Much simpler patch, and fewer places overall using enter/leave_suid.



Well, a writePidFile call in the future may still cause a similar bug.
The need to call enter_suid() after writePidFile is not 
something a developer expects.


However, I have no problem with this patch either; it is OK for me.
If you prefer it, I will apply this patch instead.



Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] start workers as root

2015-03-07 Thread Tsantilas Christos

On 03/07/2015 07:18 AM, Amos Jeffries wrote:

On 7/03/2015 12:18 a.m., Tsantilas Christos wrote:

SMP workers in trunk start without root privileges. This results in
startup failures when workers need to use a privileged port (e.g., 443)
or other root-only features such as TPROXY.

This bug was added with my "Moved PID file management from Coordinator to
Master" patch.

The problem is inside the watch_child() function, which is called after an
enter_suid() call; but the writePidFile() call, inside watch_child(),
leaves suid mode before it returns.

This patch removes the enter_suid()/leave_suid() calls from writePidFile()
and makes the caller responsible for acquiring root privileges if
required.


I think this is the wrong approach.

Firstly, what are processes without SUID ability doing writing to secure
system files?


What do you mean here?



Secondly, I thought the entire point of the earlier patch was to make
the *MASTER* process the one writing the PID file. Not
low-privileged workers.


Yes, the master process writes the PID file, not the workers.



Thirdly, the enter/leave_suid calls mean dangerous security stuff is about
to happen and should only be called if absolutely necessary, AND only
around the (block of) system calls which require them.


I agree.
However watch_child(), which implements the master process, is
designed to run in suid mode. This was not changed by the PID patch.
Just due to a bug added with the PID patch, this function leaves suid
mode.


Maybe we want to fix the master process to not run in suid mode, but I believe:
  - This is not in the scope of this patch
  - The master process does not do a lot of things. Probably we do not
need to make it run with low privileges. Moreover we may have problems
with the kids. (Is it possible for kids to run with different
cache_effective_user parameters?)






Your description sounds like some part of the code in worker scope is
using enter_suid, doing a lot of Squid stuff - plus incidentally some
root system stuff, then leave_suid. That is broken code. None of the
general Squid stuff is security-sensitive system calls needing root
privileges.


No, please forget the workers. This patch does not change anything in a
worker.
Please take a look into the writePidFile() function, and into the
watch_child() function (which implements the master process).



  We should be fixing that broken code. Either to not need the system
suid privilege at all, or to call enter/leave_suid only around the
sensitive operation - while also ensuring those suid calls will work at
the point they are used.

Amos


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] start workers as root

2015-03-06 Thread Tsantilas Christos
SMP workers in trunk start without root privileges. This results in
startup failures when workers need to use a privileged port (e.g., 443)
or other root-only features such as TPROXY.


This bug was added with my "Moved PID file management from Coordinator to
Master" patch.


The problem is inside the watch_child() function, which is called after an
enter_suid() call; but the writePidFile() call, inside watch_child(),
leaves suid mode before it returns.


This patch removes the enter_suid()/leave_suid() calls from writePidFile()
and makes the caller responsible for acquiring root privileges if
required.
start workers as root

SMP workers in trunk start without root privileges. This results in startup 
failures when workers need to use a privileged port (e.g., 443) or other 
root-only features such as TPROXY.

The watch_child() function, responsible for watching and starting squid workers
for the squid monitor process, is called after an enter_suid() call, but the
writePidFile() call, inside watch_child(), leaves suid mode before it returns.

This patch removes the enter_suid()/leave_suid() calls from writePidFile() and
makes the caller responsible for acquiring root privileges if required.

This is a Measurement Factory project

=== modified file 'src/main.cc'
--- src/main.cc	2015-02-09 18:12:51 +
+++ src/main.cc	2015-03-05 17:40:17 +
@@ -922,42 +922,45 @@
 
 storeDirOpenSwapLogs();
 
 mimeInit(Config.mimeTablePathname);
 
 if (unlinkdNeeded())
 unlinkdInit();
 
 #if USE_DELAY_POOLS
 Config.ClientDelay.finalize();
 #endif
 
 if (Config.onoff.announce) {
 if (!eventFind(start_announce, NULL))
 eventAdd(start_announce, start_announce, NULL, 3600.0, 1);
 } else {
 if (eventFind(start_announce, NULL))
 eventDelete(start_announce, NULL);
 }
 
-if (!InDaemonMode())
+if (!InDaemonMode()) {
+enter_suid();
 writePidFile(); /* write PID file */
+leave_suid();
+}
 
 reconfiguring = 0;
 }
 
 static void
 mainRotate(void)
 {
 icmpEngine.Close();
 redirectShutdown();
 #if USE_AUTH
 authenticateRotate();
 #endif
 externalAclShutdown();
 
 _db_rotate_log();   /* cache.log */
 storeDirWriteCleanLogs(1);
 storeLogRotate();   /* store.log */
 accessLogRotate();  /* access.log */
 #if ICAP_CLIENT
 icapLogRotate();   /*icap.log*/
@@ -1174,42 +1177,45 @@
 #if USE_WCCP
 wccpInit();
 
 #endif
 #if USE_WCCPv2
 
 wccp2Init();
 
 #endif
 }
 
 serverConnectionsOpen();
 
 neighbors_init();
 
 // neighborsRegisterWithCacheManager(); //moved to neighbors_init()
 
 if (Config.chroot_dir)
 no_suid();
 
-if (!configured_once && !InDaemonMode())
+if (!configured_once && !InDaemonMode()) {
+enter_suid();
 writePidFile(); /* write PID file */
+leave_suid();
+}
 
 #if defined(_SQUID_LINUX_THREADS_)
 
 squid_signal(SIGQUIT, rotate_logs, SA_RESTART);
 
 squid_signal(SIGTRAP, sigusr2_handle, SA_RESTART);
 
 #else
 
 squid_signal(SIGUSR1, rotate_logs, SA_RESTART);
 
 squid_signal(SIGUSR2, sigusr2_handle, SA_RESTART);
 
 #endif
 
 squid_signal(SIGHUP, reconfigure, SA_RESTART);
 
 squid_signal(SIGTERM, shut_down, SA_RESTART);
 
 squid_signal(SIGINT, shut_down, SA_RESTART);
@@ -1985,39 +1991,41 @@
 asnFreeMemory();
 clientdbFreeMemory();
 httpHeaderCleanModule();
 statFreeMemory();
 eventFreeMemory();
 mimeFreeMemory();
 errorClean();
 #endif
 // clear StoreController
 Store::Root(NULL);
 
 fdDumpOpen();
 
 comm_exit();
 
 RunRegisteredHere(RegisteredRunner::finishShutdown);
 
 memClean();
 
 if (!InDaemonMode()) {
+enter_suid();
 removePidFile();
+leave_suid();
 }
 
 debugs(1, DBG_IMPORTANT, "Squid Cache (Version " << version_string << "): Exiting normally.");
 
 /*
  * DPW 2006-10-23
  * We used to fclose(debug_log) here if it was set, but then
  * we forgot to set it to NULL.  That caused some coredumps
  * because exit() ends up calling a bunch of destructors and
  * such.   So rather than forcing the debug_log to close, we'll
  * leave it open so that those destructors can write some
  * debugging if necessary.  The file will be closed anyway when
  * the process truly exits.
  */
 
 exit(shutdown_status);
 }
 

=== modified file 'src/tools.cc'
--- src/tools.cc	2015-02-10 03:44:32 +
+++ src/tools.cc	2015-03-05 17:42:25 +
@@ -696,69 +696,63 @@
 roles.append(" worker");
 if (IamDiskProcess())
 roles.append(" disker");
 return roles;
 }
 
 void
 writePidFile(void)
 {
 int fd;
 const char *f = NULL;
 mode_t old_umask;
 char buf[32];
 
 if ((f = Config.pidFilename) == NULL)
 return;
 
 if (!strcmp(Config.pidFilename, "none"))
 return;
 
-enter_suid();
-
 old_umask = umask(022);
 
   

[squid-dev] [PATCH] Fake CONNECT exceeds concurrent requests limit

2015-02-24 Thread Tsantilas Christos


Squid closes the SSL client connection with "Failed to start fake
CONNECT request for ssl spliced connection". This happens especially
often when the pipeline_prefetch configuration parameter is set to 0
(i.e., the default).


When a transparent SSL connection is peeked and then spliced in step2, 
we are generating a fake CONNECT request. The fake CONNECT request is 
counted as a new pipelined request and may exceed the configured limit. 
This patch solves this problem by raising the limit for that request.


Needs more work to better identify the requests that need a different limit.

This is a Measurement Factory project.
Fake CONNECT exceeds concurrent requests limit.

Squid closes the SSL client connection with "Failed to start fake CONNECT
request for ssl spliced connection". This happens especially often when
the pipeline_prefetch configuration parameter is set to 0 (i.e., default).

When a transparent SSL connection is peeked and then spliced in step2, we are
generating a fake CONNECT request. The fake CONNECT request is counted as a
new pipelined request and may exceed the configured limit. This patch solves
this problem by raising the limit for that request.

Needs more work to better identify the requests that need a different limit.

This is a Measurement Factory project.
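
For illustration, the limit arithmetic the patch changes (a minimal
stand-alone sketch; the simplified free function is an assumption, the
real logic lives in ConnStateData::concurrentRequestQueueFilled):

  // limit = configured prefetch + 1 (head of pipeline) + 1 when the
  // connection is a transparently intercepted, spliced one
  int concurrentRequestLimit(int pipelinePrefetchMax, bool transparentSplice)
  {
      const int internalRequest = transparentSplice ? 1 : 0;
      return pipelinePrefetchMax + 1 + internalRequest;
  }

  // With the default pipeline_prefetch 0, the old limit was 0 + 1 = 1,
  // so the fake CONNECT became request #2 and was refused; the new
  // limit is 0 + 1 + 1 = 2, which admits it.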

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-02-06 19:45:02 +
+++ src/client_side.cc	2015-02-24 20:58:25 +
@@ -2724,41 +2724,42 @@
 }
 
 int
 ConnStateData::pipelinePrefetchMax() const
 {
 return Config.pipeline_max_prefetch;
 }
 
 /**
  * Limit the number of concurrent requests.
  * \return true  when there are available position(s) in the pipeline queue for another request.
  * \return false when the pipeline queue is full or disabled.
  */
 bool
 ConnStateData::concurrentRequestQueueFilled() const
 {
 const int existingRequestCount = getConcurrentRequestCount();
 
 // default to the configured pipeline size.
 // add 1 because the head of pipeline is counted in concurrent requests and not prefetch queue
-const int concurrentRequestLimit = pipelinePrefetchMax() + 1;
+const int internalRequest = (transparent() && sslBumpMode == Ssl::bumpSplice) ? 1 : 0;
+const int concurrentRequestLimit = pipelinePrefetchMax() + 1 + internalRequest;
 
 // when queue filled already we cant add more.
 if (existingRequestCount >= concurrentRequestLimit) {
 debugs(33, 3, clientConnection << " max concurrent requests reached (" << concurrentRequestLimit << ")");
 debugs(33, 5, clientConnection << " deferring new request until one is done");
 return true;
 }
 
 return false;
 }
 
 /**
  * Perform proxy_protocol_access ACL tests on the client which
  * connected to PROXY protocol port to see if we trust the
  * sender enough to accept their PROXY header claim.
  */
 bool
 ConnStateData::proxyProtocolValidateClient()
 {
 if (!Config.accessList.proxyProtocol)

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-17 Thread Tsantilas Christos

On 02/17/2015 02:49 AM, Amos Jeffries wrote:

On 14/02/2015 8:25 a.m., Amos Jeffries wrote:

On 13/02/2015 11:52 p.m., Tsantilas Christos wrote:


A new patch, which also adds a Must clause for bumping step in
Ssl::PeerConnector::initializeSsl method.






Was applied as trunk rev.13928


yep, sorry



Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-12 Thread Tsantilas Christos

On 02/11/2015 09:48 PM, Amos Jeffries wrote:

On 12/02/2015 12:45 a.m., Tsantilas Christos wrote:

On 02/11/2015 01:54 AM, Amos Jeffries wrote:

On 9/02/2015 6:43 a.m., Tsantilas Christos wrote:

Bug description:

- Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
 sslproxy_options NO_TLSv1_2
- Squid uses peek mode for bumped connections.
- Web client sends a TLSv1_2 hello message and squid in peek mode
forwards the client hello message to the server
- Web server responds with a TLSv1_2 hello message
- Squid while parsing the server hello message aborts with an error
because sslproxy_options deny the use of TLSv1_2 protocol.

This patch fixes squid to ignore sslproxy_options in peek or stare
bumping mode.


As I understand it the action of applying the options to the context
removes from the context cipher references etc which are not possible.

Since peek and stare are non-final states I can easily imagine that
the OpenSSL library negotiates ciphers which the options would otherwise
prohibit. Then when the options get applied to the context it finds
itself using an algorithm which does not exist.


The SSL_CTX context objects are the bases used to create the SSL objects
which are responsible for the negotiation with the other side (the server
in this case).
The SSL object created inherits the options from the CTX and we are adding
our options to the SSL object.
The SSL library will use these options to build the client hello message,
parse the server hello message and select algorithms/ciphers and other
features to establish the SSL connection.

In my patch the options are applied to the squid client SSL objects
immediately after they are created, in the case of bump-server-first or
bump-client-first.

In the cases of peek or stare we are not setting any options. This is
because we are sending a hello message which is the same as or similar to
the client hello message, so we can not apply options. Otherwise the peek
or stare will fail...
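
A minimal OpenSSL sketch of the CTX-to-SSL relationship described above
(standard OpenSSL 1.0-era calls; SSL_OP_NO_TLSv1_2 is just an example
flag, and error handling is omitted):

  SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method()); // shared defaults
  SSL *ssl = SSL_new(ctx);           // inherits the options set on ctx
  // Per-connection options are added on the SSL object only when the
  // final decision is to bump; in peek/stare mode this call is skipped
  // so the forwarded hello mirrors the client's:
  SSL_set_options(ssl, SSL_OP_NO_TLSv1_2);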



What I mean is the code flow is roughly like this, yes?

1) bump
2) splice
3) peek then bump
4) stare then bump
5) peek then splice
6) stare then splice

In which of those cases are the options set?
  all cases ending in bump or splice.


The important factor here is the bumping step.
In the bump case (1,3,4):
 - if the decision is to bump in bumpStep1 or bumpStep2 then the squid
SSL client options are set.
 - If the decision for bumping is taken in bumpStep3, then the squid SSL
client options are NOT set.


In the splice case (2,5 or 6):
 - if we splice on bumpStep1 or on bumpStep2, then we are not using the
squid SSL client code at all, so the options do not play any role in
this case.
 - If we splice on bumpStep3 we have to use the squid SSL client, but in
this case the options are not set.



After bumpStep2 is completed we have received the client hello message,
but we have not sent any response (server hello). At the same time we are
starting to initiate the connection to the SSL server. This means that
we are going to build the SSL client hello message, which depends on the
SSL client code. So:
  - If we know that we are going to bump the connection, we can safely
set the SSL client options. This is because the squid SSL client will
initiate a normal SSL connection.
  - If we are going to stare, or peek, then the client hello message we
are going to send must be similar to the client-to-squid SSL hello
message. In this case we can not control SSL features using options, or
the SSL negotiation will fail.






Also, for this bug to have occurred at all means the server SSL_CTX is
being actively used during the peek/stare steps.


The SSL_CTX is an object which holds predefined settings and options for
SSL objects.

This patch sets the options on the SSL object, not the SSL_CTX options.


So what can possibly go wrong by changing the CTX cipher sets halfway
through case 3, 4, 5, 6 ?


We are not changing anything in the SSL_CTX object.







This has as a result that we can not control the SSL behaviour in peek
or stare mode. But it is not easy to do anything else...
Maybe we can add 1-2 new configuration parameters which control the
behaviour. But this is a different project...


I agree we should do the not-setting-options. What I'm having issues
with is how it appears that they are set later - which could be just
adding a different problem.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-12 Thread Tsantilas Christos

On 02/12/2015 01:48 PM, Amos Jeffries wrote:

On 12/02/2015 11:31 p.m., Tsantilas Christos wrote:

On 02/11/2015 09:48 PM, Amos Jeffries wrote:

On 12/02/2015 12:45 a.m., Tsantilas Christos wrote:

On 02/11/2015 01:54 AM, Amos Jeffries wrote:

On 9/02/2015 6:43 a.m., Tsantilas Christos wrote:

Bug description:

 - Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
  sslproxy_options NO_TLSv1_2
 - Squid uses peek mode for bumped connections.
 - Web client sends a TLSv1_2 hello message and squid in peek mode
forwards the client hello message to the server
 - Web server responds with a TLSv1_2 hello message
 - Squid while parsing the server hello message aborts with an error
because sslproxy_options deny the use of TLSv1_2 protocol.

This patch fixes squid to ignore sslproxy_options in peek or stare
bumping mode.


As I understand it the action of applying the options to the context
removes from the context cipher references etc which are not possible.

Since peek and stare are non-final states I can easily imagine that
the OpenSSL library negotiates ciphers which the options would otherwise
prohibit. Then when the options get applied to the context it finds
itself using an algorithm which does not exist.


The SSL_CTX context objects are the bases used to create the SSL objects
which are responsible for the negotiation with the other side (the server
in this case).
The SSL object created inherits the options from the CTX and we are adding
our options to the SSL object.
The SSL library will use these options to build the client hello message,
parse the server hello message and select algorithms/ciphers and other
features to establish the SSL connection.

In my patch the options are applied to the squid client SSL objects
immediately after they are created, in the case of bump-server-first or
bump-client-first.

In the cases of peek or stare we are not setting any options. This is
because we are sending a hello message which is the same as or similar to
the client hello message, so we can not apply options. Otherwise the peek
or stare will fail...



What I mean is the code flow is roughly like this, yes?

1) bump
2) splice
3) peek then bump
4) stare then bump
5) peek then splice
6) stare then splice

In which of those cases are the options set?
   all cases ending in bump or splice.


The important factor here is the bumping step.
In the bump case (1,3,4):
  - if the decision is to bump in bumpStep1 or bumpStep2 then the squid
SSL client options are set.
  - If the decision for bumping is taken in bumpStep3, then the squid SSL
client options are NOT set.

In the splice case (2,5 or 6):
  - if we splice on bumpStep1 or on bumpStep2, then we are not using the
squid SSL client code at all, so the options do not play any role in
this case.
  - If we splice on bumpStep3 we have to use the squid SSL client, but in
this case the options are not set.


After bumpStep2 is completed we have received the client hello message,
but we have not sent any response (server hello). At the same time we are
starting to initiate the connection to the SSL server. This means that
we are going to build the SSL client hello message, which depends on the
SSL client code. So:
   - If we know that we are going to bump the connection, we can safely
set the SSL client options. This is because the squid SSL client will
initiate a normal SSL connection.
   - If we are going to stare, or peek, then the client hello message we
are going to send must be similar to the client-to-squid SSL hello
message. In this case we can not control SSL features using options, or
the SSL negotiation will fail.


So you're saying the src/ssl/PeerConnector.cc else-condition never gets
run after a peek and/or stare ?


Do you mean the "if (csd->sslBumpMode == Ssl::bumpPeek ||
csd->sslBumpMode == Ssl::bumpStare)" ?


Yes, the else never gets run when peek or stare mode is selected in the
bumpStep2 bumping step.






Then please add a Must(step >= 2) check at the start of that else
condition right before setting the SSL client options. If that works
properly I am happy for this to go in.


Requires code like the following:
Must(csd->sslServerBump()->step >= Ssl::bumpStep2);

This will not work. In client-first bumping mode, or in server-first
bumping mode where we are not applying the peek-and-splice procedure,
the step is not updated.


Also my sense is that in client-first bumping mode
csd->sslServerBump() is NULL.





NP: the above explanation may be worth adding to the commit message as
an explanation for what the line "This patch fixes squid to ignore
sslproxy_options in peek or stare bumping mode" actually means.


OK, I will try to add some explanations in the commit message.





Amos


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-12 Thread Tsantilas Christos

On 02/12/2015 05:33 PM, Amos Jeffries wrote:

On 13/02/2015 3:34 a.m., Tsantilas Christos wrote:

On 02/12/2015 01:48 PM, Amos Jeffries wrote:

On 12/02/2015 11:31 p.m., Tsantilas Christos wrote:

On 02/11/2015 09:48 PM, Amos Jeffries wrote:

On 12/02/2015 12:45 a.m., Tsantilas Christos wrote:

On 02/11/2015 01:54 AM, Amos Jeffries wrote:

On 9/02/2015 6:43 a.m., Tsantilas Christos wrote:

Bug description:

  - Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
   sslproxy_options NO_TLSv1_2
  - Squid uses peek mode for bumped connections.
  - Web client sends a TLSv1_2 hello message and squid in peek mode
forwards the client hello message to the server
  - Web server responds with a TLSv1_2 hello message
  - Squid while parsing the server hello message aborts with an error
because sslproxy_options deny the use of TLSv1_2 protocol.

This patch fixes squid to ignore sslproxy_options in peek or stare
bumping mode.


As I understand it the action of applying the options to the context
removes from the context cipher references etc which are not
possible.

Since peek and stare are non-final states I can easily imagine that
the OpenSSL library negotiates ciphers which the options would otherwise
prohibit. Then when the options get applied to the context it finds
itself using an algorithm which does not exist.


The SSL_CTX context objects are the bases used to create the SSL objects
which are responsible for the negotiation with the other side (the server
in this case).
The SSL object created inherits the options from the CTX and we are adding
our options to the SSL object.
The SSL library will use these options to build the client hello message,
parse the server hello message and select algorithms/ciphers and other
features to establish the SSL connection.

In my patch the options are applied to the squid client SSL objects
immediately after they are created, in the case of bump-server-first or
bump-client-first.

In the cases of peek or stare we are not setting any options. This is
because we are sending a hello message which is the same as or similar to
the client hello message, so we can not apply options. Otherwise the peek
or stare will fail...



What I mean is the code flow is roughly like this, yes?

1) bump
2) splice
3) peek then bump
4) stare then bump
5) peek then splice
6) stare then splice

In which of those cases are the options set?
all cases ending in bump or splice.


The important factor here is the bumping step.
In the bump case (1,3,4):
   - if the decision is to bump in bumpStep1 or bumpStep2 then the squid
SSL client options are set.
   - If the decision for bumping is taken in bumpStep3, then the squid SSL
client options are NOT set.

In the splice case (2,5 or 6):
   - if we splice on bumpStep1 or on bumpStep2, then we are not using the
squid SSL client code at all, so the options do not play any role in
this case.
   - If we splice on bumpStep3 we have to use the squid SSL client, but in
this case the options are not set.


After bumpStep2 is completed we have received the client hello message,
but we have not sent any response (server hello). At the same time we are
starting to initiate the connection to the SSL server. This means that
we are going to build the SSL client hello message, which depends on the
SSL client code. So:
 - If we know that we are going to bump the connection, we can safely
set the SSL client options. This is because the squid SSL client will
initiate a normal SSL connection.
 - If we are going to stare, or peek, then the client hello message we
are going to send must be similar to the client-to-squid SSL hello
message. In this case we can not control SSL features using options, or
the SSL negotiation will fail.


So you're saying the src/ssl/PeerConnector.cc else-condition never gets
run after a peek and/or stare ?


Do you mean the "if (csd->sslBumpMode == Ssl::bumpPeek ||
csd->sslBumpMode == Ssl::bumpStare)" ?



I mean the:

  } else {
+// Set client SSL options
+SSL_set_options(ssl, ::Config.ssl_client.parsedOptions);
+




Yes, the else never gets run when peek or stare mode is selected in the
bumpStep2 bumping step.





Then please add a Must(step >= 2) check at the start of that else
condition right before setting the SSL client options. If that works
properly I am happy for this to go in.


Requires code like the following:
Must(csd->sslServerBump()->step >= Ssl::bumpStep2);


Specifically:

Must(
  !csd->sslServerBump() ||
  csd->sslServerBump()->step >= Ssl::bumpStep2
  );



This will not work. In client-first bumping mode, or in server-first
bumping mode where we are not applying the peek-and-splice procedure,
the step is not updated.


Which would make step 0 or 1 for client-first, right? That is fine.

For server-first it does need updating at the point the server is given
data. A jump right to step 3, or even a new sslBumpStepServerFirst
value at the end of the enum.
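
For concreteness, a sketch of that suggestion (the enum and value names
here are assumptions based on the Ssl::bumpStepN values used above):

  enum BumpStep { bumpStep1, bumpStep2, bumpStep3, bumpStepServerFirst };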


Exactly.
But why do you believe this Must is needed? Especially inside the else it
is completely without any

Re: [squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-11 Thread Tsantilas Christos

On 02/11/2015 01:54 AM, Amos Jeffries wrote:

On 9/02/2015 6:43 a.m., Tsantilas Christos wrote:

Bug description:

   - Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
sslproxy_options NO_TLSv1_2
   - Squid uses peek mode for bumped connections.
   - Web client sends a TLSv1_2 hello message and squid in peek mode
forwards the client hello message to the server
   - Web server responds with a TLSv1_2 hello message
   - Squid while parsing the server hello message aborts with an error
because sslproxy_options deny the use of TLSv1_2 protocol.

This patch fixes squid to ignore sslproxy_options in peek or stare
bumping mode.


As I understand it the action of applying the options to the context
removes from the context cipher references etc which are not possible.

Since peek and stare are non-final states I can easily imagine that
the OpenSSL library negotiates ciphers which the options would otherwise
prohibit. Then when the options get applied to the context it finds
itself using an algorithm which does not exist.


The SSL_CTX context objects are the bases used to create the SSL objects
which are responsible for the negotiation with the other side (the server
in this case).
The SSL object created inherits the options from the CTX and we are adding
our options to the SSL object.
The SSL library will use these options to build the client hello message,
parse the server hello message and select algorithms/ciphers and other
features to establish the SSL connection.


In my patch the options are applied to the squid client SSL objects
immediately after they are created, in the case of bump-server-first or
bump-client-first.


In the cases of peek or stare we are not setting any options. This is
because we are sending a hello message which is the same as or similar to
the client hello message, so we can not apply options. Otherwise the peek
or stare will fail...


This has as a result that we can not control the SSL behaviour in peek
or stare mode. But it is not easy to do anything else...
Maybe we can add 1-2 new configuration parameters which control the
behaviour. But this is a different project...


Regards,
   Christos




So what happens during the final state in that type of event?

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] [PATCH] SNI information is not set on transparent bumping mode

2015-02-09 Thread Tsantilas Christos

On 02/09/2015 02:26 PM, Amos Jeffries wrote:

On 9/02/2015 6:07 a.m., Tsantilas Christos wrote:

SNI information is not set on transparent bumping mode

Forward SNI (obtained from an intercepted client connection) to servers
when SslBump peeks or stares at the server certificate.

SslBump was not forwarding SNI to servers when Squid obtained SNI from
an intercepted client while peeking (or staring) at client Hello.

This patch also fixes squid to consider the hostname included in the SNI
information more reliable than the hostname provided in the CONNECT request
for certificate CN verification.



+1. ... and please apply ASAP.


Applied to trunk as rev:13919



Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





[squid-dev] [PATCH] sslproxy_options in peek-and-splice mode

2015-02-08 Thread Tsantilas Christos

Bug description:

  - Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
   sslproxy_options NO_TLSv1_2
  - Squid uses peek mode for bumped connections.
  - Web client sends a TLSv1_2 hello message and squid in peek mode
forwards the client hello message to the server

  - Web server responds with a TLSv1_2 hello message
  - Squid while parsing the server hello message aborts with an error
because sslproxy_options deny the use of TLSv1_2 protocol.


This patch fixes squid to ignore sslproxy_options in peek or stare 
bumping mode.


This is a Measurement Factory project
sslproxy_options in peek-and-splice mode

Problem description:
  - Squid sslproxy_options deny the use of TLSv1_2 SSL protocol:
 sslproxy_options NO_TLSv1_2
  - Squid uses peek mode for bumped connections.
  - Web client sends a TLSv1_2 hello message and squid in peek mode forwards
the client hello message to the server
  - Web server responds with a TLSv1_2 hello message
  - Squid while parsing the server hello message aborts with an error because
sslproxy_options deny the use of TLSv1_2 protocol.
  
This patch fixes squid to ignore sslproxy_options in peek or stare bumping mode.

This is a Measurement Factory project
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2015-02-02 16:20:11 +
+++ src/SquidConfig.h	2015-02-06 19:09:37 +
@@ -487,40 +487,41 @@
 
 wordlist *ext_methods;
 
 struct {
 int high_rptm;
 int high_pf;
 size_t high_memory;
 } warnings;
 char *store_dir_select_algorithm;
 int sleep_after_fork;   /* microseconds */
 time_t minimum_expiry_time; /* seconds */
 external_acl *externalAclHelperList;
 
 #if USE_OPENSSL
 
 struct {
 char *cert;
 char *key;
 int version;
 char *options;
+long parsedOptions;
 char *cipher;
 char *cafile;
 char *capath;
 char *crlfile;
 char *flags;
 acl_access *cert_error;
 SSL_CTX *sslContext;
 sslproxy_cert_sign *cert_sign;
 sslproxy_cert_adapt *cert_adapt;
 } ssl_client;
 #endif
 
 char *accept_filter;
 int umask;
 int max_filedescriptors;
 int workers;
 CpuAffinityMap *cpuAffinityMap;
 
 #if USE_LOADABLE_MODULES
 wordlist *loadable_module_names;

=== modified file 'src/cache_cf.cc'
--- src/cache_cf.cc	2015-02-02 20:02:55 +
+++ src/cache_cf.cc	2015-02-06 19:09:37 +
@@ -869,41 +869,44 @@
 Config2.effectiveGroupID = getegid();
 }
 
 if (NULL != Config.effectiveGroup) {
 
 struct group *grp = getgrnam(Config.effectiveGroup);
 
 if (NULL == grp) {
 fatalf("getgrnam failed to find groupid for effective group '%s'",
Config.effectiveGroup);
 return;
 }
 
 Config2.effectiveGroupID = grp->gr_gid;
 }
 
 #if USE_OPENSSL
 
 debugs(3, DBG_IMPORTANT, "Initializing https proxy context");
 
-Config.ssl_client.sslContext = sslCreateClientContext(Config.ssl_client.cert, Config.ssl_client.key, Config.ssl_client.version, Config.ssl_client.cipher, Config.ssl_client.options, Config.ssl_client.flags, Config.ssl_client.cafile, Config.ssl_client.capath, Config.ssl_client.crlfile);
+Config.ssl_client.sslContext = sslCreateClientContext(Config.ssl_client.cert, Config.ssl_client.key, Config.ssl_client.version, Config.ssl_client.cipher, NULL, Config.ssl_client.flags, Config.ssl_client.cafile, Config.ssl_client.capath, Config.ssl_client.crlfile);
+// Pre-parse SSL client options to be applied when the client SSL objects are created.
+// Options must not be used in the case of peek or stare bump mode.
+Config.ssl_client.parsedOptions = Ssl::parse_options(::Config.ssl_client.options);
 
 for (CachePeer *p = Config.peers; p != NULL; p = p->next) {
 if (p->use_ssl) {
 debugs(3, DBG_IMPORTANT, "Initializing cache_peer " << p->name << " SSL context");
 p->sslContext = sslCreateClientContext(p->sslcert, p->sslkey, p->sslversion, p->sslcipher, p->ssloptions, p->sslflags, p->sslcafile, p->sslcapath, p->sslcrlfile);
 }
 }
 
 for (AnyP::PortCfgPointer s = HttpPortList; s != NULL; s = s->next) {
 if (!s->flags.tunnelSslBumping)
 continue;
 
 debugs(3, DBG_IMPORTANT, "Initializing http_port " << s->s << " SSL context");
 s->configureSslServerContext();
 }
 
 for (AnyP::PortCfgPointer s = HttpsPortList; s != NULL; s = s->next) {
 debugs(3, DBG_IMPORTANT, "Initializing https_port " << s->s << " SSL context");
 s->configureSslServerContext();
 }

=== modified file 'src/ssl/PeerConnector.cc'
--- src/ssl/PeerConnector.cc	2015-01-13 07:25:36 +
+++ src/ssl/PeerConnector.cc	2015-01-29 17:05:32 +
@@ -155,40 +155,43 @@
 const Ssl::Bio::sslFeatures features = clnBio->getFeatures();
 if (features.sslVersion != -1) {
 features.applyToSSL(ssl);
 // Should we allow it for all protocols?

[squid-dev] [PATCH] SNI information is not set on transparent bumping mode

2015-02-08 Thread Tsantilas Christos

SNI information is not set on transparent bumping mode

Forward SNI (obtained from an intercepted client connection) to servers 
when SslBump peeks or stares at the server certificate.


SslBump was not forwarding SNI to servers when Squid obtained SNI from 
an intercepted client while peeking (or staring) at client Hello.


This patch also fixes squid to consider the hostname included in the SNI
information more reliable than the hostname provided in the CONNECT request
for certificate CN verification.


This is a Measurement Factory project
SNI information is not set on transparent bumping mode

Forward SNI (obtained from an intercepted client connection) to servers
when SslBump peeks or stares at the server certificate.

SslBump was not forwarding SNI to servers when Squid obtained SNI from an 
intercepted client while peeking (or staring) at client Hello.

This is a Measurement Factory project
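
As background, attaching SNI to an outgoing OpenSSL connection is a single
standard call (a sketch only; the patch itself routes the name through
Squid's ServerBio and the ssl_ex_index_server slot instead of calling
this directly):

  // ssl is the client-side SSL object for the squid-to-server leg;
  // sniHostName would be the name peeked from the intercepted client
  SSL_set_tlsext_host_name(ssl, sniHostName);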
=== modified file 'src/ssl/PeerConnector.cc'
--- src/ssl/PeerConnector.cc	2015-01-13 07:25:36 +
+++ src/ssl/PeerConnector.cc	2015-02-08 08:35:55 +
@@ -127,82 +127,91 @@
 bail(anErr);
 return;
 }
 
 if (peer) {
 if (peer->ssldomain)
 SSL_set_ex_data(ssl, ssl_ex_index_server, peer->ssldomain);
 
 #if NOT_YET
 
 else if (peer->name)
 SSL_set_ex_data(ssl, ssl_ex_index_server, peer->name);
 
 #endif
 
 else
 SSL_set_ex_data(ssl, ssl_ex_index_server, peer->host);
 
 if (peer->sslSession)
 SSL_set_session(ssl, peer->sslSession);
-
-} else if (request->clientConnectionManager->sslBumpMode == Ssl::bumpPeek || request->clientConnectionManager->sslBumpMode == Ssl::bumpStare) {
-// client connection is required for Peek or Stare mode in the case we need to splice
+} else if (const ConnStateData *csd = request->clientConnectionManager.valid()) {
+// client connection is required in the case we need to splice
 // or terminate client and server connections
 assert(clientConn != NULL);
-SSL *clientSsl = fd_table[request->clientConnectionManager->clientConnection->fd].ssl;
-BIO *b = SSL_get_rbio(clientSsl);
-Ssl::ClientBio *clnBio = static_cast<Ssl::ClientBio *>(b->ptr);
-const Ssl::Bio::sslFeatures features = clnBio->getFeatures();
-if (features.sslVersion != -1) {
-features.applyToSSL(ssl);
-// Should we allow it for all protocols?
-if (features.sslVersion >= 3) {
-b = SSL_get_rbio(ssl);
-Ssl::ServerBio *srvBio = static_cast<Ssl::ServerBio *>(b->ptr);
-srvBio->setClientFeatures(features);
-srvBio->recordInput(true);
-srvBio->mode(request->clientConnectionManager->sslBumpMode);
-}
+const char *hostName = NULL;
+Ssl::ClientBio *cltBio = NULL;
+
+// In server-first bumping mode, clientSsl is NULL.
+if (SSL *clientSsl = fd_table[clientConn->fd].ssl) {
+BIO *b = SSL_get_rbio(clientSsl);
+cltBio = static_cast<Ssl::ClientBio *>(b->ptr);
+const Ssl::Bio::sslFeatures features = cltBio->getFeatures();
+if (!features.serverName.isEmpty())
+hostName = features.serverName.c_str();
+}
 
-const bool isConnectRequest = request->clientConnectionManager.valid() &&
-  !request->clientConnectionManager->port->flags.isIntercepted();
-if (isConnectRequest)
-SSL_set_ex_data(ssl, ssl_ex_index_server, (void*)request->GetHost());
-else if (!features.serverName.isEmpty())
-SSL_set_ex_data(ssl, ssl_ex_index_server, (void*)features.serverName.c_str());
+if (!hostName) {
+// While we are peeking at the certificate, we may not know the server
+// name that the client will request (after interception or CONNECT)
+// unless it was the CONNECT request with a user-typed address.
+const bool isConnectRequest = !csd->port->flags.isIntercepted();
+if (!request->flags.sslPeek || isConnectRequest)
+hostName = request->GetHost();
+}
+
+if (hostName)
+SSL_set_ex_data(ssl, ssl_ex_index_server, (void*)hostName);
+
+if (csd->sslBumpMode == Ssl::bumpPeek || csd->sslBumpMode == Ssl::bumpStare) {
+assert(cltBio);
+const Ssl::Bio::sslFeatures features = cltBio->getFeatures();
+if (features.sslVersion != -1) {
+features.applyToSSL(ssl);
+// Should we allow it for all protocols?
+if (features.sslVersion >= 3) {
+BIO *b = SSL_get_rbio(ssl);
+Ssl::ServerBio *srvBio = static_cast<Ssl::ServerBio *>(b->ptr);
+// Inherit client features, like SSL version, SNI and others
+srvBio->setClientFeatures(features);
+

Re: [squid-dev] Moved PID file management from Coordinator to Master

2015-01-21 Thread Tsantilas Christos

On 01/21/2015 12:17 PM, Amos Jeffries wrote:


On 21/01/2015 10:57 p.m., Tsantilas Christos wrote:

On 01/20/2015 02:55 AM, Alex Rousskov wrote:

On 01/16/2015 08:51 AM, Amos Jeffries wrote:

On 16/01/2015 11:29 a.m., Alex Rousskov wrote:

In SMP, there is only one Coordinator process, created by
the Master process.



All SMP kids (Coordinator, workers, and diskers) are started
by the Master process. There are no multiple levels as far as
kid startup and waiting are concerned and, hence, there is no
level deeper than the master can see.




Hmm, okay. Then I have no problem per se with this change of
responsibility.


Great, thank you.



I do still think the coordinator needs to remain active until the
last of the kids is out though, so they can still use it to
coordinate during their shutdowns. Having it be the first up
and last down would solve a few architectural problems where
kids need to collaborate on things, like log rotations or
broadcasting their availability.


Agreed!



This patch does not prevent the coordinator process from exiting
before the workers, but current squid also does not guarantee that
the coordinator will exit after the workers. I agree that we need to
implement it, but it looks like this is out of the scope of this
patch.

If there is not any objection I will apply this patch as is for
now.



Okay, let's give it a try.

Can you please though make sure all the new functions use the Squid
coding style instead of the one you keep slipping in. Squid style is
this in .cc files:

  type
  functionName(...)
  {
 ... code ...
  }


Hmm...
I found 1-2 cases... I will fix them while applying to trunk.




Cheers
Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] Moved PID file management from Coordinator to Master

2015-01-21 Thread Tsantilas Christos


I am posting a new patch.

This patch includes fixes to follow the Squid Coding Style but also has a
fix for a small bug:
 In the previous patches I posted, the PID file creation was done by the
master process but the PID file was not removed by the master process. This
patch fixes the master process to remove the PID file on exit.





On 01/21/2015 12:17 PM, Amos Jeffries wrote:


On 21/01/2015 10:57 p.m., Tsantilas Christos wrote:

On 01/20/2015 02:55 AM, Alex Rousskov wrote:

On 01/16/2015 08:51 AM, Amos Jeffries wrote:

On 16/01/2015 11:29 a.m., Alex Rousskov wrote:

In SMP, there is only one Coordinator process, created by
the Master process.



All SMP kids (Coordinator, workers, and diskers) are started
by the Master process. There are no multiple levels as far as
kid startup and waiting are concerned and, hence, there is no
level deeper than the master can see.




Hmm, okay. Then I have no problem per se with this change of
responsibility.


Great, thank you.



I do still think the coordinator needs to remain active until the
last of the kids is out though, so they can still use it to
coordinate during their shutdowns. Having it be the first up
and last down would solve a few architectural problems where
kids need to collaborate on things, like log rotations or
broadcasting their availability.


Agreed!



This patch does not prevent the coordinator process from exiting
before the workers, but current squid also does not guarantee that
the coordinator will exit after the workers. I agree that we need to
implement it, but it looks like this is out of the scope of this
patch.

If there is not any objection I will apply this patch as is for
now.



Okay, let's give it a try.

Can you please though make sure all the new functions use the Squid
coding style instead of the one you keep slipping in. Squid style is
this in .cc files:

  type
  functionName(...)
  {
 ... code ...
  }

Cheers
Amos


Moved PID file management from Coordinator to Master.

This move is the first step necessary to avoid the following race condition
among PID file deletion and shared segment creation/destruction in SMP Squid:

  O1) The old Squid Coordinator removes its PID file and quits.
  N1) The system script notices Coordinator death and starts the new Squid.
  N2) Shared segments are created by the new Master process.
  O2) Shared segments are removed by the old Master process.
  N3) New worker/disker processes fail due to missing segments.

TODO: The second step (not a part of this change) is to delete shared memory
segments before PID file is deleted (all in the Master process after this
change).


Now the Master process receives signals and is responsible for forwarding them
to the kids.

The kids do not install the default signal handler for shutdown signals (SIGINT,
SIGTERM) after a signal is received. If a second shutdown signal is received then
squid immediately terminates the event loop and exits.

When the kill-parent-hack is enabled the kids send the kill signal
to the master process and the master process forwards it to the other kids too.
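
A minimal sketch of this forwarding model (the handler and the kid table
are illustrative assumptions, not the real Squid names):

  #include <signal.h>
  #include <sys/types.h>

  static pid_t TheKids[32];                     // recorded at fork() time
  static int TheKidCount = 0;
  static volatile sig_atomic_t ShutdownSignals = 0;

  static void
  masterShutdownHandler(int sig)
  {
      ++ShutdownSignals;  // a second signal makes the kids exit immediately
      for (int i = 0; i < TheKidCount; ++i)
          kill(TheKids[i], sig);                // relay to every running kid
  }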

Also a small regression is added: the PID file can no longer be renamed using
hot reconfiguration. A full Squid restart is now required for that.

This is a Measurement Factory project.

=== modified file 'src/ipc/Kid.cc'
--- src/ipc/Kid.cc	2015-01-13 07:25:36 +
+++ src/ipc/Kid.cc	2015-01-21 11:47:29 +
@@ -33,41 +33,42 @@
 badFailures(0),
 pid(-1),
 startTime(0),
 isRunning(false),
 status(0)
 {
 }
 
 /// called when this kid got started, records PID
 void Kid::start(pid_t cpid)
 {
 assert(!running());
 assert(cpid > 0);
 
 isRunning = true;
 pid = cpid;
 time(startTime);
 }
 
 /// called when kid terminates, sets exiting status
-void Kid::stop(status_type theExitStatus)
+void
+Kid::stop(PidStatus const theExitStatus)
 {
 assert(running());
 assert(startTime != 0);
 
 isRunning = false;
 
 time_t stop_time;
 time(stop_time);
 if ((stop_time - startTime) < fastFailureTimeLimit)
 ++badFailures;
 else
 badFailures = 0; // the failures are not frequent [any more]
 
 status = theExitStatus;
 }
 
 /// returns true if tracking of kid is stopped
 bool Kid::running() const
 {
 return isRunning;

=== modified file 'src/ipc/Kid.h'
--- src/ipc/Kid.h	2015-01-13 07:25:36 +
+++ src/ipc/Kid.h	2015-01-13 16:35:58 +
@@ -1,60 +1,56 @@
 /*
  * Copyright (C) 1996-2015 The Squid Software Foundation and contributors
  *
  * Squid software is distributed under GPLv2+ license and includes
  * contributions from numerous individuals and organizations.
  * Please see the COPYING and CONTRIBUTORS files for details.
  */
 
 #ifndef SQUID_IPC_KID_H
 #define SQUID_IPC_KID_H
 
 #include "SquidString.h"
+#include "tools.h"
 
 /// Squid child, including current forked process info and
 /// info persistent across restarts
 class Kid
 {
 public:
-#if _SQUID_NEXT_
-typedef union wait status_type;
-#else
-typedef int

Re: [squid-dev] Moved PID file management from Coordinator to Master

2015-01-21 Thread Tsantilas Christos

On 01/20/2015 02:55 AM, Alex Rousskov wrote:

On 01/16/2015 08:51 AM, Amos Jeffries wrote:

On 16/01/2015 11:29 a.m., Alex Rousskov wrote:

In SMP, there is only one Coordinator process, created by the
Master process.



All SMP kids (Coordinator, workers, and diskers) are started by
the Master process. There are no multiple levels as far as kid
startup and waiting are concerned and, hence, there is no level
deeper than the master can see.




Hmm, okay. Then I have no problem per se with this change of responsibility.


Great, thank you.



I do still think the coordinator needs to remain active until the last of
the kids is out though, so they can still use it to coordinate during
their shutdowns. Having it be the first up and last down would solve a
few architectural problems where kids need to collaborate on
things, like log rotations or broadcasting their availability.


Agreed!



This patch does not prevent the coordinator process from exiting before the
workers, but current squid also does not guarantee that the coordinator will
exit after the workers.
I agree that we need to implement it, but it looks like this is out of the
scope of this patch.


If there is not any objection I will apply this patch as is for now.






Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] [PATCH] Non-HTTP bypass

2015-01-19 Thread Tsantilas Christos

Patch applied to trunk as r13853


On 01/14/2015 06:00 PM, Amos Jeffries wrote:


On 14/01/2015 7:21 a.m., Tsantilas Christos wrote:

I made all requested changes/fixes. The patch also ported to latest
trunk.




Okay, +1 for commit 

FYI: Alex, kinkie, and myself had a debate on IRC and came to an
agreement for calling the new directive on_unsupported_protocol
instead of on_first_request_error.

Please feel free to make that naming switch when committing if you
like; it does not require another review IMO.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] [PATCH] Non-HTTP bypass

2015-01-16 Thread Tsantilas Christos
I am preparing this patch for commit, but I have many problems with the
tests/testHttp1Parser tester.
Most of the problems are caused because the changes I made in
Http1Parser abort parsing immediately when no valid characters are found
for the request method.


These problems can be fixed, however there are 1-2 cases where I am not
sure about the correct fix.


For example Http1Parser without my fixes considers as valid methods:
 - methods with tabs inside the method name, for example \tGET
 - methods with '\0' at the end of the method name

About the \t, probably we should eat tabs along with spaces in
skipGarbageLines.
About the '\0', do we have such cases?  The truth is that I remember, in
the past, cases where a '\0' appeared inside HTTP request headers.
But maybe in these cases we must not include it in the HTTP request method,
but consider it as a space.
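
For reference, the stricter check being discussed roughly corresponds to
the RFC 7230 token character set (a sketch only, not the actual
Http1::Parser code):

  #include <ctype.h>
  #include <string.h>

  // true when c may appear in an HTTP method token (RFC 7230 tchar);
  // '\t' and '\0' both fail this test
  static bool
  validMethodChar(const char c)
  {
      return c != '\0' &&
             (isalnum(static_cast<unsigned char>(c)) ||
              strchr("!#$%&'*+-.^_`|~", c) != NULL);
  }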




On 01/14/2015 06:00 PM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 14/01/2015 7:21 a.m., Tsantilas Christos wrote:

I made all requested changes/fixes. The patch also ported to latest
trunk.




Okay, +1 for commit 

FYI: Alex, kinkie, and myself had a debate on IRC and came to an
agreement for calling the new directive on_unsupported_protocol
instead of on_first_request_error.

Please feel free to make that naming switch when committing if you
like; it does not require another review IMO.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] Moved PID file management from Coordinator to Master

2015-01-15 Thread Tsantilas Christos

A new patch.

This has the following changes over the old one:
 - When a worker or coordinator receives a shutdown signal, it starts the
shutdown procedure.
 - If a second shutdown signal is received by the worker process, the
shutdown_lifetime is ignored and the event loop stops immediately,
completing the shutdown procedure.


Regards,
   Christos

On 01/12/2015 07:22 PM, Amos Jeffries wrote:


On 12/01/2015 6:02 a.m., Tsantilas Christos wrote:

Hi all, this patch moves PID file management from the coordinator
process to the master process.

This move is the first step necessary to avoid the following race
condition among PID file deletion and shared segment
creation/destruction in SMP Squid:

O1) The old Squid Coordinator removes its PID file and quits.
N1) The system script notices Coordinator death and starts the new Squid.
N2) Shared segments are created by the new Master process.
O2) Shared segments are removed by the old Master process.
N3) New worker/disker processes fail due to missing segments.



The Coordinator needs to continue coordinating activities over the SMP
sockets until the workers are all shutdown and SMP sockets closed,
only then should it do O2 and O1 (in that order).

The planned behaviour for worker shutdown is to:
  W1) early client FD closures into the beginning of the
shutdown_timeout period
  W2) on each client closure or connection going idle, close it
  W3) at end of shutdown_timeout OR last client disconnect, release all
resources.

In that design the AsyncEngine still runs right up until the queue
completes draining. Using SMP sockets to inform Coordinator about
clean shutdown at the end.
The Master process has no way to know if the workers are exiting early
with no clients, or aborting on worker-specific shutdown_timeout
values. But the coordinator can receive a terminated message from them
over SMP sockets.



TODO: The second step (not a part of this change) is to delete
shared memory segments before PID file is deleted (all in the
Master process after this change).

Now the Master process receives signals and is responsible for
forwarding them to the kids.


The command line control process, also used manually with the -k options
to send signals, also thinks of itself as Master.

How does this new closing of SMP sockets interact with that other
meaning of Master process?





Please for more informations read the patch preamble.

This is a Measurement Factory project


Some extra notes/ideas --

1) Multiple shutdown signals received by squid

In current squid, when the coordinator received a shutdown signal, it
replaced the shutdown signal handlers with the default handlers. This
has as a result that when a second shutdown signal is received the
coordinator process dies immediately, without forwarding the shutdown
signal to the kids. The shutdown of the other kids finishes as
normal.

With this patch, when the master process receives a shutdown signal it
forwards it to the kids and remains ready to receive a second shutdown
signal. When a second shutdown signal is received by the master and
forwarded to the kids, the kids die immediately.


The plan was to pass the signal to the workers again, where they kick off
their own shutdown_timeout event handlers immediately instead of hard
killing workers.

FWIW: Ubuntu, Gentoo, and RHEL people are enjoying their patches that
just ignore the repeated signals.




2) The system admin sees a blocked kid (infinite loop or not
responding) and kills it by hand.

Current squid does not restart kids killed by a TERM or KILL
signal (squid considers it a normal kid shutdown). This patch does
not change this behaviour. The admin is still able to kill with a
kill -11 and in this case the kid will be restarted.

My opinion is that squid should restart kids in these cases. It should
not restart a kid only when a shutdown was requested by the system admin,
or when the kids are dying very fast (hopeless()==true).


TERM and KILL received by the workers often *are* signals sent by the
system admin, or scripts on their behalf. That may decrease in
popularity though when we fix the normal shutdown process issues. For
a while longer we have to take the current reality.




In related topics, I have been trying to figure out a --foreground
command line option that operates like -N but does not disable SMP,
just makes Coordinator == Master. But understanding the SMP
complexities has been blocking me so far.  Are you able and
interested in taking that forward?

Amos

Re: [squid-dev] Moved PID file management from Coordinator to Master

2015-01-13 Thread Tsantilas Christos

On 01/12/2015 07:22 PM, Amos Jeffries wrote:


On 12/01/2015 6:02 a.m., Tsantilas Christos wrote:

Hi all, this patch moves PID file management from the coordinator
process to the master process.

This move is the first step necessary to avoid the following race
condition among PID file deletion and shared segment
creation/destruction in SMP Squid:

O1) The old Squid Coordinator removes its PID file and quits.
N1) The system script notices Coordinator death and starts the new Squid.
N2) Shared segments are created by the new Master process.
O2) Shared segments are removed by the old Master process.
N3) New worker/disker processes fail due to missing segments.



The Coordinator needs to continue coordinating activities over the SMP
sockets until the workers are all shutdown and SMP sockets closed,
only then should it do O2 and O1 (in that order).

The planned behaviour for worker shutdown is to:
  W1) early client FD closures into the beginning of the
shutdown_timeout period
  W2) on each client closure or connection going idle, close it
  W3) at end of shutdown_timeout OR last client disconnect, release all
resources.

In that design the AsyncEngine still runs right up until the queue
completes draining, using SMP sockets to inform the Coordinator about
clean shutdown at the end.


My sense is that the exit status can provide the same functionality and
is also easier to implement.
If a worker aborts early with a segfault, I doubt that it will
be able to send a message to an SMP socket.



The Master process has no way to know if the workers are exiting early
with no clients, or aborting on worker-specific shutdown_timeout
values. But the coordinator can receive a terminated message from them
over SMP sockets.


We can use exit status.
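
For illustration, a minimal POSIX sketch of the exit-status approach
(not Squid's actual master-process code; reapOneKid is a hypothetical
name):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <cstdio>

    // Reap one kid and report how it ended, using only its exit status.
    static void reapOneKid()
    {
        int status = 0;
        const pid_t kid = waitpid(-1, &status, 0); // block until any kid dies
        if (kid < 0)
            return; // no kids left, or interrupted by a signal

        if (WIFEXITED(status))
            printf("kid %d exited with code %d\n", (int)kid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("kid %d died on signal %d\n", (int)kid, WTERMSIG(status));
    }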





TODO: The second step (not a part of this change) is to delete
shared memory segments before PID file is deleted (all in the
Master process after this change).

Now the Master process receives signals and is responsible for
forwarding them to the kids.


The command line control process, also used manually with the -k options
to send signals, also thinks of itself as Master.

How does this new closing of SMP sockets interact with that other
meaning of Master process?


The master process is the simplest squid process. I believe that it is 
the best process for doing the cleanup.








For more information please read the patch preamble.

This is a Measurement Factory project


Some extra notes/ideas --

1) Multiple shutdown signals received by squid

In current squid, when the coordinator received a shutdown signal it
replaced the shutdown signal handlers with the default handlers. This
has the result that when a second shutdown signal is received, the
coordinator process dies immediately, without forwarding the shutdown
signal to the kids. The shutdown of the other kids finishes as
normal.

With this patch, when the master process receives a shutdown signal it
forwards it to the kids and remains ready to receive a second shutdown
signal. When a second shutdown signal is received by the master and
forwarded to the kids, the kids die immediately.


The plan was to pass the signal to the workers again, where they kick off
their own shutdown_timeout event handlers immediately instead of
hard-killing the workers.


So do you believe that the workers should not restore the default handlers
for shutdown signals. Am I correct?
It is easy to implement; it is already implemented for the
kill-parent-hack, where the master process needs to send
multiple kill signals to the kids.
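
For illustration, a minimal POSIX sketch of that idea (not Squid's
actual code): the handler is installed without SA_RESETHAND, so it
stays in place for repeated signals and simply forwards each one to the
kids. KidPids and NumberOfKids are hypothetical bookkeeping names.

    #include <sys/types.h>
    #include <signal.h>

    extern pid_t KidPids[]; // hypothetical list of kid process IDs
    extern int NumberOfKids;

    static void masterShutdownHandler(int sig)
    {
        // forward every shutdown signal, first or repeated, to all kids;
        // kill() is async-signal-safe, so this is legal in a handler
        for (int i = 0; i < NumberOfKids; ++i)
            kill(KidPids[i], sig);
    }

    static void installMasterHandler()
    {
        struct sigaction sa = {};
        sa.sa_handler = masterShutdownHandler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART; // no SA_RESETHAND: handler is not replaced
        sigaction(SIGTERM, &sa, nullptr);
        sigaction(SIGINT, &sa, nullptr);
    }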





FWIW: the Ubuntu, Gentoo, and RHEL people are enjoying their patches that
just ignore the repeated signals.




2) The system admin sees a blocked kid (infinite loop or not
responding) and kills it by hand.

Current squid does not restart kids killed by a TERM or KILL
signal (squid considers it a normal kid shutdown). This patch does
not change this behaviour. The admin is still able to kill a kid with
kill -11, in which case the kid will be restarted.

My opinion is that squid should restart kids in these cases. It should
not restart a kid only when a shutdown was requested by the system admin,
or when the kids are dying very fast (hopeless()==true).


TERM and KILL received by the workers often *are* signals sent by the
system admin, or scripts on their behalf. That may decrease in
popularity though when we fix the normal shutdown process issues. For
a while longer we have to accept the current reality.


ok.







In related topics, I have been trying to figure out a --foreground
command line option that operates like -N but does not disable SMP,
just makes Coordinator == Master. But understanding the SMP
complexities has been blocking me so far.  Are you able and
interested in taking that forward?


I do not know :-)
I must ask Alex for this.

Regards,
Christos




Amos

Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2015-01-07 Thread Tsantilas Christos


On 01/01/2015 01:47 AM, Alex Rousskov wrote:

On 11/09/2014 02:02 PM, Tsantilas Christos wrote:


  void
  Http::Server::processParsedRequest(ClientSocketContext *context)
  {
 +if (!buildHttpRequest(context))
 +return;
 +
 +if (Config.accessList.forceRequestBodyContinuation) {
 +ClientHttpRequest *http = context->http;
 +HttpRequest *request = http->request;
 +ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request, NULL);
 +if (bodyContinuationCheck.fastCheck() == ACCESS_ALLOWED) {
 +debugs(33, 5, "Body Continuation forced");
 +request->forcedBodyContinuation = true;



The HTTP code above sends 100-Continue responses to HTTP GET messages
unless the admin is very careful with the ACLs. This can be reproduced
trivially with

   force_request_body_continuation allow all

We should not evaluate force_request_body_continuation if the request
does not have a body IMO. The force_request_body_continuation
documentation makes that option specific to upload requests. If you
agree, please adjust the committed code accordingly.
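
One possible shape of the guard being asked for here, as a sketch
rather than the committed code; it assumes the presence of an upload
body can be approximated by the request's body_pipe:

    if (request->body_pipe != NULL && Config.accessList.forceRequestBodyContinuation) {
        // consult the ACL only for requests that actually carry a body
        ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request, NULL);
        if (bodyContinuationCheck.fastCheck() == ACCESS_ALLOWED) {
            debugs(33, 5, "Body Continuation forced");
            request->forcedBodyContinuation = true;
        }
    }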


:-(

I am attaching a patch which fixes this. The patch is attached in both
normal form and with the -b diff option, to allow squid developers to
examine the proposed changes.


In my patch I am moving the check for the Expect: 100-Continue header from
clientProcessRequest() (located in the client_side.cc file), which checks
for valid Expect header values, to the Http::Server::processParsedRequest
method, to minimise the required checks.


 I believe this relocation is valid because this check is needed only
for the HTTP protocol. For the FTP protocol the Expect header is generated by
squid and cannot include unsupported values.


Alternatively, we can just check whether an Expect: 100-continue header exists
inside Http::Server::processParsedRequest.





The similar FTP check seems to be inside the upload-specific code and,
hence, should not need additional "do we expect a body?" guards.


Yep.




Thank you,

Alex.



=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-01-01 08:57:18 +
+++ src/client_side.cc	2015-01-07 11:28:41 +
@@ -2605,23 +2605,6 @@
 return;
 }
 
-if (request->header.has(HDR_EXPECT)) {
-const String expect = request->header.getList(HDR_EXPECT);
-const bool supportedExpect = (expect.caseCmp("100-continue") == 0);
-if (!supportedExpect) {
-clientStreamNode *node = context->getClientReplyContext();
-clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
-assert (repContext);
-conn->quitAfterError(request.getRaw());
-repContext->setReplyToError(ERR_INVALID_REQ, Http::scExpectationFailed, request->method, http->uri,
-conn->clientConnection->remote, request.getRaw(), NULL, NULL);
-assert(context->http->out.offset == 0);
-context->pullData();
-clientProcessRequestFinished(conn, request);
-return;
-}
-}
-
 clientSetKeepaliveFlag(http);
 // Let tunneling code be fully responsible for CONNECT requests
 if (http->request->method == Http::METHOD_CONNECT) {

=== modified file 'src/servers/HttpServer.cc'
--- src/servers/HttpServer.cc	2014-12-20 12:12:02 +
+++ src/servers/HttpServer.cc	2015-01-07 11:18:41 +
@@ -248,10 +248,29 @@
 if (!buildHttpRequest(context))
 return;
 
-if (Config.accessList.forceRequestBodyContinuation) {
 ClientHttpRequest *http = context->http;
-HttpRequest *request = http->request;
-ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request, NULL);
+HttpRequest::Pointer request = http->request;
+
+if (request->header.has(HDR_EXPECT)) {
+const String expect = request->header.getList(HDR_EXPECT);
+const bool supportedExpect = (expect.caseCmp("100-continue") == 0);
+if (!supportedExpect) {
+clientStreamNode *node = context->getClientReplyContext();
+quitAfterError(request.getRaw());
+// setLogUri should be called before repContext->setReplyToError
+setLogUri(http, urlCanonicalClean(request.getRaw()));
+clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
+assert (repContext);
+repContext->setReplyToError(ERR_INVALID_REQ, Http::scExpectationFailed, request->method, http->uri,
+clientConnection->remote, request.getRaw(), NULL, NULL);
+assert(context->http->out.offset == 0);
+context->pullData();
+clientProcessRequestFinished(this, request);
+return;
+}
+
+if (Config.accessList.forceRequestBodyContinuation) {
+ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request.getRaw(), NULL

Re: [squid-dev] [MERGE] Fix splay

2015-01-06 Thread Tsantilas Christos

Hi all,
I am getting assertions while squid.conf is parsed, using the latest squid
sources. It looks like the reason is the splay trees. Is it possible that
this patch causes these bugs?


I am seeing this problem when an "acl localhost src 127.0.0.1/32" line
is parsed (duplicate value?) or when proxy_auth or snmp_community acls
are parsed.


The backtrace for the proxy_auth acl line ("acl UserChtsanti proxy_auth
chtsanti") is the following:


#0  find<char*> (this=0x0, compare=0xb89f80 <Debug::Levels>,
value=@0x7fff4bcf01e8: 0x1396b50 "chtsanti") at
../../include/splay.h:287

#1  Splay<char*>::insert (this=0x0,
value=@0x7fff4bcf01e8: 0x1396b50 "chtsanti",
compare=0x6a6620 <splaystrcmp(char* const&, char* const&)>)
at ../../include/splay.h:302
#2  0x006a7130 in ACLUserData::parse (this=0x1395f50)
at UserData.cc:120
#3  0x006e5d8e in ACL::ParseAclLine (parser=...,
head=0xcb2318 <Config+1336>) at Acl.cc:263
#4  0x0052ad18 in parse_acl (ae=<optimized out>) at cache_cf.cc:1292
#5  parse_line (buff=<optimized out>) at cf_parser.cci:921
#6  0x0052c2b2 in parseOneConfigFile (
file_name=file_name@entry=0x1386d30 "squid-http-bypass.conf",
depth=depth@entry=0) at cache_cf.cc:543
#7  0x0052cde9 in parseConfigFile (
file_name=0x1386d30 "squid-http-bypass.conf") at cache_cf.cc:584
#8  0x005ffe51 in SquidMain (argc=<optimized out>, argv=0x7fff4bcf08a8)
at main.cc:1397
#9  0x0050815b in SquidMainSafe (argv=<optimized out>,
argc=<optimized out>) at main.cc:1251
#10 main (argc=<optimized out>, argv=<optimized out>) at main.cc:1244
(gdb) up
#1  Splay<char*>::insert (this=0x0,
value=@0x7fff4bcf01e8: 0x1396b50 "chtsanti",
compare=0x6a6620 <splaystrcmp(char* const&, char* const&)>)
at ../../include/splay.h:302
302 assert (!find (value, compare));



On 01/06/2015 05:52 AM, Amos Jeffries wrote:


On 3/01/2015 1:54 a.m., Kinkie wrote:

I believe this is missing several delete Foo; in the ACL
destructors. You used new() to allocate the Splay objects,
there should be matching delete() at the same owner/abstraction
level. For example:  ACLIP::~ACLIP() { if (data) {
data->destroy(IPSplay::DefaultFree); data->destroy(); delete
data; } }  ... although, it would be even better if the data
members could be made non-pointers now they are Splay objects.
Is that easy enough to do in this patch ?


Well, splay doesn't have a destructor :\ I'll see if I can do
something about it.


Other than the new() / delete() issue. +1.


Ok, thanks. I'll also see about the empty lines.



For the record this was merged as trunk rev.13810.

Amos


Re: [squid-dev] [PATCH] Non-HTTP bypass

2015-01-06 Thread Tsantilas Christos

Hi all,

  I am posting a new patch. Sorry for the delay, but the patch was a
little old and many changes were required.


This patch is updated to apply to the latest squid sources, and uses the
new http parser.


This patch modifies the http parser to reject a method which includes
non-alphanumeric characters in the first request line, and also, if no
method or no url exists in the request, to return the ERR_WRONG_PROTOCOL
error instead of the ERR_INVALID_REQ error.
I fully understand that non-alphanumeric chars in the method or a missing
url are not enough to consider a protocol "wrong", but I believe it is
not a bad heuristic.


Please read the patch preamble for more information.

Regards,
   Christos
Non-HTTP bypass

Intercepting proxies often receive non-HTTP connections. Squid cannot currently
deal with such connections well because it assumes that a given port receives
HTTP, FTP, or HTTPS traffic exclusively. This patch allows Squid to tunnel
unexpected connections instead of terminating them with an error.

In this project, we define an unexpected connection as a connection that
resulted in a Squid error during first request parsing. Which errors trigger
tunneling behavior is configurable by the admin using ACLs.

Here is a configuration sketch:

  # define what Squid errors indicate receiving non-HTTP traffic:
  acl foreignProtocol squid_error ERR_WRONG_PROTOCOL ERR_TOO_BIG

  # define what Squid errors indicate receiving nothing:
  acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT

  # tunnel everything that does not look like HTTP:
  on_first_request_error tunnel foreignProtocol

  # tunnel if we think the client waits for the server to talk first:
  on_first_request_error tunnel serverTalksFirstProtocol

  # in all other error cases, just send an HTTP error page response:
  on_first_request_error respond all

  # Configure how long to wait for the first byte on the incoming
  # connection before raising an ERR_REQUEST_START_TIMEOUT error.
  request_start_timeout 5 seconds

The overall intent of this TCP tunnel is to get Squid out of the communication
loop to the extent possible. Once the decision to tunnel is made, no Squid
errors are going to be sent to the client and tunneled traffic is not going to
be sent to Squid adaptation services or logged to access.log (except for a
single summary line at the end of the transaction). Connection closure at the
server (or client) end of the tunnel is propagated to the other end by closing
the corresponding connection.

This patch also:

* Add on_first_request_error, a new ACL-driven squid.conf directive that can
be used to establish a blind TCP tunnel which relays all bytes from/to the
intercepted connection to/from the intended destination address. See the sketch
above.
The on_first_request_error directive supports fast ACLs only.

* Add squid_error, a new ACL type to match transactions that triggered a given
Squid error. Squid error IDs are used to configure one or more errors to match.
This is similar to the existing ssl_error ACL type but works with
Squid-generated errors rather than SSL library errors.

* Add ERR_WRONG_PROTOCOL, a new Squid error triggered for http_port connections
that start with something that lacks even basic HTTP request structure. This
error is triggered by the HTTP request parser, and probably only when/after the
current parsing code detects an error. That is, we do not want to introduce
new error conditions, but we want to treat some of the currently triggered
parsing errors as a wrong protocol error, possibly after checking the parsing
state or the input buffer for some clues. There is no known way to reliably
distinguish malformed HTTP requests from non-HTTP traffic so the parser has
to use some imprecise heuristics to make a decision in some cases.
In the future, it would be possible to add code to reliably detect some popular
non-HTTP protocols, but adding such code is outside this project scope.

* Add request_start_timeout, a new squid.conf directive to trigger a new
Squid ERR_REQUEST_START_TIMEOUT error if no bytes are received from the
client on a newly established http_port connection during the configured
time period. Applies to all http_ports (for now).

No support for tunneling through cache_peers is included. Configurations
that direct outgoing traffic through a peer may break Squid.

This is a Measurement Factory project
=== modified file 'errors/template.list'
--- errors/template.list	2014-12-20 18:12:02 +
+++ errors/template.list	2014-12-31 10:43:41 +
@@ -30,21 +30,22 @@
 templates/ERR_FTP_UNAVAILABLE \
 templates/ERR_GATEWAY_FAILURE \
 templates/ERR_ICAP_FAILURE \
 templates/ERR_INVALID_REQ \
 templates/ERR_INVALID_RESP \
 templates/ERR_INVALID_URL \
 templates/ERR_LIFETIME_EXP \
 templates/ERR_NO_RELAY \
 templates/ERR_ONLY_IF_CACHED_MISS \
 templates/ERR_PRECONDITION_FAILED \
 templates/ERR_READ_ERROR \
 templates/ERR_READ_TIMEOUT \
 

Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-24 Thread Tsantilas Christos

Patch applied to trunk (revno: 13780).

On 12/23/2014 08:52 PM, Tsantilas Christos wrote:

If there are no objections I will apply this patch to trunk.

On 12/15/2014 12:39 PM, Tsantilas Christos wrote:

Hi all,

  I am attaching a new patch for the pconn_lifetime feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.

This patch is similar to the old one posted, with a small fix to better
handle pipelined connections:
   1. finish interpreting the Nth request
  check whether pconn_lifetime has expired
   2. if pconn_lifetime has expired, then stop further reading and
  do not interpret any already read raw bytes of the N+1st request
   3. otherwise, read and interpret read raw bytes of the N+1st request
  and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request failed, it should be
retried on a new connection.

I must recall the following about this patch:
- The pconn_lifetime applies to any persistent connection:
server, client, or ICAP.
- This patch does not fix other problems that may exist in current squid.
- pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.

This is a Measurement Factory project





Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-23 Thread Tsantilas Christos

If there are no objections I will apply this patch to trunk.

On 12/15/2014 12:39 PM, Tsantilas Christos wrote:

Hi all,

  I am attaching a new patch for the pconn_lifetime feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.

This patch is similar to the old one posted, with a small fix to better
handle pipelined connections:
   1. finish interpreting the Nth request
  check whether pconn_lifetime has expired
   2. if pconn_lifetime has expired, then stop further reading and
  do not interpret any already read raw bytes of the N+1st request
   3. otherwise, read and interpret read raw bytes of the N+1st request
  and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request failed, it should be
retried on a new connection.

I must recall the following about this patch:
- The pconn_lifetime applies to any persistent connection:
server, client, or ICAP.
- This patch does not fix other problems that may exist in current squid.
- pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.

This is a Measurement Factory project





Re: [squid-dev] RFC 3.5.0.3

2014-12-18 Thread Tsantilas Christos

On 12/18/2014 03:14 PM, Amos Jeffries wrote:


Thanks to the issue behind rev.13760 (Support http_access denials of
SslBump 'peeked' connections.) I intend to release a new beta approx.
20hrs from this writing. I hope this will be the final beta.

If there are any outstanding issues that need to be in the next beta
and can be applied to trunk before then, please commit ASAP or request
a hold (I can wait 2-3 days if necessary, but would rather get this out sooner).


Please wait 2-3 days...
I am working on an ssl peek-and-splice bug which is important. It will be
ready soon.

Sorry for this.

Also, I applied a patch, revno:13764 ("Fix DONT_VERIFY_DOMAIN ssl
flag"), which fixes a small squid bug that exists in previous releases too...


Regards,
   Christos




Thank you
Amos


Re: [squid-dev] [PATCH] Support http_access denials of SslBump peeked connections.

2014-12-16 Thread Tsantilas Christos

On 12/15/2014 02:20 AM, Amos Jeffries wrote:


On 10/12/2014 5:30 a.m., Tsantilas Christos wrote:

Hi all,

If an SSL connection is peeked, it is currently not possible to
deny it with http_access. For example, the following configuration
denies all plain HTTP requests as expected but allows all CONNECTs
(and all subsequent  encrypted/spliced HTTPS requests inside the
allowed CONNECT tunnels):

http_access deny all
ssl_bump peek all
ssl_bump splice all



I see two separate bugs in your description:

1) For plain HTTP requests the bug is how the client CONNECT request
is getting past "http_access deny all" in the first place before
bumping is even considered to be an option.

That config should never be getting anywhere near bumping for plain
HTTP requests until *after* the CONNECT has decided to be accepted.

The call chain should be:
   httpAccept() ->
   while parseOne() {
     processRequest() ->
     doCallouts {
       http_access ACLs,
       adaptation,
       etc
       httpStart[ bump | tunnel ]
     }
   }



Yes! this is what this patch does.
It fixes the transparent SSL bumped connections to do exactly this.




2) For intercepted traffic the bug is probably the absence of an
implicit/fake CONNECT wrapper around the whole connection traffic
content. The fake CONNECT should be passed through http_access before
any traffic is allowed to flow.
  - if SNI is present, then fake with that; otherwise fake with the TCP dst-IP.


Unfortunately we do not have the SNI info during this step. We have it
in bumping step2, after we have started the bumping.
Because we may splice the connection (establish a tunnel) in step1, we
should check http_access before this happens.




I think what we are missing for this #2 bug is really the earlier
proposed tcp_access (or a TLS equivalent).




The bug results in insecure bumping configurations and/or forces
admins to abuse ssl_bump directive (during step1 of bumping) for
access control (as a partial workaround).

This change sends all SSL tunnels (CONNECT and transparent)
through http_access (and adaptation, etc.) checks during bumping
step1. If (real or fake) CONNECT is denied during step1, then Squid
does not connect to the SSL server, but bumps the client
connection, and then delivers an error page (in response to the
first decrypted GET). The behavior is similar to what Squid has
already been doing for server certificate validation errors.

Please read the Technical notes included in patch preamble.


While we are missing proper non-http_access ACL controls, this change
is acceptable as a workaround for bug #2.

Can you check whether it also fixes bug #1 properly?

+1.

Amos



Re: [squid-dev] [PATCH] Support http_access denials of SslBump peeked connections.

2014-12-16 Thread Tsantilas Christos

On 12/16/2014 01:07 PM, Amos Jeffries wrote:


If you are happy enough that this is a solid patch, it can go in ASAP and I
will release a 3.5 beta to test it.



patch applied to trunk


Amos



Re: [squid-dev] [PATCH] pconn_lifetime

2014-12-15 Thread Tsantilas Christos

Hi all,

 I am attaching a new patch for the pconn_lifetime feature. A first
patch was posted to the mailing list and discussed in the mail thread
with the same title 1-2 months ago.


This patch is similar to the old one posted, with a small fix to better 
handle pipelined connections:

  1. finish interpreting the Nth request
 check whether pconn_lifetime has expired
  2. if pconn_lifetime has expired, then stop further reading and
 do not interpret any already read raw bytes of the N+1st request
  3. otherwise, read and interpret read raw bytes of the N+1st request
 and go to #1.

The above should be enough. Pipelined requests are always
idempotent, they do not have body data to take care of, and web
clients know that if a pipelined HTTP request failed, it should be
retried on a new connection.
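
A compact sketch of those three steps (all names below are hypothetical,
for illustration only; this is not Squid's connection code):

    #include <ctime>

    struct Connection {
        time_t acceptTime; // when the connection was accepted or opened
        time_t lifetime;   // configured pconn_lifetime (0 = no limit)

        bool lifetimeExpired() const {
            return lifetime > 0 && time(nullptr) - acceptTime >= lifetime;
        }
        void processNthRequest();  // interpret the already-read request
        void stopReading();        // leave the N+1st raw bytes untouched
        void readAndParseNext();   // pull in the N+1st request
    };

    void serveOne(Connection &conn)
    {
        conn.processNthRequest();  // 1. finish the Nth request
        if (conn.lifetimeExpired()) {
            conn.stopReading();    // 2. client retries on a new connection
            return;
        }
        conn.readAndParseNext();   // 3. continue with the N+1st request
    }                              //    and repeat from step 1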


I must recall the following about this patch:
   - The pconn_lifetime applies to any persistent connection:
server, client, or ICAP.

   - This patch does not fix other problems that may exist in current squid.
   - pconn_lifetime should not be confused with the client_lifetime
timeout. They have different purposes.


This is a Measurement Factory project

pconn_lifetime

This patch adds a new configuration option, 'pconn_lifetime', to allow users
to set the desired maximum lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection that
exceeded the configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). It has no effect on
ongoing/active transactions. Connection lifetime is the time period from
the connection acceptance or opening time until now.

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.

This option has the following behaviour when pipelined requests arrive on
a connection whose lifetime has expired:

 1. finish interpreting the Nth request
check whether pconn_lifetime has expired
 2. if pconn_lifetime has expired, then stop further reading and
do not interpret any already read raw bytes of the N+1st request
 3. otherwise, read and interpret read raw bytes of the N+1st request
and go to #1.


This is a Measurement Factory project
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2014-12-04 14:00:17 +
+++ src/SquidConfig.h	2014-12-15 09:28:15 +
@@ -72,40 +72,41 @@
 #if USE_HTTP_VIOLATIONS
 time_t negativeTtl;
 #endif
 time_t maxStale;
 time_t negativeDnsTtl;
 time_t positiveDnsTtl;
 time_t shutdownLifetime;
 time_t backgroundPingRate;
 
 struct {
 time_t read;
 time_t write;
 time_t lifetime;
 time_t connect;
 time_t forward;
 time_t peer_connect;
 time_t request;
 time_t clientIdlePconn;
 time_t serverIdlePconn;
 time_t ftpClientIdle;
+time_t pconnLifetime; ///< pconn_lifetime in squid.conf
 time_t siteSelect;
 time_t deadPeer;
 int icp_query;  /* msec */
 int icp_query_max;  /* msec */
 int icp_query_min;  /* msec */
 int mcast_icp_query;/* msec */
 time_msec_t idns_retransmit;
 time_msec_t idns_query;
 time_t urlRewrite;
 } Timeout;
 size_t maxRequestHeaderSize;
 int64_t maxRequestBodySize;
 int64_t maxChunkedRequestBodySize;
 size_t maxRequestBufferSize;
 size_t maxReplyHeaderSize;
 AclSizeLimit *ReplyBodySize;
 
 struct {
 unsigned short icp;
 #if USE_HTCP

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2014-12-08 11:25:58 +
+++ src/cf.data.pre	2014-12-15 09:28:15 +
@@ -6129,40 +6129,65 @@
 TYPE: time_t
 LOC: Config.Timeout.lifetime
 DEFAULT: 1 day
 DOC_START
 	The maximum amount of time a client (browser) is allowed to
 	remain connected to the cache process.  This protects the Cache
 	from having a lot of sockets (and hence file descriptors) tied up
 	in a CLOSE_WAIT state from remote clients that go away without
 	properly shutting down (either because of a network failure or
 	because of a poor client implementation).  The default is one
 	day, 1440 minutes.
 
 	NOTE:  The default value is intended to be much larger than any
 	client would ever need to be connected to your cache.  You
 	should probably change client_lifetime only as a last resort.
 	If you seem to have many client connections tying up
 	filedescriptors, we recommend first tuning the read_timeout,
 	request_timeout, persistent_request_timeout and quick_abort values.
 DOC_END
 
+NAME: pconn_lifetime
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.Timeout.pconnLifetime
+DEFAULT: 0 seconds
+DOC_START
+	Desired maximum lifetime of a persistent connection.
+	When set, Squid will close a now-idle 

Re: [squid-dev] [PATCH] url_rewrite_timeout directive

2014-12-03 Thread Tsantilas Christos

If there are no objections I will apply the last patch to trunk...


On 11/24/2014 02:36 PM, Tsantilas Christos wrote:

This is a new patch for the url_rewrite_timeout feature.

Changes over the last patch:
- The tools/helper-mux/helper-mux script was fixed to work with the new
helpers request-id.
- Now there is a limit on request retries; it is hardcoded to 2
retries.
- Retrying requests with BH replies on storeID and redirector
helpers is now handled inside the helpers.cc code.
- other minor polishing changes

I did not remove the on_timeout option from the url_rewrite_timeout
directive. Although it can be emulated using the
use_configured_response option, I believe it is a clearer configuration
method. The use_configured_response option looks more like a trick, and I
am sure that in 1-2 years even I, who developed the patch, will forget
that such a configuration option exists.

I must note again that the default behaviour of the current helpers
configuration does not change with this patch.

I hope it is OK.

Regards,
Christos






Re: [squid-dev] splay.h replacement

2014-11-24 Thread Tsantilas Christos

On 11/21/2014 07:43 PM, Amos Jeffries wrote:


On 19/11/2014 4:08 a.m., Tsantilas Christos wrote:

The compiler is right



I know the compiler is right about it being garbage. It's just that the
code looks like there are things in those templates which are wrongly
depending on the objects being NULL pointers for certain causes.
splay.h needs a serious revamp.


I've gone through now and dropped all the if()s and asserts depending
on this==NULL or this!=NULL conditionals. Will apply that when clang
3.5 confirms the build works.


Unfortunately we are using similar checks in other places too:

 ..TRUNK/src # grep -n "this *== *NULL" *.cc */*.cc */*/*.cc
store.cc:1671:if (this == NULL)
auth/UserRequest.cc:127:if (this == NULL || getDenyMessage() == NULL) {

Probably used in the past to fix/prevent bugs, but I agree, it is not 
the best method...
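
For illustration, a sketch of why the compiler objects: calling a member
function through a null pointer is already undefined behaviour, so the
optimizer may assume "this" is never null and delete the test entirely.
Foo is a stand-in class, not Squid code.

    struct Foo {
        int value;

        int brokenGet() const {
            if (this == nullptr) // UB: the check may be optimized away
                return 0;
            return value;
        }

        int get() const { return value; } // no self null-check
    };

    // safer: make the null check explicit at the call site instead
    int safeGet(const Foo *f)
    {
        return f ? f->get() : 0;
    }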





Amos



On 11/18/2014 05:23 AM, Amos Jeffries wrote:

Y'all may have noticed the clang 3.5 errors.

lib/MemPoolChunked.cc:370:10: error: 'this' pointer cannot be
null in well-defined C++ code; pointer may be assumed to always
convert to true [-Werror,-Wundefined-bool-conversion]

include/splay.h:228:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always
evaluate to false [-Werror,-Wtautological-undefined-compare]

include/splay.h:198:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always
evaluate to false [-Werror,-Wtautological-undefined-compare]

include/splay.h:167:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always
evaluate to false [-Werror,-Wtautological-undefined-compare]

include/splay.h:228:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always
evaluate to false [-Werror,-Wtautological-undefined-compare]


Anyone in a position to update the splay tree code so it stops
depending on NULL pointer dereferences having meaning?

It is mandatory change for continued FreeBSD support.

Amos





[squid-dev] bug 4033

2014-11-24 Thread Tsantilas Christos

I attached a patch for bug 4033 in squid bugzila:
   http://bugs.squid-cache.org/attachment.cgi?id=3101&action=diff

If there are no objections I will apply it to trunk.

Regards,
   Christos


Re: [squid-dev] Http::One::Parser::getHeaderField bug

2014-11-19 Thread Tsantilas Christos

On 11/19/2014 02:45 PM, Amos Jeffries wrote:


Did that fix solve the issue for you?


Yes, please commit to trunk!



Amos

On 13/11/2014 4:06 p.m., Amos Jeffries wrote:

On 13/11/2014 5:34 a.m., Tsantilas Christos wrote:

The following patch is fixing it:

=== modified file 'src/http/one/Parser.cc' ---
src/http/one/Parser.cc  2014-09-14 12:43:00 + +++
src/http/one/Parser.cc  2014-11-12 16:31:08 + @@ -71,7
+71,7 @@ p.chop(0, sizeof(header)-1);

// return the header field-value -xstrncpy(header,
p.rawContent(), p.length()); +strcpy(header, p.c_str());


c_str() re-allocates. We can and need to avoid that here.


debugs(25, 5, "returning " << header); return header; }


Does this look like an SBuf bug?


No. I think it was just me overlooking that xstrncpy() length
includes and enforces the '\0' termination.

Use this:  xstrncpy(header, p.rawContent(), p.length()+1);

Amos
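
A sketch of the contract being described, with a stand-in
implementation (the real xstrncpy may differ in details): the size
argument counts the terminator, so p.length() copies one byte too few
while p.length()+1 copies the whole value.

    #include <cstddef>

    // Stand-in with the contract discussed above: copy at most n-1
    // bytes and always NUL-terminate the destination.
    static char *xstrncpy_sketch(char *dst, const char *src, size_t n)
    {
        if (n == 0)
            return dst;
        size_t i = 0;
        while (i < n - 1 && src[i] != '\0') { // at most n-1 bytes copied
            dst[i] = src[i];
            ++i;
        }
        dst[i] = '\0'; // enforced termination eats one byte of the size
        return dst;
    }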




[squid-dev] [PATCH] Logging fast things

2014-11-19 Thread Tsantilas Christos

Hi all,

In many cases HITs are logged with zero response times. The logging entries
are correct; those transactions took less than a millisecond. However, to
better monitor Squid performance and to optimize things further, a user
may want to see more precise response time measurements logged.


This patch adds configurable precision for time-related fields such as
%tr, using a ".n" syntax similar to the syntax used by the well-known
printf(3) API to mean maximum field width.


Examples:
  %tr   -- 0 -- millisecond precision (no change compared to today)
  %.3tr -- 0.123 -- microsecond precision (after this project)
  %.6tr -- 0.123456 -- nanosecond precision (year 2050 trading platform??)


Technical notes
===============

At the first stages of this patch I used doubles to record times for
logging, but finally I selected timeval structures. I hope it is a
good choice.


A problem I had was to identify uninitialized timeval values.
Currently squid uses signed integers to record millisecond values, and a
negative value means an uninitialized variable.
Finally I used -1 in the timeval::tv_sec field to mark uninitialized
timeval variables. The timeval::tv_sec member is of type time_t, and we
are already using -1 to mark uninitialized time_t variables in some
places inside squid.
An alternative approach is to use for this purpose MAXLONG, or maybe the
std::numeric_limits<time_t>::max() value.
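
A sketch of that convention (the helper names are illustrative, not
from the patch):

    #include <sys/time.h>

    inline void markTimeUnset(struct timeval &tv)
    {
        tv.tv_sec = -1; // tv_sec is a time_t; -1 marks "uninitialized"
        tv.tv_usec = 0;
    }

    inline bool timeIsSet(const struct timeval &tv)
    {
        return tv.tv_sec >= 0;
    }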


The DNS time (%dt code) is still in milliseconds.

Also I did not touch the mgr statistics. The statistics values are still
computed using milliseconds.



Regards,
   Christos



This is a Measurement Factory project
Logging fast things

In many cases HITs are logged with zero response times. The logging entries are
correct; those transactions took less than a millisecond. However, to better
monitor Squid performance and to optimize things further, a user may want to
see more precise response time measurements logged.

Squid already computes response times with microsecond resolution 
(timeval::tv_usec), which would be enough for any modern measurement, but
Squid loses that precision due to tvSubMs conversion.

This patch adds configurable precision for time-related fields such as %tr,
using a ".n" syntax similar to the syntax used by the well-known printf(3) API
to mean maximum field width.

Examples:
  %tr   -- 0 -- millisecond precision (no change compared to today)
  %.3tr -- 0.123 -- microsecond precision (after this project)
  %.6tr -- 0.123456 -- nanosecond precision (year 2050 trading platform??)

This is a Measurement Factory project
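
For illustration, a sketch of what %tr versus %.3tr amounts to when
built from a timeval (the function name and buffer handling are
hypothetical, not the patch's formatting code):

    #include <cstddef>
    #include <cstdio>
    #include <sys/time.h>

    // "0" for %tr; "0.123" for %.3tr: whole milliseconds plus three
    // fractional digits taken from the microsecond remainder.
    void formatResponseTime(char *buf, size_t len, const struct timeval &tr, bool withFraction)
    {
        const long msec = (long)tr.tv_sec * 1000L + tr.tv_usec / 1000;
        if (withFraction)
            snprintf(buf, len, "%ld.%03ld", msec, (long)(tr.tv_usec % 1000));
        else
            snprintf(buf, len, "%ld", msec);
    }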


=== modified file 'src/AccessLogEntry.h'
--- src/AccessLogEntry.h	2014-11-07 12:11:21 +
+++ src/AccessLogEntry.h	2014-11-19 15:51:09 +
@@ -124,58 +124,58 @@
 SslDetails();
 
 const char *user; ///< emailAddress from the SSL client certificate
 int bumpMode; /// whether and how the request was SslBumped
 } ssl;
 #endif
 
 /** \brief This subclass holds log info for Squid internal stats
  * \todo Inner class declarations should be moved outside
  * \todo some details relevant to particular protocols need shuffling to other sub-classes
  * \todo this object field need renaming to 'squid' or something.
  */
 class CacheDetails
 {
 
 public:
 CacheDetails() : caddr(),
 highOffset(0),
 objectSize(0),
 code (LOG_TAG_NONE),
-msec(0),
 rfc931 (NULL),
 extuser(NULL),
 #if USE_OPENSSL
 ssluser(NULL),
 #endif
 port(NULL)
 {
 caddr.setNoAddr();
 memset(&start_time, 0, sizeof(start_time));
+memset(&trTime, 0, sizeof(start_time));
 }
 
 Ip::Address caddr;
 int64_t highOffset;
 int64_t objectSize;
 LogTags code;
 struct timeval start_time; ///< The time the master transaction started
-int msec;
+struct timeval trTime;
 const char *rfc931;
 const char *extuser;
 #if USE_OPENSSL
 
 const char *ssluser;
 Ssl::X509_Pointer sslClientCert; ///< cert received from the client
 #endif
 AnyP::PortCfgPointer port;
 
 } cache;
 
 /** \brief This subclass holds log info for various headers in raw format
  * \todo shuffle this to the relevant protocol section.
  */
 class Headers
 {
 
 public:
 Headers() : request(NULL),
 adapted_request(NULL),
@@ -214,68 +214,72 @@
 const char *method_str;
 } _private;
 HierarchyLogEntry hier;
 HttpReply *reply;
 HttpRequest *request; // virgin HTTP request
 HttpRequest *adapted_request; // HTTP request after adaptation and redirection
 
 /// key:value pairs set by squid.conf note directive and
 /// key=value pairs returned from URL rewrite/redirect helper
 NotePairs::Pointer notes;
 
 #if ICAP_CLIENT
 /** \brief This subclass holds log info for 

Re: [squid-dev] [PATCH] url_rewrite_timeout directive

2014-11-18 Thread Tsantilas Christos

On 11/18/2014 01:27 AM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 18/11/2014 6:10 a.m., Tsantilas Christos wrote:

On 11/16/2014 01:05 PM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE- Hash: SHA1

On 16/11/2014 7:38 a.m., Tsantilas Christos wrote:


 



For the record I am still extremely skeptical about the use-case
behind this feature. A timeout is a clear sign that the helper or the
system behind it is broken (capacity overload / network
congestion). Continuing to use such systems in a way which
further increases network load is very often a bad choice. Hiding
the issue is an even worse choice.



Maybe the administrator does not care about unanswered queries.


I posit that admins who truly do not care about unanswered questions
will not care enough to bother configuring this feature.


So he will not do it.
The default behaviour is to not use a timeout.
The default behaviour is not changed.



The administrator should always care about these unanswered questions.
Each one means worse proxy performance. Offering to hide them silently
and automatically, without anybody having to do any work on fixing
the underlying cause, implies they are not actually problems. Which is
a false implication.



This option can be used to increase proxy performance. This is the 
purpose of this option.






Imagine a huge squid proxy which may have to process thousands of
URLs per minute, where the administrator has decided that 10% of helper
requests may fail. In this case we are giving him a way to configure
squid's behaviour: he may ignore the problem, or use a
predefined answer, etc.


Now that same Squid proxy suddenly starts randomly wrongly rejecting
just one request per minute out of all those thousands.
  Which one, why, and WTF can anybody do about it?


Only requests which timed out, and only if configured to do it...



Previous Squid would log a client request with _TIMEOUT, leave a
helper in Busy or Pending state with full trace of the pending lookup
in mgr reports, possibly even cache.log warnings about helpers queue
length.


The patch does not change squid behaviour if the timeout is not
configured (the default).




Now all that is reduced to an overly simple aggregated "Requests timed
out:" hiding in a rarely viewed manager report and a hidden level-3
debug message that lists an anonymous requestId from N identical
requestIds spread over N helpers.



IF configured, you will see in the mgr report a "Requests timed out:"
counter for each running server.
If you see that a server has many timed-out requests, more than the other
servers, then you can kill it if you consider it a problem.





The reason the helper does not answer fast enough may be a
database or an external lookup failure (for example a categorized
urls list served as a DNS service). In these cases the system admin or
service provider may prefer no answer from the helper, or a
preconfigured answer, instead of waiting too long for the answer.


What to do if the internal dependencies are going badly is something
that should be handled *inside the helper*.
  After all Squid has absolutely no way to know if its generic lookup
helper has the DNS lookup on the DB server name or the DB query itself
broken. Each of which might have a different best way to respond.

The intention of the BrokenHelper code was to have the helpers
explicitly inform Squid that they were in trouble and needed help
shifting the load elsewhere.

Squid silently forgetting that requests have been sent and then
sending *more* is a truly terrible way to fix all the cases of helper
overload.


Again, the timeout is optional. It is an extra option.
If the customer/squid-user wants to use the BH code, they still can.




The helper can be implemented to not answer at all after a timeout
period. A policy of "if you do not answer in a day, please do not
answer at all, I am not interested any more" is common in the human
world, and in business.



Yes, it is also being fingered as a major reason for businesses dying
off during the recent recession.


:-)
Well, we can do a huge discussion about the recent recession, but I am 
sure I have more examples than you on failing businesses under an 
recessionary environment!



Late respondents lost out on work and
thus cashflow.


Yes, but you cannot avoid such cases. A stop-loss, after a reasonably
configured timeout, is not a bad tactic in such cases.





in src/cache_cf.cc

* please move the *_url_rewrite_timeout() functions to
redirect.cc - that will help reduce dependency issues when we get
around to making the redirector config a standalone FooConfig
object.


Not so easy; because of dependencies on time-related parsing
functions I left it for now. If required we must move the time parsing
functions to a library file, for example Config.cc.



Pity. Oh well.



* it would be simpler and easier on users to omit the
on_timeout=use_configured_response response= and just have the
existence of a response= kv

Re: [squid-dev] splay.h replacement

2014-11-18 Thread Tsantilas Christos

The compiler is right


On 11/18/2014 05:23 AM, Amos Jeffries wrote:


Y'all may have noticed the clang 3.5 errors.

lib/MemPoolChunked.cc:370:10: error: 'this' pointer cannot be null in
well-defined C++ code; pointer may be assumed to always convert to
true [-Werror,-Wundefined-bool-conversion]

include/splay.h:228:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always evaluate to
false [-Werror,-Wtautological-undefined-compare]

include/splay.h:198:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always evaluate to
false [-Werror,-Wtautological-undefined-compare]

include/splay.h:167:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always evaluate to
false [-Werror,-Wtautological-undefined-compare]

include/splay.h:228:9: error: 'this' pointer cannot be null in
well-defined C++ code; comparison may be assumed to always evaluate to
false [-Werror,-Wtautological-undefined-compare]


Anyone in a position to update the splay tree code so it stops
depending on NULL pointer dereferences having meaning?

It is mandatory change for continued FreeBSD support.

Amos


Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-10 Thread Tsantilas Christos

patch applied to trunk as revno:13697

Regards,
   Christos

On 11/10/2014 12:14 PM, Amos Jeffries wrote:


On 10/11/2014 11:06 p.m., Tsantilas Christos wrote:

On 11/10/2014 09:36 AM, Amos Jeffries wrote:


On 10/11/2014 10:02 a.m., Tsantilas Christos wrote:

I am re-posting the patch. There are no huge changes.


Looking over this in more detail...

What's the point of having buildHttpRequest() a separate method
from processRequest() ?


It makes the code more readable. It is a private method for use
internally by the Http::Server class. I suggest leaving it
as is. We can fix the method name or its documentation.



The documentation for buildHttpRequest() is wrong, no parsing
takes place there. It is purely post-parse processing of some
parsed HTTP request. ie processParsedRequest() action.


HttpRequest::CreateFromUrlAndMethod still does parsing. But yes,
we are using pre-parsed information to build the HTTP request object.


Is the following documentation ok for buildHttpRequest?

/// Handles parsing results and build an error reply if parsing


I would say "Handles parsing results. May generate then deliver an
error reply to the client if parsing ..."

To make it clear that actual socket I/O might take place.



/// is failed, or parses the url and build the HttpRequest object
/// using parsing results. /// Return false if parsing is failed,
true otherwise.



Amos



Re: [squid-dev] [PATCH] Drop some CbDataList

2014-11-10 Thread Tsantilas Christos

On 11/10/2014 03:53 PM, Amos Jeffries wrote:


Most of the uses of CbDataList appear to be abusing it for regular
list storage without any real need for CBDATA to be involved at all.


+1



This replaces several of the simpler uses of CbDataList in favour of
std::list.

Amos



Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-09 Thread Tsantilas Christos

I am re-posting the patch.
There are no huge changes.

Regards,
  Christos

On 11/07/2014 10:54 AM, Amos Jeffries wrote:


On 2/11/2014 6:59 a.m., Tsantilas Christos wrote:

Hi all,

This patch is a fix for bug 4067:
http://bugs.squid-cache.org/show_bug.cgi?id=4067

Currently squid fails to handle 100 Continue
requests/responses correctly when adaptation/ICAP is used. When a
data-upload request enters squid (eg a PUT request with an Expect:
100-continue header), squid sends the ICAP headers and HTTP headers to
the ICAP server and gets stuck waiting forever for the request body data.

This patch implements the force_request_body_continuation access
list directive, which controls how Squid handles data upload
requests from HTTP clients that wait for a go-ahead before sending the
request body to Squid.

An allow match tells Squid to respond with the HTTP 100 or FTP 150
(Please Continue) control message on its own, before forwarding
the request to an adaptation service or peer. Such a response
usually forces the request sender to proceed with sending the body.
A deny match tells Squid to delay that control response until the
origin server confirms that the request body is needed. Delaying is
the default behavior.

This is a Measurement Factory project



Sorry it's taken so long. The patch looks fine, but will need a rebase
and re-audit on top of the parser-ng changes that just landed in trunk.

Amos




Adapting 100-continue

Currently squid fails to handle 100 Continue requests/responses correctly
when ICAP is used. The problems are discussed in squid bugzilla:
 http://bugs.squid-cache.org/show_bug.cgi?id=4067

A short discussion of the problem:
  When a data upload request enters squid (eg a PUT request with an
Expect: 100-continue header), squid sends the ICAP headers and HTTP headers
to the ICAP server and gets stuck waiting forever for the request body data.

This patch implements the force_request_body_continuation access list
directive, which controls how Squid handles data upload requests from HTTP
clients that wait for a go-ahead before sending the request body to Squid.
An allow match tells Squid to respond with the HTTP 100 or FTP 150
(Please Continue) control message on its own, before forwarding the
request to an adaptation service or peer. Such a response usually forces
the request sender to proceed with sending the body. A deny match tells
Squid to delay that control response until the origin server confirms
that the request body is needed. Delaying is the default behavior.

This is a Measurement Factory project
=== modified file 'src/HttpRequest.cc'
--- src/HttpRequest.cc	2014-11-05 10:18:15 +
+++ src/HttpRequest.cc	2014-11-09 10:54:49 +
@@ -95,40 +95,41 @@
 vary_headers = NULL;
 myportname = null_string;
 tag = null_string;
 #if USE_AUTH
 extacl_user = null_string;
 extacl_passwd = null_string;
 #endif
 extacl_log = null_string;
 extacl_message = null_string;
 pstate = psReadyToParseStartLine;
 #if FOLLOW_X_FORWARDED_FOR
 indirect_client_addr.setEmpty();
 #endif /* FOLLOW_X_FORWARDED_FOR */
 #if USE_ADAPTATION
 adaptHistory_ = NULL;
 #endif
 #if ICAP_CLIENT
 icapHistory_ = NULL;
 #endif
 rangeOffsetLimit = -2; //a value of -2 means not checked yet
+forcedBodyContinuation = false;
 }
 
 void
 HttpRequest::clean()
 {
 // we used to assert that the pipe is NULL, but now the request only
 // points to a pipe that is owned and initiated by another object.
 body_pipe = NULL;
 #if USE_AUTH
 auth_user_request = NULL;
 #endif
 safe_free(canonical);
 
 safe_free(vary_headers);
 
 url.clear();
 urlpath.clean();
 
 header.clean();
 
@@ -235,40 +236,42 @@
 adaptHistory_ = aReq->adaptHistory();
 #endif
 #if ICAP_CLIENT
 icapHistory_ = aReq->icapHistory();
 #endif
 
 // This may be too conservative for the "204 No Content" case
 // may eventually need cloneNullAdaptationImmune() for that.
 flags = aReq->flags.cloneAdaptationImmune();
 
 errType = aReq->errType;
 errDetail = aReq->errDetail;
 #if USE_AUTH
 auth_user_request = aReq->auth_user_request;
 extacl_user = aReq->extacl_user;
 extacl_passwd = aReq->extacl_passwd;
 #endif
 
 myportname = aReq->myportname;
 
+forcedBodyContinuation = aReq->forcedBodyContinuation;
+
 // main property is which connection the request was received on (if any

Re: [squid-dev] FYI: the C++11 roadmap

2014-11-05 Thread Tsantilas Christos

On 11/05/2014 06:01 AM, Amos Jeffries wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 6/05/2014 2:21 a.m., Amos Jeffries wrote:

I have just announced the change in 3.4.5 regarding C++11 support
and accompanied it with a notice that GCC verion 4.8 is likely to
become the minimum version later this calendar year.

As it stands (discussed earlier):

* Squid-3.4 needs to build with any GCC 4.0+ version with C++03.

* Squid-3.6 will need to build with C++11.

* The Squid-3.5 situation is still in limbo and will depend on how
long we go before it branches.


Squid-3.5 retains GCC 4.0+ support shared with older versions.



We have a growing list of items needing C++11 features for simpler
implementation. At this point I am going to throw a peg out and say
Sept or Oct for starting to use C++11 specific code features.


I do not find this a very good idea...
There are many cases where I see a new class or class method
added in C++11 which may make my life easier, but on the other hand I
think of the nightmare of trying to build a newer squid
version on an old system.


Do we have an idea about other open-source projects built on C++?
What is their policy?



This peg has now moved to Nov. The code cleanup and polishing
patches now going into trunk work fine with C++11 builds but possibly
not with compilers older than Clang 3.3 or GCC 4.8.

Amos





Re: [squid-dev] FYI: the C++11 roadmap

2014-11-05 Thread Tsantilas Christos

On 11/05/2014 02:31 PM, Kinkie wrote:

MAYBE this could be mitigated by providing RPMs for RHEL6/CentOS 6
that are built on a custom server with a recent gcc but older
libraries?
What do you guys think?


Instead of providing squid packages, we may have more success if we
provide g++ packages for version 4.8 or later on these distributions.
Most of the people building squid from sources want to enable or
disable special squid features too...





Re: [squid-dev] [PATCH] helper queue polishing

2014-11-05 Thread Tsantilas Christos

On 11/04/2014 03:52 AM, Amos Jeffries wrote:


On 4/11/2014 8:05 a.m., Tsantilas Christos wrote:

This patch tries to polish the helper queues to: 1. Make the queue limit
configurable, with the default set to 2*n_max. 2. Move common queue
limit checks inside helper.cc. 3. Make all code use those new
helper features.

A first discussion of such changes took place on squid-dev in a
mail-thread under the subject "[PATCH] Prevent external_acl.cc
inBackground assertion on queue overloads":
http://www.squid-cache.org/mail-archive/squid-dev/201301/0083.html

For more information please read the documentation in the patch preamble.

This is a Measurement Factory project



in src/redirect.cc:
* the change to redirector bypass is undocumented. The bypass was
previously a bypass of queueing as well as helper. If we now wait
until the queue is full then we have not bypassed the queueing.
  - I think for backward compatibility redirector_bypass should
implicitly set the default queue-size to 0


OK for this.
It requires adding a Helper::ChildConfig::defaultQueueSize member, which
I do not like, but it is not bad





in cf.data.pre:
* "overloading persist squid may abort its operation" for
url_rewrite_children and store_id_children
   - s/persist/persists/

* "then storeID helper bypassed"
   s/bypassed/is bypassed/


ok for these changes



* need to document how the redirector/storeid helper bypass has changed.
  - when bypass AND queue-size>0 are configured a queue exists; in older
configs the queue was always bypassed.


It should be ok now...
I documented these changes in the patch preamble, and I will add them when I
commit the patch to trunk. Do we need such documentation in the
release-3.6.sgml file? I put only basic documentation here.





in src/external_acl.cc:
* there is no need for the temporary local int queue_size
   - use atoi() to set the children.queue_size member and adjust it later
with an if condition like the others.


ok



* in the if-condition calling trySubmit() the debugs outputs a
hard-coded "queue is too long" error. This is an assumption which will
not always be true - there are many reasons a submit may fail to
submit (helper dying early, shutting down, etc).
  - the right place to output that error is inside trySubmit() where I
see it already does so.
  - if a debugs is needed on trySubmit() failure it should only say
that the submit failed; it cannot say exactly why.


I made the debugs simpler.




in src/helper/ChildConfig.cc:
* the second constructor is passed the n_max value to use. It should be
setting the default queue-size to 2*n_max at that point instead of 0 for
helpers which are not configured explicitly with children parameters.


fixed




in src/helper.cc: (optional but desirable to reduce code)
* why are you retaining the legacy helperSubmit() now that helpers all
have a members?
  - the helpers still using old queue limits should just have
queue_size hard-coded to those limits and use the new API methods.

* helperStatefulSubmit() is/was only used by Negotiate and NTLM auth
helpers which you have documented as supporting the new queue-size. So
there should be no need for helperStatefulSubmit() to exist now.
  - you can probably remove both these wrappers by making submit()
virtual and having both submit() and trySubmit() receive a 'lastserver'
parameter (defaulting to NULL).


I prefer not to make these changes now...
The helper and statefulhelper classes need to become proper C++ classes,
with their code merged if possible. But this is another project





Also:
  * missing doc/release-notes/release-3.6.sgml entries for the updated
config directives.


I put only basic documentation. I did not include the helper behaviour
changes. Someone can find them when reading squid.conf.

Is it OK?



* please take care updating to current trunk, CBDATA details have been
altered around some of these chunks.


ok.



Amos




Make helper queue size configurable, with consistent defaults and better overflow handling.

This patch adds a queue-size=N option to helpers configuration. This
option allows users to configure the maximum number of queued requests
to busy helpers. We also adjusted the default queue size limits to be
more consistent across all helpers and made Squid more robust on some
queue overflows:

- external_acl helpers
Make the maximum queue
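
As an illustration of the new option, a hedged squid.conf sketch (the
helper counts and queue-size value are hypothetical; with 20 helpers the
patch default would be 2*n_max = 40):

  # Hypothetical example: 20 url_rewrite helpers with an explicit
  # limit of 40 queued requests.
  url_rewrite_children 20 startup=5 idle=1 queue-size=40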

Re: [squid-dev] [PATCH] sslproxy_cert_sign_hash configuration option

2014-10-07 Thread Tsantilas Christos

On 10/07/2014 10:26 AM, Amos Jeffries wrote:


Thank you for the details and your patience putting up with me.


I will try to provide more information when I post a patch.




+1 for commit with the following polish...

in src/ssl/gadgets.h please use #if !defined(...) instead of #ifndef



OK I will make this fix and I will commit to trunk.



Cheers
Amos


Re: [PATCH] pconn_lifetime

2014-09-02 Thread Tsantilas Christos

On 09/02/2014 03:51 AM, Amos Jeffries wrote:


On 2/09/2014 4:49 a.m., Tsantilas Christos wrote:

Hi all,

This patch adds a new configuration option, 'pconn_lifetime', to
allow users to set the desired maximum lifetime of a persistent
connection.

When set, Squid will close a now-idle persistent connection that
exceeded configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). No effect on
ongoing/active transactions. Connection lifetime is the time period
from the connection acceptance or opening time until now.

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.

This is a Measurement Factory project


Two problems with this.

* the directive name does not indicate whether it applies to client,
server, or ICAP connections.


It applies to any connection: server, client, or ICAP.



* the rationale for its need is a bit screwy.
  Any directives which affect Squid's current running state need to
ensure in their post-reconfigure logic that state is consistent.
Preferably create a runner, or use configDoConfigure if that is too much.


Do you mean to run over the persistent connections list and change the 
timeout?




To support that we need some API updates to globally access the
currently active connection lists for servers/client/icap. But there
are other things like stats and reporting also needing that API, so we
should look at adding it instead of yet another temporary workaround.


We do not need to make any change to support ICAP and server-side
connections. This is because we are storing these connections in the
IdleConnList or PconnPool structures.


Currently we cannot change the timeout for client-side keep-alive
connections. We do not have a list of these connections available.





Amos





[PATCH] pconn_lifetime

2014-09-01 Thread Tsantilas Christos

Hi all,

This patch adds a new configuration option, 'pconn_lifetime', to allow
users to set the desired maximum lifetime of a persistent connection.


When set, Squid will close a now-idle persistent connection that 
exceeded configured lifetime instead of moving the connection into the 
idle connection pool (or equivalent). No effect on ongoing/active 
transactions. Connection lifetime is the time period from the connection 
acceptance or opening time until now.


This limit is useful in environments with long-lived connections where 
Squid configuration or environmental factors change during a single 
connection lifetime. If unrestricted, some connections may last for 
hours and even days, ignoring those changes that should have affected 
their behavior or their existence.


This is a Measurement Factory project



pconn_lifetime

This patch adds a new configuration option, 'pconn_lifetime', to allow users
to set the desired maximum lifetime of a persistent connection.

When set, Squid will close a now-idle persistent connection that
exceeded configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). No effect on ongoing/active
transactions. Connection lifetime is the time period from the
connection acceptance or opening time until now.

This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.
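
A minimal squid.conf sketch (the five-minute value is only an example):

  # Hypothetical example: close persistent connections that have
  # lived for more than 5 minutes; 0 (the default) disables the limit.
  pconn_lifetime 5 minutes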

This is a Measurement Factory project
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2014-08-26 09:01:27 +
+++ src/SquidConfig.h	2014-09-01 10:30:40 +
@@ -92,40 +92,41 @@
 #if USE_HTTP_VIOLATIONS
 time_t negativeTtl;
 #endif
 time_t maxStale;
 time_t negativeDnsTtl;
 time_t positiveDnsTtl;
 time_t shutdownLifetime;
 time_t backgroundPingRate;
 
 struct {
 time_t read;
 time_t write;
 time_t lifetime;
 time_t connect;
 time_t forward;
 time_t peer_connect;
 time_t request;
 time_t clientIdlePconn;
 time_t serverIdlePconn;
 time_t ftpClientIdle;
+time_t pconnLifetime; ///< pconn_lifetime in squid.conf
 time_t siteSelect;
 time_t deadPeer;
 int icp_query;  /* msec */
 int icp_query_max;  /* msec */
 int icp_query_min;  /* msec */
 int mcast_icp_query;/* msec */
 time_msec_t idns_retransmit;
 time_msec_t idns_query;
 } Timeout;
 size_t maxRequestHeaderSize;
 int64_t maxRequestBodySize;
 int64_t maxChunkedRequestBodySize;
 size_t maxRequestBufferSize;
 size_t maxReplyHeaderSize;
 AclSizeLimit *ReplyBodySize;
 
 struct {
 unsigned short icp;
 #if USE_HTCP
 

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2014-08-26 09:01:27 +
+++ src/cf.data.pre	2014-09-01 15:13:28 +
@@ -5981,40 +5981,65 @@
 TYPE: time_t
 LOC: Config.Timeout.lifetime
 DEFAULT: 1 day
 DOC_START
 	The maximum amount of time a client (browser) is allowed to
 	remain connected to the cache process.  This protects the Cache
 	from having a lot of sockets (and hence file descriptors) tied up
 	in a CLOSE_WAIT state from remote clients that go away without
 	properly shutting down (either because of a network failure or
 	because of a poor client implementation).  The default is one
 	day, 1440 minutes.
 
 	NOTE:  The default value is intended to be much larger than any
 	client would ever need to be connected to your cache.  You
 	should probably change client_lifetime only as a last resort.
 	If you seem to have many client connections tying up
 	filedescriptors, we recommend first tuning the read_timeout,
 	request_timeout, persistent_request_timeout and quick_abort values.
 DOC_END
 
+NAME: pconn_lifetime
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.Timeout.pconnLifetime
+DEFAULT: 0 seconds
+DOC_START
+	Desired maximum lifetime of a persistent connection.
+	When set, Squid will close a now-idle persistent connection that
+	exceeded configured lifetime instead of moving the connection into
+	the idle connection pool (or equivalent). No effect on ongoing/active
+	transactions. Connection lifetime is the time period from the
+	connection acceptance or opening time until now.
+	
+	This limit is useful in environments with long-lived connections
+	where Squid configuration or environmental factors change during a
+	single connection lifetime. If unrestricted, some connections may
+	last for hours and even days, ignoring those changes that should
+	have affected their behavior or their existence.
+	
+	Currently, a new lifetime value supplied via Squid reconfiguration
+	has no effect on already idle connections unless they become busy.
+	
+	When set to '0' this limit is not used.
+DOC_END
+
 NAME: 

[PATCH] %tt (total server time) is not computed in some cases

2014-08-27 Thread Tsantilas Christos

Hi all,

The total server time is not computed in some cases, for example for
CONNECT requests. Another example case is when server-first bumping
mode is used and squid connects to an SSL peer, but the connection
terminates before the SSL handshake completes.
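
For context, a minimal squid.conf sketch that logs the code in question
(the format name and the companion %rm and %ru codes are illustrative):

  # Hypothetical example: log total server time next to the request
  # method and URL.
  logformat servertiming %tt %rm %ru
  access_log /var/log/squid/servertiming.log servertiming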


The attached patch is trying to fix these cases.

This is a Measurement Factory project
%tt (total server time) is not computed in some cases

The total server time is not computed for CONNECT requests.
Another example case is when server-first bumping mode is used and squid
connects to an SSL peer, but the connection terminates before the SSL
handshake completes.

This is a Measurement Factory project

=== modified file 'src/FwdState.cc'
--- src/FwdState.cc	2014-08-26 09:01:27 +
+++ src/FwdState.cc	2014-08-27 10:31:20 +
@@ -225,40 +225,42 @@
 p = new Comm::Connection();
 p->peerType = ORIGINAL_DST;
 p->remote = clientConn->local;
 getOutgoingAddress(request, p);
 
 debugs(17, 3, HERE << "using client original destination: " << *p);
 serverDestinations.push_back(p);
 }
 #endif
 
 void
 FwdState::completed()
 {
 if (flags.forward_completed) {
 debugs(17, DBG_IMPORTANT, HERE << "FwdState::completed called on a completed request! Bad!");
 return;
 }
 
 flags.forward_completed = true;
 
+request->hier.stopPeerClock(false);
+
 if (EBIT_TEST(entry->flags, ENTRY_ABORTED)) {
 debugs(17, 3, HERE  entry aborted);
 return ;
 }
 
 #if URL_CHECKSUM_DEBUG
 
 entry->mem_obj->checkUrlChecksum();
 #endif
 
 if (entry->store_status == STORE_PENDING) {
 if (entry->isEmpty()) {
 if (!err) // we quit (e.g., fd closed) before an error or content
 fail(new ErrorState(ERR_READ_ERROR, Http::scBadGateway, request));
 assert(err);
 errorAppendEntry(entry, err);
 err = NULL;
 #if USE_OPENSSL
 if (request->flags.sslPeek && request->clientConnectionManager.valid()) {
 CallJobHere1(17, 4, request->clientConnectionManager, ConnStateData,
@@ -626,40 +628,42 @@
 retryOrBail();
 }
 
 void
 FwdState::retryOrBail()
 {
 if (checkRetry()) {
 debugs(17, 3, HERE << "re-forwarding (" << n_tries << " tries, " << (squid_curtime - start_t) << " secs)");
 // we should retry the same destination if it failed due to pconn race
 if (pconnRace == raceHappened)
 debugs(17, 4, HERE  retrying the same destination);
 else
 serverDestinations.erase(serverDestinations.begin()); // last one failed. try another.
 startConnectionOrFail();
 return;
 }
 
 // TODO: should we call completed() here and move doneWithRetries there?
 doneWithRetries();
 
+request->hier.stopPeerClock(false);
+
 if (self != NULL && !err && shutting_down) {
 ErrorState *anErr = new ErrorState(ERR_SHUTTING_DOWN, Http::scServiceUnavailable, request);
 errorAppendEntry(entry, anErr);
 }
 
 self = NULL;	// refcounted
 }
 
 // If the Server quits before nibbling at the request body, the body sender
 // will not know (so that we can retry). Call this if we will not retry. We
 // will notify the sender so that it does not get stuck waiting for space.
 void
 FwdState::doneWithRetries()
 {
 if (request && request->body_pipe != NULL)
 request->body_pipe->expectNoConsumption();
 }
 
 // called by the server that failed after calling unregister()
 void
@@ -781,42 +785,41 @@
 ftimeout = 5;
 
 if (ftimeout < ctimeout)
 return (time_t)ftimeout;
 else
 return (time_t)ctimeout;
 }
 
 /**
  * Called after forwarding path selection (via peer select) has taken place
  * and whenever forwarding needs to attempt a new connection (routing failover).
  * We have a vector of possible localIP-remoteIP paths now ready to start being connected.
  */
 void
 FwdState::connectStart()
 {
 assert(serverDestinations.size() > 0);
 
 debugs(17, 3, "fwdConnectStart: " << entry->url());
 
-if (!request->hier.first_conn_start.tv_sec) // first attempt
-request->hier.first_conn_start = current_time;
+request->hier.startPeerClock();
 
 if (serverDestinations[0]->getPeer() && request->flags.sslBumped) {
 debugs(50, 4, "fwdConnectStart: Ssl bumped connections through parent proxy are not allowed");
 ErrorState *anErr = new ErrorState(ERR_CANNOT_FORWARD, Http::scServiceUnavailable, request);
 fail(anErr);
 self = NULL; // refcounted
 return;
 }
 
 request->flags.pinned = false; // XXX: what if the ConnStateData set this to flag existing credentials?
 // XXX: answer: the peer selection *should* catch it and give us only the pinned peer. so we reverse the =0 step below.
 // XXX: also, logs will now lie if pinning is broken and leads to an error message.
 if (serverDestinations[0]->peerType == PINNED) {
 ConnStateData *pinned_connection = request->pinnedConnection();
 debugs(17,7, pinned peer 

Re: /bzr/squid3/trunk/ r13517: Fix %USER_CA_CERT_* and %CA_CERT_ external_acl formating codes

2014-07-31 Thread Tsantilas Christos

On 07/31/2014 03:35 AM, Amos Jeffries wrote:

Hi Christos,

Can you confirm or deny for me that these %USER_CERT_* macros map to the
%ssl::cert_* logformat codes?


Not exactly.
 - The %ssl::cert_subject is equivalent to the %USER_CERT_DN external 
acl macro

 - The %ssl::cert_issuer is equivalent to the %USER_CA_CERT_DN



Their existence is one of the outstanding issues with external_acl_type
upgrade to logformat.


The certificate and certificate issuer subjects are in the form:
   C=GR, ST=ATTIKI, L=Athens, O=ChTsanti, OU=Admin, CN=fortune

The %USER_CERT_* and %USER_CA_CERT_* external acl macros are designed to
return fields of the subject. For example someone can use:

  %USER_CERT_CN or %USER_CA_CERT_O

The DN suffix means the whole subject.
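
For example, a hedged squid.conf sketch of how such a macro is wired up
(the helper path and ACL names are hypothetical):

  # Hypothetical example: pass the certificate CN field to an external
  # helper and gate access on its answer.
  external_acl_type cert_cn_check %USER_CERT_CN /usr/local/bin/check_cn.sh
  acl validCert external cert_cn_check
  http_access allow validCert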

The %ssl::cert_subject and %ssl::cert_issuer log formatting codes
return the cert and issuer subjects.
We need to support arguments in %ssl::cert_subject and
%ssl::cert_issuer to have functionality similar to the external acl. For
example:

  %{CN}ssl::cert_subject
  %{CN}ssl::cert_issuer
  %{DN}ssl::cert_subject




Cheers
Amos

On 31/07/2014 3:31 a.m., Christos Tsantilas wrote:


revno: 13517
committer: Christos Tsantilas chtsa...@users.sourceforge.net
branch nick: trunk
timestamp: Wed 2014-07-30 18:31:10 +0300
message:
   Fix %USER_CA_CERT_* and %CA_CERT_ external_acl formating codes

 * The attribute part of the %USER_CA_CERT_xx and %CA_CERT_xx formatting codes
   is not parsed correctly, making these formatting codes useless.
 * The %USER_CA_CERT_xx was documented wrongly
modified:
   src/cf.data.pre
   src/external_acl.cc








Re: TOS values

2014-07-17 Thread Tsantilas Christos

On 07/16/2014 11:01 PM, Alex Rousskov wrote:

On 07/16/2014 11:39 AM, Tsantilas Christos wrote:

Hi all,
  Squid currently does not check the TOS values used in the squid
configuration file. Squid will accept 8-bit numbers as TOS values, however:
  1) TOS values with the 1st and 2nd bits set cannot be used. These bits
are used by ECN. On Linux, if someone tries to set the 0x23 value as a TOS
value (which sets the 1st and 2nd bits), the 0x20 value will be used
instead, without any warning for the user.

  2) Some of the TOS values are already reserved, for example those which
are reserved by RFC2597.

The above may confuse squid users.

Maybe it is not a bad idea to:
  - Warn users when they try to use a TOS value which uses the ECN bits
  - Warn users when they try to use TOS values which are not reserved. The
user will then know that such a value is free for use.

Opinions?



This is not my area of expertise, but

* the first proposed warning sounds good to me, and

* it is not clear to me whether Squid should avoid using ToS values from
RFC 2597. It feels like Squid could, in some cases, _set_ those ToS
values to use RFC 2597 features provided by its network.


I am not proposing not to use TOS values from RFC2597.
I am proposing to warn users when TOS values other than those in
RFC2597 are used.
The user will be able to ignore the warning, but at the same time he will
know that this TOS value is free for use.







Thank you,

Alex.






Re: TOS values

2014-07-17 Thread Tsantilas Christos

On 07/17/2014 02:51 AM, Amos Jeffries wrote:

On 17/07/2014 8:01 a.m., Alex Rousskov wrote:

On 07/16/2014 11:39 AM, Tsantilas Christos wrote:

Hi all,
  Squid currently does not check the TOS values used in the squid
configuration file. Squid will accept 8-bit numbers as TOS values, however:
  1) TOS values with the 1st and 2nd bits set cannot be used. These bits
are used by ECN. On Linux, if someone tries to set the 0x23 value as a TOS
value (which sets the 1st and 2nd bits), the 0x20 value will be used
instead, without any warning for the user.

  2) Some of the TOS values are already reserved, for example those which
are reserved by RFC2597.

The above may confuse squid users.

Maybe it is not a bad idea to:
  - Warn users when they try to use a TOS value which uses the ECN bits
  - Warn users when they try to use TOS values which are not reserved. The
user will then know that such a value is free for use.

Opinions?



This is not my area of expertise, but

* the first proposed warning sounds good to me, and

* it is not clear to me whether Squid should avoid using ToS values from
RFC 2597. It feels like Squid could, in some cases, _set_ those ToS
values to use RFC 2597 features provided by its network.



For now Squid still follows RFC 2474 and has the documented comment
about ECN problems for somewhat loose RFC 3168 (ECN) support.

RFC 3260 section 4 "Definition of the DS Field" explicitly obsoletes the
names IPv4 TOS and IPv6 TCLASS. They are both now defined as a 6-bit
DS value followed by separate ECN bits.


IMO, we should update Squid to RFC3260 support - ie mask out the ECN
bits and prevent configuring them.  Like so:
  1) replace all config details named TOS with DS ones that only
take a hex bytecode, so that
  2) DS values are always masked with 0xFC and,


At least on Linux this is done by the OS. If you try to use 0x23 as the TOS
value, 0x20 will be used instead.

The problem is that this is done silently, without any warning to the user.
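
A minimal C++ sketch of the masking being described (the values come from
the thread; this is an illustration, not Squid code):

  #include <cstdio>

  int main()
  {
      const unsigned char configured = 0x23;      // TOS byte with ECN bits set
      const unsigned char ds = configured & 0xFC; // mask out the two ECN bits
      std::printf("0x%02X -> 0x%02X\n", configured, ds); // prints 0x23 -> 0x20
      return 0;
  }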


  3) when TOS named options are found display an upgrade warning and mask
out the ECN bits.


What do you mean by TOS named options? Are they the AF1x, AF2x
referred to in RFC2597?




PS. need to add RFC3260 to the doc/rfcs/1-index.txt listing after this.

Amos





TOS values

2014-07-16 Thread Tsantilas Christos

Hi all,
 Squid currently does not check the TOS values used in the squid
configuration file. Squid will accept 8-bit numbers as TOS values, however:
 1) TOS values with the 1st and 2nd bits set cannot be used. These bits
are used by ECN. On Linux, if someone tries to set the 0x23 value as a TOS
value (which sets the 1st and 2nd bits), the 0x20 value will be used
instead, without any warning for the user.


 2) Some of the TOS values are already reserved, for example those which
are reserved by RFC2597.


The above may confuse squid users.

Maybe it is not a bad idea to:
 - Warn users when they try to use a TOS value which uses the ECN bits
 - Warn users when they try to use TOS values which are not reserved. The
user will then know that such a value is free for use.


Opinions?


Re: [RFC] post-cache REQMOD

2014-07-11 Thread Tsantilas Christos

The post-cache REQMOD and post-cache RESPMOD are a must for squid.

The example of PageSpeed is also very good. I must note that there are
already similar features integrated into other commercial products, for
example:


http://www.citrix.com/products/bytemobile-adaptive-traffic-management/tech-info.html 
  (Web and video optimization - Quality Aware Transcoding, 
Smartphone Application Acceleration and Web Optimization)


The PageSpeed example fits better with a post-cache RESPMOD feature. Is
post-cache REQMOD just a first step toward supporting all post-cache
vectoring points?




On 07/11/2014 01:15 AM, Alex Rousskov wrote:

Hello,

 I propose adding support for a third adaptation vectoring point:
post-cache REQMOD. Services at this new point receive cache miss
requests and may adapt them as usual. If a service satisfies the
request, the service response may get cached by Squid. As you know,
Squid currently supports pre-cache REQMOD and pre-cache RESPMOD.


We have received many requests for post-cache adaptation support
throughout the years, and I personally resisted the temptation of adding
another layer of complexity (albeit an optional one) because it is a lot
of work and because many use cases could be addressed without post-cache
adaptation support.

The last straw (and the motivation for this RFC) was PageSpeed[1]
integration. With PageSpeed, one can generate various variants of
optimized content. For example, mobile users may receive smaller
images. Apache and Nginx support PageSpeed modules. It is possible to
integrate Squid with PageSpeed (and similar services) today, but it is
not possible for Squid to _cache_ those generated variants unless one is
willing to pay for another round trip to the origin server to get
exactly the same unoptimized content.

The only way to support Squid caching of PageSpeed variants without
repeated round trips to the origin server is using two Squids. The
parent Squid would cache origin server responses while the child Squid
would adapt parent's responses and cache adapted content. Needless to
say, running two Squids (each with its own cache) instead of one adds
significant performance/administrative overheads and complexity.


As far as internals are concerned, I am currently thinking of launching
adaptation job for this vectoring point from FwdState::Start(). This
way, its impact on the rest of Squid would be minimal and some adapters
might even affect FwdState routing decisions. The initial code name for
the new class is MissReqFilter, but that may change.



The other candidate location for plugging in the new vectoring point is
the Server class. However, that class is already complex. It handles
communication with the next hop (with child classes doing
protocol-specific work and confusing things further) as well as
pre-cache RESPMOD vectoring point with caching initiation on top of
that. The Server code already has trouble distinguishing various content
streams it has to juggle. I am worried that adding another vectoring
point there would make that complexity significantly worse.

It is possible that we would be able to refactor/encapsulate some of the
code so that it can be reused in both the existing Server and the new
MissReqFilter classes. I will look out for such opportunities while
trying to keep the overall complexity in check.


Any objections to adding post-cache REQMOD or better implementation ideas?


Thank you,

Alex.
[1] https://developers.google.com/speed/pagespeed/





Re: [RFC] post-cache REQMOD

2014-07-11 Thread Tsantilas Christos

On 07/11/2014 05:47 PM, Alex Rousskov wrote:

On 07/11/2014 05:27 AM, Tsantilas Christos wrote:


The PageSpeed example fits better with a post-cache RESPMOD feature.


I do not think so. Post-cache RESPMOD does not allow Squid to cache the
adapted variants. Please let me know if I missed how post-cache RESPMOD
can do that.


I did not read correctly the problem you want to solve. I had in my mind
a proxy which caches original content and then adapts the cached content
according to client rules.

But you want to cache adapted content.


However, I am still not sure I understand how the post-cache reqmod
will help.

Assume the following scenario:
   - Client A requests the original web page
   - Client B requests the optimized web page (spaces and comments removed)

I am expecting a solution which will store two copies of the
web page in the cache, the optimized and the original copy.
A solution for this can be to use a mechanism similar to the Vary
headers, for example define an ICAP header which should be included in Vary.
I did not look at the storeID feature, but it can probably be used for the
same purpose.





The key here is that PageSpeed and similar services want to create (and
cache) many adapted responses out of a single virgin response. Neither
HTTP itself nor the Squid architecture support that well. Post-cache
REQMOD allows basic PageSpeed support (the first request for small
adapted content gets large virgin content, but the second request for
small content fetches it from the PageSpeed cache, storing it in Squid
cache). To optimize PageSpeed support further (so that the first request
can get small content), we will need to add another generally useful
feature, but I would rather not bring it into this discussion (there
will be a separate RFC if we get that far).


Probably I did not understand well how PageSpeed works or what a
PageSpeed cache means. But in the above scenario it looks like squid will
store only one version of the content (the small content).

Is this the only version required?
What am I missing?



The alternative is to create a completely new interface (not a true
vectoring point) that allows an adaptation service to push multiple
adapted responses into the Squid cache _and_ tell Squid which of those
responses to use for the current request. While I have considered
proposing that, I still think we would be better off supporting
standard and well understood building blocks (such as standard
adaptation vectoring points) rather than such highly-specialized
interfaces. Please let me know if you disagree.



Is
post-cache REQMOD just a first step toward supporting all post-cache
vectoring points?


You can certainly view it that way, but I do not propose or promise
adding post-cache RESPMOD :-).


Thank you,

Alex.




On 07/11/2014 01:15 AM, Alex Rousskov wrote:

Hello,

  I propose adding support for a third adaptation vectoring point:
post-cache REQMOD. Services at this new point receive cache miss
requests and may adapt them as usual. If a service satisfies the
request, the service response may get cached by Squid. As you know,
Squid currently supports pre-cache REQMOD and pre-cache RESPMOD.


We have received many requests for post-cache adaptation support
throughout the years, and I personally resisted the temptation of adding
another layer of complexity (albeit an optional one) because it is a lot
of work and because many use cases could be addressed without post-cache
adaptation support.

The last straw (and the motivation for this RFC) was PageSpeed[1]
integration. With PageSpeed, one can generate various variants of
optimized content. For example, mobile users may receive smaller
images. Apache and Nginx support PageSpeed modules. It is possible to
integrate Squid with PageSpeed (and similar services) today, but it is
not possible for Squid to _cache_ those generated variants unless one is
willing to pay for another round trip to the origin server to get
exactly the same unoptimized content.

The only way to support Squid caching of PageSpeed variants without
repeated round trips to the origin server is using two Squids. The
parent Squid would cache origin server responses while the child Squid
would adapt parent's responses and cache adapted content. Needless to
say, running two Squids (each with its own cache) instead of one adds
significant performance/administrative overheads and complexity.


As far as internals are concerned, I am currently thinking of launching
adaptation job for this vectoring point from FwdState::Start(). This
way, its impact on the rest of Squid would be minimal and some adapters
might even affect FwdState routing decisions. The initial code name for
the new class is MissReqFilter, but that may change.



The other candidate location for plugging in the new vectoring point is
the Server class. However, that class is already complex. It handles
communication with the next hop (with child classes doing
protocol-specific work and confusing things further) as well
