Re: [squid-dev] [PATCH] Refactor path strings into class URL

2015-07-03 Thread Amos Jeffries
On 4/07/2015 4:09 a.m., Kinkie wrote:
> Hi,
> my review (copy-paste from IRC)
> 
>  SBuf::size_type urlLen = ...
>  what about using auto?
>  auto seems to only work sometimes.
>  O_O ?

In this case (./src/adaptation/icap/Options.cc) it makes the deduced type
non-const.
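
For illustration, a tiny stand-in (std::string in place of SBuf; not the
Options.cc code) of the const point: plain auto deduces a non-const type,
so keeping the variable const means spelling the type out or writing
const auto.

  #include <string>

  static std::string::size_type measure(const std::string &s)
  {
      return s.length();
  }

  int main()
  {
      const std::string::size_type a = measure("x"); // explicitly const
      auto b = measure("x");                         // deduced, non-const
      const auto c = measure("x");                   // const must be spelled out
      b = 0;           // fine: b is mutable
      // a = 0; c = 0; // would not compile
      return static_cast<int>(a + b + c);
  }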


>  Looks strange in such a trivial case
>  src/client_side.cc
>  there's a SBuf tmp(..); strcpy().
>  Can't we do better?
>  we know the length of the SBuf after all
>  you could xstrncpy(..., ...->length())

Done.


>  that's one fewer copy
>  I wouldn't so systematically mention c_str as a performance
> regression
>  e.g. when doing mime lookup: yes it reallocs but it's a few bytes
>  I'd rather TODO: refactor mime*
>  alex complains if I don't. I've avoided it on the ones I can see are
> not regressions (i.e. replacing a copy with a copy)

>  xstrndup(tmp.c_str(),tmp.length());
>  why go c_str() if you know the length?
>  I'd go rawContent()
>  and later on xstrdup(), forgetting you know the length
>  this is all in FtpGateway.cc

Done.
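
As a rough stand-in for the point above (plain C strings, not the real
SBuf/xstrndup API, whose exact length semantics are not reproduced here):
when the length is already known, the copy never needs a NUL-terminated
source, so the raw buffer can be handed over as-is.

  #include <cstdlib>
  #include <cstring>

  // Hypothetical helper: duplicate len bytes and terminate the copy.
  static char *dupKnownLength(const char *buf, size_t len)
  {
      char *copy = static_cast<char *>(malloc(len + 1));
      if (copy) {
          memcpy(copy, buf, len);
          copy[len] = '\0'; // terminate the copy, not the source
      }
      return copy;
  }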

>  src/gopher.cc -> xstrncpy(request, path.c_str(), MAX_URL);
>  maybe
> xstrncpy(request,path.rawContent(),min(path.length(),MAX_URL)) could do
>  IIRC xstrncpy null-terminates, right?

It does. Using:
  SBuf path = tok.remaining().substr(0, MAX_URL);
  xstrncpy(request, path.rawContent(), path.length());
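
As a rough illustration of the clamp-then-copy shape with standard types
only (std::string in place of SBuf, memcpy in place of xstrncpy, and a
placeholder MAX_URL value):

  #include <algorithm>
  #include <cstddef>
  #include <cstring>
  #include <string>

  static const size_t MAX_URL = 8192; // placeholder value for this sketch

  static void copyClamped(char (&request)[MAX_URL], const std::string &path)
  {
      const size_t n = std::min(path.length(), MAX_URL - 1); // keep room for the NUL
      memcpy(request, path.data(), n);
      request[n] = '\0';
  }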


>  HttpStateData::buildRequestPrefix
>  maybe it's possible to have url the other way around
>  SBuf url;
>  if (_peer && !_peer->options.originserver)
>  rul = SBuf(urlCanonical(request);
>  forgot a )
>  and then in the else branch just use url = url.path()
>  and fix the mb->appendf().
>  I'm willing to bet it'll end up being a small performance
> improvement because you'd be keeping info about length around

Done.
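
The reshuffle suggested above, sketched with stand-in types (the real code
uses SBuf, HttpRequest, CachePeer and MemBuf::appendf, none of which are
reproduced here): build the URL once up front, then the single output call
can use it, length and all.

  #include <cstdio>
  #include <string>

  struct Peer { bool originserver; };
  struct Request { const Peer *peer; std::string canonicalUrl; std::string path; };

  static void buildRequestPrefix(const Request &r, const char *method)
  {
      std::string url;
      if (r.peer && !r.peer->originserver)
          url = r.canonicalUrl; // absolute-form URL when relaying via a non-originserver peer
      else
          url = r.path;         // origin-form path otherwise
      std::printf("%s %s HTTP/1.1\r\n", method, url.c_str());
  }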

>  src/internal.cc
>  why use upath.cmp(storeDigestUri) == 0 ?
>  It's easier to just upath == storeDigestUri
>  more readable
>  the only case where it makes sense to use cmp is the caseCmp
> variant

Done.

>  same in internalCheck
>  etc
>  hmm

Done. Using startsWith there.
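
The readability point, sketched with std::string stand-ins (the real code
compares SBufs with operator== and uses SBuf::startsWith, which behave
analogously):

  #include <string>

  static bool isStoreDigestUrl(const std::string &upath,
                               const std::string &storeDigestUri)
  {
      return upath == storeDigestUri; // reads better than upath.compare(...) == 0
  }

  static bool hasInternalPrefix(const std::string &upath,
                                const std::string &prefix)
  {
      // The startsWith() idiom used in internalCheck(), approximated here.
      return upath.size() >= prefix.size()
             && upath.compare(0, prefix.size(), prefix) == 0;
  }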


>  UrnState::getHost; why is the input parameter a SBuf& (not const)?
>  in the implementation of UrnState::getHost there's inconsistent
> spacing in result=xstrndup(urlpath.rawContent(),p-1);
>  in UrnState::RequestNeedsMenu() the length() check is redundant;
> caseCmp does it for you
>  (IIRC)
>  caseCmp() returns false if the SBufs are different lengths
>  .menu is a prefix on one SBuf
>  use startsWith then
>  that's a perfect use case ;)

Done. And removing RequestNeedsMenu entirely now that its one use is so
simplified.

>  r->url.path(r->url.path().substr(5)); // strip prefix "menu."
>  you could use SBuf::chop there maybe
>  yes, looks like it
>  r->url.path.chop(5) should do the trick
> 

The path() getter returns a const value, so it cannot be modified in
place. Going through the path(...) setter also ensures touch() is called,
in preparation for canonical() being made a method.
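
A minimal sketch of that reasoning with a toy class (not src/URL.h): the
getter hands back a const reference, and routing the change through the
setter is what guarantees touch() runs.

  #include <string>

  class Url
  {
  public:
      const std::string &path() const { return path_; }       // const: cannot be chopped in place
      void path(const std::string &p) { path_ = p; touch(); }  // setter invalidates cached state

  private:
      void touch() { canonical_.clear(); } // e.g. a cached canonical form rebuilt later

      std::string path_;
      std::string canonical_;
  };

  // so stripping the marker cycles through the setter:
  //   url.path(url.path().substr(5)); // strip prefix "menu."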


Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Refactor path strings into class URL

2015-07-03 Thread Kinkie
Hi,
my review (copy-paste from IRC)

 SBuf::size_type urlLen = ...
 what about using auto?
 auto seems to only work sometimes.
 O_O ?
 Looks strange in such a trivial case
 src/client_side.cc
 there's a SBuf tmp(..); strcpy().
 Can't we do better?
 we know the length of the SBuf after all
 you could xstrncpy(..., ...->length())
 nod
 that's one fewer copy
 I wouldn't so systematically mention c_str as a performance
regression
 e.g. when doing mime lookup: yes it reallocs but it's a few bytes
 I'd rather TODO: refactor mime*
 alex complains if I don't. I've avoided it on the ones I can see are
not regressions (i.e. replacing a copy with a copy)
 xstrndup(tmp.c_str(),tmp.length());
 why go c_str() if you know the length?
 I'd go rawContent()
 and later on xstrdup(), forgetting you know the length
 this is all in FtpGateway.cc
 src/gopher.cc -> xstrncpy(request, path.c_str(), MAX_URL);
 maybe
xstrncpy(request,path.rawContent(),min(path.length(),MAX_URL)) could do
 IIRC xstrncpy null-terminates, right?
 HttpStateData::buildRequestPrefix
 maybe it's possible to have url the other way around
 SBuf url;
 if (_peer && !_peer->options.originserver)
 rul = SBuf(urlCanonical(request);
 forgot a )
 and then in the else branch just use url = url.path()
 and fix the mb->appendf().
 I'm willing to bet it'll end up being a small performance
improvement because you'd be keeping info about length around
 src/internal.cc
 why use upath.cmp(storeDigestUri) == 0 ?
 It's easier to just upath == storeDigestUri
 more readable
 the only case where it makes sense to use cmp is the caseCmp
variant
 same in internalCheck
 etc
 hmm
 UrnState::getHost; why is the input parameter a SBuf& (not const)?
 in the implementation of UrnState::getHost there's inconsistent
spacing in result=xstrndup(urlpath.rawContent(),p-1);
 in UrnState::RequestNeedsMenu() the length() check is redundant;
caseCmp does it for you
 (IIRC)
 caseCmp() returns false if the SBufs are different lengths
 .menu is a prefix on one SBuf
 use startsWith then
 that's a perfect use case ;)
 r->url.path(r->url.path().substr(5)); // strip prefix "menu."
 you could use SBuf::chop there maybe
 yes, looks like it
 r->url.path.chop(5) should do the trick


I see no obvious issues with the code and you mention you have run-tested
it, so +1 with the fixes above.

On Sun, Jun 28, 2015 at 1:21 PM, Amos Jeffries wrote:

> This takes another step towards bug 1961 closure by shuffling the
> HttpRequest::urlpath member into class URL.
>
> These changes appear to work nicely. But I have not been able to test
> all code paths that have logic change so a second pair of eyes on it
> would be appreciated.
>
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
>
>


-- 
Francesco
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Splice to origin cache_peer

2015-07-03 Thread Tsantilas Christos

The patch was applied to trunk as rev14132.
The applied patch includes the requested fixes.

Regards,
   Christos

On 06/28/2015 03:17 PM, Amos Jeffries wrote:

On 24/06/2015 2:54 a.m., Tsantilas Christos wrote:

Currently, Squid cannot redirect intercepted connections that are
subject to SslBump rules to _originserver_ cache_peer. For example,
consider Squid that enforces "safe search" by redirecting clients to
forcesafesearch.example.com. Consider a TLS client that tries to connect
to www.example.com. Squid needs to send that client to
forcesafesearch.example.com (without changing the host header and SNI
information; those would still point to www.example.com for safe search
to work as intended!).

The admin may configure Squid to send intercepted clients to an
originserver cache_peer with the forcesafesearch.example.com address.
Such a configuration does not currently work together with ssl_bump
peek/splice rules.

This patch:

* Fixes a src/neighbors.cc bug which prevented CONNECT requests from going
to originserver cache peers. This bug affects both true CONNECT requests
and intercepted SSL/TLS connections (with fake CONNECT requests). Squid
uses CachePeer::in_addr.port, which is not meant to be used for the HTTP
port, apparently. HTTP checks should use CachePeer::http_port instead
(see the sketch after this list).

* Changes Squid to not initiate SSL/TLS connection to cache_peer for
true CONNECT requests.

* Allows forwarding being-peeked-at (or stared-at) connections to
originserver cache_peers.
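
Roughly, the distinction the first bullet describes, using a hypothetical
cut-down structure (the real CachePeer lives in src/CachePeer.h and is not
reproduced here):

  struct CachePeerFields {
      unsigned short http_port;    // the port the peer accepts HTTP requests on
      unsigned short in_addr_port; // the in_addr port; per the fix, not for HTTP checks
  };

  // HTTP-level decisions should compare against http_port only.
  static bool peerListensOn(const CachePeerFields &p, unsigned short requestPort)
  {
      return p.http_port == requestPort;
  }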


This is a Measurement Factory project.



General comment: remember that SSL (all versions) is now deprecated and
the target is to kill all use of SSL (and references to it, if we can).
Please use "TLS" when naming and documenting new things that are generic
TLS/SSL and not explicitly part of the SSLv2 or SSLv3 protocols.


in src/FwdState.cc:

* Took me ages to figure out why sslToPeer contains
!userWillSslToPeerForUs. Please either rename sslToPeer to
needTlsToPeer OR add code comments to document that logic more clearly.
  - please add a comment that userWillSslToPeerForUs assumes CONNECT ==
HTTPS (which is not always true in reality).


+1. Other than that bit of polish this looks fine. The updated patch can
go in without another review.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Avoid SSL certificate db corruption with empty index.txt as a symptom.

2015-07-03 Thread Tsantilas Christos

I just saw that I had forgotten to attach the patch here.


On 06/23/2015 06:30 PM, Tsantilas Christos wrote:


* Detect cases where the size file is corrupted or has a clearly wrong
value. Automatically rebuild the database in such cases.

* Teach ssl_crtd to keep running if it is unable to store the generated
certificate in the database. Return the generated certificate to Squid
and log an error message in such cases.

Background:

There are cases where ssl_crtd may corrupt its certificate database. The
known cases manifest themselves with an empty db index file.  When that
happens, ssl_crtd helpers quit, SSL bumping does not work any more, and
the certificate DB has to be deleted and re-initialized.

We do not know exactly what causes corruption in deployments, but one
known trigger that is easy to reproduce in a lab is the block size
change in the ssl_crtd configuration. That change has the following
side-effects:

1. When ssl_crtd removes certificates, it computes their size using a
different block size than the one used to store the certificates. This
may result in negative database sizes.

2. Signed/unsigned conversion results in a huge number near LONG_MAX,
which is then written to the "size" file (see the small demonstration
after this list).

3. The ssl_crtd helper refuses to store new certificates because the
database size (as described by the "size" file) exceeds the configured
limit.

4. The ssl_crtd helper exits because it cannot store a new certificate
in the database. No helper response is sent to Squid in this case.
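
A small self-contained demonstration of step 2 (not ssl_crtd code): a
negative size pushed through a signed-to-unsigned conversion becomes a
huge value, which is what then lands in the "size" file.

  #include <cstddef>
  #include <cstdio>

  int main()
  {
      long dbSize = 100 - 4096;  // removal accounted with a larger block size
      size_t stored = dbSize;    // implicit signed -> unsigned conversion
      std::printf("%ld is stored as %zu\n", dbSize, stored);
      return 0;
  }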

Most likely, there are other corruption triggers -- the database
management code is of overall poor quality. This change resolves some
of the underlying problems in the hope of addressing at least some of the
unknown triggers as well as the known one.

This is a Measurement Factory project.


Avoid SSL certificate db corruption with empty index.txt as a symptom.

* Detect cases where the size file is corrupted or has a clearly wrong
  value. Automatically rebuild the database in such cases.

* Teach ssl_crtd to keep running if it is unable to store the generated
  certificate in the database. Return the generated certificate to Squid
  and log an error message in such cases.

Background:

There are cases where ssl_crtd may corrupt its certificate database.
The known cases manifest themselves with an empty db index file.  When
that happens, ssl_crtd helpers quit, SSL bumping does not work any more,
and the certificate DB has to be deleted and re-initialized.

We do not know exactly what causes corruption in deployments, but one
known trigger that is easy to reproduce in a lab is the block size
change in the ssl_crtd configuration. That change has the following
side-effects:

1. When ssl_crtd removes certificates, it computes their size using a
   different block size than the one used to store the certificates.
   This may result in negative database sizes.

2. Signed/unsigned conversion results in a huge number near LONG_MAX,
   which is then written to the "size" file.

3. The ssl_crtd helper refuses to store new certificates because the
   database size (as described by the "size" file) exceeds the
   configured limit.

4. The ssl_crtd helper exits because it cannot store a new certificate
   in the database. No helper response is sent to Squid in this case.

Most likely, there are other corruption triggers -- the database
management code is of overall poor quality. This change resolves some
of the underlying problems in the hope of addressing at least some of the
unknown triggers as well as the known one.

This is a Measurement Factory project.


=== modified file 'src/ssl/certificate_db.cc'
--- src/ssl/certificate_db.cc	2015-04-14 07:26:12 +
+++ src/ssl/certificate_db.cc	2015-06-12 17:15:03 +
@@ -303,52 +303,59 @@
 // TODO: check if the stored row is valid.
 return true;
 }
 
 {
 TidyPointer subject(X509_NAME_oneline(X509_get_subject_name(cert.get()), NULL, 0));
 Ssl::X509_Pointer findCert;
 Ssl::EVP_PKEY_Pointer findPkey;
 if (pure_find(useName.empty() ? subject.get() : useName, findCert, findPkey)) {
 // Replace with database certificate
 cert.reset(findCert.release());
 pkey.reset(findPkey.release());
 return true;
 }
 // pure_find may fail because the entry is expired, or because the
 // certs file is corrupted. Remove any entry with given hostname
 deleteByHostname(useName.empty() ? subject.get() : useName);
 }
 
 // check db size while trying to minimize calls to size()
-while (size() > max_db_size) {
-if (deleteInvalidCertificate())
-continue; // try to find another invalid certificate if needed
-
-// there are no more invalid ones, but there must be valid certificates
-do {
-if (!deleteOldestCertificate()) {
-save(); // Some entries may have been removed. Update the index file.
-

[squid-dev] Build failed in Jenkins: coverity-fixes-matrix » clang,j-fbsd-93 #1

2015-07-03 Thread noc
See 


--
Started by upstream project "coverity-fixes-matrix" build number 1
originally caused by:
 Started by an SCM change
Building remotely on j-fbsd-93 (gcc farm 10.0-RELEASE-p12 
freebsd-10.0-RELEASE-p12 clang freebsd amd64-freebsd 
amd64-freebsd-10.0-RELEASE-p12 amd64) in workspace 


Deleting project workspace... Cleaning workspace...
$ bzr branch lp:~kinkie/squid/coverity-fixes 

bzr: ERROR: Connection error: while sending POST /bazaar/: [Errno 0] Error
ERROR: Failed to branch lp:~kinkie/squid/coverity-fixes
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch lp:~kinkie/squid/coverity-fixes 

bzr: ERROR: Connection error: while sending POST /bazaar/: [Errno 0] Error
ERROR: Failed to branch lp:~kinkie/squid/coverity-fixes
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch lp:~kinkie/squid/coverity-fixes 

bzr: ERROR: Connection error: while sending POST /bazaar/: [Errno 0] Error
ERROR: Failed to branch lp:~kinkie/squid/coverity-fixes
[description-setter] Description set: 
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev