Re: [squid-users] squid-5.4 blocking on ipv6 outage

2022-02-21 Thread Jason Haar
Well this was a wild ride, I actually tracked the problem back to
dns64/nat64!

What I discovered is that the affected webserver didn't actually have ipv6
- it only had 2 ipv4 addresses. But something in my DNS tree (I'm
suspecting the local systemd-resolved, but can't actually find any direct
evidence) had whacked fake AAAA DNS64/NAT64 records in for each of them. I've
never seen them before so didn't realise "64:ff9b::/96" was a "special"
IPv6 range. I directly queried our upstream DNS recursive name server and
it didn't have those IPv6 records - but the local systemd-resolved would not
give them up. So I down/up-ed the interface (resetting systemd-resolved) and
the problem disappeared.
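
For anyone else hitting this: 64:ff9b::/96 is the RFC 6052 NAT64
"well-known prefix", and a DNS64 resolver synthesizes AAAA records by
embedding the IPv4 address in the low 32 bits. A minimal sketch of that
mapping (stdlib-only illustration, nothing squid- or systemd-specific):

```python
import ipaddress

# RFC 6052 NAT64 "well-known prefix" - the "special" range mentioned above
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4):
    """Build the AAAA a DNS64 resolver would synthesize for an A record."""
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(WKP.network_address) | int(v4)))

def embedded_ipv4(ipv6):
    """Recover the IPv4 address embedded in a synthesized AAAA."""
    v6 = ipaddress.IPv6Address(ipv6)
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF))

print(synthesize_aaaa("192.0.2.1"))        # 64:ff9b::c000:201
print(embedded_ipv4("64:ff9b::c000:201"))  # 192.0.2.1
```

So a quick sanity check for this failure mode is: if every AAAA you get
back starts with 64:ff9b:, something in the resolver path is doing DNS64.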

This new information really doesn't change the nature of the question, but
I'm afraid the problem is now resolved (for the moment) so debugging won't
catch it. If it happens again (I have never seen this before) I'll be sure
to do the debugging thang.

On Tue, Feb 22, 2022 at 3:16 AM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 2/20/22 20:43, Jason Haar wrote:
>
> > I've noticed that the Internet ipv6 is not quite as reliable as ipv4, in
> > that squid reports it cannot connect to web servers with an ipv6 error
> > when the web server is still available over ipv4.
> >
> > eg right now one of our Internet-based web apps (which has 2 ipv6 and 2
> > ipv4 IP addresses mapped to its DNS name) is not responding over ipv6
> > for some reason (I dunno - not involved myself) - but is working fine
> > over ipv4. Squid-5.4 is erroring out - saying that it cannot connect to
> > the first ipv6 address with a "no route to host" error. But if I use
> > good-ol' telnet to the DNS name, telnet shows it trying-and-failing
> > against both ipv6 addresses and then succeeding against the ipv4. ie it
> > works and squid doesn't. BTW the same squid server is currently fine
> > with ipv6 clients talking to it and it talking over ipv6 to Internet
> > hosts like google.com - ie this is an ipv6 outage on
> > one Internet host where its ipv4 is still working.
> >
> > This doesn't seem like a negative_dns_ttl setting issue, it seems like
> > squid just tries one address on a multiple-IP DNS record and stops
> > trying? I even got tcpdump up and can see that when I do a
> > "shift-reload" on the webpage, squid only sends a few SYN packets to the
> > same non-working IPv6 address - it doesn't even try the other 3 IPs?
> >
> > I also checked squidcachemgr.cgi and the DNS record isn't even cached in
> > "FQDN Cache Stats and Contents", which I guess is consistent with its
> > opinion that it's not working.
> >
> > Any ideas what's going on there? thanks!
>
> Squid is supposed to send both A and AAAA DNS queries for the uncached
> domain and then try the first IP it can DNS-resolve and TCP-connect to.
> If that winning destination does not work at HTTP level, then Squid may,
> in some cases, try other destinations. There are lots of variables and
> nuances related to the associated Happy Eyeballs and reforwarding
> algorithms. It is impossible to say for sure what is going on in your
> specific case without more information.
>
> Your best bet may be to share an ALL,9 cache.log that reproduces the
> problem using a single isolated test transaction:
>
>
> https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction
>
>
> HTH,
>
> Alex.
>


-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid-5.4 blocking on ipv6 outage

2022-02-20 Thread Jason Haar
Hi there

I've noticed that the Internet ipv6 is not quite as reliable as ipv4, in
that squid reports it cannot connect to web servers with an ipv6 error when
the web server is still available over ipv4.

eg right now one of our Internet-based web apps (which has 2 ipv6 and 2
ipv4 IP addresses mapped to its DNS name) is not responding over ipv6 for
some reason (I dunno - not involved myself) - but is working fine over
ipv4. Squid-5.4 is erroring out - saying that it cannot connect to the
first ipv6 address with a "no route to host" error. But if I use good-ol'
telnet to the DNS name, telnet shows it trying-and-failing against both
ipv6 addresses and then succeeding against the ipv4. ie it works and squid
doesn't. BTW the same squid server is currently fine with ipv6 clients
talking to it and it talking over ipv6 to Internet hosts like google.com -
ie this is an ipv6 outage on one Internet host where its ipv4 is still
working.
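
The try-each-address-in-turn behaviour that telnet shows can be sketched
roughly like this (a minimal illustration of sequential fallback over
getaddrinfo results, not Squid's actual Happy Eyeballs implementation):

```python
import socket

def connect_any(host, port, timeout=3):
    """Try each resolved address (AAAA and A) in turn, returning the
    first socket that connects - roughly what classic telnet does."""
    last_err = None
    for family, stype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, stype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s, addr
        except OSError as err:
            s.close()
            last_err = err
    raise last_err if last_err else OSError("no addresses resolved")
```

With a dead ipv6 pair plus a live ipv4 this eventually succeeds, which is
the behaviour I expected squid to match.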

This doesn't seem like a negative_dns_ttl setting issue, it seems like
squid just tries one address on a multiple-IP DNS record and stops trying?
I even got tcpdump up and can see that when I do a "shift-reload" on the
webpage, squid only sends a few SYN packets to the same non-working IPv6
address - it doesn't even try the other 3 IPs?

I also checked squidcachemgr.cgi and the DNS record isn't even cached in
"FQDN Cache Stats and Contents", which I guess is consistent with its
opinion that it's not working.

Any ideas what's going on there? thanks!

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


[squid-users] XSS issue only affects bump doesn't it?

2018-10-28 Thread Jason Haar
Hi there

I'm running a vulnerable version of squid (ie "--with-openssl" and
"--enable-ssl") but due to issues with bumping not working well, don't
actually do it (ie sslcrtd_program and ssl_bump not defined/etc).

So does that mean we can't actually be affected by this vulnerability?

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] Secure basic authentication on Squid

2017-12-05 Thread Jason Haar
To reiterate Alex, "yes you can".

Squid supports "proxy over TLS" as well as the old/default "proxy over TCP"
- you use the https_port option

...but getting browsers to support it is challenging. The best way would be
to create a WPAD file that tells browsers to use "HTTPS" instead of
"PROXY". Then you can just use Proxy-Authentication using Basic and you'd
be all set. BTW, Basic has MAJOR performance benefits over any other form
of authentication IMHO. Basic over TLS is the way to go...


eg something like this

---- wpad.dat ----

function FindProxyForURL(url, host)
{
  // see how I used 443? If you're going to run a TLS-encrypted proxy,
  // make it totally appear as an HTTPS server and run it on port 443...

  if (isPlainHostName(host) || dnsDomainIs(host, "localhost.localdomain")) {
    return "DIRECT";
  } else if (isInNet(host, "127.0.0.0", "255.0.0.0") ||
             isInNet(host, "10.0.0.0", "255.0.0.0") ||
             isInNet(host, "172.16.0.0", "255.240.0.0") ||
             isInNet(host, "192.168.0.0", "255.255.0.0")) {
    return "DIRECT";
  } else {
    return "HTTPS secure-squid.com:443";
  }
}
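
On the squid side, the matching listener plus Basic auth might look
something like this (a sketch using squid-3.5-era https_port syntax; the
certificate paths and the ncsa helper are assumptions - substitute your
own helper and paths):

```
# squid.conf: accept proxy traffic over TLS (the "HTTPS" proxy in wpad.dat)
https_port 443 cert=/etc/squid/secure-squid.com.crt key=/etc/squid/secure-squid.com.key

# Basic creds are acceptable here because they only ever travel inside TLS
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm secure-squid
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```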


On Tue, Dec 5, 2017 at 5:13 AM, Colle Christophe <
christophe.co...@ac-nancy-metz.fr> wrote:

> Hi Anthony,
>
> Thank you for your answer.
>
> Note that this only secures the traffic Squid<->LDAP Server, not
> browsers<->Squid.
>
> Is there a solution to secure communication between the browser and the
> proxy?
>
>
> Chris.
>
> On 04/12/17 16:49, Antony Stone <antony.st...@squid.open.source.it> wrote:
>
> On Monday 04 December 2017 at 16:42:30, Colle Christophe wrote:
>
> > Is there a solution to secure the "basic" authentication of squid? (with
> an
> > SSL certificate for example).
>
> https://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap section
> "SSL/TLS_adjustments"?
>
>
> Antony.
>
> --
> "Linux is going to be part of the future. It's going to be like Unix was."
>
>  - Peter Moore, Asia-Pacific general manager, Microsoft
>
> Please reply to the list; please *don't* CC me.


-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] dumb question: how to get http server IP into logs?

2017-08-09 Thread Jason Haar
Thanks for that guys. Dumb mistake - I had "%<A" in there instead of "%<a"
:-/

(although it's so 'dumb' that I'm now wondering "did I originally choose
that for a reason?". I've just lowercased it - I guess I'll see what breaks
;-)
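
For reference, a logformat that appends the server IP might look like this
(a sketch; the format name and log path are arbitrary, and the %-codes are
from the logformat documentation):

```
# %<a = IP address of the last server or peer connection
# %<A = server FQDN or peer name (a hostname, not an IP)
logformat withserver %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
access_log /var/log/squid/access.log withserver
```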

On Mon, Jul 31, 2017 at 11:49 PM, Eliezer Croitoru <elie...@ngtech.co.il>
wrote:

> I looked at:
> http://www.squid-cache.org/Doc/config/logformat/
>
> and the default squid logformat:
> logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
> Seems to contain the desired pattern.
> Am I missing something?
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Amos Jeffries
> Sent: Monday, July 31, 2017 13:22
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] dumb question: how to get http server IP into
> logs?
>
> On 30/07/17 22:02, Jason Haar wrote:
> > Hi there
> >
> > We're running squid-3.5.23 and use ICAP (if that makes a difference)
> >
> > We also use logformat to include certain details in the logs - but I
> > can't see an option for including the actual IP address that squid uses
> > when attempting to fulfil a URL request. eg squid gets told to go to
> > twitter.com, resolves that to 4 IPs, tries 1st -
> > fails, tries 2nd - succeeds. I'd like to record that IP in the logs
> > along with everything else. I can see variables for recording the client
> > and squid-server IP - but not the web server?
> >
> > Is that possible? I'm sure older (3.2) squid used to do that by default?
> > (DIRECT/1.2.3.4?). All our logs are now "HIER_DIRECT"
> >
>
> The code you are looking for is %<a
> (http://www.squid-cache.org/Doc/config/logformat/)
> "Server IP address of the last server or peer connection"
>
> Amos



-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


[squid-users] dumb question: how to get http server IP into logs?

2017-07-30 Thread Jason Haar
Hi there

We're running squid-3.5.23 and use ICAP (if that makes a difference)

We also use logformat to include certain details in the logs - but I can't
see an option for including the actual IP address that squid uses when
attempting to fulfil a URL request. eg squid gets told to go to twitter.com,
resolves that to 4 IPs, tries 1st - fails, tries 2nd - succeeds. I'd like
to record that IP in the logs along with everything else. I can see
variables for recording the client and squid-server IP - but not the web
server?

Is that possible? I'm sure older (3.2) squid used to do that by default?
(DIRECT/1.2.3.4?). All our logs are now "HIER_DIRECT"

Thanks

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] Peeking on TLS traffic: unknown cipher returned

2016-10-19 Thread Jason Haar
On Thu, Oct 20, 2016 at 5:01 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> Please note that "peek and make a decision based on SNI" is not what
> your configuration tells Squid to do.
>

This is a complex situation for most people (myself included), can you tell
us how to "peek and make a decision based on SNI"?

I'm probably like the original poster in that I simply want to be able to
do transparent proxying of TCP/443 so as to better log HTTPS transactions. I
wouldn't even bother with the "terminate" bit - if I wanted to blacklist
some HTTPS sites, I'd rather rely on the normal non-bumping ACLs, the
SNI-learnt domain names, and "deny" - I don't care if a cleartext blob is
sent through to a client who thinks it's TLS - it will break and that's all
that matters. Anything better *requires* full MiTM, which I want to avoid as
I believe it has no future due to pinning.
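
For the "peek and make a decision based on SNI" part, a minimal squid.conf
sketch might look like this (assuming squid-3.5+ ssl_bump steps; the
blocklist path is hypothetical, and this is my reading of the docs rather
than a tested config):

```
# learn the client's SNI at step 1, then decide
acl step1 at_step SslBump1
acl blocked_sni ssl::server_name "/etc/squid/blocked-sni.txt"

ssl_bump peek step1
ssl_bump terminate blocked_sni
# otherwise pass the TLS through untouched
ssl_bump splice all
```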

Off to upgrade to 3.5.22 :-)

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-22 Thread Jason Haar
On Tue, Sep 20, 2016 at 8:39 PM, FredB <fredbm...@free.fr> wrote:

> I'm searching a way to use a secure SSO with Squid, how did you implement
> the authenticate method with an implicit proxy ?
> I'm reading many documentations about SAML, but I found nothing about Squid
>
> I guess we can only do something with cookies ?
>

Hi Fred

Proxies only support "HTTP authentication" methods: Basic, Digest, NTLM,
etc. So you either have to use one of those, or perhaps "fake" the
creation of one of those...?

eg you mentioned SAML, but gave no context beyond saying you didn't want
AD. So let's say SAML is a requirement. Well that's directly impossible as
it isn't an "HTTP authentication" method, but you could hit it from the
sides...

How about putting a SAML SP on your squid server, and having it generate
fresh random Digest authentication creds for any authenticated user (ie same
username, but a 30-char random password), then telling them to cut-n-paste
those into their web browser proxy prompt and "save" them. That way the
proxy is using Digest and it involved a one-off SAML interaction. I say
Digest instead of Basic because Digest is more secure over cleartext - but
it's also noticeably slower than Basic over high-latency links, so you can
choose your poison there
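
The credential-generation step could be sketched like this (a hypothetical
illustration, not a real SP integration; it emits htdigest-style
user:realm:HA1 lines - check what format your particular Digest helper
expects before using anything like this):

```python
import hashlib
import secrets
import string

def make_digest_cred(username, realm):
    """Generate a fresh 30-char random password plus the matching
    htdigest-style line (user:realm:HA1) for a Digest password file."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(30))
    ha1 = hashlib.md5(f"{username}:{realm}:{password}".encode()).hexdigest()
    return password, f"{username}:{realm}:{ha1}"

password, line = make_digest_cred("alice", "proxy")
print(line)  # alice:proxy: followed by 32 hex chars
```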

If you're really keen, you can actually do proxy-over-TLS via WPAD with
Firefox/Chrome - at which point I'd definitely recommend Basic for the
performance reasons ;-)



-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] Browser circunvents acl's blocking https (intercept mode)

2016-04-23 Thread Jason Haar
On Sun, Apr 17, 2016 at 9:11 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> Like Jok mentioned Chrome is probably using QUIC protocol or one of the
> other non-HTTPS protocols it uses.
>


Other non-HTTPS? Can you expand on that? I'm aware of QUIC (udp/443) and
ensure our firewalls block it so as to force it to tcp/443 - but you're
implying there are yet more alternatives?

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] grove.microsoft.com

2016-04-14 Thread Jason Haar
If you are blocking it, then it can't be uploading 2G? How are you
measuring that it uploads 2G? Did you change squid's logging to support
that (by default it logs download sizes, not upload sizes)? Or are you
simply referring to the Content-Length header - as that would say 2G
even if the upload is then blocked.

On Fri, Apr 15, 2016 at 4:04 PM, Michael Pelletier <
michael.pellet...@palmbeachschools.org> wrote:

> I am blocking grove.microsoft.com. Even though I am blocking it, I am
> seeing large, 2 Gig, uploads from the client to the proxy (which indeed
> blocks it). It is almost like the connection request (explicit) contains
> the 2 gig post request. Why is this happening? Has anyone seen this?
>
>
> Michael
>
> *Disclaimer: *Under Florida law, e-mail addresses are public records. If
> you do not want your e-mail address released in response to a public
> records request, do not send electronic mail to this entity. Instead,
> contact this office by phone or in writing.
>
>


-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] intercepting tcp/443 purely for logging purposes

2016-03-21 Thread Jason Haar
It's really not much more than what I first posted (I can't send my config
- it's pretty specific to our site - you'll have to figure out the standard
stuff yourself)

So this will make a squid-3.5 server capable of doing "transparent HTTPS"
without any fiddling with the transactions. Of course it assumes you
already know how to redirect port 443 traffic onto your proxy, and know how
to reconfigure the OS to support that too (ie same as transparent HTTP on
port 80)

acl BlacklistedHTTPSsites dstdomain "/etc/squid/acl-BlacklistedHTTPSsites.txt"
http_access deny BlacklistedHTTPSsites
https_port 3127 intercept ssl-bump cert=/etc/squid/squid-CA.cert cafile=/etc/squid/ca-bundle.crt generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 256MB
sslcrtd_children 32 startup=15 idle=5
acl SSL_https port 443
ssl_bump splice SSL_https


On Tue, Mar 22, 2016 at 12:05 AM, Vito A. Smaldino <
vitoantonio.smald...@istruzione.it> wrote:

> Hi all,
> great, i'm just searching for this. Jason can you kindly post the whole
> squid.conf?
> Thanks
> V
>
> 2016-03-20 22:29 GMT+01:00 Jason Haar <jason_h...@trimble.com>:
>
>> Hi there
>>
>> I'm wanting to use tls intercept to just log (well OK, and potentially
>> block) HTTPS sites based on hostnames (from SNI), but have had problems
>> even in peek-and-splice mode. So I'm willing to compromise and instead just
>> intercept that traffic, log it, block on IP addresses if need be, and don't
>> use ssl-bump beyond that.
>>
>> So far the following seems to work perfectly, can someone confirm this is
>> "supported" - ie that I'm not relying on some bug that might get fixed
>> later? ;-)
>>
>> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M
>> 256MB
>> sslcrtd_children 32 startup=15 idle=5
>> acl SSL_https port 443
>> ssl_bump splice SSL_https
>> acl BlacklistedHTTPSsites dstdomain
>> "/etc/squid/acl-BlacklistedHTTPSsites.txt"
>> http_access deny BlacklistedHTTPSsites
>>
>> The "bug" comment comes down to how acl seems to work. I half-expected
>> the above not to work - but it does. It would appear squid will treat an
>> intercept's dst IP as the "dns name" as that's all it's got - so
>> "dstdomain" works fine for both CONNECT and intercept IFF the acl contains
>> IP addresses
>>
>> I was hoping I wouldn't need ssl-bump at all, but you need squid to be
>> running a https_port, and for it to support "intercept", and to do that
>> squid insists on "ssl-bump" too - although that seems likely was a
>> programmer assumption that didn't include people like me doing mad things
>> like this? :-). I'd also guess I don't need 32 children/etc  - 1 would
>> suffice as it's never used?
>>
>> So the end result is that all CONNECT and/or intercept SSL/TLS traffic is
>> supported via the proxy, with all TLS security decisions residing on the
>> client. I get my logs, and if I want to block some known bad IP address, I
>> can: CONNECT causes a 403 HTTP error page and intercept basically ditches
>> the tcp/443 connection - which is as good as it gets without getting into
>> the wonderful world of real "bump"
>>
>> --
>> Cheers
>>
>> Jason Haar
>> Information Security Manager, Trimble Navigation Ltd.
>> Phone: +1 408 481 8171
>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>>
>> --
>> Vito A. Smaldino
>>
>
>


-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] intercepting tcp/443 purely for logging purposes

2016-03-21 Thread Jason Haar
Yeah I know that, but there are issues with invoking peek: the host
forgery checks suddenly kick in, and squid starts seeing SSL errors
(probably due to CentOS6 not supporting the newest standards that Chrome
uses) and then squid starts blocking things. That's why I'm sticking to
the simplest case for the moment and avoiding the "peek" call


Thanks!

Jason

On Mon, Mar 21, 2016 at 8:53 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 21/03/2016 10:29 a.m., Jason Haar wrote:
> > Hi there
> >
> > I'm wanting to use tls intercept to just log (well OK, and potentially
> > block) HTTPS sites based on hostnames (from SNI), but have had problems
> > even in peek-and-splice mode. So I'm willing to compromise and instead
> just
> > intercept that traffic, log it, block on IP addresses if need be, and
> don't
> > use ssl-bump beyond that.
> >
> > So far the following seems to work perfectly, can someone confirm this is
> > "supported" - ie that I'm not relying on some bug that might get fixed
> > later? ;-)
> >
>
> It is supported.
>
> > sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M
> 256MB
> > sslcrtd_children 32 startup=15 idle=5
> > acl SSL_https port 443
> > ssl_bump splice SSL_https
> > acl BlacklistedHTTPSsites dstdomain
> > "/etc/squid/acl-BlacklistedHTTPSsites.txt"
> > http_access deny BlacklistedHTTPSsites
> >
> > The "bug" comment comes down to how acl seems to work. I half-expected
> the
> > above not to work - but it does. It would appear squid will treat an
> > intercept's dst IP as the "dns name" as that's all it's got - so
> > "dstdomain" works fine for both CONNECT and intercept IFF the acl
> contains
> > IP addresses
>
> This is because the ssl_bump rules are saying to splice immediately when
> only the pseudo-CONNECT with an IP address is known.
>
> If you use this:
>  ssl_bump peek all
>  ssl_bump splice all
>
> it will peek at the client SNI and server public cert details before
> dropping back to a transparent pass-tru. Then it will have that domain
> and any other non-encrypted details available for logging.
>
> Amos
>



-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


[squid-users] intercepting tcp/443 purely for logging purposes

2016-03-20 Thread Jason Haar
Hi there

I'm wanting to use tls intercept to just log (well OK, and potentially
block) HTTPS sites based on hostnames (from SNI), but have had problems
even in peek-and-splice mode. So I'm willing to compromise and instead just
intercept that traffic, log it, block on IP addresses if need be, and don't
use ssl-bump beyond that.

So far the following seems to work perfectly, can someone confirm this is
"supported" - ie that I'm not relying on some bug that might get fixed
later? ;-)

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 256MB
sslcrtd_children 32 startup=15 idle=5
acl SSL_https port 443
ssl_bump splice SSL_https
acl BlacklistedHTTPSsites dstdomain "/etc/squid/acl-BlacklistedHTTPSsites.txt"
http_access deny BlacklistedHTTPSsites

The "bug" comment comes down to how acl seems to work. I half-expected the
above not to work - but it does. It would appear squid will treat an
intercept's dst IP as the "dns name" as that's all it's got - so
"dstdomain" works fine for both CONNECT and intercept IFF the acl contains
IP addresses

I was hoping I wouldn't need ssl-bump at all, but you need squid to be
running a https_port, and for it to support "intercept", and to do that
squid insists on "ssl-bump" too - although that seems like it was a
programmer assumption that didn't include people like me doing mad things
like this? :-). I'd also guess I don't need 32 children/etc - 1 would
suffice as it's never used?

So the end result is that all CONNECT and/or intercept SSL/TLS traffic is
supported via the proxy, with all TLS security decisions residing on the
client. I get my logs, and if I want to block some known bad IP address, I
can: CONNECT causes a 403 HTTP error page and intercept basically ditches
the tcp/443 connection - which is as good as it gets without getting into
the wonderful world of real "bump"

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] SSL Peek and Splice with SIP over TCP

2016-03-09 Thread Jason Haar
Or use socat. I have used it to allow ancient SSLv3-only clients to
communicate with TLS-only servers.

Jason

On Thu, Mar 10, 2016 at 12:28 AM, Amos Jeffries <squ...@treenet.co.nz>
wrote:

> On 9/03/2016 6:53 p.m., Howard Kranther wrote:
> > Hello, I am investigating the use of squid as a client side proxy to
> > provide TLS 1.2 support for a VOIP application using SIP over TCP.The
> > application would use TCP or TLS 1.0 to communicate with squid, which
> > would bump either of those to TLS 1.2 to communicate with a phone
> > system.The application uses a commercial SIP stack so adding an HTTP
> > CONNECT message to the start of a SIP session and processing the
> > response is problematic.
>
> Squid is an HTTP proxy. CONNECT is the only way non-HTTP compatible
> protocols can be delivered over HTTP.
>
> You need to go looking for a SOCKS proxy.
>
> Amos
>



-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: [squid-users] host header forgery false positives

2016-02-15 Thread Jason Haar
On Tue, Feb 16, 2016 at 2:48 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> Thanks for the reminder. I dont recall seeing a bug report being made.
> Though Jason has sent me a more detailed cache.log trace to work with.
>


Yeah - I actually got half-way through putting in a bug report twice - but
ditched it for this and that reason. There's also evidence that this
affects http as well as https. When I was digging through the 2G cache.log
file for the SSL intercept related forgery samples, I found some http
related ones too. I wonder if this is generic to all intercept traffic
instead of https specific?

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


[squid-users] any way to get squid-4 compiled on CentOS-6?

2016-02-12 Thread Jason Haar
Hi there

Given the real work on ssl-bump seems to be in squid-4, I thought to try
it out. Unfortunately, we're using CentOS-6 and the compilers are too
old? (gcc-c++-4.4.7/clang-3.4.2)

CentOS-7 should be fine - but replacing an entire system just to have a
play is a bit too much to ask, so has anyone figured out how to get
squid-4 working on such older systems?

Thanks

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1



Re: [squid-users] https full url

2016-01-17 Thread Jason Haar
On 17/01/16 06:16, xxiao8 wrote:
> Basically I'm trying to see how to get the http-header info from a
> bumped ssl connection and use them directly inside
> squid.conf(including external acl), otherwise icap/ecap is unavoidable
> for bumped ssl http header analysis. 
You must have done it wrong. First check: the squid access.log should
show the entire https url (eg "(GET|CONNECT)
https://google.com/search?q=squid+is+great") - not "CONNECT
google.com:443" - if it doesn't, then ICAP can't "see" the url either

I've done it in the past and it definitely works within ICAP: eg you can
block https urls (instead of just domains) and can use ICAP to pass
https urls through AV/etc. However, cert pinning is a real problem -
especially in transparent/intercept mode. Very frustrating: the Internet
is rapidly moving to HTTPS and yet network-based security like content
filtering proxies find it hard to keep up as they have become the enemy
(because they can be used for evil as well as good). 

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1



[squid-users] host header forgery false positives

2016-01-11 Thread Jason Haar
568 kid1| SECURITY ALERT: Host header forgery
detected on local=54.204.8.186:443 remote=192.168.0.7:44144 FD 237
flags=33 (local IP does not match any domain IP)
2016/01/12 12:45:30.568 kid1| SECURITY ALERT: on URL:
engine.a.redditmedia.com:443
2016/01/12 12:49:10.490 kid1| SECURITY ALERT: Host header forgery
detected on local=192.30.252.128:443 remote=192.168.0.7:36340 FD 79
flags=33 (local IP does not match any domain IP)
2016/01/12 12:49:10.490 kid1| SECURITY ALERT: on URL: github.com:443
2016/01/12 12:49:21.162 kid1| SECURITY ALERT: Host header forgery
detected on local=192.30.252.127:443 remote=192.168.0.7:41264 FD 250
flags=33 (local IP does not match any domain IP)
2016/01/12 12:49:21.162 kid1| SECURITY ALERT: on URL: api.github.com:443
2016/01/12 12:49:51.399 kid1| SECURITY ALERT: Host header forgery
detected on local=192.30.252.129:443 remote=192.168.0.7:50925 FD 203
flags=33 (local IP does not match any domain IP)
2016/01/12 12:49:51.399 kid1| SECURITY ALERT: on URL: github.com:443
2016/01/12 13:03:57.040 kid1| SECURITY ALERT: Host header forgery
detected on local=192.30.252.92:443 remote=192.168.0.7:46645 FD 291
flags=33 (local IP does not match any domain IP)
2016/01/12 13:03:57.040 kid1| SECURITY ALERT: on URL: live.github.com:443
2016/01/12 13:03:59.200 kid1| SECURITY ALERT: Host header forgery
detected on local=192.30.252.92:443 remote=192.168.0.7:46647 FD 275
flags=33 (local IP does not match any domain IP)
2016/01/12 13:03:59.200 kid1| SECURITY ALERT: on URL: live.github.com:443



Re: [squid-users] problem with squidGuard redirect page after upgrading squid

2016-01-07 Thread Jason Haar
On 08/01/16 18:36, Amos Jeffries wrote:
> But you do want to block all of http://good.site/bad\.url.* right?
>
> Otherwise the malware can get around the protection trivially just by
> adding a meaningless suffix to it.

You are totally right - good catch :-)

>
> With all the scraping are you also filtering for duplicates and reducing
> multiple URLs in one doman down to fewer entries?

Yeah - no dupes - but no manual reading to figure out patterns
either. That would take a human eye - and I want set-and-forget automation
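For what it's worth, that level of automation is easy to keep; a sketch (the feed contents and output path are made up) that dedups a scraped list and regex-escapes it so plain URLs are safe to feed to url_regex:

```shell
# Synthetic scraped feed with a duplicate entry.
printf '%s\n' \
  'http://good.site/bad.url' \
  'http://good.site/bad.url' \
  'http://evil.example/a?x=1' |
sort -u |
# Escape regex metacharacters and anchor at the start, so literal URLs
# can be used in a url_regex file without matching more than intended.
sed -e 's/[][^$.*?+(){}|\\]/\\&/g' -e 's/^/^/' > /tmp/malware.regex

cat /tmp/malware.regex
```

Each run replaces the file, so the whole pipeline stays cron-able with no human eye involved.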



Re: [squid-users] problem with squidGuard redirect page after upgrading squid

2016-01-07 Thread Jason Haar
On 08/01/16 01:56, Marcus Kool wrote:
> Can you explain what the huge number of regexes is used for ? 
malware urls. I'm scraping them from publicly available sources like
phishtank, malwaredomains.com. Ironically, they don't need to be regexes
- but squid only has a "url_regex" acl type - so regex it is (can't use
dstdomain because we want to block "http://good.site/bad.url" - not all
of "good.site")
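For reference, the wiring is just a file-based acl; a minimal squid.conf sketch (the path and file name are assumptions):

```
# one malware URL per line, regex-escaped, eg ^http://good\.site/bad\.url
acl malware_urls url_regex -i "/etc/squid/malware-urls.regex"
http_access deny malware_urls

# dstdomain would be too coarse here - it blocks the whole site:
# acl malware_doms dstdomain .good.site
```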



Re: [squid-users] problem with squidGuard redirect page after upgrading squid

2016-01-06 Thread Jason Haar
On 06/01/16 00:04, Amos Jeffries wrote:
> Yes. Squid always has been able to given enough RAM. Squid stores most
> ACLs in memory as Splay trees, so entries are sorted by frequency of use
> which is dynamically adapted over time. Regex are pre-parsed and
> aggregated together for reduced matching instead of re-interpreted and
> parsed per-request.
Great to hear. I've got some 600,000+ domain lists (ie dstdomain) and
60,000+ url lists (ie url_regex) acls, and there are a couple of
"gotchas" I've picked up during testing

1. at startup squid reports "WARNING: there are more than 100 regular
expressions. Consider using less REs". Is that now legacy and ignorable?
(should that be removed?). Obviously I have over 60,000 REs
2. making any change to squid and restarting/reconfiguring it now means
I'm seeing a 12sec outage as squid reads those acls off SSD
drives/parses them/etc. With squidguard that outage is hidden because
squidguard uses indexed files instead of the raw files and that
parsing/etc can be done offline. That behavioral change is pretty
dramatic: making a minor, unrelated change to squid now involves a
10+sec outage (instead of <1sec). I'd say "outsourcing" this kind of
function to another process (such as url_rewriter or ICAP) still has
its advantages ;-)
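One way to at least avoid paying that 10+sec price for a broken config is to validate offline before touching the live service; a sketch where "squid -k parse" is the real validation command, stubbed here so the snippet runs without squid installed:

```shell
# Hedged sketch: keep the reconfigure outage window as small as possible
# by parsing the new config offline first.
squid_parse() { true; }          # replace with: squid -k parse -f "$1"

CONF=/etc/squid/squid.conf       # assumed path
if squid_parse "$CONF"; then
  echo "config OK - safe to reconfigure"
  # squid -k reconfigure         # only hit the live service once parsing passed
else
  echo "config broken - not reconfiguring" >&2
fi
```

This doesn't shrink the acl-load time itself, but it stops a typo from turning the 12sec outage into a crash loop.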



Re: [squid-users] problem with squidGuard redirect page after upgrading squid

2016-01-05 Thread Jason Haar
On 31/12/15 23:43, Amos Jeffries wrote:
>  But that said; everything SG provides a current Squid can also do
> (maybe better) by itself. 
Hi Amos

Are you saying the squid acl model can support (say) 100M acl lists? The
main feature of the squidguard redirector was that it had indexed files
that allowed for rapid searching for matches - is this done within squid
now? (presumably it wasn't some time ago?). If so, is that done in
memory or via the acl files? (ala SG) - the former means a much slower
squid startup?

Thanks



Re: [squid-users] confused over ipv6 failing on ipv4-only network

2016-01-05 Thread Jason Haar
On 06/01/16 17:39, Amos Jeffries wrote:
> On 6/01/2016 5:04 p.m., Jason Haar wrote:
>> Hi there
>>
>> Weird - several times in the past couple of months I have found I cannot
>> get to http://wiki.squid-cache.org/ - I get the error below from my
>> squid-3.5.11 server which does not have a Global ipv6 address (it has a
>> Local ipv6/fe80: on the Ethernet card - but nothing else). Google.com
>> (which is fully ipv6 capable) works fine - so far only
>> wiki.squid-cache.org has shown up this way to me (ie I don't see this
>> error message elsewhere).
>>
>> On the squid server, "dig a" shows valid ipv4 addresses and "dig aaaa"
>> shows the ipv6 address - but why is squid even trying to connect over
>> ipv6 if it doesn't have an ipv6 address?
>>
>> Could this be a case of the "A" record failing to return fast enough,
>> forcing squid to only try ipv6 - which then leads to the error message
>> referring to the ipv6 address?
> Squid waits for both A and AAAA before continuing after DNS lookup. The
> only way to get only IPv6 results is for your DNS server to produce no A
> results at all. Timeout _could_ do that, but the default is 30 sec so
> unlikely.

I think that must be the case, because when I saw the problem this
morning, I immediately ssh'ed into the squid server and nslookup showed
it was resolving the name to its A record just fine (by then) - and
telnet-ing to the IPv4 address was fine too. So it must have either
timed out on the A lookups (but not the AAAA records), or the DNS server
didn't return A records at all? I don't think there's a way to query
squid to see what its current DNS cache is? That would definitively
answer that question


> The Squid wiki is dual-stacked with IPv4 addresses. Since you have a
> v4-only network the thing to do is find out why the IPv4 addresses are
> not working for your Squid.

Well yeah  - but I frankly don't see this on any other website (like
google.com) - just wiki.squid-cache.org - so I think there's something
going on between those DNS servers and my squid server sitting on a
SPARK NZ network

> This just means that IPv6 was the *last* thing tried. It is entirely
> probable that IPv4 were tried first and also failed. Particularly if you
> have dns_v4_first turned on.

No - I don't have dns_v4_first defined at all - so that should be trying
both ipv4 and ipv6 if both DNS records were available.

>
> NP: if you have dns_v4_first off (default) then the error message should
> say some IPv4 failed. Since it gets tried last.
Well that isn't happening - which is why I suspect I'm not getting any
"A" records back at all (or very late). Sadly this isn't repeatable at
will - right now the wiki is working fine




[squid-users] confused over ipv6 failing on ipv4-only network

2016-01-05 Thread Jason Haar
Hi there

Weird - several times in the past couple of months I have found I cannot
get to http://wiki.squid-cache.org/ - I get the error below from my
squid-3.5.11 server which does not have a Global ipv6 address (it has a
Local ipv6/fe80: on the Ethernet card - but nothing else). Google.com
(which is fully ipv6 capable) works fine - so far only
wiki.squid-cache.org has shown up this way to me (ie I don't see this
error message elsewhere).

On the squid server, "dig a" shows valid ipv4 addresses and "dig aaaa"
shows the ipv6 address - but why is squid even trying to connect over
ipv6 if it doesn't have an ipv6 address?

Could this be a case of the "A" record failing to return fast enough,
forcing squid to only try ipv6 - which then leads to the error message
referring to the ipv6 address? This error message may be correct, but is
very confusing to anyone who knows they are only running ipv4: maybe
squid should know how to differentiate between locally routable and
globally routable ipv6 addresses and basically disable ipv6 if there is
no Global route? Obviously I could recompile squid without ipv6 support,
but Amos has made it clear that is "the wrong way" - so how else could
that be done (as adding ipv6 support to an entire network is not an
option either - if it was I wouldn't be sending this email!) :-)

As an aside - I've seen this several times and yet only with
wiki.squid-cache.org - perhaps there's a performance issue/bug with one
of the associated DNS servers there?

The following error was encountered while trying to retrieve the URL:
http://wiki.squid-cache.org/SquidFaq/SquidAcl

Connection to 2001:4b78:2003::1 failed.

The system returned: (101) Network is unreachable

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster.



Re: [squid-users] Host header forgery affects pure splice environment too?

2015-12-27 Thread Jason Haar
On 28/12/15 14:34, Amos Jeffries wrote:
> Removing the redirect of tcp/443 totally fixes the problem.
>
> What redirect ?

tcp/443 redirect - sorry bad choice of words (really iptables REDIRECT).
ie TOR starts working if it isn't going through squid (which I
appreciate doesn't add much to this conversation - but it does prove
it's not some generic firewall/network problem)

> Well, Squid should not get to the point of testing Host name in the HTTP
> messages. SNI is mandatory to contain a resolvable FQDN. Not doing so is
> a TLS protocol violation and Squid should just abort down to either
> terminate or blindly tunnel based on your on_unknown_protocol settings.

Ooh - I haven't heard of "on_unknown_protocol"? I don't see it in the
squid.conf.documented that comes with squid-3.5.10?

That sounds exactly what's needed. What we have here is a situation
where a "bogus" application is routing through tcp/443 - which we choose
to do transparent TLS intercept on. What I want is to use peek/splice to
improve our logging - but otherwise not fiddle with any application that
happens to run over tcp/443.

I did find "on_unsupported_protocol"  - so added
"on_unsupported_protocol tunnel SSL_https" (acl SSL_https port 443) -
but that triggered a squid-3.5.10 config error? Is this a new squid-4
feature?


> if you want to dig into this further I suggest getting a
> "debug_options ALL,9" output and looking at what cache.log says about
> the state of the request that is being checked and failing.

I think we know what the problem is: TOR is making TLS connections (I
don't know if they're HTTPS) on port 443 and uses SNI names that aren't
real?




Re: [squid-users] Host header forgery affects pure splice environment too?

2015-12-27 Thread Jason Haar
On 28/12/15 11:50, Yuri Voinov wrote:
> I think, to eliminate this error you need to splice all torify connections.
As I said - squid is configured to *only*  splice - there is no bump-ing
going on. So this is already the case

acl DiscoverSNIHost at_step SslBump1
ssl_bump peek DiscoverSNIHost
acl SSL_https port 443
ssl_bump splice SSL_https



[squid-users] Host header forgery affects pure splice environment too?

2015-12-27 Thread Jason Haar
Hi there

I use TOR a bit for testing our WAFs and found that it no longer worked
on my test network that has squid configured in TLS intercept mode. I
currently have squid configured to "splice only" (with peek to get the
SNI name) - ie no bumping - purely so that the squid access_log file
contains better records on HTTPS hostnames

2015/12/28 09:22:04.189 kid1| SECURITY ALERT: Host header forgery
detected on local=194.109.206.212:443 remote=192.168.0.21:40427 FD 30
flags=33 (local IP does not match any domain IP)
2015/12/28 09:22:04.189 kid1| SECURITY ALERT: By user agent:
2015/12/28 09:22:04.189 kid1| SECURITY ALERT: on URL: www.z2b4e372r4.com:443

Removing the redirect of tcp/443 totally fixes the problem.

Anyway, it would appear that squid-3.5.10 in splice-only mode still
enables the "Host header forgery" check? Surely if all you are doing is
splice-only, it shouldn't be doing that check at all? ie I could
understand triggering blocking actions if squid was part of the
transaction in bump-mode - but when it's "only looking", it is exactly
the same as not doing splice at all - so why trigger the Host header check?

It does look like TOR has something equivalent to a /etc/hosts file with
fake DNS names - so it's quite understandable that it freaks squid out.
Actually, if squid cannot resolve a SNI hostname, shouldn't it skip
the Host name check?

Also, this isn't that easy to test: it would appear that once I turned
off intercept and successfully used TOR, it must have cached a bunch of
things because I then re-enabled intercept and it's no longer making any
tcp/443 connections - it goes straight out on other "native" TOR ports.
So it may be this can only be tested on a fresh install (or after some
cache timeout period)



Re: [squid-users] HTTP performance hit with Squid

2015-10-22 Thread Jason Haar
On 23/10/15 07:47, SaRaVanAn wrote:
> There is always a ~2 second delay between the request coming to our
> system and going out of Squid. Suppose if a page has lot of embedded
> URL's it's taking more time with squid in place.Suppose If I disable
> squid the page loads very fast in client browser.
Could that be DNS? Is the server configured to use valid DNS servers?
Check each of them yourself to see what their response times are like, eg

time nslookup some.valid.site.not.in.cache

maybe you'll see 2sec show up on one of them...
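A loop like the following can time each configured resolver in turn; a rough sketch where the resolver IPs are placeholders and the actual dig call is stubbed out so the loop itself is runnable anywhere:

```shell
resolvers="192.0.2.25 192.0.2.10"    # placeholder resolver addresses

query() {
  # real check (assumption about your tooling), e.g.:
  #   dig +time=5 +tries=1 @"$1" www.example.com A > /dev/null
  sleep 0.1                          # stub so the sketch runs offline
}

for r in $resolvers; do
  start=$(date +%s%N)                # GNU date: nanoseconds
  query "$r"
  end=$(date +%s%N)
  echo "$r answered in $(( (end - start) / 1000000 )) ms"
done
```

A resolver that consistently reports ~2000 ms here would explain the page-load delay.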



Re: [squid-users] debug skype ssl_bump numeric ips to be spliced

2015-10-15 Thread Jason Haar
On 15/10/15 14:25, Amos Jeffries wrote:
> All those lines imply is a certificate verify problem inside the SSL
> library.
Would it be possible to put the ip:port in those error messages? Would
certainly help answer those questions...



Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Jason Haar
>>>> Apparently this is related to Apple’s new “App Transport Security” 
>>>>>> protections, in particular, the fact that “the server doesn’t support 
>>>>>> forward secrecy”. Even though it doesn’t seem to be affecting mobile 
>>>>>> Safari on iOS 9 at all.
>>>>>>
>>>>>> It’s also notable that Safari seems perfectly happy with legacy 
>>>>>> server-first SSL bumping. 
>>>>>>
>>>>>> I’m using Squid 3.5.10 and this is my current config: 
>>>>>> https://gist.github.com/djch/9b883580c6ee84f31cd1
>>>>>>
>>>>>> Anyone have any idea what I can try?
>>>>> You can try bump at step3 (roughly equivalent to server-first) instead
>>>>> of step2 (aka client-first).
>>>>>
>>>>>
>>>>> Amos




Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Jason Haar
On 16/10/15 13:08, Dan Charlesworth wrote:
> ORLY
>
> I seem to recall this happening on 10.10 as well, but it could be an El 
> Capitan thing. Do you mind reminding me of your squid config Jason?

With my config I'm trying to "aggressively" figure out if the transaction
is safely going to be bump-able. I'm more willing to throw away (ie
splice) things I'm unsure about than risk a client seeing an error. But
for the websites you see problems with, I see nice clean bump-ing


http_port 3128 ssl-bump cert=/etc/squid/squidCA.cert generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
acl DiscoverSNIHost at_step SslBump1
ssl_bump peek DiscoverSNIHost

#do we have a SNI? If not, it's not TLS
acl SNIpresent ssl::server_name_regex .*

#this file contains https sites that we do not intercept - such as banks
#(because we want the data transfers to remain private)
#and accounts.google.com (because Chrome uses cert pinning for that domain)
#in general you will need to add all sites that involve cert pinning
acl NoSSLIntercept ssl::server_name_regex -i "/etc/squid/acl-NoSSLIntercept.txt"

#this external_acl process will sanity-check HTTPS transactions that
#haven't been spliced yet, to ensure only the correct ones end up being bumped
external_acl_type checkIfHTTPS children-max=20 concurrency=20 negative_ttl=3600 ttl=3600 grace=90 %SRC %DST %PORT %ssl::>sni /usr/local/bin/confirm_https.pl
acl is_ssl external checkIfHTTPS

ssl_bump splice !SNIpresent
ssl_bump splice NoSSLIntercept
ssl_bump bump is_ssl



Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Jason Haar
On 16/10/15 13:34, Dan Charlesworth wrote:
> Thanks!
>
> So ignoring the “bumpable” helper check, it’s effectively peeking at step1 
> and then bumping it like my config’s doing.
>
> I wonder what else could be differentiating it. Is your proxy CA just 
> installed in the Login keychain?

Nope - did it "properly" at the OS level. Get a PEM version of your
squidCA pubkey and as root do

security add-trusted-cert -d -r trustRoot -p ssl -p smime -p IPSec \
  -p eap -p basic /path/squidCA.pem > /dev/null 2>&1 || true
certtool i "/path/squidCA.pem" k=/System/Library/Keychains/X509Anchors \
  > /dev/null 2>&1 || true

The "ipsec/smime" stuff is actually not needed - but I don't care ;-) I
went for the carpet bombing approach for the Mac (which I don't know well)



Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Jason Haar
On 14/10/15 16:08, Dan Charlesworth wrote:
> I thought that fixed it for a second … 
>
> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually splicing 
> everything, it seems.
>
> Any other advice? :-)
Could this simply be a pinning issue? ie does Safari track the CAs used
by those sites - thus causing the problem you see? Certainly matches the
symptoms



Re: [squid-users] Ssl-Bump and revoked server certificates

2015-10-06 Thread Jason Haar
Good catch - I don't think squid does CRL/OCSP checks

I'm using the external_acl_type method to achieve that: it does the
extra work and returns "ERR" for revoked certs - which (for me) causes
squid to fallback on splice mode - so that the client browser can see
the actual fault directly (ie I'm making sure revoked certs are never
bumped)

But this is a bug in squid - this means untrustworthy certs become
trusted again - not a good look



Re: [squid-users] after changed from 3.4.13 to 3.5.8 sslbump doesn't work for the site https://banking.postbank.de/

2015-10-03 Thread Jason Haar
On 03/10/15 19:16, Amos Jeffries wrote:
> Anyhow, there have been long periods (12-18 months IIRC) where they
> were not trusted as a global CA. If your CA certificates set is from one
> of those periods your Squid will not be able to verify trust of the
> origin cert.
Should that show up in the logs somewhere? Put it this way: we have a
situation where "something" is causing a website that works without bump
to not work with it. If squid doesn't "like" something, could it
"auto-splice" - or at the very least log that there's a problem?

I'd like to find out what squid doesn't like about it because I could
probably update my external_acl_type script to detect that situation and
make squid splice the session (BTW my script already verifies the real
cert using the same CAs file that squid uses and it says it's legit - so
I don't think it's actually got anything to do with the CA itself)



Re: [squid-users] after changed from 3.4.13 to 3.5.8 sslbump doesn't work for the site https://banking.postbank.de/

2015-10-02 Thread Jason Haar
Just a reminder people, but you've gone off-topic. The postbank.de
website issue has NOTHING to do with pinning

Someone mentioned earlier it's due to the HTTPS cert not having a
complete cert-chain, and that web browsers auto-correct that situation,
but squid does not. So I would say either squid should:

1. implement the same sort of auto-correction code (say) Firefox does
(which I bet is a lot of work), or
2. flick into splice-mode when there's a cert error (which could be as
much work - I dunno)

I use external_acl_type to call an external script that tries to achieve
that. Basically it manually downloads the homepage to get the cert,
checks if it's valid against the OS CA list and if not, returns ERR so
that squid splices the connection instead of bump-ing it. The entire
connection blocks the first time this occurs of course, but after that
the result is cached and it mostly works.
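For anyone curious, the helper side of that is small: with concurrency enabled, squid writes a channel ID plus the configured format tokens on stdin and expects "<id> OK" or "<id> ERR" back per line. A stripped-down sketch of such a helper - the verification step is stubbed here, and a real one would shell out to something like openssl s_client:

```shell
# Sketch of an external_acl_type helper for: %SRC %DST %PORT %ssl::>sni
verify_cert() {  # args: dst port sni; return 0 if the origin cert verifies
  # real check (assumption about tooling), e.g.:
  #   openssl s_client -connect "$1:$2" -servername "$3" \
  #     -verify_return_error < /dev/null > /dev/null 2>&1
  true           # stub: treat every cert as valid so the sketch runs offline
}

handle_requests() {
  while read -r id src dst port sni; do
    if verify_cert "$dst" "$port" "$sni"; then
      echo "$id OK"      # verified -> safe for the bump rule to match
    else
      echo "$id ERR"     # dubious -> squid falls back to splice
    fi
  done
}

# one synthetic request line, as squid would send it
printf '0 192.168.0.7 203.0.113.5 443 example.com\n' | handle_requests
# -> prints "0 OK"
```

The first-request blocking comes from that network round-trip inside verify_cert; the negative_ttl/ttl caching on the squid side is what hides it afterwards.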




Re: [squid-users] after changed from 3.4.13 to 3.5.8 sslbump doesn't work for the site https://banking.postbank.de/

2015-10-02 Thread Jason Haar
On 02/10/15 21:38, Amos Jeffries wrote:
> I'm not sure but a custom certificate validator helper can probably do
> all this better. An example helper in Perl can be found at
> helpers/ssl/cert_valid.pl
That website worked for me because my external validator had an
exception rule for valid certs containing "bank" (which makes it "ERR" -
causing squid to splice it instead of bump it). To see this problem for
myself I removed that check and indeed bump-ing then failed to work
(squid-3.5.10)

I then pointed ssllabs.com at that site and it got a "B" rating and
there's no obvious signs of a cert error - so I can't figure out what is
going wrong. I've manually downloaded the server cert using "openssl
s_client" and the cert chain validates just fine - so what is squid
doing to it? Weird...



Re: [squid-users] after changed from 3.4.13 to 3.5.8 sslbump doesn't work for the site https://banking.postbank.de/

2015-10-02 Thread Jason Haar
On 02/10/15 23:43, Amos Jeffries wrote:
> I'm suspecting the order of these options screws things up. Or maybe
> just the use of "ALL". sslproxy_options NO_SSLv2:NO_SSLv3:ALL

...but I don't even use sslproxy_options. There have been at least 3
people saying that bump doesn't work with that site - we all won't have
identical configs.

Chrome reports "ERR_CONNECTION_CLOSED" and Firefox "The connection to
banking.postbank.de was interrupted while the page was loading." - that
doesn't even sound cert-related - more TCP related (between client and
squid). Remember: the site works fine when squid is set to splice that site

I have compared the fake cert generated by squid against the real one
and there's obvious differences (using "openssl s_client -connect
banking.postbank.de:443 -servername banking.postbank.de|openssl x509
-noout -text"). References to "Certificate Policies" and "Certificate
Transparency" are present in the real cert and there's no equivalent in
the Fake cert. How that could trigger a TCP reset is beyond me? I've
also cranked up logging and there was nothing overt showing an issue

Real:

 X509v3 Certificate Policies:
     Policy: 2.16.840.1.113733.1.7.23.6
       CPS: https://d.symcb.com/cps
       User Notice:
         Explicit Text: https://d.symcb.com/rpa
 X509v3 Basic Constraints:
     CA:FALSE
 1.3.6.1.4.1.11129.2.4.2:
     [binary SCT (Certificate Transparency) data - omitted]
 Signature Algorithm: sha256WithRSAEncryption


Fake:

X509v3 Basic Constraints:
    CA:FALSE
Signature Algorithm: sha256WithRSAEncryption






Re: [squid-users] Problems with wpad in Squid3

2015-09-10 Thread Jason Haar
Too many unknowns here to guess, so if I were you I'd start with
rebooting the client, logging in and starting a sniffer (like wireshark)
- just looking at port 53 and port 80

Then start your browser (that is set to automatic network/proxy) and see
what happens. What should happen is that it looks for wpad.<your DNS
domain>, and if it's Windows it should also look for wpad.<each parent
domain>.

If either exists, it will then try to download /wpad.dat via HTTP and act
on the content

We use WPAD - it works great. I'd suggest ditching the DHCP option -
that only ever worked for MSIE - stick to WPAD via DNS which works for
all browsers

Jason

PS: also note WPAD is about browsers - so don't expect miracles for
non-browser applications. Some apps can use it - but most can't

On 10/09/15 08:39, Marcio Demetrio Bacci wrote:
> Hi,
>
> I'm having the following problem with my squid3:
>
> When I set the browser: "Auto-Detect proxy settings for this network"
> does not work.
>
> When we report: "Manual proxy configuration" works.
>
> Follow my configuration files:
>
> */var/www/wpad.dat*
> function FindProxyForURL(url, host) {
> if (shExpMatch(url,"*.empresa.com/*"))
> {
> return "DIRECT";
> }
> if (isInNet(host, "192.168.0.0","255.255.252.0"))
> {
> return "DIRECT";
> }
> return "PROXY 192.168.0.69:3128";
> }
>
>
> */etc/dhcp/dhcpd.conf*
> ddns-update-style none;
> default-lease-time 600;
> max-lease-time 7200;
> authoritative;
> option wpad-url code 252 = text;
> ddns-domainname "cmb.empresa.com.";
> option domain-name "cmb.empresa.com.";
>
>  
> subnet 192.168.0.0 netmask 255.255.252.0 {
>   range 192.168.1.1 19.168.3.253;
>   option routers 192.168.0.1;
>   option domain-name-servers 192.168.0.25,192.168.0.10;
>   option broadcast-address 192.168.3.255;
>   option wpad-url "http://192.168.0.69/wpad.dat\n";
>
> }
>
>
> */etc/bind/db.empresa.com*
> ;
> $TTL 600
> @  IN  SOA  dns1.cmb.emprea.com. root.cmb.empresa.com. (
>   2015083001; Serial
>  300; Refresh
>  300; Retry
> 600; Expire
>  900 ); Negative Cache TTL
> ;
> @  IN  NS  dns1.cmb.emprea.com.
> @  IN  MX  10  webmail.cmb.emprea.com.
> ...
> proxy  IN  A      192.168.0.69
> wpad   IN  CNAME  proxy
>
>
> Is there any tool to test my proxy ?
>
> Do I need to set any library in apache2 ?
>
> Regards,
>
> Márcio Bacci
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1



Re: [squid-users] 3.5.8 — SSL Bump questions

2015-09-09 Thread Jason Haar
On 08/09/15 20:32, Amos Jeffries wrote:
> The second one is a fake CONNECT generated internally by Squid using
Is it too late to propose that intercepted SSL transactions be logged as
something besides "CONNECT"? I know I find it confusing - and so do
others. I appreciate the logic behind it - but people are people :-)

How about  (for intercepted SSL)

PEEKED 1.2.3.4:443
GET https://github.com/image.txt

vs

PEEKED 5.6.7.8:443
SPLICED google.com:443

This way we could have a squid server that does transparent SSL plus
formal proxy (on different ports of course) and CONNECT/PEEKED/SPLICED
would enable the admin to tell the difference between a formal proxy
session and an intercepted one. ie the same transactions via formal
proxy would be

CONNECT github.com:443
GET https://github.com/image.txt

vs

CONNECT google.com:443
SPLICED google.com:443

I guess with my logging format, log parsers would skip all
PEEKED/CONNECT lines as redundant (although they're useful for us humans)

Yeah, it would break existing logging tools - but so does the "GET
https://..." stuff anyway - so they need updating too ;-)
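To illustrate how little a parser would need to change under the proposed
scheme (these method names are the hypothetical ones suggested above, not real
squid output):

```python
def classify(line):
    """Classify an access-log line under the proposed (hypothetical)
    CONNECT/PEEKED/SPLICED method names described above."""
    method = line.split()[0]
    return {
        "CONNECT": "explicit proxy tunnel",
        "PEEKED": "intercepted TLS session",
        "SPLICED": "tunnelled without decryption",
    }.get(method, "ordinary request")

print(classify("PEEKED 1.2.3.4:443"))                # intercepted TLS session
print(classify("GET https://github.com/image.txt"))  # ordinary request
```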



Re: [squid-users] Dropbox and GoogleDrive apps won't connect with SSLBump enabled

2015-08-31 Thread Jason Haar
On 01/09/15 02:59, Shane King wrote:
> Accessing via the browser may work but the sync clients that sit in
> the system tray use certificate pinning I believe. So if certificate
> pinning is being used, ssl bumping will not work. You will see an
> alert message in the pcap followed by a connection termination.

This stopped working for me last week - I suspect there was an update or
something

Really frustrating: one of the primary reasons I want to do TLS
intercept is to AV all the viruses published on dropbox!!!

If the Cloud providers go full pinning, the future of TLS Intercept is bleak




Re: [squid-users] can't get bump to work anymore on 3.5.7?

2015-08-22 Thread Jason Haar
On 22/08/15 13:38, HackXBack wrote:
> can you share your perl file
> /usr/local/bin/confirm_https.pl
> Thanks ..

It's not really useful... I hacked it together in order to be able to
differentiate a lot of the ways that I discovered bumping could fail
(client certs, lack of SNI, non-SSL). It's awful: bunches of calls to
openssl and curl - all sorts of cruft.

Very useful for me to learn about how all this works - but not designed
for production. Probably full of security risks too (eg I don't bother
sanitizing the hostnames - which are dropped into shell calls - not a
good look)
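For anyone wanting to build something similar, the interface such a helper
speaks is simple line-oriented I/O. A minimal Python skeleton (this is not
Jason's script - the field order and the decision rule are illustrative
assumptions; a real helper would actually probe the destination with
openssl/curl, with the hostname properly sanitized rather than dropped into
shell calls):

```python
import sys

def looks_like_https(src, dst, port, sni):
    # Placeholder decision: a real helper would probe dst:port here.
    # Never interpolate these fields into a shell command string.
    return port == "443" and sni != "-"

def handle_line(line):
    # squid concurrent helper protocol: "<channel-id> <args...>" in,
    # "<channel-id> OK|ERR" out
    parts = line.strip().split()
    if len(parts) < 5:
        return (parts[0] if parts else "0") + " ERR"
    chan, src, dst, port, sni = parts[:5]
    return f"{chan} {'OK' if looks_like_https(src, dst, port, sni) else 'ERR'}"

def main():
    # Line-at-a-time loop; flush=True is the Python analogue of perl's $|=1
    for line in sys.stdin:
        print(handle_line(line), flush=True)

# main()  # enable when running under squid's external_acl_type
```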

 



Re: [squid-users] can't get bump to work anymore on 3.5.7?

2015-08-20 Thread Jason Haar
On 20/08/15 12:42, Jason Haar wrote:
> So now I can:
>
> 1.  ###dynamically whitelist/splice non-SNI traffic via its existence
> (commented because it didn't work - ended up splicing everything)

Figured that one out: ".*" is a file - .* is a regex :-)



Re: [squid-users] can't get bump to work anymore on 3.5.7?

2015-08-19 Thread Jason Haar
On 20/08/15 03:36, Alex Rousskov wrote:
> SNI is obtained during step #1. Peeking during step #1 does _not_
> preclude future bumping.

thanks for persisting with me Alex - I got there in the end! :-)

That looks a lot better, my config is now

root# egrep -i 'crtd|bump|ssl:|checkIfHTTPS' squid.conf ssl-bump.inc | grep -v '#'
squid.conf:http_port 3128 ssl-bump cert=/etc/squid/squidCA.cert 
generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
squid.conf:https_port 3129 intercept ssl-bump
cert=/etc/squid/squidCA.cert  generate-host-certificates=on
dynamic_cert_mem_cache_size=256MB options=ALL
squid.conf:include /etc/squid/ssl-bump.inc
squid.conf:logformat logdetails %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm
%ru %[un %Sh/%<a %mt %ssl::sni %ssl::cert_subject
ssl-bump.inc:sslcrtd_program /usr/lib64/squid/ssl_crtd -s
/var/lib/squid/ssl_db -M 256MB
ssl-bump.inc:sslcrtd_children 32 startup=15 idle=5
ssl-bump.inc:acl DiscoverSNIHost at_step SslBump1
ssl-bump.inc:ssl_bump peek DiscoverSNIHost
ssl-bump.inc:acl NoSNIpresent ssl::server_name_regex .*
ssl-bump.inc:acl NoSSLIntercept ssl::server_name_regex -i
/etc/squid/acl-NoSSLIntercept.txt
ssl-bump.inc:external_acl_type checkIfHTTPS children-max=20
concurrency=20 negative_ttl=3600 ttl=3600 grace=90  %SRC %DST %PORT
%ssl::sni /usr/local/bin/confirm_https.pl
ssl-bump.inc:acl is_ssl external checkIfHTTPS
ssl-bump.inc:ssl_bump splice !NoSNIpresent
ssl-bump.inc:ssl_bump splice NoSSLIntercept
ssl-bump.inc:ssl_bump bump is_ssl

So now I can:

1.  ###dynamically whitelist/splice non-SNI traffic via its existence
(commented because it didn't work - ended up splicing everything)
2.  statically whitelist/splice cert pinning apps via acl NoSSLIntercept
3.  dynamically whitelist/splice some classes of websites (eg banks) by
external process checkIfHTTPS
4.  bump the rest

Can't get that ### one to work. How do I create an acl that will match
when there's any SNI - so that I can splice anything that hasn't got it?

The only remaining question I have is about SSL session resumption. If a
*bumped* session uses resumption - that's purely a squid issue  - so I
suspect that would always work? (including intercept mode?). And if it's
a spliced session, then all squid can do is allow it anyway (because in
my config, I want to splice anything that hasn't got SNI) - so that
would also work?


> Please note that doing so will give you no knowledge about the SSL
> server point of view. All your decisions will be based on what the
> client has told you. This is often not a problem because, in most cases,
> if the client lied, the [bumped or spliced] connection to the SSL server
> will not work anyway. However, if the client supplied no SNI
> information, then your bank ACL (or equivalent) may not have enough
> information to go on, especially for intercepted connections.

My only desire for doing TLS intercept is to introduce content filtering
(ie AV). So I am quite happy to throw away (ie splice) old SSL plus
non-HTTPS sessions - as the primary target I'm after is people in web
browsers downloading viruses from https://dropbox.com, etc (which aren't
old SSL: a hacker who deliberately brings up a SSLv2 system in order to
subvert my assumption is welcome to - try finding a web browser that
will talk to it :-). People who bash their way through multiple layers
of browser warning popups/etc in order to get infected are out of scope ;-)


Thanks again for your help Alex. Hopefully this conversation will be
useful for others. TLS intercept is a bit of a step up in complexity
over standard TCP ;-)



Re: [squid-users] can't get bump to work anymore on 3.5.7?

2015-08-19 Thread Jason Haar
On 19/08/15 16:07, Alex Rousskov wrote:
> Your interpretation is correct: Your configuration tells Squid to peek
> at steps #1 and #2 and then try to bump at step #3. Unfortunately, the
> last two actions (peeking at step #2 and then bumping at step #3) are
> usually not compatible. Please see the Limitations section at
> http://wiki.squid-cache.org/Features/SslPeekAndSplice


Ah! I used to use external_acl_type to run a script that would check
the SSL status of the host:port and that would allow squid to decide
whether to bump or splice. I'd turned it off for whatever reason - I
guess that's why it was working before. (in all of this I am speaking in
the context of transparent TLS - I realize for the formal proxy scenario
you typically have the SNI name/hostname via the CONNECT method). Sure
enough, once I deleted the peeks, it started bumping

So is there no way to get the SNI field from the client without breaking
the opportunity for bump? It's just that my testing has already shown
everyone using CloudFlare for HTTPS is now protected by their WAF
technology - which rejects SSL sessions that don't contain SNI. So if
you are wanting to (transparently) bump HTTPS, you can't use peek - but
you need peek in order to discover the SNI hostname, because if you
don't have that then  acls using ssl::server_name_regex and/or
external_acl_type will basically get rejected talking to vast numbers
of https servers in the world. This is a bit of a catch-22

My packet sniffer implies the SNI details are in the first TLS packet
sent from the client (ie pre-encryption). So couldn't squid just make a
note of that detail? Sort of a "pre-peek" I guess? I read about how
there are so many SSL extensions/etc that squid will always be running
afoul of issues if it tries to be too smart, but can't we look at this as

1. client-server (3-way TCP handshake)
2. client sends first TLS packet (contains extensions data - including
SNI). Gets to squid server
3. squid can extract that data, and make decisions immediately just on
that one packet. It can compare with acls and decide to bump or splice
4. if bump, squid forms the client-squid TLS channel and a separate
squid-server TLS channel
5. if splice, squid now forms a TCP channel to the server, then forwards
that first TLS packet, then joins the two ends
6. waits for next client packet
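The "pre-peek" extraction in step 3 is mechanically straightforward, because
the SNI sits unencrypted in the ClientHello. A stripped-down Python sketch that
builds a synthetic ClientHello and pulls the server_name extension back out
(real traffic has far more record and extension variants than this handles):

```python
import struct

def sni_extension(hostname):
    # server_name extension (type 0): list of (type=host_name, length, name)
    name = hostname.encode()
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    name_list = struct.pack("!H", len(entry)) + entry
    return struct.pack("!HH", 0, len(name_list)) + name_list

def build_client_hello(hostname):
    # Minimal synthetic ClientHello: one cipher suite, null compression, SNI
    exts = sni_extension(hostname)
    body = (b"\x03\x03" + b"\x00" * 32                  # version + random
            + b"\x00"                                   # empty session id
            + struct.pack("!H", 2) + b"\x13\x01"        # cipher suites
            + b"\x01\x00"                               # compression methods
            + struct.pack("!H", len(exts)) + exts)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record):
    # Walk the first TLS record; return the SNI hostname or None
    if len(record) < 6 or record[0] != 0x16 or record[5] != 0x01:
        return None                                     # not a ClientHello
    p = 9 + 2 + 32                                      # headers, version, random
    p += 1 + record[p]                                  # session id
    p += 2 + int.from_bytes(record[p:p+2], "big")       # cipher suites
    p += 1 + record[p]                                  # compression methods
    end = p + 2 + int.from_bytes(record[p:p+2], "big")  # extensions block
    p += 2
    while p + 4 <= end:
        etype = int.from_bytes(record[p:p+2], "big")
        elen = int.from_bytes(record[p+2:p+4], "big")
        if etype == 0:                                  # server_name
            nlen = int.from_bytes(record[p+7:p+9], "big")
            return record[p+9:p+9+nlen].decode()
        p += 4 + elen
    return None

print(extract_sni(build_client_hello("www.kiwibank.co.nz")))  # www.kiwibank.co.nz
```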


If all HTTPS transactions contained the hostname (because of CONNECT or
SNI), then squid could be set to default to bump, but to splice known
un-bumpable sites  - due to pinning or because they are not actually
HTTPS sites (eg Skype). It just seems like it's currently limited to
default splice, with bumping explicit things? (which I can't believe is
useful)



Re: [squid-users] SSL-bump and Public Key Piinning (HPKP)

2015-07-05 Thread Jason Haar

On 6/07/15 2:01 am, Walter H. wrote:

> reply_header_access Public-Key-Pins deny all
>
> but this doesn't really work; is there another way?

If you think you can override all pinning options, then I'm afraid
you're mistaken. Well written security apps should do their darndest to 
stop TLS intercept from working: eg hardwiring the CA cert into the 
application itself and barfing if it ever starts a HTTPS connection that 
isn't signed by their one CA


You have to accept that and configure for it: simply create a 
noSSLintercept acl and in there place the ones that can't be fiddled 
with. I'm still only testing TLS intercept myself, but so far I've only 
whitelisted the following


.preyproject.com
accounts.google.com
.push.hello.firefox.com

BTW, even though Chrome/Firefox support key pinning, as a general rule 
they actually support TLS intercept as well - in that if they detect the 
CA involved in a cert-chain is trusted by the *user* and is not a 
commercial CA, then they assume TLS Intercept must be involved and 
allow it to work (at least that's how it seems to work to me). Not a bad 
idea as it allows companies to do TLS intercept, but still guards 
against governments forcing commercial CAs to create fake server certs 
(let's be honest - all of this is about stopping government snooping - 
not about normal criminal behavior)


Jason


Re: [squid-users] Force LDAP groups to de-authenticate?

2015-07-03 Thread Jason Haar
On 04/07/15 06:08, Dan Purgert wrote:
> I need to kick the users and force a re-
> auth, as this is for a school environment.

You can't really do that with proxy authentication methods. Once a
browser has successfully authenticated, it remembers that - so even if
you flush the server cache, all that happens is the browser sends the
cached credentials it has and the server revalidates: the user doesn't
even know it's happened

The only way I can think of that will serve your purposes is to move to
a portal solution instead. ie don't use proxy authentication - instead
block Internet and redirect port 80 requests to a captive portal, force
people to login there, then that action whitelists their Internet access
for the next 'n' minutes, after that time expires, they are pushed back
to the portal page again

...but that will require a different product - something like pfsense
comes to mind



Re: [squid-users] Questions Regarding Transparent Proxy, HTTPS, and ssl_bump

2015-06-24 Thread Jason Haar
On 25/06/15 06:05, James Lay wrote:
> openssl s_client -connect x.x.x.x:443
Just a FYI but you can make openssl do SNI which helps debugging (ie
doing it your way and then doing it with SNI)

openssl s_client -connect x.x.x.x:443 -servername www.site.name

(that will allow squid to see www.site.name as the SNI)



Re: [squid-users] confused about ICAP and who's downloading what

2015-06-22 Thread Jason Haar
On 21/06/15 10:45, Antony Stone wrote:
> The former - squid does the download and passes the content to ICAP.

Great. So squid does all the network calls and ICAP simply gets to
review the content (request and/or response) and potentially change it.
Perfect :-)

Thanks!




[squid-users] confused about ICAP and who's downloading what

2015-06-20 Thread Jason Haar
Hi there

I'm starting to use ICAP as an AV content filter, having moved away from
using the havp antivirus proxy as a parent proxy.

Part of the problem with havp was that it stopped being developed years
ago and HTTP trickery had moved on in ways that it basically couldn't
support - but squid - being the wonderful piece of loved software it is
- was keeping up with the times :-)

Anyway, now that I'm trialing ICAP, I'm concerned about the same issue.
When a web page is requested by a client, what component does what? Does
squid do the download, pass the content to ICAP, or does it (like with
parent proxies), just tell the ICAP software to do the download itself?
You can see where I'm going: the latter would mean "odd" HTTP
applications which work fine through squid might fail if the ICAP
software does things differently.

(btw: "odd" can mean many things: even how dns lookups occur, ipv6
support, etc)

Thanks



Re: [squid-users] problem with some ssl services

2015-06-17 Thread Jason Haar
On 15/06/15 11:58, Amos Jeffries wrote:
> Ensure that you are using the very latest Squid version to avoid
> problems with unsupported TLS mechanisms. The latest Squid will also
> automatically splice if its determined that the TLS connection cannot be
> bumped.
Is that supposed to be in 3.5.5? I just noticed a problem with bumping
that came down to the web server requiring client cert validation, and
squid-3.5.5 failed to splice - so it failed going through bump (as
you'd expect).

I guess I'm asking if this new SSL determination includes detecting
client certs, because that would be a good one to detect if possible?

Now that I think of it, that might be a mug's game. The site I'm
referring to had "SSLVerifyClient optional" on a subdirectory - so it's
probably quite unfair to expect a TLS intercept to magically know which
encrypted urls it can fiddle with and which it can't ;-) Hmmm, OTOH
maybe if squid decides a server is asking for even optional client
certs, it could declare the entire SNI to be splice instead of bump -
frankly I'd live with that (ie it might start out bumping, but then
flick to splice on the first bit of evidence that some part needed
client certs - even optional)



Re: [squid-users] Fw: 3.5.5 Win x64 SquidTray crash

2015-06-07 Thread Jason Haar
On 08/06/15 15:25, TarotApprentice wrote:
> Reinstalled 3.5.1 and it too had the same problem.
>
> Also on Server2008 it has what Microsoft call the advanced firewall which
> seems to block inbound to the machine so I had to adjust the firewall rules
> even though the installer had added a rule.

Yeah - windows firewall is a major pain. Better to turn the darn thing
off and rely on something else




Re: [squid-users] Ssl-bump deep dive (properly creating certs)

2015-05-24 Thread Jason Haar
On 25/05/15 04:25, James Lay wrote:
> My first question is about properly creating the certs.  Looking at:
>
> http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
>
> this mentions using crtd, but as I understand it, crtd isn't supported
> when using transparent proxies.  So, with no crtd, as I understand it
> this is what I'll need:


I don't know where you got that from, but that's not true. I think you
are confusing the issue that when squid is used as a transparent HTTPS
proxy, it lacks the easy hostname details that a formal (ie
non-transparent) proxy has. ie when a browser asks for a secure website
via a formal proxy, it sends

CONNECT github.com:443 HTTP/1.1

So squid knows *in advance* the server is called github.com. So it
connects to github.com, downloads the public key and then uses crtd to
create a clone of it - identical except that it's signed by your
self-created Squid CA instead of Verisign/whatever

Compare that with transparent proxy mode, where all that squid knows is
that a browser has had its outbound tcp port 443 traffic to
192.30.252.128 redirected onto it, so it doesn't know that that address
is github.com. If you are using squid-3.4 or less, that's all there is to
it - there's no way to figure out the cert name in a guaranteed fashion
(there are hacks, but my own experience is that they can only work up to
95% of the time - and break for some of the largest sites). With
squid-3.5 there is peek - which means squid can let the initial few
packets through (ie act like splice) - which is enough to see the
client send the SNI request to the https server and get the reply. So
peek allows squid to learn about the true server name of the https
server. At that point *I think* squid creates a forged cert, then
creates a new connection to the server, then links together the existing
client tcp channel with the new proxy-server tcp channel and carries on
intercepting (I think that's the outcome - there would have to be some
extra smoke-n-mirrors in there to make that happen)

In pseudo-code, it looks like this

if http_port and CONNECT (.*) HTTP then sni_name=$1
else if https_port and peek then sni_name=find_sni($ipaddress)
else if https_port then sni_name=$ipaddress


When all is said and done, transparent HTTPS intercept is the very last
thing you should be working on. You need to gets squid working 100% as a
formal proxy - and only then start looking at making that work in
transparent mode. And you *definitely* want ssl_crtd.




Re: [squid-users] 3.5.4 need more help with peek and splice and external helper

2015-05-06 Thread Jason Haar
On 07/05/15 10:58, Stanford Prescott wrote:
> When I start Squid with this configuration, the helper script
> bumphelper gets loaded as a process along with squid and ssl_crtd.
> When I try to browse any SSL websites there is no connection and it
> times out.

The problem is that you're calling perl with the default I/O buffering
left *enabled*. You need to add $|=1; near the top so that perl will
flush I/O immediately - that should stop the hanging

Good use of words in your acl names - I think that really helps in
understanding just what is going on with the smoke-n-mirrors that is SSL
intercept :-)



Re: [squid-users] 3.5.4 Can't access Google or Yahoo SSL pages

2015-05-04 Thread Jason Haar
On 04/05/15 20:53, Chris Palmer wrote:
> There has been a change in behaviour in 3.5.4. It now really does
> prefer to contact a site using an ipv6 address rather than a v4. The
> network stack here doesn't permit v6 so the traffic to sites such as
> google was failing. Setting the following restored the previous
> behaviour:
>
> dns_v4_first on

As far as I'm aware squid won't try to use ipv6 unless your server has a
Global address, so that shouldn't be needed? Also, wouldn't squid simply
treat that as a DNS name that resolves to a bunch of addresses, so as
long as the IPv6 addresses fail to connect at all, it should have still
ended up succeeding with ipv4 addresses?

Finally, I'm running squid-3.5.4, don't have ipv6 (just like everyone
else, I still do have the standard fe80:xxx ipv6 link local address) and
google.com works just fine without dns_v4_first - which implies my
statements above are correct

ie this smells like you actually do have ipv6 enabled, but it's broken
in some subtle way (like the pmtu issue Amos mentioned)
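The try-each-address behaviour described above - which telnet exhibits and
squid is expected to follow - is just a sequential connect loop with fallback
(roughly what Happy-Eyeballs-style clients do, minus the parallelism). A
minimal sketch:

```python
import socket

def connect_first_working(addrs, timeout=2.0):
    """Try each (host, port) candidate in order and return the first
    socket that connects - the telnet-style fallback discussed above."""
    last_err = None
    for host, port in addrs:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_err = err          # remember the failure, try the next one
    raise last_err or OSError("no addresses supplied")
```

In practice the candidate list comes from socket.getaddrinfo(), which is also
where OS-level preferences put v6 ahead of v4 - so an unreachable v6 address
should cost only a failed attempt, not a hard error, as long as the client
keeps walking the list.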



Re: [squid-users] squid tcp_outgoing_address feature not working

2015-04-15 Thread Jason Haar
On 15/04/15 22:58, Amos Jeffries wrote:
> Squid has zero control over what TCP connections the *client* opens.
> You need to use tcpdump on the Squid machine, or machine(s) at the
> other end of the WAN1/2 connections to see what the Squid-origin
> traffic uses.

Amos is so right. Stop fiddling around with tools like traceroute whose
behaviour *might* mimic that which squid is doing and instead use
tcpdump to actually *see* what squid is doing. Anyone running network
services has got to become proficient in the use of network sniffers -
they are invaluable



Re: [squid-users] reverse-proxy with client certificates pass-thru

2015-02-16 Thread Jason Haar
On 17/02/15 11:34, Amos Jeffries wrote:
> There is splice mode in 3.5. Which is to say dont bump that traffic.

If you have a reverse-proxy between a client and backend server and the
backend server insists on seeing the client cert, then I think at best
squid is simply a tcp forwarder (ie splice mode). It could be easier to
put a xinetd-based forwarder in place or even to publish the backend
onto the Internet directly. Basically squid can add nothing

We're going through the same process with Microsoft's SCCM server. The
agents use client certs, but we're hoping we can disable the requirement
for client certs on the backend and get the DMZ security portal to do
that check itself (as we trust patching our security portal more than
patching Microsoft apps). However, that probably won't work and then we
too will be basically doing a tcp forward...

In all fairness, any HTTPS web server that is kept patched, and which
requires validating client certs before even getting to the home page is
an extremely hard target to hack. Irrespective of the security quality
of the web application itself, if the bad guys can't actually interact
with the web app (because they have no client cert), then their options
are extremely limited



Re: [squid-users] Alert unknown CA

2015-02-03 Thread Jason Haar
On 04/02/15 18:47, Daniel Greenwald wrote:
> And happens to be one that squid desperately needs to remain in order
> to continue ssl bumping..
...and is one that diminishes in value as cert pinning becomes more
popular...

It's a tough life: on the one hand we want to do TLS intercept in order
to do content filtering of HTTPS (because the bad guys are deliberately
putting more and more malware onto HTTPS websites), and yet on the other
hand we all want some things to be private.

Bring back RFC3514, then all of this would be easy!!!



Re: [squid-users] HTTPS intercept, simple configuration to avoid bank bumping

2015-01-27 Thread Jason Haar
On 27/01/15 11:13, Dan Charlesworth wrote:
> Wasn't somebody saying that you'd need write an External ACL to
> evaluate the SNI host because dstdomain isn't hooked into that code
> (yet? ever?)?

That can't be the case. If the external ACL is called without the SNI,
then at best all it can do is connect to an IP address and scrape the
server response. But some SSL servers (especially WAFs) are configured
to DROP connections if they don't see a client SNI (I've seen this with
CDN networks in my own experiments with external ACLs). Only squid has
access to the SNI - it has to be done in squid code.

Jason




Re: [squid-users] HTTPS intercept, simple configuration to avoid bank bumping

2015-01-27 Thread Jason Haar
I might have found something

Turning up debugging shows that squid is learning the SNI value from an
intercepted/transparent HTTPS session (or is it learnt from the server
response?)

2015/01/28 09:23:34.328 kid1| bio.cc(835) parseV3Hello: Found server
name: www.kiwibank.co.nz

Looking that up in the source code, it's from bio.cc. However the same
file implies I should also be seeing the SNI debug line:

#if defined(TLSEXT_NAMETYPE_host_name)
    if (const char *server = SSL_get_servername(ssl,
                                                TLSEXT_NAMETYPE_host_name))
        serverName = server;
    debugs(83, 7, "SNI server name: " << serverName);
#endif


On my test Ubuntu 14.04 laptop with squid-3.5.1 and openssl-1.0.1f,
TLSEXT_NAMETYPE_host_name is defined in /usr/include/openssl/tls1.h, so
that should cause that debug line to be called - but it isn't?

I also confirmed with wireshark that my Firefox browser was generating a
SNI (although it took me a few minutes to realise I have to sniff port
3129 [which I redirected 443 onto] as well as 443 to get the full tcp
session)



Re: [squid-users] HTTPS intercept, simple configuration to avoid bank bumping

2015-01-26 Thread Jason Haar

Well the documentation says

#   SslBump1: After getting TCP-level and HTTP CONNECT info.
#   SslBump2: After getting SSL Client Hello info.
#   SslBump3: After getting SSL Server Hello info.


So that means SslBump1 only works for direct proxy (ie CONNECT)
sessions, it's SslBump2 that peeks into the traffic to discover the
client SNI hostname. So I think you actually need (I'll use more
descriptive acl names and comment out those that I think don't add any
value)

acl domains_nobump dstdomain /etc/squid/domains_nobump.acl
#no added value: acl DiscoverCONNECTHost at_step SslBump1
acl DiscoverSNIHost at_step SslBump2
#don't use - breaks bump: acl DiscoverServerHost at_step SslBump3
#no added value - in fact forces peek for some reason: ssl_bump peek
DiscoverCONNECTHost all
ssl_bump peek DiscoverSNIHost all

ssl_bump splice domains_nobump
#DiscoverSNIHost should now mean Squid knows about all the SNI details
ssl_bump bump all

Sadly, this doesn't work for me *in transparent mode*. Works fine when
using squid as a formal proxy, but when used via https_port intercept,
we end up with IP address certs instead of SNI certs.

We really need someone who knows more to tell us how to make this work :-(




[squid-users] is chunked support from clients fully supported?

2015-01-23 Thread Jason Haar
Hi there

The squid.conf.documented file in squid-3.4.10 states (for
chunked_request_body_max_size) that "Squid does not have full support
for that feature yet."

Is that still the case? We have some people running some client software
that requires chunked support and we want to be sure the newer squid
(we're still on 3.1) supports chunked before getting back to them (and
yes we have already asked them how to test it and they don't know: sigh
- users!!!)
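
For anyone wondering what "chunked support" actually involves on the wire: a chunked body arrives as length-prefixed pieces with no Content-Length, and (as I read squid.conf.documented) squid copes by buffering and dechunking bodies up to chunked_request_body_max_size. A toy round-trip of the wire format - illustrative only, nothing to do with squid's actual code:

```python
def chunk_encode(body: bytes, chunk_size: int = 8) -> bytes:
    """Encode a body using HTTP/1.1 chunked transfer coding."""
    out = b""
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    return out + b"0\r\n\r\n"   # terminating zero-length chunk

def chunk_decode(data: bytes) -> bytes:
    """Decode chunked coding back to the original body (ignores trailers)."""
    body, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)   # hex chunk-size line
        if size == 0:
            return body
        body += data[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2        # skip chunk data plus trailing CRLF
```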

Thanks!



Re: [squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Jason Haar
On 21/01/15 22:21, Steve Hill wrote:
 Probably not very helpful, but it works for me (squid-3.4.10,
 Scientific Linux 6.6, bump-server-first, but not using ssl_crtd).  I
 also can't see anything wrong with the certificate chain.
Found the problem. It's only occurring via transparent https - not
explicit proxy-bumping of https

The index.txt file shows two entries for the same site.

V   17114624Z   7D33A943166DE91162FDEEA69C6E6D7E62054DC3   unknown   /serialNumber=TDtNUZuQo4Ts9hs8qd1ksekvefvr7hdo/OU=GT11048499/OU=See www.rapidssl.com/resources/cps (c)14/OU=Domain Control Validated - RapidSSL(R)/CN=*.snap.net.nz+Sign=signTrusted

V   17114624Z   231617D3B75F4C18A238EF42EAFC568BF27A3485   unknown   /serialNumber=TDtNUZuQo4Ts9hs8qd1ksekvefvr7hdo/OU=GT11048499/OU=See www.rapidssl.com/resources/cps (c)14/OU=Domain Control Validated - RapidSSL(R)/CN=*.snap.net.nz+Sign=signUntrusted


The "signUntrusted" entry would have come from starting the cert-learning
process with only an IP address, whereas the first would have been
generated knowing in advance it was myaccount.snap.net.nz.

So what's the root cause? Is this an example of why the peek/splice
feature of 3.5 is so important to the success of transparent HTTPS
bumping? (ie is it because there was no SNI hostname?)




[squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Jason Haar
Hi there

I'm running squid-3.4.10 on CentOS-6 and just got hit with ssl-bump
blocking/warning access to a website which I can't figure out why

It's https://myaccount.snap.net.nz/. Signed by a couple of layers of
intermediary certs, but it seems fine (works direct with FF/Chrome/MSIE).
curl on the squid server has no trouble accessing it (using the default
/etc/pki/tls/certs/ca-bundle.crt), but ssl_crtd creates a fake cert for
it as follows.

Any ideas what's up?

Thanks!


Signature Algorithm: sha1WithRSAEncryption
Issuer: C=NZ, ST=, CN=Not trusted by Squid CA
Validity
Not Before: Sep 22 08:36:12 2014 GMT
Not After : Nov 22 22:46:24 2017 GMT
Subject: serialNumber=TDtNUZuQo4Ts9hs8qd1ksekvefvr7hdo,
OU=GT11048499, OU=See www.rapidssl.com/resources/cps (c)14, OU=Domain
Control Validated - RapidSSL(R), CN=*.snap.net.nz
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)




Re: [squid-users] proxy pac files issues

2015-01-17 Thread Jason Haar
I just tested your WPAD script using the wonderful pactester and it
seems fine - it returned DIRECT/PROXY exactly as you intended, ie
there's nothing wrong with that WPAD file.

You say the clients seem to be going through the proxy even for internal
hosts? That smells like WPAD being fundamentally broken - which doesn't
correlate with the above test result. So look again with a packet
sniffer: bring up wireshark on a client, start the browser, go to an
internal site, stop the sniffer and review the download of the WPAD
file. I assume you are relying on DNS to point clients at the WPAD, but
could you be a Windows shop that has forgotten it also publishes WPAD
via DHCP, pointing at a different/old WPAD file (ie one without the
exceptions)?

Also test with Firefox: it has the purest WPAD support IMHO. If it
works in Firefox and not in MSIE/Chrome, then it's not a WPAD problem.
(I'm less sure about Chrome: Google designed it to use the same OS
settings as MSIE where it can, so any bug in those libraries could
affect Chrome too.)
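
For readers without the original attachment, a minimal PAC file of the shape under discussion looks like this (the domain names, subnet and proxy host are made up, not the poster's), and pactester can exercise it from the command line, eg "pactester -p wpad.pac -u http://intranet.example.local/":

```javascript
// Illustrative WPAD/PAC sketch -- all names here are hypothetical.
function FindProxyForURL(url, host) {
    // Unqualified hostnames, the internal domain and internal subnets go DIRECT
    if (isPlainHostName(host) ||
        dnsDomainIs(host, ".example.local") ||
        isInNet(host, "10.0.0.0", "255.0.0.0"))
        return "DIRECT";
    // Everything else through the proxy, with a DIRECT fallback
    return "PROXY proxy.example.local:3128; DIRECT";
}
```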




Re: [squid-users] Squid 3 SSL bump: Google drive application could not connect

2015-01-07 Thread Jason Haar
On 08/01/15 18:41, Chris Bennett wrote:
 Interesting thread so far.  Has anyone thought of using Bro-IDS as a
 feedback loop for some of this advanced logic for bypassing bumping?

The external acl method mentioned earlier probably out-does a NIDS
feedback loop. In my testing it causes squid to block the new connection
until the helper returns, which means your external acl script can
simply attempt an SSL transaction against the end-server and figure out
in realtime whether it's SSL or not. And then cache it, blah blah blah.

The advantage is that it will do a lookup on new HTTPS sessions and
potentially have the answer immediately (ie it can bump on first
attempt), whereas a NIDS would only find out the answer after squid has
defaulted to passthrough/splice mode, so it would only work properly on
future connections to that site.

 I like the active external acl solution since it meets a need, but
 there is overhead.  I'm not quite sure what Bro logs for non-HTTPS
 443 traffic, but I thought I'd chime in with the above idea if anyone
 wants to expand on it further :)

If you think the external acl method is too expensive to run, how do you
expect to feed this NIDS data back into squid? I think you'd find you'd
need an external acl check to do that bit anyway :-)



Re: [squid-users] Squid 3 SSL bump: Google drive application could not connect

2015-01-06 Thread Jason Haar
On 06/01/15 05:28, Eliezer Croitoru wrote:
 In 3.5 there will be present a new feature which called peek and
 splice that can give an interface to squid and the admin which will
 allow the admin to know couple things about the connection from squid
 and specifically first the client TLS request.
Is there an example document showing just how to do this? Looking at the
current docs, I can't quite figure out how to layer them all together to
achieve what I'd imagine 99% of sysadmins wanting to do ssl-bump need.
Even squid-3.4 works very well without peek/splice - if you are using it
as a formal proxy. But it all falls apart with transparent TCP 443, as
squid only has the dst IP...

What I'd like is to use "peek" to grab the SSL server name the client
sends, so that it is available to acls (and external acl calls - and
logging?) as if the client had issued "CONNECT server.name:443".

A quick sniff with wireshark shows Firefox (as an example) sends the
server name as a client SNI request in the first real packet (ie after
the 3-way handshake), so to my naive understanding that looks like a
good fit for "peek" - squid should be able to do an initial chat with
the client, get the SNI, then do the same with the real server, then
decide whether to splice or bump the rest. Clients that don't support
SNI will probably have to be spliced - I don't care - I'm only
interested in running AV scanners and porn filters over HTTPS requests
from web browsers - the 0.1% remaining SSL traffic can slip through the
cracks for all I care ;-)
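
The ClientHello layout described above really is simple enough to read the SNI off the first packet. As a self-contained illustration (this is not squid's code), here's a parser plus a minimal hello-builder whose only purpose is to exercise the parser:

```python
import struct

def build_client_hello(host: str) -> bytes:
    """Construct a minimal TLS ClientHello record carrying an SNI extension.
    Not a handshake a real server would accept -- just enough structure to
    feed parse_sni() below."""
    name = host.encode("ascii")
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name       # type 0 = host_name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list  # ext type 0 = server_name
    extensions = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03" + b"\x00" * 32            # client_version + random
        + b"\x00"                             # empty session id
        + struct.pack("!H", 2) + b"\x00\x2f"  # one cipher suite
        + b"\x01\x00"                         # null compression only
        + extensions
    )
    handshake = b"\x01" + struct.pack("!I", len(body))[1:] + body   # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def parse_sni(record: bytes):
    """Return the SNI hostname from a TLS ClientHello record, or None."""
    if len(record) < 5 or record[0] != 0x16:           # not a TLS handshake record
        return None
    hs = record[5:5 + struct.unpack("!H", record[3:5])[0]]
    if not hs or hs[0] != 0x01:                        # not a ClientHello
        return None
    p = 4 + 2 + 32                                     # header, version, random
    p += 1 + hs[p]                                     # session id
    p += 2 + struct.unpack("!H", hs[p:p + 2])[0]       # cipher suites
    p += 1 + hs[p]                                     # compression methods
    if p + 2 > len(hs):
        return None                                    # no extensions present
    end = p + 2 + struct.unpack("!H", hs[p:p + 2])[0]
    p += 2
    while p + 4 <= end:
        etype, elen = struct.unpack("!HH", hs[p:p + 4])
        p += 4
        if etype == 0x0000:                            # server_name extension
            # skip list length (2) + name_type (1), then read name length
            nlen = struct.unpack("!H", hs[p + 3:p + 5])[0]
            return hs[p + 5:p + 5 + nlen].decode("ascii")
        p += elen
    return None
```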
 



Re: [squid-users] Squid 3 SSL bump: Google drive application could not connect

2015-01-04 Thread Jason Haar
On 05/01/15 15:44, Eliezer Croitoru wrote:
 A squid helper is nice but... a NFQUEUE helper that can verify if to
 FORWARD or BUMP the connection would be a better suited solution to my
 opinion.
Not sure if you're ignoring the ssl-peek work, but squid still needs to
be able to "peek" in order to know the actual HTTPS server name the
client is connecting to before it can call any external helper/etc. As
that involves understanding SSL (which is a huge chunk of code), it's
not appropriate for a kernel solution - it has to be done at Layer-7
(ie in squid itself, not some app called by squid, as that's too late to
see the data it needs).

eg after hearing how James Harper wrote his own external https-tester
script, I've written my own and have been merrily testing it under
squid-3.4.10 (ie not 3.5 with peek). In proxy mode it works great: the
https-tester script is passed the DNS name and port, manually uses curl
to verify it's a real HTTPS server, and returns OK; otherwise it returns
ERR, making squid fall back to passthrough/splice mode. That means it
can detect non-SSL apps, as well as client-cert protected HTTPS
webservers (which you also have to drop back to splice for - you can
never successfully MiTM a client-cert based SSL session).
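
A minimal sketch of such an external acl helper (my own illustration, not the script described above): squid feeds it one "host port" query per line, and it answers OK if a TLS handshake succeeds, ERR otherwise, so squid can fall back to splice. No caching and strictly sequential, so treat it as a starting point only; the external_acl_type line in the comment is a hypothetical example.

```python
#!/usr/bin/env python3
# Sketch of an external_acl_type helper: answer OK for destinations that
# really speak TLS, ERR otherwise. squid.conf would wire it up with
# something along the lines of (hypothetical):
#   external_acl_type https_tester %DST %PORT /usr/local/bin/https_tester.py
import socket
import ssl
import sys

def https_check(host: str, port: int, timeout: float = 5.0) -> str:
    """Return 'OK' if host:port completes a TLS handshake, else 'ERR'."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we only care that it speaks TLS at all
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host):
                return "OK"
    except OSError:                     # covers SSLError, timeouts, refusals
        return "ERR"

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in sys.stdin:              # squid helper protocol: one query per line
        try:
            host, port = line.split()
            print(https_check(host, int(port)), flush=True)
        except ValueError:
            print("ERR", flush=True)
```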

However, the moment you try to do transparent https proxying, things
break. In that case squid-3.4 only sees the destination IP, and
https_tester can only try "curl -k https://ip.add.ress:port/" - which
only works for *some* webservers. A lot have WAFs in front of them and
rightly ditch the incoming connection when they recognise the client (my
script) doesn't know their hostname - eg any HTTPS site using
cloudfront.net is in that category. Of course it still "works" - but in
passthrough mode - which isn't the outcome we're after.

I'm going to have to look at squid-3.5 ;-)



Re: [squid-users] Squid 3 SSL bump: Google drive application could not connect

2015-01-03 Thread Jason Haar
On 01/01/15 00:11, James Harper wrote:
 The helper connects to the IP:port and tries to obtain the certificate, and 
 then caches the result (in an sqlite database). If it can't do so within a 
 fairly short time it returns failure (but keeps trying a bit longer and 
 caches it for next time). Alternatively if the IP used to be SSL but is now 
 timing out it returns the previously cached value. Negative results are 
 cached for an increasing amount of time each time it fails, on the basis that 
 it probably isn't SSL.
That sounds great James! I'd certainly like to take a look at it too

However, you say "SSL" - did you mean HTTPS? ie discovering that an
ip:port is an IMAPS server doesn't really help squid talk to it - surely
you want to discover HTTPS servers, and everything else should be
pass-through/splice?



Re: [squid-users] odd wccp issue affecting only some web servers

2014-12-10 Thread Jason Haar
On 05/12/14 14:22, Amos Jeffries wrote:

 One is a HIT the other a MISS?
  Squid ACLs?
  TCP connection issue?

Found the problem. We had three proxies and the Cisco ASA was load
balancing between them. It turned out the 2nd proxy had "INPUT DROP"
instead of "INPUT ALLOW" in iptables (everything else being correct and
eyeballed as good) and simply didn't work as a transparent proxy! As it
was only 1 of 3, some sites worked and some didn't. :-)

Fixed ;-)



[squid-users] anyone transparently proxying ipv6?

2014-12-08 Thread Jason Haar
Hi there

We're not even running ipv6 yet so this is a curiosity question for me
:-) We're using transparent proxy for ipv4 (via WCCP); ipv6 will show up
at some stage - so forewarned is forearmed and all that

I see from the squid documentation that the normal transparent proxy
options disable ipv6 - except for TPROXY, where the wording is that it
"disables authentication and maybe IPv6 on the port".

It does look like TPROXY (via iptables) supports transparently modifying
packets in non-NAT mode, but the "maybe" makes me think it isn't tested?
Is anyone successfully transparently proxying ipv6 traffic? Can TPROXY
be used over WCCP?
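
For reference, the documented TPROXY setup (which, unlike NAT interception, does cover ipv6) looks roughly like this on the squid box. Treat it as a sketch of the kernel/squid wiki recipe rather than tested advice - the port, mark value and routing table number are illustrative:

```
# squid.conf: tproxy mode (not NAT) keeps ipv6 and original client IPs
http_port 3129 tproxy

# iptables/ip6tables: mark and divert traffic to squid's tproxy port
iptables  -t mangle -A PREROUTING -p tcp --dport 80 \
          -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 \
          -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

# route marked packets to the local machine
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
ip -6 rule add fwmark 1 lookup 100
ip -6 route add local ::/0 dev lo table 100
```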

Thanks!



Re: [squid-users] Running SCCM through Squid

2014-12-07 Thread Jason Haar
I don't think you can do it. The SCCM protocol is *NOT* HTTP - the
geniuses at Microsoft created a faux-HTTP that runs on standard HTTP
ports - I think you'll find only IIS supports it.

Unless you can make squid proxy non-HTTP traffic, I think you're out of
luck. We're looking at doing the same thing using client certs and will
probably use stunnel (instead of laying the SCCM server bare-assed on
the Internet)

Jason



Re: [squid-users] Centralized Squid - design and implementation

2014-11-18 Thread Jason Haar
On 19/11/14 01:39, Brendan Kearney wrote:
 i would suggest that if you use a pac/wpad solution, you look into
 pactester, which is a google summer of code project that executes pac
 files and provides output indicating what actions would be returned to
 the browser, given a URL. 
Couldn't agree more. We have it built into our QA to run before we ever
roll out any change to our WPAD php script (a bug in there means
everyone loses Internet access - so we have to be careful).

Auto-generating a PAC script per client allows us to change behaviour
based on User-Agent, client IP, proxy and destination - and to control
which web services should go DIRECT and which should be proxied. There
is no other way of achieving those outcomes.

Oh yes, and now that both Chrome and Firefox support proxies over HTTPS,
I'm starting to ponder putting up some form of proxy on the Internet for
our staff to use (authenticated of course!) - WPAD makes that something
we could implement with no client changes - pretty cool :-)



Re: [squid-users] sslbump working with 3.4.9 but not in intercept mode?

2014-11-10 Thread Jason Haar
On 11/11/14 00:06, Amos Jeffries wrote:
 Grr, strdup bites again. Backtrace please if you can.
I'm not a developer, so here's my attempt, let me know if I need to do
something else

(gdb) run
Starting program: /usr/sbin/squid -N
[Thread debugging using libthread_db enabled]
Detaching after fork from child process 29759.
Detaching after fork from child process 29760.
Detaching after fork from child process 29761.
Detaching after fork from child process 29762.
Detaching after fork from child process 29763.
Detaching after fork from child process 29764.
Detaching after fork from child process 29765.
Detaching after fork from child process 29766.
Detaching after fork from child process 29767.
Detaching after fork from child process 29768.
Detaching after fork from child process 29769.
Detaching after fork from child process 29770.
Detaching after fork from child process 29771.

Program received signal SIGABRT, Aborted.
0x003f40032625 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x003f40032625 in raise () from /lib64/libc.so.6
#1  0x003f40033e05 in abort () from /lib64/libc.so.6
#2  0x0059cbbb in fatal_dump(char const*) ()
#3  0x0082a6bb in xstrdup ()
#4  0x006b528c in ACLUrlPathStrategy::match(ACLData<char const*>*, ACLFilledChecklist*, ACLFlags) ()
#5  0x006f9478 in ACL::matches(ACLChecklist*) const ()
#6  0x006f in ACLChecklist::matchChild(Acl::InnerNode const*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >, ACL const*) ()
#7  0x006faeb3 in Acl::AndNode::doMatch(ACLChecklist*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >) const ()
#8  0x006f9478 in ACL::matches(ACLChecklist*) const ()
#9  0x006f in ACLChecklist::matchChild(Acl::InnerNode const*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >, ACL const*) ()
#10 0x006fae2e in Acl::OrNode::doMatch(ACLChecklist*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >) const ()
#11 0x006f9478 in ACL::matches(ACLChecklist*) const ()
#12 0x006fc474 in ACLChecklist::matchAndFinish() ()
#13 0x006fce90 in ACLChecklist::nonBlockingCheck(void (*)(allow_t, void*), void*) ()
#14 0x00635f1a in ?? ()
#15 0x005bc2b8 in FwdState::Start(RefCount<Comm::Connection> const&, StoreEntry*, HttpRequest*, RefCount<AccessLogEntry> const&) ()
#16 0x005bc706 in FwdState::fwdStart(RefCount<Comm::Connection> const&, StoreEntry*, HttpRequest*) ()
#17 0x0053c572 in ConnStateData::switchToHttps(HttpRequest*,
Ssl::BumpMode) ()
#18 0x0053cde9 in ?? ()
#19 0x0054860f in ?? ()
#20 0x006fc63b in ACLChecklist::checkCallback(allow_t) ()
#21 0x0054df1a in ?? ()
#22 0x006ffa46 in AsyncCall::make() ()
#23 0x00702b02 in AsyncCallQueue::fireNext() ()
#24 0x00702e50 in AsyncCallQueue::fire() ()
#25 0x00593cf4 in EventLoop::runOnce() ()
#26 0x00593e48 in EventLoop::run() ()
#27 0x00613e48 in SquidMain(int, char**) ()
#28 0x006147d8 in main ()
(gdb) quit
A debugging session is active.

Inferior 1 [process 29756] will be killed.

Quit anyway? (y or n) y




Re: [squid-users] sslbump working with 3.4.9 but not in intercept mode?

2014-11-10 Thread Jason Haar
I applied the patch and now it works! I can transparently access port
443-based websites with ssl-bump :-)

Thanks Amos :-)


On 11/11/14 02:20, Amos Jeffries wrote:

 You have an urlpath_regex ACL test depending on URIs containing paths.
 Which is not the case with CONNECT.

 The attached patch should fix the crash.

 Amos




[squid-users] https intercept breaks non-HTTPS port 443 traffic?

2014-11-10 Thread Jason Haar
Hi there

Now that I've got ssl-bump working with port 443 intercept, I find that
non-HTTPS apps operating on port 443 no longer work. eg for ssl-bump in
standard proxy mode I had an ACL to disable bumping when an application
(like Skype, which doesn't use HTTPS) tried CONNECT-ing to ip addresses,
but in intercept mode that had to be removed, as all intercepted
outbound https sessions start out addressed to an ip address.

I just brought up a remote SSH server on port 443 and when I telnet to
it, instead of getting the OpenSSH banner I see nothing, but the remote
server receives an SSL transaction from squid. All makes sense, but is
there a way for bump to fail open on non-SSL traffic? I see squid 3.5
mentions "peek" and "at_step" - are those components going to be the
mechanism to solve this issue? Just curious; I'm only testing/playing
with intercepting port 443, but it's interesting to see where this is
going.

Finally, when I attempted this connection, cache.log reported

fwdNegotiateSSL: Error negotiating SSL connection on FD 25:
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol (1/-1/0)

I guess that's it squealing about getting non-SSL content back from the
server (ie the SSH banner). Shouldn't that be a bit more verbose, to
help sysadmins figure out what was behind it? eg

fwdNegotiateSSL: Error negotiating SSL connection from
192.168.22.11:44382 - 1.2.3.4:443 (FD 25): error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol (1/-1/0)

At the very least, with that I could have a cronjob grep through my
cache.log to auto-create a bump none acl ;-)
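
Here's a sketch of what that cronjob could look like. Note it assumes the *proposed* verbose format above - the stock squid-3.4 message carries no addresses, so this won't match anything in a real cache.log as-is, and the acl name is made up:

```python
import re

# Matches only the hypothetical verbose format proposed above, e.g.
# "... from 192.168.22.11:44382 - 1.2.3.4:443 (FD 25): error:..."
PAT = re.compile(
    r"fwdNegotiateSSL: Error negotiating SSL connection from "
    r"\S+ - (\d{1,3}(?:\.\d{1,3}){3}):\d+"
)

def nobump_acl(log_lines):
    """Build an 'acl ... dst' line from failing destination IPs, or None."""
    ips = sorted({m.group(1) for line in log_lines for m in PAT.finditer(line)})
    return "acl nobump_ips dst " + " ".join(ips) if ips else None
```

A cronjob would feed it the log and write the result to a file included from squid.conf.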

Thanks



[squid-users] could sslbump handle client certs better?

2014-11-05 Thread Jason Haar
I haven't tested this so I may be embarrassing myself, but I doubt
client certs and sslbump play nicely together as the end-server would
never see any possible client cert interaction

I was wondering how early in the handshake the need for a client cert is
announced? Could/does squid notice the server's requirement for client
certs and fall back into passthrough mode? It would certainly be a great
option to have: force most https traffic through sslbump, but allow
squid to bypass it for the (very) few sites that require client certs.
Some may want to turn such a feature off, but most are probably like me:
purely interested in using sslbump to enable SSL content filtering - and
I really doubt we'll be seeing many viruses via client-cert protected
https any time soon ;-)



Re: [squid-users] squid-3.4.8 sslbump breaks facebook

2014-10-17 Thread Jason Haar
I applied the patch to 3.4.8, built it and reset the cache, and now
facebook.com and youtube.com work where they triggered the error before.

Well done - all sorted by the looks of it :-)

Jason

On 17/10/14 05:59, Christos Tsantilas wrote:

 A patch for this bug attached to 4102 bug report.
 Please test it and report any problem.

 Regards,
   Christos



 On 10/16/2014 12:14 PM, Amm wrote:

 On 10/16/2014 02:35 PM, Jason Haar wrote:
 On 16/10/14 20:54, Jason Haar wrote:
 I also checked the ssl_db/certs dir and
 removed the facebook certs and restarted - didn't help
 let me rephrase that. I deleted the dirtree and re-ran ssl_crtd -s
 /usr/local/squid/var/lib/ssl_db -c - ie restarted with an empty cache.
 It didn't help. It created a new fake facebook cert - but the cert
 doesn't fully match the characteristics of the real cert

 http://bugs.squid-cache.org/show_bug.cgi?id=4102

 Please add weight to bug report :)

 Amm.





[squid-users] squid-3.4.8 sslbump breaks facebook

2014-10-16 Thread Jason Haar
clientLogRequest: al.url='www.facebook.com:443'
2014/10/16 18:40:17.951 kid1| HttpHeader.cc(1531) ~HttpHeaderEntry:
destroying entry 0x30c5fd0: 'Host: www.facebook.com:443'
2014/10/16 18:40:17.951 kid1| client_side.cc(3899) getSslContextStart:
Finding SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted in cache
2014/10/16 18:40:17.951 kid1| client_side.cc(3904) getSslContextStart:
SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted have found in cache
2014/10/16 18:40:17.952 kid1| client_side.cc(3906) getSslContextStart:
Cached SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted is valid
2014/10/16 18:40:17.956 kid1| ctx: enter level  0: 'www.facebook.com:443'
2014/10/16 18:40:17.956 kid1| HttpHeader.cc(1531) ~HttpHeaderEntry:
destroying entry 0x30c0810: 'Host: www.facebook.com:443'



Re: [squid-users] squid-3.4.8 sslbump breaks facebook

2014-10-16 Thread Jason Haar
On 16/10/14 20:54, Jason Haar wrote:
 I also checked the ssl_db/certs dir and
 removed the facebook certs and restarted - didn't help
Let me rephrase that: I deleted the dirtree and re-ran "ssl_crtd -s
/usr/local/squid/var/lib/ssl_db -c" - ie restarted with an empty cache.
It didn't help. It created a new fake facebook cert - but the cert
doesn't fully match the characteristics of the real cert.



[squid-users] getting sslbump cert errors on major sites

2014-10-02 Thread Jason Haar
Hi there

I'm using sslbump and I just got blocked logging into hotmail for the
first time (yeah, slumming it ;-)

The cert is for bay181.mail.live.com, and squid is generating a "CN=Not
trusted by x" type cert, so I assume the real cert wasn't signed by a CA
that squid knew about?

I whitelisted live.com (ie don't bump it any more) and the problem goes
away for Firefox.

I'm running Ubuntu 14.04, so does this mean that the set of CAs Ubuntu
trusts does not include the same CA-chain that browsers do?

ie, I have

http_port 3128 ssl-bump cert=/usr/local/squid/etc/squidCA.cert capath=/etc/ssl/certs/

so this means the CA list Ubuntu ships in /etc/ssl/certs/ is out of date
compared with Firefox's?

Really a rhetorical question, just kinda wanting to know about where
sslbump will run into trouble, etc :-)
