[squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread James Lay
So this has been REALLY good!  The tl;dr: ssl-bumping is pretty easy
even with intercept; ssl-bumping with access control is a little more
difficult...jump to the config to skip the chit chat.

My goal has always been to build a content filter based on url regex.
This works just fine for http traffic, but is much more difficult for
https traffic, since you may or may not know the host you're going to,
depending on the site/app.  I'll be real honest here...I'm only doing
this to protect/filter the traffic of two kids, on laptops, an iPhone,
and an Android phone, so it's a mixed bag of content and, since it's
just the two of them in a home environment, I get to play around and
see what works and what doesn't.

Below is as close as I can get to transparent intercept ssl-bump with
content filtering, using a list of domains/urls for both http and
https.  I still have to use a list of broken sites, which are large
netblocks (17.0.0.0/8...Apple anyone?), because for some of these I
just can't seem to get host/domain information during the ssl
handshake.  As I discovered after attempting to put this into
production, I have not been able to emulate with wget or curl an https
session that doesn't have any SNI information, so that threw me for a
loop.  TextNow is a great example (I'm including a packet capture of
this in this post).  There's no host information in the client
hello...there's no host information in the server hello...buried deep
in the certificate ONLY is the commonName=.*textnow.me...that's it.
This dashed my hopes of using a url_regex for access control with all
https sessions.  I have %ssl::cert_subject in my logging, and I never
did see it logged in any of my tests...and I tested a BUNCH of
different peek/stare/splice/bump combinations...so I don't think squid
is actually seeing this from the certificate.
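Out of curiosity I sketched why there's nothing to filter on when SNI
is missing.  This little parser (my own illustration, not squid code)
walks a TLS extensions block the way squid's peek step would, looking
for the server_name extension; with no SNI entry there is simply no
hostname to match a url_regex against:

```python
# Sketch: the hostname rides in the ClientHello's server_name (SNI)
# extension, type 0x0000.  If the client omits it (as TextNow does),
# there is no name to find at peek time.
import struct

def find_sni(extensions: bytes):
    """Walk a TLS extensions block; return the SNI hostname or None."""
    i = 0
    while i + 4 <= len(extensions):
        ext_type, ext_len = struct.unpack(">HH", extensions[i:i + 4])
        body = extensions[i + 4:i + 4 + ext_len]
        if ext_type == 0x0000 and len(body) >= 5:  # server_name extension
            # body: 2-byte list length, 1-byte name_type, 2-byte name length
            (name_len,) = struct.unpack(">H", body[3:5])
            return body[5:5 + name_len].decode("ascii")
        i += 4 + ext_len
    return None

# Build a synthetic extensions block: one dummy extension plus an SNI entry.
host = b"init.itunes.apple.com"
sni = struct.pack(">HBH", len(host) + 3, 0, len(host)) + host
exts = (struct.pack(">HH", 0x000a, 2) + b"\x00\x1d"    # unrelated extension
        + struct.pack(">HH", 0x0000, len(sni)) + sni)  # server_name

print(find_sni(exts))                                  # hostname found
print(find_sni(struct.pack(">HH", 0x000a, 2) + b"\x00\x1d"))  # no SNI: None
```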

Another challenge is getting http url_regex filtering to work alongside
https filtering.  My method of filtering means not having an
http_access allow localnet, which directly conflicted with also trying
to filter https.  The solution was to add an acl for port 443, then
http_access allow it, since our filtering of https happens further
down.
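Roughly, the shape of that ordering is this (a sketch only; the acl
names and file path here are illustrative, not my full config):

```
# Illustrative only: let port-443 CONNECTs pass http_access so the
# ssl_bump rules further down can filter them; plain http is filtered
# right here by url_regex.
acl ssl_ports port 443
acl http_url url_regex "/opt/etc/squid/http_url.txt"

http_access allow ssl_ports
http_access allow http_url
http_access deny all
```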

I know there's a fair amount of people who just want to plop in some
config files, run a few commands, and be up and running.  The below
configuration references two additional files: http_url.txt, which is
a list of domains/urls (\.apple\.com for example), and the aptly named
broken, which is an IP list (17.0.0.0/8).  The broken list should be
(semi) trusted and holds sites that we just can't get SNI or hostname
information from.  If you've created a single cert/key pair from the
Squid documentation, you won't need the key= line in your https_port
directive.  If you've followed along in my posts, you already have the
configure line from my previous posts.  Change the commands/config to
fit where your squid config and ssl_db are.  So after configuring,
make sure you:

sudo /opt/libexec/ssl_crtd -c -s /opt/var/ssl_db
sudo chown -R nobody /opt/var/ssl_db/
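For reference, the matching squid.conf pieces look roughly like this
(the port number and paths here are examples from my setup; as noted
above, drop key= if you built a combined cert/key pair):

```
# Sketch of the related directives; adjust paths/port to your install.
https_port 3130 intercept ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB \
    cert=/opt/etc/squid/cert.pem key=/opt/etc/squid/key.pem
sslcrtd_program /opt/libexec/ssl_crtd -s /opt/var/ssl_db -M 4MB
```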

As I believe in a lot of logging, and in actually looking at said
logging, below is what you can expect to see in your logs (mine log to
syslog; again, change this if you log to a different file):

Allowed http to .apple.com in http_url.txt:
May 31 17:03:48 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:03:48 -0600] GET
http://init.ess.apple.com/WebObjects/VCInit.woa/wa/getBag? HTTP/1.1 - -
200 5243 TCP_MISS:ORIGINAL_DST -
Denied http to symcb.com, not in http_url.txt:
May 31 17:03:48 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:03:48 -0600] GET http://sd.symcb.com/sd.crt HTTP/1.1 -
- 403 3618 TCP_DENIED:HIER_NONE -
Spliced https IP in broken.txt (google block 216.58.192.0/19):
May 31 17:04:34 gateway (squid-1): 192.168.1.101 - -
[31/May/2015:17:04:34 -0600] CONNECT 216.58.216.138:443 HTTP/1.1 - -
200 568 TCP_TUNNEL:ORIGINAL_DST peek
A spliced https IP in broken.txt where we did get SNI, and a bumped
site in http_url.txt, look exactly the same:
May 31 17:09:45 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:09:45 -0600] CONNECT 23.222.157.21:443 HTTP/1.1
init.itunes.apple.com - 200 30314 TCP_TUNNEL:ORIGINAL_DST peek
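
For reference, a logformat that yields lines shaped like the samples
above would look roughly like this.  This is reconstructed from the
fields shown, assuming squid 3.5's %ssl format codes, so double-check
it against your build:

```
# Reconstructed sketch: the two "-" fields after HTTP/%rv are the SNI
# and certificate subject when squid has them; the last field is the
# bump mode (peek/stare/splice/bump).
logformat ssl_log %>a %ui %un [%tl] %rm %ru HTTP/%rv %ssl::>sni %ssl::>cert_subject %>Hs %<st %Ss:%Sh %ssl::bump_mode
access_log syslog:daemon.info ssl_log
```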

The only drag with the configuration is that you won't see in the logs
when an https session is terminated because the IP/url is not in
broken.txt or http_url.txt:

[17:20:53 jlay@analysis:~$] wget -d
--ca-certificate=/etc/ssl/certs/sslsplit.crt https://www.yahoo.com
Setting --ca-certificate (cacertificate) to /etc/ssl/certs/sslsplit.crt
DEBUG output created by Wget 1.16.1 on linux-gnu.

URI encoding = ‘UTF-8’
--2015-05-31 17:20:59--  https://www.yahoo.com/
Resolving www.yahoo.com (www.yahoo.com)... 206.190.36.45,
206.190.36.105, 2001:4998:c:a06::2:4008
Caching www.yahoo.com = 206.190.36.45 206.190.36.105
2001:4998:c:a06::2:4008
Connecting to www.yahoo.com (www.yahoo.com)|206.190.36.45|:443...
connected.
Created socket 3.
Releasing 0x7fdf67eecdd0 (new refcount 1).
Initiating 

Re: [squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread James Lay
On Mon, 2015-06-01 at 13:00 +1200, Amos Jeffries wrote:

 [snip: quoted text of the original post]

Re: [squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread Amos Jeffries
On 1/06/2015 11:56 a.m., James Lay wrote:
 [snip: quoted text of the original post]