Re: [squid-users] Alternative ways of tracking users on unauthenticated proxy

2015-05-26 Thread Mr J Potter
OK - got it working...

added the lines:

external_acl_type userlookup ttl=60 concurrency=1 %SRC
/opt/squid354/libexec/ext_sql_session_acl -dsn DBI:mysql:database=pf --user
root --password xxx --table  currentUsers --uidcol ip  --usercol uid
--tagcol ip --persist
acl userlookup external userlookup
http_access allow localnet userlookup
http_access allow localnet
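For anyone puzzled by the helper's input format: with concurrency=N squid
prefixes every lookup with a channel ID, which is the mysterious "anything"
token when the helper is driven by hand. A minimal stand-in sketch of that
request/reply protocol (an illustration in Python, not the bundled Perl
helper; the IP-to-uid pair mirrors the table shown elsewhere in the thread):

```python
# Sketch of the protocol a concurrency-enabled external ACL helper
# speaks: each request line is "<channel-id> <key>", and the reply
# echoes the channel ID back with OK/ERR. Stand-in illustration only,
# not the bundled ext_sql_session_acl helper itself.
users = {"10.15.228.12": "0001"}  # hypothetical ip -> uid table

def handle(line: str) -> str:
    channel, _, key = line.strip().partition(" ")
    user = users.get(key)
    if user:
        return f"{channel} OK user={user}"  # user= is what %un logs
    return f"{channel} ERR"

if __name__ == "__main__":
    import sys
    for line in sys.stdin:
        print(handle(line), flush=True)
```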

Now I get this in my logfiles:
10.15.228.12 - 0001 [26/May/2015:12:56:23 +0100] POST
http://www.bing.com/fd/ls/lsp.aspx HTTP/1.1 204 391 TCP_MISS:ORIGINAL_DST

I'll write all this up somewhere, as variations on what I have here are what
people are always asking for:
- Users log in via a web page, not a 407 popup box
- Authenticates to AD
- Users are filtered depending on who they are (via squidGuard)
- Logs activity against users
- logs them all off at a particular time
- No proxy settings (intercept HTTP+HTTPS)


thanks,

Jim Potter
Network Manager
Oasis Brislington (formerly Brislington Enterprise College)

On 26 May 2015 at 11:39, Mr J Potter jpotter...@because.org.uk wrote:

 Hi Amos,

 OK this looks promising (if not actually working...)

 So I have a config line:
 external_acl_type userlookup ttl=60 %SRC
 /opt/squid354/libexec/ext_sql_session_acl -dsn DBI:mysql:database=pf --user
 root --password  --table currentUsers --uidcol ip  --usercol uid
 --tagcol ip --persist --debug

 Where currentUsers looks like:
 mysql> select * from currentUsers;
 +--+--+-+
 | uid  | ip   | enabled |
 +--+--+-+
 | 0003 | 10.15.228.12 | 1   |
 +--+--+-+

 so running this externally I use:

 /opt/squid354/libexec/ext_sql_session_acl -dsn DBI:mysql:database=pf
 --user root --password fv89j8j6eg2 --table currentUsers --uidcol ip
 --usercol uid --tagcol ip --debug

 this replies with a username if I put in:
 anything 10.15.228.12

 So what is the anything about? And I'm still not getting any username in
 my logfiles. Do I need to use the acl name somewhere else in the config
 file too?

 thanks,

 Jim Potter
 Network Manager
 Oasis Brislington (formerly Brislington Enterprise College)

 On 25 May 2015 at 12:07, Amos Jeffries squ...@treenet.co.nz wrote:

 On 25/05/2015 8:38 p.m., Mr J Potter wrote:
  Hi all,
 
  I'm setting up a system for using iPads in our school, and I'm stuck a
 bit
  on tracking what the students are doing on them.
 
  First up, I really don't want a pop-up login box from a 407 response from
  a proxy server, so I'm looking for some other way to track who is doing
  what.
 
  What I have set up so far is PacketFence with an SSL-bump transparent
  proxy (I've put the CAs on all the iPads), which works well in that users
  have to log in before they get internet access. This works (they get a web
  page, log in and get 50 minutes of internet before it disconnects them),
  but the only way I have of tracking users is by working out who was on
  each iPad (from PacketFence) then matching it against squid logs, which is
  messy.

 Squid comes bundled with an ext_sql_session_acl helper that looks up a
 database and produces OK/ERR (and username for logging) depending on
 whether the key given to it exists in the DB already.
 http://www.squid-cache.org/Versions/v4/manuals/ext_sql_session_acl.html

 You just need to get a UID metric. IP address, MAC address, and/or
 EUI-64 (IPv6 link-local) are suitable there. It sounds like your
 packetfence would be a good way to populate that DB too.
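A sketch of what that session-DB plumbing could look like, using sqlite3 as
a stand-in for the MySQL database ext_sql_session_acl actually queries. The
table and column names (currentUsers, uid, ip, enabled) follow the thread;
the login/expiry helpers and the 50-minute window are illustrative
assumptions about what PacketFence would drive:

```python
# Stand-in session table: PacketFence-style login inserts a row, an
# expiry sweep removes stale sessions, and lookup() models the query
# the external ACL helper performs per client IP.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE currentUsers (uid TEXT, ip TEXT, enabled INTEGER, login_time REAL)"
)

def login(uid, ip):
    # Record a web-portal login against the client's IP.
    db.execute("INSERT INTO currentUsers VALUES (?, ?, 1, ?)", (uid, ip, time.time()))

def expire(max_age=50 * 60):
    # Drop sessions older than the (assumed) 50-minute window.
    db.execute("DELETE FROM currentUsers WHERE ? - login_time > ?",
               (time.time(), max_age))

def lookup(ip):
    # What the helper needs: the uid for this IP, or None -> ERR.
    row = db.execute(
        "SELECT uid FROM currentUsers WHERE ip = ? AND enabled = 1", (ip,)
    ).fetchone()
    return row[0] if row else None
```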

 
  One plan I had would be to add/remove entries in DNS or hosts for users,
  e.g. IP address 10.2.3.4 -> hostname fbloggs (the user's login code), so
  usernames would show up in the client hostname field, but squid caches
  these I think.

 Yes. Don't do that with DNS.

 Amos

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users





[squid-users] Alternative ways of tracking users on unauthenticated proxy

2015-05-25 Thread Mr J Potter
Hi all,

I'm setting up a system for using iPads in our school, and I'm stuck a bit
on tracking what the students are doing on them.

First up, I really don't want a pop-up login box from a 407 response from a
proxy server, so I'm looking for some other way to track who is doing what.

What I have set up so far is PacketFence with an SSL-bump transparent proxy
(I've put the CAs on all the iPads), which works well in that users have to
log in before they get internet access. This works (they get a web page,
log in and get 50 minutes of internet before it disconnects them), but the
only way I have of tracking users is by working out who was on each iPad
(from PacketFence) then matching it against squid logs, which is messy.

One plan I had would be to add/remove entries in DNS or hosts for users,
e.g. IP address 10.2.3.4 -> hostname fbloggs (the user's login code), so
usernames would show up in the client hostname field, but squid caches
these I think. Another option would be via iptables somehow.

Can anyone suggest any other possible workarounds for this?

thanks,

Jim Potter
Network Manager
Oasis Brislington (formerly Brislington Enterprise College)


[squid-users] Issues with SSL from specific sites,

2015-01-08 Thread Mr J Potter
Hi all,

I have a weird problem connecting to one specific domain:

https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.0.5/es5-shim.min.js

this site works fine if I connect directly, but if I go via my squid
instance, it fails (see below).

I have squid 3.3.11 with optional SSL-bump set up and working fine for the
most part, but it will not allow me onto this one domain. It's not in any
filtered list (I've commented out SSL-bump and all filtering/redirecting on
my test server).

It says "unable to establish SSL connection"... one point: when I connect
to this site via Chrome it tells me the encryption method is outdated - is
squid refusing to connect due to this?

thanks in advance for any help.

root@dirvish:~# wget
https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.0.5/es5-shim.min.js -dv
Setting --verbose (verbose) to 1
DEBUG output created by Wget 1.13.4 on linux-gnu.

URI encoding = `UTF-8'
URI encoding = `UTF-8'
--2015-01-08 13:14:56--
https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.0.5/es5-shim.min.js
Resolving dirvish (dirvish)... 10.15.244.47
Caching dirvish = 10.15.244.47
Connecting to dirvish (dirvish)|10.15.244.47|:3128... connected.
Created socket 4.
Releasing 0x0171a990 (new refcount 1).

---request begin---
CONNECT cdnjs.cloudflare.com:443 HTTP/1.1
User-Agent: Wget/1.13.4 (linux-gnu)

---request end---
proxy responded with: [HTTP/1.1 503 Service Unavailable
Server: squid/3.3.11
Mime-Version: 1.0
Date: Thu, 08 Jan 2015 13:14:57 GMT
Content-Type: text/html
Content-Length: 3129
X-Squid-Error: ERR_CONNECT_FAIL 101
Vary: Accept-Language
Content-Language: en

]
Proxy tunneling failed: Service UnavailableUnable to establish SSL
connection.
root@dirvish:~#

squid config:
cache_effective_user proxy
shutdown_lifetime 2 seconds

cache_peer courage.bristol-cyps.org.uk parent 3128 0 round-robin

forwarded_for off

#url_rewrite_program /usr/bin/squidGuard -c
/var/lib/squidguard/squidGuard.conf

#auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
#auth_param ntlm children 30 startup=5 idle=10
#auth_param ntlm keep_alive on

#acl authdUsers proxy_auth REQUIRED

acl unchecked_sites dstdomain
/var/lib/squidguard/db/BEC/alwaysAllowed/domains
acl unchecked_regex dstdom_regex
/var/lib/squidguard/db/BEC/alwaysAllowed/regex

acl bumpedDomains dstdomain .google.com .youtube.com
acl localDomains dstdomain .bec.lan .bcc.lan .because.org.uk
acl directDomains dstdomain .gcsepod.com .cloudflare.com

#acl localhost src 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl HTTPS proto HTTPS

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 81  # Jamie 'Fish lips' Oliver
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 4433        ## VPN
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow unchecked_sites
http_access allow localhost
#http_access allow authdUsers
http_access allow localnet
#http_access deny all

always_direct allow localDomains
always_direct allow directDomains
#always_direct allow bumpedDomains
#always_direct deny HTTPS
#always_direct allow bumpedDomains
#always_direct allow HTTPS
#always_direct allow bumpedDomains
always_direct deny all
#never_direct allow all
#always_direct deny all

strip_query_terms off

#logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
#logformat common %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh
access_log daemon:/var/log/squid/access.log common
#access_log syslog:local4 common

dns_nameservers 10.15.244.8 10.15.244.13

Re: [squid-users] SSL_bump ACL for destdomain

2014-02-04 Thread Mr J Potter
I use a pac file that points some domains to an ssl-bump proxy and
some to a non-ssl bump. works for me:

function FindProxyForURL(url, host) {
    if (
        dnsDomainIs(host, ".because.org.uk") ||
        dnsDomainIs(host, ".bec.lan") ||
        dnsDomainIs(host, ".nbt.nhs.uk") ||
        isInNet(host, "10.15.0.0", "255.255.0.0") ||
        isInNet(host, "127.0.0.1", "255.0.0.0") ||
        isInNet(host, "127.0.0.1:1793", "255.0.0.0") ||
        isPlainHostName(host) ||
        dnsDomainIs(host, "iriscamera.bec.lan")
    ) {
        return "DIRECT";
    }
    if (dnsDomainIs(host, "youtube.com")) {
        return "PROXY 10.15.244.40:3129";  // ssl bump youtube
    }
    return "PROXY 10.15.244.26:3128";      // don't bump anything else
}

Jim


On 4 February 2014 10:34, Yury Paykov cry5...@cry5tal.in wrote:
 Hello, squid users, I'm currently having an issue trying to configure Squid
 (using 3.3) to bypass a handful of sites.
 I mean, I want squid to NOT bump the connection.

 I employ the following in the config :

 acl https_proxy dstdomain www.google.com
 acl https_proxy dstdomain google.ru

 ssl_bump none https_proxy
 ssl_bump server-first all

 This should work like "if google, do not bump, else ssl-bump the connection".
 However, it doesn't work as expected and instead bumps google as well.

 When I used debugging, I saw that squid actually checks IP address and then
 - the PTR entry, where neither is *google* anything

 2014/02/04 14:36:30.428| Acl.cc(336) matches: ACLList::matches: checking
 https_proxy
 2014/02/04 14:36:30.428| Acl.cc(319) checklistMatches:
 ACL::checklistMatches: checking 'https_proxy'
 2014/02/04 14:36:30.428| DomainData.cc(131) match: aclMatchDomainList:
 checking '173.194.71.94'
 2014/02/04 14:36:30.428| DomainData.cc(135) match: aclMatchDomainList:
 '173.194.71.94' NOT found
 2014/02/04 14:36:30.428| DomainData.cc(131) match: aclMatchDomainList:
 checking 'lb-in-f94.1e100.net'
 2014/02/04 14:36:30.428| DomainData.cc(135) match: aclMatchDomainList:
 'lb-in-f94.1e100.net' NOT found


 MY QUESTION IS - is there a way to use the CN information from the server
 certificate which is retrieved with the server-first method? Can I
 construct an ACL rule based on it?





Re: [squid-users] squid 3.3.8 running away

2014-02-04 Thread Mr J Potter
Hi all,

More on this - squid -k parse gives this warning about my
file_descriptors line in squid.conf
WARNING: max_filedescriptors disabled. Operating System
setrlimit(RLIMIT_NOFILE) is missing

I've seen in a previous post from Amos that this is an issue. But how
do I fix it? I've got another system with squid 3.2 on, which doesn't
have this issue. sys/resource.h looks like it provides the setrlimit()
function, and that's present in /usr/include... any idea what causes
this? I assume I need a few more headers in there somewhere.
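As a quick sanity check of what the OS actually grants, the limits that
setrlimit() manages can be inspected from any process. A small sketch
(Python purely for brevity - the getrlimit/setrlimit pair is roughly what
squid attempts at startup when it honours max_filedescriptors):

```python
# Inspect and (within the hard limit) raise the open-file limit that
# squid's max_filedescriptors relies on setrlimit() to adjust.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit;
# raising the hard limit itself needs root (ulimit -n / limits.conf).
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```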

thanks again,

Jim

On 31 January 2014 11:35, Mr J Potter jpotter...@because.org.uk wrote:
 Hi Eleizer,

 thanks for getting back to me.

 OK - yes. It's a VMware virtual machine with 2 interfaces - 1 to the
 outside world, one with the clients on. It's 2-processor, 4GB RAM,
 Debian wheezy 64-bit.

 I can't set it up as router only as it isn't a gateway, and I'm not
 keen on rejigging the network to test this, but traffic is likely to
 be ~10-20Mbps when it dies, connections per second in the hundreds.

 'Kills it' means CPU usage hits 99.8-101% and stays there, and it stops
 forwarding requests.

 there is nothing I can see in the logs (3.2 gave a message about file
 descriptors)

 I've been through all this before with a squid 3.2 box, and the
 solution then was file descriptors. I've not set up disk caching on
 this setup (or at least I haven't got a line setting about disk cache
 - does it create one by default?). My next thing to try would be
 adding a disk cache maybe...


 Regards,

 Jim Potter
 BEC Network Manager


 On 31 January 2014 09:58, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey Mr,

 I might not find the solution in one second, but..
 What interfaces do you have?
 What is the network load?
 Have you tried to use SMP on this machine?
 In order to analyze the basic traffic size you can use the machine as a
 router only based on linux.
 This would give you the basic picture of the network traffic load.
 The basic information is RPS or Connections Per Second.

 What are the symptoms of kills it?
 If you do have some logs that will describe it will help to see them.

 Thank,
 Eliezer


 On 31/01/14 11:48, Mr J Potter wrote:

 Hi all,

 I'm trying to roll out SSLBump internet filtering. I've got it all
 working fine under test conditions, but squid grinds to a halt using
 100% CPU.

 I've gone through all the comments on this online, and it all seems to
 point to file descriptors. (I think I've fixed this previously by
 setting this value with squid 3.2).

 In my current setup I don't have any disk cache, and adding
 file_descriptors doesn't fix it. It only seems to go when load hits a
 certain threshold. It can run fine all day, but when I add more
 traffic, it just dies. I'm pretty sure it's nothing to do with SSLBump,
 as I use this server for all youtube traffic via SSLBump (it's fine),
 but when I put everything else through with no SSL bump, the load
 seems to kill it.

 ... and it doesn't seem to be using the 2GB RAM allocated to it for
 cache. the whole machine only uses ~750MB.

 Any idea anyone? config files and details below...

 thankyou

 Jim Potter
 BEC Network Manager

 SNIP


[squid-users] squid 3.3.8 running away

2014-01-31 Thread Mr J Potter
Hi all,

I'm trying to roll out SSLBump internet filtering. I've got it all
working fine under test conditions, but squid grinds to a halt using
100% CPU.

I've gone through all the comments on this online, and it all seems to
point to file descriptors. (I think I've fixed this previously by
setting this value with squid 3.2).

In my current setup I don't have any disk cache, and adding
file_descriptors doesn't fix it. It only seems to go when load hits a
certain threshold. It can run fine all day, but when I add more
traffic, it just dies. I'm pretty sure it's nothing to do with SSLBump,
as I use this server for all youtube traffic via SSLBump (it's fine),
but when I put everything else through with no SSL bump, the load
seems to kill it.

... and it doesn't seem to be using the 2GB RAM allocated to it for
cache. the whole machine only uses ~750MB.

Any idea anyone? config files and details below...

thankyou

Jim Potter
BEC Network Manager

pac file:
function FindProxyForURL(url, host) {
    if (
        dnsDomainIs(host, ".because.org.uk") ||
        dnsDomainIs(host, ".bec.lan") ||
        dnsDomainIs(host, ".nbt.nhs.uk") ||
        isInNet(host, "10.15.0.0", "255.255.0.0") ||
        isInNet(host, "127.0.0.1", "255.0.0.0") ||
        isInNet(host, "127.0.0.1:1793", "255.0.0.0") ||
        isPlainHostName(host) ||
        dnsDomainIs(host, "iriscamera.bec.lan")
    ) {
        return "DIRECT";
    }
    if (dnsDomainIs(host, "youtube.com")) {
        return "PROXY 10.15.244.40:3129";  // this is squid 3.3.8 box, sslbump port
    }
    //return "PROXY 10.15.244.26:3128";
    return "PROXY 10.15.244.40:3128";
}


squid.conf:
cache_effective_user proxy
shutdown_lifetime 2 seconds

#cache_peer caffreys.bristol-cyps.org.uk parent 3128 3130 default
#cache_peer courage.bristol-cyps.org.uk parent 3128 3130 default
#no-delay
#no-query no-digest no-netdb-exchange
## default

##cache_peer_access caffreys.bristol-cyps.org.uk allow all
##cache_peer_access courage.bristol-cyps.org.uk allow all

forwarded_for off

url_rewrite_program /usr/bin/squidGuard -c /var/lib/squidguard/squidGuard.conf

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30 startup=5 idle=10
auth_param ntlm keep_alive on

acl authdUsers proxy_auth REQUIRED
#acl authdUsers ident REQUIRED

acl unchecked_sites dstdomain /var/lib/squidguard/db/BEC/alwaysAllowed/domains

#acl localhost src 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl HTTPS proto HTTPS

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 81  # Jamie 'Fish lips' Oliver
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 4433        ## VPN
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

### HTTPS busting bit!!!
##ssl_bump client-first all
ssl_bump server-first all
sslproxy_cert_error allow all

## Or may be deny all according to your company policy
# sslproxy_cert_error deny all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/squid3/ssl_db -M 4MB
sslcrtd_children 5

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
#http_access deny CONNECT SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow unchecked_sites
http_access allow localhost
http_access allow authdUsers
http_access allow localnet
#http_access deny all
always_direct allow HTTPS
#never_direct allow all

strip_query_terms off

#logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
#logformat common %>a 

Re: [squid-users] squid 3.3.8 running away

2014-01-31 Thread Mr J Potter
Hi Eleizer,

thanks for getting back to me.

OK - yes. It's a VMware virtual machine with 2 interfaces - 1 to the
outside world, one with the clients on. It's 2-processor, 4GB RAM,
Debian wheezy 64-bit.

I can't set it up as router only as it isn't a gateway, and I'm not
keen on rejigging the network to test this, but traffic is likely to
be ~10-20Mbps when it dies, connections per second in the hundreds.

'Kills it' means CPU usage hits 99.8-101% and stays there, and it stops
forwarding requests.

there is nothing I can see in the logs (3.2 gave a message about file
descriptors)

I've been through all this before with a squid 3.2 box, and the
solution then was file descriptors. I've not set up disk caching on
this setup (or at least I haven't got a line setting about disk cache
- does it create one by default?). My next thing to try would be
adding a disk cache maybe...


Regards,

Jim Potter
BEC Network Manager


On 31 January 2014 09:58, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey Mr,

 I might not find the solution in one second, but..
 What interfaces do you have?
 What is the network load?
 Have you tried to use SMP on this machine?
 In order to analyze the basic traffic size you can use the machine as a
 router only based on linux.
 This would give you the basic picture of the network traffic load.
 The basic information is RPS or Connections Per Second.

 What are the symptoms of kills it?
 If you do have some logs that will describe it will help to see them.

 Thank,
 Eliezer


 On 31/01/14 11:48, Mr J Potter wrote:

 Hi all,

 I'm trying to roll out SSLBump internet filtering. I've got it all
 working fine under test conditions, but squid grinds to a halt using
 100% CPU.

 I've gone through all the comments on this online, and it all seems to
 point to file descriptors. (I think I've fixed this previously by
 setting this value with squid 3.2).

 In my current setup I don't have any disk cache, and adding
 file_descriptors doesn't fix it. It only seems to go when load hits a
 certain threshold. It can run fine all day, but when I add more
 traffic, it just dies. I'm pretty sure it's nothing to do with SSLBump,
 as I use this server for all youtube traffic via SSLBump (it's fine),
 but when I put everything else through with no SSL bump, the load
 seems to kill it.

 ... and it doesn't seem to be using the 2GB RAM allocated to it for
 cache. the whole machine only uses ~750MB.

 Any idea anyone? config files and details below...

 thankyou

 Jim Potter
 BEC Network Manager

 SNIP


Re: [squid-users] logging issues

2013-05-08 Thread Mr J Potter
Works for me!

A few notes for anyone who needs them below.

Thanks again everyone.

Jim
UK

Issues/gotchas:
It doesn't work behind parent proxies.
It works with NTLM and ident
You need your own certificate authority on all clients.

To build squid3.2 on debian 7:
dependencies: install everything so you can build squid3.1 from source
get squid 3.2 source and build with:
./configure \
--prefix=/srv/squid32 \
--sysconfdir=/srv/squid32/conf \
--localstatedir=/srv/squid32/var \
--enable-auth \
--enable-auth-ntlm=SSPI,smb_lm \
--enable-ssl \
--enable-ssl-crtd \
--enable-icap-client

Follow instructions on creating a CA from:
http://www.mydlp.com/how-to-configure-squid-3-2-ssl-bumping-dynamic-ssl-certificate-generation/

Here's my config

cache_effective_user proxy

#cache_peer caffreys.bristol-cyps.org.uk parent 3128 3130 default
cache_peer courage.bristol-cyps.org.uk parent 3128 3130 default
#no-delay
#no-query no-digest no-netdb-exchange
## default

#cache_peer_access caffreys.bristol-cyps.org.uk allow all
cache_peer_access courage.bristol-cyps.org.uk allow all

forwarded_for off

url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf

#auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
#auth_param ntlm children 20 startup=0 idle=1

#acl authdUsers proxy_auth REQUIRED
acl authdUsers ident REQUIRED


acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl HTTPS proto HTTPS

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

## HTTPS busting bit!!!
ssl_bump allow all
sslproxy_cert_error allow all

# Or may be deny all according to your company policy
# sslproxy_cert_error deny all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /srv/squid32/libexec/ssl_crtd -s /srv/squid32/var/lib/ssl_db -M 4MB
sslcrtd_children 5


# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow authdUsers
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access allow all

always_direct allow HTTPS
never_direct allow all

#emulate_httpd_log on
strip_query_terms off
#log_fqdn on

logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt

dns_nameservers 10.15.244.8 10.15.244.13

# Squid normally listens to port 3128
#http_port 3128
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB key=/srv/squid32/ssl/private.pem cert=/srv/squid32/ssl/public.pem
icp_port 3130

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /srv/squid32/var/cache/squid 3000 16 256

# Leave coredumps in the first cache dir
coredump_dir /srv/squid32/var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320


#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

On 8 May 2013 08:39, Mr J Potter jpotter...@because.org.uk wrote:
 Hi Amos,

 Ok i'll try that. Thanks for your advice.

 Jim

 On May 7, 2013 4:16 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 On 8/05/2013 2:56 a.m., Mr J Potter wrote:

 Hi Amos,

 OK - SPDY is new to me...

 I work in a school, and I'm trying to filter/monitor specific Google
 apps (allow mail+docs, block google plus+searching among other tasks).
 I've done this with squidguard in the past, but Google seem to use
 these HTTPS CONNECT methods more

[squid-users] logging issues

2013-05-07 Thread Mr J Potter
Hi all,

I'm having a problem with filtering user access specifically with
Google (Mail, docs, calendars etc) - it looks to me like not all the
requests the client makes are showing up in squid access log.

As far as I can tell, AJAX requests aren't logged, and I know google
are big AJAX fans, so I reckon this is the first place to look - is
there any way of telling squid to log AJAX requests in the access.log
file?

thanks in advance,

Jim
UK


Re: [squid-users] squid3 + squidguard on Debian

2013-01-29 Thread Mr J Potter
I have a similar setup (Debian 6 + squidGuard), and I've had similar trouble
with SquidGuard not behaving. I've found it has trouble with subdomains - if
you have e.g. these 2 entries in the same blacklist file:

uploaded.net
thing.uploaded.net

it won't block one or the other (I can't remember the exact behaviour), but
it caused me trouble.

hope that helps

Jim Potter

On 29 January 2013 10:37, Amos Jeffries squ...@treenet.co.nz wrote:

 On 29/01/2013 9:38 p.m., Andre Lorenz wrote:

 Hello all,

 Actually I'm a little bit confused.

 I have a Debian 6 system, set up with squid3 and squidGuard.
 Now squidGuard is not blocking all domains/sites inside the db.

 If I open a blocked site in the browser I can access it.
 For example, I have blocked the domain uploaded.net. I can reach it via
 the browser, but if I run on the command line:
 echo "http://fra-7m18-stor08.uploaded.net - - GET" | /usr/bin/squidGuard
 -c /etc/squid/squidGuard.conf

 I'm getting the redirect statement:
 http://www.google.com -/- - GET


 Firstly, have you tried calling it with the line Squid would have used,
 with IP addresses etc. all in place?


 Secondly, this is not a redirect statement from SquidGuard. This is a
 re-write statement.

 The result of redirect statement is a 30x redirection message to the
 client.
 The result of a re-write statement is a 200 OK response to the client with
 body content supplied by the alternative server (think Phishing attack or
 response hijacking).



 if i run ps -ef| grep squid i'm getting
 root 31513 1  0 09:07 ?00:00:00 /usr/sbin/squid3 -YC -f
 /etc/squid3/squid.conf
 proxy31515 31513  8 09:07 ?00:00:49 (squid) -YC -f
 /etc/squid3/squid.conf
 proxy31517 31515  0 09:07 ?00:00:00 (squidGuard) -c
 /etc/squid/squidGuard.conf
 proxy31518 31515  0 09:07 ?00:00:00 (squidGuard) -c
 /etc/squid/squidGuard.conf
 proxy31519 31515  0 09:07 ?00:00:00 (squidGuard) -c
 /etc/squid/squidGuard.conf
 proxy31520 31515  0 09:07 ?00:00:00 (squidGuard) -c
 /etc/squid/squidGuard.conf
 proxy31521 31515  0 09:07 ?00:00:00 (squidGuard) -c
 /etc/squid/squidGuard.conf

 which is obviously correct.

 I also rebuilt the database several times.
 Any ideas?


 Contact the SquidGuard developers or use support channels?

 SquidGuard is not part of the Squid Project, and not really supported
 here.

 Amos


Re: [squid-users] slow reconfigure on squid3

2012-07-04 Thread Mr J Potter
Hi all,

thanks for your responses...

versions - I use the standard ones with Debian squeeze (2.7.stable9 and 3.1.6)

Yes, there are lots of helpers - 25 NTLM helpers and 10 squidGuard
helpers, so this could account for the slow reconfigure.

Upgrading to 3.2 seems like a good bet - are there ready-rolled squid
3.2 debs available for Squeeze or do I have to make my own?

We currently run squid with 3 different flavours of authentication -
NTLM for PCs, ident for Macs and digest for the guest network - so we have
3 distinct squid setups running on our proxy server. Would it be worth
setting these all up as non-caching, then set up a parent caching
server, or will setting them up as cache peers make them share their
caches at all?

cheers

Jim
UK

On 2 July 2012 14:44, Marcus Kool marcus.k...@urlfilterdb.com wrote:
 Squid reconfigure can indeed take a long time. Especially when Squid
 uses lots of memory and starts helpers.  Starting helpers takes a
 large amount of kernel resources when Squid is large, e.g. 2+ GB
 since it forks itself and replaces its copy by a new process.  The
 fork can take a long time. If you use a URL rewritor you can
 easily have 24 or more of them and this makes 24 copies of a large
 process.

 How large is squid ?
 Can you post the output of
ps -o pid,stime,sz,vsz,rss,args -C squid

 I wrote a test program to test the performance of forking X times
 a large process. I can post it if you are interested.
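Marcus's test program isn't reproduced in the thread; a minimal sketch of the same idea, assuming Linux (`os.fork` is POSIX-only) and with heap size and helper count as tunable placeholders, could look like:

```python
import os
import time

def time_forks(n_forks: int, heap_mb: int) -> float:
    """Return total milliseconds spent forking n_forks children from a
    process holding roughly heap_mb MB of heap - a rough proxy for a
    large Squid spawning its helpers."""
    ballast = bytearray(heap_mb * 1024 * 1024)  # simulate a big Squid heap
    start = time.perf_counter()
    for _ in range(n_forks):
        pid = os.fork()
        if pid == 0:
            # Child exits immediately, like a helper that exec()s at once.
            os._exit(0)
        os.waitpid(pid, 0)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    del ballast
    return elapsed_ms

if __name__ == "__main__":
    # 24 forks mirrors the "24 or more" URL-rewriter helpers mentioned above.
    print(f"24 forks from ~128 MB: {time_forks(24, 128):.1f} ms")
```

Comparing the timing at different heap sizes shows how much of the slow reconfigure is simply fork cost on a large process.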

 Marcus



 On 07/02/2012 05:08 AM, Mr J Potter wrote:

 Hi all,

 Does anyone have any tips on how to fix this issue:

 We've just moved to squid3 from squid2, and now when we do squid3 -k
 reconfigure we get about 30 seconds of squid refusing/failing to
 forward requests while it rejigs itself. I don't know if this is
 squid3 rescanning the cache or doing something with squidguard (we
 have a fairly complex+large squidguard setup)? I don't think this
 happened with squid2.

 What can we do to make this less noticeable?

 - make it reconfigure faster?
 - multiple squid servers - can we do failover somehow (either proxy
 DNS record points to them both, or they automatically redirect (is
 this what cache peers are for?))?
 - go back to squid 2 - I didn't see any end user benefits of squid3
 over squid2...

 any help would be greatly appreciated.

 thanks

 Jim Potter
 UK





[squid-users] slow reconfigure on squid3

2012-07-02 Thread Mr J Potter
Hi all,

Does anyone have any tips on how to fix this issue:

We've just moved to squid3 from squid2, and now when we do squid3 -k
reconfigure we get about 30 seconds of squid refusing/failing to
forward requests while it rejigs itself. I don't know if this is
squid3 rescanning the cache or doing something with squidguard (we
have a fairly complex+large squidguard setup)? I don't think this
happened with squid2.

What can we do to make this less noticeable?

- make it reconfigure faster?
- multiple squid servers - can we do failover somehow (either proxy
DNS record points to them both, or they automatically redirect (is
this what cache peers are for?))?
- go back to squid 2 - I didn't see any end user benefits of squid3
over squid2...

any help would be greatly appreciated.

thanks

Jim Potter
UK


Re: [squid-users] HTTP 407 responses

2012-02-15 Thread Mr J Potter
Hi Amos,

Thanks for your help on this...

I've had to change tack on this in light of what you have said and
have now got NTLM authentication working.

- any form of http authentication is going to kick up a login box -
there is no way round this, right?

With NTLM, I am now getting the NTLM login prompt 3 times before it
lets me in (apparently this is normal).


Can you recommend the best/least bad approach to go for here? I'm
setting up a guest wireless system, and I just want a way for
(non-domain) devices to get a chance to log in for an internet
connection, but all the ways I've found have major flaws.


- LDAP basic authentication works fine but is insecure
- LDAP digest requires a new type of password hash to be set up in my
directory services
- NTLM requires 3 login attempts

Or do I move away from http authentication entirely?

thanks in advance,

Jim
UK

On 13 February 2012 22:25, Amos Jeffries squ...@treenet.co.nz wrote:
 On 14.02.2012 04:15, Mr J Potter wrote:

 Hi team,

 I'm trying to set up an authenticating squid proxy with a nice login box
 rather than the one the browser pops up with a HTTP 407 request... Does
 anyone know how to do this? The main reasons for this are (1) to make it
 look nice (2) so that I don't have to tell people to put in DOMAIN\user
 into the box, (3) put some instructions as to what is going on and (4) to
 add a limited guest login option.


 (1) is not supported by any of the web specifications at this point. Someone
 in the IETF had a nice proposal to allow headers to be set from form tag
 fields in HTML. I'm not sure where that went, at the time I saw it was still
 looking for support to get to the Draft stage.

 (2) is a feature of the AD or Samba PDC backend. They can be set to require
 the DOMAIN part or add a default value if missing.

 (3) permitting the server to determine what gets displayed on the login area
 opens it to phishing vulnerabilities. For most of the auth schemes the realm
 parameter is used by browsers after some heavy input validation as part of
 the title or descriptive text of the login popup. If you set it to a sane
 value the popup is self-explanatory to all users.
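For the basic and digest schemes the realm is just a squid.conf parameter; an illustrative sketch (the wording here is made up, not from the thread):

```
# Browsers show this string, after validating it, in their own trusted
# login popup - so it can carry the short instruction from point (3):
auth_param digest realm Guest wireless: log in with your college username
```

NTLM has no realm parameter, so this only helps for basic/digest setups.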




 This is where I am so far...

 - I've got NTLM authentication working
 - I've got a nice login page in ERR_CACHE_ACCESS_DENIED
 and ERR_ACCESS_DENIED
 - I've still got to write the bit to authenticate people, but I'm not too
 worried about that.

 Highlights from my squid.conf file look like this:

 auth_param ntlm program /usr/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 45


 acl authdUsers proxy_auth REQUIRED


 http_access deny !authdUsers   ### Kicks up a 407 request
 http_access deny all

 The second last line is the tricky one - I can see why the line

 http_access allow authdUsers


 would trigger a 407 request, but I'd hoped the deny ! option would get
 around this.


 Nope. Both lines REQUIRE auth challenge before they can be tested. The deny
 line ending in an auth ACL also produces auth challenge when it matches. The
 browser takes it from there.

 The modern browsers all protect themselves against attackers by discarding
 the response body (your page) on 407/403 status and using a safe popup they
 own and can trust for secure user interaction.


 What you can do instead of altering the form and popup is present a session
 with splash page (your instructions) ahead of the login popup like so:

  external_acl_type session ...
  acl doneSplash external session

  # URI to display splash page with your instructions (no login form allowed
 though)
  acl splash url_regex ^http://example.com/Splash

  # link ACL to splash page
  deny_info 307:http://example.com/Splash?r=%s doneSplash

  # let splash page go through no limits.
  http_access allow splash

  # bounce to splash page if not logged in yet AND this is a new session
  http_access deny !authdUsers !doneSplash

  # do login
  http_access allow authdUsers


 The page Splash gets passed the original URI in r=%s, which it can use to
 present a continue/ accept link after reading.

 Amos


Re: [squid-users] HTTP 407 responses

2012-02-15 Thread Mr J Potter
Hi Alex,

I've got it working fine on domain members. I should have explained
better - I'm setting up a guest wireless network in a school, so all
devices that attach will be personal, non domain, and as a rule I
won't get the chance to configure them before they connect.

The devices that I want to connect will be mostly student laptops,
smartphones and visitors' devices.

The plan is to set up proxy DHCP autoconfig and/or transparent port
forwarding trick to point people towards the proxy (https is likely
not to like this, I know), but I want a way of getting people to say
who they are and give them internet access accordingly. I'm using
squid/squidguard to great effect for the domain machines, and I'd like
to use the same set of rules for folks connecting their own devices.

How has anyone else done this? The options I've found are basic,
digest or NTLM, all of which have major issues in terms of security,
configuration or usability respectively.

Jim


 Jim,

 If you are getting login prompts like this (especially 3 times) it's likely
 your NTLM auth is not working.

 In normal use with NTLM on domain member hosts, you should never see them,
 not even when opening the browser for the first time. The browser should
 pass through authentication from the logged on Windows session.

 I would check the permissions on the winbindd_privileged folder (usually in
 /var/run/samba or /var/cache/samba) and make sure your squid user can write
 to it. Some distros actually change the permissions on that folder after
 winbind has started in the init script.

 You might also want to check winbind is working by issuing wbinfo -u and
 wbinfo -g  - you should get a list of domain users and groups.

 Alex


Re: [squid-users] Squid block list

2012-02-15 Thread Mr J Potter
I've been using squidguard for years. It's great - you can block/allow
by user, workstation, time or url, and rewrite urls (for instance I
can force all google image searches to be safe, and block certain
search terms).

I looked at dansguardian too but squidguard won my vote at the time
(about 5 years ago). I don't know about any others.

and there's what looks like an OK front end for it too (squidguard
manager) or a webmin module but I've never used them in anger.
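The blocking and rewriting described above all lives in squidGuard.conf; a minimal sketch, with paths, category names and the time window all as illustrative placeholders:

```
# Hedged sketch of a squidGuard.conf - the list names, paths and the
# time window are examples, not taken from this thread.
dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

time school-hours {
    weekly mtwhf 08:00 - 16:00
}

dest blocked-sites {
    domainlist blocked/domains
    urllist    blocked/urls
}

rewrite safesearch {
    # force Google safe search by appending safe=active to searches
    s@(google\..*/search\?.*)@\1\&safe=active@i
}

acl {
    students within school-hours {
        pass !blocked-sites all
        rewrite safesearch
        redirect http://proxy.example/blocked.html
    }
    default {
        pass all
    }
}
```

The acl blocks are where per-user/per-workstation policies attach, which is what makes the same rule set reusable for both domain machines and guest devices.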

Jim

On 15 February 2012 13:51, Muhammad Yousuf Khan sir...@gmail.com wrote:
 Hello All,

 I need a suggestion, as I am new to the squid world and I don't want
 to waste my time on R&D rather than on the right solution, which
 should be scalable and reliable. Like every Squid administrator, I
 want to restrict unwanted website access during working hours. I know
 how to implement squid, how to use squid.conf and how to block
 destinations, and I also know there are websites that provide
 block-list databases for squid and keep them consistently updated.
 What I want is a tool, or supporting tool, that can update the data
 files at least weekly; I will manage the implementation of the rules
 on my own. Kindly suggest something.

 Thank you.

 MYK
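A weekly update like MYK describes is usually just a cron job around the list provider's download. A hedged crontab sketch; the URL, paths and user are placeholders to substitute for your chosen provider and distro layout:

```
# /etc/cron.d/blocklist-update - illustrative sketch only.
# Every Monday at 04:00: fetch the list tarball, unpack it, rebuild the
# squidGuard databases, fix ownership, then have squid restart helpers.
0 4 * * 1  root  wget -q -O /tmp/blacklists.tar.gz http://lists.example.com/blacklists.tar.gz && tar xzf /tmp/blacklists.tar.gz -C /var/lib/squidguard/db && squidGuard -C all && chown -R proxy:proxy /var/lib/squidguard/db && squid3 -k reconfigure
```

`squidGuard -C all` recompiles the plain-text lists into the db files squidGuard actually reads, and the final reconfigure makes the running helpers pick them up.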


[squid-users] HTTP 407 responses

2012-02-13 Thread Mr J Potter
Hi team,

I'm trying to set up an authenticating squid proxy with a nice login box
rather than the one the browser pops up with a HTTP 407 request... Does
anyone know how to do this? The main reasons for this are (1) to make it
look nice (2) so that I don't have to tell people to put in DOMAIN\user
into the box, (3) put some instructions as to what is going on and (4) to
add a limited guest login option.

This is where I am so far...

- I've got NTLM authentication working
- I've got a nice login page in ERR_CACHE_ACCESS_DENIED
and ERR_ACCESS_DENIED
- I've still got to write the bit to authenticate people, but I'm not too
worried about that.

Highlights from my squid.conf file look like this:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 45


acl authdUsers proxy_auth REQUIRED


http_access deny !authdUsers   ### Kicks up a 407 request
http_access deny all

The second last line is the tricky one - I can see why the line

http_access allow authdUsers


would trigger a 407 request, but I'd hoped the deny ! option would get
around this.

Any ideas? I've promised I'll get this going by the end of the week...

any help would be really handy.

thanks

Jim Potter
UK