Re: [squid-users] error: #error .... is not 32-bit or 64-bit

2014-09-03 Thread Amos Jeffries

On 3/09/2014 11:42 p.m., Santosh Bhabal wrote:
> Amos,
> 
> My machine has already installed C++ compiler.
> 
> [root@localhost ~]# rpm -qa | grep -i c++ 
> libstdc++-4.4.7-4.el6.x86_64 gcc-c++-4.4.7-4.el6.x86_64 
> libstdc++-devel-4.4.7-4.el6.x86_64
> 

Perhaps it's not installed correctly then, or not in your available
PATH, because autoconf is searching for one of these compiler
binaries on your system:

configure:5701: checking for g++
configure:5731: result: no
configure:5701: checking for c++
configure:5731: result: no
configure:5701: checking for gpp
configure:5731: result: no
configure:5701: checking for aCC
configure:5731: result: no
configure:5701: checking for CC
configure:5731: result: no
configure:5701: checking for cxx
configure:5731: result: no
configure:5701: checking for cc++
configure:5731: result: no
configure:5701: checking for cl.exe
configure:5731: result: no
configure:5701: checking for FCC
configure:5731: result: no
configure:5701: checking for KCC
configure:5731: result: no
configure:5701: checking for RCC
configure:5731: result: no
configure:5701: checking for xlC_r
configure:5731: result: no
configure:5701: checking for xlC
configure:5731: result: no
configure:5755: checking for C++ compiler version
... command not found
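
For example (paths and package names assume a stock CentOS 6 setup),
you can verify what the build environment actually sees with:

  which g++
  g++ --version
  echo $PATH

If "which g++" finds nothing even though the RPM is installed, the
PATH used by your build shell is the thing to fix.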

Amos


Re: [squid-users] out-of-band authentication (like ident but better)

2014-09-02 Thread Amos Jeffries

On 2/09/2014 10:02 p.m., James Harper wrote:
> I mentioned at the tail of another email, I'd like to see a better
> out-of-band authentication protocol than ident. Such a protocol
> would have:
> 
> . a single connection from squid over which all identification
>   requests travel. Not one connection per request as with ident.
> . two way authentication (psk or certificate)
> . encryption (tls)
> . full connection description (src ip, src port, dst ip, dst port) so
>   that interception proxy works (ident only exchanges port numbers)
> . optional reverse connection (client connects to squid rather than
>   squid connecting to client - only useful for a single proxy server
>   but means no firewall exceptions on the client)
> . probably still use port 113 (not that it really matters...)
> 
> Does such a thing exist already?

The "external" ACL type runs a (or several) helper programs on
persistent connections which perform arbitrary out-of-band operations
and return to Squid the authorization approval to allow/deny the
transaction.
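
A minimal sketch of the idea (the helper path and ACL names here are
made up for illustration):

  external_acl_type ident_check ttl=60 children-max=5 \
      %SRC %SRCPORT %DST %PORT /usr/local/bin/my-ident-helper
  acl identified external ident_check
  http_access allow identified

The helper reads one request per line on stdin and answers OK/ERR
(optionally with key=value pairs), staying connected between lookups.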

There is also Negotiate authentication. The security tokens are set up
out-of-band and used securely in-band.

I also have a patch implementing OAuth 2.0 Bearer authentication for
Squid. Although it needs some polishing and clients supporting
proxy-auth Bearer seem to be a rarity still. Sponsorship welcome to
get those final steps completed.

Amos



Re: [squid-users] Forward Proxy Mode HTTPS Connect with invalid server certificate

2014-09-01 Thread Amos Jeffries

On 30/08/2014 6:55 a.m., Eduard Deffner wrote:
> Dear Team!
> 
> My problem is about using squid in the forward proxy mode. Squid
> Version 3.3.8 under openSUSE 13.1 in conjunction with squidguard 
> The general function everythings works well. But if any client in
> our LAN try to connect to a https-Site that have a invalid server
> certificate (the URL of the cert is other than the URL of the site)
> the proxy refuse the connection. If the cert is valid everything is
> OK.

If you are using proper forward proxy mode and CONNECT requests then
the proxy has nothing to do with the HTTPS. All the proxy does is open
a TCP connection to the server and pump bytes back and forth between
client and server machines.
 Anything related to the connection's TLS is strictly between the client
and server software which are communicating over that tunnel.

Amos



Re: [squid-users] error: #error .... is not 32-bit or 64-bit

2014-09-01 Thread Amos Jeffries

Your machine is missing a C++ compiler.

Squid is known to build on g++ and usually clang or Intel CC. Others
are a best-effort situation.

Amos



Re: [squid-users] error: #error .... is not 32-bit or 64-bit

2014-09-01 Thread Amos Jeffries

On 2/09/2014 1:21 a.m., Santosh Bhabal wrote:
> Yes :)

Can you mail me the config.log and include/autoconf.h files produced
by the Squid ./configure please?

Amos




Re: [squid-users] error: #error .... is not 32-bit or 64-bit

2014-09-01 Thread Amos Jeffries

On 2/09/2014 12:53 a.m., Santosh Bhabal wrote:
> CentOS release 6.3 (Final) x86_64
> 

Did you run ./configure before building?

We built Squid on CentOS 6 and 7 without problems before releasing.

Amos


Re: [squid-users] error: #error .... is not 32-bit or 64-bit

2014-09-01 Thread Amos Jeffries

On 1/09/2014 9:34 p.m., Santosh Bhabal wrote:
> Hello Experts,
> 
> I am getting below error while compiling Squid 3.4.7 :
> 
> [root@localhost squid-3.4.7]# make all
> Making all in compat
> make[1]: Entering directory `/opt/squid-3.4.7/compat'
> source='assert.cc' object='assert.lo' libtool=yes \
>   DEPDIR=.deps depmode=none /bin/sh ../cfgaux/depcomp \
>   /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib -I../src -I../include -I../libltdl -c -o assert.lo assert.cc
> libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib -I../src -I../include -I../libltdl -c assert.cc -o .libs/assert.o
> In file included from ../compat/compat.h:51, from ../include/squid.h:66, from assert.cc:32:
> ../compat/types.h:134:2: error: #error size_t is not 32-bit or 64-bit
> In file included from ../compat/compat.h:81, from ../include/squid.h:66, from assert.cc:32:
> ../compat/stdvarargs.h:31:2: error: #error XX **NO VARARGS ** XX
> In file included from ../compat/compat.h:80, from ../include/squid.h:66, from assert.cc:32:
> ../compat/compat_shared.h:97: error: field 'ru_stime' has incomplete type
> ../compat/compat_shared.h:98: error: field 'ru_utime' has incomplete type
> In file included from ../compat/compat_shared.h:219, from ../compat/compat.h:80, from ../include/squid.h:66, from assert.cc:32:
> ../compat/strtoll.h:14: error: 'int64_t' does not name a type
> assert.cc: In function 'void xassert(char*, char*, int)':
> assert.cc:36: error: 'stderr' was not declared in this scope
> assert.cc:36: error: 'fprintf' was not declared in this scope
> assert.cc:37: error: 'abort' was not declared in this scope
> make[1]: *** [assert.lo] Error 1
> make[1]: Leaving directory `/opt/squid-3.4.7/compat'
> make: *** [all-recursive] Error 1


Interesting errors. What operating system are you building on and are
you cross-building for any particular other system?

Amos



Re: [squid-users] Re: parent problem - TCP_MISS/403 from parent

2014-08-31 Thread Amos Jeffries

On 1/09/2014 12:30 a.m., Dmitry Melekhov wrote:
> On 29.08.2014 18:46, Dmitry Melekhov wrote:
>> On 29.08.2014 18:17, babajaga wrote:
>>> I remember a bug, I detected in my favourite squid2.7, also in
>>> a sandwiched config, with another proxy inbetween: It was not
>>> possible to have both squids listen on 127.0.0.1:a/b; had to
>>> use 127.0.0.1:a; 127.0.0.2:b
>> 
>> That's what I have- one listens on 8090 another one on 8092. So
>> this is not problem. What I can't understand now what is
>> difference between firefox request - which works, and squid
>> request- on  which squid says that it is missed, I have to look
>> into traffic :-)
>> 
> OK, I see correct requests from squid to parent squid. But looks
> like they are http 1.1. But, as I said before, havp works, and it
> use 1.0, as I see too. Looks like bug, so I'll report one asap :-)
> 

That is not itself a bug. HTTP/1.1 is the latest version of HTTP
supported by Squid, and sending 1.1 on outgoing requests is required.

Amos


Re: [squid-users] Fwd: access.log destinatin server ip

2014-08-29 Thread Amos Jeffries

Have you considered using the Squid native log format instead of the
"Apache common" web server log format?

Squid native format is designed for logging information about both
client and server.

Amos



Re: [squid-users] Re: source address ip spoofing

2014-08-28 Thread Amos Jeffries

On 29/08/2014 11:09 a.m., Julian wrote:
> Hi Eliezer,
> 
> I understand what you say, but we use external IPs for our network
> hosts (nothing in 192.168.x.x range).

How is any of the software along the HTTP traffic route supposed to
know that?


> What I need is to direct the traffic to our proxy using the wpad
> mechanism (which works just fine for us) but to make our proxy
> completely transparent to external destinations. I think TPROXY
> Squid might be a way to do it,  but we only use Squid 2.7 now.

The IP spoofed by TPROXY is the IP received on the TCP packets; it is
not necessarily the end user's IP.

TPROXY is also incompatible with manual and WPAD configuration. TPROXY
traffic has CVE-2009-0801 security checks applied to it, which on
explicitly configured proxy traffic will lead to infinite forwarding
loops as the proxy transparently relays to its own IP.


Going back to your original post there are two incorrect statements
which may be confusing you...

1)
> Proxy Auto-Discovery on our users browsers is able to get activated
> by a wpad.dat file which transparently redirects our users HTTP
> requests
to our
> Proxy Server.

WPAD is sometimes called "transparent configuration". Emphasis on
configuration. There is no redirect happening at all, anywhere.

The client software is explicitly using "Automatic Discovery" (the
__AD) to locate the proxy it is going to transfer through, without the
user having to do anything.

> 
> The way our Proxy Server works now is by hiding the IP address of
> users getting directed to our machine.

What the proxy does is called "Application Layer Gateway". From the
outside it looks a bit like what NAT does, the TCP layer IP:port
address changes to one for the gateway service (aka Squid) so that TCP
reply packets are able to return to the proxy.


What you want is just not possible at all with Squid-2.7 and unlikely
to be possible with any newer release either. Consider what happens
when the proxy generates a new connection: TCP SYN packets with the
client IP on them ... the TCP SYN-ACK packets get sent straight back
to that client IP ... then what? connection hangs.

> 
> We want to keep running with our Proxy in the same deployment
> scenario, except that we need external Internet destinations to see
> the requests coming from our hosts IP(s) instead of our Proxy.
> 

HTTP is designed to operate with multiple intermediaries in similar
ways to how SMTP and DNS operate with
proxies/relays/recursive resolvers. The X-Forwarded-For header is
how HTTP relays details about the *sequence* of client IPs which are
used to reach the origin server.
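
For example, a request relayed through two proxies might reach the
origin server carrying (addresses invented for illustration):

  X-Forwarded-For: 192.0.2.10, 198.51.100.7

where 192.0.2.10 is the original client and 198.51.100.7 the first
proxy; the connecting IP the server sees is the last proxy in the chain.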
 

So, why are you requesting this? What real problem are you trying to
solve that makes you think about spoofing the client IP?

Amos



Re: [squid-users] Whether we can redirect video traffic to squid 2.7 via porting mirror

2014-08-28 Thread Amos Jeffries

On 29/08/2014 4:17 a.m., johnzeng wrote:
> 
> I see , but it will be normal way , we can redirect full http
> traffic via route-map or Wccp ,
> 
> but if we redirect part video traffic only , porting mirror + 302
> http packet will be safe way .
> 

No, port mirroring is the most unsafe way to configure this, and it is
not possible with HTTP agents.

HTTP is designed to work with proxy intermediaries like Squid as part
of the messaging system. Perhaps you need to read the documentation on how
HTTP works with Squid, and maybe also the documentation on how
caches operate in HTTP.

Amos



Re: [squid-users] Fresh Freebsd 10 and squid 2.7.9 "Try to set MAKE_JOBS_UNSAFE" error

2014-08-28 Thread Amos Jeffries

On 29/08/2014 1:02 a.m., Soporte Técnico wrote:
> I´m trying to install squid 2.7.9 in a fresh new freebsd 10 amd64
> and make install show this error.
> 
> Any idea?

Contact the FreeBSD package maintainers?

Also, you could try installing a newer Squid release. 3.3 is available
in FreeBSD ports.


Amos



Re: [squid-users] source address ip spoofing

2014-08-27 Thread Amos Jeffries

On 28/08/2014 7:28 a.m., Julian wrote:
> Hello Squid Dev. Team and Users,
> 
> I need your advice on a Squid deployment scenario.
> 
> We have deployed on our network a physical machine with Squid 2.7
> listening on port 8080. Proxy Auto-Discovery on our users browsers
> is able to get activated by a wpad.dat file which transparently
> redirects our users HTTP requests to our Proxy Server.
> 
> The way our Proxy Server works now is by hiding the IP address of
> users getting directed to our machine.
> 
> Question is... can we have our Proxy Server working in the same
> deployment scenario but doing Source IP Address Spoofing and making
> content requests that do not hide users IP(s)?

The client's original IP is transmitted in the X-Forwarded-For HTTP
header by default. Unless your proxy admin has configured that header
to be deleted or turned off, your application should be able to find it there.

http://www.squid-cache.org/Doc/config/forwarded_for/
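
The relevant squid.conf directive looks like this (a sketch; "on" is
the default behaviour):

  forwarded_for on       # append the real client IP (default)
  #forwarded_for delete  # strip the header entirely
  #forwarded_for off     # send "unknown" instead of the client IP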

Amos



[squid-users] Squid 3.4.7 is available

2014-08-27 Thread Amos Jeffries

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.4.7 release!


This release is a security and bug fix release resolving a major
vulnerability and several other issues found in the prior Squid releases.


The major changes to be aware of:


* CVE-2014-3609 : SQUID-2014:2 Denial of service in request processing

  http://www.squid-cache.org/Advisories/SQUID-2014_2.txt

This vulnerability allows any client who is allowed to use the proxy to
perform a denial of service attack on Squid. This issue is particularly
impacting reverse-proxy installations.

  A simple squid.conf workaround is available for quick use and those
  unable to upgrade. See the Advisory notice for details.


* Various SSL-bump certificate mimic errors

These bugs show up most notably for users of Firefox complaining about
a sec_error_inadequate_key_usage error. They are caused by Squid
generating a fake certificate with the wrong X.509 version details for
the TLS extensions being mimiced in that certificate.


* Bug #4080: worker hangs when client identd is not responding

This bug shows up as the Squid worker process hanging. It occurs only
when IDENT protocol is enabled and the client identd fails to respond.
IDENT protocol use may be enabled either for access control or logging
purposes.


* Portability improvements

As always we seek to support as many popular operating systems as
possible. This release contains several updates to fix build issues and
increase the supported operating systems and CPU architectures.



 All users of Squid are urged to upgrade to this release as soon as
possible.



 See the ChangeLog for the full list of changes in this and earlier
 releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.4/RELEASENOTES.html
when you are ready to make the switch to Squid-3.4

Upgrade tip:
  "squid -k parse" is starting to display even more
   useful hints about squid.conf changes.

This new release can be downloaded from our HTTP or FTP servers

 http://www.squid-cache.org/Versions/v3/3.4/
 ftp://ftp.squid-cache.org/pub/squid/
 ftp://ftp.squid-cache.org/pub/archive/3.4/

or the mirrors. For a list of mirror sites see

 http://www.squid-cache.org/Download/http-mirrors.html
 http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
http://bugs.squid-cache.org/


Amos Jeffries



[squid-users] [ADVISORY] SQUID-2014:2 Denial of service in request processing

2014-08-27 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2014:2
__

Advisory ID:SQUID-2014:2
Date:   August 28, 2014
Summary:Denial of service in request processing
Affected versions:  Squid 3.x -> 3.3.12
                    Squid 3.4 -> 3.4.6
Fixed in version:   Squid 3.3.13, 3.4.7
__

http://www.squid-cache.org/Advisories/SQUID-2014_2.txt
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3609
__

Problem Description:

 Due to incorrect input validation in request parsing Squid is
 vulnerable to a denial of service attack when processing
 Range requests.

__

Severity:

 This problem allows any trusted client to perform a denial of
 service attack on the Squid service.

__

Updated Packages:

 This bug is fixed by Squid version 3.3.13 and 3.4.7

 In addition, patches addressing this problem for stable releases
 can be found in our patch archives:

Squid 3.0:
http://www.squid-cache.org/Versions/v3/3.0/changesets/squid-3.0-9201.patch

Squid 3.1:
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10488.patch

Squid 3.2:
http://www.squid-cache.org/Versions/v3/3.2/changesets/squid-3.2-11828.patch

Squid 3.3:
http://www.squid-cache.org/Versions/v3/3.3/changesets/squid-3.3-12680.patch

Squid 3.4:
http://www.squid-cache.org/Versions/v3/3.4/changesets/squid-3.4-13168.patch


 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

Squid-3.x:

 All Squid-3.x versions up to and including 3.3.12 are vulnerable
 to the problem.

Squid-3.4:

 All Squid-3.4 versions up to and including 3.4.6 are vulnerable
 to the problem.

__

Workaround:

 Add the following access control lines to squid.conf above any
 http_access allow lines:

 acl validRange req_header Range \
  ^bytes=([0-9]+\-[0-9]*|\-[0-9]+)(,([0-9]+\-[0-9]*|\-[0-9]+))*$

 acl validRange req_header Request-Range \
  ^bytes=([0-9]+\-[0-9]*|\-[0-9]+)(,([0-9]+\-[0-9]*|\-[0-9]+))*$

 http_access deny !validRange

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the squid-users@squid-cache.org mailing list is your primary
 support point. For subscription details see
 http://www.squid-cache.org/Support/mailing-lists.html.

 For reporting of non-security bugs in the latest release
 the squid bugzilla database should be used
 http://bugs.squid-cache.org/.

 For reporting of security sensitive bugs send an email to the
 squid-b...@squid-cache.org mailing list. It's a closed list
 (though anyone can post) and security related bug reports are
 treated in confidence until the impact has been established.

__

Credits:

 The vulnerability was discovered by Matthew Daley.

__

Revision history:

 2014-08-26 11:54 GMT Initial Report
 2014-08-26 18:28 GMT CVE Assignment
 2014-08-27 15:18 GMT Patches and Packages Released
__
END



[squid-users] Squid 3.3.13 is available

2014-08-27 Thread Amos Jeffries

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.3.13 release!


This release is a security fix release resolving a major vulnerability
found in the prior Squid releases.

REMINDER: This and older releases are already deprecated by
  Squid-3.4 availability.


The major changes to be aware of:

* CVE-2014-3609 : SQUID-2014:2 Denial of service in request processing

  http://www.squid-cache.org/Advisories/SQUID-2014_2.txt

This vulnerability allows any client who is allowed to use the proxy to
perform a denial of service attack on Squid. This issue is particularly
impacting reverse-proxy installations.

  A simple squid.conf workaround is available for quick use and those
  unable to upgrade. See the Advisory notice for details.



 See the ChangeLog for the full list of changes in this and earlier
 releases.

 All users are urged to upgrade as soon as possible.


Please remember to run "squid -k parse" when testing an upgrade to a new
version of Squid. It will audit your configuration files and report
any identifiable issues the new release will have in your installation
before you "press go". We are still removing the infamous "Bungled
Config" halting points and adding checks, so if something is not
identified please report it.
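
For example (the squid.conf path is whatever your installation uses):

  squid -k parse -f /etc/squid/squid.conf

prints a line for each directive it finds problematic and exits
non-zero on fatal errors, so it is easy to script into an upgrade
procedure.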



Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.3/RELEASENOTES.html
when you are ready to make the switch to Squid-3.3

Upgrade tip:
  "squid -k parse" is starting to display even more
   useful hints about squid.conf changes.

This new release can be downloaded from our HTTP or FTP servers

 http://www.squid-cache.org/Versions/v3/3.3/
 ftp://ftp.squid-cache.org/pub/squid/
 ftp://ftp.squid-cache.org/pub/archive/3.3/

or the mirrors. For a list of mirror sites see

 http://www.squid-cache.org/Download/http-mirrors.html
 http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
http://bugs.squid-cache.org/


Amos Jeffries



Re: [squid-users] illegal instruction with 3.4.6 (no problem with 3.4.4)

2014-08-27 Thread Amos Jeffries

On 28/08/2014 3:01 a.m., Alfredo Rezinovsky wrote:
> 
> There's no log output, it just exists. No coredump either. strace
> output is useful ?

Not really for this kind of thing.

You will have to run under a debugger, or enable core dumps in the OS
settings for that. Some details on how to do that can be found at
http://wiki.squid-cache.org/SquidFaq/BugReporting



The big changes since 3.4.4 were an autotools version bump, and C++11
detection.

You can try adding --disable-arch-native and see if it is the -march
build options going bad in your compiler.
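
A sketch of that test (the other ./configure options are whatever you
normally use):

  ./configure --disable-arch-native [...your usual options...]
  make clean && make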

If that does not work the problem is probably in the autotools scripts
for detection of CPU type. That would be an autotools (autoconf)
project problem.


Another possibility is if you auto-apply patches which are now
applying as "reversed"; that can make them actually cause a problem.

Amos



Re: [squid-users] Re: kerberos_ldap_group stopped working with subdomains

2014-08-27 Thread Amos Jeffries

On 26/08/2014 7:44 a.m., Markus Moeller wrote:
> Hi Pavel,
> 
> Can you remove line 263 from support_krb5.cc and recompile ?  It is
> fixed in the trunk for 3.5.
> 
> The line is safe_free(principal_name);
> 
> Regards Markus
> 

For the record, this fix is now in 3.4.7.

Amos



Re: [squid-users] Re: Squid not listening on any port

2014-08-27 Thread Amos Jeffries

On 27/08/2014 5:19 p.m., israelsilva1 wrote:
> israelsilva1 wrote
>> 
>> babajaga wrote
>>> 1) Pinger exiting. You might try to disable pinger in
>>> squid.conf pinger_enable off
>>> 
>>> Just for completeness: Pls, publish squid.conf, without
>>> comments. Anonymized.
>> Disabled and it started listening!
>> 
>> Thanks a lot...
> 
> Now the question is: Why did pinger fail and should I bother fixing
> it?

The most common reasons for failure involve opening its sockets. Either the
binary needs root owner:group and permission to open the necessary
sockets, or the IPv6 socket for ICMPv6 fails to open on true dual-stack or
split-stack operating systems.
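
A quick way to check (the pinger path varies by distribution and
./configure prefix):

  ls -l /usr/lib/squid/pinger

The binary normally needs to be owned by root and setuid, e.g.
"chown root /usr/lib/squid/pinger && chmod u+s /usr/lib/squid/pinger",
so it can open the raw ICMP sockets.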

Amos



Re: [squid-users] Very slow initial reply

2014-08-27 Thread Amos Jeffries
On 27/08/2014 8:50 p.m., Bruno Guerreiro wrote:
> Hello.
> Thanks for your reply.
> DNS was also my first thought, but what surprises me is that on the same 
> server, nginx or direct are ok, but squid takes almost a minute. Also 
> nslookup and dig work fast.
> And this happens everytime. But i'll look for DNS failures on the server
> Anyone has any other idea?

In Squid you can configure dns_timeout for how long it will wait in
total for DNS results to come back. This will make an error response
happen faster for this type of DNS error.
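
For example, to cap the total DNS wait at 10 seconds:

  dns_timeout 10 seconds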

As Eliezer mentioned, the fix is in your local DNS server config or the
upstream domains DNS server config.
 * The recursive DNS server used by Squid apparently has a long timeout
waiting for a response from the domain's NS-1 before moving on to its
NS-2, etc. Most of the actual lag you are seeing is coming from that.

 * The proper fix is for the upstream domain admin to fix their NS
servers of course. If you contact them about which servers are having
trouble that might get it fixed for everyone.

Amos



Re: [squid-users] FW: squid 3.3.10 always gives TCP_MISS for SSL requests

2014-08-25 Thread Amos Jeffries
On 26/08/2014 3:29 p.m., Lawrence Pingree wrote:
> I'm not sure if this is right or not, but wouldn't your refresh patterns
> need to have the "ignore-private" to cache ssl? Amos may know better, but I
> don't see that option specified in your "All Files" refresh_patterns.

HTTPS is not particularly private in the HTTP sense. It is just regular
HTTP traffic wrapped in underlying transport security encryption. It
does, however, have a security scope difference from plain HTTP due to that
encryption.

That scope difference is handled by the URL scheme portion. For example
Squid must not and will not HIT on a http:// URL in cache for https://
request of otherwise identical URL, and vice versa.

From the administrative viewpoint there is a higher risk with HTTPS of
application designers breaking things and making vulnerable software
simply by not understanding the above. There is high pressure to get
privacy protection right with "insecure" http://, but only weak pressure
for "secure" https://, on things like OAuth traffic and eCommerce checkout
pages where they should have sent Cache-Control:private or no-store regardless.

Amos



Re: [squid-users] FW: squid 3.3.10 always gives TCP_MISS for SSL requests

2014-08-25 Thread Amos Jeffries
On 26/08/2014 12:11 p.m., Ragheb Rustom wrote:
> Dear All,
> 
> I have lately installed squid 3.3.11 on Centos 6.5 x86_64 system. I have
> configured it as a transparent SSL_BUMP proxy. All is working well I can
> browse all SSL websites successfully after I have imported my generated CA
> file. The problem is that no matter how many times I request the SSL
> websites I always get a TCP_MISS in the squid access log. Among other
> websites I am trying to cache yahoo.com, facebook and youtube but most
> websites are always being served directly from source nothing is being
> served for the squid proxy. Please find below my configuration files. I
> deeply value any help on this matter.
> 

For a start configure this and re-check:
  strip_query_terms off

That will allow your logs to show the full URL Squid is considering for
cache HIT/MISS. You may find that a few hundred seemingly identical log
entries are in fact highly variable in the query string portion. Such
requests cannot be combined/HIT.

> squid.conf file:
> 
> acl snmppublic snmp_community public
> acl bamboe src 10.128.135.0/24
> #uncomment noway url, if necessary.
> #acl noway url_regex -i "/etc/squid/noway"
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 1935  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> 
> 
> acl CONNECT method CONNECT
> #http_access deny noway
> http_access allow manager localhost
> http_access allow bamboe
> http_access deny manager

The above http_access bits...

> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports

... should be in here.

> http_access allow localhost
> htcp_access deny all
> miss_access allow all

That is the default; you should get faster operation by removing
miss_access entirely.
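
With your existing ACL names, the suggested ordering is roughly (a
sketch; the final "deny all" is the usual closing rule):

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow manager localhost
  http_access deny manager
  http_access allow bamboe
  http_access allow localhost
  http_access deny all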
> 
> # NETWORK OPTIONS
> http_port 8080
> http_port 8082 intercept
> https_port 8081 intercept ssl-bump generate-host-certificates=on
>   dynamic_cert_mem_cache_size=8MB cert=/etc/squid/myconfigure.pem
>   key=/etc/squid/myconfigure.pem
> ssl_bump server-first all
> always_direct allow all
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> 

Avoid DONT_VERIFY_PEER as much as possible. It is "considered harmful"
for security. It is also usually unnecessary if the machine's trusted CA
certificates are set up properly and up to date.

> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 8MB
> sslcrtd_children 5
> hierarchy_stoplist cgi-bin ? .js .jsp mivo.tv 192.168.10.29 192.168.10.30 static.videoku.tv
> acl QUERY urlpath_regex cgi-bin \? .js .jsp 192.168.10.29 192.168.10.30 youtube.com indowebster.com static.videoku.tv
> no_cache deny QUERY
> 

Aha!

  "no_cache deny QEURY"

The "no_" part is obsolete syntax. What this line actually does is force
all URLs with a query string ('?') to never be cached.

This is the source of your MISS log entries. Remove it to get at least a
chance at some HITs.

Also, hierarchy_stoplist is not useful in your configuration. You can
probably remove it entirely. If your squid complains when its missing,
set it to the default:
   hierarchy_stoplist /cgi-bin/ \?


> #  MEMORY CACHE OPTIONS
> cache_mem 6000 MB
> maximum_object_size_in_memory 16 KB
> memory_replacement_policy heap GDSF
> 
> # DISK CACHE OPTIONS
> cache_replacement_policy heap LFUDA
> cache_dir aufs /cache1 30 64 256
> store_dir_select_algorithm least-load
> minimum_object_size 16 KB
> maximum_object_size 2 GB

Put these global default min/max size limits above the cache_dir lines.
Recent but outdated Squid like your 3.3 had a bug where the
maximum_object_size is ignored if configured after cache_dir. Position
for it does not normally matter, so placing it first always works and
avoids needless annoyance.
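
In other words (sizes copied from your config):

  minimum_object_size 16 KB
  maximum_object_size 2 GB
  cache_dir aufs /cache1 30 64 256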


> cache_swap_low 97
> cache_swap_high 99
> 
> #LOGFILE OPTIONS
> access_log stdio:/var/log/squid/access.log
> cache_log /var/log/squid/cache.log
> cache_store_log none
> cache_swap_log /cache1/swap.state
> logfile_rotate 5
> log_icp_queries off
> buffered_logs off
> 
> #OPTIONS FOR TUNING THE CACHE


 Since Squid-3.2 some of the override and ignore options have changed.

* ignore-no-cache is obsolete. Traffic with Cache-Control:no-cache will
be cached properly by default.
 - remove this option from your config file.

* combining reload-into-ims and ignore-reload is harmful.
 - ignore-reload makes Squid either HIT or MISS, rendering the
revalidate (CLIENT_REFRESH) performance optimizations enabled by
reload-into-ims useless.

* ignore-private is harmful. Traffic with Cache-Control:private has
mandatory revalidation. What can be cached will be cached properly by
default; this option only causes all private data to be cached.

Re: [squid-users] Fwd: New to FreeBSD, Squid experiencing request loops

2014-08-24 Thread Amos Jeffries
On 25/08/2014 2:22 p.m., orientalsniper wrote:
> nginx is serving as reverse proxy listening on 10.2.0.4-10.2.0.9 HTTP
> for some games patches.
> 
> pfSense serves as firewall, captive portal and among other services.
> 
> By NAT, I think you mean pfSense is doing it? pfSense is 10.0.0.1,
> 10.1.0.1 and 10.2.0.1.
> I have a NAT rule in pfSense to redirect all LAN2 HTTP traffic to
> 10.2.0.2 (port 3128).
> 

Great, that clarifies a lot.

The problem is that NAT is being done on a separate box from Squid.
Current Squid attempts to be as fully transparent as possible in
intercept/transparent mode. That includes ensuring the domain/IP the
client was contacting is actually the one Squid is using too - that is
mandatory due to CVE-2009-0801 issues.

With NAT on a separate box Squid only knows its own IP as the
destination. So on the outbound things get looped.


What you need to do to fix this is move the NAT rule changing port to
3128 onto the Squid VM. Have pfSense route port 80 traffic with 10.2.0.2
as the gateway router (policy routing) unless it came from 10.2.0.2 in
the first place.

After that your proxy should be usable. But there are some additional
security issues that need resolving as well:

 1) renumber the interception port in Squid to something other than
3128. Squid needs to use 3128 for forward-proxy traffic from the
clients, manager API access, icons, etc. (see the sketch after item 2).

 2) update the Squid VM firewall to prevent external machines directly
accessing the intercept port you choose. It only needs to be used by
packets between Squid and the firewall on the same machine. If any
outside machines do access it you will have looping problems and
potentially a DoS happening.
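
A sketch of both pieces (interface name and intercept port number are
examples only, adjust to your VM):

  # squid.conf on the FreeBSD VM
  http_port 3128                # forward-proxy / manager traffic
  http_port 3129 intercept      # NAT'd port-80 traffic only

  # pf.conf on the same VM
  rdr pass on em0 inet proto tcp from 10.2.0.0/24 to any port 80 -> 127.0.0.1 port 3129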


> WORK-PC (10.1.0.3) ACL was redudant and I forgot to delete it, since
> it's part of 10.0.0.0/8
> 
> Regarding "tcp_outgoing_address   127.0.0.1" that was one of my
> attempts to fix my issue, I've tried 10.2.0.2 also.

You should not need to set outgoing IP at all. Remove that before
testing the above changes.


HTH
Amos


Re: [squid-users] Fwd: New to FreeBSD, Squid experiencing request loops

2014-08-24 Thread Amos Jeffries
On 25/08/2014 12:37 p.m., orientalsniper wrote:
> Hello all, I'm having the same problem as this guy:
> 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-transparent-proxy-with-one-nic-access-denied-problem-td4664881.html
> 
> When I try to access a website I get a Access Denied by Squid message
> and in the access.log I see I'm getting a forwarding loop error.
> 
> But we have different network setup and he's using Ubuntu. I'm running Squid 
> 3.4
> 
> I'm running 2 VM's: 1 for pfSense and the other for FreeBSD (nginx + squid)
> 
> I have the following network:
> WAN1 + WAN2 in pfSense
> 10.0.0.1/24 (LAN1 in pfSense)
> 10.1.0.1/24 (LAN2 in pfSense)
> 10.2.0.1/24 (LAN3 in pfSense) > (connecting to nginx+squid[10.2.0.2] VM)
> 

What is nginx in the mix for?
 and what is pfSense doing?
 where are the NATs happening? **


** you must have at least three layers of NAT for that described setup
to work:
  clients-->10.2.0.2 (for delivery to nginx)
  10.2.0.2:80 -> 10.2.0.2:3128 (nginx outgoing MITM capture to Squid)
  127.0.0.1 -> 10.2.0.2
  10.2.0.2 -> Internet

> My squid.conf:

(elided the comments for you so we can read it easier.)

> 
> acl whatismyip dstdomain whatismyip.cc
> http_access allow whatismyip
> 
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> acl WORK-PC srcdomain 10.1.0.3

10.1.0.3 is not a domain name. It is an IP address. Use src ACL type.
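
i.e. something like:

  acl WORK-PC src 10.1.0.3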

> 
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> 
> http_access allow localnet
> http_access allow localhost
> 
> http_port 10.2.0.2:3128 intercept
> 
> cache_dir ufs /var/squid/cache/squid 100 16 256
> coredump_dir /var/squid/cache/squid
> 
> refresh_pattern ^ftp:  1440   20%   10080
> refresh_pattern ^gopher:   1440   0%   1440
> refresh_pattern -i (/cgi-bin/|\?) 0   0%   0
> refresh_pattern .  0   20%   4320
> cache_effective_user squid
> cache_effective_group squid
> check_hostnames off
> unique_hostname squidcache
> dns_nameservers 8.8.8.8
> tcp_outgoing_address   127.0.0.1
> 

127.0.0.1 is not a globally routable IP address. Nor can it be NAT'ed to
one. Outgoing traffic from Squid to any other host is guaranteed to fail
delivery.


Amos


Re: [squid-users] Only checking URLs via Squid for SSL

2014-08-24 Thread Amos Jeffries
On 24/08/2014 9:32 p.m., Nicolás wrote:
> Hi Amos,
> 
> El 24/08/2014 0:52, Amos Jeffries escribió:
>> On 24/08/2014 1:00 a.m., Nicolás wrote:
>>> Hi,
>>>
>>> I'm using Squid 3.3.8 as a transparent proxy, it works fine with HTTP,
>>> but I'd like to avoid cacheing HTTPS sites, and just determine whether
>>> the requested URL is listed as denied on Squid (via 'acl dstdom_regex'
>>> for instance), otherwise just make squid act as a proxy to the URL's
>>> content. Is that even possible without using SSL Bump? Otherwise, could
>>> you recommend the simplest way of achieving this?
>>>
>> No it is only possible with bumping. For transparent interception of
>> port 443 (HTTPS) use squid-3.4 with server-first bumping at minimum,
>> preferrably squid-3.5 with peek-n-splice when it comes out.
>>
>> If you bump and still do not want to cache for some reason the cache
>> access control can be used like so:
>>
>>acl HTTPS proto HTTPS
>>cache deny HTTPS
>>
>>
>> Amos
>>
> 
> I finally installed Squid 3.4.6 from source with --enable-ssl and
> --enable-ssl-crtd options and put the corresponding configuration line
> for ssl-bump:
> 
> https_port 0.0.0.0:3130 intercept ssl-bump
> cert=/opt/certs/server.crt key=/opt/certs/server.key
> 
> This cert is self-signed and evidently it produces the
> 'sec_error_untrusted_issuer' error on the clients' browsers. Would that
> warning desappear if I used a recognized CA to sign that cert that would
> match the Squid box's FQDN, or is the installation of the autosigned
> cert on every client's browser the only option here?

If the browser does not trust the signing CA it will warn.

Amos


Re: [squid-users] Only checking URLs via Squid for SSL

2014-08-23 Thread Amos Jeffries
On 24/08/2014 1:00 a.m., Nicolás wrote:
> Hi,
> 
> I'm using Squid 3.3.8 as a transparent proxy, it works fine with HTTP,
> but I'd like to avoid cacheing HTTPS sites, and just determine whether
> the requested URL is listed as denied on Squid (via 'acl dstdom_regex'
> for instance), otherwise just make squid act as a proxy to the URL's
> content. Is that even possible without using SSL Bump? Otherwise, could
> you recommend the simplest way of achieving this?
> 

No it is only possible with bumping. For transparent interception of
port 443 (HTTPS) use squid-3.4 with server-first bumping at minimum,
preferrably squid-3.5 with peek-n-splice when it comes out.

If you bump and still do not want to cache for some reason the cache
access control can be used like so:

  acl HTTPS proto HTTPS
  cache deny HTTPS


Amos



Re: [squid-users] Re: Filter squid cached files to multiple cache dirs

2014-08-23 Thread Amos Jeffries
On 24/08/2014 6:06 a.m., dxun wrote:
> So, to sum it all up (please correct me if I'm wrong) - it is possible to
> have multiple cache_dirs AND instruct a single squid instance to place files
> in those caches according to file size criteria using
> min_file_size/max_file_size params on the cache_dir directive. Also,
> maximum_object_size directive is basically a global max_file_size param
> applied to all cache_dirs, so it has to be specified BEFORE any particular
> cache_dir configuration.

Sort of.
 * default value for maximum_object_size is 4MB, which is used until you
change it.
 * maximum_object_size is the default value for cache_dir max-size=N
parameter. Its current value is applied only if you omit that parameter
from a cache_dir.

For example (not a good idea to actually do it like this):
  # default maximum_object_size is 4 MB
  cache_dir ufs /a 100 16 256

  maximum_object_size 8 MB
  cache_dir ufs /b 100 16 256

  maximum_object_size 2 MB
  cache_dir ufs /c 100 16 256

Is the same as writing:

  cache_dir ufs /a 100 16 256 max-size=4194304
  cache_dir ufs /b 100 16 256 max-size=8388608
  cache_dir ufs /c 100 16 256 max-size=2097152


> 
> If that is the case, I am wondering - is this separation actually
> inadvisable for any reason?

It is advised for better performance on high throughput configurations
with multiple cache_dir.
It does not matter for other configurations.


> Is there a better way to separate files
> according to their transiency and underlying data store speed?

Squid automatically separates out the most recently and frequently used
objects for storage in the high speed RAM cache. It also monitors the drive
I/O stats for overloading. There is just no differentiation between HDD
and SSD speeds (yet) - although indirectly, via the loading checks, SSD
can see more object throughput than HDD.

rock cache type is designed to reduce disk I/O loading on objects with
high temporal locality ("pages" often requested or updated together in
bunches), particularly if they are small objects.

Transiency is handled in memory, or by RAM caching objects for a while
before they go near disk. This is controlled by
maximum_object_size_in_memory; objects over that limit will have disk
I/O regardless of transiency in older Squid. Upcoming 3.5 releases only
involve disk for them if they are actually cacheable.


> What would you recommend?


The upstream recommendation is to configure maximum_object_size, then your
cache_dir entries ordered by the size of objects going in there (smallest to
largest).

Also, use a Rock type cache_dir for the smallest objects. It can be
placed on the same HDD as an AUFS cache; working together, a rock store for
small objects and AUFS for large objects can utilize larger HDD sizes
better than either cache type alone.
 * 32KB object size is the limit for rock in current stable releases;
that is about to be increased with squid-3.5.
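
A rough sketch of that combination (paths and sizes here are
placeholders, not recommendations):

  # small objects go to rock, everything bigger to AUFS
  cache_dir rock /cache/rock 2000 max-size=32768
  cache_dir aufs /cache/aufs 20000 16 256 min-size=32769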

Based on theory and second-hand reports: I would only use an SSD for a
rock type cache, with the block size parameter of the rock cache sized to
match the SSD sector or page size. That way writing a single rock
block/page only bumps one SSD sector/page further towards its
lifetime write limit.

Amos


Re: [squid-users] Nudity Images Filter for Squid

2014-08-23 Thread Amos Jeffries
On 23/08/2014 7:08 a.m., Stakres wrote:
> Hi Guys,
> 
> We just released a new free tool for Squid:  Nudity Images Filter for Squid
>   

It's probably best to avoid PHP for publicly distributed helpers. At least
if you want them to be used widely.

PHP CLI is an unusual interpreter to have installed, and has known
issues with engine timeouts closing the helper scripts unexpectedly
while in use by Squid.

Amos



Re: [squid-users] negotiate_wrapper returns asteriks

2014-08-22 Thread Amos Jeffries
On 22/08/2014 10:00 p.m., Melvin Williams wrote:
> Hello, 
> 
> I hope some can help me. I want to use squid for authentication and send the 
> username to dansguardian. Here's the config of the authentiction program:
> 
> auth_param negotiate program /usr/lib/squid3/negotiate_wrapper_auth -d --ntlm 
> /usr/bin/ntlm_auth --diagnostics --helper-protocol=gss-spnego --domain=DOMAIN 
> --kerberos /usr/lib/squid3/negotiate_kerberos_auth -r -d -s GSS_C_NO_NAME
> 
> I always get "negotiate_wrapper: Return 'AF = * username" where username is 
> the currently logged in user. Where is this asteriks comming from. I can't 
> map 
> "* username" to dansguardian filter-groups. 

Hmm. Would this happen to be an AF response coming from the ntlm_auth
helper by chance?
 is it sending back "AF * username" ?


Amos


Re: [squid-users] blockVirgin Works for CONNECT but Custom Response does not work

2014-08-22 Thread Amos Jeffries
On 22/08/2014 7:14 p.m., Rafael Akchurin wrote:
> Hello Jatin,
> 
> Unfortunately I cannot answer your question. But why would you like to bump 
> the connection when admin *explicitly* specified it as *not to be bumped*. I 
> think eCap adapter here acts as a passive beast just scanning what admin 
> tells it to, not what it thinks it needs to scan.
> 

Indeed.

Jatin I think you need to check exactly what response the eCAP adapter
is producing for these CONNECT requests. The status code, content-type
header and message body all need to be in agreement to have any chance
at all of working. You may even have to use a 302/303 status to redirect
to a different URL which has the content in it.

Keep in mind also that the mainstream popular browsers simply will not
display anything except their own error pages in response to
unsuccessful CONNECT. Perhaps a bit on the extreme side, but that is
how they have chosen to prevent security vulnerabilities which have been
abused badly in the past.

Amos



Re: [squid-users] Re: Individual delay pools and youtube

2014-08-21 Thread Amos Jeffries
On 22/08/2014 12:24 a.m., fpap wrote:
> You are very right Antony!
> 
>> 1. are all the youtube videos which go over-limit HTTPS connections?
> Yes!
> 
>> 2. can the client go over-limit with any other URL provided it's HTTPS? 
> Yes!
> 
> So... is there any thing to do in order to limit the bandwidth of clients
> downloading/viewing videos over htpps? If not possible in squid, I accept
> any other ways.
> 
> Thank you very much!


I recommend you use the operating system QoS functionality. It is
more fine-grained than Squid delay_pools. Squid can provide TOS markings
on connections to servers via tcp_outgoing_tos for those controls to
work with.
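
A sketch of the Squid side (the ACL definition is an example only):

  acl video_sites dstdomain .youtube.com .googlevideo.com
  tcp_outgoing_tos 0x20 video_sites

The OS traffic shaper can then match the 0x20 TOS/DSCP mark on Squid's
outgoing connections and rate-limit just that class of traffic.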

Amos


Re: [squid-users] Poor cache

2014-08-21 Thread Amos Jeffries
On 21/08/2014 11:56 p.m., Délsio Cabá wrote:
> Hi,
> 
> I have just update to the latest version, and the results are clear:
> cat  /var/log/squid/access.log  | awk '{print $4}' | sort | uniq -c | sort -rn
>  486561 TCP_MISS/200
>   89612 TCP_MISS/304
>   52123 TCP_MEM_HIT/200
>   40408 TCP_MISS/206
>   36267 TCP_MISS/302
>   20904 TCP_MISS/204
>   12246 TCP_IMS_HIT/304
>   12171 TCP_MISS/404
>   10533 TCP_MISS/301
>9145 TCP_MISS/000
>6004 TCP_OFFLINE_HIT/200
> ..
> 
> It's said that MISS/301, MISS/303 are not cacheable without special
> instructions.
> 
> What are those SPECIAL instructions?

http://tools.ietf.org/html/rfc7234#section-3

301 is a status code defined as cacheable by default.
303 depends on the other conditions.

Amos



Re: [squid-users] Does Squid send connection information of client and server to c-icap?

2014-08-21 Thread Amos Jeffries
On 21/08/2014 7:48 p.m., m.shahverdi wrote:
> Hi,
> Does squid send client and server IPs and ports to c-icap when sending
> request or response to it?

Why would those be relevant? ICAP is for content filtering, not packet
routing.

Squid-3.2 and later send custom annotation headers with whatever has
been configured.
 http://www.squid-cache.org/Doc/config/adaptation_meta/
 http://www.squid-cache.org/Doc/config/adaptation_send_client_ip/
 http://www.squid-cache.org/Doc/config/adaptation_masterx_shared_names/
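
For example (a sketch; requires Squid-3.2 or later):

  adaptation_send_client_ip on
  adaptation_send_username on

which adds X-Client-IP and X-Client-Username headers to the ICAP
request so the c-icap service can read them.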

Amos



Re: [squid-users] acl limit

2014-08-21 Thread Amos Jeffries
On 21/08/2014 7:16 p.m., k simon wrote:
> Hi,Lists,
> 
>I plan to  use "acl isp-xxx dst" to define tons of route prefix over
> 27,000 items. Does it reasonable?

Squid should be able to handle it, but it's probably best to aggregate
the ranges first to minimize the work necessary per request.

Squid takes start-end/mask syntax which can range across odd numbers of
CIDR boundaries. So a clean CIDR prefix listing has potentially far more
entries than strictly necessary for Squid config files.
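
For a list that size it is usually easier to keep the prefixes in an
external file (path is an example):

  acl isp-xxx dst "/etc/squid/isp-xxx.prefixes"

with one aggregated range or CIDR prefix per line in that file.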

Amos



Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 21/08/2014 2:37 p.m., sq...@proxyplayer.co.uk wrote:
> 
>> which one?
> It's client --> unbound --> if IP listed in unbound.conf --> forwarded
> to proxy --> page or stream returned to client
> 
> For others it's client --> unbound --> direct to internet with normal DNS
> 

Replace "forwarded to proxy" with "IP address forged as proxy".
Which is the source of the problem, your proxy does not have any TLS
security certificates or keys to handle the HTTPS traffic properly, and
no way to identify what the real server actually is.

Squid does not yet support receiving SNI, nor do many client software
support sending it. So the only way this can work is with packets
*routed* through the Squid device. The unbound setup you have cannot work.


What I am looking for is the network topology over which the TCP
connections are supposed to flow. VPN connection, LAN connection, WAN
connection, etc.
 This is necessary in order to identify which device is the suitable
gateway to set up a "tunnel" to the proxy. Then we can look at what types
of tunnel are appropriate for your situation.

Amos



Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amos Jeffries
On 21/08/2014 2:23 p.m., Lawrence Pingree wrote:
> No, I mean they are intentionally blocking with a configured policy,
> its not a bug. :) They have signatures that match Via headers and
> forwarded for headers to determine that it's squid. This is because
> many hackers are using bounces off open squid proxies to launch web
> attacks.
> 

That still sounds like a bug. Blocking on Squid's existence makes as much
sense as blocking all traffic with a UA header containing "MSIE" on the
grounds that 90% of web attacks come with that agent string.
The content inside those headers is also context specific; signature
matching will not work beyond a simple proxy/maybe-proxy determination
(which does not even determine non-proxy!).


A proposal came up in the IETF a few weeks ago that HTTPS traffic
containing Via header should be blocked on sight by all servers. It got
booted out on these grounds:

* the "bad guys" are not sending Via.

* the Via headers which do exist are being sent by "good guys" who obey the
specs but are otherwise literally forced (by law or previous TLS based
attacks) to MITM the HTTPS in order to increase security checking on that
traffic (ie. AV scanning).

Therefore, the existence of Via is actually a sign of *good* health in
the traffic and a useful tool for finding culprits behind the well
behaved proxies.
 Rejecting or blocking based on its existence just increases the ratio
of nasty traffic which makes it through, while simultaneously forcing
the "good guys" to become indistinguishable from "bad guys". Only the
"bad guys" get any actual benefit out of the situation.


Basically "via off" is a bad idea, and broken services (intentional or
otherwise) which force it to be used are worse than terrible.

Amos


Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:08 a.m., Lawrence Pingree wrote:
> Personally I have found that the latest generation of Next Generation
> Firewalls have been doing blocking when they detect a via with a
> squid header,

Have you been making bug reports to these vendors?
 Adding Via header is mandatory in HTTP/1.1 specification, and HTTP
proxy is a designed part of the protocol. So any blocking based on the
simple existence of a proxy is non-compliance with HTTP itself. That
goes for ports 80, 443, 3128, 3130, and 8080 which are all registered
for HTTP use.

However, if your proxy is emitting "Via: 1.1 localhost" or "Via: 1.1
localhost.localdomain" it is broken and may not be blocked so much as
rejected for forwarding loop because the NG firewall has a proxy itself
on localhost. The Via header is generated from visible_hostname (or the
OS hostname lookup) and is supposed to contain the visible public FQDN of
each server the message relayed through.

Amos


Re: [squid-users] Poor cache

2014-08-20 Thread Amos Jeffries
On 21/08/2014 6:05 a.m., Délsio Cabá wrote:
> Hi,
> Using version: Squid Cache: Version 3.1.10  (Centos RPM)
> 

Ah. The version itself is probably most of the problem.

3.1 does not cache traffic with Cache-Control:no-cache, which these days
makes up a large percentage (30-40%) of all traffic. That is resolved
in 3.2 and later, along with better caching of private and authenticated
traffic.

You can find details of newer CentOS RPM packages from Eliezer at
http://wiki.squid-cache.org/KnowledgeBase/CentOS

Amos



Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 21/08/2014 8:59 a.m., sq...@proxyplayer.co.uk wrote:
> why are you using unbound for this at all?
> 
> Well, we use a geo location service much like a VPN or a proxy.
> For transparent proxies, it works fine, squid passes through the SSL
> request and back to the client.
> For VPN, everything is passed through.
> But with unbound, we only want to pass through certain requests and some
> of them have SSL sites.
> Surely, there's a way to pass a request from unbound, and redirect it
> through the transparent proxy, returning it straight to the client?
> 

I'm not sure what you mean; unbound is a DNS server, it does not process
the HTTP protocol at all. All it does is tell the client where the *web
server* for a domain is located. But the client only needs to know which
route to use.

With a client connecting over WAN through a proxy you have:
 client --WAN--> proxy --> Internet
 client <--WAN-- proxy <-- Internet
plus for non-proxied traffic:
 client --WAN--> Internet
 client <--WAN-- Internet

With a client connecting over a VPN you have:
 client --VPN--> proxy --> Internet
 client <--VPN-- proxy <-- Internet
plus for non-proxied traffic:
 client --VPN--NAT--> Internet
 client <--VPN--NAT-- Internet

in both above cases the gateway router receiving WAN or VPN traffic is
responsible for the NAT/TPROXY/WCCP interception.

What I've gathered so far is that you are trying to achieve one of these:

A)
 client --VPN--> proxy --> Internet
 client <--VPN-- proxy <-- Internet
plus for non-proxied traffic:
 client --WAN--> Internet
 client <--WAN-- Internet


B)
 client --VPN--> proxy --> Internet
 client <--WAN-- proxy <-- Internet
plus for non-proxied traffic:
 client --VPN--> Internet
 client <--WAN-- Internet


which one?

Amos



Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:33 a.m., nuhll wrote:
> Some Logs:

These logs are showing a problem...


> ==> /var/log/squid3/cache.log <==
> 2014/08/20 19:33:19.809 kid1| client_side.cc(777) swanSong:
> local=192.168.0.1:3128 remote=192.168.0.125:62595 flags=1
> 2014/08/20 19:33:20.227 kid1| client_side.cc(777) swanSong:
> local=192.168.0.1:3128 remote=192.168.0.125:62378 flags=1
> 2014/08/20 19:33:20.232 kid1| client_side.cc(900) deferRecipientForLater:
> clientSocketRecipient: Deferring request
> http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
> 2014/08/20 19:33:20.232 kid1| client_side.cc(1518)
> ClientSocketContextPushDeferredIfNeeded: local=192.168.0.1:3128
> remote=192.168.0.125:62611 FD 29 flags=1 Sending next

This appears to be a client (192.168.0.125) connecting to what it thinks
is a regular forward-proxy port:
  http_port 3128
or
  http_port 192.168.0.1:3128


> 2014/08/20 19:33:20.235 kid1| client_side.cc(777) swanSong:
> local=192.168.0.1:3128 remote=192.168.0.125:62611 flags=1
> 2014/08/20 19:33:20.638 kid1| client_side.cc(777) swanSong:
> local=192.168.0.1:3128 remote=192.168.0.125:62669 flags=1
> 
> ==> /var/log/squid3/access.log <==
> 1408555999.808  10552 192.168.0.125 TCP_MISS/503 3899 GET
> http://dist.blizzard.com.edgesuite.net/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
> - HIER_DIRECT/192.168.0.4 text/html
> 1408556000.232   9976 192.168.0.125 TCP_MISS/503 3844 GET
> http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
> - HIER_DIRECT/192.168.0.4 text/html
> 1408556000.232   9975 192.168.0.125 TCP_MISS/503 3803 GET
> http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
> - HIER_DIRECT/192.168.0.4 text/html

This above shows Squid receiving various requests for blizzard.com
domains and relaying them to the web server at 192.168.0.4.

Do you actually have a blizzard.com web server running at 192.168.0.4?
 I don't think so.


> 1408556000.638406 192.168.0.125 TCP_MISS/200 1642 CONNECT
> dws1.etoro.com:443 - HIER_DIRECT/149.126.77.194 -
> 

It seems to me that you are mixing the HTTP traffic modes up.

Squid accepts traffic with two very different on-wire syntax formats,
and also with possibly mangled TCP packet details. These combine into 3
permutations we call traffic "modes".

1) forward-proxy (aka manual or auto-configured explicit proxy)
  - port 3128 traffic syntax designed for proxy communication. Nothing
special needed to service the traffic.

2) reverse-proxy (aka accelerator / CDN gateway)
  - port 80 traffic syntax designed for web server communication.
Message URLs need reconstructing and an origin cache_peer server is
expected to be explicitly configured.

3) interception proxy (aka transparent proxy)
  - port 80 traffic syntax and also possible TCP/IP mangling of the
packet IPs. Any mangling needs to be detected and undone, input
validation security checks applied, then the reverse-proxy URL
manipulations performed.
  NP: if the security checks fail caching will be disabled for request,
but it will still be serviced as a transparent MISS.
  NP2: if the security checks fail and the TCP packet details are broken
you will get 503 exactly as logged above.


What you need to do for a properly working proxy is ensure that:
* each mode of traffic is sent to a separate http_port opened by Squid.
 - you may use multiple port directives as long as each has a unique
port number.
* each http_port directive is flagged as appropriate to indicate the
traffic mode being received there.



From the logs above it looks to me like you are possibly intercepting
the blizzard traffic and NAT'ing it to a forward-proxy port 3128.

You probably need to actually configure this to get rid of the 503s:

 http_port 3128
 http_port 3129 intercept

and change your NAT rules to -j REDIRECT packets to port 3129. Leave
your DHCP rules sending traffic to port 3128.
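
A minimal sketch of the matching NAT rule, assuming the proxy runs on the box
receiving the LAN traffic (the interface name is only an example):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129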

Amos



Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-20 Thread Amos Jeffries
On 19/08/2014 3:42 a.m., nuhll wrote:
> Just to clarify my problem: I dont use it as a transparente proxy! I
> distribute the proxy with my dhcp server and a .pac file. So it gets used on
> all machines with "auto detection proxy"
> 

Your earlier config file posted contained:

  http_port 192.168.0.1:3128 transparent

transparent/intercept mode ports are incompatible with WPAD and PAC
configuration. You need a regular forward-proxy port (no "transparent")
for receiving that type of traffic.

This is probably a good hint as to what your problem actually is. The
logs you posted in other email are showing what could be the side effect
of this misunderstanding. I will reply to that email with details.

Amos



Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:29 a.m., nuhll wrote:
> Hello,
> thanks for your help.
> 
> I own a dhcp server which spread the proxy ip:port to all clients (proxy
> settings are default "search for") so all programs are using this proxy
> automatic for http requests.

Not quite. Only the applications which obey DHCP based WPAD
auto-configuration.

There is also DNS based WPAD, and lots of applications (Java based and
mobile apps mostly) which do not auto-configure at all.


> 
> I use Linux version 3.2.0-4-amd64 (debian-ker...@lists.debian.org) (gcc
> version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.60-1+deb7u3
> 
> I worked hard to upgrade to 3.3.8. Im not a linux guru. 
> 

:-( sorry. The Debian maintainer team has a 3.4 package almost ready but
it has been held up by other administrative details for a few months.

Amos



Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 7:22 a.m., Antony Stone wrote:
> On Wednesday 20 August 2014 at 21:08:03 (EU time), nuhll wrote:
> 
>> accel the sites i want to cache.
>>
>> But how? Information about this is crazy much
>>
>> http://wiki.squid-cache.org/SquidFaq/ReverseProxy
>>
>> But how to cache?
> 
> Simple answer - with a caching proxy server.
> 
> Longer answer - accelerator mode is incompatible with caching mode - you use 
> either one, or the other, but not both on the same proxy.

This is wrong. Acceleration and caching are simply separate features.
They are *independent*, not incompatible. Both forward- and reverse-
(accel) proxies can and do cache in exactly the same ways.

So nuhll,
 you will get exactly the same caching behaviour from Squid regardless
of using accel mode or a regular proxy port. Only transparent/intercept
mode has strange caching behaviours.

Amos



Re: [squid-users] Poor cache

2014-08-20 Thread Amos Jeffries
On 20/08/2014 9:21 a.m., Délsio Cabá wrote:
> Hi guys,
> Need some help on cache. Basically I do not see many caches.
> 
> root@c /]# cat  /var/log/squid/access.log  | awk '{print $4}' | sort |
> uniq -c | sort -rn
>   17403 TCP_MISS/200
>3107 TCP_MISS/304

 - objects in the client browser cache were used.

>1903 TCP_MISS/000

 - server was contacted but no response came back. This is bad. Seeing
it in such numbers is very bad.
 It is a strong sign that TCP window scaling, ECN or ICMP blocking
(Path-MTU discovery) issues are occurring on your traffic.


>1452 TCP_MISS/204

 - "204 no content" means there was no object to be cached.

>1421 TCP_MISS/206

 - Range request responses. Squid cannot cache these yet, but they
should be cached in the client browser and contribute to those 304
responses above.

>1186 TCP_MISS/302

 - along with the MISS/301, MISS/303 these are not cacheable without
special instructions.

> 659 TCP_MISS/503
> 641 NONE/400
> 548 TCP_MISS/301
> 231 TCP_OFFLINE_HIT/200

 - cached object used.

> 189 TCP_MISS/404
> 126 TCP_IMS_HIT/304

 - cached object found, but objects in the client browser cache were used.

> 112 TCP_MISS/504
>  68 TCP_MISS/401
>  56 TCP_MEM_HIT/200

 - cached object used.

>  50 TCP_SWAPFAIL_MISS/304

 - cached object found, but disk error occurred loading it. And the
client request was conditional. So object in client browser cache used
instead.

>  49 TCP_REFRESH_UNMODIFIED/200

 - cached objects found, mandatory update check required and resulted in
Squid cached object being delivered to client.

>  46 TCP_SWAPFAIL_MISS/200
>  39 TCP_MISS/500
>  36 TCP_MISS/502
>  34 TCP_REFRESH_UNMODIFIED/304

 - cached objects found, mandatory update check required and resulted in
client browser cache object being used.


>  31 TCP_MISS/403
>  25 TCP_MISS/400
>  19 TCP_CLIENT_REFRESH_MISS/200

 - cached object found, but client request forced a new fetch.

>  17 TCP_REFRESH_MODIFIED/200

- cached object found, mandatory update check resulted in a new object
being used.

>  11 NONE/417
>   9 TCP_MISS/303
>   6 TCP_HIT/000

 - cached object used, but client disconnected before it could be delivered.

>   5 TCP_MISS/501
>   5 TCP_HIT/200

 - cached object used.

>   4 TCP_MISS/202

 - this is usually only seen on POST or PUT, which are not cacheable by
Squid.

>   3 TCP_MISS/412
>   2 TCP_SWAPFAIL_MISS/000

 - cached object found, but a disk error occurred while loading it and the
client disconnected before a server response was received.

>   2 TCP_MISS/408
>   1 TCP_MISS/522
>   1 TCP_MISS/410
>   1 TCP_MISS/405
>   1 TCP_CLIENT_REFRESH_MISS/000

 - cached object found, but client request mandated an update check.
Then client disconnected before that was completed.



All the 4xx and 5xx status responses are only cacheable short term and
only if the server explicitly provides caching information. It looks
like the servers in your traffic are not providing that info (or not
correctly).


Also, this log counting does not account for what method each
transaction used. The cacheability of things like the 204 and 30x
responses depends on what method is involved.


So I see 19k MISS and 4k HIT. About 18% hit rate.


What version of Squid are you using?

Amos


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 20/08/2014 1:12 p.m., Eliezer Croitoru wrote:
> I wasn't sure but I am now.
> You are doing something wrong and I cannot tell what exactly.
> Try to share this script output:
> http://www1.ngtech.co.il/squid/basic_data.sh
> 
> There are missing parts in the whole setup such as clients IP and server
> IP, what GW are you using etc..
> 
> Eliezer


Expecting DNS-based forgery to hijack the connections is probably the
mistake.

When receiving HTTPS all Squid has to work with are the two TCP packet
IP addresses. If one of them is the client IP and the other is forged by
DNS (unbound), what server is to be contacted?

The hostname from the "accel" hack is buried inside the encryption which has
not yet arrived from the client. So Squid would have to decrypt some future
traffic in order to discover what server to contact right now to get the
cert details which need to be emitted in order to start decrypting that
future traffic. An impossible situation.
 But Squid is not aware of that; it just uses the TCP packet dst IP
(itself) and tries to get the server TLS certificate from there, entering
an infinite loop of lookups instead of a useful decryption.


proxyplayer.co.uk;
 why are you using unbound for this at all?

Amos



Re: [squid-users] what AV products have ICAP support?

2014-08-18 Thread Amos Jeffries
On 18/08/2014 9:30 p.m., Jason Haar wrote:
> Hi there
> 
> I've been testing out squidclamav as an ICAP service and it works well.
> I was wondering what other AV vendors have (linux) ICAP-capable
> offerings that could similarly be hooked into Squid?
> 
> Thanks
> 

http://www.icap-forum.org/icap?do=products&isServer=checked

Amos


Re: [squid-users] server failover/backup

2014-08-18 Thread Amos Jeffries
On 19/08/2014 9:09 a.m., Mike wrote:
> Question, when we copy the /etc/squid/passwd file itself from "server 1"
> to "server 2", and when using the same squid authentication, why does
> server 2 not accept the username and passwords in the file that works on
> server 1?
> Is that file encrypted by server 1?
> Do we need to create a new passwd file from scratch on server 2, and use
> a script to "import" it into that new passwd file from server 1?
> 
> The main differences:
> Server 1 is 64 bit OS Fedora 8 using squid Version 2.6.STABLE19
> Server 2 is recently installed OS with 32 bit CentOS 6.5 i686 (due to
> hardware being 32bit), squid 3.4.5.
> 
> Does that 64 versus 32 bit file setup and creation make an impact? Or
> how about the 2.6.x versus 3.4.x?

Two possibilities:

1) long passwords encrypted with DES.

The current releases of the Squid NCSA helper check the length of DES
passwords and reject them if they are more than 8 characters long, instead
of silently truncating and accepting bad input.

If your users have long passwords and you encrypted them into the
original file with DES then they need to be upgraded. Logging in with
only the first 8 characters of their password should still work with DES.

2) OS-specific hash algorithm was used to encrypt.

Blowfish and SHA1 algorithms are not universally available. The NCSA
helper which is built against a library missing one of these algorithms
cannot login users with a password file generated using them.

You may have to migrate users via MD5, or ensure libcrypt is used to
build the new Squid helper.
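
A minimal sketch of regenerating an entry with MD5, assuming the Apache
htpasswd tool is available (the username is only an example):

  htpasswd -m /etc/squid/passwd alice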

HTH
Amos



Re: [squid-users] Re: server failover/backup

2014-08-18 Thread Amos Jeffries
On 19/08/2014 10:48 a.m., Mike wrote:
> On 8/18/2014 4:27 PM, nuhll wrote:
>> Question: why u spam my thread?
>>
>>
>>
>> -- 
>> View this message in context:
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667249.html
>>
>> Sent from the Squid - Users mailing list archive at Nabble.com.
>>
> This is an email list. I created a new email to
> squid-users@squid-cache.org for assistance from anyone that uses the
> email list. I was told some time ago that Nabble is not recommended
> since it does not always place them in a proper layout according to the
> email user list, so to use it via email, not the website.
> 

Your first email was created as a reply to the thread
"In-Reply-To: <1408378851794-4667247.p...@n4.nabble.com>"

Amos



Re: [squid-users] Very slow site via squid

2014-08-18 Thread Amos Jeffries
On 18/08/2014 11:48 p.m., babajaga wrote:
> I have a squid 2.7 setup on openWRT, running on a 400Mhz/64MB embedded
> system.
> First of all, a bit slow (which is another issue), but one site is
> especially slow, when accessed via squid:
> 
> 1408356096.498  25061 10.255.228.5 TCP_MISS/200 379 GET
> http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
> 1408356103.801  46137 10.255.228.5 TCP_MISS/200 379 GET
> http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
> 
> Digging deeper, (squid.conf: debug ALL,9) I see this:
> 2014/08/18 11:17:26| commConnectStart: FD 198, dc44.s290.meetrics.net:80
> 2014/08/18 11:18:00| fwdConnectDone: FD 198:
> 'http://dc44.s290.meetrics.net/bb-mx/submit?//oxNGf
> 
> which should explain the slowness.
> 
> Example of http-headers:
> 
> Cache-Control: no-cache,no-store,must-revalidate
> Content-Length: 43
> Content-Type: image/gif
> Date: Mon, 18 Aug 2014 10:04:52 GMT
> Expires: Mon, 18 Aug 2014 10:04:51 GMT
> Pragma: no-cache
> Server: nginx
> X-Cache: MISS from my-embedded-proxy
> X-Cache-Lookup: MISS from my-embedded-proxy:3128
> ---
> Accept: image/png,image/*;q=0.8,*/*;q=0.5
> Accept-Encoding: gzip, deflate
> Accept-Language: de,en-US;q=0.7,en;q=0.3
> Connection: keep-alive
> Cookie: id=721557E9-A0E0-C549-7D6A-B2D622DA4B1F
> DNT: 1
> Host: dc73.s290.meetrics.net
> Referer: http://www.spiegel.de/
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101
> Firefox/31.0
> 
> I can only suspect something special regarding their DNS.
> Any other idea ?

I agree, it's likely their DNS response time or TCP handshake timeouts
happening.

The latest squid-3.x stable releases may be able to help with this. We
have separated the DNS lookup and TCP handshake operations so the info
about bad connections is stored longer for overall faster transactions.

Also, in my experience the worst slow domains like this are usually
advertising hosts. So blocking their transactions outright (and quickly)
can boost page load time a huge amount. It is worth having a look at
what those requests are for.

Amos


Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-16 Thread Amos Jeffries
On 16/08/2014 8:02 a.m., nuhll wrote:
> I got nearly all working. Except Battle.net. This problem seems to known, but
> i dont know how to fix.
> 
> http://stackoverflow.com/questions/24933962/squid-proxy-blocks-battle-net

That post displays a perfectly working proxy transaction. No sign of an
error anywhere.


> https://forum.pfsense.org/index.php?topic=72271.0
> 

Contains three solutions, all of which essentially amount to turning on
UPnP at the router.

Amos


Re: [squid-users] CDN / JS 503 Service Unavailable

2014-08-16 Thread Amos Jeffries
On 15/08/2014 11:22 p.m., Paul Regan wrote:
> Urg, thats like standing front of the class for everyone to stare!
> 

If you are not able to take constructive criticism, sysadmin is
probably not the best line of work for you :-)

I see you seem to have found the problem. So consider these a free audit.

> 
> here you go :
> 
> cache_effective_user squid
> 
> url_rewrite_program /usr/sbin/ufdbgclient -l /var/ufdbguard/logs
> url_rewrite_children 64
> 
> acl localnet src 
> acl eu-edge-IP src 
> acl eu-buscon-edge-IP src 
> acl eu-inet-dmz src 
> acl na-subnet src 
> acl na-inet-dmz src 
> acl na-buscon-edge-IP src 
> acl st-buscon-vpc src 
> acl eu-mfmgt src 
> 
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> 
> acl CONNECT method CONNECT
> 
> hosts_file /etc/hosts
> 
> dns_nameservers   
> 
> http_access deny !Safe_ports
> 
> http_access deny CONNECT !SSL_ports
> 
> acl infrastructure src
> 
> http_access allow localhost manager
> http_access allow infrastructure manager
> http_access deny manager
> 
> acl mo-whitelist dstdomain "/etc/squid/mo-whitelist"
> http_access allow mo-whitelist
> 
> acl mo-blockedsites dstdomain "/etc/squid/mo-blockedsites"
> deny_info http://restricted_content_blockedsites.html mo-blockedsites
> http_access deny mo-blockedsites
> 
> acl mo-blockedkeywords urlpath_regex "/etc/squid/mo-blockedkeywords"
> deny_info http://restricted_content_keywords.html mo-blockedkeywords
> http_access deny mo-blockedkeywords
> 
> acl mo-nocache dstdomain "/etc/squid/mo-nocache"
> no_cache deny mo-nocache

The correct name for that directive is "cache"; it has been since Squid-2.4.
As in, what you should have there is:
 cache deny mo-nocache


> 
> acl mo-blockedIP src "/etc/squid/mo-blockedIP"
> acl mo-allowURLs dstdomain src "/etc/squid/mo-allowURLs"
> 
> http_access allow mo-blockedIP mo-allowURLs
> http_access deny mo-blockedIP
> deny_info http://restricted_content_blockedip.html mo-blockedIP
> 
> acl mo-allowNYIP src "/etc/squid/mo-allowNYIP"
> http_access allow mo-allowNYIP
> 
> http_access allow na-subnet mo-allowURLs
> http_access deny na-subnet
> deny_info http://restricted_content_subnet.html na-subnet
> 
> http_access allow localnet
> http_access deny st-buscon-vpc
> http_access allow eu-edge-IP
> http_access allow eu-inet-dmz
> http_access allow eu-buscon-edge-IP
> http_access allow na-inet-dmz
> http_access allow na-buscon-edge-IP
> http_access allow eu-mfmgt
> 
> acl ftp proto FTP
> always_direct allow ftp
> 
> acl purge method PURGE
> http_access allow purge localhost
> http_access deny purge

Hmm.. What you have here is a pure forward-proxy configuration.
If you need to purge things from the cache of a forward-proxy then it is
caching badly/wrong.

I know that Squid does cache some things badly, but we have taken great
pains to ensure that those cases are conservative. The wrong cases
should all take the form of dropping things which should have been kept,
rather than storing things which should have been dropped.

Are you perhaps finding that you need to manually erase content
permitted into cache by the refresh rules with "override-expire
ignore-no-store ignore-private"? Ignoring private and no-store in
particular is very dangerous... think Captcha images, usernames in image
form for embedded session display, company private information, etc.

> 
> http_access allow localhost
> http_access deny all
> 
> http_port 8080
> 
> cache_dir aufs /squid-cache 39322 16 256
> cache_replacement_policy heap LFUDA
> 
> cache_swap_low 96
> cache_swap_high 98
> 
> cache_mem 256 MB
> 
> maximum_object_size 64 KB

It's a little unclear why you are limiting cached objects to 64KB while
refresh patterns also force archive and binary executable types to be
cached. You have 40.25 GB of cache space available.

> maximum_object_size_in_memory 20 KB
> 
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> 
> memory_pools off
> 

Have you tested performance with these on recently?


HTH
Amos



Re: [squid-users] Proxy server Spec..

2014-08-14 Thread Amos Jeffries
On 15/08/2014 3:12 a.m., Stephan Viljoen wrote:
> Hi There,
> 
> I’m putting together a new Proxy server for a medium sized ISP (4000 users
> plus) and would appreciate a few pointers from you good folks.
> 
> I vaguely remember  a debate a few years ago about SSD vs. conventional
> drives and was wondering which would be the best to use these days ?

It boils down to Squid having write-mostly behaviour with its caches.
That is something HDD cope with better.

There is some variance between SSD models' write cycles. So YMMV, but
basically Squid burns through disks faster than manufacturer specs
indicate and noticeably faster than with HDD.

We have also been improving Squid caching behaviour in ways that affect
these generalizations. Collapsed forwarding, Rock cache and related
in-transit object handling all reduce disk writes in the latest Squid.
So things are improving, but I am not sure how noticeably.

Whatever you do though, do not mirror or stripe the cache drive(s) with
RAID. That just wears them out twice as fast, or kills two when one dies.

> Also ,
> apparently it’s better to use a higher Mhz CPU rather than more cores ? Is
> this still the case or does squid handle multiple cores better these days? 

Yes to both. Yes Squid handles multi-core CPU better, but it is still
very intensive and better to have faster cycles than more cores. If you
can maximize both, even better.

> 
> Also note , I’m thinking of rather boosting web performance rather than to
> save bandwidth. So I’m going to try and keep my cached objects as small as
> possible.

The two come as a package. The processing a proxy does always slows the
MISS traffic down. This is only compensated for by cache serving popular
objects as HIT faster than non-proxied traffic and reducing upstream
bandwidth (to allow for greater total throughput on MISS traffic).

> 
> PowerEdge T620 NO CPU No RAM No HDD - 3yr Pro NBD
> 2 x Intel(R) Xeon(R) E5-2630 v2 2.60GHz 15M Cache 7.2GT/s QPI Turbo HT 
> 128GB RAM
> 2 x 200GB Solid State Disk SAS 6Gbps 2.5"
> 

NP: hyper-threading does not count towards actual CPU cores or cycles.

I recommend dedicating one CPU for the OS and the other to Squid. 1-2 GB
of cache_mem for memory caching, but no disk cache to begin with. See
how that flies, then expand it with disk cache if needed.
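
A minimal sketch of that starting point (the 2 GB figure is only an example
within the suggested range):

  # squid.conf - memory-only caching to begin with
  cache_mem 2048 MB
  # no cache_dir directive yet, so no disk cache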

Amos


Re: Fwd: [squid-users] Request Entity Too Large Error in Squid Reverse Proxy

2014-08-14 Thread Amos Jeffries
On 15/08/2014 12:59 a.m., Robert Cicerelli wrote:
> On 8/14/2014 8:10 AM, Amos Jeffries wrote:
>> If you can provide your squid.conf it would be really helpful
>> understanding this. Amos 
> I think the terminology is confusing because it's the terminology used
> in the pfsense box that squid is running on. Nevertheless, squid.conf is
> below:
> 
> == squid.conf starts below 
> 
> http_port 10.10.14.1:3128
> icp_port 7
> dns_v4_first off

NP: not necessary. "off" is the default of dns_v4_first.

> pid_filename /var/run/squid.pid
> cache_effective_user proxy
> cache_effective_group proxy
> error_default_language en
> icon_directory /usr/pbi/squid-i386/etc/squid/icons
> visible_hostname localhost
> cache_mgr admin@localhost

Set visible_hostname correctly to an externally accessible hostname.

> access_log /var/squid/logs/access.log
> cache_log /var/squid/logs/cache.log
> cache_store_log none
> sslcrtd_children 0
> logfile_rotate 1
> shutdown_lifetime 3 seconds
> # Allow local network(s) on interface(s)
> acl localnet src  10.10.14.0/24
> uri_whitespace strip
> 
> acl dynamic urlpath_regex cgi-bin \?
> cache deny dynamic

You may want to reconsider that. Squid since 2.6 is perfectly capable of
caching dynamic content correctly, provided you add the refresh_pattern
rule for (/cgi-bin/|\?) in the right place, as sketched below.
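
A minimal sketch; the dynamic-content rule must come before the catch-all
"." rule:

  refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
  refresh_pattern .                 0 20% 4320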

> cache_mem 2000 MB
> maximum_object_size_in_memory 32 KB
> memory_replacement_policy heap GDSF
> cache_replacement_policy heap LFUDA
> cache_dir ufs /var/squid/cache 500 16 256
> minimum_object_size 0 KB
> maximum_object_size 4 KB
> offline_mode off
> cache_swap_low 90
> cache_swap_high 95
> 
> # No redirector configured
> 
> 
> #Remote proxies
> 
> 
> # Setup some default acls
> acl allsrc src all
> acl localhost src 127.0.0.1/32
> acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 3128
> 1025-65535
> acl sslports port 443 563
> acl manager proto cache_object
> acl purge method PURGE
> acl connect method CONNECT
> 
> # Define protocols used for redirects
> acl HTTP proto HTTP
> acl HTTPS proto HTTPS
> 
> http_access allow manager localhost
> 
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !safeports
> http_access deny CONNECT !sslports
> 
> # Always allow localhost connections
> http_access allow localhost
> 
> quick_abort_min 0 KB
> quick_abort_max 0 KB

All of these...

> request_body_max_size 0 KB
> delay_pools 1
> delay_class 1 2
> delay_parameters 1 -1/-1 -1/-1
> delay_initial_bucket_level 100
> # Throttle extensions matched in the url
> acl throttle_exts urlpath_regex -i "/var/squid/acl/throttle_exts.acl"
> delay_access 1 allow throttle_exts
> delay_access 1 deny allsrc

... do nothing. Except, the delay_pools still involve 32-bit limits so
may also be your issue if it is 32-bit related.

> 
> # Reverse Proxy settings
> http_port 75.145.82.58:80 accel defaultsite=deeztek.com vhost
> https_port 75.145.82.58:443 accel
> cert=/usr/pbi/squid-i386/etc/squid/53dfccd7cbb37.crt
> key=/usr/pbi/squid-i386/etc/squid/53dfccd7cbb37.key
> defaultsite=deeztek.com vhost
> #
> cache_peer 10.10.14.254 parent 443 0 proxy-only no-query no-digest
> originserver login=PASS round-robin ssl sslflags=DONT_VERIFY_PEER
> front-end-https=auto name=rvp_webserver.deeztek.com
> 
> #
> cache_peer 10.10.14.201 parent 443 0 proxy-only no-query no-digest
> originserver login=PASS round-robin ssl sslflags=DONT_VERIFY_PEER
> front-end-https=auto name=rvp_owa.deeztek.com
> 
> #
> cache_peer 10.10.14.251 parent 458 0 proxy-only no-query no-digest
> originserver login=PASS round-robin ssl sslflags=DONT_VERIFY_PEER
> front-end-https=auto name=rvp_cloud.deeztek.com
> 
> #
> cache_peer 10.10.14.238 parent 443 0 proxy-only no-query no-digest
> originserver login=PASS round-robin ssl sslflags=DONT_VERIFY_PEER
> front-end-https=auto name=rvp_ewa.deeztek.com
> 
> #
> cache_peer 10.10.14.250 parent 443 0 proxy-only no-query no-digest
> originserver login=PASS round-robin ssl sslflags=DONT_VERIFY_PEER
> front-end-https=auto name=rvp_mail.deeztek.com
> 
> #
> cache_peer 10.10.14.254 parent 80 0 proxy-only no-query no-digest
> originserver login=PASS round-robin name=rvp_admin.grubbcontractors.com
> 

Note that "round-robin" peer selection does not exactly jive well with
explicit cache_peer_access restricting each peer to only accepting
certain domains traffic. The access rules make each round-robin group a
set of 1 peer.


Okay lets simplify these ACLs ...

> acl rvm_deeztek.com url_regex -i ^https://secure.deeztek.com/.*
> acl rvm_deeztek.com url_regex -i ^https://w

Re: Fwd: [squid-users] Request Entity Too Large Error in Squid Reverse Proxy

2014-08-14 Thread Amos Jeffries
On 14/08/2014 6:12 a.m., Robert Cicerelli wrote:
> On 8/13/2014 7:22 AM, Amos Jeffries wrote:
>> On 13/08/2014 10:29 p.m., Robert Cicerelli wrote:
>>> Can anyone offer some help on this?
>>>
>>> I'm having a problem that just started after I implemented squid reverse
>>> proxy. I have a couple of applications on one of the apache servers
>>> behind the reverse proxy. Every time someone tries to upload relatively
>>> large files to the application (7 MB, 30 MB), they get the following
>>> error:
>>>
>>> Request Entity Too Large
>>>
>>> If I try to perform the same operation without going through the squid
>>> reverse proxy, the uploads work with no problems.
>>>
>>> I'm using proxy 3.1.20
>>> <https://github.com/pfsense/pfsense-packages/commits/master/config/31>
>>> on pfsense. I tried posting this issue on the pfsense support forums and
>>> I have gotten zero replies so I'm trying the squid mailing list. The
>>> situation has become a big problem so I would appreciate some help on
>>> this.
>>>
>>> A few parameters I've adjusted to various values with no success:
>>>
>>> Minimum object size
>>> Maximum object size
>>> Memory cache size
>>> Maximum download size
>>> Maximum upload size
>>>
>>> Thanks a lot
>>>
>> Can you provide a sample of the request HTTP headers being sent to Squid
>> for one of these failed uploads?
>>
>> Amos
>>
>>
>>
> One more thing to add that I just discovered:

The terminology used in your description may be clear when applied to an
origin server, but becomes unclear when applied to a proxy situation
(where there are two of everything).

> 
> First a little background for the sake of clarification, I'm using squid
> in reverse proxy in order to forward appropriate https requests to
> multiple servers behind the firewall since we only have on public IP
> address.

Okay, so far good.

> In the particular instance I'm having a problem with, we have a
> web application on one of the web servers that's running over https.

Okay.

> So,
> I created a webserver in squid

Did you mean a http_port with "accel" configured? ...

> pointing to the IP of the actual
> webserver

 ... or a cache_peer directive?

> and I set the port to 443 since the web application  on the
> web server is only configured to respond to 443.

... sounds like cache_peer. But, did you also set "ssl" flag and SSL/TLS
options to make the connection HTTPS, or just leave it sending HTTP to
port 443?

> Then i created a
> mapping group 

 a what?

> that listened for four https URIs, one of the URIs being
> the secure web application in question and I binded it to the webserver
> I created earlier.

 huh? "binded" how exactly?

If you can provide your squid.conf it would be really helpful
understanding this.

Amos

> 
> So now, as a test, I created a virtual host to listen on port 80 for the
> web application in question in addition to the virtual host listening on
> 443. I removed the URI for that app from the existing mapping group. I
> created another webserver in squid and this time instead of pointing it
> to port 443 I pointed to port 80. Then I created another mapping group
> that listened for the web application on 443 and I binded it to the
> newly created webserver which is now pointed to 80. I tested the file
> upload and it worked like a charm. So, the problem seems to arise when i
> create a web server in squid and point it to port 443 of the webserver. 
> And just in case anyone asks, I did disable internal certificate. Not
> sure if that makes a difference.
> 
> Hopefully what i wrote is clear and it will help pinpoint the problem.
> 
> Thanks a lot
> 
> 
> 



Re: Fwd: [squid-users] Request Entity Too Large Error in Squid Reverse Proxy

2014-08-14 Thread Amos Jeffries
On 14/08/2014 1:09 a.m., Robert Cicerelli wrote:
> On 8/13/2014 7:22 AM, Amos Jeffries wrote:
>> On 13/08/2014 10:29 p.m., Robert Cicerelli wrote:
>>> Can anyone offer some help on this?
>>>
>>> I'm having a problem that just started after I implemented squid reverse
>>> proxy. I have a couple of applications on one of the apache servers
>>> behind the reverse proxy. Every time someone tries to upload relatively
>>> large files to the application (7 MB, 30 MB), they get the following
>>> error:
>>>
>>> Request Entity Too Large
>>>
>>> If I try to perform the same operation without going through the squid
>>> reverse proxy, the uploads work with no problems.
>>>
>>> I'm using proxy 3.1.20
>>> <https://github.com/pfsense/pfsense-packages/commits/master/config/31>
>>> on pfsense. I tried posting this issue on the pfsense support forums and
>>> I have gotten zero replies so I'm trying the squid mailing list. The
>>> situation has become a big problem so I would appreciate some help on
>>> this.
>>>
>>> A few parameters I've adjusted to various values with no success:
>>>
>>> Minimum object size
>>> Maximum object size
>>> Memory cache size
>>> Maximum download size
>>> Maximum upload size
>>>
>>> Thanks a lot
>>>
>> Can you provide a sample of the request HTTP headers being sent to Squid
>> for one of these failed uploads?
>>
>> Amos
>>
>>
>>
> I hope this is what you are looking for:

Almost; I am also looking for the request-line portion, which contains the
method and URL. There is a significant difference between
HEAD, GET, PUT and POST when it comes to request payload.


> 
> Host: admin.grubbcontractors.com
> 
> Connection: keep-alive
> 
> Content-Length: 2085564


NP: this upload is just under 2MB.
Possible 32-bit wrap issue?

> 
> Cache-Control: max-age=0
> 
> Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
> 
> Origin: https://admin.grubbcontractors.com
> 
> User-Agent: Mozilla/5.0 (Windows NT 6.0; WOW64) AppleWebKit/537.36
> (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
> 
> Content-Type: multipart/form-data;
> boundary=WebKitFormBoundaryNg9SBUsDeAOqgB09
> 
> Referer: https://admin.grubbcontractors.com/insert_bid2.cfm?id=48
> 
> 


> 2014/08/13 09:06:14.556| created HttpHeaderEntry 0x28f3f740: 'Via : 1.1
> localhost (squid/3.1.22)

This is bad. Your Squid public hostname being "localhost" is almost
guaranteed to cause problems with other servers.
 You need to set up the Squid machine such that its gethostname() OS
interface produces a proper Internet-compliant hostname. As a workaround the
Squid visible_hostname directive can be set to a FQDN, but that does not
fix any other software using the OS gethostname() API.
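
A minimal sketch of that workaround (the FQDN is only an example):

  visible_hostname proxy.example.com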


Amos


Re: [squid-users] Log Daemon (queue is too large)

2014-08-13 Thread Amos Jeffries
On 13/08/2014 10:09 p.m., Warren Baker wrote:
> HI all,
> 
> I noticed this error message (multiple entries) for yesterday and
> today on Squid 3.3.11
> 
> 2014/08/13 00:01:06 kid1| Logfile:
> daemon:/util/var/squid/log/access.log: queue is too large; some log
> messages have been lost.
> 
> Its not a very high utilized proxy so I was a little surprised this
> happened. I assume something may have caused a spike in traffic
> resulting in the log buffer filling up but whats concerning is that it
> never recovers until a -k reconfigure was issued, a -k rotate didnt
> help. So all log entries for yesterday and today are gone.
> 
> Any ideas on why it doesn't recover and possibly what could have
> caused the issue? As looking at the access logs leading up to the
> event there is nothing that stands out.

Are you using the default Squid daemon or a custom one?
Can you reproduce this problem with the current 3.4 stable release?

Are you able to identify what the daemon helper is doing when it is
losing log lines?

Amos


Re: Fwd: [squid-users] Request Entity Too Large Error in Squid Reverse Proxy

2014-08-13 Thread Amos Jeffries
On 13/08/2014 10:29 p.m., Robert Cicerelli wrote:
> 
> Can anyone offer some help on this?
> 
> I'm having a problem that just started after I implemented squid reverse
> proxy. I have a couple of applications on one of the apache servers
> behind the reverse proxy. Every time someone tries to upload relatively
> large files to the application (7 MB, 30 MB), they get the following error:
> 
> Request Entity Too Large
> 
> If I try to perform the same operation without going through the squid
> reverse proxy, the uploads work with no problems.
> 
> I'm using proxy 3.1.20
> 
> on pfsense. I tried posting this issue on the pfsense support forums and
> I have gotten zero replies so I'm trying the squid mailing list. The
> situation has become a big problem so I would appreciate some help on this.
> 
> A few parameters I've adjusted to various values with no success:
> 
> Minimum object size
> Maximum object size
> Memory cache size
> Maximum download size
> Maximum upload size
> 
> Thanks a lot
> 

Can you provide a sample of the request HTTP headers being sent to Squid
for one of these failed uploads?

Amos



Re: [squid-users] HTTP/HTTPS transparent proxy doesn't work

2014-08-12 Thread Amos Jeffries
On 13/08/2014 4:33 p.m., agent_js03 wrote:
> Hello,
> 
> I am having trouble with my squid setup. Here is exactly what I am trying to
> do: I am setting up a VPN server and I want all VPN traffic to be
> transparently proxied by squid with ssl bumping enabled. Right now when I
> try to do this I get an access denied page from the client.
> 
> Here are lines from my squid.conf:
> 
> =
> acl localnet src 192.168.1.0/24 # local network
> acl localnet src 192.168.3.0/24 # vpn network
> http_access allow localnet
> http_access allow localhost
> http_access deny all
> http_port 192.168.1.145:3127 intercept
> http_port 192.168.1.145:3128 intercept ssl-bump
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> key=/etc/squid3/ssl/private.pem cert=/etc/squid3/ssl/public.pem
> always_direct allow all
> ssl_bump allow all
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/ssl_db -M 4MB
> sslcrtd_children 5
> 
> =
> 
> Here are my iptables rules:
> 
> =
> sysctl -w net.ipv4.ip_forward=1
> iptables -F
> iptables -t nat -F
> 
> # transparent proxy for vpn
> iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j DNAT
> --to-destination 192.168.1.145:3127
> iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 443 -j DNAT
> --to-destination 192.168.1.145:3128
> 
> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
> 
> iptables --table nat --append POSTROUTING --out-interface ppp+ -j MASQUERADE
> iptables -I INPUT -s 192.168.3.0/24 -i ppp+ -j ACCEPT
> iptables --append FORWARD --in-interface eth0 -j ACCEPT
> 
> =
> 
> 
> When I connect to VPN and try to browse the web I get the following error in
> /etc/squid3/cache.log on the vpn server:
> 
> 2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.
> 2014/08/12 21:21:02 kid1| WARNING: Forwarding loop detected for:
> GET /Artwork/SN.png HTTP/1.1
> Host: www.squid-cache.org
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
> Firefox/30.0
> Accept: image/png,image/*;q=0.8,*/*;q=0.5
> Accept-Language: en-US,en;q=0.5
> Accept-Encoding: gzip, deflate
> Referer: http://www.google.com/
> Via: 1.1 localhost (squid/3.2.11)
> X-Forwarded-For: 127.0.0.1
> Cache-Control: max-age=259200
> Connection: keep-alive
> 
> 
> 2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.
> 
> 
> 
> I am wondering about this erro "No forward-proxy ports configured." What do
> I need to change about my squid.conf that would allow me to do transparent
> proxying?


1) "ERROR: No forward-proxy ports configured."

This is getting to be a FAQ. I've added a wiki page about it.
http://wiki.squid-cache.org/KnowledgeBase/NoForwardProxyPorts

2) "WARNING: Forwarding loop detected for:"

This is a side effect of the above problem: a forwarding loop fetching the
error page artwork directly from an intercept port.

Amos


Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-12 Thread Amos Jeffries
On 12/08/2014 7:57 a.m., nuhll wrote:
> Thanks for your help.
> 
> But i go crazy. =)
> 
> Internet is slow as fuck. I dont see any errors in the logs. And some
> services (Battle.net) is not working.
> 
> /etc/squid3/squid.conf
> debug_options ALL,1 33,2
> acl domains_cache dstdomain "/etc/squid/lists/domains_cache"
> cache allow domains_cache
> acl localnet src 192.168.0.0
> acl all src all
> acl localhost src 127.0.0.1
> cache deny all
> 
> #access_log daemon:/var/log/squid/access.test.log squid
> 
> http_port 192.168.0.1:3128 transparent
> 
> cache_dir ufs /daten/squid 10 16 256
> 
> range_offset_limit 100 MB windowsupdate
> maximum_object_size 6000 MB
> quick_abort_min -1
> 
> 
> # Add one of these lines for each of the websites you want to cache.
> 
> refresh_pattern -i
> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
> reload-into-ims
> 
> refresh_pattern -i
> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 432000 reload-into-ims
> 
> refresh_pattern -i
> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
> reload-into-ims
> 
> #kaspersky update
> refresh_pattern -i
> geo.kaspersky.com/.*\.(cab|dif|pack|q6v|2fv|49j|tvi|ez5|1nj|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
> 4320 80% 432000 reload-into-ims
> 
> #nvidia updates
> refresh_pattern -i
> download.nvidia.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 432000 reload-into-ims
> 
> #java updates
> refresh_pattern -i
> sdlc-esd.sun.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 432000 reload-into-ims
> 
> # DONT MODIFY THESE LINES
> refresh_pattern \^ftp:   144020% 10080
> refresh_pattern \^gopher:14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> refresh_pattern .   0   20% 4320
> 
> #kaspersky update
> acl kaspersky dstdomain geo.kaspersky.com
> 
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> 
> acl CONNECT method CONNECT
> acl wuCONNECT dstdomain www.update.microsoft.com
> acl wuCONNECT dstdomain sls.microsoft.com
> 
> http_access allow kaspersky localnet
> http_access allow CONNECT wuCONNECT localnet
> http_access allow windowsupdate localnet
> 
> #test
> http_access allow localnet
> http_access allow all
> http_access allow localhost
> 
> 
> /etc/squid/lists/domains_cache
> microsoft.com
> windowsupdate.com
> windows.com
> #nvidia updates
> download.nvidia.com
> 
> #java updates
> sdlc-esd.sun.com
> #kaspersky
> geo.kaspersky.com
> 
> /var/log/squid3/access.log
> 1407786051.567  17909 192.168.0.125 TCP_MISS/000 0 GET
> http://dist.blizzard.com.edgesuite.net/hs-pod/beta/EU/4944.direct/base-Win-deDE.MPQ
> - DIRECT/dist.blizzard.com.edgesuite.net -
> 1407786051.567  17909 192.168.0.125 TCP_MISS/000 0 GET
> http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/base-Win.MPQ -
> DIRECT/llnw.blizzard.com -

The blizzard.com servers did not produce a response for these requests.
Squid waited almost 18 seconds and nothing came back.

TCP window scaling, ECN, Path-MTU discovery, ICMP blocking are things to
look for here. Any one of them could be breaking the connection from
transmitting or receiving properly.

The rest of the log shows working traffic. Even for battle.net. I
suspect battle.net uses non-80 ports right? I doubt those are being
intercepted in your setup.

> /var/log/squid3/cache.log
> 2014/08/11 21:51:29| Squid Cache (Version 3.1.20): Exiting normally.
> 2014/08/11 21:53:04| Starting Squid Cache version 3.1.20 for
> x86_64-pc-linux-gnu...

Hmm. Which version of Debian (or derived OS) are you using? And can you
update it to the latest stable? The squid3 package has been at 3.3.8 for
most of a year now.

> 2014/08/11 21:53:04| Process ID 32739
> 2014/08/11 21:53:04| With 65535 file descriptors available
> 2014/08/11 21:53:04| Initializing IP Cache...
> 2014/08/11 21:53:04| DNS Socket created at [::], FD 7
> 2014/08/11 21:53:04| DNS Socket created at 0.0.0.0, FD 8
> 2014/08/11 21:53:04| Adding nameserver 8.8.8.8 from squid.conf
> 2014/08/11 21:53:04| Adding nameserver 8.8.4.4 from squid.conf
> 2014/08/11 21:53:05| Unlinkd pipe opened on FD 13
> 2014/08/11 21:53:05| Local cache digest enabled; rebuild/rewrite every
> 3600/3600 sec
> 2014/08/11 21:53:05| Store logging disabled
> 2014/08/11 21:53:05| Swap maxSize 10240 + 262144 KB, estimated 7897088
> object

Re: [squid-users] find the cached pages by squid?

2014-08-09 Thread Amos Jeffries
On 10/08/2014 2:56 a.m., Kinkie wrote:
> Hello Mark,
>   access.log contains the list of URLs requested by any client to the
> cache (if enabled, of course).
> If you wish, you can then verify whether they have been cached (and
> whether the cached entry is still considered valid) by requesting them
> (or at least their headers via the HEAD http verb) with the
> Cache-Control: only-if-cached HTTP header - you can do that with any
> command-line HTTP client such as curl or wget.

You have to disable strip_query_terms (set it to "no") in order to do this
on dynamic domains. Also, when a Vary: header exists in the server
response, the content of the request headers listed in Vary matters.

An easier way is (probably) to use the same ACL from "cache deny blah"
on a line "access_log stdio:uncached.log blah". That uncached.log will
contain only the transactions which were forced not to cache.
 NP: This does not necessarily mean they would cache normally though.
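
A minimal sketch of that approach, assuming "blah" is the same dstdomain ACL
used on the cache deny line (the domain and log path are only examples;
"squid" is the default logformat name):

  acl blah dstdomain .example.com
  cache deny blah
  access_log stdio:/var/log/squid/uncached.log squid blah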

Amos



Re: [squid-users] Memory Leaks after OS upgrade

2014-08-09 Thread Amos Jeffries
On 9/08/2014 6:28 a.m., "Nils Hügelmann (anonymoX.net)" wrote:
> Hi,
> 
> after upgrading to opensuse 13.1, squid3.4 (same with current 3.HEAD)
> leaks memory after a short time and stops responding to snmp queries(not
> sure if that's related).
> 

From what version? If it was 3.1 or older, those have a significantly
smaller memory cache by default.

> In a test setup i can reproduce the issue as follows:
> 
> Test with 100 rps: Everything good
> Test with 250 rps: After some minutes, only 100 to 200 requessts pass,
> memory increases with constant speed, some snmp queries fail (timeout)
> or take longer to process, nothing in log
> Test with 500 rps: Only around 100 queries pass, memory increases
> faster, most snmp queries fail, nothing in logs
> 
> Same with 1 or multiple workers.
> 
> In cachemgr, i see that Cumulative allocated volume continues to
> increase to values like 7 GB while actual VIRT mem is 300MB, RSS 160MB

Cumulative allocated volume is an incrementing count of how much memory
has been allocated; it does not account for how much has been freed.

If the VIRT mem and current size are constant then there is no leak.

That said, we do know that 3.4 will free memory cache allocations a bit
later than it should. Just not where those allocations are happening.

> 
> Any ideas how to fix this or otherwise how to get more debug information?
> 

Your snmp_* settings would be useful.
 Also are you using SMP support? (workers directive)

http://wiki.squid-cache.org/KnowledgeBase/DebugSections
http://wiki.squid-cache.org/SquidFaq/BugReporting
http://wiki.squid-cache.org/KnowledgeBase/OpenSUSE


Amos


Re: [squid-users] let squid to request the page using client IP?

2014-08-08 Thread Amos Jeffries
On 8/08/2014 12:31 p.m., Brendan Kearney wrote:
> On Fri, 2014-08-08 at 11:48 +1200, Jason Haar wrote:
>> Googling "apache x-forwarded-for" led me to mod_extract_forwarded
>>
>> http://www.openinfo.co.uk/apache/
>>
> 
> from the apache mod_proxy page:
> 

mod_proxy is about making Apache into a reverse-proxy. *generating* the
X-Forwarded-For headers etc.

The query was about passing the client IP through Squid to be *received*
in Apache.

The answer is to:
 use the forwarded_for directive in squid.conf.
 read the contents from X-Forwarded-For in Apache config.
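
A minimal sketch of both sides, assuming Apache only needs to read or log the
header (the LogFormat nickname is only an example):

  # squid.conf
  forwarded_for on

  # Apache httpd.conf - the client IP Squid forwarded arrives in the X-Forwarded-For request header
  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied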

Amos


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-08 Thread Amos Jeffries
On 8/08/2014 8:55 a.m., sq...@proxyplayer.co.uk wrote:
> Current config below:
> 
>>> In my network I have unbound redirecting some sites through the proxy
>>> server and checking authentication, If I redirect www.thisite.com it
>>> works corectly. However, as soon as SSL is used https://www.thissite.com
>>> it doesn't resolve at all. Any ideas what I have to do to enable ssl
>>> redirects in unbound or squid?
>>
>> Handle port 443 traffic and the encrypted traffic there.
>> You are only receiving port 80 traffic in this config file.
> 
> I am already redirecting 443 traffic but the proxy won't pick it up.
> There is a SSL ports directive in the squid.conf so it should accept them?

You mean the SSL_Ports ACL? That only restricts HTTP "CONNECT" method
tunnel requests to the port(s) usually used by SSL.

It does nothing to receive and decrypt HTTPS in its native port 443
format. Which is what you need to do, since your unbound server is
claiming that your Squid is the origin web server for those https://
traffic.

You are at least missing https_port and all the sslproxy_* directives
for outgoing HTTPS. Then also you are probably missing the TLS/SSL
certificate security keys, including any DNS entries for IPSEC, DNSSEC,
DANE, HSTS etc.


> For example, this line redirect all HTTP traffic but as soon as the
> browser wants a SSL connection, it is dropped:
> local-data: "anywhere.mysite.com. 600 IN A 109.xxx.xx.xxx"
> local-zone: "identity.mysite.com." redirect

Of course. Your Squid box is not listening on port 443 (HTTPS). By using
DNS in this way you are claiming that your 109.xxx.xx.xxx machine is
providing *all* services of that domain. Things naturally break when you
overlook one or more services your clients are using from it.

Amos


Re: [squid-users] Problem with a website...

2014-08-08 Thread Amos Jeffries
On 7/08/2014 11:17 p.m., brekler88 wrote:
> Hello everyone, im having problem with 1 website, my PC does not pass by the
> squid proxy, so its all fine, i can access the website normally, 
> http://www.sintegra.fazenda.pr.gov.br/sintegra/, but when i try to access by
> squid it does not access, and does not get denied... i look into the logs
> and couldnt see anything...
> the message i get is this..
> 
> O seguinte erro foi encontrado ao tentar recuperar a URL: (The following
> error was encountered while trying to retrieve the URL:)
> http://www.sintegra.fazenda.pr.gov.br/sintegra/
> 
> Impossível determinar o endereço IP do nome de host
> www.sintegra.fazenda.pr.gov.br (impossible to get the IP address of the host
> www...)
> 
> O servidor DNS retornou: (DNS returned)
> 
> Server Failure: The name server was unable to process this query.
> Isto significa que o cache não pode resolver o nome de host contido na URL.
> Verifique se o endereço está correto. (This means the cache cannot resolve
> the hostname contained in the URL. Check that the address is correct.)
> 
> Seu administrador do cache é root. (Your cache administrator is root.)
> 
> Any ideas ? 

The DNS server being used by Squid is broken somehow. That domain
resolves for me.

Amos



[squid-users] Re: Forwarding loop on squid 3.3.8

2014-08-06 Thread Amos Jeffries
On 7/08/2014 3:28 a.m., James Michels wrote:
> El miércoles, 6 de agosto de 2014, Amos Jeffries 
> escribió:
> 
>> On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
>>> Greetings,
>>>
>>> I'm trying to setup a transparent proxy on Squid 3.3.8, Ubuntu Trusty
>>> 14.04 from the official APT official repository. All boxes including
>>> the Squid box are under the same router, but the squid box is on a
>>> different server than the clients. Seems that for some reason the
>>> configuration on the squid3 box side is missing something, as a
>>> forwarding loop is produced.
>>>
>>> This is the configuration of the squid3 box:
>>>
>>>   visible_hostname squidbox.localdomain.com
>>>   acl SSL_ports port 443
>>>   acl Safe_ports port 80  # http
>>>   acl Safe_ports port 21  # ftp
>>>   acl Safe_ports port 443 # https
>>>   acl Safe_ports port 70  # gopher
>>>   acl Safe_ports port 210 # wais
>>>   acl Safe_ports port 1025-65535  # unregistered ports
>>>   acl Safe_ports port 280 # http-mgmt
>>>   acl Safe_ports port 488 # gss-http
>>>   acl Safe_ports port 591 # filemaker
>>>   acl Safe_ports port 777 # multiling http
>>>   acl CONNECT method CONNECT
>>>   http_access allow all
>>>   http_access deny !Safe_ports
>>>   http_access deny CONNECT !SSL_ports
>>>   http_access allow localhost manager
>>>   http_access deny manager
>>>   http_access allow localhost
>>>   http_access allow all
>>>   http_port 3128 intercept
>>>   http_port 0.0.0.0:3127
>>>
>>> This rule has been added to the client's boxes:
>>>
>>>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
>>> 192.168.1.100:3128
>>
>> Thats the problem. NAT is required on the Squid box *only*.
>>
>>
> Ok, but if NAT is required on the Squid box exclusively, how do I redirect
> all outgoing traffic sent to the port 80 over a client to another box
> (concretely the one where Squid runs) without using such NAT?
> 

Covered in the rest of what I wrote earlier.

Policy routing. AKA make the default gateway for port 80 traffic from
each client be the Squid box.
 The easiest way to do that is to simply make the Squid box the default
gateway for all clients, and have only the Squid box aware of the real
gateway. This requires that the Squid box be able to handle the full
network traffic load.
 The harder way is setting the default gateway for only port 80 traffic
to be the Squid box, with the rest going to the real gateway.

http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute
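
A rough sketch of the "harder way" on a Linux gateway in the clients'
path (interface names, the mark value and the Squid box address
192.168.1.100 are assumptions; the wiki page above has the full recipe,
including excluding the Squid box's own traffic to avoid a loop):

  # on the gateway: mark port-80 client traffic and route it to the Squid box
  iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 -j MARK --set-mark 1
  ip rule add fwmark 1 table 100
  ip route add default via 192.168.1.100 table 100

  # on the Squid box: intercept the arriving port-80 traffic locally
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128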


> 
>>>
>>> 192.168.1.100 corresponds to the squid3 box. In the log below
>>> 192.168.1.20 is one of the clients.
>>
>>
>> When receiving intercepted traffic current Squid validate the
>> destination IP address against the claimed Host: header domain DNS
>> records to avoid several nasty security vulnerabilities connecting to
>> that Host domain. If that fails the traffic is instead relayed to the
>> original IP:port address in the TCP packet. That address arriving into
>> your Squid box was 192.168.1.100:3128 ... rinse, repeat ...
>>
>> Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
>> packet src/dst IP addresses to get traffic onto the Squid box.
>>
>>
> I thought packets were not mangled over the same network unless
> specifically done via iptables.

Correct. And you have done that mangling with "-j DNAT" on the client
machines. The Squid box does not have access to those client machines'
kernels to un-mangle it.


> Does that mean that the squid3 box
> currently has trouble resolving the Host domain, i.e. google.com and
> therefore tries relaying to the original packet ip? Seems to resolve it via
> the 'host' or 'ping' commands.
> 

Domains do not always resolve to the same IPs. We see a lot of
false-negative results from Host verification for Google and Akamai
hosted domains due to the way they rotate, geo-base, and IP-base DNS
results in real-time. Thus the fallback to original IP.

Amos


Re: [squid-users] Squid as internet traffic monitor

2014-08-06 Thread Amos Jeffries
On 6/08/2014 9:30 p.m., Babelo Gmvsdm wrote:
> Hi,
> 
> I would like to use a Squid Server only as an Internet Traffic Monitor.
> To do this I used an Ubuntu 14.04 with Squid 3.3 on it.
> 
> 
> I plugged the squid on a cisco switch port configured as a monitor 
> destination.
> The port connected to the backbone switch is configured as monitor source.
> I configured the IP of the Squid to be the same as real gateway used by users.
> I configured the squid to be in transparent mode with : http_port 3128 
> intercept
> I put an iptable rule that should forward http packets to the squid on port 
> 3128.
> 
> Unfortunately it does not work.

If I'm reading that right you now have two boxes using the same gateway
IP for themselves.
 Which do the packets go to from the client?
 Where do the packets from Squid go when using the gateway IP as source
address?
 Where do the TCP SYN-ACK packets go?

Amos


Re: [squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread Amos Jeffries
On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
> Greetings,
> 
> I'm trying to setup a transparent proxy on Squid 3.3.8, Ubuntu Trusty
> 14.04 from the official APT official repository. All boxes including
> the Squid box are under the same router, but the squid box is on a
> different server than the clients. Seems that for some reason the
> configuration on the squid3 box side is missing something, as a
> forwarding loop is produced.
> 
> This is the configuration of the squid3 box:
> 
>   visible_hostname squidbox.localdomain.com
>   acl SSL_ports port 443
>   acl Safe_ports port 80  # http
>   acl Safe_ports port 21  # ftp
>   acl Safe_ports port 443 # https
>   acl Safe_ports port 70  # gopher
>   acl Safe_ports port 210 # wais
>   acl Safe_ports port 1025-65535  # unregistered ports
>   acl Safe_ports port 280 # http-mgmt
>   acl Safe_ports port 488 # gss-http
>   acl Safe_ports port 591 # filemaker
>   acl Safe_ports port 777 # multiling http
>   acl CONNECT method CONNECT
>   http_access allow all
>   http_access deny !Safe_ports
>   http_access deny CONNECT !SSL_ports
>   http_access allow localhost manager
>   http_access deny manager
>   http_access allow localhost
>   http_access allow all
>   http_port 3128 intercept
>   http_port 0.0.0.0:3127
> 
> This rule has been added to the client's boxes:
> 
>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
> 192.168.1.100:3128

That's the problem. NAT is required on the Squid box *only*.

> 
> 192.168.1.100 corresponds to the squid3 box. In the log below
> 192.168.1.20 is one of the clients.


When receiving intercepted traffic, current Squid validates the
destination IP address against the claimed Host: header domain's DNS
records to avoid several nasty security vulnerabilities connecting to
that Host domain. If that fails the traffic is instead relayed to the
original IP:port address in the TCP packet. That address arriving into
your Squid box was 192.168.1.100:3128 ... rinse, repeat ...

Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
packet src/dst IP addresses to get traffic onto the Squid box.

Amos


Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-06 Thread Amos Jeffries
On 5/08/2014 12:27 a.m., Squid user wrote:
> Hi Amos.
> 
> Could you please be more specific?
> 
> I cannot find any wccp-related directive in Squid named IIRC or similar.

IIRC = "If I Recall Correctly".
I am basing my answer on code knowledge I gained a year or two back.

Just re-checked the code and confirmed. The flag names on
wccp2_service_info are the same for both hash and mask methods. What
they do is different and hard-coded into Squid.

For mask assignment the static mask of 0x1741 is sent from Squid for
each of the fields you configure a flag for.

http://www.squid-cache.org/Doc/config/wccp2_service_info/


Examples of what you need for your earlier requested config (Sorry about
the line wrap):

  wccp2_service_info 80 protocol=tcp flags=src_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
src-IP when protocol is TCP and dst-port 80.


  wccp2_service_info 90 protocol=tcp flags=dst_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
dst-IP when protocol is TCP and dst-port 80.


Amos


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-06 Thread Amos Jeffries
On 5/08/2014 1:13 p.m., sq...@proxyplayer.co.uk wrote:
> In my network I have unbound redirecting some sites through the proxy
> server and checking authentication, If I redirect www.thisite.com it
> works corectly. However, as soon as SSL is used https://www.thissite.com
> it doesn't resolve at all. Any ideas what I have to do to enable ssl
> redirects in unbound or squid?

Handle port 443 traffic and the encrypted traffic there.
You are only receiving port 80 traffic in this config file.


There are other problems in the config file displayed. Notes inline.

> 
> squid.conf
> #
> # Recommended minimum configuration:
> #
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7# RFC 4193 local private network range
> acl localnet src fe80::/10# RFC 4291 link-local (directly
> plugged) machines
> 
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> 

You should erase all of the lines above. They are duplicated below.

> #
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
> 

NOTE: Current best practice recommendation is to have the manager access
control lines after the CONNECT one below. That saves a couple of slow
regex calculations on certain types of DoS attacks.

> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # We strongly recommend the following be uncommented to protect innocent
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7# RFC 4193 local private network range
> acl localnet src fe80::/10# RFC 4291 link-local (directly
> plugged) machines
> 
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> 
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports

NP: these four lines above now occur three times in a row in your
http_access rules. Only the first occurrence will have any useful effect;
the rest just waste processing time.

> 
> external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth

What does this helper do exactly to earn the term "authentication"?
A TCP/IP address alone is insufficient to verify the end-user's identity.


> acl interval_auth external time_squid_auth
> http_access allow interval_auth
> http_access deny all
> http_port 80 accel vhost allow-direct
> hierarchy_stoplist cgi-bin ?
> coredump_dir /var/spool/squid
> 
> refresh_pattern ^ftp:   144020% 10080
> refresh_pattern ^gopher:14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%0
> refresh_pattern .   020% 4320
> 

Amos


Re: [squid-users] https url filter issue

2014-08-06 Thread Amos Jeffries
On 6/08/2014 6:20 p.m., Sucheta Joshi wrote:
> Hi,
> 
> We are using facebook share api in our application for which user need to
> login using main site.  Following URL if I need to allow and not have full
> access for facebook for user then how to do it?
> 
> https://www.facebook.com/dialog/oauth?client_id=206510072861784&response_typ
> e=code&redirect_uri=http://app.ripplehire.com/ripplehire/connect/facebook&sc
> ope=publish_stream
> 
> I don't have option for dstdom_regex here as it is the main site.
> 
> I am able to do filter in other proxyies using keyword like my client id
> "206510072861784"  So it will allow only my API call and not whole site.
> 
> How to do this in Squid?

The only way to find any details about the URL path on HTTPS traffic is
to configure ssl-bump and MITM decrypt the TLS/SSL traffic.
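
A bare-bones sketch of what that involves on Squid 3.4 (the certificate
and helper paths are assumptions, clients must be configured to trust
the signing CA, and local policy/law on decryption applies):

  http_port 3128 ssl-bump cert=/etc/squid/certs/proxyCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
  sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
  ssl_bump server-first all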

Amos



Re: [squid-users] Quick question

2014-08-05 Thread Amos Jeffries
> -Original Message-
> From: Lawrence Pingree
>
> I have a 175 gigabyte cache file system. What would be the optimal L1
and L2
> cache dirs allocated for this cache size to perform well?
>

On 6/08/2014 11:52 a.m., Lawrence Pingree wrote:
> Anyone?

That depends on the OS filesystem underlying the cache, and the size of
objects in it.

The L1/L2 settings matter on FS which have a per-directory limit on
inode entries, or need to scan the full list on each file open/stat
event (I think that was FAT32, NTFS, maybe ext2, maybe old unix FS). On
FS which do not do those two things they are just an admin convenience.
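
As a worked example only, assuming an average object size of roughly
13 KB (check the real figure on your own proxy via the cache manager
"info" page):

  # 175 GB ~= 179200 MB ~= 14 million objects at ~13 KB each.
  # At ~256 files per second-level directory: 14,000,000 / 256 ~= 55,000 L2 dirs.
  # With L2=256 that needs roughly 215 L1 dirs, so 256 is plenty.
  cache_dir aufs /var/spool/squid 179200 256 256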

Amos


Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-04 Thread Amos Jeffries
On 4/08/2014 11:59 p.m., Squid user wrote:
> Hi.
> 
> Could you provide any help on the below?
> 
> Basically, what I need is to know whether Squid has a directive to be
> used when Mask assignment is used, allowing to send to the WCCP client
> what is the mask that should be used.
> I have seen none, so far.
> It is possible to set the assignment to Mask, but if Squid cannot tell
> the WCCP client which mask should be used, then mask assignment will not
> work.

IIRC it is the same flags, or set in the router.

Amos



Re: [squid-users] WCCP for ISP (urgent services)

2014-08-04 Thread Amos Jeffries
On 3/08/2014 7:00 p.m., Délsio Cabá wrote:
> I need a urgent professional support on setting up wccp for a small ISP
> Router CISCO 1900
> Switch 3750 with wccp support

The Squid Software Foundation provides a list of commercial support
services at http://www.squid-cache.org/Support/services.html if you
still need assistance with this.

Amos



Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-04 Thread Amos Jeffries
On 3/08/2014 9:25 p.m., nuhll wrote:
> Seems like "acl all src all" fixed it. Thanks!
> 
> One problem is left. Is it possible to only cache certain websites, the rest
> should just redirectet?

The "cache" directive is used to tell Squid any transactions to be
denied storage (deny matches). The rest (allow matches) are cached (or
not) as per HTTP specification. http://www.squid-cache.org/Doc/config/cache/
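
For example, a minimal sketch using domains from your own config (the
ACL name is arbitrary):

  acl cache_sites dstdomain .windowsupdate.com .microsoft.com .kaspersky.com
  cache allow cache_sites
  cache deny all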

Redirect is done with url_rewrite_program helper or a deny_info ACL
producing a 30x status and alternative URL for the client to be
redirected to. Although I guess you used the word "redirectet" to mean
something other than HTTP redirection - so this may not be what you want
to do.

Amos



Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-02 Thread Amos Jeffries
On 3/08/2014 3:07 a.m., nuhll wrote:
> im not able to fix it.
> 
> Normal websites work. But i cant get it to cache (or even allow access to
> Windows Update or Kaspersky).
> 
> Whats i am doin wrong?
> 
> 2014/08/02 17:05:35| The request GET
> http://dnl-16.geo.kaspersky.com/updaters/updater.xml is DENIED, because it
> matched 'localhost'
> 2014/08/02 17:05:35| The reply for GET
> http://dnl-16.geo.kaspersky.com/updaters/updater.xml is ALLOWED, because it
> matched 'localhost'
> 
> 
> 2014/08/02 17:06:32| The request CONNECT 62.128.100.41:443 is DENIED,
> because it matched 'localhost'
> 2014/08/02 17:06:32| The reply for CONNECT 62.128.100.41:443 is ALLOWED,
> because it matched 'localhost'
> 
> 
> 014/08/02 17:07:07| The request CONNECT sls.update.microsoft.com:443 is
> DENIED, because it matched 'localhost'
> 2014/08/02 17:07:07| The reply for CONNECT sls.update.microsoft.com:443 is
> ALLOWED, because it matched 'localhost'
> 

So which access.log lines match these transactions?

> 
> my config atm:
> debug_options ALL,1 33,2
> acl localnet src 192.168.0.0
> acl all src 0.0.0.0

1) you are defining the entire Internet to be a single IP address
"0.0.0.0" ... which is invalid.

This should be:
   acl all src all

> acl localhost src 127.0.0.1
> 
> access_log daemon:/var/log/squid/access.test.log squid
> 
> http_port 192.168.0.1:3128 transparent
> 
> cache_dir ufs /daten/squid 10 16 256
> 
> range_offset_limit 100 MB windowsupdate
> maximum_object_size 6000 MB
> quick_abort_min -1
> 
> 
> # Add one of these lines for each of the websites you want to cache.
> 
> refresh_pattern -i
> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
> reload-into-ims
> 
> refresh_pattern -i
> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 432000 reload-into-ims
> 
> refresh_pattern -i
> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
> reload-into-ims
> 
> refresh_pattern -i
> geo.kaspersky.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 432000 reload-into-ims
> 
> # DONT MODIFY THESE LINES
> refresh_pattern \^ftp:   144020% 10080
> refresh_pattern \^gopher:14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> refresh_pattern .   0   20% 4320
> 
> acl kaspersky dstdomain .kaspersky.com
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> 
> acl CONNECT method CONNECT
> acl wuCONNECT dstdomain www.update.microsoft.com
> acl wuCONNECT dstdomain sls.microsoft.com
> 
> http_access allow kaspersky localnet
> http_access allow CONNECT wuCONNECT localnet
> http_access allow windowsupdate localnet
> 
> http_access allow localnet
> http_access allow localhost
> 

The above rule set is equivalent to:
 http_access allow localhost
 http_access deny !localnet
 http_access allow all

Amos



Re: [squid-users] Blank page due to 500 internal server error of embedded page

2014-08-01 Thread Amos Jeffries
On 1/08/2014 9:08 p.m., Sebastian Fohler wrote:
> Can someone help mit to analyze why opts.optimize.webtrends.com throws a
> 500 error and therefor blocks the viewing of www.microsoft.com, when we
> try to open it through our squid 3.1.0? We only get a blank page while
> accessing www.microsoft.com since the webtrends page is a showstopper.
> It does work without the webtrends part if we use an acl to block
> opts.optimize.webtrends.com but the goal is to use it.
> 
> In my access log I see at that point such an entry:
> 
> 1406807304.655 22 10.32.15.38 TCP_MISS/500 814 GET
> http://ots.optimize.webtrends.com/ots/ots/js-3.2/311121/WT3kWEufwRsLlH8slwbtLTNN5QBiDzq9vp8gZr621gN75AfQRSUmjjmHw9v467a1LAsHxv8mErZFm2WnMzL7PG0U9WUAA9EVl-8Uq3AiHWJWXYnai8ClAbSlc0KTREwYxOcNuFSx056gNwrphrXOnv4lgdNE_6MNho1ocIMLKqOKzfbq3NOUNSuELXAXZLW5He6xZbq69vVOnYxGSb1_4sXYFBoDykxYI-2MiTFFZ4yLYpWHerYcZTErfZmCv-IvxLf1jnEj5NCdO38RY6TZnnhQCxmLC6T6uWCDb5C-YTjjOHBWO8_9Bk0JhetB3JQ260sbHwAMS-4ML2t765WLj9P9XWgHCvVueZR94AfUJcvFTquX5mvCcsqmLmdTXiMWH_QPAiKoelxQsDs9dha9-2uhbNkqIp_XL_warU0y6dhrLJK0gETba7dYGpGxxgDVEgxy_NdVVeuiS1F1qUHo0cWEX9Wpgg9IMSwPIfKrXHsqr6tfwnh4eS2uqBUmmUKTIy2iagb_iLeKwSDDUQ2uexOBTnyGZIuTdZnOV70vARXeG5m32bRxb6mbWhXI9GLLI_DTq-PED-e7C55zw77Ej59Yenj4EvAH3MCtuxZgF-aP00hpAAIYY2-Sm_Xii6XJvrmnVvVvBxNaMUtrepN5KfFAOOIcjCvKT24PfAb-kY2GiVzjydcb1dr3vl0Z2KOrn9Ffw3kVobQNeCUn4vmvs6yd8DGmmyBcvuEa2zgLcHRsuG72V62i8nIA_zX5LPrPRTk_wMo5LY0FwrLri-u80pJ3F0PZW-fqUPuh3Q9rro1EnV5w1hse2UYp2_qjYThcZ-GzynlSp5DXY2uFWaDJdQvhymrBHzKtejJGUUOW-A8AEOr3sCCNDG3k2eAUVkI9iCQAVVtU2sJWzxJb6b2gcgRdz6THeKkNYAuac!
> 
> 4F3Svh6v
> 5tw_YxU3IJK7fjGiOov1r0xIpsDHJMLltAKHEvq2q10xamtxvUst3abc7g7DZJAdPK-Psv1HT10NWD8bpN9m__Ks7zzoFPtjK0iVbGYRgeOIG7OYV9-XoyvSvFpzx2po0WLluGzIy5MxWFCQpPX4iIJoFc4khz25Q3poV6FzM7yRpoWIpZVYo_iSrfr-dCrJ5nJXRXkrbYs_oJq3oBM8J4HXuw8SOUqfdppXxNwDPf_LOg0TLw3CFDgeY_Hn67LbInwgy0wQer67BBpg3PQfS4mouJkqO-ayWB2ybOkjrQNx6RfYIuZNIZfuc4S2lzRLxJx6WPrQCTbu3u1EDBrUef2SvSZrXI_joes5JFxRcXkiB3GHtgHjbON_KsxBq5xIz2ZIO4EEIDxpSSm7z3xxr9GMWelYX9ARtJzfkLkaqZa-J6z00sIu5z4EB0KRzjEHYXPvuw6IsEu3fuUFvaMxsE9Edli44dtwXqtzmXBW3ZpX20P_-GwgJSrdwDrYlDScrwcimKWiiCuOcwsqkUtg6F6cxNSAcpsYgXGGRA3kFmMDCqrZPHeVksw3aN6cA4lM7xi0CQbSNyfDK8liy8WFhtb-APhDSnUb0h8Yeh-J9UJV_39pQIP_DIQWZRcpb4altJZgB8dPSfg4vXpoyhMZs8bf20NJosLX3zmyDfStSQI7oV5eOUJEr2w4uDENkziWBN7loRld9HzAFwXIAHo7a-URpWfRiteWg0iNT9W6IuI7J2Z3pwtypFQuYjeR8Th7lXa-ip9-hpTCTgaaG2mLnxHk6c0NkoXyoZqFfnN8NE96Gzs1hpdYYSzha3LztWrJXfRFirLgHoHLbJ60mjeJ-FR9_IZMyNRiaxDOFAbOd1S6E9L5Tf7VuOJjFujfyxGx9y0QmYfTEHkeLyoU8bhD9WJoDZ0Uwp--1PSJmAFQHFbb1YkbMIS9jruhpSdfSe2HvrHtxy8Z6FKuv3tcZCfcM8UMxL18-!
> 
> 9M1zg7kPc
> EexJ-omBVY4AYA3Z5isSSnRwibBpQVRMaDEXV5QRjKhUVJWcdYjhQw~~ -
> HIER_DIRECT/31.186.231.66 text/html
> 
> 
> What setting could cause such a behaviour or how do I find out which
> setting?

* Extremely long URL 2061 bytes. Anything >1024 has low reliability.

* Possibly containing several invalid characters un-encoded in the path
section (CR and SP).

* An unidentified issue on the upstream server. Note that 31.186.231.66
has been sent the request; the 500 status is either generated by that
server or by Squid breaking on something in its reply output.


First step is to try upgrading your Squid. 3.1.0 is not an official
version, 3.1.0.X releases are betas. The current stable release is
3.4.6. You can also do far better debugging of HTTP with debug_options
11,2 in the current releases.


Possible workaround is to add this near the top of your access rules:

 acl WT dstdomain ots.optimize.webtrends.com
 http_access deny WT

Amos


Re: [squid-users] assertion failed: cbdata.cc:464: "c->locks > 0

2014-07-30 Thread Amos Jeffries
On 30/07/2014 10:44 p.m., Labusch, Christian (regio iT) wrote:
> Hello all,
> 
> i have a little Problem with the "new"  Directive "client_delay_pools". Our 
> old Config with the normal delay_classes works fine. But we need additional a 
> Upload-Limit.
> 
> Testmachine with very simple standard config:
> 
> - OS: Linux debian 3.2.0-4-486 #1 Debian 3.2.60-1+deb7u1 i686 GNU/Linux
> - Squid Cache: Version 3.4.6. (configure options:  '--enable-delay-pools')
> 
> Squid.conf (Limitation 1 Mbit/s):
> 
> client_delay_pools 1
> client_delay_initial_bucket_level 100
> client_delay_access 1 allow localnet
> client_delay_access 1 deny all
> client_delay_parameters 1 128000 128000
> 
> To start the squid-Daemon brings  the following error:
> 
> Cache.log:
> 
> 2014/07/30 10:33:02 kid1| assertion failed: cbdata.cc:464: "c->locks > 0"



> Do you have any ideas?

http://bugs.squid-cache.org/show_bug.cgi?id=3696

We require a stack trace to identify the cause of this one. If you can
obtain one from 3.4.6 (or better, a current 3.HEAD tarball) that would be
very helpful. Please ensure that it actually has the function names
rather than hex numbers, and add it to the bug report along with the
"squid -v" output.

Amos


Re: [squid-users] why squid can block https when i point my browser to port , and cant when its transparent ?

2014-07-29 Thread Amos Jeffries
On 30/07/2014 11:59 a.m., Alex Rousskov wrote:
> On 07/27/2014 04:49 PM, Jason Haar wrote:
> 
>> I do wonder where this will end.
> 
> Since one cannot combine interception, inspection, and secure delivery,
> this can only end when at least one of those components dies.
> 
> Interception is probably the weak link here because it can be removed(*)
> by technological means if enough folks decide it has to go. Inspection
> (by trusted intermediaries) and secure delivery (through trusted
> intermediaries) will probably stay (with modifications) because their
> existence sprouts from the human nature (rather than just lack of
> development discipline, will, and resources).
> 
> 
>> How long before Firefox starts pinning,
>> then MSIE, then it gets generalized, etc?
> 
> If applied broadly, pinning in an interception world will clash with
> government, corporate, and parental desire to protect "assets".  With
> todays technology, pinning can only survive on a limited scale IMHO. The
> day after tomorrow, if interception dies, replaced by trusted
> intermediaries, pinning will not be a problem.
> 
> 
> Either that, or the entire web content is going to be owned by a few
> content providers that would guarantee that their content is safe and
> appropriate (hence, does not need to be inspected). This is what Google
> claims with its pinning solution today, and I suspect it is not the
> responsibility they actually want and enjoy.

It is also a false claim.


Shared hosting providers are a well-known source of malware and viral
infection. Google-hosted sites are no different even though their
https:// service is pinned. They do well enough to only get an "also-ran"
mention, but that is still not clean enough to warrant a bypass of
inspection (hundreds or a few thousand infection points make up their
low % rating).

Amos



Re: [squid-users] TCP_MISS then TCP_DENIED

2014-07-29 Thread Amos Jeffries
On 30/07/2014 5:18 a.m., pe...@pshankland.co.uk wrote:
> Hi, I have configured a new install of Squid on CentOS 6.5 via yum. I
> have followed some of the guides on the Squid wiki to get AD group
> authentication working but am getting some strange results when looking
> within the access.log.
> 
> As you can see from the following log entries, the server, with an
> authentication user logged in and browsing to www.google.com, gets a
> couple of TCP_MISS/200 entries and then TCP_DENIED/407 before going back
> to TCP_MISS/200 again:
> 
> 1406653633.180220 172.29.94.15 TCP_MISS/200 3863 CONNECT
> ssl.gstatic.com:443 admin_pete DIRECT/74.125.230.119 -
> 1406653633.180 78 172.29.94.15 TCP_MISS/200 3524 CONNECT
> www.google.com:443 admin_pete DIRECT/173.194.41.116 -
> 1406653633.182  0 172.29.94.15 TCP_DENIED/407 3951 CONNECT
> www.google.com:443 - NONE/- text/html
> 1406653633.185  0 172.29.94.15 TCP_DENIED/407 4280 CONNECT
> www.google.com:443 - NONE/- text/html
> 1406653633.194  0 172.29.94.15 TCP_DENIED/407 3955 CONNECT
> ssl.gstatic.com:443 - NONE/- text/html
> 1406653633.196  0 172.29.94.15 TCP_DENIED/407 4284 CONNECT
> ssl.gstatic.com:443 - NONE/- text/html
> 1406653633.247 72 172.29.94.15 TCP_MISS/200 3862 CONNECT
> www.gstatic.com:443 admin_pete DIRECT/74.125.230.127 -
> 1406653633.249  0 172.29.94.15 TCP_DENIED/407 3955 CONNECT
> www.gstatic.com:443 - NONE/- text/html
> 1406653633.252  0 172.29.94.15 TCP_DENIED/407 4284 CONNECT
> www.gstatic.com:443 - NONE/- text/html
> 1406653633.394  0 172.29.94.15 TCP_DENIED/407 3955 CONNECT
> apis.google.com:443 - NONE/- text/html
> 
> It is a bit confusing as the web page loads but I get all these denied
> logs within access.log.
> 
> Could someone help me understand what this means?

Since you mention "AD group authentication" I assume you have used NTLM
or Negotiate authentication.

A few things to be aware of when reading these logs:

1) The entries are logged at time of transaction completion. So the
admin_pete CONNECT requests that got a MISS/200 actually started earlier
than the denied ones; e.g. the one logged at 1406653633.247 with a 72 ms
duration actually started at about 1406653633.175.

 ... that helps you read the log for identifying #2 ...

2) Authentication requires multiple HTTP transactions to perform an
authentication handshake. Both NTLM and Negotiate have mandatory fresh
handshakes on every new connection. NTLM always has an extra transaction
in the middle of the handshake.
So you get a denial first, then a success. This shows up worst of all
with HTTPS like above, where every tunnel attempt requires a new connection.

3) Browsers also have a tendency to open multiple connections at a time.
Sometimes this can be attributed to "happy eyeballs", sometimes they are
just grabbing more for future performance. That (or NTLM) is probably
the case for these attempts, which are only 3 ms apart.

Amos


Re: [squid-users] External ACL tags

2014-07-28 Thread Amos Jeffries
On 29/07/2014 4:42 a.m., Steve Hill wrote:
> 
> I'm trying to build ACLs based on the tags returned by an external ACL,
> but I can't get it to work.
> 
> These are the relevant bits of my config:
> 
> external_acl_type preauth children-max=1 concurrency=100 ttl=0
> negative_ttl=0 %SRC %>{User-Agent} %URI %METHOD /usr/sbin/squid-preauth
> acl preauth external preauth
> acl need_http_auth tag http_auth
> http_access allow !tproxy !tproxy_ssl !https preauth
> http_access allow !preauth_done preauth_tproxy
> http_access allow proxy_auth postauth
> 
> 
> 
> I can see the external ACL is being called and setting various tags:
> 
> 2014/07/28 17:29:40.634 kid1| external_acl.cc(1503) Start:
> externalAclLookup: looking up for '2a00:1a90:5::14
> Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in
> 'preauth'.
> 2014/07/28 17:29:40.634 kid1| external_acl.cc(1513) Start:
> externalAclLookup: will wait for the result of '2a00:1a90:5::14
> Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in
> 'preauth' (ch=0x7f1409a399f8).
> 2014/07/28 17:29:40.634 kid1| external_acl.cc(871) aclMatchExternal:
> "2a00:1a90:5::14 Wget/1.12%20(linux-gnu)
> http://nexusuk.org/%7Esteve/empty GET": return -1.
> 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: preauth = -1
> async
> 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked:
> http_access#7 = -1 async
> 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access
> = -1 async
> 2014/07/28 17:29:40.635 kid1| external_acl.cc(1371)
> externalAclHandleReply: reply={result=ERR, notes={message:
> 53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff;
> tag: http_auth; tag: cp_auth; tag: preauth_ok; tag: preauth_done; }}

Hi Steve,
 This is how tag= keys were originally designed to work. Only to allow
one tag to be assigned to any HTTP transaction. The tag type ACL and
%EXT_TAG configurations still operate that way.

The "note" ACL type should match against values in the tag key name same
as any other annotation. If that does not work try a different key name
than "tag=".

Amos



Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
On 29/07/2014 3:39 a.m., Makson wrote:
> Amos Jeffries wrote
>> 1) broken cacheability headers.
>> The Expires: header says (Date: + 360 days), and s-maxage says 360days
>> BUT ... Last-Modified says 1970. So Last-Modified + s-maxage is already
>> expired.
>>   NP: this is not breaking Squid which still (incorrectly) uses Expires
>> header in preference to s-maxage. But when we fix that bug this server
>> will start to MISS constantly.
> 
> So this is caused by the application? It is made by IBM, if you fix this
> bug, i guess we need to keep using the older version of Squid.
> 
> 
> Amos Jeffries wrote
>>  Does the matching "HTTP Server REQUEST" to the parent peer for the
>> eclipse transaction contain an If-Modified-Since and/or If-Match header?
> 
> Sorry, i didn't get that, would you please explain me in more detail?

There is an HTTP request to the parent server leading to that reply you
posted the headers for. What are the request headers?

Amos


Re: [squid-users] https url filter issue

2014-07-28 Thread Amos Jeffries
On 28/07/2014 10:15 p.m., Sucheta Joshi wrote:
> 
> 
> 
> Hi,
> 
> Our client is using Squid proxy.  We need to do following configurations in
> Squid Proxy.  We are using SquidGard UI to configure this.
> 
> Block facebook and linkedin main sites but allow access to some of the
> facebook and Linkedin URL’s based on certain keywords.While doing this
> settings it url_regex worked for http access, but when we tested same for
> https it gives webpage not found.
> 
> Need input on this.

Look in your Squid access.log.

Notice how the HTTPS traffic shows up as CONNECT requests with *only* a
hostname/IP, ":", and then the port number.

Like so:
 "CONNECT static-a.cdn.facebook.com:443 1.1"

This "static-a.cdn.facebook.com:443" part is the URL available to Squid
(and passed on to the squidguard URL helper). If you are going to use
regex patterns to match on URL that is all you have available for the
pattern to work on.

PS. you would be better off using dstdom_regex or dstdomain ACL types in
squid.conf when expecting to match CONNECT requests by URL.
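
For example, a sketch (the ACL name is arbitrary):

  acl social_https dstdomain .facebook.com .linkedin.com
  http_access deny CONNECT social_https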

Amos



Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
I was looking for Vary headers from the origin server, but none are visible.

Instead I see

1) broken cacheability headers.
The Expires: header says (Date: + 360 days), and s-maxage says 360days
BUT ... Last-Modified says 1970. So Last-Modified + s-maxage is already
expired.
  NP: this is not breaking Squid which still (incorrectly) uses Expires
header in preference to s-maxage. But when we fix that bug this server
will start to MISS constantly.


2) Authorization: header from eclipse.
 Server-authenticated requests can receive cached content but require
revalidation to the server to confirm that the content is legit for this
user. The server is responding with a whole new response object (200)
where I would expect a 304.
 Does the matching "HTTP Server REQUEST" to the parent peer for the
eclipse transaction contain an If-Modified-Since and/or If-Match header?

Amos


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
On 28/07/2014 9:37 p.m., Makson wrote:
> Amos Jeffries wrote
>> 2) explicit hostname "serverb.domain:9443". I find it highly unlikely
>> that you will be finding server A being requested for URLs at that
>> hostname.
> 
> We now have the public URL for app.domain set to servera.domain.
> 
> 
> Amos Jeffries wrote
>> 1) https:// on the URLs. Squid is not suposed to be sending these over
>> un-encrypted peer connections. I dont recall any explicit prevention of
>> that, but there might be.
> 
> A little progress finally, we have two types of clients for our app server,
> one is web browser, and the other is eclipse, for the same request, server B
> will try to query server A ONLY if the request is sent by web browser, i
> tried to look into the log file in server A, no difference between URLs for
> the requests sent by these two types of clients, strange?
> 
> # record for request sent by web browser in server B
> 1406539824.298  3 172.17.210.5 TCP_MISS/200 3736 GET
> https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
> - SIBLING_HIT/172.17.192.33 application/octet-stream
> 
> # record for request sent by eclipse in server B
> 1406540067.167409 172.17.210.5 TCP_MISS/200 3670 GET
> https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
> - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
> 

Excellent.

Would you be able to show the HTTP request coming from each of those
celints, and the HTTP reply coming from the origin parent server?
 debug_options 11,2 will log the necessary details in the current squid
releases. Older Squid require "tcpdump -s0" to capture them all.


Amos


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-27 Thread Amos Jeffries
On 27/07/2014 1:34 a.m., Makson wrote:
> Amos Jeffries wrote
>> Showing that server B is in fact qeuerying server A for the objects. But
>> it would seem that server A did not have them cached.
>>
>> It may be that these responses use Vary: header. ICP does not handle
>> that type of response properly. You may get better behaviour using HTCP
>> instead of ICP between the siblings.
>>
>>
>> I also note that you have 40GB of RAM allocated to each of these Squid
>> instances. Do you actually have over 100GB of RAM on those machines
>> (*excluding* swap space)?
>>
>> Amos
> 
> Hi Amos,
> 
> Thanks for your reply, i am now using HTCP, still don't get it work :-( ,

Ah. Been scratching my head over this for a while.

The log records you mentioned showed two things which might be interfering.

1) https:// on the URLs. Squid is not supposed to be sending these over
un-encrypted peer connections. I don't recall any explicit prevention of
that, but there might be.

2) explicit hostname "serverb.domain:9443". I find it highly unlikely
that you will be finding server A being requested for URLs at that hostname.

All publicly visible URLs from "app.domain" going through these proxies
should be using the public[1] domain name for app.domain's service, not
the proxies unique hostname:port's. That includes your testing requests.

[1] public in the context that clients are told it, not necessarily
Internet-public.

Amos



Re: [squid-users] timeout option needed for ipv6 even in squid-3.4.6?

2014-07-27 Thread Amos Jeffries
On 28/07/2014 10:35 a.m., Jason Haar wrote:
> Hi there
> 
> I'm seeing a reliability issue with squid-3.1.10 through 3.4.6 accessing
> ipv6 sites.
> 
> The root cause is that the ipv6 "Internet" is still a lot less reliable
> than the ipv4 "Internet". Lots of sites seem to have a "flappy"
> relationship with ipv6 which is not reflected in their ipv4 realm. This
> of course has nothing to do with squid directly - but impacts it
> 
> So the issue I'm seeing is going to some websites that have both ipv6
> and ipv4 addresses, ipv6 "working" (ie no immediate "no route" type
> errors), but when squid tries to connect to the ipv6 address first, it
> hangs so long on "down" sites that it times out and never gets around to
> trying the working ipv4 address. It also doesn't appear to remember the
> issue, so that it continues to be down (ie the ipv6 address that is down
> for a website isn't cached to stop squid going there again [for a
> timeframe])
> 
> Shouldn't squid just treat all ipv6 and ipv4 addresses assigned to a DNS
> name in a "round robin" fashion, keeping track of which ones are
> working? (I think it already does that with ipv4, I guess it isn't with
> ipv6?). As per Subject line, I suspect squid needs a ipv6 timeout that
> is shorter than the overall timeout, so that it will fallback on ipv4?

No. Round-robin IP connections from a proxy cause more problems than
they solve. HTTP multiplexing / persistent connections, DNS behaviours,
and the browser "happy eyeballs" algorithm are all involved or affected
by the IP selection. A lot of applications use stateful sessions on the
assumption that a browser, once it has found an IP, will stick with it,
so the best thing for Squid to do is the same.

An IP is just an IP, regardless of version. Connectivity issues happen
just as often in IPv4 as in IPv6 (more so when "carrier grade" NAT gets
involved). The only special treatment IPv6 gets is sorting first by
default ("dns_v4_first on" can change that) since 79% of networks today
apparently have IPv6 connectivity operating at least 1 ms faster than
IPv4. It also avoids a bunch of potential issues with NAT and other
IPv4-only middleware.



Squid already does cache IP connectivity results. The problems are,
firstly, that whenever DNS supplies new or updated IP information the
connect tests have to be retried; connection issues are quite common
even in IPv4 and usually temporary. Secondly, the Squid timeouts (below)
are not by default set to the right values to make the sites you noticed
work very well.

There are several limits which you can set in Squid to speed up or slow
down the whole process:

 dns_timeout - for how long Squid will wait for DNS results. The default
here is 30 seconds. If your DNS servers are highly reliable you can set
that lower.
 ** If the problem sites are taking a long time to respond to AAAA
queries this will greatly affect the connection time. Setting this down
closer to 10 sec can help for specific sites with fully broken DNS
servers, but harms others which merely have slow DNS servers. YMMV, but
I recommend checking the AAAA lookup speed for your specific problem
sites before changing this.

 connect_timeout - for how long Squid waits for TCP SYN/SYN-ACK
handshake to occur. The default here is a full minute. What you set this
to depends on the Squid series:
 * In 3.1 and older this covered DNS lookup and a TCP handshake for
each IP address found by DNS. In these versions you increase the timeout
to get better IPv6 failover behaviour.
 * In 3.2 and later this covers only one TCP handshake. In these
versions you *decrease* it to improve performance. You can safely set it
to a few seconds, but be aware of your Squid machine's networking stack
behaviour regarding TCP protocol retries and timeouts to determine what
values will help or hurt [1].

 forward_max_retries - how many times Squid will attempt a full connect
cycle (one connect_timeout). Default in stable releases is 10, squid-3.5
release is bumping this up to 25. What you set this to depends on the
Squid series again, but as a side effect of connect_timeout changes. In
all versions you can get better connectivity by increasing the value.
For several of the top-ten websites 25 is practically required just to
get past the many IPv6 addresses they advertise and attempt any IPv4.

 forward_timeout - for how long in total Squid will attempt to connect
to the servers (via all methods). The default here is 4 minutes. You can
set it longer to allow automated systems better connectivity chances,
but most people do not have that type of patience so 4 min before
getting the "cannot connect" error page is probably a bit long already.
You should not have to change this.


> 
> i.e. right now I can't get to http://cs.co/ as their ipv6 address is
> down, but their ipv4 address is up and working - but squid won't try it
> because it hangs so long trying the ipv6 address (and on the flip-side,
> www.google.com is working fine over ipv6). To put it another way,
> squid

Re: [squid-users] Tproxy immediately closing connection

2014-07-26 Thread Amos Jeffries
On 25/07/2014 10:02 a.m., Jan Krupa wrote:
> Hi all,
> 
> I've been struggling to configure transparent proxy for IPv6 on my
> Raspberry Pi acting as a router following the guide:
> http://wiki.squid-cache.org/Features/Tproxy4
> 
> Despite all my efforts, all I got was squid squid immediately closing
> connection after it was established (not rejecting connection, three-way
> handshake is successful and then the client receives RST packet).
> 

Do you have libcap2 installed, and was libcap2-dev used to build Squid?
 There have been a few issues where its absence was not reported by Squid.

Amos



Re: [squid-users] Re: YouTube Resolution Locker

2014-07-26 Thread Amos Jeffries
On 26/07/2014 8:36 p.m., Stakres wrote:
> HI Amm,
> 
> Everyone is free to modify the script (client side) by sending YouTube urls
> only, no need to send all the Squid traffic.
> Then, we collect nothing, the requests are reviewed by the script and it
> returns modified urls to lock the YouTube resolutions.
> We do not make any statistics, we do not share data with internal or
> external teams.
> 
> We're not "new to programming" and we DO realize security and privacy
> issues, you're free to use the API or not, we force nobody.
> Everyone is free to spend time for developing a similar function or use ours
> for a quick solution.
> 
> The "one small function"is for free to all, we spent time to develop this
> API and we're a commercial company. So, do you work for free ? we do not.
> If you are interested by the complete API, no problem just contact us and
> I'm sure we will find an arrangement 
> 
> No problem for the No offence, all comments are welcome.
> 
> PS: Sorry for being off-topic on squid mailing list, too.
> 
> Bye Fred


It would be better practice to publish a script which is pre-restricted
to the YouTube URLs your server is useful for, which is what your initial
advertisement stated its purpose was.

That would protect your servers from excessive bandwidth from naive
administrators, help to offer better security by default, and protect
your company from this type of complaint and any future legal
accusations that may arise from naive use of the script.

Amos



Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-25 Thread Amos Jeffries
On 26/07/2014 11:44 a.m., Makson wrote:
> Thanks for your reminder, i think the HTML RAW tag caused the problem, send
> the log again.
> 
> Some records found in access.log in server b, 
> 
> 1406185920.441   1282 172.17.210.5 TCP_MISS/200 814 GET
> https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_houAAK2yEeOvOJ84krOqLg/_EPGIsq20EeOEJLtkkn17bg/h2LjUv8WJVDwJ3rcbA6_u3fNuJylQ0sQlSZdRL_IMkA
> - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
> 1406185921.151  46349 172.17.210.5 TCP_MISS/200 219202 GET
> https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_hpCwIK2yEeOvOJ84krOqLg/_EN-HVK20EeOEJLtkkn17bg/rnslrsXloPXpudCIXRFjShexoc97mr7-2RxWPs7pVnI
> - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
> 
> 
> All records found in access.log in server a, 
> 
> 1406185543.094  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 1406185544.871  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
> https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
> 1406185565.202  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 1406185566.732  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
> https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
> 1406185615.090  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 

Showing that server B is in fact querying server A for the objects. But
it would seem that server A did not have them cached.

It may be that these responses use Vary: header. ICP does not handle
that type of response properly. You may get better behaviour using HTCP
instead of ICP between the siblings.
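
Something along these lines on server B would be the sketch (the
hostname and ports are assumptions, and server A needs the mirror-image
change plus a matching htcp_port):

  htcp_port 4827
  cache_peer servera.domain sibling 9443 4827 htcp proxy-only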


I also note that you have 40GB of RAM allocated to each of these Squid
instances. Do you actually have over 100GB of RAM on those machines
(*excluding* swap space)?

Amos



Re: [squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread Amos Jeffries
On 26/07/2014 5:42 a.m., max wrote:
> Am 25.07.2014 13:38, schrieb Amos Jeffries:
>> On 25/07/2014 9:09 p.m., max wrote:
>>> Hey there,
>>> i'm wondering is it possible to change the protocol of Squid error
>>> Pages?
>>>
>>> For Example:
>>>
>>> When squid redirects to "deny_info 307:ERR_BLOCK" the request is made in
>>> http but i want to use https.
>>> Is that possible?
>>> I am not able to use https://somedomain because of dynamic content on
>>> the Error Page.
>> You answered your own question right there.
>>
>> The 307 code is just an instruction for the client to fetch a different
>> URL - the one following the ':' in deny_info parameter. That can be any
>> valid URI. Including https:// ones.
>>
>> Dynamic content in the page that deny_info URL presents has nothing to
>> do with Squid.
>>
>> Amos
>>
>>
> Well yes, in my case it has.
> I use Squid to load the dynamic Content. My ERR_BLOCK calls a Page with
> an iframe - this loads content.
> So i would would need to call the URI with some kind of variable. A
> token to call the iframe Data.
> like
> https://somepage.tld/?=randomtokenhere
> But i dont know if there is a way i can do that within squid.conf
> 
> Cheers
> Max


  "deny_info 307:ERR_BLOCK"

causes Squid to generate the HTTP response message:

 HTTP/1.1 307 Temporary Redirect\r\n
 Location: ERR_BLOCK\r\n
 \r\n

Please see <http://www.squid-cache.org/Doc/config/deny_info/> for the
available macro codes. This may require you to upgrade your Squid if it
is older than 3.2.
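
So a rough sketch of what you might try (the ACL name and page URL are
placeholders; check the macro list on the deny_info page above, where %u
is assumed here to expand to the client's original URL, which your
splash page could then use to build its token):

  acl blocked dstdomain .example-blocked.test
  deny_info 307:https://somepage.tld/blocked?url=%u blocked
  http_access deny blocked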

Amos


Re: [squid-users] How to install squid 3.4.6 in freebsd

2014-07-25 Thread Amos Jeffries
On 26/07/2014 12:57 a.m., Soporte Técnico wrote:
> Anyone have idea how i can download / install squid 3.4.6 in freebsd 9?
> 
> There´s any tutorial, instructions, download sites or similar?

http://wiki.squid-cache.org/KnowledgeBase/FreeBSD

Amos



Re: [squid-users] FW: Problem with server IO resource, need to reduce logging level by excluding specific sites from being logged

2014-07-25 Thread Amos Jeffries
On 25/07/2014 11:28 p.m., RYAN Justin wrote:
> Cheers Marcus,
> I did see via googling a rule of thumb quote " cache_mem = total physical 
> memory / 3" - ref 
> http://forums.justlinux.com/showthread.php?126396-Squid-cache-tuning there is 
> a more complex formula quoted too.
> 
> Money and access constraints negate the move to faster storage :)
> 
> I will look into your recommendations.
> 
> The question of removing noise from being logged still exists - would be a 
> nice to have option

Depends on what you mean by noise.

I assume you mean entries in access.log ...

The relevant directive is in your config file as "cache_access_log".
Nowadays that should be configured as:

  access_log /squid/logs/access.log squid

the line can be followed by a list of ACL names, all of which must match
for a transaction to be recorded in the log file.


For example; in order to log only requests for example.com

  acl example1 dstdomain example.com
  access_log /squid/logs/access.log squid example1


... or in order to omit all CONNECT requests:


  # ACL for CONNECT is already defined.
  access_log /squid/logs/access.log squid !CONNECT


Amos



Re: [squid-users] 3.HEAD and delay pools

2014-07-25 Thread Amos Jeffries
On 25/07/2014 10:25 p.m., masterx81 wrote:
> Hi!
> I'm trying to limit the bandwidth of squid and i've a problem.
> I'm using the following directives:
> 
> But on reconfigure i get the error:
> 
> squid -v list the "--enable-delay-pools" compile option, so seem all ok...
> 
> What i'm doing wrong?
> 

Using Nabble to send graphical quotations to a text-only mailing list.
Please try again without the fancy quoting.

> And also, what's the best way to limit upload bandwidth of squid?

Using operating system QoS controls. They work far better than Squid
delay pools do.

> client_delay_pools?

If need be, yes.
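
As a sketch, something along the lines of the client_delay_* directives
(requires --enable-delay-pools; the values here limit each localnet
client's uploads to roughly 1 Mbit/s and are assumptions to adjust):

  client_delay_pools 1
  client_delay_initial_bucket_level 50
  client_delay_access 1 allow localnet
  client_delay_access 1 deny all
  client_delay_parameters 1 128000 256000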

Amos


Re: [squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread Amos Jeffries
On 25/07/2014 9:09 p.m., max wrote:
> Hey there,
> i'm wondering is it possible to change the protocol of Squid error Pages?
> 
> For Example:
> 
> When squid redirects to "deny_info 307:ERR_BLOCK" the request is made in
> http but i want to use https.
> Is that possible?
> I am not able to use https://somedomain because of dynamic content on
> the Error Page.

You answered your own question right there.

The 307 code is just an instruction for the client to fetch a different
URL - the one following the ':' in deny_info parameter. That can be any
valid URI. Including https:// ones.

Dynamic content in the page that deny_info URL presents has nothing to
do with Squid.

Amos



Re: [squid-users] Set up squid as a transparent proxy

2014-07-25 Thread Amos Jeffries
On 25/07/2014 10:15 a.m., Israel Brewster wrote:
> I have been using Squid 2.9 on OpenBSD 5.0 for a while as a transparent 
> proxy. PF on the proxy box rdr-to redirects all web requests not destined for 
> the box itself to squid running on port 3128. Squid then processes the 
> request based on a series of ACLs, and either allows the request or redirects 
> (deny_info ... all) the request to a page on the proxy box.
> 

There are some big changes in OpenBSD between those versions. Have you
tried divert-to in the PF rules and tproxy option on the Squid http_port ?
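
For instance, a very rough sketch only (the interface name and ports are
assumptions; see the Squid wiki Tproxy/PF pages for the current details):

  # pf.conf on the proxy box
  pass in quick on em1 inet proto tcp to port 80 divert-to 127.0.0.1 port 3129

  # squid.conf
  http_port 3129 tproxy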

Amos


Re: [squid-users] Trouble with Session Handler

2014-07-25 Thread Amos Jeffries
On 25/07/2014 7:13 p.m., Cemil Browne wrote:
> Hi all, I'm trying to set up a situation as follows:  I have a web
> server at [server]:80   .  I've got squid installed on [server]:3000 .

This is back to front.

Squid should be the gateway listening on [server]:80, with the web
server listening on a private IP of the machine, also port 80 if
possible (ie localhost:80).


> The requirement is to ensure that any request to web server protected
> content (/FP/*) is redirected to a splash page (terms and conditions),
> accepted, then allowed.  I've got most of the way, but the last bit
> doesn't work.  This is on a private network.
> 
> Squid config:
> 
> http_port 3000 accel defaultsite=192.168.56.101
> cache_peer 127.0.0.1 parent 80 0 no-query originserver
> 
> 
> external_acl_type session ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
> 
> acl session_login external session LOGIN
> 
> external_acl_type session_active_def ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
> 

Each of the above two external_acl_type definitions runs different
helper instances. Since you have not defined an on-disk database that
they share, the session data will be stored in memory for whichever one
is starting the sessions, but inaccessible to the one checking whether a
session exists.


> acl session_is_active external session_active_def
> 

What you should have is exactly *1* external_acl_type directive, used by
two different acl directives.

Like so:
  external_acl_type session ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

  acl session_login external session LOGIN
  acl session_is_active external session

> acl accepted_url url_regex -i accepted.html.*
> acl splash_url url_regex -i ^http://192.168.56.101:3000/splash.html$
> acl protected url_regex FP.*

Regex has implicit .* before and after every pattern unless an ^ or $
anchor is specified. You do not have to write the .*

Also, according to your policy description that last pattern should be
matching path prefix "/FP" not any URL containing "FP".

> 
> http_access allow splash_url
> http_access allow accepted_url session_login
> 
> http_access deny protected !session_is_active
> 
> deny_info http://192.168.56.101:3000/splash.html session_is_active

It is best to use splash.html as a static page delivered in place of the
access denied page:
 deny_info splash.html session_is_active

then have the ToC accept button URL be the one which begins the session.

So stitching the above changes into your squid.conf you should have this:

  http_port 192.168.56.101:80 accel defaultsite=192.168.56.101
  cache_peer 127.0.0.1 parent 80 0 no-query originserver

  external_acl_type session ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

  acl session_login external session LOGIN
  acl session_is_active external session
  deny_info /etc/squid/splash.html session_is_active

  acl accepted_url urlpath_regex -i accepted.html$
  acl splash_url url_regex -i ^http://192.168.56.101/splash.html$
  acl protected urlpath_regex ^/FP

  http_access allow splash_url
  http_access allow accepted_url session_login
  http_access deny protected !session_is_active


Amos


Re: [squid-users] Re: Squid not listening on any port

2014-07-24 Thread Amos Jeffries
On 24/07/2014 9:41 p.m., israelsilva1 wrote:
> Thanks, yeah tried that too but no errors...
> 
> /[root@dxb-squid34 ~]# squid -N -d 100

 Um, "100" is not a debug level between 0 and 9.


> 2014/07/24 13:38:18| Warning: empty ACL: acl blockfiles urlpath_regex -i
> "/etc/squid/local/bad/blockfiles"
> 2014/07/24 13:38:18| Current Directory is /root
> 2014/07/24 13:38:18| Starting Squid Cache version 3.4.6 for
> x86_64-unknown-linux-gnu...
> 2014/07/24 13:38:18| Process ID 12812
> 2014/07/24 13:38:18| Process Roles: master worker
> 2014/07/24 13:38:18| With 4096 file descriptors available
> 2014/07/24 13:38:18| Initializing IP Cache...
> 2014/07/24 13:38:18| DNS Socket created at 0.0.0.0, FD 6
> 2014/07/24 13:38:18| Adding nameserver 10.11.1.11 from squid.conf
> 2014/07/24 13:38:18| Adding nameserver 10.11.1.12 from squid.conf
> 2014/07/24 13:38:18| helperOpenServers: Starting 0/100 'squidGuard'
> processes
> 2014/07/24 13:38:18| helperOpenServers: No 'squidGuard' processes needed.
> 2014/07/24 13:38:18| Logfile: opening log /var/log/squid/access.log
> 2014/07/24 13:38:18| WARNING: log name now starts with a module name. Use
> 'stdio:/var/log/squid/access.log'
> 2014/07/24 13:38:18| Local cache digest enabled; rebuild/rewrite every
> 3600/3600 sec
> 2014/07/24 13:38:18| Logfile: opening log /var/log/squid/store.log
> 2014/07/24 13:38:18| WARNING: log name now starts with a module name. Use
> 'stdio:/var/log/squid/store.log'
> 2014/07/24 13:38:18| Swap maxSize 210944000 + 2097152 KB, estimated 16387780
> objects
> 2014/07/24 13:38:18| Target number of buckets: 819389
> 2014/07/24 13:38:18| Using 1048576 Store buckets
> 2014/07/24 13:38:18| Max Mem  size: 2097152 KB
> 2014/07/24 13:38:18| Max Swap size: 210944000 KB
> 2014/07/24 13:38:18| Rebuilding storage in /cache2/squid (clean log)
> 2014/07/24 13:38:18| Rebuilding storage in /cache3/squid (clean log)
> 2014/07/24 13:38:18| Rebuilding storage in /cache4/squid (clean log)
> 2014/07/24 13:38:18| Using Least Load store dir selection
> 2014/07/24 13:38:18| Current Directory is /root
> 2014/07/24 13:38:18| Finished loading MIME types and icons.
> 2014/07/24 13:38:18| HTCP Disabled.

Squid is still loading the cache_dir data into memory and there has been
no mention of ports loaded from the config file yet.
 Where is the rest of the startup log?
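
If it helps, a simple way to capture the whole foreground startup output in
one place (the log path is just an example):

  squid -N -d 1 2>&1 | tee /tmp/squid-startup.log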

Amos


Re: [squid-users] cache_peer_access - no longer working as expected

2014-07-24 Thread Amos Jeffries
On 23/07/2014 1:16 p.m., Matthew Croall wrote:
> Hi,
> 
> Long time Squid user, first time posting so I hope I am doing this correctly.
> 
> Having recently upgraded Squid from 3.1 to 3.3 at both organisations I
> support, I have noticed that cache_peer selection doesn't seem to obey
> cache_peer_access anymore.
> 
> Squid Cache: Version 3.3.8
> Ubuntu
> configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
> '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
> '--infodir=${prefix}/share/info' '--sysconfdir=/etc'
> '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
> '--srcdir=.' '--disable-maintainer-mode'
> '--disable-dependency-tracking' '--disable-silent-rules'
> '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
> '--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
> '--enable-storeio=ufs,aufs,diskd,rock'
> '--enable-removal-policies=lru,heap' '--enable-delay-pools'
> '--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
> '--enable-follow-x-forwarded-for'
> '--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
> '--enable-auth-digest=file,LDAP'
> '--enable-auth-negotiate=kerberos,wrapper'
> '--enable-auth-ntlm=fake,smb_lm'
> '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
> '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
> '--enable-icmp' '--enable-zph-qos' '--enable-ecap'
> '--disable-translation' '--with-swapdir=/var/spool/squid3'
> '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid'
> '--with-filedescriptors=65536' '--with-large-files'
> '--with-default-user=proxy' '--enable-linux-netfilter'
> 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wall'
> 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
> 'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE
> -fstack-protector --param=ssp-buffer-size=4 -Wformat
> -Werror=format-security'
> 
> Config extract:
> # No Authentication
> cache_peer 10.60.184.47 parent 8080 0 no-digest no-query
> name=minimum_filtering login=user:secret
> cache_peer_access minimum_filtering allow trusted_computers
> cache_peer_access minimum_filtering allow admin_subnet
> cache_peer_access minimum_filtering deny all
> 
> # Requires Authentication
> cache_peer 10.60.184.47 parent 8080 0 no-query no-digest
> name=regular_filtering login=PASS
> cache_peer_access regular_filtering deny trusted_computers
> cache_peer_access regular_filtering deny admin_subnet
> cache_peer_access regular_filtering allow all
> 
> Prior any trusted computer or anyone from the admin subnet would not
> get a http basic auth logon box and would always pass through the
> minimum_filtering peer. Since upgrading users from all over the place
> and myself are now getting logon boxes every now and then, it just
> seems like it is just load balancing and ignoring the
> cache_peer_access controls.
> 
> Has anyone else experienced this? Any help at all would be greatly 
> appreciated!

You are the first to report an issue of this type, IIRC. A couple of
traffic handling behaviours changed between those Squid series which may
be relevant. So...

 What does the rest of your squid.conf contain?
 Any sign of issues in cache.log?
  (perhaps with "debug_options ALL,1")
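
If level-1 logging shows nothing useful, peer selection has its own debug
section; a sketch of a more targeted setting (section 44 covers peer
selection, if memory serves):

  debug_options ALL,1 44,3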

Amos


Re: [squid-users] conditional configuration: are nested if's ok?

2014-07-24 Thread Amos Jeffries
On 18/07/2014 3:37 a.m., ferna...@lozano.eti.br wrote:
> Hi,
> 
> from squid.conf.documented, regarding conditional configuration:
> 
> NOTE: An else-if condition is not supported.
> 
> This mean we cannot have nested if's, like:
> 
> workers 2
> cache_dir rock /cache/shared 2000 min-size=1 max-size=31000
> max-swap-rate=250 swap-timeout=350
> if ${process_number} = 4
> # no aufs for coordinator
> else
> if ${process_number} = 3
> # no aufs for disker
> else
> cache_dir aufs /cache/worker${process_number} 2000 16 256
> min-size=31001 max-size=346030080
> endif
> endif
> 
> 
> []s, Fernando Lozano
> 


I expect that should work. Have you tried it?
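
A quick way to find out without touching a running proxy is to run only the
configuration parser over your squid.conf:

  squid -k parse

It should complain about any if/endif nesting it does not accept.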

Amos


Re: [squid-users] ICAP Error

2014-07-24 Thread Amos Jeffries
On 24/07/2014 10:46 a.m., Roman Gelfand wrote:
> I am getting an error, below, when attempting to bring up
> http://ads.adfox.ru/173362/goLink?.
> 
> How can I troubleshoot this?
> 



> 
> This means that some aspect of the ICAP communication failed.
> 
> Some possible problems are:
> 
> The ICAP server is not reachable.
> 
> An Illegal response was received from the ICAP server.
> 
> 
> 
> 
> Generated Wed, 23 Jul 2014 22:53:21 GMT by websap.masmid.com (squid)
> 

The ICAP server has bugs.

If "websap.masmid.com" is your Squid server look at the ICAP protocol
coming back from the ICAP server on these requests. That will require
either "debug_options ALL,1 93,9" in squid.conf or a "tcdump -s0"
traffic trace.
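
For the traffic trace, something like this on the Squid box should capture
the ICAP exchange (assuming the ICAP service listens on the default port 1344):

  tcpdump -s0 -w icap-trace.pcap tcp port 1344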

Otherwise there is nothing you can do except report it to the
administrator of "websap.masmid.com".

Amos


