Re: [squid-users] Squid-3.5.28 slowdown

2019-03-01 Thread Michael Hendrie

> On 1 Mar 2019, at 9:34 pm, Enrico Heine  wrote:
> 
> >>just a shot into the dark<<, is it possible that you use the adaption 
> >>service for ICAP?

There is an eCAP adaptation service but not ICAP; would eCAP be affected by the 
same condition reported in the bug report you linked to?  
Early in the investigation I did set 'ecap_enable off' and ran 'squid -k 
reconfigure' while the condition was present, but it didn't restore speed; a 
full squid restart was required.
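
For reference, that test amounted to nothing more than the following (a minimal 
sketch; adjust for your own config layout):

  # squid.conf - temporarily turn eCAP adaptation off
  ecap_enable off

  # then have the running squid re-read its config
  squid -k reconfigure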

> If so, a fast test: this should return 0 if you are not affected by this; if 
> it is higher than 0, check the link below:
> netstat -pa | grep CLOSE_WAIT | wc -l
> 
> also have a look into /var/log/kern.log 

I will check these out next time the condition occurs

Thanks,

Michael





[squid-users] Squid-3.5.28 slowdown

2019-03-01 Thread Michael Hendrie
Hi Guys,

I have a squid-3.5.28 installation that is deployed to do transparent ssl-bump 
of HTTPS traffic (linux bridge, tproxy).  The server is not overly busy, CPU 
and RAM usage is low and no swap is being used, yet regularly the squid service 
is choking HTTPS traffic to the point where websites are timing out.  Any other 
traffic flowing through the bridge is unaffected and continues to operate at 
normal expected speeds.

I have checked all the obvious things (CPU/RAM usage, network interface errors, 
conntrack table, TCP resource exhaustion) and all look fine.  There is no 
caching taking place and disk I/O is not a problem.

During the times when squid is slow, even using squidclient to query squid's 
state is extremely slow to respond.  As you can see in the below snip from the 
access.log, the mgr:counters and mgr:5min requests are taking up to 30 seconds 
to complete when usually the response time is 0:

1551397216.978 31 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397220.633  0 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397233.385  2 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397237.431 14 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397262.074  2 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397280.644 17 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397314.764  1 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397330.455  5 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397377.265  4 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397385.727  0 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397396.161 17 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/5min - HIER_NONE/- - -
1551397432.974 11 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397462.897 11 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397492.759  7 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397522.611  9 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397552.521 12 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397582.484 17 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
1551397612.446 10 127.0.0.1 TCP_MISS/000 0 GET cache_object://localhost/counters - HIER_NONE/- - -
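
For reference, those cache_object:// entries are what squid logs when the cache 
manager is polled with squidclient; the probes are along these lines (the port 
here is an example, substitute your own http_port):

  squidclient -h 127.0.0.1 -p 3128 mgr:5min
  squidclient -h 127.0.0.1 -p 3128 mgr:counters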

I have a number of these servers deployed, all running the same 
hardware/config/squid versions, and only this one is experiencing an issue.  
Looking for suggestions on what could be occurring and how to debug further?  

Thanks,

Michael


[squid-users] TCP_TUNNEL and ecap

2019-01-21 Thread Michael Hendrie
Hi All,

I have an eCAP adapter that, amongst other things, tracks response size.  This 
works fine for HTTP and ssl-bump'd HTTPS but not for TCP_TUNNEL responses, as 
they are not seen by the eCAP adapter.

I understand that in most cases adaptation of a tunnelled HTTPS response is 
pointless as it would result in message corruption, but I am wondering if it is 
at all possible to get the TCP_TUNNEL response seen by eCAP.  I can't see a 
config option for it in 3.5 or 4.5.

Thanks, 

Michael


[squid-users] ssl-bump splice on unsupported ciphers

2015-12-08 Thread Michael Hendrie
Hi All,

I've read a few articles that indicate squid-3.5 and below don't support 
ssl-bump'ing ECDHE ciphers.

Is this correct?  If so, is it possible to create/structure acl and ssl_bump 
rules to splice on unsupported ciphers?  

I've looked through the available ACL options and it doesn't seem to be 
possible, unless I'm missing something.

Thanks,

Michael


Re: [squid-users] ssl_bump peek in squid-3.5.3

2015-04-24 Thread Michael Hendrie

 On 23 Apr 2015, at 9:22 pm, James Lay j...@slave-tothe-box.net wrote:
 
 Michael,
 
 Could you post your entire config here if possible?  Many of us continue to 
 face challenges with ssl_bump and a working config would be great.  Thank you.
 
 James

My ssl_bump configuration is contained in a separate conf file that is 
“included” via the main squid.conf file.  There is nothing special about my 
main squid.conf; here are the contents of the include:

https_port 8090 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl-bump.cer key=/etc/squid/ssl-bump.key cafile=/etc/squid/ssl-bump.cer
acl p8090 myportname 8090
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump p8090

Which was built using information from 
http://wiki.squid-cache.org/Features/SslPeekAndSplice
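
For completeness, the file is pulled in from the main squid.conf with squid's 
include directive; the path below is an example rather than my real layout:

  # main squid.conf
  include /etc/squid/ssl-bump.conf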




[squid-users] ssl_bump peek in squid-3.5.3

2015-04-23 Thread Michael Hendrie
Hi All

I’ve been running squid-3.4.x in tproxy mode with ssl_bump server-first for 
some time and it has been working great.

I have just moved to 3.5.3 to use peek to overcome some issues with sites that 
require SNI to serve up the correct certificate.  In most cases this is working 
well, however I seem to have an issue that (so far) only affects the Safari web 
browser with certain sites.  As an example, https://twitter.com and 
https://www.openssl.org will result in a Safari error page “can’t establish a 
secure connection with the server”.  There is also a correlating entry in the 
cache.log: 'Error negotiating SSL connection on FD 45: error:140A1175:SSL 
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)’

Google shows some hits for this SSL error on other products, mostly nginx, but 
nothing suggested in those postings has worked for me (setting specific SSL/TLS 
versions and ciphers).

If I use a different browser the above-mentioned sites work as expected.  If I 
continue to bump ‘server-first’ for these problem sites they also load as 
expected in Safari, however I’m hoping to move to peek exclusively to overcome 
the SNI issues.

Anyone experiencing the same thing or have any suggestions?  ssl_bump related 
config below:

https_port 8090 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl-bump.cer key=/etc/squid/ssl-bump.key
acl p8090 myportname 8090
acl step1 at_step SslBump1
#acl broken_peek dstdomain .twttr.com .twitter.com .facebook.com .openssl.org
#ssl_bump server-first broken_peek
ssl_bump peek step1
ssl_bump bump p8090

Thanks!

Michael




Re: [squid-users] ssl_bump peek in squid-3.5.3

2015-04-23 Thread Michael Hendrie

 On 23 Apr 2015, at 4:21 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 23/04/2015 6:29 p.m., Michael Hendrie wrote:
 Hi All
 
 I’ve been running squid-3.4.x in tproxy mode with ssl_bump
 server-first for some time and has been working great.
 
 I have just moved to 3.5.3 to use peek to overcome some issues with
 sites that require SNI to serve up the correct certificate.  In most
 cases this is work well however I seem to have an issue that (so far)
 only effects the Safari web browser with certain sites.  As an
 example, https://twitter.com and https://www.openssl.org will result in a
 Safari error page “can’t establish a secure connection with the
 server”.  There is also a correlating entry in the cache.log 'Error
 negotiating SSL connection on FD 45: error:140A1175:SSL
 routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)’
 
 Please try the latest snapshot of 3.5 series. There are some TLS session
 resume and SNI bug fixes.

Thanks Amos, but I did try squid-3.5.3-20150420-r13802 before posting… any 
other suggestions?

Michael


Re: [squid-users] ssl_bump peek in squid-3.5.3

2015-04-23 Thread Michael Hendrie

 On 23 Apr 2015, at 4:28 pm, Michael Hendrie mich...@hendrie.id.au wrote:
 
 
 On 23 Apr 2015, at 4:21 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 23/04/2015 6:29 p.m., Michael Hendrie wrote:
 Hi All
 
 I’ve been running squid-3.4.x in tproxy mode with ssl_bump
 server-first for some time and has been working great.
 
 I have just moved to 3.5.3 to use peek to overcome some issues with
 sites that require SNI to serve up the correct certificate.  In most
 cases this is work well however I seem to have an issue that (so far)
 only effects the Safari web browser with certain sites.  As an
 example, https://twitter.com and https://www.openssl.org will result in a
 Safari error page “can’t establish a secure connection with the
 server”.  There is also a correlating entry in the cache.log 'Error
 negotiating SSL connection on FD 45: error:140A1175:SSL
 routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)’
 
 Please try the latest snapshot of 3.5 series. There are some TLS session
 resume and SNI bug fixes.
 
 Thanks Amos, but I did try squid-3.5.3-20150420-r13802 before posting….any 
 other suggestions?
 
 Michael

OK, I seem to have resolved this now; for the benefit of everyone else on the 
list:

In the above tests the generated certificate was being signed by a RootCA that 
was installed as trusted in the browser certificate store.  

I had previously noticed in my test environment (and thought it completely 
unrelated) that bumped requests using the new peek/bump in 3.5.x were not 
sending the entire certificate chain to the browser, but since the clients 
trusted the RootCA that was fine.  In my production environment, however, I use 
an IntermediateCA to sign the bumped requests, and this causes a browser error 
as the clients only trust the RootCA.  While investigating that, I found that 
by adding 'cafile=/path/to/signing_ca_bundle' to the 'https_port' line (which 
in my config is exactly the same file as 'cert='), all certs are sent to the 
client, and I no longer face the issue with Safari and https://twitter.com or 
https://www.openssl.org regardless of using the RootCA or the IntermediateCA to 
sign bumped requests.

Not sure why, but 'ssl_bump server-first' sends the entire chain without 
specifying 'cafile=' and 'ssl_bump peek/bump' doesn't… but anyway my problem is 
solved!
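
In other words, the working port line ended up looking something like the 
following (paths are illustrative; the cafile= bundle holds the signing 
IntermediateCA plus the RootCA):

  https_port 8090 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl-bump.cer key=/etc/squid/ssl-bump.key cafile=/etc/squid/signing_ca_bundle.pem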

Michael



Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Michael Hendrie

On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:

 sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
 
 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope
 
 Hm.. fails for me as well. Please try the attached patch.

I'm getting the same error as the original poster with 3.2.0.16.  The patch 
fixes some of the errors but not all.  Remaining are:

certificate_db.cc: In member function ‘bool Ssl::CertificateDb::deleteInvalidCertificate()’:
certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:522: error:   initializing argument 1 of ‘void* sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool Ssl::CertificateDb::deleteOldestCertificate()’:
certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:553: error:   initializing argument 1 of ‘void* sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool Ssl::CertificateDb::deleteByHostname(const std::string)’:
certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:570: error:   initializing argument 1 of ‘void* sk_value(const _STACK*, int)’

This is with Scientific Linux 6.1 (x86_64):
OpenSSL 1.0.0-fips 29 Mar 2010
gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 


 
 Regards
 Henrik
 
 openssl-1.0.0g.diff



Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Michael Hendrie

On 02/04/2012, at 6:29 PM, Henrik Nordström wrote:

 mån 2012-04-02 klockan 16:47 +0930 skrev Michael Hendrie:
 On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:
 
 sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
 
 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope
 
 Hm.. fails for me as well. Please try the attached patch.
 
 Getting the same error as the original poster with 3.2.0.16.  Patch fixes 
 part of the errors but not all.  Remaining is :
 
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteInvalidCertificate()’:
 certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:522: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteOldestCertificate()’:
 certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:553: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteByHostname(const std::string)’:
 certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:570: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 
 This is with Scientific Linux 6.1 (x86_64):
 OpenSSL 1.0.0-fips 29 Mar 2010
 gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 
 
 The problem is due to a RedHat patch to OpenSSL 1.0 where OpenSSL lies
 about it's version. Not yet sure what is the best way to solve this but
 I guess we need to make configure probe for these OpenSSL features
 instead of relying on the advertised version if we want to support
 --enable-ssl-crtd on these OS version.

Thanks for the info.  I have used the '--with-openssl=' configure option to 
compile against a different OpenSSL version (1.0.0g) and this compiled without 
error.
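
For anyone else hitting this, the build was roughly as follows (the OpenSSL 
install path is an example; keep whatever other configure options your build 
normally uses):

  ./configure --enable-ssl --enable-ssl-crtd --with-openssl=/opt/openssl-1.0.0g [...other options...]
  make && make install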

 
 It should be fixed in Fedora rawhide, but apparently can't be fixed for
 released versions of Fedora or RHEL having the hacked openssl version.
 
 Regards
 Henrik
 



Re: [squid-users] requests per second

2012-03-12 Thread Michael Hendrie

On 13/03/2012, at 12:07 AM, guest01 wrote:

 Hi,
 
 We are using Squid as forward-proxy for about 10-20k clients with
 about 1200RPS.
snip
 
 IMHO, it is really important which features you are planning to use.
 For example, we are using authentication (kerberos, ntlm, ldap) and
 ICAP content adaption. Without that, our RPS-rate would be much
 higher. Because of a lacking SMP-support in 3.1, we are using 4
 instances per server. At the beginning, the setup used to be much
 simpler! ;-)
 

Also, understanding your traffic throughput (mbps) and cache-hit ratio, and not 
just requests/second, is a big factor in scoping the required hardware.  When 
benchmarking with Web Polygraph, you can see the difference that throughput and 
cache-hit ratio make to overall server performance.

As an example, one particular server I benchmarked in a forward proxy 
configuration performed as follows:

1200 requests-per-second @ ~350mbps
or 
2700 requests-per-second @ ~200mbps

That was with Polygraph configured to achieve around 15% byte hit ratio.

Changing the byte hit ratio of the test up to around 40% resulted in a huge 
increase in request rate throughput, due to a lot more content being satisfied 
from the high-speed disk array.  A 40% byte hit ratio wasn't realistic for the 
traffic pattern the server was going to see, so it was an unrealistic test result.

Who knows what the results would have looked like if I added auth, a few ACLs, 
different refresh patterns, etc.

I think it is very difficult for anyone to answer (other than as a guide) 
whether using hardware component X will achieve result Y unless they're using 
the exact same hardware (not just one component), the same configuration and 
the same traffic patterns.


 hth,
 Peter
 
 On Mon, Mar 12, 2012 at 1:47 PM, David B. haazel...@gmail.com wrote:
 Hi,
 
 It's only a reverse proxy cache, not a proxy. This is different.
 We use squid only for images.
 
 Squid : 3.1.x
 OS : debian 64 bits
 
 Le 12/03/2012 12:44, Student University a écrit :
 Hi David 
 
 You achieve 2K with what version of squid ,,,
 do you have any special configuration tweaks ,,,
 
 also what if i use SSD [200,000 Random Write 4K IOPS]
 
 Best Regards ,,,
 Liley
 



Re: [squid-users] requests per second

2012-03-12 Thread Michael Hendrie

On 11/03/2012, at 10:21 PM, Amos Jeffries wrote:

 On 9/03/2012 4:52 a.m., Student University wrote:
 Hi ,
 This is Liley ,,,
 
 can anyone tell me what
 requests per second can squid3 serves ,
 especially if we run it on the top of a hardware with OCZ RevoDrive 3
 X2 (200,000 Random Write 4K IOPS)
 
 Thanks in advance .
 
 These are some performance stats from network admin who have been willing to 
 donate the info publicly:
 http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

How do we post results on the above wiki page?

 
 As for the OCZ question, Squid has been known to burn through SSDs a lot 
 faster than manufacturer claims of their lifetime. Squid traffic is 
 mostly-write with 50Mbps write peak rates where SSD are manufactured for 
 mostly-read I/O patterns. I've recently been told of one ISP reaching around 
 100Mbps writes on average with no trouble at all.
 
 The OCZ is rated well above that, so is unlikely to be a visible bottleneck. 
 You are more likely to be throttled by the speed Squid can parse new 
 requests. Which is CPU bound.
 
 Amos
 



Re: [squid-users] enabling X-Authenticated-user

2012-02-29 Thread Michael Hendrie

On 01/03/2012, at 1:45 PM, Brett Lymn wrote:

 On Thu, Mar 01, 2012 at 03:07:42PM +1300, Amos Jeffries wrote:
 On 01.03.2012 14:32, Brett Lymn wrote:
 I have an application that pays attention to the X-Authenticated-User
 header.
 
 Why? what does it do?
 
 
 Apparently, it believes it.  I don't _think_ it actually does any
 further authentications based on the information from what I can see but
 just uses the username presented for its own internal machinations.
 
 I need to use this application as an upstream proxy and need to
 have the user authentication passed from squid through to this
 application.
 
 What happens to the user if Squid accepts the credentials and 
 authenticates them. But the other proxy does not? important.
 
 
 Given that both are querying the same auth database (windows AD) this is
 unlikely in our situation.
 
 I know about the login=PASS cache_peer directive but I am
 wondering how that plays with negotiated authentication schemes like
 kerberos.
 
 
 In HTTP proxy-auth credentials are decided at each and every hop down 
 the chain servers. login= is the way Squid uses to determine what 
 credentials are valid for the next peer. The same directive can also 
 completely replace the downstream credentials, wholly or partially and 
 send a new set upstream.
 Kerberos connection-based nature forces this fact right up into your 
 face. Needing a new keytab token at every proxy. Squid 3.2+ supports 
 login=NEGOTIATE to send your Squid's Kerberos credentials to the next 
 proxy down the chain.
 
 
 Hmmm I don't think that is what I need - I really need to pass the name
 of the user that made the connection to squid upstream.  I have just
 tested login=PASS and that works fine for basic auth but kerberos fails.
 
 
 Login from user to web servers is irrelevant to this whole process. 
 They are passed down untouched. Although some auth frameworks like 
 NTLM/Kerberos/Negotiate make several bad assumptions and need persistent 
 connection pinning hacks (Squid 2.6, 2.7, and 3.1+ supported) in place 
 to work over HTTP.
 
 
 Right.  I am not wanting to touch logon from user to web servers.  The
 upstream proxy is a security/scanning thing that can apply different
 policies based on a user or group membership and also feeds the data
 into a reporting database.  For all this to work properly the username
 needs of the person making the request via squid needs to be presented
 to the upstream proxy.
 
 Is there a configuration item I can enable to get the header?
 A bit of a search showed up nothing apart from some ICAP related 
 stuff.
 I cannot use ICAP for this application, I simply need the header.  
 Would
 the squid developers consider a patch if I developed one to add this?
 
 No the header is not part of HTTP or any other protocol specification. 
 It is an experimental header created for the use of ICAP plugins to 
 Squid until such time as Squid can be re-written to use proper 
 authentication to ICAP or ICAP helpers to not depend on the existence of 
 a user label.
 
 
 Well, I can tell you now that someone in the commercial space is abusing
 that header for their own ends.  Their documentation has clear
 instructions on how to add the header to a BlueCoat device and they have
 a .dll for MS ISA.  I don't want to name names in a public forum but I
 am happy to provide the info privately if you are interested.

I have a commercial web filtering and reporting product (although I think 
different from yours) that can also make use of the X-Authenticated-User header 
(as well as other user identification methods).

I have previously patched 3.0 versions of squid using the patch from 
http://www.squid-cache.org/mail-archive/squid-dev/201004/0199.html.

I'm sure it wouldn't be too hard to port to other versions of squid.

 
 -- 
 Brett Lymn
 
 



Re: [squid-users] Single slow site

2011-09-12 Thread Michael Hendrie

On 12/09/2011, at 12:44 PM, John Kenyon wrote:

 I had the exact same problem with with 3.1.10.  In my case it was an IPv6
 problem so I compiled squid with --disable-ipv6 as I didn't need it.  There 
 are
 a number of other ways to overcome the problem if you look through the
 mail archives (http://www.squid-cache.org/mail-archive/squid-
 users/201101/0344.html) or google.
 
 Hi Michael,
 
 I tried disabling ipv6 - no luck! Still getting 30-60 second wait to load 
 this page.
 It *should* take 2-3 seconds to get the initial page up... how long does it 
 take for you?
 https://www.my.commbank.com.au/netbank/Logon/Logon.aspx
 
 Cheers,
 
 JLK

The page now loads for me the same as any other, no excessive delays.





Re: [squid-users] Single slow site

2011-09-12 Thread Michael Hendrie

On 09/09/2011, at 4:37 PM, Amos Jeffries wrote:

 On 09/09/11 18:15, Michael Hendrie wrote:
 
 On 09/09/2011, at 12:34 PM, John Kenyon wrote:
 
 Hi All,
 
 I am experiencing a slow down on one particular site:
 https://www.my.commbank.com.au/netbank/Logon/Logon.aspx
 
 I can access this web site fine however it takes approx. 30 seconds
 to load, and if I bypass squid it takes 1 second.
 
 Currently running version 3.1.15, can someone point me in the right
 direct to further troubleshoot this one?
 
 Cheers,
 
 JLK
 
 I had the exact same problem with with 3.1.10.  In my case it was an
 IPv6 problem so I compiled squid with --disable-ipv6 as I didn't need
 it.  There are a number of other ways to overcome the problem if you
 look through the mail archives
 (http://www.squid-cache.org/mail-archive/squid-users/201101/0344.html)
 or google.
 
 
 Well, considering this is .AU *do* need it, and soonish.
 
 Michael;
 If disabling IPv6 entirely solves your problem, then the problem is in the 
 IPv6 setup. When its one particular site like this its probably at or close 
 to their end. Hanging/Pausing connections could be:
 - DNS lag from resolvers failing to respond the same for A and AAAA,
 - ICMP loss from ISP who still think its safe to drop them, or tunnels with 
 too-big MTU configuration.
 - PMTUD failures from lost ICMP messages.

I understand and wasn't pointing the finger at squid as being the cause of the 
problem, simply offering a place to start looking based on my experience with 
this same site.

In my environment it was much easier to recompile squid with --disable-ipv6, as 
there is no need for it at this point in time, and a lot quicker than tracking 
down where else in the network (which is beyond my control) the problem is 
occurring.

 That said, I checked from here across the ditch and its seems to be an 
 IPv4-only site. So none of that applies.
 
 
 John;
 being https:// Squids only involvement is limited to being told an IP/domain 
 to connect to and start forwarding packets there.
 I'm more inclined to suspect the bank is doing some extra validation in the 
 background when it detects the end user is not at the IP the request is 
 coming from.
 
 Amos
 -- 
 Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.15
 Beta testers wanted for 3.2.0.11





Re: [squid-users] Single slow site

2011-09-09 Thread Michael Hendrie

On 09/09/2011, at 12:34 PM, John Kenyon wrote:

 Hi All,
 
 I am experiencing a slow down on one particular site: 
 https://www.my.commbank.com.au/netbank/Logon/Logon.aspx
 
 I can access this web site fine however it takes approx. 30 seconds to load, 
 and if I bypass squid it takes 1 second.
 
 Currently running version 3.1.15, can someone point me in the right direct to 
 further troubleshoot this one?
 
 Cheers,
 
 JLK

I had the exact same problem with 3.1.10.  In my case it was an IPv6 problem, 
so I compiled squid with --disable-ipv6 as I didn't need it.  There are 
a number of other ways to overcome the problem if you look through the mail 
archives (http://www.squid-cache.org/mail-archive/squid-users/201101/0344.html) 
or Google.





Re: [squid-users] WWW-Authenticate header

2011-06-14 Thread Michael Hendrie

On 15/06/2011, at 8:09 AM, Amos Jeffries wrote:

 On Wed, 15 Jun 2011 08:48:31 +1200, Mike Bordignon (GMI) wrote:
 On 14/06/2011 6:32 p.m., Amos Jeffries wrote:
 Not another one. Good luck.
 
 If you have any influence or contact with the devs of that app please help 
 educate them of the safety issues involved with sending users internal 
 machine logins out over the global Internet. And HTTPS is no longer a 
 guarantee of protection.
 
 
 
 I do have access to the devs, but access won't be over the Internet -
 it'll be over a LAN. No problem there.
 
 replies with a WWW-Authenticate header. Squid doesn't appear to be
 passing through the Authentication headers to the browser.
 
 Indicating that Squid has detected the TCP links involved do not support 
 that type of auth.
 
 I've since used Wireshark and it appears I am receiving
 WWW-Authenticate headers. Somewhat confused now.
 
 Welcome to the party.
 
 
 Could be the security levels don't match between the WebApp server and the 
 workstation. NTLM has a layering system where the server advertises its 
 preferred security level, and the workstation agrees or does not respond. 
 There are five levels, some of which indicate willingness to accept lower 
 security, some restrict only to that level or higher.
 
 This has the best explain I've seen so far. Though it does not mention where 
 Negotiate/Kerberos fits into the layers.
 http://technet.microsoft.com/en-us/magazine/2006.08.securitywatch.aspx
 
 
 
 
 pipeline_prefetch is one feature which NTLM auth will break. Make sure that 
 is turned OFF manually.
 
 HTTP/1.0 persistent connections is another. Make sure 
 client_persistent_connections is turned ON manually in 3.1 series. Make 
 sure that server_persistent_connections is REMOVED from your config in 3.1 
 series, and manually turned ON in 3.0 and earlier.
 
 
 After that its cross fingers and hope. If you find anything strange still 
 going on, please mention it.
 
 When you encounter a problem the first thing asked will be to verify it on 
 the latest release. It speeds up the fix a bit if that is where its found.
 
 Thanks, I will keep that in mind. I've made the other config changes
 you suggest but still I get prompted for a password by my browser, I
 enter the correct password and again I get the prompt (via Firefox).
 IE is working, however?!
 
 Which indicates the credentials are fine as is the proxy part of the 
 transaction. Firefox appears not have security access to the OS properly to 
 do the background stuff required. 2/3 of NTLM and related protocols is done 
 in background actions.

If it's working in IE then it's probably one of Firefox's NTLM settings.  If you 
enter about:config in the address bar of Firefox and then filter for ntlm you 
will see what options are available.

More than likely the network.automatic-ntlm-auth.trusted-uris option needs the 
address of the app server listed.
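
As a purely illustrative example (the hostname is made up), the pref would end 
up looking like:

  network.automatic-ntlm-auth.trusted-uris = http://appserver.internal.example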

 
 Amos





Re: [squid-users] Access log not using logformat config line.

2011-05-04 Thread Michael Hendrie

On 05/05/2011, at 9:06 AM, Farokh Irani wrote:

 I don't have any specific access_log config line, but that's not the issue. 
 The access log file is being created but the entries aren't in the format 
 I've specified.
 

That is the cause of your issue.  If there is no access_log configuration 
specified, squid will use the default 
(http://www.squid-cache.org/Doc/config/access_log/).  You need to tell squid to 
use your logging format, as Amos described below.
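
For example, something along these lines, using the logformat name from your 
config (the log path is just an example):

  access_log /var/log/squid/access.log ourlogformat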

 Thanks.
 
 Amos Jeffries wrote:
 On Wed, 04 May 2011 13:34:01 -0400, Farokh Irani wrote:
 I've got the following entry in my squid.conf file:
 
 logformat ourlogformat %tl %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt
 
 But the access.log entries look like this:
 1304530209.280 2765 x.x.x.x TCP_MISS/200 306 POST https://x? -
 DIRECT/x.x.x.x text/html
 
 It doesn't seem to be using the logformat I specified, and I'm not
 sure why.
 
 Any ideas?
 
 Did you use it to format that output file?
 access_log /path/to/file ourformat
 
 http://www.squid-cache.org/Doc/config/access_log/
 
 Amos
 
 -- 
 Farokh Irani
 far...@itouchpoint.com
 Skype: farokhitp
 Phone: 914-262-1594
 





Re: [squid-users] limit squid memory ram use - squid becomes slow when ram full

2011-04-11 Thread Michael Hendrie
On 11/04/2011, at 9:22 PM, rpere...@lavabit.com wrote:

 Hi
 
 How I can limit the ram memory use in my squid/tproxy box ?
 
 I have a fast server with 16Gb ram. The average bandwidth is about 60-70
 Mb/s.
 
 The bridge works well but when the cache and memory becomes full its goes
 slow and becomes unusable.
 
 The cache is 10G size.
 
 I see that a few hours to be working and have used the 16 GB of RAM 
 starts to run slow.
 
 Any help ?. I have configured some memory optimization options but looks
 don't help for me.
 
 Thanks in advance
 
 roberto
 
 This is my config:
 
 -
 
 cache_mem 10 MB
 memory_pools off
 cache_swap_low 94
 cache_swap_high 95
 
 #
 # Recommended minimum configuration:
 #
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl localhost src ::1/128
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
 acl to_localhost dst ::1/128
 
 # Example rule allowing access from your local networks.
 # Adapt to list your (internal) IP networks from where browsing
 # should be allowed
 acl localnet src 10.0.0.0/8   # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12# RFC1918 possible internal network
 acl localnet src 192.168.0.0/16   # RFC1918 possible internal network
 acl localnet src fc00::/7   # RFC 4193 local private network range
 acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
 
 acl net-g1 src 200.117.xxx.xxx/24
 acl net-g2 src 200.xxx.xxx.xxx/24
 acl net-g3 src 190.xxx.xxx.xxx/24
 
 acl SSL_ports port 443
 acl Safe_ports port 80# http
 acl Safe_ports port 21# ftp
 acl Safe_ports port 443   # https
 acl Safe_ports port 70# gopher
 acl Safe_ports port 210   # wais
 acl Safe_ports port 1025-65535# unregistered ports
 acl Safe_ports port 280   # http-mgmt
 acl Safe_ports port 488   # gss-http
 acl Safe_ports port 591   # filemaker
 acl Safe_ports port 777   # multiling http
 acl CONNECT method CONNECT
 
 #
 # Recommended minimum Access Permission configuration:
 #
 # Only allow cachemgr access from localhost
 http_access allow manager localhost
 http_access deny manager
 
 # Deny requests to certain unsafe ports
 http_access deny !Safe_ports
 
 # Deny CONNECT to other than secure SSL ports
 http_access deny CONNECT !SSL_ports
 
 # We strongly recommend the following be uncommented to protect innocent
 # web applications running on the proxy server who think the only
 # one who can access services on localhost is a local user
 #http_access deny to_localhost
 
 http_access allow net-g1
 http_access allow net-g2
 http_access allow net-g3
 
 #
 # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
 #
 
 # Example rule allowing access from your local networks.
 # Adapt localnet in the ACL section to list your (internal) IP networks
 # from where browsing should be allowed
 http_access allow localnet
 http_access allow localhost
 
 # And finally deny all other access to this proxy
 http_access deny all
 
 # Squid normally listens to port 3128
 http_port 3128
 http_port 3129 tproxy
 
 
 # We recommend you to use at least the following line.
 hierarchy_stoplist cgi-bin ?
 
 # Uncomment and adjust the following to add a disk cache directory.
 cache_dir ufs /var/spool/squid 1 64 256

Given that it runs fine initially and performance begins to degrade after a 
number of hours, I would start looking at I/O statistics when the disk cache is 
full.  aufs or diskd provide better performance, so I would suggest using one of 
these methods for your cache_dir and seeing if you get an increase in performance.
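
A minimal sketch of the change, keeping your existing path (the size/L1/L2 
values are examples only, sized roughly for the 10G cache you mentioned):

  # was: cache_dir ufs /var/spool/squid ...
  cache_dir aufs /var/spool/squid 10000 16 256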

 
 #access_log /var/log/squid/access.log squid
 access_log none
 
 cache_log /var/log/squid/cache.log
 
 
 
 # Leave coredumps in the first cache dir
 coredump_dir /var/spool/squid
 
 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp: 144020% 10080
 refresh_pattern ^gopher:  14400%  1440
 refresh_pattern -i (/cgi-bin/|\?) 0   0%  0
 refresh_pattern . 0   20% 4320
 
 
 



Re: [squid-users] refresh_pattern based on acl

2011-03-02 Thread Michael Hendrie

On 03/03/2011, at 12:41 AM, Leon Volfson wrote:

 Hi,
 
 I have a few squid servers in front of web servers (accelerator setup).
 Since the website is very dynamic, I had to turn off the client refresh 
 action:
 
 refresh_pattern -i ^http://www.website.com  14400   80% 43200 
 ignore-reload
 
 but then I got the problem: some files that have a 7 days caching time - I 
 have no way of refreshing them if I modify the file on the webserver.
 
 To make it clearer, I have some .js or .css file which has a max-age of 7 
 days. It's cached by squid and everything's great.
 After a day I modify the file, but the squid keeps serving the old version.
 
 What are the possible solutions in these situations (besides shortening the 
 max-age)?
 
 Is there any way to have another refresh_pattern rule based on my local IP 
 (acl)?
 
 
 Thanks,
 
 Lenny.
 

Not sure about refresh_pattern based on an ACL, but I'm sure someone will chip 
in if they know the answer.

You could always purge the object from the cache so that the next time it is 
requested a fresh copy is retrieved from your web server.

http://wiki.squid-cache.org/SquidFaq/OperatingSquid?highlight=%28purge%29#How_can_I_purge_an_object_from_my_cache.3F
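
Roughly, that looks like the following; the squid.conf lines need to sit above 
your general http_access rules, and the URL is only an example:

  # squid.conf - allow purges from the local machine only
  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access deny PURGE

  # then, from the squid box:
  squidclient -m PURGE http://www.website.com/static/site.css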



Re: [squid-users] cache_peer

2011-02-11 Thread Michael Hendrie

On 11/02/2011, at 8:21 PM, Tim Bateson wrote:

 Hi,
 I am using squid 2.7 and would like to know if it possible to map 2
 acl groups to a particular cache_peer.
 Our acls are mapped using the extern_acl and acl as follows.
 external_acl_type groupn children=10 ttl=200 %LOGIN
 /usr/lib/squid/wbinfo_group.pl
 acl unrestrictedusers external groupn grp1
 acl restrictedusers external groupn grp2
 

Check out the cache_peer_access tag.  You can use your ACL elements to 
allow/deny access to certain cache_peers 
http://www.squid-cache.org/Doc/config/cache_peer_access/
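
A minimal sketch using the ACL names from your config (the parent hostnames and 
the name= labels are placeholders for your own peers):

  cache_peer parent1.example.com parent 8080 0 no-query name=peer_unrestricted
  cache_peer parent2.example.com parent 8080 0 no-query name=peer_restricted

  cache_peer_access peer_unrestricted allow unrestrictedusers
  cache_peer_access peer_unrestricted deny all
  cache_peer_access peer_restricted allow restrictedusers
  cache_peer_access peer_restricted deny all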

 Can anyone confirm what I want is possible. If not I will have to run
 2 squid servers with each set of users getting mapped to their own
 cache_peer parent.
 Thanks,
 Tim



Re: [squid-users] Connection Pinning in 3.1.x

2011-02-01 Thread Michael Hendrie

On 01/02/2011, at 8:39 AM, Amos Jeffries wrote:

 On Mon, 31 Jan 2011 16:20:45 +1030, Michael Hendrie
 mich...@hendrie.id.au
 wrote:
 Hello List,
 
 I need to use a version with connection pinning and was hoping to use
 3.1.10 but I've run into a problem using a cache_peer that requires NTLM
 authentication.  In my tests I'm able to get 3 authenticated requests
 through the parent (access.log on parent shows they have been
 authenticated) before the client starts to receive a pop-up to enter
 credentials.  In the test, child and parent are on the same LAN segment
 so
 there is nothing in between doing any port translations, etc.
 
 The relevant parts of my config:
 
 cache_peer 172.16.50.45 parent 8080 0 no-query proxy-only default
 login=PASS
 never_direct allow all
 persistent_connection_after_error on
 
 I have also tried adding connection-auth=on to both the cache_peer and
 http_port directives but this hasn't helped the situation.
 
 Testing with squid-2.7STABLE9 doesn't show the above issue, connection
 pinning seems to work perfectly to the parent proxy.  I have also tried
 3.1.9 and 3.1.8 in case it was something that was unexpectedly
 introduced
 in the latest version but they fail also.
 
 I should point out that in my tests using 3.1.x talking to an origin
 server requiring NTLM works perfectly, only to a cache_peer fails.
 
 Does anyone have any ideas as to why this is failing, or a 3.1.x talking
 to an NTLM parent and if so could you please share your exact 3.1.x
 version
 and relevant config.
 
 Thanks
 Mick
 
 3.1.10 has one known situation. When the server replies with
 unknown-length or chunked replies squid has no choice but to close the TCP
 link at the end of the object transfer. Breaking NTLM pinning. This is very
 common with dynamic content websites.
 
 Other than that situation it should be working.
 
 You can get a debug trace of the keep-alive actions with debug_options
 33,2 88,5 search for clientReplyStatus: and clientBuildReplyHeader:
 
So I tested with these debug options and, while there was a lot of data, nothing 
seemed to jump out of the log at me, so it was Wireshark time.  What I see for 
the failed requests is that 3.1.x is not correctly setting the Connection or 
Proxy-Connection header on the request carrying the type 1 message 
(NTLMSSP_NEGOTIATE), which is needed for NTLM connection pinning to function. 
Examples are as follows:

Client Request -> Child (squid-3.1.10)

GET http://www.google.com.au/images/cb_r.gif HTTP/1.1
Host: www.google.com.au
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) 
Gecko/20101203 Firefox/3.6.13
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive
Referer: http://www.google.com.au/
Proxy-Authorization: NTLM TlRMTVNTUAABB4IIAAA=

Child Request (squid-3.1.10) -> Parent (squid-3.0.STABLE19)

GET http://www.google.com.au/images/cb_r.gif HTTP/1.1
Host: www.google.com.au
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) 
Gecko/20101203 Firefox/3.6.13
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Referer: http://www.google.com.au/
Proxy-Authorization: NTLM TlRMTVNTUAABB4IIAAA=
Via: 1.1 3110-child (squid/3.1.10)
X-Forwarded-For: unknown
Cache-Control: max-age=259200

On the return, 3.1.10 is also not setting Connection/Proxy-Connection: close as 
it should:

Parent Response (squid-3.0.STABLE19) -> Child (squid-3.1.10)  (I believe this 
407 contains only the BASIC offering now because the request didn't have 
keep-alive set; the first time the request got 407'd it contained both NTLM and 
BASIC, hence the client tried NTLM)

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.0.STABLE19
Mime-Version: 1.0
Date: Tue, 01 Feb 2011 13:36:30 GMT
Content-Type: text/html
Content-Length: 2517
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: Basic realm=BASIC
X-Cache: MISS from parent
Via: 1.0 parent (squid/3.0.STABLE19)
Proxy-Connection: close


Child (squid-3.1.10) Response -> Client

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.0.STABLE19
Mime-Version: 1.0
Date: Tue, 01 Feb 2011 13:36:30 GMT
Content-Type: text/html
Content-Length: 2517
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: Basic realm=BASIC
X-Cache: MISS from parent
X-Cache: MISS from 3110-child
Via: 1.0 parent (squid/3.0.STABLE19), 1.0 3110-child (squid/3.1.10)
Connection: keep-alive

Any idea why this would be occurring?



 Amos
 



Re: [squid-users] Connection Pinning in 3.1.x

2011-01-31 Thread Michael Hendrie
On 01/02/2011, at 12:50 AM, Chad Naugle wrote:

 Is the cache_peer parent, also 3.1.10 or another type of proxy?
 
This is running in a test environment so I have tried a few different parents 
but the result is always the same.  I have tried squid-3.0.STABLE19, 
squid-3.1.10 and ISA2006 as the parents.

 Michael Hendrie mich...@hendrie.id.au 1/31/2011 12:50 AM 
 Hello List,
 
 I need to use a version with connection pinning and was hoping to use
 3.1.10 but I've run into a problem using a cache_peer that requires NTLM
 authentication.  In my tests I'm able to get 3 authenticated requests
 through the parent (access.log on parent shows they have been
 authenticated) before the client starts to receive a pop-up to enter
 credentials.  In the test, child and parent are on the same LAN segment
 so there is nothing in between doing any port translations, etc.
 
 The relevant parts of my config:
 
 cache_peer 172.16.50.45 parent 8080 0 no-query proxy-only default
 login=PASS
 never_direct allow all
 persistent_connection_after_error on
 
 I have also tried adding connection-auth=on to both the cache_peer
 and http_port directives but this hasn't helped the situation.
 
 Testing with squid-2.7STABLE9 doesn't show the above issue, connection
 pinning seems to work perfectly to the parent proxy.  I have also tried
 3.1.9 and 3.1.8 in case it was something that was unexpectedly
 introduced in the latest version but they fail also.
 
 I should point out that in my tests using 3.1.x talking to an origin
 server requiring NTLM works perfectly, only to a cache_peer fails.
 
 Does anyone have any ideas as to why this is failing, or a 3.1.x
 talking to an NTLM parent and if so could you please share your exact
 3.1.x version and relevant config.
 
 Thanks
 Mick
 
 
 
 
 



[squid-users] Connection Pinning in 3.1.x

2011-01-30 Thread Michael Hendrie
Hello List,

I need to use a version with connection pinning and was hoping to use 3.1.10 
but I've run into a problem using a cache_peer that requires NTLM 
authentication.  In my tests I'm able to get 3 authenticated requests through 
the parent (access.log on parent shows they have been authenticated) before the 
client starts to receive a pop-up to enter credentials.  In the test, child and 
parent are on the same LAN segment so there is nothing in between doing any 
port translations, etc.

The relevant parts of my config:

cache_peer 172.16.50.45 parent 8080 0 no-query proxy-only default login=PASS
never_direct allow all
persistent_connection_after_error on

I have also tried adding connection-auth=on to both the cache_peer and 
http_port directives but this hasn't helped the situation.

Testing with squid-2.7STABLE9 doesn't show the above issue, connection pinning 
seems to work perfectly to the parent proxy.  I have also tried 3.1.9 and 3.1.8 
in case it was something that was unexpectedly introduced in the latest version 
but they fail also.

I should point out that in my tests using 3.1.x talking to an origin server 
requiring NTLM works perfectly, only to a cache_peer fails.

Does anyone have any ideas as to why this is failing?  Or is anyone running a 
3.1.x talking to an NTLM parent?  If so, could you please share your exact 
3.1.x version and relevant config.

Thanks
Mick





Re: [squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-24 Thread Michael Hendrie

On 24/01/2011, at 8:17 PM, Saiful Alam wrote:

 
 OK I have kept your suggestion in my mind, but right now I'm not in a 
 position to buy two HDD's. May be I can afford to buy 15 days later. For the 
 time being, my prime problem is the loading of two major sites from where my 
 users download mp3. Those are
 
 www.music.com.bd   and   www.djmaza.com
 

Seems to load fine for me but that doesn't mean your slow = my fine.

I had issues with some random sites being slow with 3.1.10 and tracked it 
down to squid trying to get AAAA records for the problem sites (or objects 
pulled from other sites).  Not sure why this was occurring as IPv6 is not 
enabled on the OS.  I didn't investigate too much and just recompiled with 
--disable-ipv6 as it wasn't needed.  Doing so resolved my slow-sites issue.


 Don't know the reason, but music.com.bd loads very slow. And in firebug i see 
 that the problem persists while loading 3 ads from ads.clicksor.com and some 
 facebook widgets. Can you please check and try to load these two domains if 
 you're running a Squid 3.1.X version and see if everything is alright from 
 your end.
 
 Regards,
 Saiful
 
 
 Date: Mon, 24 Jan 2011 11:41:04 +0200
 From: elie...@ec.hadorhabaac.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Some pages loading very slow in 3.1.10 Stable
 
 are you kdding me?
 
 
 500 clients?
 
 if most of the clients are just doing almost nothing just downloading
 one page of 2 MB..
 
 how much is your HD I\O ?? in this case?(in speed MBps) ?
 
 so first.. change the UFS to AUFS you dont need to do anything to the
 cache it self cause it the same system just with Async options on.
 and just buy two more of these 500GB drives and put all three of them in
 raid 0 or 5.
 or first try 0 on two and then add another one to then if it goes smoothly.
 this will give you a lot more speed.
 i dont now the cause but it looks like or connectivity/dns or I\O problem.
 
 i'v tried to use iptraf but ifstat just gives you the numbers of every
 Interface you have in aginst out traffic meter in a simple way.
 
 
 On 24/01/2011 11:02, Saiful Alam wrote:
 
 
 TRIED AUFS, but didn't get better performance, while
 
 researching in the web, I read everywhere that AUFS is better than UFS
 in terms of performance,but I don't know why I get bad performance with
 this.
 Processor is Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz
 Motherboard is Intel® Desktop Board DH55TC
 
 
 
 RAM is 2x4GB=8GB DDR3
 DISK 1 = 250GB Hitachi SATA HDP72502 {[USED ONLY FOR UBUNTU SYSTEM BOOT]}
 DISK 2 = 500GB Hitachi SATA HDS72105 {[USED FOR CACHE DIRECTORIES ONLY, 
 MOUNTED ON /MEDIA/CACHE FILESYSTEM EXT4]}
 
 Normally my Disk I/O never goes more than 15% and I would say the average 
 is about 3-4%.
 For bandwidth monitoring I usually see iptraf which is also good, but 
 surely I'll try ifstat next time.
 At peak hours (which is between 10pm - 2am GMT +0600), we have around 500 
 clients connected (approx)
 
 I have tried apt-get install squid3 (which is the default 3.1.6 in apt 
 repository) and found the performance of 3.1.10 (my custom configuration) 
 is better than the 3.1.6.
 
 Regards,
 Saiful
 
 
 
 
 Date: Mon, 24 Jan 2011 10:36:14 +0200
 From: elie...@ec.hadorhabaac.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Some pages loading very slow in 3.1.10 Stable
 
 It's a small peace and most of the answers are not really suppose to there.
 
 
 the first thing is that your cache is not just cache it's a store house..
 
 it;'s not bad but you can try to change the ufs to aufs..
 
 can get better performance.
 
 what are the specification of the machine?
 
 core i7 ? ram? what disk? array of disks?
 
 did you tried to ping from the machine or WGET?
 
 if it's debian you can install ifstat that can give you real-rime
 bandwidth usage and it might be cause of something else that is not
 related to squid..
 
 if you have even like 25 clients downloading obsessively mp3 files for
 like 10 or more minutes in this time your I\O of your hard drive will 
 rise..
 
 also..
 
 you can monitor the access to the squid folders onfly while you have the
 problem and to understand what is causing it..
 
 if it's CPU load or DISK I\O load .. or other stuff.
 
 by the way.. can you try the ubuntu squid3 stable?
 
 im using squid3 stable on ubuntu 10.04 on an Intel Atom D450 machine
 with cache of 40GB and it's taking the load very nicely.
 
 
 
 
 On 24/01/2011 09:12, Saiful Alam wrote:
 
 
 Some results of TCPDUMP in -vv mode.
 
 13:12:04.191180 IP (tos 0x0, ttl 127, id 2750, offset 0, flags [DF], 
 proto TCP (6), length 40)
 172.16.80.2.1155 77.67.29.42.www: Flags [.], cksum 0x6de4 (correct), seq 
 1127903567, ack 4192021369, win 64700, length 0
 13:12:04.192822 IP (tos 0x0, ttl 64, id 4692, offset 0, flags [DF], proto 
 TCP (6), length 823)
 www-12-02-snc5.facebook.com.www 10.16.63.123.3714: Flags [P.], 

Re: [squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-24 Thread Michael Hendrie

On 24/01/2011, at 11:03 PM, Amos Jeffries wrote:

 On 24/01/11 23:09, Michael Hendrie wrote:
 
 On 24/01/2011, at 8:17 PM, Saiful Alam wrote:
 

snip

 
 I had issues with some random sites being slow with 3.1.10 and
 tracked it down to squid trying to get AAAA records for the problem
 sites (or objects pulled from other sites).  Not sure why this was
 occurring as IPv6 is not enabled on the OS.  I didn't investigate too
 much and just recompiled with --disable-ipv6 as it wasn't needed.
 Doing so resolved my slow sites issue.
 
 
 Seems like you actually had IPv6 partially enabled in the OS, and maybe a 
 break in DNS or MTU.
 
 When Squid 3.1.10 starts up it probes the OS network capabilities to see
 if IPv6 connections can be made. When they are possible it enables
 things like AAAA to use those connections.  --disable-ipv6 merely sets the 
 result of that test to always be false.
 
 With a reasonably fast DNS response time (under a half second) AAAA lookups 
 will not be noticeable.
 
 With working MTU there will be almost zero lag from opening and attempting 
 IPv6 connections on an IPv4-only network.
 

Sorry to hijack the thread, but I thought I'd post my findings on this as they 
may be useful to other users.  Thanks to Amos for the comments; investigation 
shows that simply telling RHEL/CentOS (5.5) not to enable IPv6 with 
NETWORKING_IPV6=no in /etc/sysconfig/network is not enough to disable IPv6.  
The article at http://www.cyberciti.biz/faq/redhat-centos-disable-ipv6-networking/ 
covers what is required.  After doing this and recompiling squid, this time 
without the --disable-ipv6 option, squid no longer issues AAAA lookup requests.
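
From memory, the steps boil down to something like the following for 
RHEL/CentOS 5.x (check the linked article for the authoritative version):

  # /etc/sysconfig/network
  NETWORKING_IPV6=no

  # /etc/modprobe.conf - stop the ipv6 kernel module loading at all
  install ipv6 /bin/true

  # then reboot (or unload the module) for the change to take effect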

snip


 Amos
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4



Re: [squid-users] TCP_MISS TCP_HIT with Squid-SNMP or squidclient

2010-12-22 Thread Michael Hendrie

On 23/12/2010, at 12:03 AM, Amos Jeffries wrote:


On 22/12/10 18:39, Tom Tux wrote:

Hi

Is there a squid-snmp-oid or a squidclient-option to get the  
following

values (since startup of squid or since creation of cache_dirs)?

- tcp_miss
- tcp_hit
- tcp_mem_hit

If not, how can I determine these values?
Thanks a lot.
Tom


http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs

Under Per-Protocol Statistics OIDs:

MISS = cacheProtoClientHttpRequests - cacheHttpHits
HIT = cacheHttpHits

The individual cache storage locations are not accounted separately.  
So MEM_HIT are not available.


The same details can be found under squidclient mgr:utilization with  
various time-brackets of accumulation.


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.9
 Beta testers wanted for 3.2.0.3


I recently created the patch below to allow HTTP HIT KBs to be retrieved via 
SNMP.  This is for 3.0.STABLE19 and I'm not sure if it's transferable to other 
versions.  I'm not an experienced coder, so if anyone has any feedback on this 
(e.g. if there are any problems with the patch) I'd like to hear it:


diff -u -r squid-3.0.STABLE19/include/cache_snmp.h squid-3.0.STABLE19-snmp_http_kit_kb/include/cache_snmp.h
--- squid-3.0.STABLE19/include/cache_snmp.h	2009-09-06 20:59:34.0 +0930
+++ squid-3.0.STABLE19-snmp_http_kit_kb/include/cache_snmp.h	2010-12-21 17:23:27.0 +1030
@@ -239,6 +239,7 @@
     PERF_PROTOSTAT_AGGR_KBYTES_OUT,
     PERF_PROTOSTAT_AGGR_CURSWAP,
     PERF_PROTOSTAT_AGGR_CLIENTS,
+    PERF_PROTOSTAT_AGGR_HTTP_HIT_KBYTES_OUT,
     PERF_PROTOSTAT_AGGR_END
 };

diff -u -r squid-3.0.STABLE19/src/mib.txt squid-3.0.STABLE19-snmp_http_kit_kb/src/mib.txt
--- squid-3.0.STABLE19/src/mib.txt	2009-09-06 20:59:38.0 +0930
+++ squid-3.0.STABLE19-snmp_http_kit_kb/src/mib.txt	2010-12-21 19:54:57.0 +1030
@@ -405,6 +405,13 @@
             "Number of clients accessing cache"
     ::= { cacheProtoAggregateStats 15 }

+    cacheHttpHitKb OBJECT-TYPE
+        SYNTAX Counter32
+        MAX-ACCESS read-only
+        STATUS current
+        DESCRIPTION
+            "HTTP KB's served from cache"
+    ::= { cacheProtoAggregateStats 16 }
 --
 -- cacheProtoMedianSvcStats group
 --
diff -u -r squid-3.0.STABLE19/src/snmp_agent.cc squid-3.0.STABLE19-snmp_http_kit_kb/src/snmp_agent.cc
--- squid-3.0.STABLE19/src/snmp_agent.cc	2009-09-06 20:59:38.0 +0930
+++ squid-3.0.STABLE19-snmp_http_kit_kb/src/snmp_agent.cc	2010-12-21 17:24:44.0 +1030
@@ -504,6 +504,12 @@
                                       SMI_GAUGE32);
         break;

+    case PERF_PROTOSTAT_AGGR_HTTP_HIT_KBYTES_OUT:
+        Answer = snmp_var_new_integer(Var->name, Var->name_length,
+                                      (snint) statCounter.client_http.hit_kbytes_out.kb,
+                                      SMI_COUNTER32);
+        break;
+
     default:
         *ErrP = SNMP_ERR_NOSUCHNAME;
         break;
diff -u -r squid-3.0.STABLE19/src/snmp_core.cc squid-3.0.STABLE19-snmp_http_kit_kb/src/snmp_core.cc
--- squid-3.0.STABLE19/src/snmp_core.cc	2009-09-06 20:59:38.0 +0930
+++ squid-3.0.STABLE19-snmp_http_kit_kb/src/snmp_core.cc	2010-12-21 16:32:52.0 +1030
@@ -179,7 +179,7 @@
                 snmpAddNode(snmpCreateOid(LEN_SQ_PRF + 1, SQ_PRF, PERF_PROTO),
                             LEN_SQ_PRF + 1, NULL, NULL, 2,
                             snmpAddNode(snmpCreateOid(LEN_SQ_PRF + 2, SQ_PRF, PERF_PROTO, 1),
-                                        LEN_SQ_PRF + 2, NULL, NULL, 15,
+                                        LEN_SQ_PRF + 2, NULL, NULL, 16,
                                         snmpAddNode(snmpCreateOid(LEN_SQ_PRF + 3, SQ_PRF, PERF_PROTO, 1, 1),
                                                     LEN_SQ_PRF + 3, snmp_prfProtoFn, static_Inst, 0),
                                         snmpAddNode(snmpCreateOid(LEN_SQ_PRF + 3, SQ_PRF,
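
With the patch applied, the new counter sits alongside the existing aggregate 
stats, so a rough query (assuming the standard Squid OID layout under 
enterprises.3495, squid's SNMP agent on port 3401 and a community of public) 
would be:

  snmpget -v2c -c public localhost:3401 .1.3.6.1.4.1.3495.1.3.2.1.16.0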

Re: [squid-users] Object Hit/Byte Hit accounting with Multiple Instances

2010-12-15 Thread Michael Hendrie

On 16/12/2010, at 12:44 PM, Amos Jeffries wrote:

On 15/12/10 14:38, Michael Hendrie wrote:

Hello List,

I have server running 3 instances of squid-3.0.STABLE19 using a
configuration similar to that documented at
http://wiki.squid-cache.org/MultipleInstances. Each instance has all
other instance configured as siblings using the proxy-only  
directive

to allow sharing of cache without duplicating objects. This setup is
working very well and has increased server performance by over 50%.

I'm now trying to get an accurate indication of byte savings I'm
achieving with this configuration however I'm not sure that the
calculations I'm using are giving the correct results. Because each
instance maintains a separate cache_dir this seems to be a little
difficult to calculate. When instance 1 records a request as a MISS  
it
may in fact be a HIT (from an entire system point of view) if the  
object

is retrieved from the cache of instance 2 or 3.

Using a combination of squidclient mgr:counters and SNMP, I grab
counter values from each instance, tally and use the following  
formula

to calculate the byte hit ratio:

(mgr:counters:client_http.hit_kbytes_out +
snmp:cacheClientHTTPHitKb.sibling_addresses) /
(mgr:counters:client_http.kbytes_out -
snmp:cacheClientHTTPHitKb.sibling_addresses) * 100 = % cache byte  
hit ratio


Using this formula, I always seem to get inconsistencies between what
squid reports and what my benchmarking tool reports (web- 
polygraph). In
the few cases I've checked so far, squid is always reporting a 4-5%  
less

byte hit than what web-polygraph reports.


That sounds about the size of header overheads to me.
Give 3.2 workers a try out now and see if that is usable. The stats  
calculations are fixed there for multiple workers.




Unfortunately I must use this version (for the moment) for reasons beyond my 
control.  Just to clarify:


1).  Are you saying that headers aren't counted in any of the hit_kb_out 
counters, so I would still see the discrepancies in figures between 
web-polygraph and a single-instance squid (I've never had a need to check 
before now)?


2).  Excluding the fact that headers may not be counted, does the formula I'm 
using sound like the correct way to calculate the hit % with a multi-instance 
setup?


3).  From the 3.2 wiki page -   http://wiki.squid-cache.org/Features/SmpScale
	Currently, Squid workers do not share and do not synchronize other  
resources or services, including:
	• object caches (memory and disk) -- there is an active project to  
allow such sharing;


Can 3.2 workers be configured with other workers as siblings to make use of 
their cache?





[squid-users] Object Hit/Byte Hit accounting with Multiple Instances

2010-12-14 Thread Michael Hendrie

Hello List,

I have a server running 3 instances of squid-3.0.STABLE19 using a configuration 
similar to that documented at http://wiki.squid-cache.org/MultipleInstances.  
Each instance has all other instances configured as siblings using the 
proxy-only directive to allow sharing of cache without duplicating objects.  
This setup is working very well and has increased server performance by over 
50%.


I'm now trying to get an accurate indication of the byte savings I'm achieving 
with this configuration, however I'm not sure that the calculations I'm using 
are giving the correct results.  Because each instance maintains a separate 
cache_dir this seems to be a little difficult to calculate.  When instance 1 
records a request as a MISS it may in fact be a HIT (from an entire-system 
point of view) if the object is retrieved from the cache of instance 2 or 3.


Using a combination of squidclient mgr:counters and SNMP, I grab counter values 
from each instance, tally them and use the following formula to calculate the 
byte hit ratio:

(mgr:counters:client_http.hit_kbytes_out + snmp:cacheClientHTTPHitKb.sibling_addresses) / (mgr:counters:client_http.kbytes_out - snmp:cacheClientHTTPHitKb.sibling_addresses) * 100 = % cache byte hit ratio
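
As a worked example with made-up numbers: if instance 1 reports 1,000,000 hit 
KB served to its own clients, its siblings report another 200,000 KB served to 
it as sibling hits, and instance 1's total client_http.kbytes_out is 4,000,000 
KB, the formula gives (1,000,000 + 200,000) / (4,000,000 - 200,000) * 100, i.e. 
roughly a 31.6% byte hit ratio.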


Using this formula, I always seem to get inconsistencies between what squid 
reports and what my benchmarking tool (web-polygraph) reports.  In the few 
cases I've checked so far, squid is always reporting a 4-5% lower byte hit 
ratio than web-polygraph does.


Can anyone suggest a better formula to calculate byte hits from a  
multi-instance configuration?



Cheers

Mick