[squid-users] Blank page on first load
Hi all

I have a problem with some of my users getting blank pages when loading sites like Google and MSN. They open the site and get a blank page, but when they refresh it loads. These users mostly use IE11, but we have also seen it with browsers like Safari; I have to say that 98% of the time it is with IE10 and IE11. In my squid logs I can see the request going to the website. The client just gets a blank page until they reload it.

My setup is 3 servers running squid3-3.1.12-8.12.1 behind an F5 load balancer. From there I send all traffic to a ZScaler cache peer. In my testing I have bypassed the cache peer, but without any success.

Has anyone come across this problem before?

--
Kind Regards
Jasper
Re: [squid-users] Blank page on first load
On 7/04/2014 6:28 p.m., Jasper Van Der Westhuizen wrote:
> Hi all
>
> I have a problem with some of my users getting blank pages when loading
> sites like Google and MSN. They open the site and get a blank page, but
> when they refresh it loads. These users mostly use IE11, but we have
> also seen it with browsers like Safari; 98% of the time it is with IE10
> and IE11. In my squid logs I can see the request going to the website.
> The client just gets a blank page until they reload it.

Do you see anything coming back *from* the webserver? Is anything being delivered by Squid to the client?

> My setup is 3 servers running squid3-3.1.12-8.12.1 behind an F5 load
> balancer. From there I send all traffic to a ZScaler cache peer. In my
> testing I have bypassed the cache peer but without any success.
>
> Has anyone come across this problem before?

Many people, for many different reasons: browser bugs displaying content they do not understand (or sometimes not even using HTTP), TCP protocol issues in the Windows networking code (ECN and window scaling), NAT timeouts somewhere along the connection hops, ICMP filtering (ICMP is not optional), etc.

Amos
Re: [squid-users] Blank page on first load
> My setup is 3 servers running squid3-3.1.12-8.12.1 behind an F5 load
> balancer. From there I send all traffic to a ZScaler cache peer. In my
> testing I have bypassed the cache peer but without any success.
>
> Has anyone come across this problem before?

Hi Jasper,

Have you tried bypassing the F5s? They try to do a bunch of clever things, and this can mess with normal networking/caching.

Cheers,
Pieter
Re: [squid-users] Blank page on first load
>> In my squid logs I can see the request going to the website. The
>> client just gets a blank page until they reload it.
>
> Do you see anything coming back *from* the webserver? Is anything being
> delivered by Squid to the client?

Hi Amos

Yes, I do see traffic coming back from the server. What I found, though, was that when going to http://www.google.co.za or even http://www.google.com, it redirects to https://www.google.co.za or https://www.google.com. That then gives the user the blank page. When refreshing the page, it then loads properly.

Regards
Jasper
Re: [squid-users] Blank page on first load
On Mon, 2014-04-07 at 18:42 +1200, Pieter De Wit wrote:
>> My setup is 3 servers running squid3-3.1.12-8.12.1 behind an F5 load
>> balancer. From there I send all traffic to a ZScaler cache peer. In my
>> testing I have bypassed the cache peer but without any success.
>>
>> Has anyone come across this problem before?
>
> Hi Jasper,
>
> Have you tried bypassing the F5s? They try to do a bunch of clever
> things, and this can mess with normal networking/caching.
>
> Cheers,
> Pieter

Hi Pieter

I also thought the F5 could be the problem, but I have a QA squid server that I have been testing with, and this server does not sit behind a load balancer. Same results as with prod.

Regards
Jasper
Re: [squid-users] Blank page on first load
>> Do you see anything coming back *from* the webserver? Is anything
>> being delivered by Squid to the client?
>
> Hi Amos
>
> Yes, I do see traffic coming back from the server. What I found, though,
> was that when going to http://www.google.co.za or even
> http://www.google.com, it redirects to https://www.google.co.za or
> https://www.google.com. That then gives the user the blank page. When
> refreshing the page, it then loads properly.

Hi Amos

I have done more testing and found the following. It seems this problem is HTTP/1.1 related. In the IE11 Advanced settings (picture attached) there are 3 options under HTTP Settings: "Use HTTP 1.1", "Use HTTP 1.1 through proxy connections" and "Use SPDY/3". If we disable the first two, sites like Google load first time. After more searching on the web we found an article suggesting that SPDY/3 was problematic and that a patch is available:
http://angrytechnician.wordpress.com/2014/01/16/google-not-loading-first-time-in-ie11-via-a-web-proxy-on-windows-8-1-turn-off-spdy-support/

With the first two options enabled in IE and SPDY/3 disabled, Google loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling the HTTP 1.1 settings works. So to me it seems that HTTP/1.1 is the problem here (as well as the SPDY/3 problem). We run Squid 3.1.

Regards
Jasper

attachment: image001.png
Re: [squid-users] Blank page on first load
On 7/04/2014 9:09 p.m., Jasper Van Der Westhuizen wrote:
> Hi Amos
>
> I have done more testing and found the following. It seems this problem
> is HTTP/1.1 related. In the IE11 Advanced settings (picture attached)
> there are 3 options under HTTP Settings: "Use HTTP 1.1", "Use HTTP 1.1
> through proxy connections" and "Use SPDY/3". If we disable the first
> two, sites like Google load first time. After more searching on the web
> we found an article suggesting that SPDY/3 was problematic and that a
> patch is available.
> (http://angrytechnician.wordpress.com/2014/01/16/google-not-loading-first-time-in-ie11-via-a-web-proxy-on-windows-8-1-turn-off-spdy-support/)
>
> With the first two options enabled in IE and SPDY/3 disabled, Google
> loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling
> the HTTP 1.1 settings works. So to me it seems that HTTP/1.1 is the
> problem here (as well as the SPDY/3 problem). We run Squid 3.1.

Okay. Squid-3.1 is still mostly HTTP/1.0 software, and IE has problems using HTTP/1.1 to a 1.0 proxy. You could avoid that by upgrading Squid, preferably to the current supported release (3.4.4).

I have a client running many IE11 installations with their default settings behind a Squid-3.4 and not seeing problems.

Amos
Re: [squid-users] SSL bump not working for Android and IOS apps
The answer to what you seek depends mainly on:
- That you have installed the root CA certificate on the device.
- The certificate is bumpable.
- The app is not using an embedded certificate (public or per user).

Eliezer

On 04/06/2014 07:56 PM, Rajesh Srivastava wrote:
> Hi,
>
> As part of a proof of concept, I am able to use ssl_bump for https
> sites from the IE and Firefox browsers. I have created a self-signed
> certificate in squid and have added it as a trusted certificate in the
> IE and Firefox browsers.
>
> I added the same certificate on a mobile device and could see ssl_bump
> working from the built-in mobile browser and the Chrome/Safari
> browsers. But when I use mobile apps, then for a couple of apps like
> Twitter, SoundHound etc., ssl_bump is not working and I can see an SSL
> error in the squid cache log.
>
> Is there a way to address ssl_bump for mobile apps?
>
> Thanks in advance,
> Rajesh
>
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-bump-not-working-for-Android-and-IOS-apps-tp4665453.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
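[Editorial sketch: a server-first bump setup of the kind described above looks roughly like this in Squid 3.4 squid.conf. The paths, the CA filename and the cache sizes are placeholders, not values from this thread.]

```
# Listen with dynamic certificate generation, signing forged
# server certificates with our own root CA (the one that must be
# installed as trusted on every client device).
http_port 3128 ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem

# Helper that generates and caches the per-site certificates.
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB

# Bump all CONNECT tunnels, fetching the real server certificate
# first.  Apps that pin their server certificate will still refuse
# the forged one and abort the TLS handshake, as seen with the
# Twitter app in this thread.
ssl_bump server-first all
```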
Re: [squid-users] Blank page on first load
>> With the first two options enabled in IE and SPDY/3 disabled, Google
>> loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling
>> the HTTP 1.1 settings works. So to me it seems that HTTP/1.1 is the
>> problem here (as well as the SPDY/3 problem). We run Squid 3.1.
>
> Okay. Squid-3.1 is still mostly HTTP/1.0 software, and IE has problems
> using HTTP/1.1 to a 1.0 proxy. You could avoid that by upgrading Squid,
> preferably to the current supported release (3.4.4).
>
> I have a client running many IE11 installations with their default
> settings behind a Squid-3.4 and not seeing problems.
>
> Amos

Thank you Amos. I will go to 3.4 then.

Regards
Jasper
Re: [squid-users] inconsistency with objects in squid cache
2014-04-06 13:46 GMT-03:00, Eliezer Croitoru elie...@ngtech.co.il:
> On 04/06/2014 05:29 PM, Sylvio Cesar wrote:
>> but this happens only in the network 10.21.155.0/24.
>
> then squid.conf and the debug_options output would help to understand
> if it is the reason or there is another reason.
>
> Eliezer

Hi Eliezer,

I noticed that when there is the header "Vary: Accept-Encoding,User-Agent" the object is not cached. Following are logs from cache.log and cachemgr.cgi:

HTTP/1.1 200 OK
Date: Mon, 07 Apr 2014 12:26:02 GMT
Server: Apache/2.0.63
Last-Modified: Tue, 11 Mar 2014 22:31:34 GMT
ETag: 586039-e63e58-484cbd80
Accept-Ranges: bytes
Content-Length: 15089240
Vary: Accept-Encoding,User-Agent
Cache-Control: public
Keep-Alive: timeout=15, max=35
Connection: Keep-Alive
Content-Type: text/plain; charset=ISO-8859-1

FLV^A^E

--
2014/04/07 09:26:06.755 kid1| ctx: exit level 0
2014/04/07 09:26:06.755 kid1| ctx: enter level 0: 'http://xxx.example/video/video.flv'
2014/04/07 09:26:06.755 kid1| http.cc(919) haveParsedReplyHeaders: HTTP CODE: 200
2014/04/07 09:26:06.755 kid1| http.cc(656) httpMakeVaryMark: httpMakeVaryMark: accept-encoding, user-agent=curl%2F7.19.0%20(i686-suse-linux-gnu)%20libcurl%2F7.19.0%20OpenSSL%2F0.9.8h%20zlib%2F1.2.3%20libidn%2F1.10
2014/04/07 09:26:06.755 kid1| http.cc(656) httpMakeVaryMark: httpMakeVaryMark: accept-encoding, user-agent=curl%2F7.19.0%20(i686-suse-linux-gnu)%20libcurl%2F7.19.0%20OpenSSL%2F0.9.8h%20zlib%2F1.2.3%20libidn%2F1.10
2014/04/07 09:26:06.756 kid1| ctx: exit level 0
2014/04/07 09:26:06.756 kid1| client_side.cc(1459) sendStartOfMessage: HTTP Client local=127.0.0.1:3128 remote=127.0.0.1:44323 FD 14 flags=1
2014/04/07 09:26:06.756 kid1| client_side.cc(1460) sendStartOfMessage: HTTP Client REPLY:
HTTP/1.1 200 OK
Date: Mon, 07 Apr 2014 12:26:02 GMT
Server: Apache/2.0.63
Last-Modified: Tue, 11 Mar 2014 22:31:34 GMT
ETag: 586039-e63e58-484cbd80
Accept-Ranges: bytes
Content-Length: 15089240
Vary: Accept-Encoding,User-Agent
Cache-Control: public
Content-Type: text/plain; charset=ISO-8859-1
X-Cache: MISS from sylviosuse11
X-Cache-Lookup: MISS from sylviosuse11:3128
Via: 1.1 sylviosuse11 (squid/3.4.4)
Connection: keep-alive

--
KEY 7351B3AA6DB80247DE63873CAB59CFE8
        STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_DONE
        CACHABLE,DISPATCHED,VALIDATED
        LV:1396873562 LU:1396873566 LM:1394577094 EX:-1
        0 locks, 0 clients, 1 refs
        Swap Dir 0, File 0X01
        GET http://xxx.example/video/video.flv
        vary_headers: accept-encoding, user-agent=curl%2F7.19.0%20(i686-suse-linux-gnu)%20libcurl%2F7.19.0%20OpenSSL%2F0.9.8h%20zlib%2F1.2.3%20libidn%2F1.10
        inmem_lo: 0
        inmem_hi: 15089604
        swapout: 15089604 bytes queued

KEY E08FBDC74EAD09CEBCC38380DACCF63F
        STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_NONE
        CACHABLE,VALIDATED
        LV:1396873566 LU:1396873566 LM:-1 EX:1396973566
        0 locks, 0 clients, 0 refs
        Swap Dir 0, File
        GET http://xxx.example/video/video.flv
        inmem_lo: 0
        inmem_hi: 227
        swapout: 227 bytes queued

How can I make squid cache the object when the header "Vary: Accept-Encoding,User-Agent" is present?
Re: [squid-users] inconsistency with objects in squid cache
My squid -v:

Squid Cache: Version 3.4.4
configure options: '--prefix=/usr' '--sysconfdir=/etc/squid' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var' '--libexecdir=/usr/sbin' '--datadir=/usr/share/squid' '--mandir=/usr/share/man' '--libdir=/usr/lib' '--sharedstatedir=/var/squid' '--with-logdir=/var/log/squid' '--with-swapdir=/var/cache/squid' '--with-pidfile=/var/run/squid.pid' '--with-dl' '--with-maxfd=16384' '--enable-async-io' '--enable-maintainer-mode' '--enable-storeio' '--enable-disk-io' '--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-delay-pools' '--enable-esi' '--enable-icap-client' '--enable-useragent-log' '--enable-referer-log' '--enable-kill-parent-hack' '--enable-arp-acl' '--enable-ssl' '--enable-forw-via-db' '--enable-cache-digests' '--enable-linux-netfilter' '--with-large-files' '--enable-underscores' '--enable-auth' '--enable-basic-auth-helpers' '--enable-ntlm-auth-helpers' '--enable-negotiate-auth-helpers' '--enable-digest-auth-helpers' '--enable-external-acl-helpers' '--enable-ntlm-fail-open' '--enable-stacktraces' '--enable-x-accelerator-vary' '--with-default-user=squid' '--disable-ident-lookups' '--disable-strict-error-checking' '--enable-zph-qos' '--enable-follow-x-forwarded-for' 'CFLAGS=-O2 -g -m32 -march=i586 -mtune=i686 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -DNUMTHREADS=60 -march=nocona -03 -pipe -fomit-frame-pointer -funroll-loops -ffast-math -fno-exceptions' 'LDFLAGS=-Wl,-z,relro,-z,now -pie' 'CXXFLAGS=-O2 -g -m32 -march=i586 -mtune=i686 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -fPIE -fPIC -DOPENSSL_LOAD_CONF'

2014-04-07 11:02 GMT-03:00, Sylvio Cesar sylviotamo...@gmail.com:
> [snip - full cache.log and cachemgr.cgi output quoted in the previous message]
[squid-users] Re: SSL bump not working for Android and IOS apps
- That you have installed the root CA certificate on the device.
  Yes, the certificate is installed on the device.
- The certificate is bumpable.
  Yes, I can see access log entries for https sites from the browser.
- The app is not using an embedded certificate (public or per user).
  I am not sure; this might be the reason. Is there a way to know if an
  app (e.g. Twitter, SoundHound) uses an embedded certificate?

Thanks,
Rajesh

Eliezer Croitoru-2 wrote:
> The answer to what you seek depends mainly on:
> - That you have installed the root CA certificate on the device.
> - The certificate is bumpable.
> - The app is not using an embedded certificate (public or per user).
>
> Eliezer
>
> [snip - original message quoted in full above]

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-bump-not-working-for-Android-and-IOS-apps-tp4665453p4665470.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: SSL bump not working for Android and IOS apps
Try surfing to the web site using Firefox or Chrome on a regular PC and see what happens then.

Eliezer

On 04/07/2014 08:43 PM, Rajesh Srivastava wrote:
> - That you have installed the root CA certificate on the device.
>   Yes, the certificate is installed on the device.
> - The certificate is bumpable.
>   Yes, I can see access log entries for https sites from the browser.
> - The app is not using an embedded certificate (public or per user).
>   I am not sure; this might be the reason. Is there a way to know if an
>   app (e.g. Twitter, SoundHound) uses an embedded certificate?
>
> Thanks,
> Rajesh
>
> [snip - earlier messages quoted in full above]
[squid-users] Do we have an algorithm to define the cachability of an object by the request and response?
For example, the question was asked in the past whether YouTube is cachable or not. Once we see the request and the response we can say whether it is cachable. For example this request:

Host: r8---sn-nhpax-ua8e.googlevideo.com
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://s.ytimg.com/yts/swfbin/player-vflXH_6x-/watch_as3.swf
Connection: keep-alive
##end

should be cachable (it forces or accepts gzip), and indeed it is being cached. The issue is that the player uses VBR, which changes the size and shape of the requests and responses each time, as far as I understand. A store log can show that the object is being stored, but the next request is not the same as the previous one that seems pretty similar. So if you do try to cache YouTube, for example, it will be very hard to cache an application which changes its way of fetching the same object in different sizes and shapes.

Eliezer
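[Editorial sketch: the variable-request problem described above is what Squid's Store-ID feature (store_id_program, available in 3.4) tries to address - rewriting the many variant URLs of one object down to a single internal cache key. A minimal conceptual illustration in Python; the host, path and parameter names are made up for the example, and dropping the range parameter is only safe if the object can then be fetched whole:]

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def normalize_video_url(url, drop=("range",)):
    """Collapse per-request query parameters (e.g. range=xxx-yyy) so
    that all chunk requests for the same video map to one cache key.
    Caution: mixing partial responses under one key corrupts the cache
    unless the full object is fetched instead."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in drop]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(sorted(kept)), ""))

a = normalize_video_url("http://r8.example/videoplayback?id=abc&itag=22&range=0-999")
b = normalize_video_url("http://r8.example/videoplayback?id=abc&itag=22&range=1000-1999")
print(a == b)  # True: both chunk requests share one normalized key
```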
[squid-users] Re: Do we have an algorithm to define the cachability of an object by the request and response?
Sorry, but what is vbr?

> The issue is that the player is using vbr

I do not understand your question. First of all, the request usually is not simply r8---sn-nhpax-ua8e.googlevideo.com but also contains additional info, like itag, id, and, most important, range=xxx-yyy. I was always afraid that youtube/google might start to make xxx-yyy more or less random, as this would definitely kill cachability, unless somebody writes some very smart code to join/extract the various parts.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665474.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: Do we have an algorithm to define the cachability of an object by the request and response?
VBR is short for Variable Bit Rate, which means the rate changes all the time according to what the software thinks fits the content. In streaming it is used to deliver the video and/or audio in a way that favours the current bandwidth over quality.

Indeed, as you say about itag etc., but still they changed a couple of things, and the only way is to force a fetch of the full object, as far as I can tell. (It was half a question and half an answer.)

Eliezer

On 04/07/2014 11:23 PM, babajaga wrote:
> Sorry, but what is vbr?
>
> I do not understand your question. First of all, the request usually is
> not simply r8---sn-nhpax-ua8e.googlevideo.com but also contains
> additional info, like itag, id, and, most important, range=xxx-yyy. I
> was always afraid that youtube/google might start to make xxx-yyy more
> or less random, as this would definitely kill cachability, unless
> somebody writes some very smart code to join/extract the various parts.
Re: [squid-users] inconsistency with objects in squid cache
Hi Eliezer, can you help me?

2014-04-07 11:06 GMT-03:00 Sylvio Cesar sylviotamo...@gmail.com:
> [snip - squid -v output and the cache.log/cachemgr.cgi logs quoted in
> full in the previous messages]
Re: [squid-users] Error negotiating SSL connection on FD ##: Closed by client
Thanks, Guy.

I'm almost tempted to just ssl_bump none for 23.0.0.0/12, but I'm sure that would lead to all sorts of annoyances for clients who are tracking users' download usage etc.

I'd appreciate it if you could share your list of IP addresses; it might be useful for us.

Dan

On 7 Apr 2014, at 11:23 pm, Guy Helmer ghel...@palisadesystems.com wrote:
> On Apr 6, 2014, at 11:58 PM, Dan Charlesworth d...@getbusi.com wrote:
>> This somewhat vague error comes up with relative frequency from iOS
>> apps when browsing via our Squid 3.4.4 intercepting proxy which is
>> performing server-first SSL bumping.
>>
>> The requests in question don't make it as far as the access log, but
>> with debug_options 28,3 26,3, the dst IP can be identified and allowed
>> through with ssl_bump none. The device trusts Squid's CA, but
>> apparently that's not enough for the Twitter iOS app and certain
>> Akamai requests that App Store updates use.
>>
>> Can anyone suggest how one might debug this further? Or just an idea
>> of why the client might be closing the SSL connection in certain
>> cases? Thanks!
>
> I suspect that the Twitter app is using certificate pinning to prevent
> man-in-the-middle decryption:
> https://dev.twitter.com/docs/security/using-ssl
>
> IIRC, I have had some difficulty tracking down or obtaining the
> intermediate certs that Akamai uses. I ended up whitelisting many
> Akamai IP addresses from SSL interception on my test network.
>
> Guy
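[Editorial sketch: the bypass Dan mentions would look roughly like this in squid.conf (Squid 3.4 syntax). The ACL name is made up, and 23.0.0.0/12 is just the range under discussion, not a vetted Akamai list:]

```
# Skip bumping for destinations in this range; CONNECT tunnels to
# these IPs are passed through un-decrypted, but are still logged
# with their transferred byte counts.
acl akamai_nets dst 23.0.0.0/12
ssl_bump none akamai_nets
# Bump everything else, fetching the real server cert first.
ssl_bump server-first all
```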
Re: [squid-users] inconsistency with objects in squid cache
On 8/04/2014 2:02 a.m., Sylvio Cesar wrote:
> 2014-04-06 13:46 GMT-03:00, Eliezer Croitoru elie...@ngtech.co.il:
>> On 04/06/2014 05:29 PM, Sylvio Cesar wrote:
>>> but this happens only in the network 10.21.155.0/24.
>>
>> then squid.conf and the debug_options output would help to understand
>> if it is the reason or there is another reason.
>>
>> Eliezer
>
> Hi Eliezer,
>
> I noticed that when there is the header Vary: Accept-Encoding,User-Agent
> the object is not cached.

The object *is* cached...

> KEY 7351B3AA6DB80247DE63873CAB59CFE8
>         STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_DONE
>         CACHABLE,DISPATCHED,VALIDATED

Note the CACHEABLE and STORE_OK and SWAPOUT_DONE.

>         LV:1396873562 LU:1396873566 LM:1394577094 EX:-1
>         0 locks, 0 clients, 1 refs
>         Swap Dir 0, File 0X01
>         GET http://xxx.example/video/video.flv
>         vary_headers: accept-encoding, user-agent=curl%2F7.19.0%20(i686-suse-linux-gnu)%20libcurl%2F7.19.0%20OpenSSL%2F0.9.8h%20zlib%2F1.2.3%20libidn%2F1.10

However, only a client which sends the *exact* headers marked above will be able to fetch it.

>         inmem_lo: 0
>         inmem_hi: 15089604
>         swapout: 15089604 bytes queued
>
> KEY E08FBDC74EAD09CEBCC38380DACCF63F
>         STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_NONE
>         CACHABLE,VALIDATED
>         LV:1396873566 LU:1396873566 LM:-1 EX:1396973566
>         0 locks, 0 clients, 0 refs
>         Swap Dir 0, File
>         GET http://xxx.example/video/video.flv
>         inmem_lo: 0
>         inmem_hi: 227
>         swapout: 227 bytes queued
>
> How can I make squid cache the object when the header
> Vary: Accept-Encoding,User-Agent is present?

Always fetch using the same browser agent. Any time the Accept-Encoding OR User-Agent header changes (even by 1 byte) a new object will be fetched and cached.

... this is not a reasonable thing for real users to do. They all choose a mix of browsers, players and versions.

Alternatively, if you have control over the web server, it needs redesigning in a way that prevents Vary: User-Agent from being sent.

Amos
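[Editorial sketch: the exact-header matching Amos describes can be illustrated conceptually. This is not Squid's actual code, just a toy model of how a Vary-based cache sub-key is built from the client's request headers:]

```python
from urllib.parse import quote

def vary_key(vary_header, request_headers):
    """Build a conceptual Vary cache sub-key: one entry per header
    named in the response's Vary header, using the client's exact
    request value (missing headers contribute an empty value)."""
    parts = []
    for name in (h.strip().lower() for h in vary_header.split(",")):
        value = request_headers.get(name, "")
        # Percent-encode so the key is one printable token, similar
        # to the vary_headers line in the store entry dump above.
        parts.append("%s=%s" % (name, quote(value, safe="")))
    return ", ".join(parts)

curl_req = {"accept-encoding": "", "user-agent": "curl/7.19.0"}
ff_req = {"accept-encoding": "gzip, deflate", "user-agent": "Mozilla/5.0"}

k1 = vary_key("Accept-Encoding,User-Agent", curl_req)
k2 = vary_key("Accept-Encoding,User-Agent", ff_req)
# Different header values -> different sub-keys, so the Firefox-style
# request cannot reuse the object cached for the curl request.
print(k1 == k2)  # False
```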
Re: [squid-users] Error negotiating SSL connection on FD ##: Closed by client
On 8/04/2014 11:34 a.m., Dan Charlesworth wrote:
> Thanks, Guy.
>
> I'm almost tempted to just ssl_bump none for 23.0.0.0/12, but I'm sure
> that would lead to all sorts of annoyances for clients who are tracking
> users' download usage etc.

FYI: for tracking just usage amounts it is not a huge problem. The CONNECT requests are still logged along with the transferred data sizes, just a little bit later than the requests inside the tunnel would have been.

It is a big problem for AV scanning and site-restriction security blocks. But then in those cases the pinning is being slightly helpful by blocking the site's usage without wasting Squid's processing time.

Amos

> I'd appreciate it if you could share your list of IP addresses; it
> might be useful for us.
>
> Dan
>
> On 7 Apr 2014, at 11:23 pm, Guy Helmer wrote:
>> [snip - quoted in full in the previous message]
Re: [squid-users] inconsistency with objects in squid cache
Amos,

I have configured in squid.conf:

request_header_access Vary deny all

but Squid ignores this and the Vary: Accept-Encoding header still appears. Is there a way to make Squid remove the Vary: Accept-Encoding line?

2014-04-07 20:49 GMT-03:00 Amos Jeffries squ...@treenet.co.nz:
> On 8/04/2014 2:02 a.m., Sylvio Cesar wrote:
>> 2014-04-06 13:46 GMT-03:00, Eliezer Croitoru elie...@ngtech.co.il:
>>> On 04/06/2014 05:29 PM, Sylvio Cesar wrote:
>>>> but this happens only in the network 10.21.155.0/24.
>>> then squid.conf and the debug_options output would help to understand
>>> if it is the reason or there is another reason.
>>> Eliezer
>>
>> Hi Eliezer, I noticed that when there is the header
>> Vary: Accept-Encoding,User-Agent the object is not cached.
>
> The object *is* cached...
>
> KEY 7351B3AA6DB80247DE63873CAB59CFE8
>         STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_DONE
>         CACHABLE,DISPATCHED,VALIDATED
>
> Note the CACHABLE and STORE_OK and SWAPOUT_DONE.
>
>         LV:1396873562 LU:1396873566 LM:1394577094 EX:-1
>         0 locks, 0 clients, 1 refs
>         Swap Dir 0, File 0X01
>         GET http://xxx.example/video/video.flv
>         vary_headers: accept-encoding, user-agent=curl%2F7.19.0%20(i686-suse-linux-gnu)%20libcurl%2F7.19.0%20OpenSSL%2F0.9.8h%20zlib%2F1.2.3%20libidn%2F1.10
>
> However, only a client which sends the *exact* headers marked above
> will be able to fetch it.
>
>         inmem_lo: 0
>         inmem_hi: 15089604
>         swapout: 15089604 bytes queued
>
> KEY E08FBDC74EAD09CEBCC38380DACCF63F
>         STORE_OK  IN_MEMORY  SWAPOUT_DONE PING_NONE
>         CACHABLE,VALIDATED
>         LV:1396873566 LU:1396873566 LM:-1 EX:1396973566
>         0 locks, 0 clients, 0 refs
>         Swap Dir 0, File
>         GET http://xxx.example/video/video.flv
>         inmem_lo: 0
>         inmem_hi: 227
>         swapout: 227 bytes queued
>
>> How to make squid perform the object cache when found in the header
>> Vary: Accept-Encoding,User-Agent???
>
> Always fetch using the same browser agent. Any time the Accept-Encoding
> OR User-Agent header changes (even by 1 byte) a new object will be
> fetched and cached.
>
> ... this is not a reasonable thing for real users to do. They all
> choose a mix of browsers, players and versions.
>
> Alternatively, if you have control over the web server, it needs
> redesigning in a way that prevents Vary:User-Agent from being sent.
>
> Amos

--
Att,
Sylvio César, LPIC1, LPIC2, RHCT, RHCE, NCLA, FreeBSD Committer.
"If you abide in me, and my words abide in you, ask whatever you wish, and it shall be done for you." John 15:7
[squid-users] How to make squid proxy server cache response with vary: * in header?
curl -x localhost:3128 --silent -o /dev/null --dump-header /dev/stdout http://stackoverflow.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=18
Content-Type: text/html; charset=utf-8
Expires: Tue, 08 Apr 2014 02:00:38 GMT
Last-Modified: Tue, 08 Apr 2014 01:59:38 GMT
Vary: *
X-Frame-Options: SAMEORIGIN
Date: Tue, 08 Apr 2014 02:00:18 GMT
Content-Length: 212147
X-Cache: MISS from sylviosuse11
X-Cache-Lookup: MISS from sylviosuse11:3128
Via: 1.1 sylviosuse11 (squid/3.4.4)
Connection: keep-alive

--
Att,
Sylvio César,
Re: [squid-users] How to make squid proxy server cache response with vary: * in header?
Vary: * means the response changes depending on factors outside the HTTP protocol, which shared proxies like Squid have no way to evaluate, so they can never determine whether a cached response is appropriate to deliver. Even if you did store it, the cache would still always MISS.

FWIW: the server for stackoverflow is presenting several conflicting cache controls. Personally I think that server should be emitting an ETag header and Cache-Control: max-age=60 instead of the Vary and the Cache-Controls it is using. But even so, there is still nothing any proxy can do about it except MISS.

Also, be aware that curl by default sends cache-control headers forcing a MISS, so it is not the best tool to be testing proxies with. Prefer wget, squidclient, or the HTTP validator at http://redbot.org/.

Amos

On 8/04/2014 2:00 p.m., Sylvio Cesar wrote:
> curl -x localhost:3128 --silent -o /dev/null --dump-header /dev/stdout http://stackoverflow.com
>
> HTTP/1.1 200 OK
> Cache-Control: public, max-age=18
> Content-Type: text/html; charset=utf-8
> Expires: Tue, 08 Apr 2014 02:00:38 GMT
> Last-Modified: Tue, 08 Apr 2014 01:59:38 GMT
> Vary: *
> X-Frame-Options: SAMEORIGIN
> Date: Tue, 08 Apr 2014 02:00:18 GMT
> Content-Length: 212147
> X-Cache: MISS from sylviosuse11
> X-Cache-Lookup: MISS from sylviosuse11:3128
> Via: 1.1 sylviosuse11 (squid/3.4.4)
> Connection: keep-alive
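The "always MISS" rule for Vary: * can be written down as a small decision function. A minimal sketch of the cache-reuse check described above (illustrative only, not Squid's implementation):

```python
def reusable_from_cache(vary_header, cached_req_headers, new_req_headers):
    """Decide whether a cached response may satisfy a new request.

    vary_header: the Vary value stored with the response (or None).
    cached_req_headers / new_req_headers: dicts of lower-cased request
    headers from the original and the new request. Illustrative sketch.
    """
    if vary_header is None:
        return True  # the response does not vary; always reusable
    fields = [f.strip().lower() for f in vary_header.split(",")]
    if "*" in fields:
        return False  # Vary: * -> never reusable, always a MISS
    # otherwise every listed header must match the original exactly
    return all(cached_req_headers.get(f) == new_req_headers.get(f)
               for f in fields)
```

Note that for Vary: * the function returns False before any headers are even compared — no request can ever match, which is exactly why storing the object would buy nothing.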
Re: [squid-users] inconsistency with objects in squid cache
On 8/04/2014 12:54 p.m., Sylvio Cesar wrote:
> Amos, I have configured in squid.conf:
>
> request_header_access Vary deny all
>
> but Squid ignores this and the Vary: Accept-Encoding header still
> appears. Is there a way to make Squid remove the Vary: Accept-Encoding
> line?

No. That will screw up HTTP.

From the IETF draft which is about to become the new HTTP/1.1 RFC standard
(http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-7.1.4):

   An origin server might send Vary with a list of fields for two
   purposes:

   1.  To inform cache recipients that they MUST NOT use this response
       to satisfy a later request unless the later request has the same
       values for the listed fields as the original request (Section 4.1
       of [Part6]).

In other words, Vary expands the cache key required to match a new request to the stored cache entry. No matter what is done when communicating with the server, it is the *received* header from clients which Squid is required to use for cache lookups.

Mangling the headers delivered to the server (and thus the ones used to store the object) will only increase the amount of MISSes happening on Vary, since later client requests, even with the same headers as the current client request, will not match the headers sent to the server.

The best you can do is ensure that the server sees the headers Squid received from the client as accurately as possible (at least for the set mentioned in Vary). Future requests from the same client, or others with the same UA and encoding, can then HIT.

Remember: there is *never* a guarantee of a HIT in real traffic. Caching is a best-effort mechanism.

Amos
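Why mangling the forwarded headers guarantees future MISSes can be seen with a toy two-line cache, where the variant is stored under the headers the *server* saw rather than the ones the client sent (a deliberately simplified model, not Squid code):

```python
# Toy variant cache: entries are keyed on (URL, User-Agent as forwarded).
cache = {}

def store(url, forwarded_headers, body):
    """Store a response variant under the headers the server saw."""
    cache[(url, forwarded_headers.get("user-agent"))] = body

def lookup(url, client_headers):
    """Look a variant up under the headers the client actually sent."""
    return cache.get((url, client_headers.get("user-agent")))

# The proxy stripped the User-Agent before forwarding, so the variant
# is stored under an empty UA...
store("http://xxx.example/video/video.flv", {"user-agent": ""}, b"video bytes")

# ...and the very same client, resending its real UA, can never match it.
print(lookup("http://xxx.example/video/video.flv",
             {"user-agent": "curl/7.19.0"}))  # prints None -> a MISS
```

The stored entry is unreachable by any real client request: every lookup keys on the client's genuine header while the entry was keyed on the mangled one.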
Re: [squid-users] How to make squid proxy server cache response with vary: * in header?
On 8/04/2014 3:02 p.m., Sylvio Cesar wrote:
> Amos, how do I use squidclient to download a file, a .flv for example?

squidclient -h shows the full set of parameters available and what they do, as with any good command-line tool.

Via proxy on localhost:

  squidclient http://stackoverflow.com/

Via proxy at example.com (could be an IP if needed):

  squidclient -h example.com http://stackoverflow.com/

Direct from the web server:

  squidclient -p 80 -h stackoverflow.com /

NP: depending on the tool version you may or may not also need the -j stackoverflow.com or -H 'Host: stackoverflow.com\n' parameters to set the Host: header explicitly. The -H option takes a string of extra headers, separated by \n, to add to the request.

Amos

2014-04-07 23:35 GMT-03:00 Amos Jeffries squ...@treenet.co.nz:
> Vary: * means the response changes depending on factors outside the
> HTTP protocol, which shared proxies like Squid have no way to evaluate,
> so they can never determine whether a cached response is appropriate to
> deliver. Even if you did store it, the cache would still always MISS.
>
> FWIW: the server for stackoverflow is presenting several conflicting
> cache controls. Personally I think that server should be emitting an
> ETag header and Cache-Control: max-age=60 instead of the Vary and the
> Cache-Controls it is using. But even so, there is still nothing any
> proxy can do about it except MISS.
>
> Also, be aware that curl by default sends cache-control headers forcing
> a MISS, so it is not the best tool to be testing proxies with. Prefer
> wget, squidclient, or the HTTP validator at http://redbot.org/.
>
> Amos
>
> On 8/04/2014 2:00 p.m., Sylvio Cesar wrote:
>> curl -x localhost:3128 --silent -o /dev/null --dump-header /dev/stdout http://stackoverflow.com
>>
>> HTTP/1.1 200 OK
>> Cache-Control: public, max-age=18
>> Content-Type: text/html; charset=utf-8
>> Expires: Tue, 08 Apr 2014 02:00:38 GMT
>> Last-Modified: Tue, 08 Apr 2014 01:59:38 GMT
>> Vary: *
>> X-Frame-Options: SAMEORIGIN
>> Date: Tue, 08 Apr 2014 02:00:18 GMT
>> Content-Length: 212147
>> X-Cache: MISS from sylviosuse11
>> X-Cache-Lookup: MISS from sylviosuse11:3128
>> Via: 1.1 sylviosuse11 (squid/3.4.4)
>> Connection: keep-alive
[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?
Thanx for the hint. Was wondering already because of the unusually low byte-hitrate in my 2.7 setup.
[squid-users] Caching not working for Youtube videos
Hi,

Last week we realized that our caching for Youtube videos is broken and no longer working. We use a 'storeurl_rewrite_program' helper to rewrite the store URL for all Youtube videos. The following is our configuration (Squid 2.7):

acl store_rewrite_list url_regex youtube
cache allow store_rewrite_list
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program VideoCachingPolicy.pl
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 100

We use the following method in VideoCachingPolicy.pl:

1. All Youtube requests which have stream_204 or generate_204 in the URL are stored in a log file.
2. In the Perl helper, for each request we check whether it has videoplayback plus google/youtube in the URL.
3. If yes, then we read (backwards) the log file generated in step 1:
   a. We check whether any of the stream_204/generate_204 requests has a matching CPN field. If yes, we extract the docid from those requests and generate an internal URL.
   b. Else we append the ID which came with the current request. Note: as this ID is dynamically generated for every request stream, it does not result in a cache HIT.

This method was working fine for some time, but now it seems to be broken. On investigating I found two issues:

1. The stream_204/generate_204 requests do not always come before the videoplayback requests.
2. Even if stream_204 requests come before videoplayback, they are not logged immediately: when I try to read the file it doesn't have these lines initially, but it has them later on.

Is anyone else facing these issues? Is there any long-term solution for caching Youtube videos?

Thanks,
Aditya
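For reference, the 2.7 storeurl helper interface is line-based: with storeurl_rewrite_concurrency set, each request line begins with a channel ID that the reply must echo back. A minimal skeleton of such a helper in Python — the `id=` extraction and the SQUIDINTERNAL store-URL shown here are hypothetical placeholders for the docid/CPN logic the poster describes, and the exact request-line fields should be checked against the 2.7 release notes:

```python
import re
import sys

def rewrite(url):
    """Map ever-changing videoplayback URLs onto a stable internal
    store URL keyed on the video id. Simplified placeholder logic;
    real helpers match the docid/CPN as described above."""
    m = re.search(r"videoplayback\?.*?\bid=([0-9a-f]+)", url)
    if m:
        return "http://video-srv.youtube.com.SQUIDINTERNAL/id=" + m.group(1)
    return url  # leave everything else unchanged

def main():
    # With concurrency, Squid sends lines like:
    #   "<channel-ID> <URL> <client/fqdn> <ident> <method>"
    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 2:
            continue
        channel, url = parts[0], parts[1]
        # The reply must echo the channel ID, then the store URL.
        sys.stdout.write("%s %s\n" % (channel, rewrite(url)))
        sys.stdout.flush()  # Squid expects unbuffered replies

if __name__ == "__main__":
    main()
```

A rewrite based only on what is present in the videoplayback URL itself, as sketched here, also sidesteps the two ordering/flushing race conditions with the stream_204 log file, at the cost of only working when the URL carries a stable identifier.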