Re: [squid-users] how to dynamically reconfigure squid?
On 9/04/2014 5:16 p.m., Waldemar Brodkorb wrote:
> Amos Jeffries wrote:
>>> What do you think? What might be a solution to this problem? I can't restart squid when changing the ACL rules, because then all users in the network would be disconnected.
>> You could set the request_timeout to be short. This would make the CONNECT requests terminate after a few minutes.
> Will try that.
>> You could also use the SSL-bump feature in Squid. This has the double benefit of allowing the control software to act on the HTTPS requests and preventing SPDY etc. being used by the browser.
> This is not wanted by my boss, probably for ethical reasons. If a user uses HTTPS, he normally believes his traffic is secure, and we want that to be the case.

Fair enough.

> Going back to the initial problem, slow NTLM authentication with newer browsers: would it be worthwhile to switch completely to Negotiate?

Yes. NTLM was officially deprecated by MS about 8 years ago, and Negotiate/Kerberos is supported by a wider range of modern software.

> Or is it possible to cache the NTLM authentication results, so that Squid does not need to fork an ntlm auth helper on every request?

NTLM (and Negotiate) credentials are pinned to the connection state for as long as the connection they are valid for exists. As the credentials token is connection-specific, no additional caching and re-use is possible beyond that.

The helpers should not be forking on every request. They should be forked on startup, and later only if there are insufficient helpers already running. Once forked, each helper should service traffic indefinitely.

You can minimize NTLM costs:
* by enabling persistent connections on both the client and server sides of Squid, and as widely on other software as possible,
* by encouraging HTTP/1.1 with chunked encoding as much as possible instead of HTTP/1.0 connection:close by other software in the network,
* by adding Negotiate/Kerberos alongside NTLM.
There will still be significant churn for NTLM, but every bit helps. Amos
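A minimal squid.conf sketch of the last two points, assuming a working Kerberos keytab and a Samba ntlm_auth helper; the helper paths and service principal are illustrative and must be adapted to the local setup:

```
# Offer Negotiate/Kerberos first; clients fall back to NTLM only if needed
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com
auth_param negotiate children 20

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20

# Keep connections persistent so credentials stay pinned to them
client_persistent_connections on
server_persistent_connections on
```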
Re: [squid-users] Blank page on first load
> You could avoid that by upgrading Squid, preferably to the current supported release (3.4.4). I have a client running many IE11 with their default settings behind a Squid-3.4 and not seeing problems.
> Amos

Thank you Amos. I will go to 3.4 then.

Hi Amos

I built a new proxy, stock standard settings, and tested it again. With IE11 and SPDY/3 enabled I still get the "initial page cannot be loaded" problem. When doing a refresh immediately afterward, it loads the page. On the plus side, Sharepoint sites now work. :-)

Any suggestions?

Kind Regards
Jasper
[squid-users] Squid Question about method GET
Hi all,

My name is Miguel A. Aguayo. I'm working on a project and have some questions about Squid. The project consists of the following:

* I have a server with video content formatted in the 3GP-DASH standard.
* I'm transferring this content to a client using multicast via the FLUTE standard.
* On the client I have apache2 and squid.

What I'm trying to do is a squid config that catches the GET requests of a VLC client trying to reach the content on my server, and passes the value of each GET to a Perl executable that checks the local Apache server to see if the content is already on the client. If it is on the client, Apache redirects to that content; if not, the GET goes on to the server for the content.

I have implemented a redirector that just sends the requests to localhost, but without the intelligence needed to see whether the file exists on the client's Apache. My question is how to pass the value of the GET request to a Perl program, because I haven't seen an example like that.

Thanks, Best Regards

M.C. Miguel Ángel Aguayo Ortuño m...@outlook.com ma.agu...@alumnos.upm.es Estudiante de Doctorado ETSI-T DIT UPM.es
Re: [squid-users] Squid Question about method GET
On 9/04/2014 10:53 p.m., MIGUEL ANGEL AGUAYO ORTUÑO wrote:
> [snip: project description, as in the previous message]
> I have implemented a redirector that just sends the requests to localhost, but without the intelligence needed to see whether the file exists on the client's Apache. My question is how to pass the value of the GET request to a Perl program, because I haven't seen an example like that.

That information is already being sent by Squid to your redirector. Here is the specification of the redirector and rewriter helper protocols:
http://wiki.squid-cache.org/Features/AddonHelpers#URL_manipulation

Amos
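As that wiki page describes, Squid writes one request per line to the helper's stdin (the URL comes first), and reads one reply line back. A minimal sketch of such a helper, shown here in Python rather than Perl for brevity; the docroot path is a hypothetical placeholder, and the "OK rewrite-url=" reply syntax assumes Squid 3.4+ (older Squids expect the rewritten URL alone on the reply line):

```python
#!/usr/bin/env python3
"""Minimal Squid URL-rewrite helper sketch (non-concurrent protocol)."""
import os
import sys
from urllib.parse import urlparse

DOCROOT = "/var/www/html"  # hypothetical local Apache document root


def decide(url, exists=os.path.exists):
    """Return the helper reply line for one requested URL.

    If the URL's path already exists under the local docroot, rewrite
    the request to the local Apache; otherwise pass it through ("OK").
    """
    path = urlparse(url).path
    if path and exists(DOCROOT + path):
        return "OK rewrite-url=http://127.0.0.1" + path
    return "OK"


def main():
    # Squid sends one request per line: URL first, then client address,
    # ident and method. Replies must be flushed immediately, not buffered.
    for line in sys.stdin:
        fields = line.split()
        sys.stdout.write(decide(fields[0]) if fields else "OK")
        sys.stdout.write("\n")
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```

Hook it in with `url_rewrite_program /path/to/helper.py` in squid.conf.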
Re: [squid-users] Error negotiating SSL connection on FD ##: Closed by client
On Apr 7, 2014, at 6:34 PM, Dan Charlesworth d...@getbusi.com wrote:
> Thanks, Guy.
> I'm almost tempted to just ssl_bump none for 23.0.0.0/12, but I'm sure that would lead to all sorts of annoyances for clients who are tracking users' download usage etc.
> I'd appreciate it if you could share your list of IP addresses; it might be useful for us.

Some CIDRs of interest and the date I verified them. Akamai numbers are bound to vary based on logical and geographical location. Validate before use.

11/27/2013: Dropbox: 108.160.160.0/20
06/03/2013: WebEx: 64.68.96.0/19
05/03/2013: Mozilla: 63.245.208.0/20
11/20/2012: Akamai: 184.24.0.0/13
7/31/2012: swcdn.apple.com: 157.238.0.0/16
6/27/2012: Dropbox: 199.47.216.0/22
6/12/2012: Akamai: 23.32.0.0/11, 207.108.0.0/15, 209.211.216.0/24, 204.93.46.0/23, 216.243.192.0/19, 216.243.197.224/20
5/9/2012: supportdownload.apple.com: 67.135.105.0/24 (Akamai)
3/9/2012: Quicken: 206.108.40.0/21

Guy

> On 7 Apr 2014, at 11:23 pm, Guy Helmer ghel...@palisadesystems.com wrote:
>> On Apr 6, 2014, at 11:58 PM, Dan Charlesworth d...@getbusi.com wrote:
>>> This somewhat vague error comes up with relative frequency from iOS apps when browsing via our Squid 3.4.4 intercepting proxy, which is performing server-first SSL bumping. The requests in question don't make it as far as the access log, but with debug_options 28,3 26,3 the dst IP can be identified and allowed through with ssl_bump none. The device trusts Squid's CA, but apparently that's not enough for the Twitter iOS app and certain Akamai requests that App Store updates use. Can anyone suggest how one might debug this further? Or just an idea of why the client might be closing the SSL connection in certain cases? Thanks!
>> I suspect that the Twitter app is using certificate pinning to prevent man-in-the-middle decryption: https://dev.twitter.com/docs/security/using-ssl
>> IIRC, I have had some difficulty tracking down or obtaining the intermediate certs that Akamai uses. I ended up whitelisting many Akamai IP addresses from SSL interception on my test network.
>> Guy
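For reference, exempting such ranges from bumping takes only a couple of squid.conf lines; the acl name and the file path holding the CIDR list are illustrative:

```
# IPs whose TLS we pass through untouched (pinned apps, Akamai, etc.)
acl no_bump_dst dst "/etc/squid/no_bump_nets.txt"
ssl_bump none no_bump_dst
ssl_bump server-first all
```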
Re: [squid-users] Blank page on first load
Hey Jasper,

Just to make sure I understand: what is the issue, and is it on specific sites? Take my site for example: http://www1.ngtech.co.il/ Try to browse to the main page. For me squid works fine. I had an issue in which ICAP settings delayed the page loading, but what you describe is not a blank page but an error page. Can you look at the development console of IE11 and see what is happening in the network layer?

Eliezer

On 04/09/2014 01:05 PM, Jasper Van Der Westhuizen wrote:
> Hi Amos
> I built a new proxy, stock standard settings, and tested it again. With IE11 and SPDY/3 enabled I still get the "initial page can not be loaded" problem. When doing a refresh immediately afterward, it loads the page. On the plus side, Sharepoint sites now work. :-)
> Any suggestions?
> Kind Regards
> Jasper
Re: [squid-users] Error negotiating SSL connection on FD ##: Closed by client
That's awesome. I'll check these out. Thanks.

On 10 Apr 2014, at 1:03 am, Guy Helmer guy.hel...@palisadesystems.com wrote:
> [snip: the CIDR list and quoted history, exactly as in the previous message]
[squid-users] Squid not sending request to web
Hi All,

I have squid 3.3.8 configured as a transparent proxy. My router is redirecting web requests on port 80 to the squid box on port 3128. The problem is that the request returns "url could not be retrieved". My configuration file is below. I am hoping that someone can take a look at it and help me resolve this issue. The proxy server works when I direct traffic to port 3128 using the browser. The router script is below the config file.

Thanks

#Recommended minimum configuration:
#acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 192.168.1.0/24
acl lan src 192.168.1.0/255.255.255.0
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl bad_url url_regex /etc/squid3/blockedsites.acl
#acl lan src 192.168.1.0/25
acl CONNECT method CONNECT
visible_hostname NAS
http_access allow lan
#http_access allow manager localhost
#http_access deny manager
#http_access deny !Safe_ports
#http_access deny to_localhost
icp_access deny all
htcp_access deny all
http_port 3129
http_port 192.168.1.16:3128 intercept
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid3/access.log squid
#Suggested default:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid3
acl whitelist dstdomain /etc/squid3/whitelist.txt
# Allow localnet machines to whitelisted sites
#http_access allow localnet whitelist
# block all other access
http_access deny bad_url

Below is my iptables router script.

#!/bin/sh
PROXY_IP=192.168.1.16
PROXY_PORT=3128
LAN_IP=`nvram get lan_ipaddr`
LAN_NET=$LAN_IP/`nvram get lan_netmask`
iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-sending-request-to-web-tp4665512.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Squid brought down by hundreds of HEAD request to itself
The Squid instance is started in the morning and stopped at night. It is daily brought down by what I call hundreds of HEAD requests to itself. There's no fixed pattern to the problem: sometimes Squid keeps working OK with hundreds of those requests, sometimes it just becomes very unresponsive.

Here's what the requests look like with my logformat:

09/Apr/2014:17:41:02] 192.168.0.2 TCP_MISS:DEFAULT_PARENT 504 HEAD http://192.168.0.2:3128/ HTTP/1.0 Size:333 Ref:- Agent:-

Squid's server IP is 192.168.0.2, so it's as if the server itself is requesting the proxy. There's nothing running on the same server that I know of that would access the proxy. Where could a HEAD request like that come from?

Additional info: the size is always 333 during runtime, but when I do a restart, while Squid is stopping I see much higher numbers, in the thousands first, then quickly up until ~2, then it stops and restarts and the pattern disappears for a couple of hours.

Any idea of what could cause this to happen?

Windows 7 running SQUID 2.7.STABLE8

cheers
-nodje
Re: [squid-users] Squid not sending request to web
On 10/04/2014 12:09 p.m., fordjohn wrote:
> Hi All, I have squid 3.3.8 configured as a transparent proxy. My router is redirecting web requests on port 80 to the squid box on port 3128. The problem is that the request returns "url could not be retrieved". My configuration file is below. I am hoping that someone can take a look at it and help me resolve this issue. The proxy server works when I direct traffic to port 3128 using the browser. Router script is below the config file.

It is mandatory that the NAT operation be performed *only* on the Squid box. If NAT is performed anywhere else, the IP address information needed by Squid to verify connections is missing.

Use policy routing to direct traffic from the router to the Squid box without NAT:
http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute

Amos
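A sketch of that approach, with addresses and interface names taken from the original post as placeholders; the mark value and routing table number are arbitrary choices, and the wiki page above is the authoritative recipe:

```shell
# --- on the router: no NAT, just route port-80 traffic to the Squid box ---
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -s 192.168.1.0/24 ! -d 192.168.1.16 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 192.168.1.16 table 100

# --- on the Squid box: the NAT happens here only, into the intercept port ---
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128
```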
[squid-users] Re: Squid brought down by hundreds of HEAD request to itself
Some type of loop, I suspect, as you probably have parent squids configured. In case you have, please also post the parents' squid.conf. It (almost) always makes sense to post the squid.conf here; just guessing around does not help a lot.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-brought-down-by-hundreds-of-HEAD-request-to-itself-tp4665513p4665514.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Squid brought down by hundreds of HEAD request to itself
On 10/04/2014 1:32 p.m., nodje wrote:
> The Squid instance is started in the morning and stopped at night. It is daily brought down by what I call hundreds of HEAD requests to itself. [snip]
> Squid's server IP is 192.168.0.2, so it's as if the server itself is requesting the proxy. There's nothing running on the same server that I know of that would access the proxy.

... you mentioned a proxy running on that box :-0

> Where could a HEAD request like that come from?

Probably NAT-intercepted traffic containing the header Host: 192.168.0.2:3128, or a squid.conf http_port line containing defaultsite=192.168.0.2:3128.

Either way this is a well-known DoS enabled by misconfiguration. Add the squid.conf directive "via on". You should start to see messages about forwarding loops being blocked, and be able to track down which problem is causing the loop to start.

Amos
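With Via headers enabled, Squid recognizes its own hostname in a request's Via chain and blocks the forwarding loop instead of following it; each proxy also needs a distinct name for this to work. A minimal sketch (the hostname is a placeholder):

```
# squid.conf: enable loop detection through the Via header
via on
# each Squid instance must announce a unique name for loop detection
visible_hostname proxy1.example.local
```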
Fwd: Fwd: Re: [squid-users] Re: WARNING: Forwarding loop detected for:
Hi,

Any clue after seeing my squid.conf? I can see another person facing the same problem ("Squid brought down by hundreds of HEAD request to itself"), which should have reached your mailboxes today.

*Dipjyoti Bharali*
*Please consider the environment before printing this email.*

On 08-04-2014 15:51, Dipjyoti Bharali wrote:

squid.conf is as follows:

https_port 192.168.1.1:3129 cert=/etc/pki/myCA/private/server-key-cert.pem transparent
http_port 192.168.1.1:3128 transparent
acl QUERY urlpath_regex cgi-bin \?
acl apache rep_header Server ^Apache
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp://     480 60% 22160
refresh_pattern ^gopher://  30  20% 120
refresh_pattern .           480 50% 22160
forwarded_for on
cache_dir ufs /var/spool/squid 1 16 256
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl nocache dst 192.168.0.0/24
acl lan src 192.168.1.0/24 fe80::/10
acl SSL_ports port 443          # https
acl Safe_ports port 80 443      # http, https
acl Safe_ports port 21          # ftp
acl Safe_ports port 995         # SSL/TLS
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 631         # cups
acl Safe_ports port 873         # rsync
acl Safe_ports port 901         # SWAT
acl Safe_ports port 2082        # CPANEL
acl Safe_ports port 2083        # CPANEL
acl Safe_ports port 2078        # Webdav
acl purge method PURGE
acl CONNECT method CONNECT
acl BadSite ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
acl banned_sites url_regex -i who.is whois cricket resolver lyrics songs bollywood porn xxx livetv
acl ads dstdom_regex /var/squidGuard/ad_block.txt
#acl local src 192.168.1.1
acl numeric_IPs dstdom_regex ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl blockfiles urlpath_regex /var/squidGuard/blocks.files.acl
deny_info ERR_BLOCKED_FILES blockfiles
http_access deny blockfiles
http_access deny banned_sites
http_access deny skype_access
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow lan
http_access deny numeric_IPS
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname hindenberg
coredump_dir /var/spool/squid
cache_peer hindenberg parent 3128 3129
acl PEERS src 192.168.1.1
cache_peer_access hindenberg allow !PEERS
sslproxy_cert_error allow lan
sslproxy_flags DONT_VERIFY_PEER
cache_effective_user squid
cache_effective_group squid
cache_mem 2048 MB
memory_replacement_policy lru
cache_replacement_policy heap LFUDA
cache deny nocache
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
err_html_text Blocked !!
dns_nameservers 127.0.0.1
url_rewrite_children 30
url_rewrite_concurrency 0
httpd_suppress_version_string on

On 08-04-2014 15:05, babajaga wrote:
> Pls post squid.conf, without comments. And which URL exactly results in the forwarding loop?

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Forwarding-loop-detected-for-tp4665487p4665491.html
Sent from the Squid - Users mailing list archive at Nabble.com.

---
avast! Antivirus: Inbound message clean.
Virus Database (VPS): 140407-0, 07-04-2014
Tested on: 08-04-2014 15:14:59
avast! - copyright (c) 1988-2014 AVAST Software.
http://www.avast.com