Re: [squid-users] squid authentication failing
On Mon, Aug 11, 2014 at 7:59 PM, Sarah Baker sba...@brightedge.com wrote:
> Background:
> Squid: squid-3.1.23-2.el6.x86_64
> OS: CentOS 6.5 - Linux 2.6.32-431.23.3.el6.x86_64 #1 SMP Thu Jul 31 17:20:51 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> Issue: I have two boxes, same OS, same squid binary, same config file, same squid-passwd file. Configuration is set up for ncsa_auth. Squid runs as user squid.
> Both systems return OK when ncsa_auth is run from the command line as the squid user with the login and password from the squid-passwd file.
> Using squid via curl through one of the proxy IPs/ports, however, one system gives 403 Forbidden while the other works just fine.
> I tried removing authentication entirely - a fully open squid. It still fails with the same message.

A 403 Forbidden means the authenticator doesn't even get a chance to kick in; it's a final deny. Are you really sure the 403 is generated by Squid, and not by the origin server? You can tell by looking at the error page.

> Also looked at thus far:
> rpm -q query_options --requires squid-3.1.23-2.el6.x86_64 is the same on both boxes.
> Ran yum update on both to ensure everything was up to date - no change.

The issue is either not in Squid, or it is related to the http_access configuration. Would you mind sharing the excerpt of your squid.conf that includes that part?

> Any ideas on what I should look for?

-- Francesco
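[For readers finding this thread later: a typical ncsa_auth setup in squid.conf looks roughly like the sketch below. This is not from the original poster's config; the helper path and passwd-file path are assumptions matching common CentOS 6 defaults, and the http_access ordering is the usual pattern, not necessarily theirs.]

```conf
# Sketch of a common ncsa_auth (basic auth) configuration.
# Helper and passwd paths are assumed CentOS defaults - adjust to your system.
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid-passwd
auth_param basic children 5
auth_param basic realm Squid proxy

# Require a successful login before anything else is allowed.
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```

Note that a deny rule placed before the proxy_auth ACL is ever evaluated will produce a 403 without the authenticator being consulted at all, which matches the symptom described above.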
Re: [squid-users] Re: ONLY Cache certain Websites.
On 12/08/2014 7:57 a.m., nuhll wrote:
> Thanks for your help. But I'm going crazy. =) The Internet is slow as fuck, I don't see any errors in the logs, and some services (Battle.net) are not working.
>
> /etc/squid3/squid.conf:
>
> debug_options ALL,1 33,2
>
> acl domains_cache dstdomain /etc/squid/lists/domains_cache
> cache allow domains_cache
> acl localnet src 192.168.0.0
> acl all src all
> acl localhost src 127.0.0.1
> cache deny all
>
> #access_log daemon:/var/log/squid/access.test.log squid
>
> http_port 192.168.0.1:3128 transparent
> cache_dir ufs /daten/squid 10 16 256
> range_offset_limit 100 MB windowsupdate
> maximum_object_size 6000 MB
> quick_abort_min -1
>
> # Add one of these lines for each of the websites you want to cache.
> refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
> refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
> refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
> #kaspersky update
> refresh_pattern -i geo.kaspersky.com/.*\.(cab|dif|pack|q6v|2fv|49j|tvi|ez5|1nj|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
> #nvidia updates
> refresh_pattern -i download.nvidia.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
> #java updates
> refresh_pattern -i sdlc-esd.sun.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000 reload-into-ims
>
> # DONT MODIFY THESE LINES
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>
> #kaspersky update
> acl kaspersky dstdomain geo.kaspersky.com
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> acl CONNECT method CONNECT
> acl wuCONNECT dstdomain www.update.microsoft.com
> acl wuCONNECT dstdomain sls.microsoft.com
> http_access allow kaspersky localnet
> http_access allow CONNECT wuCONNECT localnet
> http_access allow windowsupdate localnet
> #test
> http_access allow localnet
> http_access allow all
> http_access allow localhost
>
> /etc/squid/lists/domains_cache:
>
> microsoft.com
> windowsupdate.com
> windows.com
> #nvidia updates
> download.nvidia.com
> #java updates
> sdlc-esd.sun.com
> #kaspersky
> geo.kaspersky.com
>
> /var/log/squid3/access.log:
>
> 1407786051.567 17909 192.168.0.125 TCP_MISS/000 0 GET http://dist.blizzard.com.edgesuite.net/hs-pod/beta/EU/4944.direct/base-Win-deDE.MPQ - DIRECT/dist.blizzard.com.edgesuite.net -
> 1407786051.567 17909 192.168.0.125 TCP_MISS/000 0 GET http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/base-Win.MPQ - DIRECT/llnw.blizzard.com -

The blizzard.com servers did not produce a response for these requests. Squid waited almost 18 seconds and nothing came back. TCP window scaling, ECN, Path-MTU discovery, and ICMP blocking are things to look for here; any one of them could stop the connection from transmitting or receiving properly.

The rest of the log shows working traffic, even for battle.net. I suspect battle.net uses non-80 ports, right? I doubt those are being intercepted in your setup.

> /var/log/squid3/cache.log
>
> 2014/08/11 21:51:29| Squid Cache (Version 3.1.20): Exiting normally.
> 2014/08/11 21:53:04| Starting Squid Cache version 3.1.20 for x86_64-pc-linux-gnu...

Hmm. Which version of Debian (or derived OS) are you using? And can you update it to the latest stable? The squid3 package has been at 3.3.8 for most of a year now.

> 2014/08/11 21:53:04| Process ID 32739
> 2014/08/11 21:53:04| With 65535 file descriptors available
> 2014/08/11 21:53:04| Initializing IP Cache...
> 2014/08/11 21:53:04| DNS Socket created at [::], FD 7
> 2014/08/11 21:53:04| DNS Socket created at 0.0.0.0, FD 8
> 2014/08/11 21:53:04| Adding nameserver 8.8.8.8 from squid.conf
> 2014/08/11 21:53:04| Adding nameserver 8.8.4.4 from squid.conf
> 2014/08/11 21:53:05| Unlinkd pipe opened on FD 13
> 2014/08/11 21:53:05| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
> 2014/08/11 21:53:05| Store logging disabled
> 2014/08/11 21:53:05| Swap maxSize 10240 + 262144 KB, estimated 7897088 objects
> 2014/08/11 21:53:05| Target number of buckets: 394854
> 2014/08/11 21:53:05| Using 524288 Store buckets
> 2014/08/11 21:53:05|
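[An observation on the quoted configuration, not part of the original reply: the cache_dir size parameter is in megabytes, so `cache_dir ufs /daten/squid 10 16 256` allows only 10 MB of disk cache - the cache.log line "Swap maxSize 10240 ... KB" confirms it - while maximum_object_size is set to 6000 MB. Nothing large can actually be cached. A sketch of a more consistent sizing, with the 100 GB figure an arbitrary example:]

```conf
# cache_dir <type> <path> <size-in-MB> <L1-dirs> <L2-dirs>
# Size the disk cache well above the largest object you intend to store.
cache_dir ufs /daten/squid 100000 16 256   # ~100 GB; adjust to your disk
maximum_object_size 6000 MB
```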
[squid-users] Request Entity Too Large Error in Squid Reverse Proxy
I'm having a problem that started right after I implemented a squid reverse proxy. I have a couple of applications on one of the Apache servers behind the reverse proxy. Every time someone tries to upload a relatively large file to an application (7 MB, 30 MB), they get the following error:

Request Entity Too Large

If I try to perform the same operation without going through the squid reverse proxy, the uploads work with no problems.

I'm using Squid 3.1.20 (https://github.com/pfsense/pfsense-packages/commits/master/config/31) on pfSense. I tried posting this issue on the pfSense support forums and have gotten zero replies, so I'm trying the squid mailing list. The situation has become a big problem, so I would appreciate some help with this.

A few parameters I've adjusted to various values with no success:

- Minimum object size
- Maximum object size
- Memory cache size
- Maximum download size
- Maximum upload size

Thanks a lot
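[Not from the thread, but for context: in stock Squid the "413 Request Entity Too Large" response is governed by the request_body_max_size directive, where 0 (the default) means no limit. Whether the pfSense GUI's "Maximum upload size" field actually maps to this directive is an assumption worth verifying by inspecting the generated squid.conf.]

```conf
# Squid rejects an upload with "413 Request Entity Too Large" when the
# request body exceeds this limit; 0 disables the check entirely.
request_body_max_size 0 KB
```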
[squid-users] Re: ONLY Cache certain Websites.
Hello, thanks for your help. I fixed the slowness myself: I forgot to add nameservers, so Squid was using the local DNS, which of course returns some bogus IPs. I added the dns_nameservers directive and it is fast again now.

root@debian-server:~# cat /proc/version
Linux version 3.2.0-4-amd64 (debian-ker...@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.60-1+deb7u3

If I look at http://wiki.squid-cache.org/SquidFaq/BinaryPackages#Debian, 3.1 is the newest? Am I wrong?

You tell me that Squid can't connect to some servers. How? It is just connected to a normal fritz.box, nothing special, nothing that could block - or am I missing something?

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667191.html
Sent from the Squid - Users mailing list archive at Nabble.com.
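[For reference, the fix described above corresponds to the dns_nameservers directive; the addresses below match the "Adding nameserver ... from squid.conf" lines in the cache.log quoted earlier in this thread.]

```conf
# Query public resolvers directly instead of the router's DNS forwarder.
dns_nameservers 8.8.8.8 8.8.4.4
```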
Re: [squid-users] squid authentication failing
On 2014-08-11 18:59, Sarah Baker wrote:
> Background:
> Squid: squid-3.1.23-2.el6.x86_64
> OS: CentOS 6.5 - Linux 2.6.32-431.23.3.el6.x86_64 #1 SMP Thu Jul 31 17:20:51 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> Issue: I have two boxes, same OS, same squid binary, same config file, same squid-passwd file. Configuration is set up for ncsa_auth. Squid runs as user squid.
> Both systems return OK when ncsa_auth is run from the command line as the squid user with the login and password from the squid-passwd file.
> Using squid via curl through one of the proxy IPs/ports, however, one system gives 403 Forbidden while the other works just fine.
> I tried removing authentication entirely - a fully open squid. It still fails with the same message.
>
> Also looked at thus far:
> rpm -q query_options --requires squid-3.1.23-2.el6.x86_64 is the same on both boxes.
> Ran yum update on both to ensure everything was up to date - no change.
>
> Any ideas on what I should look for?
>
> - S. Baker
> Manager of Technical Operations, BrightEdge

Maybe SELinux/AppArmor or a similar application is blocking some context of Squid and therefore throwing a 403 code?
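[A quick way to check that suggestion - this sketch is mine, not from the thread. The tool names (getenforce, aa-status) are the standard ones, but neither may be installed on a given box.]

```shell
#!/bin/sh
# Sketch: report which mandatory-access-control system, if any, is active,
# since SELinux or AppArmor can make Squid return unexpected 403s.
mac="none"
if command -v getenforce >/dev/null 2>&1; then
    # SELinux present: report Enforcing/Permissive/Disabled
    mac="SELinux ($(getenforce))"
elif command -v aa-status >/dev/null 2>&1 && aa-status --enabled 2>/dev/null; then
    mac="AppArmor"
fi
echo "Active MAC system: $mac"
```

If this reports SELinux in Enforcing mode, look for AVC denials mentioning squid in /var/log/audit/audit.log, or temporarily run `setenforce 0` on the failing box to confirm (and revert afterwards).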
[squid-users] HTTP/HTTPS transparent proxy doesn't work
Hello, I am having trouble with my squid setup. Here is exactly what I am trying to do: I am setting up a VPN server and I want all VPN traffic to be transparently proxied by squid with SSL bumping enabled. Right now when I try to do this I get an access denied page on the client.

Here are the relevant lines from my squid.conf:
=
acl localnet src 192.168.1.0/24 # local network
acl localnet src 192.168.3.0/24 # vpn network
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 192.168.1.145:3127 intercept
http_port 192.168.1.145:3128 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB key=/etc/squid3/ssl/private.pem cert=/etc/squid3/ssl/public.pem
always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 5
=

Here are my iptables rules:
=
sysctl -w net.ipv4.ip_forward=1
iptables -F
iptables -t nat -F
# transparent proxy for vpn
iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j DNAT --to-destination 192.168.1.145:3127
iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 443 -j DNAT --to-destination 192.168.1.145:3128
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables --table nat --append POSTROUTING --out-interface ppp+ -j MASQUERADE
iptables -I INPUT -s 192.168.3.0/24 -i ppp+ -j ACCEPT
iptables --append FORWARD --in-interface eth0 -j ACCEPT
=

When I connect to the VPN and try to browse the web I get the following errors in /etc/squid3/cache.log on the VPN server:

2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.
2014/08/12 21:21:02 kid1| WARNING: Forwarding loop detected for:
GET /Artwork/SN.png HTTP/1.1
Host: www.squid-cache.org
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://www.google.com/
Via: 1.1 localhost (squid/3.2.11)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive
2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.

I am wondering about this error: "No forward-proxy ports configured." What do I need to change in my squid.conf to allow transparent proxying? Thanks in advance.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/HTTP-HTTPS-transparent-proxy-doesn-t-work-tp4667193.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] HTTP/HTTPS transparent proxy doesn't work
On 13/08/2014 4:33 p.m., agent_js03 wrote:
> Hello, I am having trouble with my squid setup. Here is exactly what I am
> trying to do: I am setting up a VPN server and I want all VPN traffic to be
> transparently proxied by squid with SSL bumping enabled. Right now when I
> try to do this I get an access denied page on the client.
>
> [squid.conf excerpt and iptables rules snipped - see previous message]
>
> When I connect to the VPN and try to browse the web I get the following
> errors in cache.log on the VPN server:
>
> 2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.
> 2014/08/12 21:21:02 kid1| WARNING: Forwarding loop detected for:
> GET /Artwork/SN.png HTTP/1.1
> Host: www.squid-cache.org
> [remaining request headers snipped]
> 2014/08/12 21:21:02 kid1| ERROR: No forward-proxy ports configured.
>
> I am wondering about this error: "No forward-proxy ports configured."
> What do I need to change in my squid.conf to allow transparent proxying?

1) ERROR: No forward-proxy ports configured.

This is getting to be a FAQ. I've added a wiki page about it:
http://wiki.squid-cache.org/KnowledgeBase/NoForwardProxyPorts

2) WARNING: Forwarding loop detected for:

This is a side effect of the above problem: a forwarding loop while fetching the error-page artwork directly through an intercept port.

Amos
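[A minimal sketch of the usual fix for that error, per the wiki page referenced above: keep the intercept ports, and add at least one plain forward-proxy http_port alongside them. The port number 3129 below is an arbitrary choice, and clients never need to be pointed at it.]

```conf
# One plain forward-proxy port satisfies Squid's requirement and lets it
# serve error pages and internal objects (e.g. /Artwork/SN.png) without looping.
http_port 3129
# The interception ports stay as they were:
http_port 192.168.1.145:3127 intercept
http_port 192.168.1.145:3128 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB key=/etc/squid3/ssl/private.pem cert=/etc/squid3/ssl/public.pem
```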