Re: [squid-users] Blocking specific url
Thanks for the reply, everyone. I tried to implement this in my squid.conf, but 1) squid fails to restart, or 2) if it starts, no webpage will load. I even tried pasting only the akamaihd\.net\/battlelog\/background-videos\/ line into my "adservers" file, but no dice.

Here is my (working) squid.conf without the acl:

http_port 192.168.0.1:3128 transparent
#Block
acl ads dstdom_regex -i /etc/squid3/adservers
http_access deny ads
acl LAN src 192.168.0.0/24
http_access allow LAN
http_access deny all
maximum_object_size 100 MB
cache_dir ufs /var/spool/squid3 5000 16 256

And here is the top of my /etc/squid3/adservers file:

akamaihd\.net\/battlelog\/background-videos\/ — Not working.
rd.samsungadhub.com
ad.samsungadhub.com
http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm$
(^|\.)serving-sys.com
tracking.xwebhub.net

On 10 Jul 2014, at 16:29, Alexandre arekkusu@gmail.com wrote:

> My bad. I need to check squid ACLs in more detail. I guess squidguard's main advantage is speed when dealing with large lists of URLs, then.
>
> Alexandre
>
> On 10/07/14 14:31, Leonardo Rodrigues wrote:
>> Em 10/07/14 09:04, Alexandre escreveu:
>>> Concerning blocking the specific URL: someone correct me if I am wrong, but I don't believe you can do this with only squid. The squid ACL system can apparently block per domain: http://wiki.squid-cache.org/SquidFaq/SquidAcl
>>
>> Of course you can block specific URLs using only squid ACL options!
>>
>>   # acl aclname url_regex [-i] ^http:// ...      # regex matching on whole URL
>>   # acl aclname urlpath_regex [-i] \.gif$ ...    # regex matching on URL path
>>
>> If the URL is:
>>
>>   http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm
>>
>> then something like:
>>
>>   acl blockedurl url_regex -i akamaihd\.net\/battlelog\/background-videos\/
>>   http_access deny blockedurl
>>
>> should do it! And I did not even include the filename which, I imagine, can change between different stages.
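[Editorial aside: a quick way to see why the entry fails in the adservers file but works as a url_regex. A dstdom_regex ACL applies the pattern only to the hostname, never to the path, so a path-bearing pattern can never match there. A minimal Python sketch, using the URL and pattern from the thread:]

```python
import re

# Pattern from the thread (squid-style escaping works unchanged in Python).
pattern = re.compile(r"akamaihd\.net\/battlelog\/background-videos\/", re.I)

url = "http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm"
host = "eaassets-a.akamaihd.net"  # all that dstdom_regex ever sees

print(bool(pattern.search(host)))  # False: the hostname contains no path
print(bool(pattern.search(url)))   # True: url_regex sees the whole URL
```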
Re: [squid-users] Re: Blocking specific url
Having some issues replying to the thread; I thought I had pasted both already. Anyway, here goes.

Here is my (working) squid.conf without the acl:

http_port 192.168.0.1:3128 transparent
#Block
acl ads dstdom_regex -i /etc/squid3/adservers
http_access deny ads
acl LAN src 192.168.0.0/24
http_access allow LAN
http_access deny all
maximum_object_size 100 MB
cache_dir ufs /var/spool/squid3 5000 16 256

And here is the top of my /etc/squid3/adservers file:

akamaihd\.net\/battlelog\/background-videos\/ — Not working.
rd.samsungadhub.com
ad.samsungadhub.com
http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm$
(^|\.)serving-sys.com
tracking.xwebhub.net

On 11 Jul 2014, at 10:03, babajaga augustus_me...@yahoo.de wrote:

> Pls, publish your complete non-working squid.conf OR at least the part invoking your /etc/squid3/adservers
>
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Blocking-spesific-url-tp4666791p4666836.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Blocking specific url
Finally! :D

192.168.0.20 TCP_DENIED/403 3654 GET http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm - NONE/- text/html

Thanks everyone! :)

On 11 Jul 2014, at 10:47, Amos Jeffries squ...@treenet.co.nz wrote:

> On 11/07/2014 7:50 p.m., Andreas Westvik wrote:
>> Thanks for the reply, everyone. I tried to implement this in my squid.conf, but 1) squid fails to restart, or 2) if it starts, no webpage will load. I even tried pasting only the akamaihd\.net\/battlelog\/background-videos\/ line into my "adservers" file, but no dice.
>>
>> Here is my (working) squid.conf without the acl:
>>
>> http_port 192.168.0.1:3128 transparent
>> #Block
>> acl ads dstdom_regex -i /etc/squid3/adservers
>> http_access deny ads
>
> Insert *right here* ...
>
>   acl block url_regex -i akamaihd\.net\/battlelog\/background-videos\/
>   http_access deny block
>
>> acl LAN src 192.168.0.0/24
>> http_access allow LAN
>> http_access deny all
>> maximum_object_size 100 MB
>> cache_dir ufs /var/spool/squid3 5000 16 256
>>
>> And here is the top of my /etc/squid3/adservers file:
>>
>> akamaihd\.net\/battlelog\/background-videos\/ — Not working.
>> rd.samsungadhub.com
>> ad.samsungadhub.com
>> http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm$
>> (^|\.)serving-sys.com
>> tracking.xwebhub.net
>
> Some of these are not domain names. The dstdom_regex ACL type which uses the contents of this file matches against *only* the domain/hostname section of URLs.
>
> PS. Most of those entries are better matched using a dstdomain type ACL. Even the serving-sys.com entry is equivalent to .serving-sys.com in dstdomain format.
>
> Amos
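[Editorial aside: Amos's equivalence point can be sketched mechanically. Entries of the shape (^|\.)example\.com$ are plain suffix matches, which dstdomain writes as .example.com. The converter below is a hypothetical helper, not part of squid, and handles only this one common shape:]

```python
import re

def regex_to_dstdomain(entry: str) -> str:
    """Convert a host regex of the form (^|\\.)example\\.com$ to
    squid dstdomain syntax (.example.com). Hypothetical helper;
    raises on any other shape."""
    m = re.fullmatch(r"\(\^\|\\\.\)(.+?)\$?", entry)
    if not m:
        raise ValueError(f"unrecognized entry: {entry}")
    # Un-escape the regex dots to recover the plain domain name.
    return "." + m.group(1).replace(r"\.", ".")

print(regex_to_dstdomain(r"(^|\.)zedo\.com$"))        # .zedo.com
print(regex_to_dstdomain(r"(^|\.)serving-sys.com"))   # .serving-sys.com
```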
[squid-users] Blocking specific url
So this is driving me crazy. Some of my users are playing Battlefield 4, and Battlefield has this server-browsing page with a webm background. Turns out this video downloads every few seconds, and that adds up to about 8 GB every day. Here is the URL:

http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

Now, I don't want to block http://eaassets-a.akamaihd.net/ since updates and such come from this CDN, and I don't want to block the .webm file type either. And I can't for the life of me figure out how to block this specific URL. Google gives me only what I don't want to do. Any pointers?

-Andreas
Re: [squid-users] blocking ads/sites not working anymore?
So what kind of format do I have now then? Do you have any examples?

-Andreas

On Mar 10, 2013, at 07:46, Amos Jeffries squ...@treenet.co.nz wrote:

> On 2013-03-10 01:54, Andreas Westvik wrote:
>> Hi everyone
>>
>> Over the time I have collected a lot of sites to block.
>> [snip]
>>
>> #Block
>> acl ads dstdom_regex -i /etc/squid3/adservers
>> http_access deny ads
>>
>> cat /etc/squid3/adservers | less
>>
>> (^|\.)yieldmanager\.edgesuite\.net$
>> (^|\.)yieldmanager\.net$
>> (^|\.)yoc\.mobi$
>> (^|\.)yoggrt\.com$
>> (^|\.)yourtracking\.net$
>> (^|\.)z\.times\.lv$
>> (^|\.)z5x\.net$
>> (^|\.)zangocash\.com$
>> (^|\.)zanox-affiliate\.de$
>> (^|\.)zanox\.com$
>> (^|\.)zantracker\.com$
>> (^|\.)zde-affinity\.edgecaching\.net$
>> (^|\.)zedo\.com$
>> (^|\.)zencudo\.co\.uk$
>> (^|\.)zenzuu\.com$
>> (^|\.)zeus\.developershed\.com$
>> (^|\.)zeusclicks\.com$
>> (^|\.)zintext\.com$
>> (^|\.)zmedia\.com$
>
> Besides the fix Amm already gave, you will find Squid runs a bit faster if you convert that listing to dstdomain format and use a dstdomain ACL to check it.
>
> Amos
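[Editorial aside: the speed advice comes down to data structures. A dstdom_regex list is scanned pattern by pattern on every request, while dstdomain entries can be checked with a handful of hash lookups on the domain's suffixes. A rough Python sketch of the lookup side; this is illustrative, not squid's actual implementation:]

```python
# dstdomain-style entries: leading dot means "this domain and subdomains".
blocked = {".zedo.com", ".zmedia.com", ".zanox.com"}

def dstdomain_match(host: str, table: set) -> bool:
    """Check the host and each parent-domain suffix against the table.
    Cost is O(number of labels), independent of list size."""
    labels = host.split(".")
    for i in range(len(labels) - 1):
        if "." + ".".join(labels[i:]) in table:
            return True
    return False

print(dstdomain_match("ad.zedo.com", blocked))  # True  (matches .zedo.com)
print(dstdomain_match("zedo.com", blocked))     # True
print(dstdomain_match("example.com", blocked))  # False
```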
[squid-users] blocking ads/sites not working anymore?
Hi everyone

Over the time I have collected a lot of sites to block: ads/malware/porn etc. This has been working like a charm; I have even created a custom error page for it. But since I don't know when, this has stopped working. And according to the googling I have done, my syntax in squid.conf is correct. So what can be wrong here?

This is my setup:

cat /etc/squid3/squid.conf

http_port 192.168.0.1:3128 transparent
acl LAN src 192.168.0.0/24
http_access allow LAN
http_access deny all
cache_dir ufs /var/spool/squid3 5000 16 256
#Block
acl ads dstdom_regex -i /etc/squid3/adservers
http_access deny ads

cat /etc/squid3/adservers | less

(^|\.)yieldmanager\.edgesuite\.net$
(^|\.)yieldmanager\.net$
(^|\.)yoc\.mobi$
(^|\.)yoggrt\.com$
(^|\.)yourtracking\.net$
(^|\.)z\.times\.lv$
(^|\.)z5x\.net$
(^|\.)zangocash\.com$
(^|\.)zanox-affiliate\.de$
(^|\.)zanox\.com$
(^|\.)zantracker\.com$
(^|\.)zde-affinity\.edgecaching\.net$
(^|\.)zedo\.com$
(^|\.)zencudo\.co\.uk$
(^|\.)zenzuu\.com$
(^|\.)zeus\.developershed\.com$
(^|\.)zeusclicks\.com$
(^|\.)zintext\.com$
(^|\.)zmedia\.com$

This is my /var/log/squid3/access.log when trying to access zmedia.com (currently blocked):

1362833540.822    607 192.168.0.20 TCP_MISS/301 631 GET http://zmedia.com/ - DIRECT/216.34.207.134 text/html
1362833541.459    236 192.168.0.20 TCP_MISS/200 7586 GET http://www.valueclickmedia.com/ - DIRECT/2.21.34.88 text/html
1362833541.570     95 192.168.0.20 TCP_MISS/200 2465 GET http://www.valueclickmedia.com/sites/all/modules/google_analytics/googleanalytics.js? - DIRECT/2.21.34.88 application/javascript

Ps. Running Squid 3.1.6 on Debian Squeeze.

configure options: '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,' '--enable-digest-auth-helpers=ldap,password' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' '--enable-arp-acl' '--enable-esi' '--disable-translation' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS=' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall -O2' --with-squid=/tmp/buildd/squid3-3.1.6
Re: [squid-users] blocking ads/sites not working anymore?
That did the trick! :D

http://bildr.no/view/1411006

On Mar 9, 2013, at 14:33, Amm ammdispose-sq...@yahoo.com wrote:

> ----- Original Message -----
> From: Andreas Westvik andr...@spbk.no
> To: squid-users@squid-cache.org
> Sent: Saturday, 9 March 2013 6:24 PM
> Subject: [squid-users] blocking ads/sites not working anymore?
>
>> Hi everyone
>>
>> Over the time I have collected a lot of sites to block: ads/malware/porn etc. This has been working like a charm; I have even created a custom error page for it. But since I don't know when, this has stopped working. And according to the googling I have done, my syntax in squid.conf is correct. So what can be wrong here?
>>
>> This is my setup:
>>
>> cat /etc/squid3/squid.conf
>>
>> http_port 192.168.0.1:3128 transparent
>> acl LAN src 192.168.0.0/24
>> http_access allow LAN
>> http_access deny all
>> cache_dir ufs /var/spool/squid3 5000 16 256
>> #Block
>> acl ads dstdom_regex -i /etc/squid3/adservers
>> http_access deny ads
>
> Don't know how it worked earlier, but you need to put
>
>   http_access deny ads
>
> before
>
>   http_access allow LAN
>
> Amm
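[Editorial aside: Amm's fix works because squid evaluates http_access rules top-down and stops at the first rule whose ACLs match. A toy Python model of that first-match behavior; this is not squid code, just the rule-ordering logic:]

```python
# Toy model of squid's first-match http_access evaluation.
# Each rule is (action, predicate); the first matching predicate decides.
def http_access(rules, request):
    for action, pred in rules:
        if pred(request):
            return action
    return "deny"  # simplification; squid defaults to the opposite of the last rule

is_lan = lambda r: r["src"].startswith("192.168.0.")
is_ad = lambda r: "zmedia.com" in r["host"]

req = {"src": "192.168.0.20", "host": "zmedia.com"}

broken = [("allow", is_lan), ("deny", is_ad)]  # allow LAN first: the ad slips through
fixed = [("deny", is_ad), ("allow", is_lan)]   # deny ads first: the ad is blocked

print(http_access(broken, req))  # allow
print(http_access(fixed, req))   # deny
```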
Re: [squid-users] Securing squid3
Oh, this was a lot of information! :D So here goes.

I'm only using Squeeze on the production server. On the testing server I'm running Wheezy, but not squid, only havp. And yeah, it seems a bit poor, but I was only testing this as a proof of concept. Or to satisfy my inner nerd. I'm not going to use this solution in the long run, and like you say there are more options as well, so now I'm going to check them out since I'm done with this.

Could a managed switch help me out here, instead of the crazy iptables/forwarding/redirecting on the server? Right now I'm researching a small HP ProCurve to manage these connections for me. Is this the normal route (no pun intended) to do this? I was thinking about setting up the switch directly in front of 192.168.0.1 and redirecting the traffic to 192.168.0.24 before it hits the server.

Thanks for the other links. I'm under the weather here, got the flu and a toothache signed by Satan himself, so I'm going to check them out when I'm fit for fight again. And about the CVE patches, I'm hoping the Debian team is on top of this. If you have other information - please keep it to yourself :D

-Andreas

On Feb 15, 2013, at 06:11, Amos Jeffries squ...@treenet.co.nz wrote:

> On 15/02/2013 10:18 a.m., Andreas Westvik wrote:
>> So I actually got it working!
>>
>> Client - gateway - havp - squid - internets
>>
>> I actually had blocked myself totally from squid3, so that was quite the head scratch. It turned out that http_access deny all has to be at the bottom of the config file. ;)
>
> :-)
>
> You started this thread with a question on how to make Squid secure. If you are using the Squeeze or Wheezy package you are not secure: the Squeeze package is missing patches for 3 CVE vulnerabilities, the Wheezy package is currently missing 1.
>
> Also, since you have a good handle on where the traffic is coming from, you can lock down the proxy listening port. I would suggest a small variant of the mangle table rule which can be found here:
> http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
>
> By adding a -s !192.168.* stanza to exclude your internal clients from the port block, you can give them service while halting all external access.
>
>> So then I pasted this into squid.conf:
>>
>>   cache_peer 192.168.0.24 parent 3127 0 no-query no-digest
>>
>> And then I reloaded and everything just worked. Now my second server running Debian Wheezy is a first-gen MacBook, so it is not a beast, but it works just fine. The log folder is mounted in RAM to use most of the speed. I made a little screencast of the thing working. Have a look: https://vimeo.com/59687536
>>
>> Thanks for the help everyone! :)
>>
>> On Feb 14, 2013, at 17:24, Andreas Westvik andr...@spbk.no wrote:
>>
>>> havp supports a parent setup, and as far as I have seen, it should be set up before squid. Now, I can always switch this around and move the squid3 setup to 192.168.0.24 and set up havp on 192.168.0.1, of course. But 192.168.0.1 is running Debian production, and Debian does not support havp on Squeeze. So I'm using Debian Wheezy for havp in the meanwhile, and it's not installed via apt.
>
> HAVP appears to be a neglected project. You may want to update the scanner to another AV (clamav with c-icap perhaps).
>
> NP: With ICAP you can plug almost any AV scanner system into Squid and have only the MISS traffic being scanned; pre-scanned HITs are still served out of cache at full speed. ICAP also supports streamed scanning from the latest AV systems, where the client gets delivery far faster.
>
> * Serving from cache without re-scanning is a controversial topic though. It is fast on the HITs, but permits any infections in cache to be delivered even after scanner signatures are updated.
>
>>> If squid caches infected files, the local clamav should take care of that anyway, since havp on the other server is using clamav as well.
>
> Try plugging clamav directly into Squid. c-icap works for most people (unless you are one of the lucky ones with trouble).
>
>>> I really don't think the iptables rules should be that difficult to set up, since I intercept the web traffic with this:
>>>
>>>   iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 3128
>>>
>>> So it's basically the same thing, but kinda like -j REDIRECT --to-destination 192.168.0.24:3127. But it's not working! grr!
>
> REDIRECT is a special case of the DNAT target which redirects to the host's main IP address. You cannot specify a destination IP with the REDIRECT target; you can with DNAT. The LinuxDnat wiki page I linked to above has all the details you need for this; the iptables rules are the same for any proxy which accepts NAT'd traffic.
>
> So:
> * When your box IP is dynamically assigned and not known in advance, use REDIRECT.
> * When your box is statically assigned, use DNAT to the IP Squid is listening on.
>
> Squid-3.2+ provide protection against the CVE-2009-0801 security vulnerability in NAT
[squid-users] Securing squid3
Hi everybody

I have been running squid3 on my Debian Squeeze box on and off for a few weeks now, and there are a few things I'm not sure of:

1. How can I be sure that I'm running it securely? I really only want squid3 to serve my local clients (192.168.0.0/32).

2. Can I bind squid3 to listen on only one device/IP?

3. Just for fun, I have set up havp on a different server. Is it possible to send my http traffic to that server first (havp runs on 192.168.0.24), then back to squid3? As of now, I need to configure my clients to connect to that havp server, and then havp will send traffic back to squid. But I would like this to happen with some automatic iptables commands. I have tried several iptables setups, but nothing will make this work. I cannot for the life of me intercept the port 80 traffic and then redirect it to 192.168.0.24:3127. Like this:

Client - Gw 192.168.0.1 - havp 192.168.0.24:3127 - squid3 192.168.0.1:3128 - internets

This is my setup:

http_port 3128 transparent
acl LAN src 192.168.0.0/32
acl localnet src 127.0.0.1/255.255.255.255
http_access allow LAN
http_access allow localnet
cache_dir ufs /var/spool/squid3 5000 16 256
#Block
acl ads dstdom_regex -i /etc/squid3/squid.adservers
http_access deny ads

eth3: 192.168.0.1 (non-dhcp environment)
eth4: wan official ip (non-dhcp)

-Andreas
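[Editorial aside: a side note on the LAN acl above. A /32 mask covers exactly one address, so 192.168.0.0/32 matches no real client; a /24 is very likely what was meant, though the thread never confirms it. Python's ipaddress module shows the difference:]

```python
import ipaddress

# 192.168.0.0/32 is a single address; /24 is the whole 256-address subnet.
net32 = ipaddress.ip_network("192.168.0.0/32")
net24 = ipaddress.ip_network("192.168.0.0/24")
client = ipaddress.ip_address("192.168.0.20")

print(net32.num_addresses)  # 1
print(net24.num_addresses)  # 256
print(client in net32)      # False: the acl would never match this client
print(client in net24)      # True
```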
Re: [squid-users] Securing squid3
Sorry, I have been replying directly to users' email. To clear things up, here is an image of the setup:

http://bildr.no/image/1389674.jpeg

havp is running on 192.168.0.24:3127
squid3 is running on 192.168.0.1:3128

-Andreas

On Feb 14, 2013, at 16:45, babajaga augustus_me...@yahoo.de wrote:

> I think, 2 corrections. Instead of
>
>   squid.conf: cache_peer localhost parent 8899 0 no-query no-digest
>
> use
>
>   squid.conf: cache_peer avp-host parent 8899 0 no-query no-digest
>   never_direct allow all
>
> Otherwise, uncachable requests will not go through the parent proxy, but direct, which will result in some files not being scanned by havp.
>
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658498.html
Re: [squid-users] Securing squid3
heh, try this one: http://bildr.no/view/1389674

On Feb 14, 2013, at 16:49, Andreas Westvik andr...@spbk.no wrote:

> Sorry, I have been replying directly to users' email. To clear things up, here is an image of the setup:
>
> http://bildr.no/image/1389674.jpeg
>
> havp is running on 192.168.0.24:3127
> squid3 is running on 192.168.0.1:3128
>
> -Andreas
>
> On Feb 14, 2013, at 16:45, babajaga augustus_me...@yahoo.de wrote:
>
>> I think, 2 corrections. Instead of
>>
>>   squid.conf: cache_peer localhost parent 8899 0 no-query no-digest
>>
>> use
>>
>>   squid.conf: cache_peer avp-host parent 8899 0 no-query no-digest
>>   never_direct allow all
>>
>> Otherwise, uncachable requests will not go through the parent proxy, but direct, which will result in some files not being scanned by havp.
>>
>> --
>> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658498.html
Re: [squid-users] Re: Securing squid3
havp supports a parent setup, and as far as I have seen, it should be set up before squid. Now, I can always switch this around and move the squid3 setup to 192.168.0.24 and set up havp on 192.168.0.1, of course. But 192.168.0.1 is running Debian production, and Debian does not support havp on Squeeze. So I'm using Debian Wheezy for havp in the meanwhile, and it's not installed via apt.

If squid caches infected files, the local clamav should take care of that anyway, since havp on the other server is using clamav as well.

I really don't think the iptables rules should be that difficult to set up, since I intercept the web traffic with this:

iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 3128

So it's basically the same thing, but kinda like -j REDIRECT --to-destination 192.168.0.24:3127. But it's not working! grr!

-Andreas

On Feb 14, 2013, at 17:12, babajaga augustus_me...@yahoo.de wrote:

> Then it's more a question of how to set up iptables, the clients and HAVP. However, why HAVP first? This has the danger of squid caching infected files. And HAVP will scan cached files over and over again.
>
> Better, squid will be an upstream proxy of HAVP. If HAVP supports parent proxies, then squid should have no problem. But this then needs either a proxy.pac for the clients' browsers or explicit proxy config for the clients' browsers. This would be the easier path. When this works, then think about using iptables with explicit routing of all packets to the HAVP box. And back, so you have to consider NAT. I am not fit enough in iptables, so I would keep it simple:
>
>   client-PC - squid - HAVP - web
>
> And the transparent setup for squid is well documented.
>
> PS: The graphic is a bit small :-)
>
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658501.html
Re: [squid-users] Securing squid3
So I actually got it working!

Client - gateway - havp - squid - internets

I actually had blocked myself totally from squid3, so that was quite the head scratch. It turned out that http_access deny all has to be at the bottom of the config file. ;)

So then I pasted this into squid.conf:

cache_peer 192.168.0.24 parent 3127 0 no-query no-digest

And then I reloaded and everything just worked. Now my second server running Debian Wheezy is a first-gen MacBook, so it is not a beast, but it works just fine. The log folder is mounted in RAM to use most of the speed. I made a little screencast of the thing working. Have a look: https://vimeo.com/59687536

Thanks for the help everyone! :)

On Feb 14, 2013, at 17:24, Andreas Westvik andr...@spbk.no wrote:

> havp supports a parent setup, and as far as I have seen, it should be set up before squid. Now, I can always switch this around and move the squid3 setup to 192.168.0.24 and set up havp on 192.168.0.1, of course. But 192.168.0.1 is running Debian production, and Debian does not support havp on Squeeze. So I'm using Debian Wheezy for havp in the meanwhile, and it's not installed via apt.
>
> If squid caches infected files, the local clamav should take care of that anyway, since havp on the other server is using clamav as well.
>
> I really don't think the iptables rules should be that difficult to set up, since I intercept the web traffic with this:
>
>   iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 3128
>
> So it's basically the same thing, but kinda like -j REDIRECT --to-destination 192.168.0.24:3127. But it's not working! grr!
>
> -Andreas
>
> On Feb 14, 2013, at 17:12, babajaga augustus_me...@yahoo.de wrote:
>
>> Then it's more a question of how to set up iptables, the clients and HAVP. However, why HAVP first? This has the danger of squid caching infected files. And HAVP will scan cached files over and over again.
>>
>> Better, squid will be an upstream proxy of HAVP. If HAVP supports parent proxies, then squid should have no problem. But this then needs either a proxy.pac for the clients' browsers or explicit proxy config for the clients' browsers. This would be the easier path. When this works, then think about using iptables with explicit routing of all packets to the HAVP box. And back, so you have to consider NAT. I am not fit enough in iptables, so I would keep it simple:
>>
>>   client-PC - squid - HAVP - web
>>
>> And the transparent setup for squid is well documented.
>>
>> PS: The graphic is a bit small :-)
>>
>> --
>> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658501.html
[squid-users] squid3 makes my users foreigners!
First post from a long-time squid user.

So during the Olympics my national broadcaster has these free HD streaming channels on some webpage (I'm Norwegian - nrk.no). Now, when I route my LAN traffic through squid3, nrk.no thinks I'm outside Europe, and thus denies me access to the streams. And as soon as I turn http routing off via squid3, everything works like a charm and nrk.no thinks I'm legit.

Now, I have a few webservers running on my host as well, so I turned on DNS lookups in apache2.conf to see what kind of host apache sees when I'm visiting my own server, and sure enough, that's hostname-ip.no-address.

So what am I doing wrong here?

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_mem 512 MB
maximum_object_size_in_memory 2 MB
cache_dir ufs /var/spool/squid3 2048 16 256
access_log /var/log/squid3/access.log squid
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
shutdown_lifetime 5 seconds
always_direct allow all
memory_pools on
memory_pools_limit 100 MB

iptables routing. Original (not working):

iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 3128

Some new ones I'm testing (not working either, still getting blocked):

iptables -t nat -A PREROUTING -i eth3 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.0.1:3128
iptables -t nat -A PREROUTING -i eth4 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

eth3 = LAN
eth4 = official Norwegian IP

So why does nrk.no think I'm a foreigner when I try to watch the Olympics via squid3?
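[Editorial aside: the archive contains no answer to this message. One common cause of this symptom, offered here as an assumption rather than a diagnosis from the list, is the X-Forwarded-For header that squid adds by default (forwarded_for on): it carries the client's private LAN address, and some geo-IP checks refuse requests whose forwarding chain they cannot place. A sketch of how a hypothetical server-side check might trip over it; the apparent_client function and the wan_ip value are illustrative inventions, with the public IP borrowed from an access.log line earlier in this archive:]

```python
import ipaddress

def apparent_client(xff_header: str, peer_ip: str) -> str:
    """Hypothetical server-side logic: trust the first X-Forwarded-For
    entry when present, otherwise use the TCP peer address."""
    if xff_header:
        return xff_header.split(",")[0].strip()
    return peer_ip

wan_ip = "216.34.207.134"  # stand-in public IP, not the poster's real address

# Direct connection: the broadcaster geolocates the WAN address.
print(apparent_client("", wan_ip))  # 216.34.207.134

# Via squid with the default forwarded_for on: the header exposes the
# LAN client, a private address no geo-IP database can place in Norway.
leaked = apparent_client("192.168.0.20", wan_ip)
print(leaked, ipaddress.ip_address(leaked).is_private)  # 192.168.0.20 True
```

If this were the cause, squid's standard `forwarded_for off` directive would stop the header from exposing the LAN address; whether that fixes nrk.no specifically is untested here.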