Re: [squid-users] Proxy traversal query
On 2014-07-03 17:17, Vinay C wrote:
Thank you so much Eliezer for the quick response. I am happy to see such a detailed response here, which I could not get in any other forum. Please find my replies and a few queries inline.

On Thu, Jul 3, 2014 at 12:16 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
Hey Vinay, answers are inside the email:

On 07/02/2014 08:15 PM, Vinay C wrote:
Hi, I am looking for the answer to a basic query. I have posted it in different forums but did not get any satisfactory answers; I hope in this group of experts I can get the answer.

We can try to help you.

Context: I have a program (a sort of HTTP client) that internally uses Apache HttpClient. Given a set of parameters like auth scheme, proxy server and other details, it can traverse a Squid proxy and establish a connection to a given web server.

What sort of authentication can it test? (Basic, NTLM, Kerberos)

Vinay: It can test Basic, Digest, NTLM and Kerberos. I want to ensure that my client works not just through Squid but through any other enterprise-level proxy in the world. I am not in the IT domain but a QE engineer, and I want to ensure I have the best possible coverage for my client. I agree that Squid is one of the best available proxy servers, but my job is to ensure that my client works with other proxies too.

Query 1: I want to ensure my program works with most enterprise proxy servers. Given that it can establish a connection via Squid, is it safe to assume that it is going to work with all proxy servers like Microsoft TMG, Bluecoat etc.?

Depends on what the options to authenticate are and on the proxy configuration. Some use Basic auth, others NTLM (which should not be used, for many reasons) or Kerberos; there are other options. See below about the RFCs.

Query 2: In case I should test my program against different proxy servers, which proxy servers would you suggest for the best coverage of the enterprise world?

Whatever fits for you!
If you can test all of them with Squid in a convenient way, use Squid. If you feel that Squid sweeps you off your feet, then use another one that you feel easy and happy with.

Vinay: I tested that the client can establish the connection through Squid, but before testing with the rest of the proxies in the world I want to know whether it even makes sense to do this exercise. Can I assume that if my client can establish the connection through Squid, it will be capable of establishing a connection through any other proxy in the world?

It does not matter. All proxies work to a set of RFC standards. The general operation is defined in https://tools.ietf.org/html/rfc7235 with each specific authentication scheme being defined in the RFC standards referenced from http://www.iana.org/assignments/http-authschemes/http-authschemes.xhtml

If your software meets the behaviour specified in those RFCs, then any HTTP proxy will be able to authenticate it using one or more of the schemes.

Squid is a good testing ground for Basic, Digest (only a few bugs remaining), and Negotiate. We also have a recently created Bearer module, if anyone wants to sponsor its merging into public releases. I'm not aware of any HTTP proxy supporting the OAuth scheme yet - it is superseded by Bearer now, so that may never happen.

NOTE that the NTLM scheme, still found on many enterprise networks some 12 years after its deprecation, has never been formally standardised. By the time that happened it was called Negotiate. If you want to support NTLM you will have to look up the proprietary specification(s) from Microsoft for the 7 or so protocols which use that scheme label - although only NTLMv2 is anywhere near safe to use today. I recommend skipping this one, but you may need to do it for those earlier-mentioned enterprise networks.

Amos
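As an aside for readers testing a client against those RFCs, here is a minimal sketch (not from the thread) of what an RFC 7617-compliant Basic scheme credential looks like on the wire; the username and password are made up for illustration:

```python
import base64

def basic_proxy_auth_header(user, password):
    """Build a Proxy-Authorization value for the Basic scheme (RFC 7617).

    The credentials are 'user:password' base64-encoded; a client sends
    this header in response to a 407 reply carrying
    'Proxy-Authenticate: Basic realm=...'.
    """
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Hypothetical credentials, for illustration only:
print(basic_proxy_auth_header("vinay", "secret"))  # Basic dmluYXk6c2VjcmV0
```

The other schemes (Digest, Negotiate) are challenge-response and need state from the 407 reply, which is why a simple static header like this only covers Basic.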
Re: [squid-users] How can I make squid redirect HTTP traffic using access list and L3 switch?
On 2014-07-03 01:57, Mark jensen wrote:
Hello. I want to configure Squid as a transparent proxy using an L3 Cisco 3550 switch (without using WCCP), so I followed this tutorial: http://www.cisco.com/c/en/us/support/docs/ip/ip-routed-protocols/47900-cat3550pbr.html

As the picture in the tutorial shows, the goal is to redirect all workstation (20.20.20.0) traffic to Squid at 30.30.30.2 (I have used a PC with Squid instead of the router shown in the picture), and I set up 10.10.10.2 as a web server instead of the other router too. The redirection has worked well. My question is: how can I make Squid redirect the HTTP traffic from the workstations to the web server transparently, and return the page from the web server to the workstations too?

You seem to be asking how to set up an MITM proxy. Please read http://wiki.squid-cache.org/SquidFaq/InterceptionProxy carefully to understand what you are doing before going further. Once you understand it, we have many how-to examples in the wiki (http://wiki.squid-cache.org/ConfigExamples#Interception).

At its simplest, all you have to do is add an http_port directive with the intercept mode flag and set up NAT *on the Squid machine* to send the packets there. The TCP protocol naturally handles the upstream webserver parts without any configuration needed.

Amos
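[Editor's sketch of the two pieces Amos describes - the port number 3129 and the client-facing interface eth0 are illustrative assumptions, not values from the thread:]

```
# squid.conf: open a port in interception mode
http_port 3129 intercept

# On the Squid machine itself (not on the switch): NAT the port-80
# packets arriving from the clients into that port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129
```

With this in place the switch only needs to policy-route port-80 traffic to the Squid box; Squid then opens its own outbound connection to the origin server, so no extra "return path" configuration is needed.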
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
On 2014-07-03 06:16, Eliezer Croitoru wrote:
Hey Amos,

I was thinking about something in the past and I will try my best to explain what can be done. Basically, from what I understand, even a read by Squid may not be possible due to SELinux. Given that, a simple open-for-read test on the certificates, or indeed on any other settings-related files, could help to identify issues. What do you think about a basic read test (and maybe a stat on the file, for debugging) for all the main files? Compared to Squid's load this would be a piece of cake. The certificate is a specific case, since OpenSSL doesn't provide much detail. A pointer to where the certificate read happens would be helpful.

The cache_cf.cc function DoConfigure is the best place to start for that check currently. It contains some for-loops initializing each http_port and https_port entry's SSL contexts. You may put the test directly in those loops, or inside the SSL context setup function they call.

Amos
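[Editor's note: the stat-plus-open pre-flight check Eliezer proposes could look roughly like the following. This is a language-neutral sketch in Python of the idea, not actual Squid code - the real check would live in the C++ DoConfigure loops mentioned above.]

```python
import os
import sys

def preflight_readable(path):
    """Sanity-check a configured file before the real consumer touches it.

    stat() provides useful debug output (size, mode), and an explicit
    open-for-read catches permission problems (e.g. SELinux denials)
    with a clearer message than OpenSSL would produce later.
    """
    try:
        st = os.stat(path)
        print(f"checking {path}: size={st.st_size} mode={oct(st.st_mode)}",
              file=sys.stderr)
        with open(path, "rb"):
            pass
        return True
    except OSError as e:
        print(f"FATAL: cannot read {path}: {e}", file=sys.stderr)
        return False
```

Run against every file named in the configuration (certificates, key files, helper programs) at startup, this turns a vague "No valid signing SSL certificate" failure into an explicit per-file error.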
[squid-users] Re: Hotmail issue in squid 3.4.4
Hi Eliezer,

OS is CentOS 5.5.

uname -a:
Linux username 2.6.18-194.el5PAE #1 SMP Fri Apr 2 15:37:44 EDT 2010 i686 i686 i386 GNU/Linux

getenforce: Disabled

ls -la /etc/squid3/ssl_cert/
total 20
drwxr-xr-x 3 root root 4096 Jun 10 14:33 .
drwxr-xr-x 3 root root 4096 Jun 10 14:32 ..
-rw-r--r-- 1 root root  848 Jun 10 14:33 myCA.der
-rw-r--r-- 1 root root 2091 Jun 10 14:32 myCA.pem
drwxr-xr-x 2 root root 4096 Jun 10 14:32 ssl_db

Regards,
krish

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Hotmail-issue-in-squid-3-4-4-tp4666020p409.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: what is cached
It depends. Facebook now uses HTTPS, which can not be cached. The same is valid for other sites using HTTPS. In other words, only HTTP can be cached.

"and game sites where chat is available" - So Facebook no (because of HTTPS); game sites maybe, in case they use HTTP for chat.

"Are facebook posts cached or just images?" - Neither, as HTTPS is used.

"Is the chat on chat sites cached in any format that can be available?" - In the case of HTTP, maybe. Not all HTTP can be cached, either.

"Is gaming chat cached and accessible if needed?" - It MIGHT be cached in the case of HTTP, which is unlikely. Usually games use TeamSpeak, for example, which can not be cached.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/what-is-cached-tp410p411.html
[squid-users] problem with google captcha
Hello. I am using Squid 3.3.8 transparently on openSUSE 13.1. Here is the configuration:

visible_hostname koreamotors.com.ua

acl localnet src 192.168.0.0/24 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

# custom Global-it's ACLs
acl AdminsIP src /etc/squid/AccessLists/AdminsIP.txt
acl RestrictedDomains dstdomain /etc/squid/AccessLists/RestrictedDomains.txt
acl ad_group_rassh urlpath_regex -i /etc/squid/AccessLists/rasshirenie.txt

http_access allow localhost
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# custom Global-it's settings
http_access allow AdminsIP
http_access deny RestrictedDomains
http_access deny ad_group_rassh
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all

http_port 192.168.0.97:3128
http_port 192.168.0.97:3129 intercept

cache deny all

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

As soon as Squid intercepts the traffic, Google immediately starts requesting a captcha. What should I do to solve this problem?

Dmitry
[squid-users] Problem with HTTP redirection and IPTABLES?
I have followed this tutorial to redirect HTTP traffic to Squid listening on 8080: http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource

My questions are:

1 - When I try to run this command:
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination SQUIDIP:8080
an error is returned: unknown option --to-destination (iptables version 1.4.7).

2 - I'm using Squid 3.1.10. Which option should I choose:
http_port 8080 transparent
OR
http_port 8080 intercept

mark
Re: [squid-users] Problem with HTTP redirection and IPTABLES?
1 - Your iptables is missing the DNAT target; you may try using the REDIRECT target instead.

2 - In Squid 3.1+ the 'transparent' option has been split. Use 'intercept' to catch DNAT'd packets.

2014-07-03 11:25 GMT-03:00 Mark jensen ngiw2...@hotmail.com:
I have followed this tutorial to redirect HTTP traffic to Squid listening on 8080: http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource
1 - When I try "iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination SQUIDIP:8080" an error is returned: unknown option --to-destination (iptables version 1.4.7).
2 - I'm using Squid 3.1.10. Which option should I choose: "http_port 8080 transparent" OR "http_port 8080 intercept"?
mark
RE: [squid-users] Problem with HTTP redirection and IPTABLES?
I have followed these tutorials to configure a transparent proxy:

on the Cisco L3 switch: http://wiki.squid-cache.org/ConfigExamples/Intercept/Cisco2501PolicyRoute
on the Squid machine: http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute

Is it necessary to add any configuration to Squid, like:
http_port 8080 intercept
or is following those tutorials enough for me?

NOTE: I have the client, Squid and the web server on different machines.

Mark
[squid-users] Question to cache_peer
Hi there,
I've configured my Squid as follows:

cache_peer xxx.xxx.xxx.xxx parent 3128 0 no-query no-digest

But a lot of traffic doesn't pass through the parent proxy. Why? How can I configure the child proxy so that ALL traffic must pass through the parent proxy?

Mit freundlichen Grüßen / Kind regards
Mr. Andreas Reschke
andreas.resc...@mahle.com, http://www.mahle.com
Re: [squid-users] TProxy Setup
Thank you Amos and Eliezer for your responses!

Amos, we have enabled debug_options 11,2 but that did not show any HTTP request being received by Squid, not even after making the changes that Eliezer suggested. But the requests did show up when we reverted back to the http_port 3127 intercept related configuration. More details below.

Eliezer, we tried with "ip route add local default dev lo table 100", but still the same problem. I think the wiki page http://wiki.squid-cache.org/Features/Tproxy4 needs to be updated, as it clearly says "dev eth0" and not "dev lo".

Our setup needs a bit of explanation. Please bear with me while I describe it below.

For traffic from the host:
Host (eth0 A.B.170.10/26) -- (eth2 A.B.170.1/26) Rtr1 (eth2 A.B.170.1/26) -- (eth0 A.B.170.24/26) SquidBox (eth1 A.B.169.21/28) -- (eth2 A.B.169.17/28) Rtr2 (eth1 BGP peered uplink) -- Internet

For traffic from the Internet:
Internet -- (eth1 BGP peered uplink) Rtr2 (eth2 A.B.169.17/28) -- (eth1 A.B.169.21/28) SquidBox (eth0 A.B.170.24/28) -- (eth0 A.B.170.10/26) Host

* In my understanding, the return traffic should not pass through Rtr1, as SquidBox eth0 is in the same subnet as the Host.

Both Rtr1 and Rtr2 are Linux-based routers called Mikrotik, installed on x86 hardware.
Rtr1 has the following rules:

/ip firewall mangle add action=mark-routing chain=prerouting disabled=no dst-port=80 new-routing-mark=_to_squid_ passthrough=yes protocol=tcp src-address=A.B.170.10
/ip route add disabled=no distance=1 dst-address=0.0.0.0/0 gateway=A.B.170.24 routing-mark=_to_squid_ scope=30 target-scope=10

Rtr2 has the following rules:

/ip firewall mangle add action=mark-routing chain=prerouting disabled=no dst-address=A.B.170.10 new-routing-mark=_to_squid_ passthrough=yes protocol=tcp src-port=80
/ip route add disabled=no distance=1 dst-address=0.0.0.0/0 gateway=A.B.169.21 routing-mark=_to_squid_ scope=30 target-scope=10

The policy routing rules are the same on Rtr1 when we use the REDIRECT rule in "iptables -t nat" for "http_port 3127 intercept", and in that instance SquidBox works like a charm: all the HTTP requests show up in cache.log because of debug_options 11,2, as Amos suggested.

However, whenever we remove the nat rules and introduce the mangle rules + ip rule + ip route in table 100 for "http_port 3129 tproxy", Rtr1 shows that the packets are marked and forwarded to SquidBox. SquidBox even properly logs the packets in /var/log/messages due to the mangle table LOG rule, but the Squid process on SquidBox does not seem to be receiving the packets: no HTTP request entries show up in cache.log.
"iptables -L" for mangle shows the following:

[root@proxy01 ~]# iptables -vxnL --line-numbers -t mangle
Chain PREROUTING (policy ACCEPT 235 packets, 29632 bytes)
num  pkts  bytes   target  prot opt in  out  source     destination
1    0     0       ACCEPT  all  --  *   *    0.0.0.0/0  A.B.169.21
2    6174  821596  ACCEPT  all  --  *   *    0.0.0.0/0  A.B.170.24
3    1005  51367   ACCEPT  all  --  *   *    0.0.0.0/0  A.B.174.0/24
4    0     0       ACCEPT  all  --  *   *    0.0.0.0/0  M.N.0.66
5    49    3420    DIVERT  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   socket
6    52    3840    LOG     tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   tcp dpt:80 LOG flags 0 level 4 prefix `TProxy: '
7    52    3840    TPROXY  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1

The ip rule and route lists, rt_tables and rp_filter show:

[root@proxy01 ~]# ip route list table squidtproxy
local default dev lo scope host
[root@proxy01 ~]# ip rule list
0:     from all lookup local
32765: from all fwmark 0x1 lookup squidtproxy
32766: from all lookup main
32767: from all lookup default
[root@proxy01 ~]# cat /etc/iproute2/rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
100 squidtproxy
[root@proxy01 ~]# find /proc/sys/net/ipv4/ -iname rp_filter
/proc/sys/net/ipv4/conf/all/rp_filter
/proc/sys/net/ipv4/conf/default/rp_filter
/proc/sys/net/ipv4/conf/lo/rp_filter
/proc/sys/net/ipv4/conf/eth0/rp_filter
/proc/sys/net/ipv4/conf/eth1/rp_filter
/proc/sys/net/ipv4/conf/gre0/rp_filter
/proc/sys/net/ipv4/conf/gretap0/rp_filter
[root@proxy01 ~]# find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +
0
0
0
0
0
0
0

Amos, we also looked into the routing loop that you mentioned. Since there are two routers involved, Rtr1 and Rtr2, with Squid connected to both of them, we used the rules above, so Rtr1 only policy-routes Host -> Squid and Rtr2 only policy-routes Internet -> Squid. This should be correct, no? At the very least, Squid should be receiving the packets and the HTTP request headers should show up in cache.log, shouldn't they?

We apologize for the rather long email. Hopefully,
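[Editor's note: for comparison, the routing and mangle setup described on the Features/Tproxy4 wiki page, with the "dev lo" route this thread converged on, looks roughly like this - a sketch only, reusing the port 3129 and mark 0x1/0x1 from the listing above:]

```
# Route packets carrying the TPROXY mark to the local stack via lo
ip rule add fwmark 1 lookup 100
ip route add local default dev lo table 100

# Divert packets belonging to established, locally-owned sockets,
# then TPROXY new port-80 flows to Squid's tproxy port
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
```

The `-m socket` DIVERT step matters: without it, reply packets for already-accepted connections are re-TPROXYed and never reach the Squid process.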
Re: [squid-users] Problem with HTTP redirection and IPTABLES?
Hi Mark,

The following iptables rule works on my CentOS 6.5 running iptables 1.4.7:

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3127

And in squid.conf it is configured as:

http_port 3127 intercept

This works like a charm.

Regards
HASSAN

On Thu, Jul 3, 2014 at 8:47 PM, Mark jensen ngiw2...@hotmail.com wrote:
I have followed these tutorials to configure a transparent proxy:
on the Cisco L3 switch: http://wiki.squid-cache.org/ConfigExamples/Intercept/Cisco2501PolicyRoute
on the Squid machine: http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute
Is it necessary to add any configuration to Squid, like "http_port 8080 intercept", or is following those tutorials enough for me?
NOTE: I have the client, Squid and the web server on different machines.
Mark
[squid-users] access denied
I keep getting:

Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. Your cache administrator is webmaster.

I'm not sure what is wrong. I used to run squid2.7 a long while ago; this is my first time trying to set up squid3 (squid v3.3.8 if I'm not mistaken).

My squid.conf:

http_port 3129 transparent
acl our_networks src 192.168.0.0/16
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl Safe_ports port 3129
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow our_networks
http_access deny all
qos_flows tos local-hit=0x30
qos_flows mark local-hit=0x30
cache_mem 1024 MB
maximum_object_size_in_memory 2048 KB
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LRU
cache_dir ufs /mnt/cache/cache1 8000 16 256
cache_dir ufs /mnt/cache/cache2 8000 16 256
cache_dir ufs /mnt/cache/cache3 8000 16 256
cache_dir ufs /mnt/cache/cache4 8000 16 256
maximum_object_size 1024 MB
logfile_rotate 9
coredump_dir /var/spool/squid3
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
dns_nameservers 8.8.8.8 8.8.4.4
Re: [squid-users] Problem with HTTP redirection and IPTABLES?
Hey Mark,

What distribution of Linux are you using?

Eliezer

On 07/03/2014 05:25 PM, Mark jensen wrote:
I have followed this tutorial to redirect HTTP traffic to Squid listening on 8080: http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource
1 - When I try "iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination SQUIDIP:8080" an error is returned: unknown option --to-destination (iptables version 1.4.7).
2 - I'm using Squid 3.1.10. Which option should I choose: "http_port 8080 transparent" OR "http_port 8080 intercept"?
mark
Re: [squid-users] problem with google captcha
Hey Dmitry,

Sometimes it's because there is a sign of a proxy in the middle in the headers, or because Google thinks that there is too much traffic for your network, which is not typical. I do not know how Google can identify a network behind a proxy, but it seems to me that removing some headers is the right direction. You can use the "via off" option and also remove the X-Forwarded-For headers on the proxy using http://www.squid-cache.org/Doc/config/forwarded_for/ with "delete". It's basically try and find out. Also fill in this form: https://support.google.com/websearch/contact/ban

One thing that I think you should check is the PTR record for your IP, and also RBL checks on your IP. You can use the tool I have modified at: http://www1.ngtech.co.il/rbl/rblcheck.rb

You can also try to add some Google Apps headers that restrict access to Google Apps, and which might block some traffic which should not be there in the first place (as a test only, to see if it helps).

Eliezer

On 07/03/2014 03:43 PM, Дмитрий Шиленко wrote:
Hello. I am using Squid 3.3.8 transparently on openSUSE 13.1. [...] As soon as Squid intercepts the traffic, Google immediately starts requesting a captcha. What should I do to solve this problem?
Dmitry
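[Editor's note: the two header changes Eliezer mentions map onto these squid.conf directives - a sketch of the suggestion, not a complete configuration:]

```
# Do not advertise the proxy in a Via header
via off

# Strip X-Forwarded-For instead of appending the client IP
forwarded_for delete
```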
[squid-users] problem with google captcha
I just can not understand - I ispolyuzuyu only an http proxy, and the Google search engine works over the https protocol. What's the connection between them?

Eliezer Croitoru wrote on 03.07.2014 21:15:
Sometimes it's because there is a sign of a proxy in the middle in the headers, or because Google thinks that there is too much traffic for your network. [...]

--
Regards,
Dmitry Shilenko
Systems engineer
global-it.com.ua
mob. (063)142-32-59, office 221-55-72
[squid-users] problem with google captcha
Sorry, I accidentally made a mistake)) I just can not understand - I !!USE!! only an http proxy, and the Google search engine works over the https protocol. What's the connection between them?

Eliezer Croitoru wrote on 03.07.2014 21:15:
Sometimes it's because there is a sign of a proxy in the middle in the headers, or because Google thinks that there is too much traffic for your network. [...]
[squid-users] problem with google captcha
Hey Eliezer,

rblcheck.rb said:

You are listed on the following 4 blacklists
cbl.abuseat.org
dnsbl-1.uceprotect.net
dnsbl.dronebl.org
dnsbl-1.uceprotect.net

As I understand it, this can be the root of my problem.

Дмитрий Шиленко wrote on 03.07.2014 21:45:
Sorry, I accidentally made a mistake)) I just can not understand - I !!USE!! only an http proxy, and the Google search engine works over the https protocol. What's the connection between them? [...]
Re: [squid-users] Question to cache_peer
Hi Andreas,

As per the wiki page http://wiki.squid-cache.org/Features/CacheHierarchy, did you try with the following two lines in your squid.conf:

cache_peer parentcache.foo.com parent 3128 0 no-query default
never_direct allow all

That 2nd line forces the child to only talk to the parent.

Regards
HASSAN

On Thu, Jul 3, 2014 at 8:49 PM, andreas.resc...@mahle.com wrote:
Hi there,
I've configured my Squid as follows:
cache_peer xxx.xxx.xxx.xxx parent 3128 0 no-query no-digest
But a lot of traffic doesn't pass through the parent proxy. Why? How can I configure the child proxy so that ALL traffic must pass through the parent proxy?
Mit freundlichen Grüßen / Kind regards
Mr. Andreas Reschke
andreas.resc...@mahle.com, http://www.mahle.com
RE: [squid-users] Problem with HTTP redirection and IPTABLES?
I'm using the CentOS 6.5 Linux distro.
[squid-users] Re: access denied
Change

http_port 3129 transparent

to

http_port 3129 intercept

Did you not get an error msg in cache.log? If this does not help, please publish:
a) your browser proxy setup
b) your firewall rules
-- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p428.html Sent from the Squid - Users mailing list archive at Nabble.com.
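To make the renaming concrete, a minimal sketch of the two spellings (port number taken from this thread; version boundaries are approximate):

```
# older spelling, accepted by squid 2.6 - 3.1:
http_port 3129 transparent

# current spelling, required by squid 3.2 and later:
http_port 3129 intercept
```

Both flags mean the same thing: the port receives NAT-redirected traffic rather than explicitly configured browser traffic.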
[squid-users] Re: access denied
After some hours of searching, I think I found my problem and a possible solution at: http://www.squid-cache.org/mail-archive/squid-users/201304/0051.html http://myconfigure.blogspot.com/2013/03/transparent-squid-332-on-ubuntu-1210.html

I wonder why squid had to change its method for transparent mode? Because of this, extra work is now required to set up a transparent proxy. My router only does a regular dstnat redirect for every port 80. This worked on squid 2.9.
-- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p429.html Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] access denied
On 2014-07-04 03:42, WiNET . wrote: I keep getting: Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. Your cache administrator is webmaster. I'm not sure what is wrong. I used to run squid2.7 a long while ago, this is my first time trying to setup squid3 (squid v3.3.8 if I'm not mistaken) This is because of the fix for CVE-2009-0801. NAT on a separate machine has never actually worked properly even in 2.7. The fix we have in current Squid involves verifying the TCP destination IP, which also enforces that NAT is performed on the Squid machine instead of remotely. You need to use policy routing or similar mechanisms on the router to get the packets to the Squid machine unchanged for interception to work. Amos
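To illustrate Amos's suggestion, here is a policy-routing sketch for a Linux-based router (the poster's router is actually a MikroTik; addresses are the ones used in this thread, the mark value and table number are arbitrary, and this is untested). The point is that there is no NAT on the router: packets reach the Squid box with the original destination IP intact, which is what the CVE-2009-0801 check requires:

```
# On the ROUTER (not the Squid box): mark client port-80 traffic...
iptables -t mangle -A PREROUTING -s 192.168.0.0/24 -p tcp --dport 80 \
         -j MARK --set-mark 1
# ...and route marked packets to the Squid box WITHOUT rewriting them
ip rule add fwmark 1 table 100
ip route add default via 192.168.14.2 table 100

# On the SQUID box: NAT happens locally, into the intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3129
```

Because the REDIRECT now runs on the Squid machine itself, Squid can recover and verify the original destination address from the local NAT table.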
Re: [squid-users] Problem with HTTP redirection and IPTABLES?
On 07/04/2014 01:31 AM, Mark jensen wrote: I'm using centos 6.5 Linux distro

You do understand that you enforce NAT rules in the PREROUTING chain and not in the OUTPUT one... Take a look at the example in the man pages: http://ipset.netfilter.org/iptables-extensions.man.html

iptables -t nat -A PREROUTING -p tcp --dport 80 -m cpu --cpu 0 -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 -m cpu --cpu 1 -j REDIRECT --to-port 8081

You cannot use a DNAT in the OUTPUT chain, which handles locally generated packets and is not related to traffic that comes from outside the machine. All The Bests, Eliezer
Re: [squid-users] access denied
Hey There, We will need more information in the form of:
Client address
Squid address
Routing scheme\description
iptables rules
access.log output
Is the squid box the gateway of the network? In almost all cases the denied is rightful. Eliezer

On 07/03/2014 06:42 PM, WiNET . wrote: I keep getting: Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. Your cache administrator is webmaster. I'm not sure what is wrong. I used to run squid2.7 a long while ago, this is my first time trying to setup squid3 (squid v3.3.8 if I'm not mistaken)

my squid.conf:

http_port 3129 transparent
acl our_networks src 192.168.0.0/16
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl Safe_ports port 3129
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow our_networks
http_access deny all
qos_flows tos local-hit=0x30
qos_flows mark local-hit=0x30
cache_mem 1024 MB
maximum_object_size_in_memory 2048 KB
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LRU
cache_dir ufs /mnt/cache/cache1 8000 16 256
cache_dir ufs /mnt/cache/cache2 8000 16 256
cache_dir ufs /mnt/cache/cache3 8000 16 256
cache_dir ufs /mnt/cache/cache4 8000 16 256
maximum_object_size 1024 MB
logfile_rotate 9
coredump_dir /var/spool/squid3
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
dns_nameservers 8.8.8.8 8.8.4.4
[squid-users] Re: access denied
This is because of the fix for CVE-2009-0801. NAT on a separate machine has never actually worked properly even in 2.7. The fix we have in current Squid involves verifying the TCP destination IP, which also enforces that NAT is performed on the Squid machine instead of remotely. You need to use policy routing or similar mechanisms on the router to get the packets to the Squid machine unchanged for interception to work. Amos

On the contrary, my setup was working perfectly on those versions, because I'm not using the same machine for NAT routing. For routing I leave everything to the mikrotik; all squid does is accept redirected requests from the mikrotik. My setup is A B C D E:

A. CLIENT (192.168.0.0/24)
B. mikrotik router (192.168.0.253, 192.168.14.1)
C. dstnat src-address=192.168.0.0/24 dst-port 80 redirect to squid (to-addresses=192.168.14.2 to-ports=3129)
D. squid requests the internet via 192.168.14.1 (this time it won't hit the dst-nat redirect, because the dstnat only matches requests from 192.168.0.0/24)
E. directly routed to the internet gateway

I have been using this setup for several years without any problem, but a few days ago I decided to test the latest stable squid3, and was kind of surprised by these changes. Is there any way I can do the same setup again on this latest version without having to do those iptables NAT rules?

Hey There, We will need more information in the form of: Client address Squid Address Routing scheme\description iptables rules access.log output Is the squid box the gateway of the network? In almost all cases the denied is rightful. Eliezer

I'm not using any iptables rules, as I have explained above. And no, the squid box is not the gateway; a mikrotik is doing that job, redirecting client requests (not squid's) with dst-port 80 to squid's http_port 3129 transparent port.
I got a lot of Forwarding loop messages in cache.log, which led me to find these links on Google: http://www.squid-cache.org/mail-archive/squid-users/201304/0051.html and http://myconfigure.blogspot.com/2013/03/transparent-squid-332-on-ubuntu-1210.html

So the question is the same: is there any way I can do the same setup again on this latest version without having to do those iptables NAT rules? Thanks for the help -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p433.html Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: access denied
Because by turning the squid machine into the internet gateway, I would also have to change my mikrotik configuration and my routing policies, and I have too many mikrotik rules to change if that happens. -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p434.html Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: access denied
On 07/04/2014 06:19 AM, winetbox wrote: C. dstnat src-address=192.168.0.0/24 dst-port 80 redirect to squid ( to-addresses=192.168.14.2 to-ports=3129)

This rule is exactly the culprit!! You need to route the traffic towards the squid machine and not do redirection\nat towards a specific port. Just route... Probably on the squid box you will see the mikrotik IP as the src IP of the request, which is wrong.. Consider also looking at this: http://forum.mikrotik.com/viewtopic.php?f=2&t=40811 which is another thing but should also be a nice idea. Eliezer
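On RouterOS specifically, "just route" can be sketched with a routing mark instead of the dst-nat redirect. Treat this as an untested sketch: the addresses are the ones from this thread, and the routing-mark name `to-proxy` is made up:

```
# Replace the dst-nat redirect with policy routing towards the Squid box.
# Only client traffic (192.168.0.0/24) is marked, so replies coming back
# from the Squid box on 192.168.14.x cannot loop through the rule.
/ip firewall mangle add chain=prerouting src-address=192.168.0.0/24 \
    protocol=tcp dst-port=80 action=mark-routing new-routing-mark=to-proxy
/ip route add routing-mark=to-proxy gateway=192.168.14.2
```

The packets then arrive at the Squid box with the original destination address intact; the local REDIRECT into the intercept port happens there, not on the router.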
[squid-users] Re: access denied
The configuration sample at http://myconfigure.blogspot.com/2013/03/transparent-squid-332-328-on-ubuntu.html requires 2 eth devices. Can you show me a very simple one that would suit me, requiring only 1 eth device on the squid machine? -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p436.html Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: access denied
OK, it's done. It works now with 1 eth. All I did:

on squid:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129

on mikrotik: removed all redirect NAT, created a route to the squid machine as internet gateway, and created a mangle rule so that traffic from clients with dst-port=80 is all routed to the proxy gateway.

I have another problem though. I do:
# tail -f /var/log/squid3/access.log | grep TCP_HIT
and if I just do:
# tail -f /var/log/squid3/access.log
I see everything is TCP_MISS, for example:

1404449047.279 2035 192.168.14.3 TCP_MISS/200 327 POST http://makasar.speedtest.telkom.net.id/speedtest/upload.php? - HIER_DIRECT/118.98.104.242 text/html
1404449049.441 4211 192.168.14.3 TCP_MISS/200 327 POST http://makasar.speedtest.telkom.net.id/speedtest/upload.php? - HIER_DIRECT/118.98.104.242 text/html
1404449052.162 2630 192.168.14.3 TCP_MISS/200 327 POST http://makasar.speedtest.telkom.net.id/speedtest/upload.php? - HIER_DIRECT/118.98.104.242 text/html
1404449052.966 3419 192.168.14.3 TCP_MISS/200 327 POST http://makasar.speedtest.telkom.net.id/speedtest/upload.php? - HIER_DIRECT/118.98.104.242 text/html

Something I missed? If I don't wrongly recall, my last squid (squid 2.9) access.log didn't have HIER_DIRECT, it was just DIRECT. -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p437.html Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: access denied
On 07/04/2014 07:18 AM, winetbox wrote: configuration sample on: http://myconfigure.blogspot.com/2013/03/transparent-squid-332-328-on-ubuntu.html requires 2 eth devices. can you show me a very-very simple one that probably suits for me, that only require 1 eth device on squid machine?

You can use a virtual IP on the same NIC. I don't know how to do it on a mikrotik device, but use two virtual IP addresses, one of them for all outgoing connections towards the WAN and the other towards the LAN. In any case you need to mark packets from squid by, let's say, a MAC address.. Sorry that I cannot help you with that. If I knew the network structure I might be able to think through, logically speaking, how to implement it. Have you tried mikrotik forums\support? Eliezer