Re: [squid-users] Parent TCP connection failed
Thanks Amos for the advice: it was related to IPv6. After commenting out the IPv6 entry in the hosts file, the parents connected successfully. Thanks again!

Artemis

On Sat, 2011-10-08 at 18:19 +1300, Amos Jeffries wrote:
On 08/10/11 03:51, Artemis Braja wrote:

"Hello, I'm running two frontend instances on ports 3128 and 3129, and two backend instances on ports 4001 and 4002 on the same box, load-balancing between the parents using CARP.

cache_peer localhost parent 4001 0 carp name=backend-1
cache_peer localhost parent 4002 0 carp name=backend-2

squid -v
Squid Cache: Version 3.1.15
configure options: '--enable-async-io' '--enable-linux-netfilter' '--enable-storeio=ufs,aufs' '--exec-prefix=/usr' '--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--enable-carp' '--enable-cache-digests' '--enable-removal-policies=lru,heap' --with-squid=/opt/squid-3.1.15 --enable-ltdl-convenience

After successfully starting all instances I see in the frontends' cache.log:

2011/10/07 16:10:56| TCP connection to localhost/4001 failed
2011/10/07 16:10:56| TCP connection to localhost/4002 failed
2011/10/07 16:10:57| TCP connection to localhost/4001 failed
2011/10/07 16:10:57| TCP connection to localhost/4002 failed
2011/10/07 16:10:57| Detected DEAD Parent: backend-2
2011/10/07 16:10:58| TCP connection to localhost/4001 failed
2011/10/07 16:10:58| Detected DEAD Parent: backend-1
2011/10/07 16:11:19| temporary disabling (Service Unavailable) digest from localhost
2011/10/07 16:20:16| TCP connection to localhost/4002 failed
2011/10/07 16:25:20| temporary disabling (Service Unavailable) digest from localhost

Because only the backends are caching, and the frontends cannot connect to them, all traffic from the frontends is forwarded to the origin servers, making the cache storage useless."

Add no-digest to the cache_peer lines on the frontend. That will silence the digest messages and make it take a little longer before DEAD is triggered.
Check that your backends had actually finished loading their caches and were listening for new traffic before the frontends connected to them. Note that Squid-3.1 is IPv6-enabled software, so localhost is probably the IP address ::1. You may need to check your IPv6 firewall settings to verify that the layers can connect over localhost.

Amos
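Concretely, Amos's suggestion amounts to something like this on the frontend instances (a sketch based on the cache_peer lines quoted above; no-digest is a standard cache_peer option):

```
# frontend squid.conf: CARP parents with cache-digest exchange disabled
cache_peer localhost parent 4001 0 carp no-digest name=backend-1
cache_peer localhost parent 4002 0 carp no-digest name=backend-2
```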
Re: [squid-users] ACL's by Specific Date and Time
Hello Jenny, on 10.10.11 you wrote:

"My thought is that when they are on a school holiday, my normal rules should be disabled. So on a school day the proxy stops their access, but on a non-school day it allows them out."

"Very easy to do. See acl time: http://wiki.squid-cache.org/SquidFaq/SquidAcl?highlight=%28time%29#How_can_I_allow_some_clients_to_use_the_cache_at_specific_times.3F You can add weekends to your rules to allow your kids access. You can also download an official public-holiday list and create rules for those days."

Sorry, that doesn't help. Such an ACL doesn't know about school holidays, and it doesn't know about public holidays either. OK, that's the problem with nearly every simple calendar ...

Best regards!
Helmut
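Since a plain time ACL cannot know about school or public holidays, one common workaround is an external ACL helper that consults a date list. A minimal sketch follows; the helper path, the holiday-file format (one YYYY-MM-DD date per line), and the squid.conf wiring are illustrative assumptions, not something from the thread:

```python
#!/usr/bin/env python3
"""Hypothetical external_acl_type helper: answers OK when today is in a
holiday file, ERR otherwise. Squid writes one line per lookup on stdin
and reads one OK/ERR reply per line on stdout."""
import sys
from datetime import date

def load_holidays(path):
    """Read a set of ISO date strings from a file, one per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def is_holiday(today, holidays):
    # 'today' is an ISO-formatted date string, e.g. "2011-12-26"
    return today in holidays

def main():
    holidays = load_holidays("/etc/squid/holidays.txt")  # assumed path
    for _ in sys.stdin:
        answer = "OK" if is_holiday(date.today().isoformat(), holidays) else "ERR"
        sys.stdout.write(answer + "\n")
        sys.stdout.flush()  # squid expects an immediate reply per request

if __name__ == "__main__":
    main()
```

Wired into squid.conf along these (again illustrative) lines:

external_acl_type holiday ttl=3600 %SRC /usr/local/bin/holiday_check.py
acl schoolholiday external holiday
http_access allow kids schoolholiday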
Re: [squid-users] Read timeout on FTP server
On 10.10.2011 09:51, Tom Tux wrote:
"We had similar problems. Solved with ftp_epsv off in squid.conf."

This solved my problem. Thanks.

Regards, Marc
Re: [squid-users] Parent TCP connection failed
On 10/10/11 20:31, Artemis Braja wrote:
"Thanks Amos for the advice: it was related to IPv6. After commenting out the IPv6 entry in the hosts file, the parents connected successfully. Thanks again! Artemis"

Are your parent proxies IPv4-only in this setup? Or are they IPv6-enabled as well, with some firewall in between preventing localhost-v6 communication? (I'm asking because either way this is a catch-22 worth mentioning in the wiki.)

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.15
Beta testers wanted for 3.2.0.12
Re: [squid-users] Parent TCP connection failed
The parents are IPv4 only. There is no firewall blocking localhost-v6 communication.

Artemis

On Mon, 2011-10-10 at 21:00 +1300, Amos Jeffries wrote:
"Are your parent proxies IPv4-only in this setup? Or are they IPv6-enabled as well, with some firewall in between preventing localhost-v6 communication? (I'm asking because either way this is a catch-22 worth mentioning in the wiki.) Amos"
Re: [squid-users] Parent TCP connection failed
On 10/10/11 22:31, Artemis Braja wrote:
"The parents are IPv4 only. There is no firewall blocking localhost-v6 communication. Artemis"

Thank you. I've added this to the wiki IPv6 troubleshooting section.

Amos
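For the wiki entry, the workaround described in this thread amounts to a hosts file along these lines (illustrative; the exact file contents were not posted). An alternative that avoids editing hosts at all is naming 127.0.0.1 directly in the cache_peer lines:

```
# /etc/hosts
127.0.0.1   localhost
# ::1       localhost   <- commented out, so "localhost" now resolves
#                          only to IPv4 and the frontends reach the
#                          IPv4-only parents over 127.0.0.1
```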
[squid-users] Re: wccp2 + squid
Hi, I configured squid with WCCP. I can see traffic in squid's access.log and on the WCCP interface on the squid box, but the traffic is not arriving in the proper format in squid's access.log, so browsing is not working.

squid access.log:

1318275851.743 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.klz - NONE/- text/html
1318275851.758 0 245.244.12.23 NONE/400 3070 GET /index/u0607g.xml - NONE/- text/html
1318275851.884 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.dif - NONE/- text/html
1318275851.897 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.klz - NONE/- text/html
1318275851.909 0 245.244.12.23 NONE/400 3070 GET /index/u0607g.xml - NONE/- text/html
1318275852.019 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.dif - NONE/- text/html
1318275852.032 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.klz - NONE/- text/html
1318275852.044 0 245.244.12.23 NONE/400 3070 GET /index/u0607g.xml - NONE/- text/html
1318275874.694 0 245.244.12.23 NONE/400 3098 POST /ajax/chat/buddy_list.php?__a=1 - NONE/- text/html
1318275900.971 0 245.244.12.23 NONE/400 3180 POST /gateway/gateway.dll?Version=1Action=openServer=NSIP=none - NONE/- text/html
1318275903.884 0 245.244.12.23 NONE/400 3098 POST /ajax/presence/update.php?__a=1 - NONE/- text/html
1318275908.830 0 245.244.12.23 NONE/400 3342 GET /svc/Social/GetFeed?filter=%7B%22FilterProperties%22%3A31%2C%22FeedType%22%3A1%2C%22TopN%22%3A20%2C%22AuthorFilter%22%3A239%2C%22Last%22%3A%22P365D%22%7D - NONE/- text/html

wccp0 interface on squid:

wccp0     Link encap:UNSPEC  HWaddr 95-FF-10-13-00-00-82-79-00-00-00-00-00-00-00-00
          inet addr:245.244.12.2  P-t-P:245.244.12.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
          RX packets:12460 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:781602 (763.2 KiB)  TX bytes:0 (0.0 b)

squid.conf:

http_port 3128 intercept
wccp2_router 245.244.12.1
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service standard 0

[root@CACHE_ENGINE ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
0
[root@CACHE_ENGINE ~]# cat /proc/sys/net/ipv4/conf/default/rp_filter
0
[root@CACHE_ENGINE ~]# cat /proc/sys/net/ipv4/conf/em1/rp_filter
0
[root@CACHE_ENGINE ~]# cat /proc/sys/net/ipv4/conf/lo/rp_filter
0
[root@CACHE_ENGINE ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
#modprobe ip_gre
ip tunnel add wccp0 mode gre remote 245.244.12.1 local 245.244.12.2 dev em1
ifconfig wccp0 245.244.12.2 netmask 255.255.255.255 up

[root@CACHE_ENGINE ~]# iptables -L -nvx -t nat
Chain PREROUTING (policy ACCEPT 2026 packets, 448189 bytes)
    pkts    bytes target   prot opt in     out  source      destination
41736936 REDIRECT tcp  --  wccp0  *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 redir ports 3128
Chain INPUT (policy ACCEPT 582 packets, 52266 bytes)
    pkts    bytes target   prot opt in     out  source      destination
Chain OUTPUT (policy ACCEPT 109 packets, 6545 bytes)
    pkts    bytes target   prot opt in     out  source      destination
Chain POSTROUTING (policy ACCEPT 109 packets, 6545 bytes)
    pkts    bytes target   prot opt in     out  source      destination

Where could the mistake be? Please guide me in solving it.

OS: Fedora 15, 64-bit
Squid: 3.1.14
Kernel: 2.6.40.4-5.fc15.x86_64

Regards, Benjamin
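As an aside, when debugging an interception setup like this it helps to tally the result codes in access.log; every line above is NONE/400, i.e. Squid rejected the request before contacting any server. A small sketch for doing that (assuming the default squid native log format, where the fourth whitespace-separated field is the result code; the field position would differ under a custom logformat):

```python
"""Count result codes in a squid native-format access.log."""
from collections import Counter

def tally_results(lines):
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 3:
            counts[fields[3]] += 1   # e.g. "NONE/400", "TCP_MISS/200"
    return counts

# Two sample lines taken from the log excerpt in this thread:
sample = [
    "1318275851.743 0 245.244.12.23 NONE/400 3078 GET /index/u0607g.xml.klz - NONE/- text/html",
    "1318275851.758 0 245.244.12.23 NONE/400 3070 GET /index/u0607g.xml - NONE/- text/html",
]
print(tally_results(sample))   # Counter({'NONE/400': 2})
```

A log dominated by NONE/400 on an intercept port usually points at malformed or non-HTTP traffic reaching Squid rather than at the cache itself.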
[squid-users] squid loops on passive ftp requests
Hi folks, I am using 3.02.11 as our forward and reverse proxy. It appears this setup has problems with passive FTP: I am getting clean Passive Mode responses which appear not to be honored by squid. Any ideas?

Thanks
Erich Titl
[squid-users] Splash page -- detect if client is mobile?
Is there a way for a splash page to detect if it is being displayed on a mobile device, and to be able to redirect or show a different page that is reformatted to fit the much smaller display area of the mobile screen? I don't know if this is really a squid-related question, or if this can all be handled through the magic of javascript, independent of squid. -- Dale Mahalko
[squid-users] Splash page -- detect if auto-proxy config worked?
I am trying to set up a two-layer proxy for public mobile devices, offering transparent access for the mobile devices that are too stupid to auto-detect proxy settings via proxy.pac / wpad.

I want an easy way for end-users to find out whether their device is using transparent or auto-detected settings after they have connected to the public wireless with no password. The only way I can see of doing that is to have a splash page that attempts an HTTPS connection when they first connect, and use that to find out whether auto-detect worked. If it fails, I can then direct them to IT support to see what needs to be done to get proxied access working. I don't really like this method, though, because if they are on the transparent proxy there will be a long delay until the HTTPS attempt finally times out and fails.

Is there some established way for a splash page to quickly identify whether transparent or auto-detected proxy settings are being used? Can a web browser page somehow interact with the squid cache to discover this?

- Dale Mahalko
RE: [squid-users] Re: Password for ssl/https key file
Hi guys, hope you are well!

Which program can I use with the sslpassword_program directive? I want to enter the key passphrase manually, but I don't want Squid to run in the foreground.

Thanks a lot!
Sebastian.

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, 22 September 2010 04:39
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Password for ssl/https key file

On Tue, 21 Sep 2010 08:44:03 -0700 (PDT), gurgo u...@gmx.net wrote:
"Hi! One more important thing to know: the sslpassword_program line has to come before the https_port line in your configuration file. Otherwise squid will still prompt you for the passphrase on startup. Regards, Dean"

This is a bug. Squid should be catching that config error.

Amos
RE: [squid-users] Splash page -- detect if client is mobile?
Hi Dale, I think you can achieve that with dynamic server-side code (e.g. PHP) and the browser headers.

Sebastian

-----Original Message-----
From: Dale Mahalko [mailto:dmaha...@gmail.com]
Sent: Monday, 10 October 2011 19:33
To: squid-users@squid-cache.org
Subject: [squid-users] Splash page -- detect if client is mobile?

"Is there a way for a splash page to detect that it is being displayed on a mobile device, and to redirect or show a different page reformatted to fit the much smaller display area of the mobile screen? I don't know if this is really a squid-related question, or if this can all be handled through the magic of javascript, independent of squid. -- Dale Mahalko"
Re: [squid-users] squid loops on passive ftp requests
On Mon, 10 Oct 2011 15:30:16 +0200, Erich Titl wrote: Hi Folks I am using 3.02.11 as our forward and reverse proxy. It appears our this set up has problems with passive ftp. I am gettting clean Passive Mode Responses which appear not to be honored by squid Any ideas? you mean 3.2.0.11 or 3.0.21? with PASV or EPSV? over an old NAT system? with what FTP log trace? (debug_options 9,2). Amos
Re: [squid-users] Splash page -- detect if auto-proxy config worked?
On Mon, 10 Oct 2011 12:41:53 -0500, Dale Mahalko wrote: I am trying to set up a two-layer proxy for public mobile devices, offering transparent access for the mobile devices that are too stupid to auto-detect proxy settings via proxy.pac / wpad . I want an easy way for end-users to find out if their device is using transparent or auto-detected settings, after they have connected to the public wireless with no password. The only way I can see for doing that is to have a splash page that attempts an HTTPS connection when they first connect, using that to find out if auto-detect worked or not. And if it fails, then I can have a way to direct them to IT support to see what needs to be done to get proxied access working. When using squid for this you should have different http_port entries in Squid for handling forward-proxy traffic (auto-detect worked) and intercepted traffic (auto-detect failed). You can use the http_port name= parameter and a myportname type ACL to detect whether auto-detect worked before the splash page exists. Amos
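A sketch of the layout Amos describes (the port numbers and the ACL/page names are illustrative):

```
# forward-proxy port: clients whose wpad/proxy.pac auto-detect worked
http_port 3128 name=explicit

# interception port: clients that fell through to transparent handling
http_port 3129 intercept name=caught

acl was_intercepted myportname caught
# e.g. this ACL could be combined with deny_info to point intercepted
# clients at a "configure your proxy settings" help page
```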
RE: [squid-users] Splash page -- detect if client is mobile?
On Mon, 10 Oct 2011 21:15:02 +0000, Sébastien WENSKE wrote:
"Hi Dale, I think you can achieve that with dynamic server-side code (e.g. PHP) and the browser headers. Sebastian"
[quoting Dale Mahalko's question, above, about detecting mobile devices from a splash page]

Well, yes. It can be handled by javascript and CSS a lot better than by Squid or the web server. Squid provides a browser type ACL to scan for regex patterns in the User-Agent string, but it is really hard to tell the difference between the UAs on smartphones, tablets, notebooks, projectors and PCs these days. I'm apt to say don't bother, since you can design the page to be flexible on its own.

Amos
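For reference, the browser ACL Amos mentions would look something like this (the regex is a rough illustration and, as he notes, will misclassify plenty of devices):

```
acl mobile_ua browser -i (iphone|ipad|android|blackberry|mobile)
# e.g. send matching clients to a page laid out for small screens:
# deny_info http://splash.example.net/mobile.html mobile_ua
```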
Re: [squid-users] Building on Mac OSX
I tracked the auth linker error to the usage of the CBDATA_CLASS2 macro in src/auth/State.h. The usage seems the same as in other places, but when this is disabled the code links correctly. It looks like basic auth might work without it, so for my limited development purposes maybe it's OK. I don't know why the eui parts still won't link; there doesn't seem to be the same macro in use there. I filed a bug report on the CMSG_SPACE problem, and I'm happy to file others... but I'm not sure about the problem behind having to add the config.h includes and the linker error; maybe it is something specific to my environment.

Matt

From: Matt Cochran matt.coch...@yahoo.com
To: Squid squid-users@squid-cache.org
Sent: Sunday, October 9, 2011 7:59 AM
Subject: Re: [squid-users] Building on Mac OSX

I got it to build on Mac OS, but I would get compile errors from a straight, no-options ./configure. I can get past the compile errors with some simple changes, but I also get some linking errors unless I use --disable-eui --disable-auth. Unfortunately I need the auth option! Can someone help me out with resolving the problems with TypedMsgHeader.h and the linker errors below?

Address.h - added #include <sys/types.h>
fatal.h, splay.h, util.h and SquidNew.h (although I'm not sure why it helped this last one) - added #include "config.h" so that SQUIDCEXTERN was available

In TypedMsgHeader.h, I get an "array bound is not an integer constant" error for the struct definition here. To get past it I just set the array size to an arbitrary value, as I'm not sure why it doesn't work. Obviously that is not a fix...

struct CtrlBuffer {
    char raw[CMSG_SPACE(sizeof(int))]; ///< control buffer space for one fd
};

I also noticed that in adaptation/ecap/XactionRep.cc there is a conflict if you use --enable-ecap and --disable-auth: the class uses a property on the HttpRequest object that is not defined when auth is disabled, which is probably a rare case anyway.
It would need to have something like:

const libecap::Area
Adaptation::Ecap::XactionRep::usernameValue() const
{
    const HttpRequest *request = dynamic_cast<const HttpRequest*>(theCauseRep ?
        theCauseRep->raw().header : theVirginRep.raw().header);
    Must(request);
#if USE_AUTH
    if (request->auth_user_request != NULL) {
        if (char const *name = request->auth_user_request->username())
            return libecap::Area::FromTempBuffer(name, strlen(name));
    }
#endif
    return libecap::Area();
}

So if I leave eui and auth enabled, I get the following linker error. I can live without eui, but I really need the auth part to work, and it's just that last entry for Auth::StateData::CBDATA_StateData:

Undefined symbols for architecture x86_64:
  Eui::Eui48::lookup(Ip::Address const&), referenced from:
      connStateCreate(RefCount<Comm::Connection> const&, http_port_list*) in client_side.o
      ACLARP::match(ACLChecklist*) in libacls.a(Arp.o)
  Eui::Eui64::lookup(Ip::Address const&), referenced from:
      connStateCreate(RefCount<Comm::Connection> const&, http_port_list*) in client_side.o
      ACLEui64::match(ACLChecklist*) in libacls.a(Eui64.o)
  Eui::Eui48::encode(char*, int), referenced from:
      makeExternalAclKey(ACLFilledChecklist*, _external_acl_data*) in external_acl.o
      aclDumpArpListWalkee(Eui::Eui48* const&, void*) in libacls.a(Arp.o)
      Format::Format::assemble(MemBuf&, AccessLogEntry*, int) const in libformat.a(Format.o)
  Eui::Eui64::encode(char*, int), referenced from:
      makeExternalAclKey(ACLFilledChecklist*, _external_acl_data*) in external_acl.o
      aclDumpEuiListWalkee(Eui::Eui64* const&, void*) in libacls.a(Eui64.o)
      Format::Format::assemble(MemBuf&, AccessLogEntry*, int) const in libformat.a(Format.o)
  Eui::Eui48::decode(char const*), referenced from:
      aclParseArpData(char const*) in libacls.a(Arp.o)
  Eui::Eui64::decode(char const*), referenced from:
      aclParseEuiData(char const*) in libacls.a(Eui64.o)
  Auth::StateData::CBDATA_StateData, referenced from:
      AuthBasicUserRequest::module_start(void (*)(void*, char*), void*) in libauth.a(lt3-UserRequest.o)
      AuthNTLMUserRequest::module_start(void (*)(void*, char*), void*) in libauth.a(lt6-UserRequest.o)
      AuthNegotiateUserRequest::module_start(void (*)(void*, char*), void*) in libauth.a(lt9-UserRequest.o)
      AuthDigestUserRequest::module_start(void (*)(void*, char*), void*) in libauth.a(lt12-UserRequest.o)
ld: symbol(s) not found for architecture x86_64
libtool: link: rm -f .libs/squidS.o

Any suggestions? I'm not sure how the linker isn't finding a 64-bit library for these things, since I'm just building with the defaults.

Matt

From: Matt Cochran matt.coch...@yahoo.com
To: Squid squid-users@squid-cache.org
Sent: Thursday, October 6, 2011 6:10 AM
Subject: Re: [squid-users] Building on Mac OSX

Unfortunately, I'm trying
Re: [squid-users] Facebook page very slow to respond
Amos. Made the changes you suggested in this post.

On 10/8/2011 11:24 PM, Amos Jeffries wrote:
On 09/10/11 09:15, Wilson Hernandez wrote:

"I disabled squid and I'm doing simple FORWARDING and things work; this tells me that I'm having a configuration issue with squid 3.1.14. Now, I can't afford to run our network without squid since we are also running SquidGuard for disabling some websites for certain users. Here's part of my squid.conf:

# Port Squid listens on
http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off
error_default_language es-do

# Access-lists (ACLs) will permit or deny hosts to access the proxy
acl lan-access src 172.16.0.0/16
acl localhost src 127.0.0.1
acl localnet src 172.16.0.0/16
acl proxy src 172.16.0.1
acl clientes_registrados src /etc/msd/ipAllowed
# acl adstoblock dstdomain /etc/squid/blockAds
acl CONNECT method CONNECT

<snip>

http_access allow proxy
http_access allow localhost

# Block some sites
acl blockanalysis01 dstdomain .scorecardresearch.com clkads.com
acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads02 dstdomain .adserver.yahoo.com ad.yieldmanager.com
acl blockads03 dstdomain .doubleclick.net .fastclick.net
acl blockads04 dstdomain .ero-advertising.com .adsomega.com
acl blockads05 dstdomain .adyieldmanager.com .yieldmanager.com .adyieldmanager.net .yieldmanager.net
acl blockads06 dstdomain .e-planning.net .super-publicidad.com .super-publicidad.net
acl blockads07 dstdomain .adbrite.com .contextweb.com .adbasket.net .clicktale.net
acl blockads08 dstdomain .adserver.com .adv-adserver.com .zerobypass.info .zerobypass.com
acl blockads09 dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net .publicbt.com"

[Amos:] Optimization tip: these ACLs are the same as far as Squid is concerned, and you are using them the same way at the same time below. So the best thing to do is drop those 01, 02, 03 numbers and have all the blocked domains in one ACL name. Then the testing below can be reduced to a single:

http_access deny blockads

[Wilson:] Changed all these to:

acl blockads dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads dstdomain .adserver.yahoo.com
acl blockads dstdomain .doubleclick.net .fastclick.net
acl blockads dstdomain .ero-advertising.com .adsomega.com
acl blockads dstdomain .adyieldmanager.com .yieldmanager.com .adyieldmanager.net .yieldmanager.net
acl blockads dstdomain .e-planning.net .super-publicidad.com .super-publicidad.net
acl blockads dstdomain .adbrite.com .contextweb.com .adbasket.net .clicktale.net
acl blockads dstdomain .adserver.com .adv-adserver.com .zerobypass.info .zerobypass.com
acl blockads dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net .publicbt.com
http_access deny blockads

"balance_on_multiple_ip on"

[Amos:] This erases some of the benefits of connection persistence and reuse. It is not such a great idea with 3.1+ as it was with earlier Squid. Although you turned off connection persistence anyway below, so this is only noticeable when it breaks websites depending on IP-based security.

[Wilson:] Removed this line as suggested later...

"refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320"

[Amos:] You may as well erase all the refresh_pattern rules below. The CGI and '.' pattern rules are the last ones Squid processes.

[Wilson:] Also deleted all the rules and left what's above.

"visible_hostname www.optimumwireless.com
cache_mgr optimumwirel...@hotmail.com"

[Amos:] Optimum wireless. Hmm. I'm sure I've audited this config before and mentioned the same things...

[Wilson:] You probably have..

"# TAG: store_dir_select_algorithm
#	Set this to 'round-robin' as an alternative.
#
#Default:
# store_dir_select_algorithm least-load
store_dir_select_algorithm round-robin"

[Wilson:] Changed this to least-load... Don't know if it is better or not...

[Amos:] Interesting. Forcing round-robin selection between one dir. :)

"# PERSISTENT CONNECTION HANDLING
#
# Also see pconn_timeout in the TIMEOUTS section
# TAG: client_persistent_connections
# TAG: server_persistent_connections
#	Persistent connection support for clients and servers. By
#	default, Squid uses persistent connections (when allowed)
#	with its clients and servers. You can use these options to
#	disable persistent connections with clients and/or servers.
#
#Default:
client_persistent_connections off
server_persistent_connections off

# TAG: persistent_connection_after_error
#	With this directive the use of persistent connections after
#	HTTP errors can be disabled. Useful if you have clients
#	who fail to handle errors on persistent connections proper.
#
#Default:
[squid-users] Internal 503 Errors on Squid
Hi, we are just testing the 3.1.15 branch of squid on Solaris 10, and we're getting 503 errors whenever a URL is passed in with a query parameter, for instance googling an expression:

http://www.google.com/search?q=squid+query+paramter+3.1.15+solaris&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

We're not seeing any errors in the logs; the only error is in the web page, plus a 503 in the squid access logs. The error returned to the user is:

-
The following error was encountered while trying to retrieve the URL: http://www.google.com/search?
Connection to 209.85.143.99 failed.
The system returned: (146) Connection refused
The remote host or network may be down. Please try the request again.
Your cache administrator is ad...@admin.com
-

Requests to plain www.google.com (no query string) work fine. We previously had an installation of 3.0.15, so we're using almost the same configuration as for that installation. Is this a known issue? Is there any dependency on OS libraries that may be missing?

Regards, Justin

This message and the information contained herein is proprietary and confidential and subject to the Amdocs policy statement, which you may review at http://www.amdocs.com/email_disclaimer.asp
Re: [squid-users] Facebook page very slow to respond
Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com

On 10/10/2011 9:54 PM, Wilson Hernandez wrote:
[quotes the previous message in this thread in full: the config changes made following Amos's suggestions]
Re: [squid-users] Re: Password for ssl/https key file
On 11/10/11 09:04, Sébastien WENSKE wrote:
"Hi guys, hope you are well! Which program can I use with the sslpassword_program directive? I want to enter the key passphrase manually, but I don't want Squid to run in the foreground."

Any script which produces the password on demand will do. For example:

#!/bin/sh
echo "secret password"

Squid passes the key filename as a command-line parameter if you need it.

Amos
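Putting the thread together, the wiring might look like this (the script path and certificate filenames are placeholders; note gurgo's earlier point that in 3.1 sslpassword_program must appear before https_port, or Squid still prompts on startup):

```
# squid.conf: order matters in 3.1 (see the bug noted in this thread)
sslpassword_program /usr/local/bin/sslpass.sh
https_port 443 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
```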