[squid-users] squid ntlm/basic authentication
Just a simple question. I understand that NTLM and Basic are two different types of proxy authentication. I was wondering, in general, do proxies always support Basic, and sometimes support NTLM? Or can you get pure NTLM-only proxies? Jon
Re: Res: Re: Res: Re: [squid-users] -- wb_group cache time
Hi! I'm using wb_ntlm_auth and wb_group and all works fine with W2k AD. Try starting winbindd with -n to disable winbind caching; I also set ttl=5, but that's not necessary.

----- Original Message -----
From: Henrik Nordstrom [EMAIL PROTECTED]
To: Alex Carlos Braga Antão [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, August 19, 2003 12:30 AM
Subject: Re: Res: Re: Res: Re: [squid-users] -- wb_group cache time

On Monday 18 August 2003 20.02, Alex Carlos Braga Antão wrote:
> Where do I find the wb_group helpers to make squid work with Samba 3.0?

The wbinfo based helper should work I think (helpers/external/wbinfo_group). The wb_auth and wb_ntlm_auth helpers are both replaced by the Samba ntlm_auth helper in Samba-3, but I am not sure if there is a direct equivalent to wb_group yet. This is something to discuss with the Samba team.

Regards
Henrik
--
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org
If you need commercial Squid support or cost effective Squid or firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]
Re: [squid-users] How to disable TCP dead detection?
Hi Henrik, Well, I noticed some different behaviour. Here is a cache.log extract:

---
2003/08/19 17:16:54| TCP connection to 127.0.0.1/8000 failed
2003/08/19 17:16:54| Detected DEAD Parent: 127.0.0.1/8000/0
2003/08/19 17:17:27| Failed to select source for 'http://www.b92.net/doc/aboutu'
2003/08/19 17:17:27|   always_direct = 0
2003/08/19 17:17:27|   never_direct = 1
2003/08/19 17:17:27|   timedout = 0
2003/08/19 17:17:52| Failed to select source for 'http://www.b92.net/doc/aboutu'
2003/08/19 17:17:52|   always_direct = 0
2003/08/19 17:17:52|   never_direct = 1
2003/08/19 17:17:52|   timedout = 0
2003/08/19 17:17:59| Failed to select source for 'http://www.b92.net/doc/aboutu'
2003/08/19 17:17:59|   always_direct = 0
2003/08/19 17:17:59|   never_direct = 1
2003/08/19 17:17:59|   timedout = 0
2003/08/19 17:18:06| Failed to select source for 'http://www.danas.co.yu/'
2003/08/19 17:18:06|   always_direct = 0
2003/08/19 17:18:06|   never_direct = 1
2003/08/19 17:18:06|   timedout = 0
2003/08/19 17:18:30| Failed to select source for 'http://www.cisco.com/'
2003/08/19 17:18:30|   always_direct = 0
2003/08/19 17:18:30|   never_direct = 1
2003/08/19 17:18:30| Detected REVIVED Parent: 127.0.0.1/8000
---

So, the DEAD parent was detected at 17:16:54, and shortly after that the parent was reachable again. However, there were 5-6 requests starting from 17:17:27 which were not served, although the parent was alive. We get the REVIVED message after about a minute, at 17:18:30. If there are fewer requests, my experience is that the period to detect a revived parent can be a few times longer. Do you know any way to change this behaviour (to speed up detection of a revived parent cache)? Thanks Vladimir

--- Henrik Nordstrom [EMAIL PROTECTED] wrote:

On Friday 15 August 2003 13.12, Vladimir wrote:
> When there is a delay in opening the ISDN line and ssh channel, Squid cannot reach the parent proxy on address localhost:8000 and it declares it dead (I think this is the famous TCP DEAD feature of Squid).
> After a few seconds the line is up and the parent is reachable, but in the browser I still get an error message that all parent proxies are down and the request cannot be forwarded. But the parent proxy is alive and kicking. The only way to fix it is to wait something like 10 minutes or so,

You should not need to wait at all. If you use never_direct then dead peers are supposed to be tried on each and every request as a last-resort path.

> My question is this: how to completely disable this TCP dead feature, and force Squid to forward EACH request to the proxy no matter if it is dead or not, because in my case this feature is not useful at all, and just makes trouble.

It is supposed to do exactly that in never_direct configurations.

Regards
Henrik
[squid-users] acl question
Hi, I'm new to this list. I'm using Squid 2.5.STABLE3 on a Linux 2.4.21 system running Apache 2.x. As this is a test phase, I figured I'd test out the acl parameters. I've encountered a strange problem. Perhaps someone out there might be able to figure it out.

acl noie browser -i MSIE
deny_info ERR_NOIE noie
http_access deny noie

The above, when uncommented, makes squid throw a segmentation fault. When I re-comment the three lines, Squid works fine. Here's what ERR_NOIE looks like:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html">
<title>Error: Access denied.</title>
</head>
<body>
<blockquote>
Please note that your browser is not sanctioned by this company.
</blockquote>
</body>
</html>

After experimenting, it's only the 2nd line that makes squid segfault. Is there something wrong with "deny_info ERR_NOIE noie" that I'm not aware of? Any help appreciated. CC
-- email: [EMAIL PROTECTED]
| A man who knows not where he goes,
| knows not when he arrives. |- Anon
** All information contained in this email is strictly **
** confidential and may be used by the intended recipient **
** only. **
[squid-users] Squid-3.0-PRE3 Compilation on IA64
Hello Developers, I am new to this list. I am involved in the epoll development in Squid-3.0 on the IA64 platform. I have compiled squid-3.0-pre3 with the configure options:

'--prefix=/usr/local/squidbug' '--enable-epoll' '--disable-poll' '--disable-select' '--disable-kqueue' '--enable-storeio=null,ufs,aufs' '--enable-async-io=16' '--with-file-descriptors=16384' '--with-pthreads'

for testing the epoll netio method on Squid with Linux kernel 2.4.20. The changes in squid.conf (other than normal options):
===
cache_dir null /dev/null
http_access allow all
cache_mem 1200 MB
half_closed_clients off
server_persistent_connections off

Squid-3.0-pre3 satisfies requests up to 300 requests/sec in Polygraph testing. Beyond that limit it could not keep up. The memory consumption of Squid with epoll support exceeds 1.9GB, so the Polygraph entries are getting errors. I have traced the squid-3.0 memory consumption with top:

Memory Usage:
=============
11:38am up 1:19, 3 users, load average: 1.00, 0.94, 0.63
43 processes: 41 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states: 67.22% user, 31.9% system, 0.0% nice, 1.19% idle
CPU1 states: 1.4% user, 9.16% system, 0.0% nice, 89.30% idle
Mem: 2053824K av, 2028928K used, 24896K free, 0K shrd, 2096K buff
Swap: 2040208K av, 20112K used, 2020096K free, 23328K cached
 PID USER  PRI NI SIZE  RSS  SHARE STAT %CPU %MEM TIME  COMMAND
1405 squid 16  0  1903M 1.9G 3520  R    99.9 23.7 13:29 squid

Polygraph entry at client side:
===============================
016.30| i-top1 254073 100.20 28097 61.87 0 3497
016.38| i-top1 254073 0.00 -1 -1.00 0 3498
016.47| i-top1 254073 0.00 -1 -1.00 0 3499

I think some of you have been involved in compiling squid-3.0 on the IA64 platform, so I want to know whether squid-3.0 with epoll netio normally consumes this much memory, and what the reason is that squid-3.0 consumes such huge memory for 350 requests/sec. Is there any possibility of a memory leak? Specifically in comm.cc, in the function:

void comm_old_write(int fd, const char *buf, int size, CWCB * handler, void *handler_data, FREE * free_func)

Any help regarding this problem is appreciated. Thanks -Muthukumar
RE: [squid-users] squid ntlm/basic authentication
> I was wondering, in general, do proxies always support Basic, and sometimes support NTLM?

Squid supports both Basic and NTLM authentication. This is the Squid mailing list, so information about other proxies will either be limited or non-existent.

> Or can you get pure NTLM-only proxies?

Squid can be configured to use only NTLM. Adam
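As a sketch, an NTLM-only setup in squid.conf might look like the following. The helper path and Samba's ntlm_auth helper are assumptions (they vary by install); with no Basic auth_param scheme configured, Squid offers only NTLM to clients:

```
# Hypothetical helper path - adjust to your install
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5

# Require authentication for all access; only NTLM is offered
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```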
RE: [squid-users] All ntlmauthenticator processes are busy
> If I'm going to implement for 500 users, how many ntlmauthenticator processes should I specify in squid.conf?

Hard to say - each individual setup is different. The general rule of thumb is one NTLM auth helper for each user. At the same time, our network has over 20 concurrent users and we use only 10 helpers (and the first three handle over 75% of the requests). I would suggest using Cache Manager to keep an eye on your NTLM auth helpers. It will show the number of requests handled by each helper, and the number of seconds since each helper last served a request. There are also statistics for the current queue length and the average service time. In addition to increasing the number of helpers, you can also increase the max_challenge_reuses and max_challenge_lifetime parameters. The problem might not lie with Squid - if the DC is busy, or the link is congested, that can cause delays that will make requests queue up. Adam
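The helper count and challenge-reuse tuning mentioned above look roughly like this in squid.conf (the numbers are illustrative, not recommendations; reusing challenges reduces helper load at some cost in security):

```
auth_param ntlm children 20
# Allow each NTLM challenge to be reused a few times before fetching
# a new one from the DC, and bound how long a challenge may live
auth_param ntlm max_challenge_reuses 10
auth_param ntlm max_challenge_lifetime 2 minutes
```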
[squid-users] Network Unreachable Error - Presentation at the end of the day...help!
Ladies and Gents, I am receiving a "Connection Failed: (101) Network Unreachable" error after I successfully authenticate against AD. If I dig from the server to a website I receive a positive response. If I ping a network IP I get a positive response. I have bound Squid to eth1, and when I do a port scan I see that the service is running. I have set a static IP address in the /etc/sysconfig/network-scripts/ifcfg-eth1 file for eth1. I have checked my /etc/resolv.conf file for the correct DNS servers. I have pinged the Squid server from a M$ box and received a favorable response. What have I overlooked? David Johnson | Network Administrator | Hampton University | Hampton, VA | 23669 | office 757.728.6528 | fax 757.727.5438 mailto:[EMAIL PROTECTED]
[squid-users] Sonicwall and empty url
Hello, I am new to Squid and I'd like to use it as a simple cache proxy server for web browsing. We use a Sonicwall appliance for firewalling, and we'd like to use the Proxy Web Server option of the Sonicwall. This option performs automatic proxy forwarding of all web requests coming from the lan (internal) interface of the Sonicwall. Users in the lan will use the proxy without configuring their browser, because the Sonicwall automatically forwards packets to the proxy. The Squid proxy is currently located on the DMZ, but can be located anywhere outside the lan. Now the problem: defining the Squid proxy in the Proxy Web Server field of the Sonicwall, Squid always returns "Invalid URL", because it receives an empty URL. If I use the same proxy from a browser, it works correctly. Here is an extract of the Squid log:

1061296569.027    2 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html
1061296572.283    2 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html
1061296573.872    1 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html
1061296583.608 1245 212.131.168.53 TCP_MISS/200 17744 GET http://www.apple.com/ - DIRECT/17.112.152.32 text/html
1061296584.105  545 212.131.168.53 TCP_MISS/200 982 GET http://statse.webtrendslive.com/S139226/button6.asp? - DIRECT/63.88.212.82 image/gif
1061296588.541 4980 212.131.168.53 TCP_MISS/200 1249 GET http://statse.webtrendslive.com/S130376/button6.asp? - DIRECT/63.88.212.82 image/gif
1061296810.333 1580 212.131.168.53 TCP_MISS/200 92149 GET http://www.cisco.com/ - DIRECT/198.133.219.25 text/html

As you can see, the requests from 212.131.168.53 (the browser) are performed correctly, while the requests from 10.192.45.54 (the redirection from the Sonicwall) are empty. I know (and have tried) that the Sonicwall works with other proxies too (e.g. Apache), so I don't understand what the problem is. Any help? Sincerely, Vito Parisi
RE: [squid-users] Network Unreachable Error - Presentation at the end of the day...help!
> I am receiving a Connection Failed: (101) Network Unreachable error after I successfully authenticate against AD.

Can you browse from a workstation if you don't use Squid? Have you tried using Squid's client program from the command line on the Squid server? Also, post your squid.conf (without any blank lines or comments). Adam
RE: [squid-users] NTLM but still got pop-ups /w IE ?
> I did wbinfo -t and wbinfo -a domain\\username%password. It seems okay.

Please don't say "it seems okay", because that doesn't tell us anything useful. You're much better off posting the exact command line used and the exact output given. (Changing the username and password in the post is OK.) With wbinfo -a, did you see both a plain text and a challenge/response reply? Does basic authentication with the winbind helper work?

> Are these steps still required before joining the domain?

No. The steps you detail involve integrating PAM with Samba. Squid does not require this to use the winbind helpers. Adam
[squid-users] no HIT ?
Hi, I probably did something wrong with my squid. The proxy seems to have no 'HIT' whatsoever, it continues to give me only MISSES. Can somebody tell me what happened ? Thanks. ... Rully
RE: [squid-users] no HIT ?
The proxy seems to have no 'HIT' whatsoever, it continues to give me only MISSES. Post your squid.conf (without blank lines or comments). Adam
[squid-users] Modify HTTP Headers in httpd-accelerator mode
Hi all, Is it possible, using Squid, to modify the HTTP headers of an object when in httpd-accelerator mode? We have a Domino Web server that tells client browsers that GIFs, CSS, JS and other static files expire immediately, so copies of these files are sent to the client over and over again! I was wondering if I could stick Squid in front of the Domino box in httpd-accelerator mode, and use it to modify the HTTP headers to tell static content not to expire. Thanks in advance. Reuben [EMAIL PROTECTED]
Re: [squid-users] no HIT ?
Rully Budisatya wrote:
> Hi, I probably did something wrong with my squid. The proxy seems to have no 'HIT' whatsoever, it continues to give me only MISSES. Can somebody tell me what happened? Thanks. ... Rully

Depending on the methodology used (e.g. browser reload), testing itself may lead to this unwanted effect. Also make sure the objects accessed are cacheable, using e.g.: http://www.ircache.net/cgi-bin/cacheability.py (including your squid version and platform version in your post can be useful). M.
-- 'Love is truth without any future. (M.E. 1997)
[squid-users] Abnormal Traffic Generation
Dear All, I am having a problem with my squid box since I updated my kernel to 2.4.20-19.8smp. It is running as a squid cache on my network, serving 360 users at maximum. The specs are 80 GB SCSI storage, dual Intel PIII, 1 GB RAM. But it behaves abnormally: sometimes it starts generating inbound (download) traffic on the network. It seems like the squid box is downloading data, but there are no client requests for it. It really choked up the whole network and traffic just stopped for four hours, and I am really astonished that no other machine or users were generating traffic - there were only 50 users connected; it was just this linux box :(. Has someone else experienced the same? Hope to have an immediate response from you. Regards Ahmad Khan
[squid-users] Re: Squid RedHat RPMS
[EMAIL PROTECTED] wrote:
> Joe, Just a note, as requested at http://www.squid-cache.org/binaries.html, to thank you for providing the RedHat RPMs, and a cc to the list to say thank-you to the project team for Squid. Really mean that. Brilliant software!

I'm happy they have been useful for you, Bill.

> For information, I'm running RH8 (2.4.18-14) and the RH7 rpm squid-2.5.STABLE3-1rh_7x.i386 installed and ran fine; the RH9 rpm failed to install due to 3 library dependency errors (sorry - I didn't think to capture them for this note).

I think I know the deps in question. I'll see about getting a package just for 8.0 up, if I can dig up an 8.0 box that I haven't already upgraded to 9 (we're skipping straight from a custom 7.3 to a custom 9 for our systems, so I've never seriously maintained any 8.0 boxes - RPM was broken throughout the whole 8.0 cycle, so there was no way to switch to it).
--
Joe Cooper [EMAIL PROTECTED]
Web caching appliances and support.
http://www.swelltech.com
[squid-users] Problems
Hello, I am getting a bunch of messages like:

clientReadRequest: FD 89 Invalid Request

in my log file. Also I have people complaining that they cannot, for example, log into myEbay at .ebay.com - they get a 400 error from Squid. I am using WCCP and have squid running as a transparent proxy. In testing, I get the Invalid Request message when the client gets the 400 error. The problem I'm having is that the 400 error only tells you what the problem *MIGHT* be. I cannot seem to find any information in cache.log or access.log to find out what the problem *IS*. How can I figure this out and solve it? As soon as I turn squid off, everything routes fine. Oddly enough it only seems to happen for our test users using Lynx. People with PPP sessions using IE do not seem to be having the issue. Thanks, Serge.
Re: [squid-users] Re: Access Network problem
On Monday 18 August 2003 22.20, Sergio Alonso wrote:
> Can't get my network users to access squid. My acl's configuration looks like this:

Configuration looks fine. What do you get in access.log? And have you restarted Squid after making the configuration changes?

Regards Henrik

- My access.log doesn't log requests from other computers (only from localhost), and I always restart my computer after making any changes to squid.conf. During the Red Hat Linux 9 installation I set the firewall configuration to maximum; is this the problem?
Re: [squid-users] Squid Report Issue
I think I might have found part of the problem. I have an access.log file for that date that is 514 megs, and I also have a bk.access.log file for that date that is only 150 megs. The two files are only separated by a minute. Is it possible that for some reason squid split the access.log file, so that I now need to cat them together? How can I tell which one I need to cat to which one? If I use "cat filename filename", how can I tell which one should go into the other? What I am thinking happened is that while SARG was running, another cron job did a /us/sbin/squid -k shutdown. Below is the output from an ls -alht:

-rwxr--r--  1 root  root  514M Jul 31 15:14 access-07-25-03.log
-rwxr--r--  1 root  root  150M Jul 31 15:13 bk.access-07-25-03.log

Jim

From: Jim_Brouse/[EMAIL PROTECTED] ITRIBE.ORG
To: [EMAIL PROTECTED]
Sent: 08/18/2003 10:51 AM
Subject: [squid-users] Squid Report Issue

Currently, I am having a problem I cannot resolve at this point. I have tried looking everywhere for information on this particular squid report issue. I am using SARG for these reports. I do not always get these errors. There is a cron job every night, and sometimes I get reports that list about 7000 userids. We have less than 500 users that have Internet access, and the data for all 500 of those users is not in the report. The 7000 extra users do not show correct IP addresses - they are just a set of zeros. Here is an example of one of the userids: 007. Normally it is an IP address that appears in the userid field. Jim
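If the split really did happen, the order you cat the files in does not have to matter: access.log lines start with an epoch-seconds timestamp, so the two files can be merged numerically. A sketch using the file names from the ls output in the post:

```shell
# Merge the two logs ordered by the leading epoch timestamp, so it
# does not matter which file is listed first on the command line.
cat access-07-25-03.log bk.access-07-25-03.log 2>/dev/null | sort -n > merged-07-25-03.log
```

SARG can then be pointed at merged-07-25-03.log instead of the two fragments.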
Re: [squid-users] Squid3: ftp gateway in accelerator mode
On Tuesday 19 August 2003 02.04, Jim Flowers wrote:
> Did I then misunderstand your earlier response, to wit: "What is possible is to set up http access to the content of the FTP server via a redirector rewriting the accelerated URLs to ftp://, but you can not connect to Squid using FTP." Do I have to do both the redirector rewrite and always_direct?

Basically yes.
1. The URL forwarded by Squid needs to be a ftp:// URL.
2. The request needs to be forwarded directly to the server, not via a cache_peer.

Or a slightly longer version: the URL processed by Squid needs to become a ftp:// URL. This can be accomplished either by using the squid-3 protocol= option of http(s)_port, making all http requests received on that port be processed as if they were requests for ftp:// URLs, or by using a redirector helper, depending on how/when you want Squid to treat the request as a request for a ftp:// object. Using a redirector gives full freedom, allowing you to for example map http://www.example.com/pub/ to a FTP server but any other http://www.example.com/ requests to a web server. Then the requests which should end up at a FTP server need to be always_direct, to be forwarded using FTP to the FTP server. cache_peer is always HTTP (or HTTP over SSL if the peer is SSL enabled). The client needs to connect to Squid using http:// or https://, as these are the two types of ports where Squid can accept requests (http_port or https_port).

Regards Henrik
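A shell sketch of such a redirector helper follows. Host names and the /pub/ mapping are made up for illustration; Squid feeds the helper one request per line ("URL client-ip/fqdn ident method") and expects the, possibly rewritten, URL back on stdout:

```shell
# Hypothetical example: map http://www.example.com/pub/* onto an FTP
# server, and pass every other URL through unchanged.
rewrite_url() {
  case "$1" in
    http://www.example.com/pub/*)
      # strip the http:// host prefix and re-root the path on the FTP host
      echo "ftp://ftp.example.com/${1#http://www.example.com/}" ;;
    *)
      echo "$1" ;;  # echoing the URL unchanged tells Squid "no rewrite"
  esac
}

# In the real helper a loop like this reads requests from Squid on stdin:
#   while read url rest; do rewrite_url "$url"; done
```

Remember that the rewritten ftp:// requests must also match an always_direct rule, as described above, or Squid will try to send them to a cache_peer over HTTP.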
RE: [squid-users] Re: Access Network problem
My access.log doesn't log requests from other computers Then the other computers aren't connecting to Squid. From red hat linux 9 installation i set up firewall configuration to maximum This config doesn't allow any outside connections to the machine. You need to set an exception for Squid (tcp port 3128). Since this isn't a Squid problem, if you need assistance with this, you'll get the best help from the RedHat forums. Adam
Re: [squid-users] Problems
Try this, but do not leave debug on for too long - just long enough to test why it is failing. debug is very verbose:

squid -k debug
(access some website)
squid -k debug

Jim

From: Serge Paquin [EMAIL PROTECTED]
To: Jim_Brouse/[EMAIL PROTECTED]
Sent: 08/19/2003 10:03 AM
Subject: Re: [squid-users] Problems

It doesn't say anything meaningful, other than items like:

2003/08/19 11:14:53| clientReadRequest: FD 221 Invalid Request

Everything else seems very standard and unrelated.

----- Original Message -----
From: Jim_Brouse/[EMAIL PROTECTED]
To: Serge Paquin [EMAIL PROTECTED]
Sent: Tuesday, August 19, 2003 12:54 PM
Subject: Re: [squid-users] Problems

Have you done a "tail -f /var/log/squid/cache.log"? That might provide some insight. Jim
[squid-users] Re: Squid/LDAP/eDirectory
On Tuesday 19 August 2003 07.03, [EMAIL PROTECTED] wrote:
> I am looking to have Squid 2.5 authenticate connection requests against a Novell eDirectory 8.62 server. A neat solution from my point of view is to configure Squid to use the supplied LDAP helper, across SSL.

Which the supplied helper does just fine. Several people use this for Novell NDS integration via LDAP, and I see no reason why it should not work with eDirectory as well.

> Is there any reason why I should additionally look at PAM authentication? Are there any potential benefits over what I have described above?

None I can see.. only complications.. PAM is mostly useful if you have the UNIX server already integrated into some authentication system and you want to use the same for Squid authentication. I.e. if the UNIX server where you run Squid is already fully integrated into your eDirectory domain, allowing login/pop3/imap etc using accounts from eDirectory, then using the same setup via PAM for Squid may be appropriate. However, even then it is often preferable to use the native Squid helpers rather than the PAM based helper, if the native helpers can do the job.

Regards Henrik
[squid-users] Squid Peer's problem.
Hi all: I have two squid boxes, the first for Internet (this one uses smb_auth) and the second for intranet (no authentication). I need to redirect traffic on the second squid box to the first squid box when it goes to the Internet. I put the following rules in the squid.conf file of the second squid box:

acl local-servers dstdomain .eds.com .eds.com.ar
always_direct allow local-servers
cache_peer proxy.eds.com.ar parent 80 3130 no-query default

When I go to the Internet through the second squid box, the first squid box asks me for user and password and doesn't give access. This is the first squid box's log (Internet access):

1061314557.472 74 192.168.27.10 TCP_DENIED/407 1697 GET http://www.yahoo.com.ar/ - NONE/- text/html

This is the second squid box's log:

1061292795.554 114 207.169.88.182 TCP_MISS/407 1746 GET http://www.yahoo.com.ar/ xzsl81 DEFAULT_PARENT/proxy.eds.com.ar text/html

Thanks in advance. Fernando Ampugnani, EDS Argentina - Software, Storage Network Global Operation Solution Delivery. Tel: 5411 4704 3428 Mail: [EMAIL PROTECTED]
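One approach worth trying (a sketch, not a confirmed fix for this setup): squid's cache_peer login= option can forward the client's proxy credentials to an authenticating parent. Note that login=PASS only works when the parent accepts Basic credentials, so it does not help if the parent insists on smb_auth challenge schemes the second box cannot relay:

```
# Forward the user's proxy-auth credentials on to the parent (Basic only)
cache_peer proxy.eds.com.ar parent 80 3130 no-query default login=PASS
```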
Re: [squid-users] How to disable TCP dead detection?
On Tuesday 19 August 2003 11.41, Vladimir wrote:
> Hi Henrik, Well, I noticed some different behaviour. Here is a cache.log extract:
> 2003/08/19 17:16:54| TCP connection to 127.0.0.1/8000 failed
> 2003/08/19 17:16:54| Detected DEAD Parent: 127.0.0.1/8000/0
> 2003/08/19 17:17:27| Failed to select source for 'http://www.b92.net/doc/aboutu'
> 2003/08/19 17:17:27|   always_direct = 0
> 2003/08/19 17:17:27|   never_direct = 1
> 2003/08/19 17:17:27|   timedout = 0
> 2003/08/19 17:17:52| Failed to select source for
> 2003/08/19 17:18:30| Failed to select source for 'http://www.cisco.com/'
> 2003/08/19 17:18:30|   always_direct = 0
> 2003/08/19 17:18:30|   never_direct = 1
> 2003/08/19 17:18:30| Detected REVIVED Parent: 127.0.0.1/8000

Try using a lower peer_connect_timeout, or masquerading of outgoing locally generated traffic. Most likely your dialup is assigned a dynamic IP address, disturbing the probes initiated by Squid.

> If there are fewer requests, my experience is that the period to detect a revived parent can be a few times longer.

If there are no requests then Squid does not check. As soon as there is a request, Squid checks whether the parent is alive. Which Squid version is this?

Regards Henrik
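The suggested tuning is a one-line change in squid.conf; the value shown is illustrative, not a recommendation:

```
# Give up on peer TCP connects sooner, so the liveness probes
# triggered by new requests recover the parent faster
peer_connect_timeout 10 seconds
```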
[squid-users] axel + squid - security argument
Sorry for trolling, but I also thought of another argument: since squid would download one file via different mirrors/gateways, and thus via different connections, it would increase the security of file transfers. It would be great if such a solution were also possible for https documents.
[squid-users] Re: Squid + LDAP
On Tuesday 19 August 2003 16.21, Arias, Sebastian Alejandro - (Ext Arg) wrote:
> Could you give me some help to use squid_ldap_auth? ... - I tried with this before but I didn't succeed, that's why I'm using ldap_auth - CN=user name,OU=it,OU=sys,OU=user accounts,dc=ar,dc=domain,dc=com

If all your users are directly below ou=it then all you need is

-u cn -b "OU=it,OU=sys,OU=user accounts,dc=ar,dc=domain,dc=com"

which will tell Squid that the user's DN is always of the form

cn=username,OU=it,OU=sys,OU=user accounts,dc=ar,dc=domain,dc=com

If your users are distributed over multiple OUs then you need to search for the user's DN with the -f argument, probably something like

-b "OU=user accounts,dc=ar,dc=domain,dc=com" -f "(&(objectClass=Person)(CN=%s))"

Other filters are possible, mainly depending on the structure of the user objects in your LDAP tree and what LDAP attribute you want to use for the login name. If you have further questions regarding the squid_ldap_auth helper please use the squid-users mailing list.

Regards Henrik
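Put together as a complete squid.conf line, the search-filter variant might look like this. The helper path and the LDAP server host name are assumptions for illustration:

```
# Hypothetical helper path and server name - adjust to your site
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "OU=user accounts,dc=ar,dc=domain,dc=com" -f "(&(objectClass=Person)(CN=%s))" ldap.ar.domain.com
acl ldap-users proxy_auth REQUIRED
http_access allow ldap-users
```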
Re: [squid-users] Re: Access Network problem
On Tuesday 19 August 2003 18.50, Sergio Alonso wrote:
> - My access.log doesn't log requests from other computers (only from localhost), also I always restart my computer after making any changes to squid.conf. From the Red Hat Linux 9 installation I set the firewall configuration to maximum; is this the problem?

Well... if you set the firewall level to maximum then no connections from other computers to your computer are allowed by the firewall, and Squid will certainly not be accessible unless you reconfigure the firewall to indicate that it should be allowed. If you use the RedHat firewall then you also need to use the corresponding administrative tool to define what the firewall should allow, i.e. you need to allow local computers to connect to the http_port of your Squid (usually port 3128).

Regards Henrik
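On Red Hat 9 the exception can be added, for example, with an iptables rule like the following (a sketch run as root; the same can be done through lokkit or the redhat-config-securitylevel tool):

```
# Allow LAN clients to reach Squid's http_port (default 3128)
iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
# Persist the rule across reboots on Red Hat systems
service iptables save
```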
Re: [squid-users] Stopping windows update
On Tuesday 19 August 2003 03.00, Fajar Priyanto wrote:
> Hello guys, I'm sorry for putting this question on the list, because I think it's related to squidGuard more than squid. Or both.
> 1. Can windowsupdate.microsoft.com be denied rather than redirected?

Being a redirector, squidGuard can only redirect. Denials done by squidGuard are done by redirecting to the access denied page. If you want real access denials then you need to use Squid ACLs.

> 2. Can squid's ACLs and squidGuard's work together?

Yes. The order of things is roughly:
1. http_access
2. redirect_program
3. request forwarded
4. http_reply_access when the response is seen.

Regards Henrik
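A sketch of combining the two - a real Squid ACL denial for Windows Update, with squidGuard still filtering everything that gets through (the squidGuard path is an assumption):

```
# Real denial via Squid ACLs; http_access runs before the redirector
acl winupdate dstdomain windowsupdate.microsoft.com
http_access deny winupdate

# squidGuard then sees whatever http_access allowed through
redirect_program /usr/local/bin/squidGuard
```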
Re: [squid-users] distributed http packets on transparent web cache
On Tuesday 19 August 2003 05.18, Nont Banditwong wrote:
> I use squid-2.5.STABLE3 with WCCPv2 and IOS 12.0(25). I've looked into wccpv2.c and it seems like WCCPv2 is not fully implemented, right?

Probably right. It was not last time I looked, and I have not heard of any WCCPv2 related development in a long time. The last status I heard was that the Squid WCCPv2 support was functional to the level that multiple caches could join a single router, i.e. to the same level as WCCPv1, but using WCCPv2 mechanisms, which are better supported by Cisco in their routers.

Regards Henrik
Re: [squid-users] squid ntlm/basic authentication
On Tuesday 19 August 2003 08.02, jonathan soong wrote:
> I was wondering, in general, do proxies always support Basic, and sometimes support NTLM?

Basic is part of the HTTP standard and is usually supported by all HTTP related software. NTLM is a Microsoft invention, violating important aspects of the HTTP standard in the process (as expected given their track record of actually reading RFCs while implementing new features..). Because of the violations of HTTP, the NTLM authentication scheme cannot be proxied by HTTP compliant proxies.

> Or can you get pure NTLM-only proxies?

Most proxies supporting NTLM authentication can be configured to use NTLM authentication only. This will however completely lock your users into using Microsoft browsers only, and only on Microsoft OSes (well.. this said, some other browsers have started to implement rudimentary support for NTLM, to interoperate with broken servers only supporting NTLM). And forget about trying to use any Java engine other than the Microsoft looks-like-Java engine..

Regards Henrik
Re: [squid-users] acl question
On Tuesday 19 August 2003 12.30, cc wrote: acl noie browser -i MSIE deny_info ERR_NOIE noie http_access deny noie The above, when uncommented makes squid throw a segmentation fault. When I recomment the three lines, Squid works fine. Probably this: http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE3-deny_info Regards Henrik -- Donations welcome if you consider my Free Squid support helpful. https://www.paypal.com/xclick/business=hno%40squid-cache.org If you need commercial Squid support or cost effective Squid or firewall appliances please refer to MARA Systems AB, Sweden http://www.marasystems.com/, [EMAIL PROTECTED]
Re: [squid-users] Modify HTTP Headers in httpd-accelerator mode
On Tuesday 19 August 2003 17.06, Reuben Pearse wrote: I was wondering if I could stick Squid in front of the Domino box in httpd-accelerator mode, and use it to modify the HTTP Headers and tell static content not to expire. There is no configuration support for this, but if you know a little C coding then it is not too hard. See the clientBuildReplyHeader function. Regards Henrik -- Donations welcome if you consider my Free Squid support helpful. https://www.paypal.com/xclick/business=hno%40squid-cache.org If you need commercial Squid support or cost effective Squid or firewall appliances please refer to MARA Systems AB, Sweden http://www.marasystems.com/, [EMAIL PROTECTED]
Re: [squid-users] Squid Report Issue
On Tuesday 19 August 2003 19.07, Jim_Brouse/[EMAIL PROTECTED] wrote: Is it possible for some reason that squid split the access.log file so now i need to cat them together? Squid never splits the access.log. What Squid may do is rotate access.log into access.log.1, access.log.2, access.log.3 etc.. If you find other log files then these are either generated by other software, or extracted logs generated by one of your friends administering the server. Regards Henrik -- Donations welcome if you consider my Free Squid support helpful. https://www.paypal.com/xclick/business=hno%40squid-cache.org If you need commercial Squid support or cost effective Squid or firewall appliances please refer to MARA Systems AB, Sweden http://www.marasystems.com/, [EMAIL PROTECTED]
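For reference, rotation is triggered by `squid -k rotate` (often from cron), and rotated logs can simply be concatenated oldest-first when a report tool needs a single file. A small self-contained sketch, using dummy files in place of real rotated logs:

```shell
# Simulate Squid's numeric rotation scheme with dummy files,
# then concatenate oldest-first back into one chronological log.
printf 'oldest\n' > access.log.2
printf 'older\n'  > access.log.1
printf 'newest\n' > access.log
cat access.log.2 access.log.1 access.log > access-combined.log
wc -l < access-combined.log
```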
Re: [squid-users] Squid Peer's problem.
On Tuesday 19 August 2003 19.42, Ampugnani, Fernando wrote: When I go to the Internet through the second squid box, the first squid box asks me for a user and password and doesn't give access. See the cache_peer directive on the second Squid.. Regards Henrik -- Donations welcome if you consider my Free Squid support helpful. https://www.paypal.com/xclick/business=hno%40squid-cache.org If you need commercial Squid support or cost effective Squid or firewall appliances please refer to MARA Systems AB, Sweden http://www.marasystems.com/, [EMAIL PROTECTED]
[squid-users] authenticateNTLMHandleReply: called with no result string
We changed squid to start as squid so we could try to get a core for a different problem, but we've been seeing this one once or twice a day: In /var/log/messages I get: (squid): authenticateNTLMHandleReply: called with no result string squid[PID]: Squid Parent: child process PID exited due to signal 6 Squid Cache: Version 2.5.STABLE3 configure options: s390-redhat-linux --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid --localstatedir=/var --sysconfdir=/etc/squid --enable-poll --enable-snmp --enable-removal-policies=heap,lru --enable-storeio=aufs,coss,diskd,ufs,null --enable-ssl --with-openssl=/usr/kerberos --enable-delay-pools --enable-linux-netfilter --with-pthreads '--enable-auth=basic ntlm' --enable-basic-auth-helpers --enable-ntlm-auth-helpers --enable-external-acl-helpers GNU gdb Red Hat Linux (5.1-1) Copyright 2001 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type show copying to see the conditions. There is absolutely no warranty for GDB. Type show warranty for details. This GDB was configured as s390-redhat-linux... (no debugging symbols found)... Core was generated by `(squid) -D'. Program terminated with signal 6, Aborted. Reading symbols from /lib/libcrypt.so.1...done. Loaded symbols for /lib/libcrypt.so.1 Reading symbols from /lib/libssl.so.2...done. Loaded symbols for /lib/libssl.so.2 Reading symbols from /lib/libcrypto.so.2...done. Loaded symbols for /lib/libcrypto.so.2 Reading symbols from /lib/libpthread.so.0...done. 
warning: Unable to set global thread event mask: generic error [New Thread 1024 (LWP 6952)] Error while reading shared library symbols: Cannot enable thread event reporting for Thread 1024 (LWP 6952): generic error Reading symbols from /lib/librt.so.1...done. Loaded symbols for /lib/librt.so.1 Reading symbols from /lib/libm.so.6...done. Loaded symbols for /lib/libm.so.6 Reading symbols from /lib/libresolv.so.2...done. Loaded symbols for /lib/libresolv.so.2 Reading symbols from /lib/libnsl.so.1...done. Loaded symbols for /lib/libnsl.so.1 Reading symbols from /lib/libc.so.6...done. Loaded symbols for /lib/libc.so.6 Reading symbols from /lib/ld.so.1...done. Loaded symbols for /lib/ld.so.1 Reading symbols from /lib/libnss_files.so.2...done. Loaded symbols for /lib/libnss_files.so.2 #0 0x4025f9de in kill () at soinit.c:56 56 soinit.c: No such file or directory. in soinit.c (gdb) bt #0 0x4025f9de in kill () at soinit.c:56 #1 0x4016f360 in raise (sig=6) at signals.c:65 #2 0x40261124 in abort () at ../sysdeps/generic/abort.c:88 #3 0x004969d6 in strcpy () at soinit.c:56 (gdb) q Any ideas? ~ Daniel
RE: [squid-users] Sonicwall and empty url
You must use a few configuration parameters in squid.conf that tell squid it is supposed to run in 'transparent' mode. I am not very familiar myself with squid but I know these options are fairly well documented inside the squid.conf file. Regards, Alvaro Figueroa -----Original Message----- From: Vito Parisi [mailto:[EMAIL PROTECTED] Sent: Tuesday, 19 August 2003 10:14 To: [EMAIL PROTECTED] Subject: [squid-users] Sonicwall and empty url Hello, I am new to Squid and I'd like to use it as a simple cache proxy server for web browsing. We use a Sonicwall appliance for firewalling, and we'd like to use the Proxy Web Server option of Sonicwall. This option performs automatic proxy forwarding of all web requests coming from the lan (internal) interface of the Sonicwall. Users in the lan will use the proxy without configuring their browser, because the Sonicwall automatically forwards packets to the proxy. The Squid proxy is currently located on the DMZ, but can be located anywhere outside the lan. Now the problem: defining the Squid proxy in the Proxy Web Server field of the Sonicwall, Squid always returns Invalid URL, because it receives an empty URL. If I use the same proxy from a browser, it works correctly. Here is an extract of the Squid log: 1061296569.027 2 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html 1061296572.283 2 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html 1061296573.872 1 10.192.45.54 NONE/400 1437 GET / - NONE/- text/html 1061296583.608 1245 212.131.168.53 TCP_MISS/200 17744 GET http://www.apple.com/ - DIRECT/17.112.152.32 text/html 1061296584.105 545 212.131.168.53 TCP_MISS/200 982 GET http://statse.webtrendslive.com/S139226/button6.asp? - DIRECT/63.88.212.82 image/gif 1061296588.541 4980 212.131.168.53 TCP_MISS/200 1249 GET http://statse.webtrendslive.com/S130376/button6.asp?
- DIRECT/63.88.212.82 image/gif 1061296810.333 1580 212.131.168.53 TCP_MISS/200 92149 GET http://www.cisco.com/ - DIRECT/198.133.219.25 text/html As you can see, the requests from 212.131.168.53 (the browser) are performed correctly, while requests from 10.192.45.54 (the redirection from the Sonicwall) are empty. I know (and tried) that the Sonicwall works with other proxies too (i.e. Apache), so I don't understand what the problem is. Any help? Sincerely, Vito Parisi
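The `GET / -` entries suggest Squid is receiving intercepted requests (path only, no full URL) rather than proxy-style requests, which is exactly the case the transparent/accelerator directives handle. A minimal sketch of those directives for Squid 2.4/2.5, which let Squid rebuild the URL from the Host header instead of rejecting the request; whether this matches the Sonicwall's forwarding mode is an assumption:

```
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```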
Re: [squid-users] Squid Peer's problem.
On Tuesday 19 August 2003 21.57, Ampugnani, Fernando wrote: Henrik, which is the best way to restrict all sites that go to the internet in the second squid box except those I permit? By only allowing access in http_access to the sites you permit. Must I do this in the first squid box or in the second squid box? Does not matter much. Because I suppose that the validation is managed by the first squid box, the second squid box only forwards all internet traffic to it, isn't it? Both can do full validation. The fact that one forwards requests to another is just a routing decision and does not in any way modify the capabilities of either Squid. If you have the rules in the Squid closest to the Internet then the rules matter no matter which internal proxy the user connects via. If you have the rules on the proxy closest to the user then the processing of the rules is somewhat more efficient as there is no need to query the Internet connected Squid.. Regards Henrik
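A minimal sketch of such a whitelist in squid.conf (the domains are placeholders; order matters, since http_access rules are evaluated top to bottom and the first match wins):

```
acl allowed_sites dstdomain .example.com .example.net
http_access allow allowed_sites
http_access deny all
```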
[squid-users] File Descriptors
I have finally managed to get WCCPv2 working on my box. Works great. Now I am having errors show up in my log saying I am running out of file descriptors. I have checked my /proc/sys/fs/file-max and /proc/sys/fs/inode-nr files and they both are set pretty high. When I check the cachemgr runtime information page it shows squid seeing 1024 file descriptors available. Do I need to recompile squid to fix this? Paul Fiero Information Security Analyst City of Austin Communications and Technology Management (512) 974-3559 [EMAIL PROTECTED] Vampireware /n/, a project, capable of sucking the lifeblood out of anyone unfortunate enough to be assigned to it, which never actually sees the light of day, but nonetheless refuses to die.
[squid-users] Squid 2.5Stable3 cores
Squid is dumping core frequently now (every few minutes). This is 2.5.STABLE3 on RH 7.2 for s/390. This GDB was configured as s390-redhat-linux... (no debugging symbols found)... Core was generated by `(squid) -D'. Program terminated with signal 6, Aborted. Reading symbols from /lib/libcrypt.so.1...done. Loaded symbols for /lib/libcrypt.so.1 Reading symbols from /lib/libssl.so.2...done. Loaded symbols for /lib/libssl.so.2 Reading symbols from /lib/libcrypto.so.2...done. Loaded symbols for /lib/libcrypto.so.2 Reading symbols from /lib/libpthread.so.0...done. warning: Unable to set global thread event mask: generic error [New Thread 1024 (LWP 4249)] Error while reading shared library symbols: Cannot enable thread event reporting for Thread 1024 (LWP 4249): generic error Reading symbols from /lib/librt.so.1...done. Loaded symbols for /lib/librt.so.1 Reading symbols from /lib/libm.so.6...done. Loaded symbols for /lib/libm.so.6 Reading symbols from /lib/libresolv.so.2...done. Loaded symbols for /lib/libresolv.so.2 Reading symbols from /lib/libnsl.so.1...done. Loaded symbols for /lib/libnsl.so.1 Reading symbols from /lib/libc.so.6...done. Loaded symbols for /lib/libc.so.6 Reading symbols from /lib/ld.so.1...done. Loaded symbols for /lib/ld.so.1 Reading symbols from /lib/libnss_files.so.2...done. Loaded symbols for /lib/libnss_files.so.2 #0 0x4025f9de in kill () at soinit.c:56 56 soinit.c: No such file or directory.
in soinit.c (gdb) bt #0 0x4025f9de in kill () at soinit.c:56 #1 0x4016f360 in raise (sig=6) at signals.c:65 #2 0x40261124 in abort () at ../sysdeps/generic/abort.c:88 #3 0x004967e2 in strcpy () at soinit.c:56 (gdb) /var/log/messages shows: Aug 19 11:22:39 linprox squid[1440]: Squid Parent: child process 4249 started Aug 19 11:24:36 linprox kernel: User process fault: interruption code 0x10 Aug 19 11:24:36 linprox kernel: failing address: 101 Aug 19 11:24:36 linprox kernel: CPU:0 Aug 19 11:24:36 linprox kernel: Process squid (pid: 4249, stackpage=05163000) Aug 19 11:24:36 linprox kernel: Aug 19 11:24:36 linprox kernel: User PSW:070dd000 c02b601e Tainted: PF Aug 19 11:24:36 linprox kernel: task: 05162000 ksp: 05163e80 pt_regs: 05163f68 Aug 19 11:24:36 linprox kernel: User GPRS: Aug 19 11:24:36 linprox kernel: 0092e808 0003 01010101 000dc6e8 Aug 19 11:24:36 linprox kernel: 01010101 4035cb50 7fffd188 Aug 19 11:24:36 linprox kernel: 01010101 7fffca98 Aug 19 11:24:36 linprox kernel: c03601ec c02b5ffc c02827ce 7fffca98 Aug 19 11:24:36 linprox kernel: User ACRS: Aug 19 11:24:36 linprox squid[1440]: Squid Parent: child process 4249 exited due to signal 6 Aug 19 11:24:36 linprox kernel: 40177f60 Aug 19 11:24:37 linprox kernel: Aug 19 11:24:37 linprox last message repeated 2 times Aug 19 11:24:37 linprox kernel: User Code: Aug 19 11:24:37 linprox kernel: bf 31 20 00 a7 84 00 1c a7 2a 00 01 18 32 14 31 a7 74 ff f8 Aug 19 11:24:40 linprox squid[1440]: Squid Parent: child process 4594 started Squid Cache: Version 2.5.STABLE3 configure options: s390-redhat-linux --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid --localstatedir=/var --sysconfdir=/etc/squid --enable-poll --enable-snmp 
--enable-removal-policies=heap,lru --enable-storeio=aufs,coss,diskd,ufs,null --enable-ssl --with-openssl=/usr/kerberos --enable-delay-pools --enable-linux-netfilter --with-pthreads '--enable-auth=basic ntlm' --enable-basic-auth-helpers --enable-ntlm-auth-helpers --enable-external-acl-helpers Here's the squid.conf http_port 3128 pid_filename /var/run/squid/squid.pid debug_options ALL,1 cache_dir null /tmp cache_store_log none cache_access_log /var/www/html/squid/daily/logs/access.log cache_log /var/www/html/squid/daily/logs/cache.log auth_param ntlm program /usr/lib/squid/wb_ntlmauth auth_param ntlm children 96 auth_param ntlm max_challenge_reuses 0 auth_param ntlm max_challenge_lifetime 2 minutes auth_param basic program /usr/lib/squid/wb_auth auth_param basic children 5 auth_param basic realm Proxy auth_param basic credentialsttl 40 hours external_acl_type NT_global_group ttl=3600 negative_ttl=600 concurrency=20 %LOGIN /usr/lib/squid/wbinfo_group.pl refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 0 20% 4320 acl password proxy_auth REQUIRED acl all src 0.0.0.0/0.0.0.0 no_cache deny all acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl
[squid-users] AuthenticateNTLMFixErrorHeader
Hi all, Came in this morning to find my Squid shut down and many references to the following in the logs. I have no idea what is causing this or even if they are related. We are using NTLM authentication and are not experiencing any problems that have been brought to my attention. Squid -v gives: Squid Cache: Version 2.5.STABLE3-20030803 configure options: --enable-delay-pools --enable-auth=ntlm,basic --enable-basic-auth-helpers=winbind --enable-ntlm-helpers=winbind Aug 18 14:34:14 kirk (squid): unexpected state in AuthenticateNTLMFixErrorHeader. Simon Bryan IT Manager OLMC Parramatta
RE: [squid-users] NTLM but still got pop-ups /w IE ?
With wbinfo -a, did you see both a plain text and challenge response reply? Here's the exact result I got from wbinfo -a and wbinfo -t # ./wbinfo -a mydomain\\myuser%mypass plaintext password authentication succeeded challenge/response password authentication succeeded #./wbinfo -t Secret is good Does basic authentication with the winbind helper work? #./wb_auth -d /wb_auth[2022](wb_basic_auth.c:167): basic winbindd auth helper build Aug 15 2003, 14:56:47 starting up... mydomain\myuser mypass /wb_auth[2022](wb_basic_auth.c:129): Got 'mydomain\myuser mypass' from squid (length: 25). /wb_auth[2022](wb_basic_auth.c:55): winbindd result: 1 /wb_auth[2022](wb_basic_auth.c:58): sending 'OK' to squid OK Still the pop-up shows. Any idea? Regards, Arief K
Re: [squid-users] acl question
Henrik Nordstrom wrote: On Tuesday 19 August 2003 12.30, cc wrote: acl noie browser -i MSIE deny_info ERR_NOIE noie http_access deny noie The above, when uncommented makes squid throw a segmentation fault. When I recomment the three lines, Squid works fine. Probably this: http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE3-deny_info Not probably but definitely. :) Thanks Henrik! -- email: [EMAIL PROTECTED] | A man who knows not where he goes, | knows not when he arrives. |- Anon
Re: [squid-users] Problems
Hello Jim, I tried debug. I found the lines where I get the 400 error but I don't see anything in the log from top to bottom that says anything about errors. Any ideas of something else I can try? Something that I might be looking for? Serge. - Original Message - From: Jim_Brouse/[EMAIL PROTECTED] To: Serge Paquin [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, August 19, 2003 1:27 PM Subject: Re: [squid-users] Problems Try this, but do not leave debug on too long, just long enough to test why it is failing; debug is very verbose: squid -k debug access some website squid -k debug Jim - Original Message - From: Serge Paquin [EMAIL PROTECTED] To: Jim_Brouse/[EMAIL PROTECTED] Sent: 08/19/2003 10:03 AM It doesn't say anything meaningful other than items like: 2003/08/19 11:14:53| clientReadRequest: FD 221 Invalid Request Everything else seems very standard and unrelated. - Original Message - From: Jim_Brouse/[EMAIL PROTECTED] To: Serge Paquin [EMAIL PROTECTED] Sent: Tuesday, August 19, 2003 12:54 PM Subject: Re: [squid-users] Problems Have you done a tail -f /var/log/squid/cache.log that might provide some insight, Jim - Original Message - From: Serge Paquin [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: 08/19/2003 09:39 AM Subject: [squid-users] Problems Hello, I am getting a bunch of messages like: clientReadRequest: FD 89 Invalid Request in my log file. Also I have people complaining that they cannot, for example, log into myEbay at .ebay.com. They get a 400 error from Squid. I am using WCCP and have squid running as a transparent proxy. In testing I get the Invalid Request when the client gets the 400 error. The problem I'm having is that the 400 error tells you what the problem *MIGHT* be. I cannot seem to find any information in cache.log or access.log to find out what the problem *IS*. How can I figure this out and solve it? As soon as I turn squid off everything then routes fine. Oddly enough it only seems to happen for our test users using Lynx.
People with PPP sessions using IE do not seem to be having the issue. Thanks, Serge.
Re: [squid-users] File Descriptors
Now I am having errors show up in my log saying I am running out of file descriptors. Using ulimit (or its equivalent), set the hard and soft limits both before compiling and before running Squid. This has been recently discussed on the list (today, I think), and also several times in the archives. A quick check of the archives would have gotten you a faster answer. Adam
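A sketch of the idea (the 4096 value is an arbitrary example): the descriptor limit in effect when Squid's configure script runs caps what the binary is built for, so raise it in the build shell and again in the startup script.

```shell
# Raise the soft file-descriptor limit for this shell and its children.
# Must stay at or below the hard limit (ulimit -Hn); run this before
# ./configure and again in the script that starts Squid.
ulimit -Sn 4096
ulimit -Sn    # verify the new soft limit
```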
Re: [squid-users] Squid3: ftp gateway in accelerator mode
Thanks, that's very clear. I can do the rewriting OK. What I can't figure out is what directive(s) I use to accomplish "forwarded using FTP to the FTP server", as I thought that was the function of cache_peer originserver. How do I tell squid where the ftp server is (not on the same host) and to use ftp? One last hint and I think I've got it, please. -- Original Message --- From: Henrik Nordstrom [EMAIL PROTECTED] To: Jim Flowers [EMAIL PROTECTED] Cc: Squid Users [EMAIL PROTECTED] Sent: Tue, 19 Aug 2003 19:17:53 +0200 Subject: Re: [squid-users] Squid3: ftp gateway in accelerator mode Then the requests which should end up at an FTP server need to be always_direct to be forwarded using FTP to the FTP server. cache_peer is always HTTP (or HTTP over SSL if the peer is ssl enabled)
Re: [squid-users] Squid3: ftp gateway in accelerator mode
On Wed, 2003-08-20 at 13:02, Jim Flowers wrote: Thanks, that's very clear. I can do the rewriting OK. What I can't figure out is what directive(s) I use to accomplish "forwarded using FTP to the FTP server", as I thought that was the function of cache_peer originserver. How do I tell squid where the ftp server is (not on the same host) and to use ftp? One last hint and I think I've got it, please. acl ftpserver dstdomain ftpservername always_direct allow ftpserver You may also need to relax any never_direct rules you have - just enough - to allow this. Cheers, Rob -- GPG key available at: http://members.aardvark.net.au/lifeless/keys.txt. signature.asc Description: This is a digitally signed message part
[squid-users] Problems getting delay pools to work with ident
I am having trouble getting delay pools to work with ident... I am running Squid-2.5.STABLE3 on OpenBSD 3.3 Clients are running Windows 98 with Identd (from identd.sourceforge.net) I have the following in my squid.conf acl allowed_hosts src 10.0.0.0/24 acl good_guys ident_regex -i laerer acl bad_guys ident_regex -i kursist acl bad_files urlpath_regex -i .exe .mp3 delay_pools 3 delay_class 1 2 delay_class 2 1 delay_class 3 2 delay_parameters 1 65536/65536 -1/-1 delay_parameters 2 4096/4096 delay_parameters 3 65536/65536 8192/16384 delay_access 1 allow good_guys delay_access 1 deny all delay_access 2 allow bad_files delay_access 2 deny all delay_access 3 allow allowed_hosts delay_access 3 deny all The idea is to allow good_guys full bandwidth no matter what, while limiting everyone else when downloading bad_files, though allowing them decent surf-speed... No matter who is logged on, the delay_access 3 rule is chosen! If good_guys starts downloading bad_files, the delay_access 2 rule is chosen! Both the good_guys and bad_guys acl works perfectly everywhere else...(e.g. http_access, reply_body_max_size...) If I change the good_guys and bad_guys acl's to use the src acltype instead of ident, the delay_access rules works perfectly. What am I doing wrong?
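One thing worth checking (an assumption on my part, not a confirmed fix): delay_access is evaluated at a point where Squid cannot wait for an asynchronous ident lookup, so the ident_regex ACLs may have no username to match yet, which would explain why the same ACLs work in http_access but not here. Forcing the lookup at connection time with ident_lookup_access should make the username available by the time the delay pool rules run; the sketch below reuses the allowed_hosts ACL from the config above:

```
# Hypothetical fix: perform ident lookups eagerly for all local clients
# so delay_access sees the username.
ident_lookup_access allow allowed_hosts
```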
Re: [squid-users] How to disable TCP dead detection?
This is 2.4 Stable 7. I will try out what you suggest. Vladimir --- Henrik Nordstrom [EMAIL PROTECTED] wrote: On Tuesday 19 August 2003 11.41, Vladimir wrote: Hi Henrik, Well, I noticed some different behaviour. Here is a cache.log extract: --- 2003/08/19 17:16:54| TCP connection to 127.0.0.1/8000 failed 2003/08/19 17:16:54| Detected DEAD Parent: 127.0.0.1/8000/0 2003/08/19 17:17:27| Failed to select source for 'http://www.b92.net/doc/aboutu' 2003/08/19 17:17:27| always_direct = 0 2003/08/19 17:17:27| never_direct = 1 2003/08/19 17:17:27| timedout = 0 2003/08/19 17:17:52| Failed to select source for 2003/08/19 17:18:30| Failed to select source for 'http://www.cisco.com/' 2003/08/19 17:18:30| always_direct = 0 2003/08/19 17:18:30| never_direct = 1 2003/08/19 17:18:30| Detected REVIVED Parent: 127.0.0.1/8000 Try using a lower peer_connect_timeout or masquerading of outgoing locally generated traffic. Most likely your dialup is assigned a dynamic IP address disturbing the probes initiated by Squid. If there are fewer requests, my experience is that the period to detect a revived parent can be a few times longer. If there are no requests then Squid does not check. As soon as there is a request Squid checks if the parent is alive. Which Squid version is this? Regards Henrik
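Henrik's first suggestion translates to a one-line change in squid.conf; the 5-second value below is an illustrative example (the shipped default is 30 seconds), so tune it to how quickly the ISDN/ssh tunnel normally comes up:

```
# Give up on a slow parent connect sooner, so a revived peer is
# detected and retried more quickly.
peer_connect_timeout 5 seconds
```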
[squid-users] Squid on FreeBSD
Hi, We have set up a transparent proxy on FreeBSD 4.8 using squid-2.4.STABLE7. We are using an Alteon switch to redirect web traffic. We have 5 other squid transparent proxies running on Redhat 6.2. On the FreeBSD machine I'm getting lots of TCP_CLIENT_REFRESH_MISS messages in access.log. However there isn't any TCP_CLIENT_REFRESH_XXX message in the other proxies running on the Redhat machines. Is it a configuration problem or do I need to change any parameter in the freebsd system ? Thanks a lot, Rohit Neupane
[squid-users] Re: Squid on FreeBSD
ODHIAMBO Washington wrote: * Rohit Nepali [EMAIL PROTECTED] [20030819 12:20] wrote: Hi, We have set up a transparent proxy on FreeBSD 4.8 using squid-2.4.STABLE7. We are using an Alteon switch to redirect web traffic. We have 5 other squid transparent proxies running on Redhat 6.2. On the FreeBSD machine I'm getting lots of TCP_CLIENT_REFRESH_MISS messages in access.log. However there isn't any TCP_CLIENT_REFRESH_XXX message in the other proxies running on the Redhat machines. Is it a configuration problem or do I need to change any parameter in the freebsd system ? Try using (s)diff to compare the files. All the systems have the same squid.conf file. Is it due to system tcp/ip buffer size? I'm not sure though. Thanks a lot. regards, Rohit -Wash