Re: [squid-users] squid eth alias
On Thu, 18 Dec 2003, Petre Bandac wrote:
> I make a request, and tcpdump shows the following: packets go from the client to 5.6.7.8:3128, but from there they go out to the internet via 1.2.3.4 (i.e. I access google.com, and tcpdump shows two legs of the traffic: one client - squid IP, then one from 1.2.3.4).
See tcp_outgoing_address. This makes sense, as Squid is a proxy: when forwarding a request it opens a new TCP connection to the requested server, and unless you have told Squid otherwise it will let the OS choose the most appropriate source address to use. This is also what most people want. Regards Henrik
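For reference, the directive Henrik mentions looks like this in squid.conf; a minimal sketch, assuming the alias address 5.6.7.8 from the post is the one that should appear as the source of Squid's server-side connections:

```conf
# Force the source address Squid uses when opening connections to origin servers
tcp_outgoing_address 5.6.7.8
```

In Squid 2.5 the directive can also take ACLs, so different client groups can be mapped to different outgoing addresses; check squid.conf.default for the exact form in your version.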
Re: [squid-users] Cache-control modification and accelerator mode
On Thu, 18 Dec 2003, Gabriel Wicke wrote:
> On the xhtml content I set the Cache-Control header to must-revalidate,max-age=0,s-maxage=36000. The browser will always check back for changes, and squid will get a purge request if something changes. This breaks as soon as some transparent proxy gets in the way that picks up the s-maxage header.
It does not need to be a transparent proxy; any proxy supporting s-maxage will do this. The above headers say:
* Shared caches (i.e. proxies) may cache the object for 36000 seconds.
* All other caches (i.e. browsers) must check with their source (i.e. the proxy, in a proxied environment) whether the source knows about a newer copy.
> So I'd like to strip any s-maxage headers or set them to 0 when passing through squid. I took a look at header_access and header_replace: header_access Cache-control deny all / header_replace Cache-control must-revalidate,max-age=0,s-maxage=0. This makes the browser check back for everything on every request (stylesheets, pics...), which is very slow. What I'd like to achieve is a conditional replace, only if s-maxage was included in the original header. Is this possible with some acl matching magic?
If you know which URLs you need to do this on, then just make a urlpath_regex pattern matching the URLs this needs to be done on (or not done on), and then use this in header_access when determining if the header should be replaced. Regards Henrik
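Spelled out, the conditional replace Henrik describes could look roughly like this in squid.conf; the URL pattern here is a made-up placeholder, since only the poster knows which pages carry s-maxage:

```conf
# Rewrite Cache-Control only on the pages that carry s-maxage
# (the .xhtml pattern is an assumption, not from the original post)
acl smaxage_pages urlpath_regex -i \.xhtml$
header_access Cache-Control deny smaxage_pages
header_replace Cache-Control must-revalidate,max-age=0,s-maxage=0
```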
Re: [squid-users] authentication
On Thu, 18 Dec 2003, melvin melvin wrote:
> How do I write squid.conf so that certain IP addresses can pass thru the proxy without being prompted for authentication?
By allowing these access BEFORE where you require authentication in http_access. See the Squid FAQ chapter 10, Access controls. Regards Henrik
Re: [squid-users] squid timing
On Thu, 18 Dec 2003, GouthamLabs wrote:
> Is there any way to create time-based rules in squid so that we can restrict when people access the internet?
Yes, see the time acl. Regards Henrik
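The time acl takes day-of-week letters (S M T W H F A) and a time range. A minimal sketch; the acl names, the hours, and the 192.168.0.0/16 network are placeholders, not from the original post:

```conf
# allow the local network to browse only during office hours
acl office_hours time MTWHF 09:00-17:00
acl localnet src 192.168.0.0/16
http_access allow localnet office_hours
http_access deny all
```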
Re: [squid-users] size limit per ACL/user
On Thu, 18 Dec 2003, zen wrote:
> Hello squid-users, is it possible to limit the download file size per user or per ACL?
It should be. Have you tried using the proxy_auth acl in reply_body_max_size?
> and also denying certain files from being downloaded, such as mp3 and exe?
See http_access and the Squid FAQ chapter 10, Access controls. Regards Henrik
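A sketch of both pieces of Henrik's answer. The user names, the 10 MB figure, and the extension list are placeholders, and the ACL form of reply_body_max_size should be checked against your version's squid.conf.default (it changed between releases):

```conf
# limit the reply body size for a group of authenticated users
acl limited proxy_auth alice bob
reply_body_max_size 10485760 allow limited

# deny downloads of certain file types by extension
acl banned_files urlpath_regex -i \.(mp3|exe)$
http_access deny banned_files
```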
Re: [squid-users] 000 status code being logged for redirects
On Thu, 18 Dec 2003, Jesse Reynolds wrote:
> We have an array of squid servers acting as reverse proxy servers (web accelerators). They also work as URL rewriters, via the redirector interface, e.g. bouncing http to https in some cases, and mapping certain paths to different backend web servers...
Which Squid version? If this is 2.5.STABLE4, please file a bug report. Otherwise, first try upgrading. Regards Henrik
Re: [squid-users] squid eth alias
YES, THANK YOU petre

On Thursday 18 December 2003 09:14 Anno Domini, Henrik Nordstrom wrote using one of his keyboards:
> On Thu, 18 Dec 2003, Petre Bandac wrote:
> > I make a request, and tcpdump shows the following: packets go from the client to 5.6.7.8:3128, but from there they go out to the internet via 1.2.3.4 (i.e. I access google.com, and tcpdump shows two legs of the traffic: one client - squid IP, then one from 1.2.3.4).
> See tcp_outgoing_address. This makes sense, as Squid is a proxy: when forwarding a request it opens a new TCP connection to the requested server, and unless you have told Squid otherwise it will let the OS choose the most appropriate source address to use. This is also what most people want. Regards Henrik
-- petre
Re[2]: [squid-users] size limit per ACL/user
Hello Henrik,

Thursday, December 18, 2003, 2:22:21 PM, you wrote:
> On Thu, 18 Dec 2003, zen wrote: Hello squid-users, is it possible to limit the download file size per user or per ACL?
> It should be. Have you tried using the proxy_auth acl in reply_body_max_size? See http_access and the Squid FAQ chapter 10, Access controls. Regards Henrik
I will try that. Thanks for the tips.
-- Best regards, zen
Re: [squid-users] 000 status code being logged for redirects
At 8:23 +0100 18/12/03, Henrik Nordstrom wrote:
> On Thu, 18 Dec 2003, Jesse Reynolds wrote: We have an array of squid servers acting as reverse proxy servers (web accelerators). They also work as URL rewriters, via the redirector interface, e.g. bouncing http to https in some cases, and mapping certain paths to different backend web servers...
> Which Squid version? If this is 2.5.STABLE4 please file a bug report. Otherwise first try upgrading.
Yes it's 2.5.STABLE4, sorry, I forgot to mention that! OK, I'll file a bug. :-) For now I'm filtering my log files to replace the 000's with 302's, which works for now, kludgey as it is. Cheers Jesse
-- ::: Jesse Reynolds +61 (0)414 669 790 ::: AIM - jessedreynolds ::: Virtual Artists Pty Ltd, Adelaide ::: http://www.va.com.au :::
[squid-users] squid cache poisoning
Hi, We are running two squid proxies, one with RedHat 8.0 and the other RedHat 9.0, with default kernels. InterScan VirusWall ver. 3.8 is also running on both of the proxies, acting as parent proxies for the squid proxies. The squid version and configure parameters are as given below.

# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.5.STABLE3
configure options: --enable-async-io --enable-carp

The hardware configs of both proxies are exactly the same (HP DL 580, 16GB RAM, 4 x 72GB HDD). There are three cache directories of 60GB disk space each (three partitions /cache1, /cache2 and /cache3 with reiserfs) on each proxy, i.e. a total of 180GB of cache. For the last few days we have been facing a very strange problem, described below.

Whenever a user tries to access certain sites, e.g. www.google.com, www.rediff.com, www.indiatimes.com, www.yahoo.com and many more, all s/he gets is the coolsavings.com web page. We suspected some adware might have been installed on the local client machine, so we cleared all local cache, cookies etc. and tried again, but the problem continued. We then tried through lynx and links from linux desktops and the problem persisted there too. We then stopped squid, cleared the cache and restarted; it worked for a few minutes, but then the whole thing started again, with users only able to see coolsavings.com pages. We then stopped squid entirely and diverted all user traffic through the viruswall acting as a proxy, and it worked fine. We then recompiled squid with the null storeio option and started squid without caching enabled, and it also worked fine. But since we could not work without a cache and could not use the viruswall as a proxy, we had to find another solution. We then blocked coolsavings.com on the proxy with IPTABLES rules, and that resolved the problem. To understand the problem we removed the IPTABLES rules, cleared the cache again and ran ethereal on a client machine. When the problem recurred we captured the entire TCP stream.
We again cleared the cache and opened the captured page, which immediately reproduced the problem. The problem was also reproduced on all other client machines accessing the proxy. Strangely, I have not been able to reproduce the problem on any other squid proxy running the same version of squid (different hardware config but same squid.conf). Now I have put the firewall rules back and everything is working fine, but I'm unable to find the cause of the problem. Kindly help. Regards Vikram
[squid-users] Re: Cache-control modification and accelerator mode
On Thu, 18 Dec 2003 08:20:27 +0100, Henrik Nordstrom wrote:
> If you know which URLs you need to do this on then just make a urlpath_regex pattern matching which URLs this needs to be done on (or not done on) and then use this in header_access when determining if the header should be replaced.
Hmmm, unfortunately matching by URL isn't really possible in this case. Guess I'll just leave it for now. I tried replacing the Vary spec with "Vary: *", but this of course doesn't cause the other caches to refresh for a certain user. Are there any plans for acl matching based on arbitrary header contents? I've seen that it's possible to parse headers for authentication with external programs; I will have a closer look at that. Thanks Gabriel
[squid-users] Re: squid cache poisoning
On Thu, 18 Dec 2003 16:20:23 +0530, vikram mohite wrote:
> Hi, We are running two squid proxies, one with RedHat 8.0 and the other RedHat 9.0, with default kernels. InterScan VirusWall ver. 3.8 is also running on both of the proxies, acting as parent proxies for the squid proxies.
Maybe this viruswall is redirecting/rewriting something? Just a guess... Gabriel
Re: [squid-users] squid cache poisoning
On Thu, 18 Dec 2003, vikram mohite wrote:
> To understand the problem we removed the IPTABLES rules, cleared the cache again and ran ethereal on a client machine. When the problem recurred we captured the entire TCP stream. We again cleared the cache and opened the captured page, which immediately reproduced the problem. The problem was also reproduced on all other client machines accessing the proxy.
Can you please get an ethereal or tcpdump -s 1600 trace of the poisoning traffic on the Squid proxy?

tcpdump -s 1600 -w traffic.dump -i any

The trace should include:
1. The request which was identified as causing the cache poisoning.
2. One request showing that the cache is poisoned.
> Strangely I have not been able to reproduce the problem on any other squid proxy running the same version of squid (diff hardware config but same squid.conf).
Odd.. and there is no difference in network connectivity? Including which DNS servers are used, ISP etc.. Regards Henrik
[squid-users] Strange High CPU usage
The problem is the following: we have approximately 700 reqs/sec distributed over 4 x Dell 2650 (siblings with digest) behind a load balancer, each one with:
2 x Xeon 1.8GHz (HT disabled)
2GB RAM
PERC3/Di RAID controller
5 x 36GB 10k SCSI HDs (2 in RAID1 for OS and swap, 3 for cache)
3 NICs: 1 internet, 1 intranet, 1 proxy LAN (digest and proxy communication)
52Mbit link
Slackware 8.1, Squid 2.5.STABLE4

Some lines of fstab:
/dev/sdb1 /var/cache/spool/0 reiserfs noatime,notail 1 2
/dev/sdc1 /var/cache/spool/1 reiserfs noatime,notail 1 2
/dev/sdd1 /var/cache/spool/2 reiserfs noatime,notail 1 2

configure options: --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var --with-aufs-threads=48 --with-pthreads --with-aio --enable-async-io --enable-storeio=diskd,aufs --disable-wccp --enable-default-err-language=English --enable-err-languages=English --disable-ident-lookups --enable-underscores --enable-removal-policies=heap,lru --enable-snmp --enable-cache-digests --enable-gnuregex

and the actual configuration:
cache_mem 64 MB
cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB
maximum_object_size_in_memory 24 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
memory_pools_limit 50 MB
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
buffered_logs on

Squid seems to work fine, but CPU usage above ~60 req/sec on each server is always 100%. I have tried many changes to squid.conf with no results. Last question: I also have to upgrade our OS to RedHat AS3, any experience with it? thanks
-- *Giulio Cervera* EDS PA SpA, Via Atanasio Soldati 80, 00155 Roma (Italy), tel: +39 06 22739 270, fax: +39 06 22739 233
[squid-users] Errors compiling squid
Hello, compiling squid (version 2.5.STABLE3) configured with --enable-basic-auth-helpers=helpers I receive compile errors of the type "FATAL: file.h not found":
with helper LDAP the files are: ldap.h and lber.h
with helper PAM the file is: pam_appl.h
with helper SASL the file is: sasl.h
I suppose I need some libraries in order to compile squid with these options, is that correct? In that case, which libraries do I need? Thank you for your time, Daniele Ricci.
[squid-users] 2 domains 1 squid
Hi Helpers, I have 2 domains:
domain1 == Windows 2000 with NT 4 compatibility
domain2 == NT 4
and 1 squid to authenticate users from these 2 domains. With smb_auth it's working, but users have to type user and pass... My question: is it possible to configure squid to authenticate users from both domains without users typing username and password? If yes, give me some direction. Compilation options used (but they can be changed):

# RedHat 7.3
# Squid-2.5STABLE4
# Squid configure:
./configure \
--prefix=/usr/local/squid \
--sysconfdir=/etc/squid \
--enable-auth=basic,ntlm \
--enable-basic-auth-helpers=MSNT,SMB \
--enable-ntlm-auth-helpers=winbind \
--enable-external-acl-helpers=wbinfo_group \
--enable-snmp \
--enable-underscores \
--enable-default-err-language=Portuguese

# Samba-3.0.0
# Samba configure:
./configure \
--with-configdir=/etc/samba \
--with-winbind \
--with-winbind-auth-challenge

Thanks Elton S. Fenner
Re: [squid-users] Errors compiling squid
On Thu, 18 Dec 2003, Daniele Ricci wrote:
> with helper LDAP the files are: ldap.h and lber.h
Then you are missing the OpenLDAP development headers and libraries.
> with helper PAM the file is: pam_appl.h
Then you are missing the PAM development headers and libraries.
> with helper SASL the file is: sasl.h
Then you are missing the SASL development headers and libraries. Regards Henrik
Re: [squid-users] authentication
| Hi all, this is my squid.conf line:
| acl password proxy_auth REQUIRED
| http_access allow password
| How do I write squid.conf so that certain IP addresses can pass thru the proxy without being prompted for authentication? Currently, all users who access the proxy have to be authenticated first.
It should work like this:

acl USER_WITH_AUTH src X1.X2.X3.X4/A.B.C.D
acl USER_WITHOUT_AUTH src Y1.Y2.Y3.Y4/E.F.G.H
acl PASSWORD proxy_auth REQUIRED
http_access allow USER_WITHOUT_AUTH
http_access allow USER_WITH_AUTH PASSWORD
http_access deny all

Vadim...
[squid-users] RE: FTP read /write access
Hi, how can I enable read and write access for FTP via the proxy? I want to connect to an FTP site; the remote FTP site is set up for read and write, but if I go via squid I get an error that only read access is enabled. So how can I enable write access? Regards, Willem Pretorius, NorthWeb ISP
[squid-users] problem with squid and squid2mysql
hi all, I've got a small problem with squid running in conjunction with squid2mysql on a linux box. The config is an RH 9 system running as a squid cache with a back-end mysql database running on another machine. Perl 5.08 is installed with the DBI module, mysql drivers and perl::ldap. I created a special file on the webcache called /logs/mysqlaccess.log and have started

cat /logs/mysqlaccess.log | tee -a /logs/access.log | /usr/local/bin/squid2mysql 2>/logs/squid2mysql.err

The above means I can still have the standard access.log file and also dump log entries into a back-end database. A back-end RADIUS authentication database is also present to authenticate various classes of users. For various reasons I can't set up our dial-in service to authenticate to our web caches, so I've added some code to the squid2mysql perl program that performs an LDAP query of our RADIUS server whenever a log file entry appears with a client IP address that comes from our dial-in service. Basically it asks the RADIUS server for the userid currently logged onto the IP address obtained from the access log entry. When I go through a web cache configured as described, everything works. When I try routing all of our dial-in service calls through it, the squid process crashes. I *think* it's to do with the LDAP lookups performed for every log record associated with our dial-in service; all I see in the logs is "FATAL: Received Segment Violation...dying." Just before I crank up the logging to see what's happening: anyone out there using squid2mysql on a linux platform? Just to double check that it does work in a production environment. alex Sent using Mulberry 3.01a
Re: [squid-users] problem with squid and squid2mysql - update
Increasing the logging I can now see:

2003/12/18 17:29:21| The request CONNECT loginnet.passport.com:443 i ecause it matched 'from_hullnet'
FATAL: Received Segment Violation...dying.
2003/12/18 17:29:21| Not currently OK to rewrite swap log.
2003/12/18 17:29:21| storeDirWriteCleanLogs: Operation aborted.
CPU Usage: 1.060 seconds = 0.540 user + 0.520 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 438
Memory usage for squid via mallinfo():

So what's the next step? Any suggestions appreciated. alex

--On 18 December 2003 16:28 + Alex Sharaz [EMAIL PROTECTED] wrote:
> hi all, I've got a small problem with squid running in conjunction with squid2mysql on a linux box. The config is an RH 9 system running as a squid cache with a back-end mysql database running on another machine. Perl 5.08 is installed with the DBI module, mysql drivers and perl::ldap. I created a special file on the webcache called /logs/mysqlaccess.log and have started
> cat /logs/mysqlaccess.log | tee -a /logs/access.log | /usr/local/bin/squid2mysql 2>/logs/squid2mysql.err
> The above means I can still have the standard access.log file and also dump log entries into a back-end database. A back-end RADIUS authentication database is also present to authenticate various classes of users. For various reasons I can't set up our dial-in service to authenticate to our web caches, so I've added some code to the squid2mysql perl program that performs an LDAP query of our RADIUS server whenever a log file entry appears with a client IP address that comes from our dial-in service. Basically it asks the RADIUS server for the userid currently logged onto the IP address obtained from the access log entry. When I go through a web cache configured as described, everything works. When I try routing all of our dial-in service calls through it, the squid process crashes.
> I *think* it's to do with the LDAP lookups performed for every log record associated with our dial-in service; all I see in the logs is "FATAL: Received Segment Violation...dying." Just before I crank up the logging to see what's happening: anyone out there using squid2mysql on a linux platform? Just to double check that it does work in a production environment. alex Sent using Mulberry 3.01a
[squid-users] Squid for Windows?
Is there something similar to Squid, but running on Windows? I'm looking for an open source (or freeware) and highly reliable proxy server that can run on my windows 2000 machine. Any ideas? Thanks!
Re: [squid-users] Squid for Windows?
Hi, At 19.00 18/12/2003, Emilio Salgari wrote:
> Is there something similar to Squid, but running on Windows? I'm looking for an open source (or freeware) and highly reliable proxy server that can run on my windows 2000 machine. Any ideas? Thanks!
Yes, Squid: http://www.acmeconsulting.it/SquidNT/ Is your name really Emilio Salgari? Regards Guido
- Guido Serassio, Acme Consulting S.r.l., Via Gorizia, 69, 10136 - Torino - ITALY, Tel.: +39.011.3249426, Fax: +39.011.3293665, WWW: http://www.acmeconsulting.it/
[squid-users] Linux eating all memory
Hi folks, I'm having a problem of memory shortage on my squid server. The server is a Compaq DL350 G3 with 2G of RAM running linux RH7 / kernel 2.4.22. It doesn't matter how I set cache_mem, the kernel keeps caching (file cache?) till it uses all memory, leaving only 10MB, and thus the server keeps swapping all the time! I searched google for ways of setting vm on 2.4 to *not* eat all my memory, but haven't found anything really useful so far! Thanks in advance! Bruno Marcondes []'s
[squid-users] squid_ldap_auth authentication
Hi all, I hope you can help me: I'm trying to authenticate squid users against an MS Active Directory but I am having problems. I've already tried all the statements that are in the squid_ldap_auth manual. The MS Active Directory is under the following domain: tre-pb.gov.br. I created some users directly in this domain. If anyone has gone through the same situation and solved the problem, please tell me how. Give me an example of your squid.conf file.
Re: [squid-users] 2 domains 1 squid
On 18 Dec 2003, Elton S. Fenner wrote: Is it possible to configure squid to authenticate user from both domains without users typing username and pass? Only if there is a trust between the domains. Regards Henrik
Re: [squid-users] RE: FTP read /write access
On Thu, 18 Dec 2003, Northweb Squid wrote:
> How can I enable read and write access for ftp via proxy?
By using a client supporting FTP file uploads via HTTP proxies. So far only Netscape 4.x is known to support this.
> I want to connect to an ftp site; the remote ftp site is set up for read and write, but I get an error if I go via squid that only read access is enabled. So how can I enable write access?
You can get rid of the error if you disable FTP Folder View in your IE settings. But to my knowledge no version of MSIE supports FTP file upload via HTTP proxies. If you want this, please request that Microsoft implement the feature in their browser using the HTTP PUT method. Regards Henrik
Re: [squid-users] problem with squid and squid2mysql - update
On Thu, 18 Dec 2003, Alex Sharaz wrote:
> Increasing the logging I can now see:
> 2003/12/18 17:29:21| The request CONNECT loginnet.passport.com:443 i ecause it matched 'from_hullnet'
> FATAL: Received Segment Violation...dying.
> 2003/12/18 17:29:21| Not currently OK to rewrite swap log.
> 2003/12/18 17:29:21| storeDirWriteCleanLogs: Operation aborted.
> CPU Usage: 1.060 seconds = 0.540 user + 0.520 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 438
> Memory usage for squid via mallinfo():
> So what's the next step?
Get a stack trace. See the Squid FAQ on reporting Squid bugs. But first make sure you run an up-to-date Squid version. You did not say which version of Squid you are using, but bug reports are only accepted if they can be reproduced in 2.5.STABLE4. Regards Henrik
Re: [squid-users] Linux eating all memory
On Thu, 18 Dec 2003, Bruno Marcondes wrote:
> It doesn't matter how I set cache_mem, the kernel keeps caching (file cache?) till it uses all memory, leaving only 10MB, thus the server keeps swapping all the time!
It is normal that all free memory is used for cache; this is how the kernel should work. It is however not normal that this causes swapping. If nothing else works, try disabling the swap.. With 2GB of memory you should not need a swap partition. Regards Henrik
Re: [squid-users] authentication
On Wed, 17 Dec 2003, Victor Souza Menezes wrote:
> Hello everybody, I can't solve my problems with squid_ldap_auth. I followed the manual instructions and put the following line in squid.conf:
> auth_param basic program /usr/lib/squid/squid_ldap_auth -p -R -b dc=tre-pb,dc=gov,dc=br -D cn=victor,cn=users,dc=tre-pb,dc=gov,dc=br -w cl3500vsm -f ((userPrincipalName=%s)objectClass=Person)) -h ip_address
The search filter is not correct. It is missing a parenthesis in front of objectClass.
> squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'
Then either -D or -w is not correct. Try binding as the same user using ldapsearch (remember the -x flag). Regards Henrik
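Spelled out, the filter Henrik is correcting needs a parenthesis around each clause plus the AND operator. A hedged reconstruction of the whole line, with the bind password replaced by a placeholder and the stray -p from the original dropped; verify the helper's options against its usage text for your version:

```conf
auth_param basic program /usr/lib/squid/squid_ldap_auth -R \
  -b "dc=tre-pb,dc=gov,dc=br" \
  -D "cn=victor,cn=users,dc=tre-pb,dc=gov,dc=br" -w PASSWORD \
  -f "(&(userPrincipalName=%s)(objectClass=Person))" \
  -h ip_address
```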
[squid-users] ldapsearch
how can i bind to a specific user?
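With OpenLDAP's ldapsearch, binding as a specific user is done with -x (simple bind, the flag Henrik reminded about), -D (the bind DN) and -W (prompt for the password). A sketch reusing the DNs from this thread; the server name is a placeholder:

```shell
ldapsearch -x -h ldap.example.com \
  -D "cn=victor,cn=users,dc=tre-pb,dc=gov,dc=br" -W \
  -b "dc=tre-pb,dc=gov,dc=br" "(sAMAccountName=victor)"
```

If this bind fails with "Invalid credentials", the -D/-w values passed to squid_ldap_auth are the problem, not Squid.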
[squid-users] Active Directory.
Hi all, can squid + smb_auth work with Windows 2000 Active Directory? If it can't, what might I use to authenticate against MSAD? Thanks in advance. Fernando Ampugnani, EDS Argentina - Software, Storage Network Global Operation Solution Delivery, Tel: 5411 4704 3428
Re: [squid-users] Strange High CPU usage
> and the actual configuration:
> cache_mem 64 MB
> cache_swap_low 85
> cache_swap_high 90
> maximum_object_size 65536 KB
> maximum_object_size_in_memory 24 KB
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
> cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
> cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
> cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
> memory_pools_limit 50 MB
> cache_access_log /var/cache/log/access.log
> cache_log /var/cache/log/cache.log
> buffered_logs on
> Squid seems to work fine, but CPU usage above ~60 req/sec on each server is always 100%. I have tried many changes to squid.conf with no results.
Probably the first thing to look at is whether or not the high CPU problem comes from select/poll looping very quickly. For example:

% squidclient mgr:5min | grep -i select
select_loops = 72.180305/sec
select_fds = 19.528906/sec
average_select_fd_period = 0.007477/fd
median_select_fds = 0.00

I think median_select_fds is essentially broken and always returns 0. I plot the values for my caches, which you can see here: http://www.ircache.net/Statistics/Vitals/rrd/cgi/select.day.cgi You might want to add 'half_closed_clients off' to your config file and see if that helps. Duane W.
[squid-users] Problem with wbinfo_group.pl
Hi!! We are using wbinfo_group.pl in order to build acls based on Windows groups, but we are facing the following problem: we have built a test acl with a USER that we know belongs to a specific Group. wbinfo_group.pl is called by Squid with the correct parameters, but it returns ERR to squid. Below is a copy of our cache.log, with the actual domain substituted by DOMAIN, the actual user substituted by USER, and the actual group substituted by Group. The DOMAIN and the USER are actually all uppercase, and the group has just the first letter in uppercase.

2003/12/18 17:48:07| aclMatchExternal: nt_group = 0
2003/12/18 17:48:07| aclMatchExternal: nt_group(DOMAIN\\USER Group) = lookup needed
2003/12/18 17:48:07| externalAclLookup: lookup in 'nt_group' for 'DOMAIN\\USER Group'
2003/12/18 17:48:07| external_acl_cache_add: Adding 'DOMAIN\\USER Group' = -1
Got DOMAIN\\USER Group from squid
shellwords: User: -USER- Group: -Group-
User: -USER- Group: -Group-
SID: -Could not lookup name Group-
GID: -Could not convert sid Could to gid-
Sending ERR to squid
2003/12/18 17:48:07| externalAclHandleReply: reply=ERR
2003/12/18 17:48:07| external_acl_cache_add: Adding 'DOMAIN\\USER Group' = 0

What may be going wrong? Thanks in advance, Carlos.
Re: [squid-users] Load balancing multiple Squid servers
On Wed, 17 Dec 2003, Cavanagh, Kevin B wrote:
> Hi there, please forgive me if this question has been asked/answered before (I searched the FAQs but quickly became too confused by all the various postings regarding load balancing, etc). We currently have six RedHat Linux V8.0 servers running Squid V2.5 Stable in the following parent/child proxy chain: User -> Iprism URL filters -> Squid caching (child) -> HTTP Anti-virus (parent) -> Internet. We have six of each, with each currently having a one-to-one relationship (i.e. IPRISM1 -> SQUID1 -> AV1 -> INET, etc). We would like to create a load-balanced situation here if at all possible.
Probably the only part we can help you with is the way that Squid selects which AV parent to send a request to. One approach is to make a DNS name or /etc/hosts entry for the AV servers that has all their IP addresses. Then you put a single line in squid.conf:

cache_peer av-servers.example.com parent 0 no-query

Another approach is to list all servers separately and use the round-robin option:

cache_peer 172.16.0.1 parent 0 no-query round-robin
cache_peer 172.16.0.2 parent 0 no-query round-robin
cache_peer 172.16.0.3 parent 0 no-query round-robin
cache_peer 172.16.0.4 parent 0 no-query round-robin
cache_peer 172.16.0.5 parent 0 no-query round-robin
cache_peer 172.16.0.6 parent 0 no-query round-robin

Yet another approach is to use CARP:

cache_peer 172.16.0.1 parent 0 no-query carp-load-factor=0.16
cache_peer 172.16.0.2 parent 0 no-query carp-load-factor=0.16
cache_peer 172.16.0.3 parent 0 no-query carp-load-factor=0.17
cache_peer 172.16.0.4 parent 0 no-query carp-load-factor=0.17
cache_peer 172.16.0.5 parent 0 no-query carp-load-factor=0.17
cache_peer 172.16.0.6 parent 0 no-query carp-load-factor=0.17

Duane W.
[squid-users] False Web addresses, and how to handle them
I read an article in eWeek that explained how to create a misleading web link (or link in email) by typing the acceptable http address, followed by %01%00@ and the actual destination address. I showed it to my boss, who didn't like what she saw. Is it possible to create an ACL in Squid that specifically stomps out misdirected URLs? I don't know if Squid must accept literal characters when sniffing out URLs for ACLs, since the %01 and %00 are hex representations. Anyone have an idea about this? If so, it'd be a boon to add an ACL that stops this simple exploit at the proxy. According to the W3 consortium, the @ symbol is a reserved character, so it's probably not wise to block on it exclusively. Thanks! Eric
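The deception works because everything before the last "@" in the authority part of a URL is treated as userinfo, not as the hostname; the %01%00 merely helps hide the trick in some address bars. A quick illustration using Python's standard URL parser (the hostnames are placeholders):

```python
from urllib.parse import urlsplit

# A URL crafted to look like it points at www.example.com, while the
# real host (everything after the "@") is evil.example.
url = "http://www.example.com%01%00@evil.example/index.html"
parts = urlsplit(url)

print(parts.username)  # -> www.example.com%01%00  (the fake "address" is just userinfo)
print(parts.hostname)  # -> evil.example           (the host actually contacted)
```

In squid.conf, something along the lines of `acl fake_host url_regex ^[a-z]+://[^/]*@` (an untested sketch) would match any URL with an "@" in the authority, at the cost of also denying legitimate user:password@host URLs.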
[squid-users] squid_ldap_group authentication against Active Directory
Hi, I'm trying to restrict access to my squid cache to users of a special group "ProxyUsers" in Active Directory. I have Debian Testing (Sarge) with squid-2.5Stable4 installed. First I tried with the ldap_auth command:

/usr/lib/squid/ldap_auth -b dc=dhc-gmbh,dc=com -R -D [EMAIL PROTECTED] -w SeCrEt -f sAMAccountName=%s myW2KServer

This way, when I enter "username password" lines, I get OK or ERR, and everything is fine. The problem: every valid user with a valid password has access to the cache. I read many mails on this list (and some others too), but I didn't find a good hint. I know so far that squid_ldap_group is the right program, but how do I use it? In a mail from Henrik Nordstrom, there was this description:
0. Optionally bind (login) as a dummy user (by DN) if anonymous searches are disallowed in the directory (-D + -W arguments).
1. Search for the user in the directory (-F argument, with the same data as -f to squid_ldap_auth).
2. Search for the group in the directory and verify that the user is a member of the group (-f argument).
What must the -f argument look like? In some mails, people talk about examples that are shipped with squid and work fine with Active Directory, but I can't find them. I'm not very familiar with ldap search strings, so can somebody give me a hint what the FULL command looks like? Greetings Christoph
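For what it's worth, the general shape is an external_acl_type entry driving an external acl. This is heavily hedged: the helper's option letters and %-macros differ between versions (run squid_ldap_group with no arguments to see your copy's usage text), and the group filter below is an assumption, not tested against this directory; only the base DN, bind credentials and server name are this thread's values:

```conf
external_acl_type ad_group %LOGIN /usr/lib/squid/squid_ldap_group \
  -R -b "dc=dhc-gmbh,dc=com" \
  -D "[EMAIL PROTECTED]" -w SeCrEt \
  -F "(sAMAccountName=%s)" \
  -f "(&(objectclass=group)(cn=%g)(member=%u))" \
  myW2KServer
acl proxyusers external ad_group ProxyUsers
http_access allow proxyusers
http_access deny all
```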
Re: [squid-users] Problem with wbinfo_group.pl
Hi again! I was checking wbinfo, and found out that the Group I have chosen to test can't be looked up by wbinfo, although it exists in MSAD. This problem occurs with some other Groups in MSAD, but for the majority of the Groups the lookup runs OK! Has anyone run into this problem before? Regards, Carlos. On 18 Dec 2003, [EMAIL PROTECTED] wrote: Hi! We are using wbinfo_group.pl in order to build acls based on Windows groups, but we are facing the following problem: We have built a test acl with a USER that we know belongs to a specific Group. Wbinfo_group.pl is called by Squid with the correct parameters, but it returns ERR to Squid. Below is a copy of our cache.log, with the actual Domain substituted by DOMAIN, the actual User substituted by USER, and the actual Group substituted by Group. The DOMAIN and the USER are actually all uppercase, and the group has just the first letter in uppercase.
2003/12/18 17:48:07| aclMatchExternal: nt_group = 0
2003/12/18 17:48:07| aclMatchExternal: nt_group(DOMAIN\\USER Group) = lookup needed
2003/12/18 17:48:07| externalAclLookup: lookup in 'nt_group' for 'DOMAIN\\USER Group'
2003/12/18 17:48:07| external_acl_cache_add: Adding 'DOMAIN\\USER Group' = -1
Got DOMAIN\\USER Group from squid
shellwords: User: -USER- Group: -Group-
User: -USER- Group: -Group-
SID: -Could not lookup name Group-
GID: -Could not convert sid Could to gid-
Sending ERR to squid
2003/12/18 17:48:07| externalAclHandleReply: reply=ERR
2003/12/18 17:48:07| external_acl_cache_add: Adding 'DOMAIN\\USER Group' = 0
What may be going wrong? Thanks in advance, Carlos.
Re: [squid-users] False Web addresses, and how to handle them
On Thu, 18 Dec 2003, Eric Geater wrote: I read an article in EWeek that explained how to create a misleading web link or link in email by typing the acceptable http address, followed by %01%00@ and the actual destination address. I showed it to my boss, who didn't like what she saw. Is it possible to create an ACL in Squid that specifically stomps out misdirected URLs? I don't know if Squid must accept literal characters when sniffing out URLs for ACLs, since the %01 and %00 are hex representations. Anyone have an idea about this? If so, it'd be a boon to add another ACL that stops this simple exploit at the proxy. According to the W3 consortium, the @ symbol is a reserved character, so it's probably not wise to block for it exclusively. Thanks! It's not currently possible to block such requests in Squid because the funny characters are a part of the login component of the URL. Squid doesn't have any ACLs that use or care about the login data. It should be pretty easy to come up with a patch that does. DW
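The "login component" mentioned above can be seen with any standard URL parser. A quick illustration (using Python's urllib.parse, not Squid code; the hostnames are invented) of why the %01%00@ trick is deceptive:

```python
from urllib.parse import urlsplit

# Everything between "http://" and the '@' is the userinfo (login)
# part of the URL; the real host is what follows the '@'.
url = "http://www.trusted.example%01%00@evil.example/index.html"
parts = urlsplit(url)

print(parts.username)  # the misleading "trusted" prefix, still percent-encoded
print(parts.hostname)  # the real destination: evil.example
```

The %01 and %00 bytes serve only to hide the real host in browsers that truncate the display at control characters; the parsed destination is unaffected by them.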
RE: [squid-users] Active Directory.
There are several better options than smb_auth for use against Active Directory: * LDAP (reliable) - the FAQ has info on configuring the LDAP helpers * Samba winbind (a little more complicated, but with NTLM authentication IE users won't need to type in a username/password - it is picked up directly). Don't forget you need Samba 3. http://itmanagers.net/[EMAIL PROTECTED] has details of getting it going. -Original Message- From: Ampugnani, Fernando [mailto:[EMAIL PROTECTED] Sent: Friday, 19 December 2003 7:00 AM To: [EMAIL PROTECTED] Subject: [squid-users] Active Directory. Hi all, Can squid + smb_auth work with Windows 2000 Active Directory? If it can't, what might I use to authenticate against MSAD? Thanks in advance. Fernando Ampugnani EDS Argentina - Software, Storage Network Global Operation Solution Delivery Tel: 5411 4704 3428 Mail: [EMAIL PROTECTED]
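For reference, the winbind route described above boils down to a few squid.conf lines. This is a sketch for Squid 2.5 with Samba 3's ntlm_auth helper; the helper path varies by distribution:

```
# NTLM (transparent for IE users joined to the domain)
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5

# Basic fallback for non-NTLM browsers
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```

The machine running Squid must first be joined to the domain (net join) and winbindd must be running for ntlm_auth to work.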
Re: [squid-users] Active Directory.
On Thu, 18 Dec 2003, Ampugnani, Fernando wrote: Can squid + smb_auth work with Windows 2000 Active Directory? Most likely. If it can't, what might I use to authenticate against MSAD? The LDAP helpers for sure work (squid_ldap_auth + squid_ldap_group) Regards Henrik
Re: [squid-users] squid_ldap_group authentication against Active Directory
On Thu, 18 Dec 2003, Keppner, Christoph wrote: I know so far that squid_ldap_group is the right program, but how do I use it? In a mail from Henrik Nordstrom, there was this description: squid_ldap_group is used via the external_acl_type directive. See the manual (yes, there is a manual for squid_ldap_group). 0. Optionally bind (login) as a dummy user (by DN) if anonymous searches are disallowed in the directory (-D+-W arguments) 1. Search for the user in the directory (-F argument with the same data as -f to squid_ldap_auth) 2. Search for the group in the directory and verify that the user is a member of the group (-f argument). What must the -f argument look like? The manual has some good hints on this. The purpose of the -f argument to squid_ldap_group is similar to the purpose of the -f argument to squid_ldap_auth, but looking for a matching group rather than a matching user. Usually this looks like -f "(&(cn=%g)(member=%u)(objectClass=groupOfNames))" asking the helper to search for a groupOfNames with the group name as cn and the user DN as member. Should probably make this the default when -F is specified. The user DN is looked up by the -F argument in the same manner as the -f argument to squid_ldap_auth. Regards Henrik
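Put together with the details from Christoph's original mail (server myW2KServer, base dc=dhc-gmbh,dc=com, group ProxyUsers), a squid.conf sketch might look like the following. The bind DN "cn=squidbind,..." is a hypothetical placeholder, and note that Active Directory groups use objectClass group rather than groupOfNames:

```
# Authenticate users (as in the original ldap_auth command)
auth_param basic program /usr/lib/squid/ldap_auth -b "dc=dhc-gmbh,dc=com" \
    -D [EMAIL PROTECTED] -w SeCrEt -f sAMAccountName=%s myW2KServer

# Group membership lookup via squid_ldap_group
external_acl_type ad_group %LOGIN /usr/lib/squid/squid_ldap_group \
    -b "dc=dhc-gmbh,dc=com" \
    -D "cn=squidbind,cn=Users,dc=dhc-gmbh,dc=com" -w SeCrEt \
    -F "sAMAccountName=%s" \
    -f "(&(cn=%g)(member=%u)(objectClass=group))" \
    myW2KServer

acl proxyusers external ad_group ProxyUsers
http_access allow proxyusers
http_access deny all
```

The -F filter resolves the login name to a user DN; the -f filter then checks that DN appears as a member of the named group.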
Re: [squid-users] ldapsearch
On Thu, 18 Dec 2003, Victor Souza Menezes wrote: How can I bind as a specific user? By using the -D command line option (don't forget the -x option). You also need to specify a password via -W or -w. Regards Henrik
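For example, a bound search with hypothetical DNs (the -W flag prompts for the password interactively; -w takes it on the command line):

```
ldapsearch -x -h myW2KServer \
    -D "cn=proxyuser,cn=Users,dc=example,dc=com" -W \
    -b "dc=example,dc=com" "(sAMAccountName=jdoe)"
```

Against Active Directory an anonymous search usually returns nothing, so a simple bind like this is the first thing to verify before wiring the same credentials into the Squid LDAP helpers.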
Re: [squid-users] Problem with wbinfo_group.pl
On Thu, 18 Dec 2003 [EMAIL PROTECTED] wrote: This problem occurs with some other Groups in MSAD, but for the majority of the Groups the lookup runs OK! Has anyone run into this problem before? You may want to ask this question on the appropriate Samba forum. Regards
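Before taking it to a Samba forum, the lookup chain that wbinfo_group.pl performs can be reproduced by hand. A sketch with placeholder names (the group name here is hypothetical; the "..." in the SID is elided, not literal):

```
# name -> SID; quote the whole name, since group names containing
# spaces must reach wbinfo as a single argument
wbinfo -n 'DOMAIN\Internet Users'

# SID -> unix gid
wbinfo -Y 'S-1-5-21-...-1105'
```

The cache.log above shows the helper trying to look up the bare word "Group" and then a SID of "Could", which suggests the group name was split on whitespace somewhere along the way; unquoted spaces in the squid.conf acl line or in the helper's argument handling are worth checking first.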
[squid-users] Too few redirector processes are running
Hello, I have redirector_children 10 in my config file, but I keep getting the entry below over and over again in my cache.log when I go to http://money.cnn.com/best/bplive:
snip
2003/12/18 23:01:30| Starting Squid Cache version 3.0-PRE3-CVS for i686-pc-linux-gnu...
2003/12/18 23:01:30| Process ID 6587
2003/12/18 23:01:30| With 1024 file descriptors available
2003/12/18 23:01:30| DNS Socket created at 0.0.0.0, port 32785, FD 4
2003/12/18 23:01:30| Adding nameserver 192.168.1.5 from /etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 216.19.2.80 from /etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 63.169.42.12 from /etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 209.140.24.33 from /etc/resolv.conf
2003/12/18 23:01:30| helperOpenServers: Starting 10 'orsquidGuard' processes
2003/12/18 23:01:30| helperOpenServers: Starting 10 'ncsa_auth' processes
2003/12/18 23:01:30| Unlinkd pipe opened on FD 29
2003/12/18 23:01:30| Swap maxSize 102400 KB, estimated 7876 objects
2003/12/18 23:01:30| Target number of buckets: 393
2003/12/18 23:01:30| Using 8192 Store buckets
2003/12/18 23:01:30| Max Mem size: 8192 KB
2003/12/18 23:01:30| Max Swap size: 102400 KB
2003/12/18 23:01:30| Rebuilding storage in /usr/local/orsquid/var/cache (CLEAN)
2003/12/18 23:01:30| Using Least Load store dir selection
2003/12/18 23:01:30| Set Current Directory to /usr/local/squid/var/cache
2003/12/18 23:01:30| Loaded Icons.
2003/12/18 23:01:30| Accepting HTTP connections at 0.0.0.0, port 8940, FD 31.
2003/12/18 23:01:30| WCCP Disabled.
2003/12/18 23:01:30| Ready to serve requests.
2003/12/18 23:01:30| Done reading /usr/local/orsquid/var/cache swaplog (1692 entries)
2003/12/18 23:01:30| Finished rebuilding storage from disk.
2003/12/18 23:01:30| 1692 Entries scanned
2003/12/18 23:01:30| 0 Invalid entries.
2003/12/18 23:01:30| 0 With invalid flags.
2003/12/18 23:01:30| 1692 Objects loaded.
2003/12/18 23:01:30| 0 Objects expired.
2003/12/18 23:01:30| 0 Objects cancelled.
2003/12/18 23:01:30| 0 Duplicate URLs purged.
2003/12/18 23:01:30| 0 Swapfile clashes avoided.
2003/12/18 23:01:30| Took 0.0 seconds (46286.4 objects/sec).
2003/12/18 23:01:30| Beginning Validation Procedure
2003/12/18 23:01:30| Completed Validation Procedure
2003/12/18 23:01:30| Validated 1692 Entries
2003/12/18 23:01:30| store_swap_size = 14540k
2003/12/18 23:01:30| WARNING: redirector #1 (FD 5) exited
2003/12/18 23:01:31| WARNING: redirector #2 (FD 6) exited
2003/12/18 23:01:31| WARNING: redirector #3 (FD 7) exited
2003/12/18 23:01:31| WARNING: redirector #4 (FD 8) exited
2003/12/18 23:01:31| WARNING: redirector #5 (FD 9) exited
2003/12/18 23:01:31| WARNING: redirector #6 (FD 10) exited
2003/12/18 23:01:31| storeDirWriteCleanLogs: Starting...
2003/12/18 23:01:31| Finished. Wrote 1692 entries.
2003/12/18 23:01:31| Took 0.0 seconds (839702.2 entries/sec).
FATAL: Too few redirector processes are running
Squid Cache (Version 3.0-PRE3-CVS): Terminated abnormally.
CPU Usage: 0.090 seconds = 0.040 user + 0.050 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 470
Memory usage for squid via mallinfo():
total space in arena: 2918 KB
Ordinary blocks: 2897 KB 9 blks
Small blocks: 0 KB 0 blks
Holding blocks: 2396 KB 13 blks
Free Small blocks: 0 KB
Free Ordinary blocks: 20 KB
Total in use: 5293 KB 181%
Total free: 20 KB 1%
snip
I bumped redirector_children to 40, and still got the same message repeated over and over. Is this telling me that my redirectors are dying from an error in the redirector code, or what? Thanks, Murrah Boswell
Re: [squid-users] Too few redirector processes are running
Is this telling me that my redirectors are dying from an error in the redirector code, or what? Make sure that the Squid userid is able to execute the redirector program. Check file and directory permissions. Duane W.
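If permissions check out, it can also help to exercise the redirector protocol by hand, since a helper that crashes or buffers its output produces exactly these "redirector exited" warnings. This minimal Python helper is a sketch of the contract a redirector must honour (it is not squidGuard, and the blocked pattern is made up): one reply line per request line, flushed immediately.

```python
import sys

def reply(line):
    """Answer one redirector request line.

    Squid writes one request per line: URL client_ip/fqdn user method.
    The helper answers with a replacement URL, or an empty line to
    leave the request untouched.
    """
    url = line.split()[0]
    if "money.cnn.com" in url:  # hypothetical pattern to rewrite
        return "http://www.example.com/blocked.html"
    return ""

if __name__ == "__main__":
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        sys.stdout.write(reply(line) + "\n")
        sys.stdout.flush()  # replies must be unbuffered, or Squid hangs

```

Feeding it a line such as "http://money.cnn.com/best/bplive 192.168.1.10/- user GET" on stdin and watching for a prompt, well-formed reply is a quick way to tell a protocol bug in the redirector from a permissions or path problem.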
Re: [squid-users] False Web addresses, and how to handle them
It's not currently possible to block such requests in Squid because the funny characters are a part of the login component of the URL. Squid doesn't have any ACLs that use or care about the login data. It should be pretty easy to come up with a patch that does. The attached patch adds a new ACL type: urllogin. With it you could write some rules to deny any HTTP request that contains any login credentials:
acl UrlHasLogin urllogin .
http_access deny UrlHasLogin
or you can deny a request where the login data contains a non-alphanumeric character:
acl SketchyLogin urllogin [^a-zA-Z0-9]
http_access deny SketchyLogin
Duane W.
Index: src/acl.c
===================================================================
RCS file: /server/cvs-server/squid/squid/src/acl.c,v
retrieving revision 1.270.2.18
diff -u -3 -p -r1.270.2.18 acl.c
--- src/acl.c	29 Nov 2003 08:59:23 -0000	1.270.2.18
+++ src/acl.c	18 Dec 2003 21:54:43 -0000
@@ -178,6 +178,8 @@ aclStrToType(const char *s)
 	return ACL_MAX_USER_IP;
     if (!strcmp(s, "external"))
 	return ACL_EXTERNAL;
+    if (!strcmp(s, "urllogin"))
+	return ACL_URLLOGIN;
     return ACL_NONE;
 }
@@ -252,6 +254,8 @@ aclTypeToStr(squid_acl type)
 	return "max_user_ip";
     if (type == ACL_EXTERNAL)
 	return "external";
+    if (type == ACL_URLLOGIN)
+	return "urllogin";
     return "ERROR";
 }
@@ -737,6 +741,7 @@ aclParseAclLine(acl ** head)
 	aclParseTimeSpec(&A->data);
 	break;
     case ACL_URL_REGEX:
+    case ACL_URLLOGIN:
     case ACL_URLPATH_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
@@ -1464,6 +1469,7 @@ aclMatchAcl(acl * ae, aclCheck_t * check
     case ACL_URLPATH_REGEX:
     case ACL_URL_PORT:
     case ACL_URL_REGEX:
+    case ACL_URLLOGIN:
 	/* These ACL types require checklist->request */
 	if (NULL == r) {
 	    debug(28, 1) ("WARNING: '%s' ACL is used but there is no"
@@ -1567,6 +1573,12 @@ aclMatchAcl(acl * ae, aclCheck_t * check
 	k = aclMatchRegex(ae->data, esc_buf);
 	safe_free(esc_buf);
 	return k;
+    case ACL_URLLOGIN:
+	esc_buf = xstrdup(r->login);
+	rfc1738_unescape(esc_buf);
+	k = aclMatchRegex(ae->data, esc_buf);
+	safe_free(esc_buf);
+	return k;
 	/* NOTREACHED */
     case ACL_MAXCONN:
 	k = clientdbEstablished(checklist->src_addr, 0);
@@ -2114,6 +2126,7 @@ aclDestroyAcls(acl ** head)
 #endif
     case ACL_PROXY_AUTH_REGEX:
     case ACL_URL_REGEX:
+    case ACL_URLLOGIN:
     case ACL_URLPATH_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
@@ -2529,6 +2542,7 @@ aclDumpGeneric(const acl * a)
 	return aclDumpTimeSpecList(a->data);
     case ACL_PROXY_AUTH_REGEX:
     case ACL_URL_REGEX:
+    case ACL_URLLOGIN:
     case ACL_URLPATH_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
Index: src/enums.h
===================================================================
RCS file: /server/cvs-server/squid/squid/src/enums.h,v
retrieving revision 1.203.2.8
diff -u -3 -p -r1.203.2.8 enums.h
--- src/enums.h	21 Jan 2003 00:06:39 -0000	1.203.2.8
+++ src/enums.h	18 Dec 2003 21:51:57 -0000
@@ -136,6 +136,7 @@ typedef enum {
     ACL_REP_MIME_TYPE,
     ACL_MAX_USER_IP,
     ACL_EXTERNAL,
+    ACL_URLLOGIN,
     ACL_ENUM_MAX
 } squid_acl;