Re: [squid-users] Two connections per client
On Wed, Mar 16, 2016 at 10:44 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 17/03/2016 3:03 a.m., Chris Nighswonger wrote:
>> On Wed, Mar 16, 2016 at 9:07 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>>
>>> On 17/03/2016 1:57 a.m., Amos Jeffries wrote:
>>>> On 17/03/2016 1:25 a.m., Chris Nighswonger wrote:
>>>>> On Wed, Mar 16, 2016 at 1:03 AM, Amos Jeffries wrote:
>>>>>
>>>>>> On 16/03/2016 12:38 p.m., Chris Nighswonger wrote:
>>>>>>> Why does netstat show two connections per client connection to Squid:
>>>>>>>
>>>>>>> tcp  0  0  127.0.0.1:3128   127.0.0.1:34167  ESTABLISHED
>>>>>>> tcp  0  0  127.0.0.1:34167  127.0.0.1:3128   ESTABLISHED
>>>>>>>
>>>>>>> In this case, there is a content filter running in front of Squid on
>>>>>>> the same box. The same netstat command filtered on the content filter
>>>>>>> port shows only one connection per client:
>>>>>>>
>>>>>>> tcp  0  0  192.168.x.x:8080  192.168.x.y:1310  ESTABLISHED
>>>>>>
>>>>>> Details of your Squid configuration are needed to answer that.
>>>>>
>>>>> Here it is. I've stripped out all of the acl lines to reduce the length:
>>>>>
>>>>> tcp_outgoing_address 184.x.x.x
>>>>> http_port 127.0.0.1:3128
>>>>
>>>> It would seem that it is not Squid making those connections outbound
>>>> from 127.0.0.1:3128. Squid uses that 184.x.x.x address with random
>>>> source ports for *all* its outbound connections.
>>>
>>> Ah, just had an idea. Do you have IDENT protocol in those ACLs you elided?
>>>
>>> IDENT makes a reverse connection back to the client to find the identity.
>>
>> So I have this acl in the list:
>>
>> acl AuthorizedUsers proxy_auth REQUIRED
>>
>> Might that be the one?
>
> No, if existing it would have 'ident' or 'ident_regex' type.
>
> Log formats would be the other way to hit ident. But I didn't notice
> anything fancy like that in the config you posted.
Sorry for the direct reply on the last iteration. Silly Gmail apparently does not support reply-to-list.

I've cleaned up the config based on your suggestions. I'm not super concerned about the two-connection issue; I was mostly wondering what was up. Perhaps I should be. Ignorance is not always bliss.

WRT "follow_x_forwarded_for allow all", I've changed "all" to "localhost". I don't know if that tightens things up, maybe? I need this enabled so that the client IPs show up in the Squid log. At least I think I do.

Thanks for the help. We've run Squid for over 16 years and it mostly just works.

Kind regards,
Chris
_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
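For reference, a minimal sketch of the tightened setting described above, assuming the content filter is the only thing connecting to Squid from 127.0.0.1:

```
# Trust X-Forwarded-For headers only from the filter on the same box,
# so client IPs still appear in access.log without trusting everyone:
follow_x_forwarded_for allow localhost
follow_x_forwarded_for deny all
```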
Re: [squid-users] Two connections per client
On Thu, Mar 17, 2016 at 4:50 AM, Amos Jeffries wrote:
> If its not Squid then something is playing around with the Squid port.
> Best know what it is even if thats okay.

I ran a pcap on the lo interface and Squid's port. While running it, I opened a browser and accessed foxnews.com through the GW. Attached are the related exchanges (sanitized) which took place on the lo interface. (It is actually a txt file.) I don't know if this might cast some light on this issue or not.

Chris
[squid-users] Fwd: Problem whitelisting .shiprush.com
So what am I missing in the following situation? Our mail dept uses shiprush.com. The software supplied by ShipRush is not proxy-auth friendly, so I added an acl

acl ShipRush dstdomain .shiprush.com

and

http_access allow campusnet ShipRush

before my http_access line requiring authentication. Yet I still see Squid3 requesting auth [1]. What am I doing wrong? I've supplied my squid.conf in redacted form [2]. (General comments welcome as well as those specific to this problem.)

Kind Regards,
Chris

Misc Info:

OS: Ubuntu 10.04.4 LTS
Squid Cache: Version 3.1.6

configure options: '--build=i486-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,' '--enable-digest-auth-helpers=ldap,password' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' '--enable-arp-acl' '--enable-esi' '--disable-translation' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall -O2' '--with-squid=/build/buildd/squid3-3.1.6'

[1] https://docs.google.com/file/d/0B5GhqVvpzpvjVE5MX2drM21HNW8/edit?usp=sharing
[2] https://docs.google.com/file/d/0B5GhqVvpzpvjWjhQUnc4UDNweUk/edit?usp=sharing
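Since rule ordering is the crux of the question, here is a minimal sketch of the placement described above. The campusnet definition is an assumption based on configs posted elsewhere in these threads; the redacted squid.conf will differ:

```
acl campusnet src 192.168.0.0/24          # assumed definition
acl ShipRush dstdomain .shiprush.com
acl AuthorizedUsers proxy_auth REQUIRED

http_access allow campusnet ShipRush      # must appear BEFORE the auth rule
http_access allow AuthorizedUsers
```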
Re: [squid-users] squid returns an error on an ajax request
On Tue, Mar 16, 2010 at 5:21 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On Tue, 16 Mar 2010 15:31:51 -0400, Chris Nighswonger
> <cnighswon...@foundations.edu> wrote:
>> This is probably a problem with the site rather than with squid, but I
>> thought the list might be able to identify it as I cannot. Below is the
>> entire response body.
>>
>> [snip]
>>
>> FORM http://www.blueletterbible.org/BibleAJAX/tense.cfm?tense=5723 HTTP/1.0
>> Host: www.blueletterbible.org
>>
>> [snip]
>>
>> Generated Tue, 16 Mar 2010 19:22:19 GMT by squid (squid/2.7.STABLE3)
>
> Never heard of a FORM request in HTTP/1.0 before. Did you configure
> "extension_methods FORM" in your squid.conf?

Adding this directive does work around the deviant method. However, I have also contacted the site maintainers to inquire about the reason for using this method.

Many thanks for your help, Amos.

Kind Regards,
Chris
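For anyone landing here later, the workaround Amos suggested is a one-line squid.conf addition (Squid 2.x syntax):

```
# Tell Squid 2.x to accept the site's non-standard FORM request method:
extension_methods FORM
```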
[squid-users] squid returns an error on an ajax request
This is probably a problem with the site rather than with squid, but I thought the list might be able to identify it as I cannot. Below is the entire response body. This request works fine without squid in line.

Kind Regards,
Chris

[Squid error page follows; the HTML markup was mangled by the archive, content reconstructed:]

ERROR: The requested URL could not be retrieved

While trying to process the request:

FORM http://www.blueletterbible.org/BibleAJAX/tense.cfm?tense=5723 HTTP/1.0
Host: www.blueletterbible.org
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: identity,gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://www.blueletterbible.org/Bible.cfm?b=Eph&c=4&v=28&t=KJV
Content-Length: 9
Cookie: CFID=116177544; CFTOKEN=93629362; __utma=136995939.1420754775.1268152974.1268752499.1268767308.3; __utmz=136995939.1268152974.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=blueletter%20bible; SSTR=%2D1; radio0=0; radio1=0; abbrev=1; quoted=0; sqrbrkt=0; __utmc=136995939; __utmb=136995939.3.10.1268767308
X-Forwarded-For: 192.168.x.x

The following error was encountered:

* Invalid Request

Some aspect of the HTTP Request is invalid. Possible problems:

* Missing or unknown request method
* Missing URL
* Missing HTTP Identifier (HTTP/1.0)
* Request is too large
* Content-Length missing for POST or PUT requests
* Illegal character in hostname; underscores are not allowed

Your cache administrator is bitbuc...@placespamhere.org.

Generated Tue, 16 Mar 2010 19:22:19 GMT by squid (squid/2.7.STABLE3)
Re: [squid-users] Fwd: Webapp problems with squid 2.7.STABLE3
On Sat, Jan 10, 2009 at 11:01 AM, Chris Nighswonger <cnighswon...@foundations.edu> wrote:
> Attached is the current config. The config on the upgrade was a simple cp
> of the previous config file. The only thing different now is the addition
> of "ignore_expect_100 on" at the end per the suggestion earlier in this
> thread. (Which did allow the webapp to work correctly.)
>
> --snip--
>
> Thanks for the help on this one. If anyone sees any other optimizations I
> should have in my squid.conf, feel free to point them out.

I suppose I just stared too long at things this past Friday. The ssl problem is not with squid, but with my virus-scanning config. A direct connection to squid by the client works fine. This is normally the first thing I do... when I'm not burned out, that is. ;-)

Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
[squid-users] dstdom_regex question
I'm using authentication and trying to allow unauthenticated access to http://java.sun.com/update/1.6.0/map-1.6.0.xml and all associated urls so Java will update transparently rather than prompting the user for credentials. I have been trying to do this using dstdom_regex and cannot seem to get things to work the way I imagine they should.

I have tried two ways:

acl AuthorizedUsers proxy_auth REQUIRED
acl JavaUpdate dstdom_regex -i sun.*update
http_access allow JavaUpdate
http_access allow AuthorizedUsers

and

acl AuthorizedUsers proxy_auth REQUIRED
acl JavaUpdate1 dstdom_regex -i sun
acl JavaUpdate2 dstdom_regex -i update
http_access allow JavaUpdate1 JavaUpdate2
http_access allow AuthorizedUsers

Neither acl catches http://java.sun.com/update/1.6.0/map-1.6.0.xml and it falls through to AuthorizedUsers per cache.log:

2009/01/12 09:39:15| The request GET http://java.sun.com/update/1.6.0/map-1.6.0.xml is DENIED, because it matched 'AuthorizedUsers'

However, this does work:

acl AuthorizedUsers proxy_auth REQUIRED
acl JavaUpdate dstdom_regex -i sun
http_access allow JavaUpdate
http_access allow AuthorizedUsers

cache.log now says:

2009/01/12 09:37:44| The request GET http://java.sun.com/update/1.6.0/map-1.6.0.xml is ALLOWED, because it matched 'JavaUpdate'

But it allows access to any url containing 'sun', which is not what I want. Am I going about this wrong or just missing something about dstdom_regex?

Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
Re: [squid-users] dstdom_regex question
On Mon, Jan 12, 2009 at 11:00 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> Chris Nighswonger wrote:
>> I'm using authentication and trying to allow unauthenticated access to
>> http://java.sun.com/update/1.6.0/map-1.6.0.xml and all associated urls
>> so Java will update transparently rather than prompting the user for
>> credentials. I have been trying to do this using dstdom_regex and cannot
>> seem to get things to work the way I imagine they should.
>>
>> I have tried two ways:
>>
>> acl AuthorizedUsers proxy_auth REQUIRED
>> acl JavaUpdate dstdom_regex -i sun.*update
>> http_access allow JavaUpdate
>> http_access allow AuthorizedUsers
>>
>> and
>>
>> acl AuthorizedUsers proxy_auth REQUIRED
>> acl JavaUpdate1 dstdom_regex -i sun
>> acl JavaUpdate2 dstdom_regex -i update
>> http_access allow JavaUpdate1 JavaUpdate2
>> http_access allow AuthorizedUsers
>>
>> [snip]
>>
>> Am I going about this wrong or just missing something about dstdom_regex?
>
> ... by attempting to match a part of the path ('update') against a domain
> name...
>
> Try this:
>
> acl Sun dstdomain java.sun.com

I ended up making this line

acl Sun dstdomain .sun.com

because the server name changes from time to time, it appears. Otherwise it works great. Thanks Amos and Tim.

Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
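Assembled from the thread, the working rules look roughly like this (ACL names as used above):

```
acl AuthorizedUsers proxy_auth REQUIRED
acl Sun dstdomain .sun.com          # leading dot matches any host under sun.com
http_access allow Sun               # Java updates pass without credentials
http_access allow AuthorizedUsers   # everything else requires auth
```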
Re: [squid-users] Fwd: Webapp problems with squid 2.7.STABLE3
On Fri, Jan 9, 2009 at 9:22 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>> BTW, we started back up for the spring semester yesterday. I did my
>> upgrade over the break. Now I am having multiple sites (many are ssl)
>> inaccessible which were accessible under 2.6.STABLE12. Did I miss some
>> major changes between 2.6 and 2.7? I'm considering rolling back to 2.6
>> to quell the rebellion... :-(
>
> We can't really tell what or if you missed anything without config details
> :). Whats the current config and the diff between the old and new
> squid.conf?

Attached is the current config. The config on the upgrade was a simple cp of the previous config file. The only thing different now is the addition of "ignore_expect_100 on" at the end per the suggestion earlier in this thread. (Which did allow the webapp to work correctly.)

Regarding ssl sites (https://pob-w.fidelitybanknc.com/servlet/cefs/online/login-tfb.html is one example that hangs and times out via squid): Several tcpdumps seem to indicate that the client sends a CONNECT frame to squid; squid acknowledges but never passes any traffic on to the internet site. Generally clients are authenticated via the ntlm helper, but I have some clients that are authenticated based on IP. These clients (ipauthex) do not have this problem: they connect to these sites fine. This would seem to indicate a config issue, but what?

I have also attached a pcap file for traffic between an ntlm-auth client and squid. There is no pcap for the same squid-to-fidelity connection as there is never any traffic there.

Thanks for the help on this one. If anyone sees any other optimizations I should have in my squid.conf, feel free to point them out.

Note: fidelity.txt is really a pcap file.
Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org

[Binary pcap attachment omitted by the archive; the readable payload was the following request:]

CONNECT pob-w.fidelitybanknc.com:443 HTTP/1.0
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)
Proxy-Connection: Keep-Alive
Content-Length: 0
Host: pob-w.fidelitybanknc.com
Pragma: no-cache

[Attached squid.conf excerpt:]

http_port 192.168.0.247:3128
http_port 127.0.0.1:3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 12 MB
maximum_object_size 32768 KB
maximum_object_size_in_memory 200 KB
cache_dir aufs /var/spool/squid 477184 65 256
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cachemgr_passwd VerySecret all
debug_options ALL,1
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 17
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 2
auth_param basic realm Campus Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.0.0.0
acl masada src 192.168.0.23/255.255.255.255
acl cnighswonger-lt src 192.168.0.105/255.255.255.255
acl campusnet src 192.168.0.0/24
acl farswap src 192.168.254.0/24
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 334
acl Safe_ports port 80   # http
acl Safe_ports port 21   # ftp
acl Safe_ports port 443  # https
acl Safe_ports port 70   # gopher
acl Safe_ports port 210  # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280  # http-mgmt
acl Safe_ports port 488  # gss-http
acl Safe_ports port 591  # filemaker
acl Safe_ports port 777  # multiling http
acl CONNECT method CONNECT
acl PURGE method PURGE
acl AuthorizedUsers proxy_auth REQUIRED
acl WindowsUpdate dstdomain download.microsoft.com ntservicepack.microsoft.com .update.microsoft.com .windowsupdate.com windowsupdate.microsoft.com wustat.windows.com c.microsoft.com crl.microsoft.com watson.microsoft.com
acl Webmin src 192.168.0.247-192.168.0.247/255.255.255.255
acl Zipcode dstdomain dail-a-zip.com
acl USPSShipping dstdomain
Re: [squid-users] Fwd: Webapp problems with squid 2.7.STABLE3
On Thu, Jan 8, 2009 at 11:23 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> Chris Robertson wrote:
>> Try http://www.squid-cache.org/Doc/config/ignore_expect_100/

This workaround did fix the problem for now.

> That said, the squid setting is only a bandaid over the top, and only
> works in that one proxy. All web clients attempting to send Expect: 100
> are expected to behave sensibly when it fails and they get given the 417
> response. It should simply re-try without the expectation.

I sent an email off to USPS tech support with the grueling details, but I won't hold my breath. ;-)

Thanks for the help.

Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
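The workaround referenced above is a single directive (Squid 2.7 syntax):

```
# Ignore the client's Expect: 100-continue header instead of
# replying 417 Expectation Failed:
ignore_expect_100 on
```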
Re: [squid-users] Fwd: Webapp problems with squid 2.7.STABLE3
On Fri, Jan 9, 2009 at 9:42 AM, Chris Nighswonger <cnighswon...@foundations.edu> wrote:
> On Thu, Jan 8, 2009 at 11:23 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>> Chris Robertson wrote:
>>> Try http://www.squid-cache.org/Doc/config/ignore_expect_100/
>
> This workaround did fix the problem for now.
>
>> That said, the squid setting is only a bandaid over the top, and only
>> works in that one proxy. All web clients attempting to send Expect: 100
>> are expected to behave sensibly when it fails and they get given the 417
>> response. It should simply re-try without the expectation.
>
> I sent an email off to USPS tech support with the grueling details, but I
> won't hold my breath. ;-)
>
> Thanks for the help.

BTW, we started back up for the spring semester yesterday. I did my upgrade over the break. Now I am having multiple sites (many are ssl) inaccessible which were accessible under 2.6.STABLE12. Did I miss some major changes between 2.6 and 2.7? I'm considering rolling back to 2.6 to quell the rebellion... :-(

Kind Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
[squid-users] Fwd: Webapp problems with squid 2.7.STABLE3
Hi all,

I'm running Squid 2.7.STABLE3. I just moved off of 2.6.STABLE12. We run a USPS webapp called Shipping Assistant. With 2.6, this app worked fine. During my upgrade, I introduced no config changes. With 2.7, this app is broken.

Comparing a tcpdump of the app talking direct and then the app talking to squid, it appears that squid borks over HTTP/1.1 (or the Expect directive) and throws an invalid request error (ERR_INVALID_REQ 0). Here is a bit of the short conversation between the app and squid:

--begin--

POST http://production.shippingapis.com/ShippingApi.dll HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: production.shippingapis.com
Content-Length: 395
Expect: 100-continue
Proxy-Connection: Keep-Alive

HTTP/1.0 417 Expectation failed
Server: squid/2.7.STABLE3
Date: Thu, 08 Jan 2009 15:50:16 GMT
Content-Type: text/html
Content-Length: 1402
Expires: Thu, 08 Jan 2009 15:50:16 GMT
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from squidserver
X-Cache-Lookup: NONE from squidserver:3128
Via: 1.0 squidserver:3128 (squid/2.7.STABLE3)
Connection: keep-alive
Proxy-Connection: keep-alive

[the same POST and 417 response repeat verbatim a second time]

--end--

I'd attach the dumps, but they contain account-related data. Any thoughts appreciated.

Kind Regards,

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org
Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
On Sat, Nov 1, 2008 at 12:37 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
> Um, I'm not so sure the people having trouble are using the right helper.
>
> There is a thing calling itself 'ntlm_auth' bundled with squid 3.0 and
> Squid-2 releases that is incapable of doing full NTLM for modern windows
> domains. There is also something calling itself 'ntlm_auth' bundled with
> Samba, which provides full working NTLM functionality.
>
> We have fixed this mixup in 3.1, but please check the helper you are
> using. Please prefer to use the one by Samba.

We're using the Samba flavor. To be exact:

[EMAIL PROTECTED] ~]# /usr/bin/ntlm_auth -V
Version 3.0.23c-2

> IE7 is more advanced than the earlier IE and seems to be actually capable
> of proper negotiate auth. But can be expected to fail with the limits
> imposed by Squid's 'ntlm_auth' thing.

The issues we are having are with FF (see the Mozilla bug referenced earlier in this thread). IE7 works fine on computers which are domain members. I'd still love to know what Nairb's config has that makes it work.

Regards,
Chris

----- Original Message ----
From: matlor [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Thursday, October 30, 2008 9:15:55 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

I have tried your configuration... but I have the same problem. squid version is 3.0.5. In attachment there is one of my tested squid.conf. Only IE7 is working properly.

thanks in advance

nairb rotsak wrote:

Always forget to hit the 'reply to all' instead of the 'reply'.. sorry.. below is what I sent Chris:

Below is for w2k3 AD and Ubuntu 6.06.1:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 15
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
#auth_param ntlm use_ntlm_negotiate off
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
acl NTLMUsers proxy_auth REQUIRED
acl our_networks src 192.168.0.0/16
http_access allow all NTLMUsers
http_access allow our_networks

Here is our current setup (w2k8 and Ubuntu 8.04.1):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 15
auth_param ntlm keep_alive on
acl our_networks src 192.168.0.0/16
acl NTLMUsers proxy_auth REQUIRED
external_acl_type ntgroup %LOGIN /usr/lib/squid/wbinfo_group.pl
acl NOINTERNET external ntgroup no-internet
http_access deny NOINTERNET
http_access allow all NTLMUsers
http_access allow our_networks
http_access allow localhost

We have a group policy to do the IE browser, but with Firefox, we have to set it manually. Once it is set, there is no prompt... I use SARG to get the results.. Been doing it for almost three years.. I would get evangelical on people using iPrism/Barracuda/Websense.. but now I figure I will just let them spend the money.. ;-)

----- Original Message ----
From: Chris Nighswonger [EMAIL PROTECTED]
To: nairb rotsak [EMAIL PROTECTED]
Cc: matlor [EMAIL PROTECTED]; squid-users@squid-cache.org
Sent: Wednesday, October 29, 2008 9:31:32 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Wed, Oct 29, 2008 at 10:23 AM, nairb rotsak [EMAIL PROTECTED] wrote:
> I am totally confused by this statement?.. as I have 300 people using
> firefox right now.. using Ubuntu 6.06, Samba3, Squid2.. and not a single
> one gets a user/pass prompt? I am not using it as a transparent proxy,
> it is listed in firefox under proxy settings (8080 because it goes to DG
> first.. but I have tested just Squid at 3128 and it works as well).. and
> I haven't touched anything else in firefox

I'd be very interested in knowing what is different about your setup. I have fought this problem for several years now.

----- Original Message ----
From: Chris Nighswonger [EMAIL PROTECTED]
To: matlor [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Wednesday, October 29, 2008 8:48:39 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Tue, Oct 28, 2008 at 6:18 AM, matlor [EMAIL PROTECTED] wrote:
> I have configured squid with winbind integrated in the active directory
> of a windows 2003 domain. If I browse the internet through IE 7
> everything is ok, no user and password prompted, because of the common
> login. While, if I open Firefox (2 or 3 version), it prompts for user
> and password.

One other note: While FF does support NTLM, it does not do transparent auth as IE does. Hence the prompting for username/password. Furthermore, due to M$ having a broken implementation of NTLM, FF will at times repeatedly prompt ad infinitum. There is an open bug on this at Mozilla (https://bugzilla.mozilla.org/show_bug.cgi?id=318253), but action on it is understandably slow. You can mess with FF's NTLM
Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
On Tue, Oct 28, 2008 at 6:18 AM, matlor [EMAIL PROTECTED] wrote:
> I have configured squid with winbind integrated in the active directory
> of a windows 2003 domain. If I browse the internet through IE 7
> everything is ok, no user and password prompted, because of the common
> login. While, if I open Firefox (2 or 3 version), it prompts for user
> and password.

One other note: While FF does support NTLM, it does not do transparent auth as IE does. Hence the prompting for username/password. Furthermore, due to M$ having a broken implementation of NTLM, FF will at times repeatedly prompt ad infinitum. There is an open bug on this at Mozilla (https://bugzilla.mozilla.org/show_bug.cgi?id=318253), but action on it is understandably slow. You can mess with FF's NTLM-related settings under 'about:config' to gain some respite.

You can also run a basic auth that authenticates against NTLM, which for some reason seems to avoid the multi-prompt issue. Something like:

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 2
auth_param basic realm somerealm
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

Regards,
Chris
Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
On Wed, Oct 29, 2008 at 10:23 AM, nairb rotsak [EMAIL PROTECTED] wrote:
> I am totally confused by this statement?.. as I have 300 people using
> firefox right now.. using Ubuntu 6.06, Samba3, Squid2.. and not a single
> one gets a user/pass prompt? I am not using it as a transparent proxy,
> it is listed in firefox under proxy settings (8080 because it goes to DG
> first.. but I have tested just Squid at 3128 and it works as well).. and
> I haven't touched anything else in firefox

I'd be very interested in knowing what is different about your setup. I have fought this problem for several years now.

----- Original Message ----
From: Chris Nighswonger [EMAIL PROTECTED]
To: matlor [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Wednesday, October 29, 2008 8:48:39 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Tue, Oct 28, 2008 at 6:18 AM, matlor [EMAIL PROTECTED] wrote:
> I have configured squid with winbind integrated in the active directory
> of a windows 2003 domain. If I browse the internet through IE 7
> everything is ok, no user and password prompted, because of the common
> login. While, if I open Firefox (2 or 3 version), it prompts for user
> and password.

One other note: While FF does support NTLM, it does not do transparent auth as IE does. Hence the prompting for username/password. Furthermore, due to M$ having a broken implementation of NTLM, FF will at times repeatedly prompt ad infinitum. There is an open bug on this at Mozilla (https://bugzilla.mozilla.org/show_bug.cgi?id=318253), but action on it is understandably slow. You can mess with FF's NTLM-related settings under 'about:config' to gain some respite.

You can also run a basic auth that authenticates against NTLM, which for some reason seems to avoid the multi-prompt issue. Something like:

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 2
auth_param basic realm somerealm
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

Regards,
Chris
Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
On Wed, Oct 29, 2008 at 5:16 PM, nairb rotsak [EMAIL PROTECTED] wrote:
> http_access allow all NTLMUsers

Does the 'all' trump the 'NTLMUsers' acl here?

Chris

----- Original Message ----
From: Chris Nighswonger [EMAIL PROTECTED]
To: nairb rotsak [EMAIL PROTECTED]
Cc: matlor [EMAIL PROTECTED]; squid-users@squid-cache.org
Sent: Wednesday, October 29, 2008 9:31:32 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Wed, Oct 29, 2008 at 10:23 AM, nairb rotsak [EMAIL PROTECTED] wrote:
> I am totally confused by this statement?.. as I have 300 people using
> firefox right now.. using Ubuntu 6.06, Samba3, Squid2.. and not a single
> one gets a user/pass prompt? I am not using it as a transparent proxy,
> it is listed in firefox under proxy settings (8080 because it goes to DG
> first.. but I have tested just Squid at 3128 and it works as well).. and
> I haven't touched anything else in firefox

I'd be very interested in knowing what is different about your setup. I have fought this problem for several years now.

----- Original Message ----
From: Chris Nighswonger [EMAIL PROTECTED]
To: matlor [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Wednesday, October 29, 2008 8:48:39 AM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Tue, Oct 28, 2008 at 6:18 AM, matlor [EMAIL PROTECTED] wrote:
> I have configured squid with winbind integrated in the active directory
> of a windows 2003 domain. If I browse the internet through IE 7
> everything is ok, no user and password prompted, because of the common
> login. While, if I open Firefox (2 or 3 version), it prompts for user
> and password.

One other note: While FF does support NTLM, it does not do transparent auth as IE does. Hence the prompting for username/password. Furthermore, due to M$ having a broken implementation of NTLM, FF will at times repeatedly prompt ad infinitum. There is an open bug on this at Mozilla (https://bugzilla.mozilla.org/show_bug.cgi?id=318253), but action on it is understandably slow. You can mess with FF's NTLM-related settings under 'about:config' to gain some respite.

You can also run a basic auth that authenticates against NTLM, which for some reason seems to avoid the multi-prompt issue. Something like:

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 2
auth_param basic realm somerealm
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org

---------------------------------------------------------------
NOTICE: The information contained in this electronic mail message is intended only for the use of the intended recipient, and may also be protected by the Electronic Communications Privacy Act, 18 USC Sections 2510-2521. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please reply to the sender, and delete the original message. Thank you.
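A note for context, since the thread never answers the question directly: ACLs listed on a single http_access line are ANDed together, and 'all' matches every request, so 'all' does not trump anything here:

```
# ACLs on one http_access line must ALL match; 'all' always matches,
# so these two lines behave identically:
http_access allow all NTLMUsers
http_access allow NTLMUsers
```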
Re: [squid-users] How get negative cache along with origin server error?
Hi Dave, On Tue, Sep 30, 2008 at 6:13 PM, Dave Dykstra [EMAIL PROTECTED] wrote: I found out a little bit more by looking in the source code and the generated headers and setting a few breakpoints. The squid closest to the origin server that is down (the one at the top of the cache_peer parent hierarchy) never attempts to store the negative result. Worse, it sets an Expires: header that is equal to the current time. Squids further down the hierarchy do call storeNegativeCache() but they see an expiration time that is already past so it isn't of any use. Those things make it seem like squid is far from being able to effectively handle failing over from one origin server to another at the application level. - Dave On Tue, Sep 30, 2008 at 10:32:43AM -0500, Dave Dykstra wrote: Do any of the squid experts have any answers for this? - Dave On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote: I am running squid on over a thousand computers that are filtering data coming out of one of the particle collision detectors on the Large Hadron Collider. A bit off-topic here, but I'm wondering if these squids are being used in CERN's new computing grid? I noticed Fermi was helping out with this. (http://devicedaily.com/misc/cern-launches-the-biggest-computing-grid-in-the-world.html) Regards, Chris
[squid-users] HDD Configuration Recommendations
Hi all, I'm preparing to move my squid to new hardware. I have two 500GB SATA HDD's in the new box which will be used to store squid's cache on. Any suggestions on the best raid config for these guys so as to maximize performance? Regards, Chris
[squid-users] Upgrade from 2.6STABLE12 to 2.7STABLE4
Is there anything I should be aware of prior to upgrading a perfectly good working install of 2.6STABLE12 to 2.7STABLE4? Regards, Chris
Re: [squid-users] squid with dial-up
On 10/26/07, Amos Jeffries [EMAIL PROTECTED] wrote: It works fine when I am not on dial-up. Big hint there to what's causing the problem. I use squid on a dialup connection at my home on a win32 box and it works just fine. As Amos suggested, I would suspect an unstable dialup connection first. Chris
Re: [squid-users] x-forwarded-for
On 9/24/07, Gustavo Uribe [EMAIL PROTECTED] wrote: Hello list, sorry to bother you with a question, but I've been browsing teh internets for a few hours now without finding a clue. What I'm trying to do is... get the client IP in squid's access.log, but since I'm using dansguardian, the front proxy is dg and squid only sees connections from localhost... so I enabled forwarded-for and x-forwarded-for in dansguardian, compiled squid with --x-forwarded-for, put forwarded_for on, but I still see only localhost connections... what am I missing? Check this post on the DG users list: http://tech.groups.yahoo.com/group/dansguardian/message/19532 It addresses this issue. Chris
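For the archives, the Squid-side half of that fix is roughly the following squid.conf fragment. This is a sketch: it assumes a Squid built with --enable-follow-x-forwarded-for (2.6/2.7 series), and the directive names should be checked against your own squid.conf.documented:

```
# Trust X-Forwarded-For only when it comes from the DansGuardian instance on localhost
acl dg_host src 127.0.0.1
follow_x_forwarded_for allow dg_host
follow_x_forwarded_for deny all

# Log the indirect (real) client IP instead of 127.0.0.1
log_uses_indirect_client on
```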
Re: [squid-users] Confusing about login name in AD-proxy authentication?
On 9/22/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: he is talking about using Windows Integrated Login to have the client automatically log in to the proxy, just as it automatically logs in to any other server on your network. http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication#head-1d6e24e071a1a5e65f112d9a96cdf1320684a8f2 Perhaps we should add that the client machine must be a member of a domain trusted by the DC or equivalent to obtain truly seamless authentication. And even then, it is mostly with the browser. Other apps may prompt if they don't kowtow to M$ properly. Chris
Re: [squid-users] Re:[squid-users] Re:Re: [squid-users] How to setup two ADSL line on a single linux box?
On 9/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: it's not possible (at the moment) to open 2 ADSL ports on the same telephony wire... Agreed. However, with two dry loops or live lines (or more) it is possible to bond the circuits together for aggregate bandwidth. That being said, the sum is not simply the bandwidth of each link * number of links. There is some bandwidth lost to the overhead of the bonding. FWIW, here is an ISP in the UK which does just this: http://www.upstreaminter.net/ But this is really straying from the topic of squid ;-) Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/21/07, Sathyan, Arjonan [EMAIL PROTECTED] wrote: I would like to know where we are on this bug... Are we able to find any clues why the DVD files are not getting downloaded through Squid? It appears to be an IE bug, not Squid. Have you tried Henrik's suggestion in his previous post? Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/16/07, Adrian Chadd [EMAIL PROTECTED] wrote: On Wed, May 16, 2007, Henrik Nordstrom wrote: On Tue 2007-05-15 at 14:34 -0700, Sathyan, Arjonan wrote: Was there any trace from the files which I have uploaded? Can you please tell me why I am not able to download the files which are 2GB via squid using IE 6? The http_headers only contains a Squid access-denied result. The packet trace only contains a few SSH packets. Is this a bug in squid...? Not from what it looks so far. Pretty sure it's an MSIE6 bug. Can we narrow down the specific bug behaviour? I'll fire it off to someone in the IE team and see what 'e says. When IE6 is set up to use a proxy (squid), and the aforementioned file is downloaded, a download window opens and the progress indicator zips to 100% in the first second, after which IE announces that the download is complete. What the user really has is a file with a size equivalent to the effective kbps of their internet pipe. When IE6 is set up *not* to use a proxy, and the same file is downloaded, the behaviour is as expected and the resulting file is the correct size. Two tcpdumps have been submitted. Let me know if you need more specific information, and I will provide it. Chris
Re: [squid-users] software based on squid
On 5/13/07, Pitti, Raul [EMAIL PROTECTED] wrote: Hi! I remember someone on the list mentioning a commercial filtering software which runs on RH. I think the software company is from Australia, and what I like most is the cost.. but somehow I managed to delete my bookmarks (too much work..., too late at night ;-)). Can you people recommend a good commercial content filter able to run on RedHat? Smoothwall (Commercial) is based on Dansguardian (Open source w/commercial-use restrictions). http://smoothwall.net/ and http://www.dansguardian.org Chris
Re: [squid-users] java-script-problem ???
On 5/10/07, Starckjohann, Ove [EMAIL PROTECTED] wrote: Hello! We're having problems with the url of a financial magazine: http://www.cash-online.de/ Without the proxy the site shows rapidly... with proxy squid 2.6.5-4 the site does NOT load. Is this a squid issue, or where may the problem be? Possibly the tcp window scaling issue. Take a look at this post by Henrik: http://marc.info/?l=squid-users&m=117339989811225&w=2 Chris
Re: [squid-users] java-script-problem ???
On 5/10/07, Starckjohann, Ove [EMAIL PROTECTED] wrote: direct hit :-) echo 0 > /proc/sys/net/ipv4/tcp_window_scaling You might want to add that to your system config (not sure where off the top of my head) so that it will survive a reboot. Chris
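For the record, the usual place to make that setting survive a reboot on Linux is /etc/sysctl.conf (a sketch; assumes a stock sysctl setup such as Fedora's):

```
# /etc/sysctl.conf -- persist the window-scaling workaround across reboots
net.ipv4.tcp_window_scaling = 0
```

Running `sysctl -p` afterwards applies it immediately without a reboot.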
Re: [squid-users] ACL Question
On 5/10/07, Vadim Pushkin [EMAIL PROTECTED] wrote: I am trying to modify my ACL to prevent a specific IP address within a range already defined in http_access and acl. Where within this do I state *not* (!) 192.168.1.200? acl NET_ONE src 192.168.0.0/16 or http_access allow NET_ONE I think you will have to define a new acl such as: acl deniedips src 192.168.1.200 and then make the following entry immediately *before* 'http_access allow NET_ONE' : http_access deny deniedips Rules are processed in order of appearance in the list, first to last. Chris
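Assembled in one place, the fragment described above would look like this in squid.conf (using the ACL names from this thread; a sketch, not a drop-in config):

```
acl deniedips src 192.168.1.200
acl NET_ONE src 192.168.0.0/16

# Order matters: the deny must precede the allow, or the /16 allow matches first
http_access deny deniedips
http_access allow NET_ONE
```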
Re: [squid-users] Unable to download files over 2GB of size
On 5/9/07, Adrian Chadd [EMAIL PROTECTED] wrote: On Tue, May 08, 2007, Chris Nighswonger wrote: On 5/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Tue 2007-05-08 at 18:48 -0400, Chris Nighswonger wrote: Maybe it is a regression? I built my STABLE12 from source and did an install over top of STABLE9. From what I can tell Squid-2.6.STABLE12 works just fine and the problem is most likely in MSIE. I agree with this, as do my test results using STABLE12 and IE6 and IE7 (see previous post). Most M$ things are broken in some form or fashion in my experience. If there is any real need for it, I'll be glad to do the tcpdump. Of course, how often does one do a 2GB download? Could you please do the tcpdump? I'd like to document exactly how/why it's busted in an article in the Wiki. I'll try to get to it today. If not, then first thing tomorrow. Chris
Re: [squid-users] new website: final beta
On 5/8/07, Adrian Chadd [EMAIL PROTECTED] wrote: Hi everyone, The new website is at http://new.squid-cache.org/. I'd like to put this version live in the next week or so. Could I get my writings proofed and links checked by someone with a little spare time? Ok. I took some time to proof the home page, exercise the links under Introduction, proof that set of pages, and verify the links on that set of pages. Here are my notes: * http://new.squid-cache.org/Intro/who.dyn under duane wessels 'co-ordinator' should not be hyphenated. * Same url under henrik nordstrom, 'Makes his living from Squid consulting and other Open Source related activities.' should read 'He makes his living from Squid consulting and other Open Source related activities.' * http://new.squid-cache.org/Intro/helping.dyn under Donate equipment and hardware to Squid developers the line reading 'or now, Email [EMAIL PROTECTED] for further information.' needs an href to make the email address clickable, and the word 'Email' should not be capitalized. NOTE: This only covers the home page and Introduction link pages. I'll do more as I have time. Great work, Adrian! Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/8/07, Tim Bates [EMAIL PROTECTED] wrote: Download indicator went from 0% to 100% in less than a second and confirmed a download size of 554 bytes in 1 sec. Did you happen to look at the file contents when it finished? Maybe it contains a clue to what goes on... I just tried that link with IE6, and it started downloading. The progress bar was full from the start though, but I let it run for about 15 seconds and it definitely was downloading something. Canceled it after about 4 MB had downloaded. I'm running Squid 2.6Stable5 Maybe it is a regression? I built my STABLE12 from source and did an install over top of STABLE9. Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Tue 2007-05-08 at 18:48 -0400, Chris Nighswonger wrote: Maybe it is a regression? I built my STABLE12 from source and did an install over top of STABLE9. From what I can tell Squid-2.6.STABLE12 works just fine and the problem is most likely in MSIE. I agree with this, as do my test results using STABLE12 and IE6 and IE7 (see previous post). Most M$ things are broken in some form or fashion in my experience. If there is any real need for it, I'll be glad to do the tcpdump. Of course, how often does one do a 2GB download? Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/7/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: Anyone else able to download 2GB files with IE6 configured to use a proxy? Send a link to one and I'll try in the next hour or so. 98% of what we run behind squid is IE6 & 7. Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/7/07, Chris Nighswonger [EMAIL PROTECTED] wrote: On 5/7/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: Anyone else able to download 2GB files with IE6 configured to use a proxy? Send a link to one and I'll try in the next hour or so. 98% of what we run behind squid is IE6 & 7. Confirmed this problem *does* exist on IE6 behind Squid by attempting to download a suse 10.x DVD image some 3454MB in size. Download indicator went from 0% to 100% in less than a second and confirmed a download size of 554 bytes in 1 sec. Confirmed this problem *does not* exist on IE7 behind Squid. Same file started at the same time, and the download indicator tells me I can expect a 2:42 wait with the current download rate of 355 KB/sec (not bad for a 512kbps DSL). Here's the link for anyone else wanting to try a large file download: http://ftp.suse.com/pub/suse/i386/current/iso/SUSE-10.0-EvalDVD-i386-GM.iso Chris
Re: [squid-users] Unable to download files over 2GB of size
On 5/7/07, Chris Nighswonger [EMAIL PROTECTED] wrote: On 5/7/07, Chris Nighswonger [EMAIL PROTECTED] wrote: On 5/7/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: Anyone else able to download 2GB files with IE6 configured to use a proxy? Send a link to one and I'll try in the next hour or so. 98% of what we run behind squid is IE6 & 7. Ok.. it pays to look before you leap. I'm back at STABLE9. Sorry about that. :-( Chris
Re: [squid-users] Authentication Override
On 5/4/07, Brian Kirk [EMAIL PROTECTED] wrote: Squid 2.6 Stable 9. Ok so if I understand you correctly, it will not drop down to basic ever with IE since it is NTLM capable, it will just prompt you for your credentials if the credentials that were provided weren't a member of the specific require-membership-of group. And that would explain why I never get prompted with the realm provided in the basic authentication potion. Brian, FWIW, you can pass *realm* off on IE's NTLM prompt by 'domain\username' in the 'username' field ([EMAIL PROTECTED] may work as well). I run two separate domains through a single squid. All internet access accounts are on domain A. Thus, users on domain B have to use 'domainA\username' when prompted (which is every time they open a browser for the first time). Watch out for the 'Save my password' checkbox. Chris
Re: [squid-users] Authentication Override
On 5/4/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Fri 2007-05-04 at 13:47 -0400, Chris Nighswonger wrote: FWIW, you can pass *realm* off on IE's NTLM prompt by 'domain\username' in the 'username' field ([EMAIL PROTECTED] may work as well). That's the domain, not the realm. NTLM (and Negotiate) does not have a realm.. Henrik, I have never been really clear on the difference between realm and domain. What is it? Thanks, Chris
Re: [squid-users] Squid 2.6 Stable 9 NTLM Authentication
On 4/24/07, Brian Kirk [EMAIL PROTECTED] wrote: Thank you Chris, but the client_persistent_connections is on by default, and I couldn't find a setting in the squid.conf for persistent_connection_after_error. Is that new to squid 2.6? I'm not sure if this directive is new to 2.6, but here is a clip from squid.conf showing where it appears:

# TAG: client_persistent_connections
# TAG: server_persistent_connections
#       Persistent connection support for clients and servers. By
#       default, Squid uses persistent connections (when allowed)
#       with its clients and servers. You can use these options to
#       disable persistent connections with clients and/or servers.
#
#Default:
# client_persistent_connections on
# server_persistent_connections on

# TAG: persistent_connection_after_error
#       With this directive the use of persistent connections after
#       HTTP errors can be disabled. Useful if you have clients
#       who fail to handle errors on persistent connections proper.
#
#Default:
# persistent_connection_after_error on

I'm not a squid guru yet so this may be a question for someone in that category :) Chris
Re: [squid-users] Squid 2.6 Stable 9 NTLM Authentication
On 4/23/07, Brian Kirk [EMAIL PROTECTED] wrote: Is there a way to have it so once a user authenticates the credentials will be stored and won't need the ntlm helper for a set time. Do you have client_persistent_connections enabled? You might also try enabling persistent_connection_after_error as well. You can see persistent connection information in the CacheMgr if you have it setup. Chris
Re: [squid-users] Log analysis question
On 4/20/07, Allen Schmidt Sr. [EMAIL PROTECTED] wrote: Have never had to analyze our massive squid logs before but my boss's boss is asking if we can provide access numbers on specific IP range blocks. Is this possible? Anyone have any suggestions or a place to start? We have webalizer running and it spits out nice stuff. But is there a way with it or something else to collect data on IP ranges? You might take a look at Sawmill: http://www.sawmill.net/ Chris
Re: [squid-users] Squid 2.6 ntlm authentification failed for instant messaging
On 4/17/07, Suman Mukherjee [EMAIL PROTECTED] wrote: However, while I am trying to connect any instant messaging (Yahoo messenger, MSN) through the proxy, the connection fails due to an authentication failure. Is your MSN IM client set up for the HTTP post method in the proxy config section? I use squid with ntlm and MSN IM works fine using this config. Chris
Re: [squid-users] Fwd: Multiple Authentication Prompts
On 4/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: Any messages in cache.log? If it's the same issue then there should be messages about NTLM message type 3 seen when 1 expected. That must be it. The cache.log is full of this type of entry: [2007/04/08 20:41:45, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1 I followed a Mozilla bug report on this issue which mentioned the same squid/gmail prompting issue I am experiencing: https://bugzilla.mozilla.org/show_bug.cgi?query_format=specific&order=relevance+desc&bug_status=__open__&id=318253 It looks like it may be resolved in Firefox 2, so I have moved to 2.0.x. I also made several changes to about:config in Firefox which were suggested as helping ntlm and Firefox. We will see how it goes. Thanks, Chris
Re: [squid-users] Fwd: Multiple Authentication Prompts
On 4/9/07, Chris Nighswonger [EMAIL PROTECTED] wrote: On 4/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: Any messages in cache.log? If it's the same issue then there should be messages about NTLM message type 3 seen when 1 expected. That must be it. The cache.log is full of this type of entry: [2007/04/08 20:41:45, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1 I followed a Mozilla bug report on this issue which mentioned the same squid/gmail prompting issue I am experiencing: https://bugzilla.mozilla.org/show_bug.cgi?query_format=specific&order=relevance+desc&bug_status=__open__&id=318253 It looks like it may be resolved in Firefox 2, so I have moved to 2.0.x. I also made several changes to about:config in Firefox which were suggested as helping ntlm and Firefox. We will see how it goes. It doesn't go :( Chris
Re: [squid-users] Fwd: Multiple Authentication Prompts
On 4/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Mon 2007-04-09 at 13:43 -0400, Chris Nighswonger wrote: It doesn't go :( Ok. Gave my 5 cents of information in the bug report on what I think is the cause. Thanks Henrik. Chris
[squid-users] Re: Multiple Authentication Prompts
No takers? On 4/5/07, Chris Nighswonger [EMAIL PROTECTED] wrote: Hi all, I am having an intermittent issue with squid prompting multiple times for authentication, especially when using the gmail web interface. If I am away from the computer for any length of time, the prompts back up so deep that I have to kill the browser process to recover. It appears to be an issue with the ntlm type authentication, because if I cancel out of the ntlm prompt and let squid roll into basic, the prompts quit. However, both ntlm and basic use the ntlm_auth helper, i.e. auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp and auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic Here is some other info: Browser: Firefox 1.5.0.10 Squid: 2.6.STABLE9 Any thoughts? Or is this yet another ntlm foible? Thanks, Chris
[squid-users] Fwd: Multiple Authentication Prompts
Hi all, I am having an intermittent issue with squid prompting multiple times for authentication, especially when using the gmail web interface. If I am away from the computer for any length of time, the prompts back up so deep that I have to kill the browser process to recover. It appears to be an issue with the ntlm type authentication, because if I cancel out of the ntlm prompt and let squid roll into basic, the prompts quit. However, both ntlm and basic use the ntlm_auth helper, i.e. auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp and auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic Here is some other info: Browser: Firefox 1.5.0.10 Squid: 2.6.STABLE9 Any thoughts? Or is this yet another ntlm foible? Thanks, Chris -- Chris Nighswonger Network Systems Director Foundations Bible College Seminary www.foundations.edu www.fbcradio.org [EMAIL PROTECTED] V:910-892-8761 C:919-820-5473
Re: [squid-users] Need Help, Connection Refused.
On 4/1/07, Kenny Lee [EMAIL PROTECTED] wrote: yes ... Does this mean that you can access your site from inside with squid bypassed? i can browse the website using the internet IP address ... but if i use the website name to browse, it comes out with that error msg. Again, is this through squid or bypassing squid? This is important to determine whether the problem is squid or something else. If the only way you can access the site is by IP, it may be a DNS issue. Check your resolv.conf. Maybe do a dig against your nameservers to see if the zone data is correct for your web server. Chris
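A hedged example of that dig check (the nameserver and hostname here are placeholders for your own):

```
# Query the authoritative nameserver directly for the web server's A record
dig @ns1.example.com www.mywebsite.com A +short

# Compare with what your local resolver (per /etc/resolv.conf) returns
dig www.mywebsite.com A +short
```

If the two answers differ, the problem is DNS rather than squid.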
Re: [squid-users] Need Help, Connection Refused.
On 3/30/07, Kenny Lee [EMAIL PROTECTED] wrote: Hi all, I am a beginner with Squid. I have set up a Squid proxy on Suse Linux 10.2, and I have a web server which now connects through the squid proxy to access the internet. Everything is working fine: the web server can surf the net, update Spamassassin, update Clam Antivirus. Outsiders can also browse to my website which is hosted on that web server. ... Ok, the problem is here: when I use my PC or the web server (which are all connected to the squid proxy) to browse to my website, IE returns me (111) Connection Refused. I checked my access.log and found this log: 1175248338.619    432 192.168.10.5 TCP_MISS/503 1402 GET http://www.mywebsite.com - DIRECT/123.123.123.123 text/html So what is the problem there? Any setting that needs to be done in squid.conf? My proxy port is using port 81. Please help ... It looks like your webserver is refusing the connection rather than squid. Can you access your website from inside directly, bypassing squid? Chris
Fwd: [squid-users] Squid + DMZ
On 3/27/07, Charl Loubser [EMAIL PROTECTED] wrote: My squid.conf : acl local_network dst 192.168.0.0/24 acl local_dmz dst 192.168.1.0/255.255.255.240 I believe you mean 'acl local_network src 192.168.0.0/24' and 'acl local_dmz src 192.168.1.0/255.255.255.240' http_access allow localhost You probably want this higher in your list. You may want to take a look at http://wiki.squid-cache.org/SquidFaq/SquidAcl Remember the rules are processed in the order they appear in the list top to bottom. Chris
Re: [squid-users] Large ACL problem
On 3/27/07, Chris Rosset [EMAIL PROTECTED] wrote: So just checking if the --enable-gnuregex might help, or should i go with squidgard or squirm some other redirector? I'm not sure what you are trying to acl, but Dansguardian works nicely for filtering purposes. Chris
Re: [squid-users] Squid on Windows XP
On 3/26/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Sun 2007-03-25 at 17:25 -0400, Chris Nighswonger wrote: And if you try using the squidclient command line client shipped with Squid?

C:\squid\bin> squidclient http://www.google.com
HTTP/1.0 200 OK
Cache-Control: private
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=feeb9121718069f4:TM=1174907365:LM=1174907365:S=tDln0NdET5dCL7Hm; expires=Sun, 17-Jan-2038 19:14:07 GMT; path=/; domain=.google.com
Server: GWS/2.1
Date: Mon, 26 Mar 2007 11:09:25 GMT
X-Cache: MISS from home-computer
X-Cache-Lookup: MISS from home-computer:3128
Via: 1.0 home-computer:3128 (squid/2.6.STABLE12)
Proxy-Connection: close
-html dump clipped-

It appears to work OK here. This request shows up in the access.log as well:

1174907359.980   1462 127.0.0.1 TCP_MISS/200 4319 GET http://www.google.com - DIRECT/216.239.37.99 text/html

Is it an IE issue? XP issue? M$ issue? Chris
Re: [squid-users] Squid + DMZ
On 3/26/07, Charl Loubser [EMAIL PROTECTED] wrote: The intranet runs on a separate server, which has IP 192.168.1.3. It does not seem to be able to proxy the request, and constantly hands me a connection refused error when I try to access one of the pages. 1. Where is squid running? 2. What does squid's access.log say when you access both urls? Chris
Re: [squid-users] Squid on Windows XP
On 3/26/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Mon 2007-03-26 at 07:17 -0400, Chris Nighswonger wrote: There are several proxy settings in MSIE. There are the general proxy settings, then another set per connection. Maybe more.. They say you learn something new every day. After years of working with M$ *junk* (sorry) you have taught me something new... Each dialup connection has its own proxy settings. I had only set the *general* (equal to LAN it appears) proxy settings as you suggested. After setting the *connection-specific* proxy settings, squid functions as expected. Leave it to M$ to complicate it. Henrik, I owe you again. Thanks. Guido, thanks for the help and the very nice Windows port of squid. Chris
Re: [squid-users] Squid on Windows XP
Hi Guido, On 3/25/07, Guido Serassio [EMAIL PROTECTED] wrote: Hi, Squid works fine on all Windows versions starting from 2000 to the latest Vista. Do you have any personal firewall running on your XP machine? I had it on, but have disabled it completely now. It still looks like squid is not proxying pages. I have squid set up to accept connections on 127.0.0.1:3128 The interesting thing is that even with the squid service stopped, the browser will still resolve and load pages with the proxy setting enabled. Weird. I run squid on a larger campus network on FC6. Based on my configuration there, I see no reason why this should not be working. I tried to watch packets with wireshark, but I cannot look at traffic on the loopback on this XP box with it for some reason. Thanks, Chris
Re: [squid-users] Squid on Windows XP
On 3/25/07, Guido Serassio [EMAIL PROTECTED] wrote: Maybe the same reason because squid is not working? Also check your antivirus software. I tried with AV services completely disabled. No luck. Can you see the 3128 port in use with the netstat -a command? With squid started:

TCP    nighswonger-hm:3128    nighswonger-hm:0    LISTENING

With squid stopped: 3128 is not in use. Thanks for the help. Chris
[squid-users] Squid on Windows XP
Hi all, I installed the windows port of squid by Acme Consulting on an XP workstation with a dialup connection to the inet. Fixed up the squid.conf so that squid listens on 127.0.0.1 and set the IE proxy settings accordingly. Sadly I get no page-loads. The cache log shows that squid starts up OK and picks up the dns addresses assigned to the dialup connection. However, the page faults count on exit looks extremely high: Page faults with physical i/o: 1640 I use aufs on my FC6 squid and assumed that this would be fine on xp. Here is my cache_dir line (will tune later): cache_dir aufs c:/squid/var/cache 1024 16 256 Neither access.log nor store.log have any entries. Any thoughts on what is wrong here? Does this port not play well on XP? Or have I chosen the wrong store type? Or missed something else? I can post more of the config files if needed. Thanks, Chris
Re: [squid-users] RE: Squid, Java, Basic Authentication
On 3/23/07, Brian Bepristis [EMAIL PROTECTED] wrote: Hey all, I have squid set up and it has been running for some time at one of my customers' sites. However, whenever they visit this site with java it keeps asking for the username and password, and we are typing it in correctly. I need to get this issue resolved today, so any help would be much appreciated. Here is the conf, and here are the cache.log and the access.log. Thanks for the help. This is probably similar to the java auth issues with ntlm. See http://www.mail-archive.com/squid-users@squid-cache.org/msg44693.html as well as various other posts in the list archives. Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/14/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: The window scaling problem is unfortunately not so easy to detect in Squid as it causes connections to hang after the request has been sent and acknowledged making it just look like the server takes ages to process the request, so it generally never reaches a retry condition.. I notice that the window may be scaled many times during the course of packet exchanges. I also noticed that the size of the window on the acknowledge packet immediately before the hang was different nearly every time. So, my question is: Is the symptom which exposes the window scaling problem simply the packet sequence 'request - response - hang (aka no subsequent packets)'? Chris
Re: [squid-users] Streaming
On 3/14/07, Fabio Silva [EMAIL PROTECTED] wrote: Chris, look: I have no ACL, just one acl network src 192.168.2.0/255.255.255.0 http_access allow network All works well, but only one site doesn't work. The site is from Brazil, www.uol.com.br, but it is the users area and must have username and password... If I don't use the proxy, like an ie direct connection or firefox direct connection, the video opens ok... I tried to create an acl streaming rep_mime_type with the type of the file that is shown in access.log but it doesn't work... Is there any way to bypass squid for some sites??? Hmmm. This is getting beyond my limited knowledge. Maybe a tcpdump on the interface connected to the internet while accessing this link via squid would give some insight into what is happening. Perhaps someone else may have a thought as well. Chris
Re: [squid-users] Squid Java problem
On 3/13/07, Tornado [EMAIL PROTECTED] wrote: Yes we are. Are there any known issues with NTLM and java? Java does not seem to support transparent authentication very well. I use ntlm_auth and had the same issue. My workaround is to add this to my squid.conf:

acl Java browser Java/1.4 Java/1.5
http_access allow localhost Java  # the localhost acl is because I run DG content filtering on the same box.

You may need to vary this depending on the versions of Java your clients run and your setup. This allows Java scripts to be accessed unauthenticated. This fix is discussed elsewhere on this list as well. Chris
Re: [squid-users] Squid Java problem
On 3/12/07, Tornado [EMAIL PROTECTED] wrote: Hi all, We are using squid proxy which is integrated with AD on our network without much problems and everything seems to be working fine except when site makes the use of Java. Are you using ntlm? Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/11/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: On Sun 2007-03-11 at 16:38 +0800, Adrian Chadd wrote: If someone would like a fun weekend project - write something to sniff out these broken connections and insert temporary ip routes for it. Another idea would be a test tool to see why a site is broken.. Known issues:

- ECN
- Window Scaling
- Forgetting Vary
- Mixing up ETag (same ETag on multiple incompatible entities)
- Various malformed responses
  * Double content length
  * Malformed headers
  * Repeated single-value headers

If I knew more about the structure of these items I'd give it a whirl. As it is, I have just come up to the bottom level of understanding tcp window scaling. FWIW, I complained to the ncsecu and got a call from their IT dept today. It seems that using the words firewall, bank, and broken in the same sentence caused a stir. Apparently they did an OS upgrade on their Symantec (?) firewall recently. I'm not the only one complaining. We'll see if Symantec fixes it. Thanks for the help again, Henrik. I would have been lost on this one without it. Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: tor 2007-03-08 klockan 20:25 -0500 skrev Chris Nighswonger: ip route add $THEIR_IP/32 via $MY_GATEWAY window 65535 which only limits window scaling for that destination without interfering with your other connections ---end snip--- Any thoughts? Better or worse than the other? Obviously better. Nice! Works great: happy users, every network admin's dream. This method may be something to post in the wiki on the page you mentioned regarding these issues. For what it's worth I had no problem loading the start page using Firefox via Squid-2. What OS are you running squid-2 over? Linux Fedora Core 6, but I had both window scaling and ECN disabled since earlier tests with other broken sites.. ok. My squid is on FC6 as well. Thanks so much for the help, Henrik. Chris
[squid-users] Another HTTP 1.1 Question
Hi all, I have a site (www.ncsecu.org) which has been working fine via Squid 2.6STABLE9. Several days ago it broke. Doing some investigation with wireshark, it looks like the site has switched to HTTP 1.1. I have checked with the list archives and understand that there is no real support for 1.1 in squid at present. I have tried the header_access workaround. I also found a post by Henrik suggesting this patch: http://www.henriknordstrom.net/code/squid-http11.patch which appears to be a broken link. Since this site is a banking site and many of my users bank there, I need to come up with a workaround. I have thought to setup a firewall rule combined with client configuration to bypass the proxy for this site. I don't like this solution because it is so specific and requires manual changes to configurations at the client level. Any thoughts/suggestions/other workarounds in place for HTTP 1.1/Squid? Thanks, Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/8/07, Adrian Chadd [EMAIL PROTECTED] wrote: You can try the Squid-2 snapshots which include the below patch. http://www.squid-cache.org/Versions/v2/HEAD/ Here is what I have done:
1. My current install is via yum (rpm).
2. I have configured with the same options returned from '# squid -v' and done a 'make'.
3. I have backed up my current squid.conf.
Here is the question: Do I do a 'make install', then replace the new squid.conf with my original, and start squid back up? (This is a production box and I really don't want to bust it.) Thanks, Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: tor 2007-03-08 klockan 08:38 -0500 skrev Chris Nighswonger: Hi all, I have a site (www.ncsecu.org) which has been working fine via Squid 2.6STABLE9. Several days ago it broke. Doing some investigation with wireshark, it looks like the site has switched to HTTP 1.1. Can the brokenness be tested without having an account at the site? It can. The default page will not load with squid in-line. No errors at all in access.log. The browser just hangs. This happens after squid forwards an HTTP 1.0 packet. The entire packet exchange dies at this point. With squid out of line, the same packet is HTTP 1.1 and the page loads right up. It may be worth trying 2.6.STABLE10, it has significant workarounds for broken HTTP/1.1 servers (i.e. the bulk of the http11 patch mentioned earlier). No need to go to Squid-2.HEAD for this. The related code is the same in 2.6.STABLE10. I'll grab it then. Thanks. Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: tor 2007-03-08 klockan 18:46 -0500 skrev Chris Nighswonger: It can. The default page will not load with squid in-line. No errors at all in access.log. The browser just hangs. This happens after squid forwards an HTTP 1.0 packet. The entire packet exchange dies at this point. With squid out of line, the same packet is HTTP 1.1 and the page loads right up. That smells more like a TCP window issue than an HTTP/1.1 issue.. ok. If on Linux try the following workaround:

echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

this works around quite many broken firewalls not coping well with window scaling, but significantly sacrifices performance over long distance connections (measured in RTT * bandwidth, not miles)... I can try this tomorrow. For what it's worth I had no problem loading the start page using Firefox via Squid-2. Up until a week or so ago, I had no problems with Firefox/Squid-2 either... :( Testing.. Ah, yes. There is a broken firewall at this site crashing window scaling.. http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses?highlight=%28window%29%7C%28scaling%29#head-699d810035c099c8b4bff21e12bb365438a21027 Someone should contact the site operators explaining the problem to them.. I read the info at the link above. Are you suggesting that the issue is a broken firewall at ncsecu.org? If that is the issue, I'll have a chat with them. Thanks again, Chris
Re: [squid-users] Another HTTP 1.1 Question
On 3/8/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: That smells more like a TCP window issue than an HTTP/1.1 issue.. If on Linux try the following workaround:

echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

this works around quite many broken firewalls not coping well with window scaling, but significantly sacrifices performance over long distance connections (measured in RTT * bandwidth, not miles)... Here: http://lwn.net/Articles/92727/ I found this workaround:

---snip---
With kernel 2.6.17.13 or higher, you can also do:
THEIR_IP=1.2.3.4
MY_GATEWAY=5.6.7.8
ip route add $THEIR_IP/32 via $MY_GATEWAY window 65535
which only limits window scaling for that destination without interfering with your other connections
---end snip---

Any thoughts? Better or worse than the other? For what it's worth I had no problem loading the start page using Firefox via Squid-2. What OS are you running squid-2 over? Thanks, Chris
Re: [squid-users] Streaming
On 3/7/07, Fabio Silva [EMAIL PROTECTED] wrote: Yes... but nothing like DENIED is logged... just normal access... 1. Does the video stream ok with squid bypassed? (ie direct connection to the internet) 2. Post the acl and http_access portions of your squid.conf. Chris
Re: [squid-users] deny specific ip access to specific domain
On 3/7/07, Michael Gichoga [EMAIL PROTECTED] wrote: I'm fairly new to squid and I want to implement a policy to deny a specific ip access to a particular domain e.g ebay.com How can I make this work with acls? Try something like:

acl ebay dstdomain .ebay.com
acl restrict_ip src IP_TO_BE_RESTRICTED/NETMASK
http_access deny ebay restrict_ip

Be sure to place the http_access statement *before* other less restrictive http_access statements. Rules are processed in the order they appear. Also see: http://www.visolve.com/squid/squid24s1/access_controls.php for more info on access controls in squid. Chris
Re: [squid-users] Streaming
On 3/5/07, Fabio Silva [EMAIL PROTECTED] wrote: Hi all, I have no rule to block streaming in my squid.conf, but when I try to open a video from a site, the video doesn't show. I tried to create an acl with the MIME type of asf video but it didn't solve my problem. Any clue? Have you checked your access.log to see what happens when accessing the video? Chris
Re: [squid-users] Squid Allowing Sites Not In Any Allow List - Why?
On 3/5/07, Chris Robertson [EMAIL PROTECTED] wrote: acl proxy_a_sites dstdom_regex [-i] c:/squid/lists/proxy_a_sites.txt I'd suggest you start by changing this ACL to one using dstdomain. I mentioned this in our previous exchange. The regular expressions you are using are far too vague and regular expressions should really be used sparingly. This SHOULDN'T be causing the problem you describe, but it's just good practice. From http://www.regular-expressions.info/dot.html The dot is a very powerful regex metacharacter. It allows you to be lazy. Put in a dot, and everything will match just fine when you test the regex on valid data. The problem is that the regex will also match in cases where it should not match. If you are new to regular expressions, some of these cases may not be so obvious at first. http_access allow proxy_a_users proxy_a_sites http_access allow proxy_b_users proxy_b_sites http_access deny all Is this ALL of your http_access lines? What you have shown does not explain the results you are getting. Agreed. Please post the rest of your http_access lines in the order they appear in your squid.conf. Chris
[squid-users] NTLM realm Parameter
Hi, I run 2.6.STABLE9. I notice that the ntlm_auth helper does not have a realm parameter. How hard would it be to add this? The resulting proxy prompt in Firefox shows a blank where the realm name should be. Thanks, Chris
Re: [squid-users] NTLM realm Parameter
On 3/2/07, Henrik Nordstrom [EMAIL PROTECTED] wrote: fre 2007-03-02 klockan 12:41 -0500 skrev Chris Nighswonger: Hi, I run 2.6.STABLE9. I notice that the ntlm_auth helper does not have a realm parameter. How hard would it be to add this? The resulting proxy prompt in Firefox shows a blank where the realm name should be. The NTLM and Negotiate schemes as specified by Microsoft do not have a realm.. I was looking at the source a bit and noticed that the header was constructed differently for ntlm_auth. just one of many deviations from the HTTP protocol standards. Notice that the terms Microsoft and Maverick begin with the same letter. Thanks, Chris
Re: [squid-users] WWW-Authenticate error
What version of Squid are you running? You need version 2.6 to proxy the broken Microsoft NTLM authentication scheme. Kinkie. Sorry for the late response; it's version 2.5. But is it possible with a newer version? As mentioned above, 2.6 works nicely. Chris
Re: [squid-users] New (beta) website!
Very nice! I like the rounded, streamlined look. Chris On 2/26/07, Adrian Chadd [EMAIL PROTECTED] wrote: Hi everyone, The Squid team has been working on a replacement website for the Squid project. The first stage is almost complete - the only thing missing from the current website is the downloads section which should be completed shortly. It can be found at http://new.squid-cache.org/ . We'd appreciate any and all feedback you may have. As this is a volunteer project things take their time but we're slowly getting there. Now, this website is an almost verbatim copy of the current content into the new template format. I've got some plans for additional content once this site has been made live and the bugs are ironed out but, as always, we appreciate any suggestions you may have. (For those of you who are technically inclined - its all checked into Squid CVS in the module 'www2'. Its a simple PHP template assembly setup which I'm aiming to add in caching primitives to once the site has been deployed. This does mean you're able to submit changes to the website by giving us a CVS patch. :P) I'd like to call attention to the How to help out section: http://new.squid-cache.org/Intro/helping.html The Squid project is run by volunteers. Most of the active developers work on fixing Squid bugs and improving Squid in our spare time. We'd love to hear from you if you'd like to help us out in any way - this can be (as the web page says) being a tester for beta code, helping us write articles and examples on caching technologies in general, participating in development and donating to the project via PayPal. Even $10 or $20 will help. Don't be afraid to donate! Now, onto serious matters. I'm thinking of putting up some Squid merchandise for sale through one of the various online market places. All proceeds from the merchandise would go to the project. I'm currently leaning towards Cafepress. 
I'd love to hear positive/negative feedback for this idea and any suggestions you all have for Squid kit. I'm thinking T-Shirts, Mugs and Squid stickers to start with. Thoughts? Thank you, Adrian Chadd (Volunteer, The Squid-cache project.)
Re: [squid-users] Configuration Question
On 2/25/07, SQUID Mailing List [EMAIL PROTECTED] wrote: Hello List, My question is: how do I configure squid to capture traffic from my gigabit interface? It would be nice to have more info. But for starters, in squid.conf:

http_port gigabit_addr:port_to_listen_on

Chris
Fwd: [squid-users] Authentication Method - NTLM
On 2/21/07, Christian Ricardo dos Santos [EMAIL PROTECTED] wrote: I'm wondering if there is any way to configure squid to: - Authenticate users through multiple AD domains (Windows 2k / 2k3). Search the list archives. I believe a post by Henrik mentions the need for trusts to be established between the domains. - Allow each user (or AD group) to access a specific list of sites (txt file). I imagine this can be done with Squid, but am not sure how. (I seem to remember seeing a post recently dealing with txt file site lists.) It seems a good content filter would be more adept at it. I use DG with ntlm to do it. - Automatically accept the change of any password made by a user (Windows AD). If your users can change their own passwords via their workstation, the change should be immediate with squid since it will be authenticating against the DC where the change was made. -- Chris Nighswonger Network Systems Director Foundations Bible College Seminary www.foundations.edu www.fbcradio.org
Re: [squid-users] $20 in your PayPal account if you help me fix this
acl proxy_a_users external win_domain_group group_proxy_a
acl proxy_a_sites dstdom_regex [-i] c:/squid/lists/proxy_a_sites.txt
acl proxy_b_users external win_domain_group group_proxy_b
acl proxy_b_sites dstdom_regex [-i] c:/squid/lists/proxy_b_sites.txt

Why are you using dstdom_regex as the acl type rather than simply dstdomain? Your url format in the *.txt files fits the pattern suggested for dstdomain rather than the regexp version (see 'dstdomain' here http://www.visolve.com/squid/squid30/accesscontrols.php#acl). Remember, an unescaped period in a regexp is a wildcard that matches any character, and there are two in each of your url entries. This may be causing the issue. Chris
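To illustrate the over-matching risk described above, here is a small sketch using Python's re module (the domain names are hypothetical, not from the poster's lists), showing how an unescaped dot in a dstdom_regex-style pattern can match hosts it was never meant to:

```python
import re

# Hypothetical list entry used as a regex: each unescaped dot
# matches ANY character, not just a literal period.
loose = re.compile("www.example.com")

assert loose.search("www.example.com")            # intended match
assert loose.search("wwwXexampleYcom.evil.test")  # unintended match!

# Escaping the dots (or using dstdomain instead) avoids the over-match.
strict = re.compile(r"www\.example\.com")
assert not strict.search("wwwXexampleYcom.evil.test")
```

This is why the advice in the thread is to prefer dstdomain for plain domain lists and reserve regex ACLs for cases that genuinely need pattern matching.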
Fwd: [squid-users] Re: Having problems with ntlm_auth in my squid.conf file
On 2/22/07, Ray Dermody [EMAIL PROTECTED] wrote: Hi, Thanks for that Craig, that seems to have got me a bit further now. I'm getting prompted for a username and password when I try to browse, but it accepts nothing. Under /var/log/messages I can see ntlm_auth (permission?) errors.

Feb 22 12:43:16 squidtest kernel: audit(1172148196.323:12): avc: denied { create } for pid=3133 comm=ntlm_auth scontext=user_u:system_r:winbind_helper_t tcontext=user_u:system_r:winbind_helper_t tclass=udp_socket
Feb 22 12:43:16 squidtest kernel: audit(1172148196.323:13): avc: denied { create } for pid=3133 comm=ntlm_auth scontext=user_u:system_r:winbind_helper_t tcontext=user_u:system_r:winbind_helper_t tclass=udp_socket
Feb 22 12:43:16 squidtest kernel: audit(1172148196.323:14): avc: denied { create } for pid=3133 comm=ntlm_auth scontext=user_u:system_r:winbind_helper_t tcontext=user_u:system_r:winbind_helper_t tclass=udp_socket

Has anyone seen this error before? These are audit notices from SELinux. It appears that SELinux is set to permissive mode. As they begin with 'audit' they have no true effect on your system's operation. Somebody with more SELinux policy experience than I might be able to tell you how to correct the policy to permit the helper program. However, I don't think this is affecting any issues you are mentioning in this post. If you are working with a client that is *not* a member of your domain you may need to try entering the username as 'domain\username' or '[EMAIL PROTECTED]'. If the machine is not a domain member it will supply its own name in the place of 'domain' and the authentication will fail. You can also tail the squid access.log while attempting to browse and see what is happening to the request. Maybe the cache.log also... although this may depend on the debug level set in your squid.conf (again, maybe someone more knowledgeable can comment on this). Chris
[squid-users] Gmail Repeated prompts for Authencation by Squid
Hi all, I am running Squid 2.6.STABLE9 with ntlm_auth and basic auth, both run against a Windows DC, and Firefox 1.5.0.9. If I log into a gmail account and leave the browser open, eventually the gmail interface performs some type of refresh. At times when it does this, squid begins to prompt for credentials. Upon entering valid credentials, it continues to prompt, over and over again. If I cancel, squid falls back to basic auth and the credentials are accepted and away we go. Taking a look at cache.log while the repeated prompt is occurring, here is what I see:

[EMAIL PROTECTED] ~]# tail -f /var/log/squid/cache.log
[2007/02/22 10:36:32, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1
[2007/02/22 10:36:52, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1
[2007/02/22 10:37:12, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1
[2007/02/22 10:38:04, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1
[2007/02/22 10:38:42, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1
[2007/02/22 10:39:49, 1] libsmb/ntlmssp.c:ntlmssp_update(267) got NTLMSSP command 3, expected 1

The cache.log is loaded with this error: 30 for today and more in the past. Any thoughts? Thanks, Chris
Re: [squid-users] NTLM Authentication and Non-NTLM Friendly Applications
On 2/21/07, Adrian Chadd [EMAIL PROTECTED] wrote: On Tue, Feb 20, 2007, Chris Nighswonger wrote: Hi All, I am sure that this must be a common issue with proxies and NTLM. (yuk..) My users run a variety of apps which desire to access the internet. Many of them do not play well with NTLM auth. I have been in the practice of simply using squid ACLs to permit access to these apps without authentication based on their destination domain. I am wondering what ways others have used to address this issue and would like to hear them. Or perhaps this is the best way. Which version of Squid are you using? Squid-2.6 improves on this quite a lot. 2.6.STABLE9. Some of these apps have in their proxy settings the option to enter username/password. However, it looks as if they are passing these credentials off *basic* auth style. Below are my auth_param settings for both ntlm and basic. It seems that I have seen somewhere in this list a post which showed using the squid 'ntlmssp' helper as the 'basic program' setting. Perhaps this is what I need to do so that when the app passes basic auth credentials they are checked against the DC?

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 17
auth_param ntlm keep_alive on
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 2
auth_param basic realm Campus Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

This issue is especially acute with anti-virus client updates. Thanks for the assistance. Chris
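One commonly cited way to do what the poster describes (a sketch only, not confirmed by this thread's replies; verify the flag against your Samba version's ntlm_auth documentation) is to point the basic scheme at ntlm_auth's basic helper protocol, so clients that fall back to basic auth are still checked against the DC instead of a local passwd file:

```
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Campus Proxy Server
```

Note this replaces the ncsa_auth/passwd-file setup shown above, so any local-only accounts in /etc/squid/passwd would stop working.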
[squid-users] LRU Aging Time in Cachemgr
Hi all, After reading a number of posts, it seems that I should have a line in Cachemgr giving the LRU aging. I do not. Here is a snip of what I see:

Cache information for squid:
Request Hit Ratios:  5min: 9.9%, 60min: 7.9%
Byte Hit Ratios:  5min: 38.1%, 60min: 43.8%
Request Memory Hit Ratios:  5min: 0.0%, 60min: 1.4%
Request Disk Hit Ratios:  5min: 26.4%, 60min: 11.1%
Storage Swap size: 3542400 KB
Storage Mem size: 12188 KB
Mean Object Size: 13.64 KB
Requests given to unlinkd: 0

I recently dropped and rebuilt my cache from scratch in an effort to diagnose some other issues. The new cache has only been up for right at 24 hours. I don't know if this may be the reason for the missing LRU time or not. Thanks, Chris
[squid-users] NTLM Authentication and Non-NTLM Friendly Applications
Hi All, I am sure that this must be a common issue with proxies and NTLM. (yuk..) My users run a variety of apps which desire to access the internet. Many of them do not play well with NTLM auth. I have been in the practice of simply using squid ACLs to permit access to these apps without authentication based on their destination domain. I am wondering what ways others have used to address this issue and would like to hear them. Or perhaps this is the best way. Thanks, Chris
[squid-users] High TCP_MISS After Upgrading to 2.6.STABLE9
Hi all, Last night I upgraded our squid proxy from 2.6.STABLE5 to STABLE9. This morning I notice that the TCP_MISS rate has jumped to 47%. Have I missed something I should have done during the upgrade? Other info:
- Cache dir type = aufs
- The original install was via yum, so the upgrade was via yum. No problems during the upgrade, and nothing else was changed. Only 'squid stop', 'yum update squid', 'squid start' and done.
- In Cache Manager info, the select loop time was around 50-60 ms avg. It is now 167.532 ms avg. (Don't know what this means or if it is relevant to this issue.)
- Nothing out of the ordinary appears in the logs (unless I need to look for something specific).
Thanks, Chris
Re: [squid-users] A Different Squid Load Issue?
The cache_dir type. Look into replacing it with aufs, to avoid blocking the main Squid process on disk-I/O. Done. Another pitfall is the small amount of memory the server has, and the large size of the disk cache. See http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991 for more details. Let me make sure I am figuring correctly:

(cache size * 10 + cache_mem + 20MB) * 2 = recommended system memory

so for my cache_dir (20 GB) and cache_mem (8 MB) sizes:

(20 * 10 + 8 + 20) * 2 = 456, so 512MB

I have 1GB on order so that should resolve memory issues I would think. Thanks for the help. Chris
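The arithmetic above can be checked with a tiny sketch (a hypothetical helper, not part of Squid; it encodes the thread's rule of thumb of roughly 10 MB of RAM per GB of cache_dir for the in-memory index, plus cache_mem and about 20 MB of overhead, doubled for headroom):

```python
def recommended_ram_mb(cache_dir_gb, cache_mem_mb, overhead_mb=20):
    """Rule-of-thumb RAM estimate from the squid-users thread above."""
    return (cache_dir_gb * 10 + cache_mem_mb + overhead_mb) * 2

# The poster's numbers: 20 GB cache_dir, 8 MB cache_mem.
print(recommended_ram_mb(20, 8))  # 456
```

Consult the SquidFaq/SquidMemory wiki page linked above for the authoritative formula; the exact per-GB index cost varies by Squid version and mean object size.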
Re: [squid-users] Dansguardian or Squid
On 2/9/07, Alan Araujo [EMAIL PROTECTED] wrote: What is the best solution: 1 - Squid -- Dansguardian -- Squid Or 2 - Dansguardian (2.9.8.2) -- Squid I am running #2 and it works fantastic. My thought is the fewer components involved, the less there is to break. Chris
[squid-users] A Different Squid Load Issue?
Hi all, I have an issue where when the client browser requests a page, it (the client) appears to hang and just wait without loading the page. After several 'refresh' attempts, the page will load. At other times the client behaves as if the page has loaded, but displays nothing (blank screen). Over the past two days I have watched the squid process via 'top' and noticed that when this phenomenon occurs the squid process grabs 100% of the cpu resources and the 'top' screen freezes until squid backs off. Most of the time squid uses far less than 30% of the cpu. I have followed several of the threads about squid under high load and these do not seem to apply here. Additional info that may help:

Squid Version 2.6.STABLE5
Hardware: Dual PII 400MHz, 192MB ram, 40GB LVM (2 odd sized SCSI drives)
OS: Fedora 6
The system also runs DansGuardian, a firewall, and a caching-only Bind9 DNS
There are only 40-50 users avg connected simultaneously. Some use VOIP applications (ie. Skype, etc.)

Some pertinent lines from squid.conf:

store_avg_object_size 14 KB # based on cachemgr.cgi avg
cache_dir ufs /var/spool/squid 20480 65 256

Does anything here stand out as a potential issue? Thanks, Chris
Re: [squid-users] squid 2.6STABLE9 WCCPv2 CISCO 2600 w/bad recv_id 00000000
Maybe this is helpful: http://www.reub.net/node/3 Chris

On 2/8/07, Martin Kobele [EMAIL PROTECTED] wrote: ok, I finally found out, after running squid in gdb, how to turn on the very detailed debug output. So I can confirm squid does receive all of the handshaking:

2007/02/08 15:19:45| wccp2HereIam: Called
2007/02/08 15:19:45| wccp2HereIam: sending to service id 97
2007/02/08 15:19:45| wccp2_update_md5_security: called
2007/02/08 15:19:45| eventAdd: Adding 'wccp2HereIam', in 10.00 seconds
2007/02/08 15:19:45| wccp2HandleUdp: Called.
2007/02/08 15:19:45| Incoming WCCPv2 I_SEE_YOU length 148.
2007/02/08 15:19:45| Incoming WCCP2_I_SEE_YOU Received ID old=1016 new=1017.

Now I still have the problem that I don't get any webtraffic redirected. Is this message

1d03h: WCCP-EVNT:D97: Here_I_Am packet from 192.168.3.20 w/bad rcv_id

to blame? Is it a bad version of IOS? Is anything else wrong? Thank you! Regards, Martin

On Thursday 08 February 2007 14:29, Martin Kobele wrote: Hi, here is more output of the router confirming that the communication is kind of working:

router1#debug ip wccp packets
WCCP packet info debugging is on
router1#
1d02h: WCCP-PKT:D97: Received valid Here_I_Am packet from 192.168.3.20 w/rcv_id 02F4
1d02h: WCCP-PKT:D97: Sending I_See_You packet to 192.168.3.20 w/rcv_id 02F5
1d02h: WCCP-EVNT:D97: Here_I_Am packet from 192.168.3.20 w/bad rcv_id
1d02h: WCCP-PKT:D97: Sending I_See_You packet to 192.168.3.20 w/rcv_id 02F6
1d02h: WCCP-PKT:D97: Received valid Here_I_Am packet from 192.168.3.20 w/rcv_id 02F6
1d02h: WCCP-PKT:D97: Sending I_See_You packet to 192.168.3.20 w/rcv_id 02F7
1d02h: WCCP-EVNT:D97: Built new router view: 1 routers, 1 usable web caches, change # 000E
1d02h: WCCP-PKT:D97: Received valid Redirect_Assignment packet from 192.168.3.20 w/rcv_id 02F7
1d02h: WCCP-PKT:D97: Received valid Here_I_Am packet from 192.168.3.20 w/rcv_id 02F7
1d02h: WCCP-PKT:D97: Sending I_See_You packet to 192.168.3.20 w/rcv_id 02F8
1d02h: WCCP-PKT:D97: Received valid Here_I_Am packet from 192.168.3.20 w/rcv_id 02F8
...

however, no output on cache.log, running squid with parameters -NDX -d9. Regards, Martin

On Thursday 08 February 2007 11:56, Martin Kobele wrote: Hi, I am experiencing the following problem: Squid does not get any I_SEE_YOU messages and the router prints out Here_I_Am packet from 192.168.3.20 w/bad rcv_id 000. Here is the setup and what is happening in more detail: SQUID BOX IP: 192.168.3.20, Squid 2.6STABLE9, kernel 2.4.32 or 2.6.16; for now the kernel version does not seem to matter. For now I use one squid and one router. But once I get this to work, I plan on using 2 routers and most likely a second squid. Thus the dynamic configuration. squid.conf wccp2 settings:

wccp2_router 192.168.3.21
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service dynamic 97 password=exPAS12
wccp2_service_info 97 protocol=tcp flags=src_ip_hash,ports_source priority=240 ports=80
wccp2_weight 1

CISCO 2621
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.0(3)T3, RELEASE SOFTWARE (fc1)
Copyright (c) 1986-1999 by cisco Systems, Inc.
Compiled Thu 15-Apr-99 15:41 by kpma
Image text-base: 0x80008088, data-base: 0x80693A88
ROM: System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)

part of 'show conf':
ip wccp 97 password exPAS12
ip name-server 192.168.1.253
ip name-server 192.168.1.252
!
interface FastEthernet0/0
 ip address 192.168.3.21 255.255.255.0
 no ip directed-broadcast
 ip wccp 97 redirect out
 no ip mroute-cache

STARTING SQUID: If I start squid I get the following output on the CISCO:
23:30:08: WCCP-EVNT:D97: Web Cache 192.168.3.20 added
23:30:18: WCCP-EVNT:D97: Built new router view: 1 routers, 1 usable web caches, change # 0030

If I restart squid I get the following on the CISCO:
23:31:03: WCCP-EVNT:D97: Here_I_Am packet from 192.168.3.20 w/bad rcv_id 000

eventually, I get this:
23:50:06: WCCP-EVNT:D97: Redirect_Assignment packet from 192.168.3.20 fails source check

I do not get any I_SEE_YOU messages in squid's cache.log. The only wccp2 related messages are:
2007/02/08 11:14:22| WCCP Disabled.
2007/02/08 11:14:22| Accepting WCCPv2 messages on port 2048, FD 48.
2007/02/08 11:14:22| Initialising all WCCPv2 lists

TCPDUMP: in order to be sure that udp packets are coming through, I captured the traffic while starting squid:
# tcpdump -X -s 1600 -n -i any -p port 2048
# squid
11:16:13.481972 IP 192.168.3.20.2048 > 192.168.3.21.2048: UDP, length 160
0x0000: 4500 00bc f920 4011 f996 c0a8 0314 [EMAIL PROTECTED]
0x0010: c0a8 0315 0800 0800 00a8 ceb5 000a
0x0020:
Re: [squid-users] Squid not resolving some url's
RCODE  ATTEMPT1  ATTEMPT2  ATTEMPT3
0      107751    79        35
1      0         0         0
2      2369      2268      2224
3      988       217       4
4      0         0         0
5      0         0         0

Before this issue came up, I never remember seeing anything beyond the 0 row. I was not able to figure out what this matrix is telling me or if it is relevant to the problem I am experiencing.

0 is name found
1 is could not understand the query
2 is DNS server failure
3 is name not found (authoritative)
4 is query type not implemented
5 is access denied

All are responses from the DNS servers to Squid. So the numbers in this matrix reinforce resolution failure as the issue. Thanks again, Henrik. As always, your help is very much appreciated. Chris
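For reference, Henrik's list corresponds to the standard DNS RCODE values from RFC 1035, which can be written out as a small lookup table (the parenthetical glosses follow his wording above):

```python
# DNS RCODE names (RFC 1035) for the rows of the Rcode Matrix.
RCODES = {
    0: "NOERROR (name found)",
    1: "FORMERR (could not understand the query)",
    2: "SERVFAIL (DNS server failure)",
    3: "NXDOMAIN (name not found, authoritative)",
    4: "NOTIMP (query type not implemented)",
    5: "REFUSED (access denied)",
}

# Non-zero rows in the matrix are resolution failures seen by Squid.
for rcode in (2, 3):
    print(rcode, RCODES[rcode])
```

So a large count in rows 2 and 3 means the upstream DNS servers are actively returning failures, not that Squid is timing out.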
[squid-users] Fwd: Error in cache.log???
Hi all, I am seeing a number of entries similar to the following in my cache.log:

CACHEMGR: unknown@192.168.0.247 requesting 'ntlmauthenticator'
CACHEMGR: unknown@192.168.0.247 requesting 'idns'

They appear anywhere from 5 to 10 at a time. This correlates with complaints of slow page loads from users. Any thoughts? Thanks, Chris
[squid-users] Re: Error in cache.log???
On 2/6/07, Chris Nighswonger [EMAIL PROTECTED] wrote: Hi all, I am seeing a number of entries similar to the following in my cache.log: CACHEMGR: unknown@192.168.0.247 requesting 'ntlmauthenticator' CACHEMGR: unknown@192.168.0.247 requesting 'idns' It appears I answered my own question: cachemgr.cgi. And do I feel dumb :)
[squid-users] Squid not resolving some url's
Hi all, I have been working on this problem now for a day or so. I'm running 2.6.STABLE5. Towards the end of last week various pages began to resolve slowly and often required several F5's to finally load. The problem changed over the weekend to pages not resolving at all but being redirected to the search provided by the external dns servers we use (opendns). Bypassing squid and connecting directly to the Internet, using the same dns servers, clears the problem up. Dig shows that the zone files in the dns servers are correct for the urls having problems. This would seem to eliminate the dns servers as the issue. I think. The cache.log shows no unusual entries. Access.log shows the url's being requested. The only thing that appears different as far as I can see is the rcode section of the Internal DNS page of cachemanager. Here it is:

Internal DNS Statistics:
The Queue: (empty)

Nameservers:
IP ADDRESS      # QUERIES  # REPLIES
208.67.222.222  6262
208.67.220.220  0  0
192.168.0.2     0  0

Rcode Matrix:
RCODE  ATTEMPT1  ATTEMPT2  ATTEMPT3
0      107751    79        35
1      0         0         0
2      2369      2268      2224
3      988       217       4
4      0         0         0
5      0         0         0

Before this issue came up, I never remember seeing anything beyond the 0 row. I was not able to figure out what this matrix is telling me or if it is relevant to the problem I am experiencing. Any help is greatly appreciated. Chris
[squid-users] Re: Squid not resolving some url's
More info: I decided to restart squid and it refused. At that point it appears that the redirector process ran away and the load went through the roof. I ended up having to take it down hard. After restarting the box, I found this in cache.log:

2007/02/06 14:14:20| /var/spool/squid/10: (2) No such file or directory
FATAL: Failed to verify one of the swap directories, Check cache.log for details. Run 'squid -z' to create swap directories if needed, or if running Squid for the first time.
Squid Cache (Version 2.6.STABLE5): Terminated abnormally.
CPU Usage: 0.240 seconds = 0.156 user + 0.084 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 1

So I ran the prescribed switch, which I figure cost me two months worth of cache, and squid went back to running just fine. Any ideas as to what happened so that I might avoid it in the future? Thanks Chris On 2/6/07, Chris Nighswonger [EMAIL PROTECTED] wrote: Hi all, I have been working on this problem now for a day or so. I'm running 2.6.STABLE5. Towards the end of last week various pages began to resolve slowly and often required several F5's to finally load. The problem changed over the weekend to pages not resolving at all but being redirected to the search provided by the external dns servers we use (opendns). Bypassing squid and connecting directly to the Internet, using the same dns servers, clears the problem up. Dig shows that the zone files in the dns servers are correct for the urls having problems. This would seem to eliminate the dns servers as the issue. I think. The cache.log shows no unusual entries. Access.log shows the url's being requested. The only thing that appears different as far as I can see is the rcode section of the Internal DNS page of cachemanager.
> Here it is:
>
> Internal DNS Statistics:
>
> The Queue:
>                  DELAY SINCE
> ID   SIZE  SENDS FIRST SEND  LAST SEND
> ---- ----- ----- ----------- ---------
>
> Nameservers:
> IP ADDRESS      # QUERIES # REPLIES
> --------------- --------- ---------
> 208.67.222.222  62        62
> 208.67.220.220  0         0
> 192.168.0.2     0         0
>
> Rcode Matrix:
> RCODE ATTEMPT1 ATTEMPT2 ATTEMPT3
> 0     107751   79       35
> 1     0        0        0
> 2     2369     2268     2224
> 3     988      217
> 4     0        0        0
> 5     0        0        0
>
> Before this issue came up, I never remember seeing anything beyond the 0
> row. I was not able to figure out what this matrix is telling me or if
> it is relevant to the problem I am experiencing. Any help is greatly
> appreciated.
>
> Chris
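Back on that FATAL message: squid names the first-level cache directories with two-digit uppercase hex, so a missing /var/spool/squid/10 would be the seventeenth L1 directory, which only exists when the cache_dir line specifies more than the default 16 first-level dirs. A small sketch of the naming scheme (the directory count here is an assumption, not taken from the posted config):

```python
import os

# Path taken from the FATAL log line; the L1 count is an assumption
# (squid's default is 16, but a missing "10" implies this cache_dir
# was built with more than that).
cache_root = "/var/spool/squid"
l1_dirs = 32

# Squid formats first-level dirs as two-digit uppercase hex: 00, 01, ... 1F.
names = [os.path.join(cache_root, format(i, "02X")) for i in range(l1_dirs)]
print(names[0], names[15], names[16])  # "10" is the 17th directory
```

If the cache_dir in squid.conf really does say 16 L1 dirs, squid looking for "10" would itself be a sign of a config/cache mismatch rather than a deleted directory.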
Re: [squid-users] dstdomain/port acl question
On 2/2/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
> On Thu, 2007-02-01 at 16:26 -0500, Chris Nighswonger wrote:
>> The following is my setup to handle the direct connections:
>>
>> acl streamserver dstdomain .streamserver.com
>> acl streamport 1234
>>
>> http_access deny streamserver streamport
>> deny_info http://192.168.0.x:8000/mountpt streamserver streamport
>
> Where is this in relation to your other http_access rules?

http_access allow manager localhost
http_access allow manager masada1
http_access deny manager
http_access deny CONNECT !SSL_ports
http_access allow localhost UnauthAccess
http_access allow localhost WindowsUpdate
http_access allow localhost Java
http_access allow cnighswonger-lt
http_access allow localhost PURGE
http_access allow localhost AuthorizedUsers
# Deny connections from inside to the outside webradio stream and redirect
# them to the inside stream. The first two entries handle direct stream
# requests. The last two handle file list requests.
http_access deny streamserver streamport
deny_info http://192.168.0.238:8000/mountpt streamserver streamport
http_access deny streamlink
deny_info http://192.168.0.238:8000/list.m3u streamlink
# http_access deny !Safe_ports
http_access deny all

> And what is said in access.log?

The access.log shows two TCP_DENIED and one TCP_MISS, all looking at the outside streaming server.
1170362412.967      5 127.0.0.1 TCP_DENIED/407 1903 GET http://streamserver.com:7590/ - NONE/- text/html
1170362413.015     41 127.0.0.1 TCP_DENIED/407 2136 GET http://streamserver.com:7590/ - NONE/- text/html
1170362431.237      1 127.0.0.1 TCP_DENIED/407 1903 GET http://streamserver.com:7590/ - NONE/- text/html
1170362431.270  18222 127.0.0.1 TCP_MISS/600 4515 GET http://streamserver.com:7590/ Administrator DIRECT/69.5.81.71 -
1170362431.285      5 127.0.0.1 TCP_DENIED/407 2136 GET http://streamserver.com:7590/ - NONE/- text/html
1170362431.530      1 127.0.0.1 TCP_DENIED/407 1903 GET http://streamserver.com:7590/ - NONE/- text/html
1170362431.532    243 127.0.0.1 TCP_MISS/600 8859 GET http://streamserver.com:7590/ Administrator DIRECT/69.5.81.71 -

> But for this task of directing users to a local mirror even if they
> request the original Internet address I'd recommend you to use a url
> rewriter. This way you can get the local mirror completely transparent
> to your users, not even knowing they access the local mirror.

I have had some difficulty setting up for two redirectors (adzapper and squirm). I saw your post on this route and decided to give it a try. :)

Chris
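One thing the posted rule order suggests: http_access rules are evaluated top to bottom and the first matching rule wins, so an authenticated request can be allowed by "http_access allow localhost AuthorizedUsers" before the stream denies are ever reached. A sketch of the reordering, using the ACL names from the config above (untested against the real config):

```
# Sketch: put the specific denies ahead of any allow rule that could
# match the same request, since the first matching http_access wins.
http_access deny streamserver streamport
deny_info http://192.168.0.238:8000/mountpt streamserver streamport
http_access deny streamlink
deny_info http://192.168.0.238:8000/list.m3u streamlink

# ...the existing allow rules (manager, AuthorizedUsers, etc.) follow here...
http_access deny all
```

That would also match the access.log above, where the request is first challenged (TCP_DENIED/407), then authenticated, and only then goes DIRECT to the outside server instead of hitting the deny.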
[squid-users] dstdomain/port acl question
Hi all,

We run a webradio which is broadcast via an external streaming service (A). In an effort to keep the Internet pipe from becoming congested with audio streaming traffic from on-campus users listening to the stream, we set up an internal streamer (B) for use on campus. Of course, there are those who do not pay attention to the notice to use the inside streamer rather than the outside one. Since I run dg/squid, I am configuring squid to redirect requests headed for A to B.

There are two ways of accessing the stream. One is via a playlist file (i.e. http://streamserver.com/list.asx). The other is directly (i.e. http://streamserver.com:1234/).

I have set up the following to handle the playlist URLs:

acl streamlink url_regex -i ^http://streamserver.com/list.
http_access deny streamlink
deny_info http://192.168.0.x:8000/list.m3u streamlink

This part works great! (Thanks, Henrik. :)

The following is my setup to handle the direct connections:

acl streamserver dstdomain .streamserver.com
acl streamport 1234
http_access deny streamserver streamport
deny_info http://192.168.0.x:8000/mountpt streamserver streamport

This one does not work at all. Watching access.log, squid authenticates the request and then proceeds to pass the traffic to the external streaming server (A). Looking into the packets with Wireshark shows that they are indeed headed for streamserver.com:1234.

Two questions:

1. Am I using the correct acl types to match http://streamserver.com:1234/ (dstdomain + port)?
2. Am I doing this entire redirect the hard way? I would think that squid would be the logical place to take care of this. Or is it iptables?

Thanks,
Chris
--
Chris Nighswonger
Network Systems Director
Foundations Bible College Seminary
www.foundations.edu
www.fbcradio.org
Re: [squid-users] dstdomain/port acl question
>> acl streamport 1234
>
> Assuming this is not a typo, you forgot an important feature: the ACL
> type.
>
> acl streamport port 1234

Sorry about that. It is a typo. That line in the config does include the port ACL type.

Thanks,
Chris
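For anyone skimming the archive, the corrected pair from this exchange would read as follows (domain, port, and deny_info target taken from the earlier posts):

```
acl streamserver dstdomain .streamserver.com
acl streamport port 1234    # "port" is the ACL type that was missing
http_access deny streamserver streamport
deny_info http://192.168.0.x:8000/mountpt streamserver streamport
```

Without the type keyword squid cannot parse the acl line, so the deny that depends on it never matches.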
Re: [squid-users] squid stuck on old site
> I cleaned out the cache directory and used squid -z to rebuild it, and
> was still seeing the same old site.

I have a hard time picturing how that's even possible ;-) Maybe you have tried this already, but have you bypassed squid to see if your browsers can see the new site directly?

Chris
[squid-users] Multiple Redirect Programs
Hi all,

A search of the list archives shows only one post regarding this, and that one received no answer. So how do I use more than one redirect program with squid? Inserting multiple redirect_program lines bungles squid. I need to run both adzapper and squirm.

Chris
--
Chris Nighswonger
Network Systems Director
Foundations Bible College Seminary
www.foundations.edu
www.fbcradio.org
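Since this one never got an answer in the archive: squid accepts only a single redirect_program, so the usual workaround is a wrapper that chains the redirectors, feeding each request line to the first program and passing its answer on to the next. A hedged sketch under the classic redirector protocol (one "URL ip/fqdn ident method" line in, one rewritten URL or a blank line out); the adzapper and squirm paths are assumptions, and both programs must run unbuffered:

```python
import os
import subprocess
import sys

# Assumed install paths -- adjust to wherever adzapper/squirm actually live.
REDIRECTORS = [["/usr/local/bin/adzapper"], ["/usr/local/bin/squirm"]]

def start(programs):
    # One long-lived child per redirector, with line-buffered text pipes.
    return [subprocess.Popen(argv, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, text=True, bufsize=1)
            for argv in programs]

def rewrite(procs, request_line):
    """Feed one squid request line ("URL ip/fqdn ident method") through
    each redirector in turn. A blank answer means "no rewrite"."""
    parts = request_line.split()
    url, rest = parts[0], parts[1:]
    for p in procs:
        p.stdin.write(" ".join([url] + rest) + "\n")
        p.stdin.flush()
        answer = p.stdout.readline().strip()
        if answer:
            url = answer.split()[0]  # some redirectors echo extra fields
    return url

# Only enter the service loop when the real redirectors are installed.
if __name__ == "__main__" and all(os.path.exists(a[0]) for a in REDIRECTORS):
    children = start(REDIRECTORS)
    for line in sys.stdin:
        print(rewrite(children, line.rstrip("\n")), flush=True)
```

squid.conf would then carry a single line pointing at the wrapper, e.g. (hypothetical path) redirect_program /usr/local/bin/chain-redirectors.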