RE: [squid-users] Fwd: Squid compiled size

2013-11-20 Thread Jenny Lee
 On 11/20/2013 12:04 AM, Mohd Akhbar wrote:
 
 I compiled squid on Centos 6.2 64bit with
 
 ./configure --prefix=/usr --includedir=/usr/include
 --datadir=/usr/share --bindir=/usr/sbin --libexecdir=/usr/lib/squid
 --localstatedir=/var --sysconfdir=/etc/squid
 
 My compiled squid binary at /usr/sbin/squid is 28MB, but if I install squid
 from the rpm contributed by Eliezer it is only about 2MB (can't remember the
 exact size), definitely different from mine. Is there any problem with my
 compile method? Is it OK at 28MB?


Better to run a stripped binary on the production machine and keep the unstripped
copy around in case of segfaults.

cp /usr/sbin/squid /usr/sbin/squid.debug
strip /usr/sbin/squid

That should bring it down to about 2MB. If it crashes, give squid.debug to gdb.
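
For illustration, a minimal sketch of that post-crash step, assuming a core dump
was written (the core file path and name here are made up):

# load the unstripped copy together with the core dump to get readable symbols
gdb /usr/sbin/squid.debug /var/cache/squid/core.12345
(gdb) bt    # print the backtrace of the crashed process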

Jenny 

RE: [squid-users] Re: WARNING: unparseable HTTP header field {:: }

2013-11-12 Thread Jenny Lee
They generate huge log files. We turn them off. Here is a patch for 3.3.10 if 
you need to suppress them.

Some of these cache log messages should have config entries, as they generate 
clutter and hide more important issues. We remove the following as well:

* Username ACLs are not reliable here
* ACL is used but there is no HTTP request (generates very large files when a 
peer is dead)
* Failed to select source for (Fixed in 3.3.10)
* Host Header Forgery crap

J

--- HttpHeader.cc.orig 2013-11-08 11:33:47.965826408
+++ HttpHeader.cc 2013-11-08 11:34:56.248823857
@@ -620,7 +620,7 @@ HttpHeader::parse(const char *header_sta
 
     if (field_start == field_end) {
         if (field_ptr < header_end) {
-            debugs(55, DBG_IMPORTANT, "WARNING: unparseable HTTP header field near {" <<
+            debugs(55, 3, "WARNING: unparseable HTTP header field near {" <<
                    getStringPrefix(field_start, header_end) << "}");
             goto reset;
         }
@@ -629,7 +629,7 @@ HttpHeader::parse(const char *header_sta
     }
 
     if ((e = HttpHeaderEntry::parse(field_start, field_end)) == NULL) {
-        debugs(55, DBG_IMPORTANT, "WARNING: unparseable HTTP header field {" <<
+        debugs(55, 3, "WARNING: unparseable HTTP header field {" <<
                getStringPrefix(field_start, field_end) << "}");
         debugs(55, Config.onoff.relaxed_header_parser <= 0 ? 1 : 2,
                " in {" << getStringPrefix(header_start, header_end) << "}");
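
If it helps, one way to apply it (a sketch; the patch file name is made up and it
assumes you are in the Squid source directory containing HttpHeader.cc):

cd squid-3.3.10/src
patch -p0 < quiet-unparseable-headers.patch
# then rebuild and reinstall squid as usual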



 From: brian.dun...@kattenlaw.com
 To: squid-users@squid-cache.org
 Date: Tue, 12 Nov 2013 18:24:48 +
 Subject: RE: [squid-users] Re: WARNING: unparseable HTTP header field {:: }
 
 Is there any way to turn off reporting of unparseable HTTP headers for these?
 
 I get them also all day only for lijit.com. I know I can choose to block the 
 domain, was just curious if there was a way to put something in the conf that 
 will prevent these from being logged. I searched through the archives for 
 this mailing list and could not find anything definitive. Is there even any 
 value in having this feedback?
 
 2013/11/12 09:54:26 kid1| ctx: exit level 0
 2013/11/12 09:54:26 kid1| ctx: enter level 0: 
 'http://vap5dfw1.lijit.com/www/delivery/lg.php?bannerid=24091campaignid=232cids=23
 2bids=24091zoneid=183788retarget_matches=nulltid=4261995064_183788_a3f2bede5bd5486b923050d6938005c2channel_ids=,fpr=c5de34fca
 55a8e61eda787785db9a4c3loc=http%3A%2F%2Ffmsads.com%2Freq%3Fau%3D121referer=http%3A%2F%2Ffmsads.com%2Freq%3Fau%3D121cb=34826104'
 2013/11/12 09:54:26 kid1| WARNING: unparseable HTTP header field {:: }
 
 Thanks
 
 
 
 
 
 
 -Original Message-
 From: Ralf Hildebrandt [mailto:ralf.hildebra...@charite.de] 
 Sent: Tuesday, November 12, 2013 4:18 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Re: WARNING: unparseable HTTP header field {:: }
 
 * Dr.x ahmed.za...@netstream.ps:
 
 well , if this is just error for lijit.com website , i can remove
 redirecting this website to squid and let my head clear.
 
 just block them, all they do is to serve ads!
 
 -- 
 Ralf Hildebrandt Charite Universitätsmedizin Berlin
 ralf.hildebra...@charite.de Campus Benjamin Franklin
 http://www.charite.de Hindenburgdamm 30, 12203 Berlin
 Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155
 

RE: [squid-users] clarification of delay_initial_bucket_level

2012-09-27 Thread Jenny Lee



 Date: Thu, 27 Sep 2012 21:08:12 +0200
 From: e...@g.jct.ac.il
 To: t...@raynersw.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] clarification of delay_initial_bucket_level

 2012/9/27 t...@raynersw.com t...@raynersw.com:
  Hmm, I just noticed Eliezer's reply about how reloading often is a bad
  idea and one should use ext_acls instead,
 
  I'd be interested to hear why reloading is a bad idea. Squid supports HUP 
  reloading, I've been doing it for years and on my systems it takes about 
  100ms to do a reload, so even if it blocks for that amount of time it's not 
  a big deal. Unless the reload leaks memory or something, I don't think it's 
  a problem. I have an action item to move my servers to external ACLs, but 
  it's been one of those if it ain't broke type things, so I haven't done 
  it yet.
 

 I don't think Eliezer meant reloading per se as much as my question
 which was reloading every 5 minutes.
 I also reload all the time when I write new configs and sometimes I
 even end up reloading several times in one minute without my users
 feeling it as far as I can tell (or at least they don't feel it enough
 to start sending mail saying the Internet is broken).

You can be sure that they feel it. But they are not able to complain because the 
duration is short and they are not sure whether the problem is on their end or 
elsewhere. 
Why don't you just run a cron job to reconfigure squid every minute and try to 
keep browsing?
Bad things can and do happen when squid stops servicing connections while 
active sessions are going on. If your shutdown timeout is short, you are 
probably cutting off existing connections abruptly as well. Browsers and OSes 
behave weirdly. Once a user even had to reboot his computer to get back online 
after being subjected to a reconfigure.
Squid will also surely take more than a few seconds to get back online, even 
with minimal config options (maybe tcr can explain how he came up with 100ms). 
In my config, it takes 3 seconds (with shutdown_timeout 1). When you are doing 
500 requests per second, those couple of seconds mean you lose a couple thousand 
requests, and you probably cut off a couple thousand more in the middle. 
Moreover, it takes about 1.5 minutes for squid to get back to the speed it was 
doing before a reconfigure (probably because client machines are waiting for 
timeouts on failed requests). And I don't even do caching!
The moral of the story is: despite Amos's efforts to make it less burdensome on 
3.2, frequent reconfigures must be avoided.
Jenny

PS: I do a reconfigure once an hour, but my traffic is controlled.  
  

RE: [squid-users] clarification of delay_initial_bucket_level

2012-09-27 Thread Jenny Lee

  PS: I do a reconfigure once an hour, but my traffic is controlled.

 Jenny as far as I can tell from your mail you are running a restart
 (service squid3 restart or /etc/init.d/squid3 restart) and not a
 reload, reloads in my experience are very fast, they fix almost
 everything and are close to unfelt by the users, I have had streams
 running in a browser while I reloaded and they just continued with no
 problem (but that may also be effective caching on the websites part).
 Eli

Eli, sockets are being closed and reopened on a reconfigure as well (not only on 
a restart). A reconfigure is very similar to a restart in squid.

Jenny 

RE: [squid-users] clarification of delay_initial_bucket_level

2012-09-27 Thread Jenny Lee

  I don't think Eliezer meant reloading per se as much as my question
  which was reloading every 5 minutes.

 I reload very frequently as well- not on a timer but triggered by events, and 
 it can happen as often as every minute. I understand that it's not optimal 
 but I don't see it causing real-world problems.

You are probably not using squid for anything noteworthy if you think 
reconfiguring every minute causes no problems and is alright.

 My experience corroborates this. I run some very high-traffic servers and 
 reloads have never caused problems. If they did I would definitely hear about 
 it. 

And here comes the good part: You have been living a lie all these years but 
light is ahead.

You will be able to improve your system and increase your rps and throughput, 
all with one very simple thing: by not sending a reconfigure to squid every minute! 
:)

Jenny 

RE: [squid-users] kudos donations

2012-08-30 Thread Jenny Lee

Why don't you send some donations to the man: aypp2...@treenet.co.nz

He has singlehandedly attended to and fixed everyone's problems here for years and 
never once asked for anything in return. 

Jenny

 Date: Wed, 29 Aug 2012 16:28:58 -0500
 To: squid-users@squid-cache.org
 From: knap...@realtime.net
 Subject: [squid-users] kudus
 
 I've been watching this list for a while, and I'd like to just take a 
 moment to give Amos Jeffries a huge pat on the back. I've been involved 
 with the administration of a mailing list, and it is a huge job. Amos 
 obviously spends much time dealing with the list and squid, and probably 
 gets little enough thanks for it. He handles a lot of crap with consummate 
 skill. He is a huge asset to the open source community, and squid in 
 particular.
 
 So THANKS!, Amos. You get a gold star.
 
 
 

RE: [squid-users] ACL processing in Squid 3.2

2012-08-18 Thread Jenny Lee

nonhierarchical_direct off
Jenny
 Date: Sat, 18 Aug 2012 18:31:14 +0100
 From: a.f...@ntlworld.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] ACL processing in Squid 3.2
 
 I may be missing something here, but it looks like ACL processing is
 broken for at least some HTTPS requests in 3.2.
 
 Example configuration:
 
 acl useparent dstdomain domain.com
 
 cache_peer 172.25.2.70 parent 8080 0 no-query name=parent01
 connection-auth=off
 
 cache_peer_access parent01 allow useparent
 cache_peer_access parent01 deny all
 
 # Included to see if it made any difference
 always_direct deny useparent
 always_direct allow all
 
 Access over HTTP goes to the parent as expected, but HTTPS access does not:
 
 1345310649.623 644 10.0.0.1 TCP_MISS/200 8055 GET
 http://www.domain.com/ - FIRSTUP_PARENT/172.25.2.70 text/html
 1345310544.835 8536 10.0.0.1 TCP_MISS/200 3580 CONNECT
 www.domain.com:443 - HIER_DIRECT/172.25.2.34 -
 
 Also tried adding:
 cache_peer_access parent01 allow CONNECT useparent
 but it made no difference.
 
 Build options:
 Squid Cache: Version 3.2.1
 configure options: '--prefix=/usr/local/squid'
 '--infodir=/usr/local/info' '--mandir=/usr/local/man'
 '--enable-async-io' '--enable-removal-policies=heap,lru'
 '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups'
 '--enable-linux-netfilter' '--with-large-files' '--disable-snmp'
 '--disable-htcp' '--disable-ipv6' 'CFLAGS=-pipe -Wall -O2
 -fomit-frame-pointer -march=native -s' 'CXXFLAGS=-pipe -Wall -O2
 -fomit-frame-pointer -march=native -s'
 'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig'
 
 Any suggestions, or this a bug in 3.2?
 
 Andrew
 
 

RE: [squid-users] ACL processing in Squid 3.2

2012-08-18 Thread Jenny Lee

Apologies for top posting, from Squid FAQs:
Certain types of requests cannot be cached or are served faster going direct, 
and Squid is optimized to send them over direct connections by default. The 
nonhierarchical_direct off directive tells Squid to send these requests via the 
parent anyway.
I wonder if anyone can ever understand anything from that. 
An FAQ entry that specifically mentions HTTPS/CONNECT, with nonhierarchical_direct 
off as the solution, is needed, since this gets asked about once a week.
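
For what it is worth, a minimal sketch against the configuration quoted below (the
peer address, peer name and domain are taken from that config; everything else is
illustrative):

acl useparent dstdomain domain.com
cache_peer 172.25.2.70 parent 8080 0 no-query name=parent01 connection-auth=off
cache_peer_access parent01 allow useparent
cache_peer_access parent01 deny all
# CONNECT (HTTPS) is classed as non-hierarchical, so force it through the peer too
nonhierarchical_direct off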
Jenny

 nonhierarchical_direct off
 Jenny

  Date: Sat, 18 Aug 2012 18:31:14 +0100
  From: a.f...@ntlworld.com
  To: squid-users@squid-cache.org
  Subject: [squid-users] ACL processing in Squid 3.2
 
  I may be missing something here, but it looks like ACL processing is
  broken for at least some HTTPS requests in 3.2.
 
  Example configuration:
 
  acl useparent dstdomain domain.com
 
  cache_peer 172.25.2.70 parent 8080 0 no-query name=parent01
  connection-auth=off
 
  cache_peer_access parent01 allow useparent
  cache_peer_access parent01 deny all
 
  # Included to see if it made any difference
  always_direct deny useparent
  always_direct allow all
 
  Access over HTTP goes to the parent as expected, but HTTPS access does not:
 
  1345310649.623 644 10.0.0.1 TCP_MISS/200 8055 GET
  http://www.domain.com/ - FIRSTUP_PARENT/172.25.2.70 text/html
  1345310544.835 8536 10.0.0.1 TCP_MISS/200 3580 CONNECT
  www.domain.com:443 - HIER_DIRECT/172.25.2.34 -
 
  Also tried adding:
  cache_peer_access parent01 allow CONNECT useparent
  but it made no difference.
 
  Build options:
  Squid Cache: Version 3.2.1
  configure options: '--prefix=/usr/local/squid'
  '--infodir=/usr/local/info' '--mandir=/usr/local/man'
  '--enable-async-io' '--enable-removal-policies=heap,lru'
  '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups'
  '--enable-linux-netfilter' '--with-large-files' '--disable-snmp'
  '--disable-htcp' '--disable-ipv6' 'CFLAGS=-pipe -Wall -O2
  -fomit-frame-pointer -march=native -s' 'CXXFLAGS=-pipe -Wall -O2
  -fomit-frame-pointer -march=native -s'
  'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig'
 
  Any suggestions, or this a bug in 3.2?
 
  Andrew
 


RE: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread Jenny Lee

In your /etc/rc.d/init.d/squid file, or whatever script is starting squid, put:
ulimit -HSn 65536
Jenny
 From: sunyuc...@gmail.com
 Date: Thu, 16 Aug 2012 20:03:05 -0700
 To: squid-users@squid-cache.org
 Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
 
 I found that if I include
 max_filedescriptors 16384 in the config, it will actually use the 16384 fds
 
 if I don't have this line, then it will use 1024, however the document
 and source code I can find doesn't say any thing like 1024 at all,
 
 what might be the reason?
 
 On Thu, Aug 16, 2012 at 7:31 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com wrote:
  Here's what I get from mgr:info
 
  File descriptor usage for squid:
  Maximum number of file descriptors: 1024
  Largest file desc currently in use: 755
  Number of file desc currently in use: 692
  Files queued for open: 0
  Available number of file descriptors: 332
  Reserved number of file descriptors: 100
  Store Disk files open: 0
 
 
  and here's the squid -v output
 
  Squid Cache: Version 3.2.1
  configure options: '--disable-maintainer-mode'
  '--disable-dependency-tracking' '--disable-silent-rules'
  '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs'
  '--enable-removal-policies=lru,heap' '--enable-cache-digests'
  '--enable-underscores' '--enable-follow-x-forwarded-for'
  '--disable-translation' '--with-filedescriptors=65536'
  '--with-default-user=proxy' '--enable-ssl' '--enable-ltdl-convenience'
 
  How can I get squid 3.2.1 to use more than 1024 ?
 
  I've verified that system is fine, there's no per user limit either.
 
  # cat /proc/sys/fs/file-max
  199839


RE: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread Jenny Lee

So put it before that, then: 
ulimit -HSn 65536; ./squid -f squid.conf
Jenny

 From: sunyuc...@gmail.com
 Date: Fri, 17 Aug 2012 01:56:59 -0700
 Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org

 No, I just launch it with ./squid -f squid.conf , no script.

 I think this is a problem with default config , it might be
 initialized wrong in the default config.

 On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:
 
  In your /etc/rc.d/init.d/squid file, or whatever script is starting squid, 
  put:
  ulimit -HSn 65536
  Jenny
  From: sunyuc...@gmail.com
  Date: Thu, 16 Aug 2012 20:03:05 -0700
  To: squid-users@squid-cache.org
  Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
 
  I found that if I include
  max_filedescriptors 16384 in the config, it will actually use the 16384 fds
 
  if I don't have this line, then it will use 1024, however the document
  and source code I can find doesn't say any thing like 1024 at all,
 
  what might be the reason?
 
  On Thu, Aug 16, 2012 at 7:31 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com 
  wrote:
   Here's what I get from mgr:info
  
   File descriptor usage for squid:
   Maximum number of file descriptors: 1024
   Largest file desc currently in use: 755
   Number of file desc currently in use: 692
   Files queued for open: 0
   Available number of file descriptors: 332
   Reserved number of file descriptors: 100
   Store Disk files open: 0
  
  
   and here's the squid -v output
  
   Squid Cache: Version 3.2.1
   configure options: '--disable-maintainer-mode'
   '--disable-dependency-tracking' '--disable-silent-rules'
   '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs'
   '--enable-removal-policies=lru,heap' '--enable-cache-digests'
   '--enable-underscores' '--enable-follow-x-forwarded-for'
   '--disable-translation' '--with-filedescriptors=65536'
   '--with-default-user=proxy' '--enable-ssl' '--enable-ltdl-convenience'
  
   How can I get squid 3.2.1 to use more than 1024 ?
  
   I've verified that system is fine, there's no per user limit either.
  
   # cat /proc/sys/fs/file-max
   199839 


RE: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread Jenny Lee

You said that if you use max_filedescriptors 16384, squid uses 16K fds. If you do 
not use it, squid uses your shell's limit, which can be increased with the 
command I gave you.
Where is the problem and what exactly are you trying to solve? Put 
max_filedescriptors 64K in your config and be done with it.
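
A sketch of the squid.conf line in question (the exact value is up to you):

# ask Squid itself for 64K descriptors, independent of the shell limit it inherits
max_filedescriptors 65536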
Jenny


 From: sunyuc...@gmail.com
 Date: Fri, 17 Aug 2012 02:13:48 -0700
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

 told you there's no limit set at all anywhere, set it again won't
 solve it. it's squid that don't want to use more than 1024 unless told
 so explicitly in the config.

 On Fri, Aug 17, 2012 at 2:04 AM, Jenny Lee bodycar...@live.com wrote:
 
  So put it before that, then:
  ulimit -HSn 65536; ./squid -f squid.conf
  Jenny
  
  From: sunyuc...@gmail.com
  Date: Fri, 17 Aug 2012 01:56:59 -0700
  Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
  To: bodycar...@live.com
  CC: squid-users@squid-cache.org
 
  No, I just launch it with ./squid -f squid.conf , no script.
 
  I think this is a problem with default config , it might be
  initialized wrong in the default config.
 
  On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:
  
   In your /etc/rc.d/init.d/squid file, or whatever script is starting 
   squid, put:
   ulimit -HSn 65536
   Jenny
   From: sunyuc...@gmail.com
   Date: Thu, 16 Aug 2012 20:03:05 -0700
   To: squid-users@squid-cache.org
   Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
  
   I found that if I include
   max_filedescriptors 16384 in the config, it will actually use the 16384 
   fds
  
   if I don't have this line, then it will use 1024, however the document
   and source code I can find doesn't say any thing like 1024 at all,
  
   what might be the reason?
  
   On Thu, Aug 16, 2012 at 7:31 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com 
   wrote:
Here's what I get from mgr:info
   
File descriptor usage for squid:
Maximum number of file descriptors: 1024
Largest file desc currently in use: 755
Number of file desc currently in use: 692
Files queued for open: 0
Available number of file descriptors: 332
Reserved number of file descriptors: 100
Store Disk files open: 0
   
   
and here's the squid -v output
   
Squid Cache: Version 3.2.1
configure options: '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs'
'--enable-removal-policies=lru,heap' '--enable-cache-digests'
'--enable-underscores' '--enable-follow-x-forwarded-for'
'--disable-translation' '--with-filedescriptors=65536'
'--with-default-user=proxy' '--enable-ssl' '--enable-ltdl-convenience'
   
How can I get squid 3.2.1 to use more than 1024 ?
   
I've verified that system is fine, there's no per user limit either.
   
# cat /proc/sys/fs/file-max
199839  


RE: [squid-users] Squid memory usage

2012-08-03 Thread Jenny Lee


 Date: Fri, 3 Aug 2012 14:16:29 +0200
 From: hugo.dep...@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid memory usage
 
 Dear community,
 
 I am running squid3 on Linux Debian squeeze.(3.1.6).
 
 I encounter a suddenly a high memory usage on my virtual machine don't
 really know why.
 Looking at the cacti memory graph is showing a memory jump from 1.5 Gb
 to 4GB and then ther server started to swap.
 
 For information the virtual machine has 4Gb of RAM.
 
 Here is the settings of squid.conf :
 
 cache_dir ufs /var/spool/squid3 100 16 256
 cache_mem 100 MB
 
 hierarchy_stoplist cgi-bin ?
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern . 0 20% 4320
 
 
 my squid3 process is using: 81% of my RAM. So arround 3,2Gb of memory.
 
 proxy 25889 0.6 81.1 3937744 3299616 ? S Aug02 9:34
 (squid) -YC -f /etc/squid3/squid.conf
 
 I am currently having arround 50 users using it.
 
 
 I did have a look at the FAQ
 (http://wiki.squid-cache.org/SquidFaq/SquidMemory#how-much-ram), but I
 didn't find any tips for my situation in it.
 
 
 Have you got any idea ? How can I troubleshoot this ?
 
 Thanks !
Upgrade to a newer version. I had the same issues with a memory leak on icons. 
It was fixed 4-5 months ago.
Jenny 

RE: [squid-users] Popular log analysis tools? SARG?

2012-03-24 Thread Jenny Lee

 Date: Sat, 24 Mar 2012 12:07:34 -0700
 From: nwv...@nottheoilrig.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] Popular log analysis tools? SARG?
 
 Which are the most popular log analysis tools? SARG?
 
 The Squid website features a comprehensive list of log analysis tools 
 [1]. Which are the most popular?
 
 [1] http://www.squid-cache.org/Misc/log-analysis.html
 
 
None of those tools will produce anything good-looking. 
 
Any tool for Squid, just like everything else open source, looks... well, like 
crap.
 
I use Flowerfire Sawmill. http://sawmill.net
 
It produces awesome graphs with excellent features. I use my custom squid log 
format with it, though.
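
As an aside, a custom format is just a logformat line in squid.conf; the one below
is purely illustrative and not the format referred to above:

# hypothetical compact format fed to the log analyzer
logformat compact %ts.%03tu %>a %Ss/%03>Hs %<st %rm %ru
access_log /var/log/squid/access.log compact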
 
Jenny
 
  

RE: [squid-users] requests per second

2012-03-11 Thread Jenny Lee

 Dears ,
 
 how we can achieve 5000 RPS through squid 
 
 Thanks in advance
 Liley
 
 
In your dreams.
 
Jenny 

RE: [squid-users] Are comments in external files allowed

2012-01-18 Thread Jenny Lee


 To: squid-users@squid-cache.org
 Date: Thu, 19 Jan 2012 10:33:31 +1300
 From: squ...@treenet.co.nz
 Subject: Re: [squid-users] Are comments in external files allowed
 
 On 18.01.2012 14:45, James Robertson wrote:
  Excuse the basic question but is adding comments to external files
  allowed in squid. For example...
 
  acl blockedsites dstdomain /etc/squid3/blockedsites.txt
 
  cat /etc/squid3/blockedsites.txt
  # block facebook
  .facebook.com
 
  I googled and found examples where comments were used but nothing
  conclusive to say it's ok. I have tested and it appears to work fine
  but want to be sure.
 
  Thanks
 
 May depend on your squid version. The 3.x series allow '#' comments in 
 the sub-files.
 
 I don't think the original 2.5 parser did though, there are quite a few 
 directives in the main squid.conf which use that older parser and break 
 with trailing # comments.
 
 Amos
 



I think it did. I remember having comments in the referenced files for as long 
as I have been using squid. (Proxy Auth files or dstdomain files I believe)

Jenny 

RE: [squid-users] Squid Hardware to Handle 150Mbps Peaks

2012-01-18 Thread Jenny Lee

 Date: Tue, 17 Jan 2012 10:02:14 -0800
 From: jth...@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid Hardware to Handle 150Mbps Peaks
 
 We currently have a commercial proxy solution in place but since we increased
 our bandwidth to 150meg connection, the proxy is slowing things down
 considerably as it's spec'd for 10meg connections. The commercial vendor
 proposes a new appliance that is 5 times what we can afford to spend. We're
 considering Squid as an option, but it needs to be able to support 50meg
 sustained throughput with spikes to 150meg. 
 
 We have about 200 users and only need the proxy to support ICAP integration
 with our DLP solution. The Squid proxy should provide visibility into our
 SSL connections for the DLP solution to scan and also provide blocking of
 web/FTP connections containing sensitive data. Caching and web filtering
 are secondary needs. 
 
 I expect Squid would be able to support our needs, but also expect that it
 won't run on light hardware (which is the reason behind our current need in
 the first place). Are there recommended hardware specs for such a
 configuration? 
 
 Any suggestions are appreciated.
 

We do about 500 Mbps constant at 600-700 reqs/sec on a Q6600. We do not do any 
caching. Most of these are SSL connections. This should give you an indication.

Squid should be able to handle what you want easily. Just throw some spare 
hardware at it and start testing.

Jenny 

RE: [squid-users] Squid only forwards GET requests to cache_peer

2012-01-09 Thread Jenny Lee


 Date: Mon, 9 Jan 2012 15:53:22 +1100
 From: leigh.wedd...@bigpond.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid only forwards GET requests to cache_peer
 
 Hi,
 
 I have a problem with squid only forwarding HTTP GET requests to cache_peers. 
 My setup is that the corporate network has no access to the Internet, access 
 is only via corporate wide http proxies. I also have another separate network 
 (NET2, which does not have Internet access), which has only restricted access 
 to the corporate network via a firewall. I am running a squid proxy in NET2 
 which should connect direct to various corporate WWW resources, and should 
 connect to the corporate proxies for any WWW resources on the Internet. This 
 all works fine for HTTP GET requests. However for HTTP HEAD requests (eg. 
 needed for wget -N), it does not work for WWW resources on the Internet; 
 Squid always tries to handle HEAD requests directly, it does NOT forward them 
 to the defined cache_peers. I have 8 cache_peers defined as follows:
 
 cache_peer 10.97.216.133 parent 8080 0 no-query round-robin
 cache_peer 10.97.216.136 parent 8080 0 no-query round-robin
 cache_peer 10.97.216.139 parent 8080 0 no-query round-robin
 cache_peer 10.97.216.142 parent 8080 0 no-query round-robin
 cache_peer 10.97.217.133 parent 8080 0 no-query round-robin
 cache_peer 10.97.217.136 parent 8080 0 no-query round-robin
 cache_peer 10.97.217.139 parent 8080 0 no-query round-robin
 cache_peer 10.97.217.142 parent 8080 0 no-query round-robin
 
 Can anyone shed any light on what might be the problem, and what I can do to 
 fix it?
 
 I am running squid 2.7.STABLE5 on SUSE Linux Enterprise Server 11 (x86_64) 
 PL1.
 
 Thanks,
 Leigh.

 
nonhierarchical_direct off
 
should fix it for you.
 
Jenny 

RE: [squid-users] Squid 3.2.0.14 beta is available

2011-12-24 Thread Jenny Lee

 Date: Sat, 24 Dec 2011 13:16:45 +1300
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.2.0.14 beta is available
 
 On 24/12/2011 12:15 p.m., Jenny Lee wrote:
  Date: Sat, 24 Dec 2011 10:38:58 +1300
  From: squ...@treenet.co.nz
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] Squid 3.2.0.14 beta is available
 
  On 24/12/2011 9:25 a.m., Saleh Madi wrote:
  Hi Amos,
 
  After I set the memory_cache_shared off in the config file of the squid
  , after I disable the memory_cache_shared , squid not crash, but we need
  shared memory caching, any fix for this problem.
  Okay, did you find any other details about it? if you did please check
  bugzilla to see if there is any known bugs mentioning those details.
 
 
  I believe it is this bug: http://bugs.squid-cache.org/show_bug.cgi?id=3449
 
  or some of the related bugs mentioned there.
 
  Setting cache_mem 0 fixes it as per Alex's suggestion.
 
  Which new beta feature are we talking about? 3.2.0.13 did not have this 
  issue. Is memory_cache_shared a new feature?
 
 Yes. One of a few required for workers SMP caching.
 
 
  Since cache_mem 0 fixed this, i did not pursue it further. But as OP 
  mentioned above, he needs this shared memory feature.
 
  I honestly think development should be focused on fixing existing issues at 
  this stage instead of introducing additional ones at every release. Of 
  course you might disagree as developers, however, this is quite frustrating 
  for us as users.
 
 
 We are. The bunch of SMP caching features in 3.2.0.13 was the last 
 feature additions for 3.2. Now we have only to fix the bugs.
 
3.2.0.14 with a couple of custom patches has been running for 9 days without a 
single assert or segfault, doing a steady 200 reqs/sec. It took me almost a year 
to get to this point.
 
I have a very simple configuration: no caching, no workers, no IPv6, no SSL, no 
ICAP; almost every compile-time option that could be disabled has been disabled.
 
Well done and merry Xmas to the squid team, all your hard work is appreciated.
 
Jenny 

RE: [squid-users] Squid 3.2.0.14 beta is available

2011-12-23 Thread Jenny Lee

 Date: Sat, 24 Dec 2011 10:38:58 +1300
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.2.0.14 beta is available
 
 On 24/12/2011 9:25 a.m., Saleh Madi wrote:
  Hi Amos,
 
  After I set the memory_cache_shared off in the config file of the squid
  , after I disable the memory_cache_shared , squid not crash, but we need
  shared memory caching, any fix for this problem.
 
 Okay, did you find any other details about it? if you did please check 
 bugzilla to see if there is any known bugs mentioning those details.
 
 
I believe it is this bug: http://bugs.squid-cache.org/show_bug.cgi?id=3449
 
or some of the related bugs mentioned there.
 
Setting cache_mem 0 fixes it as per Alex's suggestion.
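
As a sketch, the workaround amounts to these squid.conf lines (per the messages
above):

# disable SMP shared-memory caching and the memory cache entirely
memory_cache_shared off
cache_mem 0 MB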
 
Which new beta feature are we talking about? 3.2.0.13 did not have this issue. 
Is memory_cache_shared a new feature?
 
Since cache_mem 0 fixed this, I did not pursue it further. But as the OP 
mentioned above, he needs this shared memory feature.
 
I honestly think development should be focused on fixing existing issues at 
this stage instead of introducing additional ones at every release. Of course 
you might disagree as developers, however, this is quite frustrating for us as 
users.
 
Jenny
 
  

RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-21 Thread Jenny Lee

 From: hen...@henriknordstrom.net
 To: tdo...@associatedbrands.com
 CC: squid-users@squid-cache.org
 Date: Wed, 21 Dec 2011 19:36:51 +0100
 Subject: RE: [squid-users] After reloading squid3, takes about 2 minutes to 
 serve pages?
 
 tis 2011-12-20 klockan 10:48 -0500 skrev Terry Dobbs:
 
  I am using Berkley DB for the first time, perhaps that's why it takes
  longer? Although, I don't really see what Berkley DB is doing for me as
  I am still using flat files for my domains/urls? Guess I should take
  this to the squidGuard list!
 
 Please generate the DB files offline after updating the blacklist,
 then issue a squid -k rotate to have Squid restart the helpers.
 
 squidGuard starts very quick if the databases have been properly
 populated already, but will take a very long time to start up if not.
 
 Regards
 Henrik
 
 
Not related to squidGuard, but I thought I would chime in.
 
There have been problems with squid's reconfigures from 3.2.0.9 onwards. 
 
It takes a minute and a half to reach full load when a squid doing 100 req/sec 
is sent a reconfigure. Squid barely serves anything during this time (but it is 
functional). All my timeouts are low. It was not like this on 3.2.0.1.
 
Jenny 

RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-21 Thread Jenny Lee

  Subject: RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?
  From: hen...@henriknordstrom.net
  To: bodycar...@live.com
  CC: squid-users@squid-cache.org
  Date: Wed, 21 Dec 2011 19:56:37 +0100
 
  On Wed 2011-12-21 at 18:44, Jenny Lee wrote:
   It takes me a minute and a half to reach full load when a squid doing 100 req/sec
   is sent a reconfigure. Squid barely serves anything during this time (but it is
   functional). All my timeouts are low. It was not like this on 3.2.0.1.
 
  How big is your on-disk cache?
  Is there any swap activity on the server?
 
  Regards
  Henrik
 
Hi Henrik,
 
I don't have any caching. All caching modules are disabled during compile. 
 
There are also many other issues: for example, after 200 reconfigures, squid CPU 
usage keeps increasing without ever going down, even on an idle squid. A restart 
is required to normalize it.
 
I stopped using reconfigures and had to use time-based ACLs to accomplish a 
similar thing. That is why I requested this feature: 
http://bugs.squid-cache.org/show_bug.cgi?id=3300
 
Jenny 

RE: [squid-users] block TOR

2011-12-03 Thread Jenny Lee

I don't understand how you are managing to have anything to do with Tor in the 
first place.

Tor speaks SOCKS5. You need something like Polipo to speak HTTP on the client 
side and SOCKS on the server side.

I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via my 
Squid and I could not succeed. I have even tried Amos' custom squid with SOCKS 
support and still failed.

Can someone explain to me how you are connecting to Tor with squid (and 
consequently have a need to block it)?

Jenny


 Date: Sat, 3 Dec 2011 16:37:05 -0500
 Subject: Re: [squid-users] block TOR
 From: charlie@gmail.com
 To: leolis...@solutti.com.br
 CC: bodycar...@live.com; squid-users@squid-cache.org
 
 Sorry for reopen an old post, but a few days ago i tried with this
 solution, and . like magic, all traffic to the Tor net it's
 blocked, just typing this:
 acl tor dst /etc/squid3/tor
 http_access deny tor
 where /etc/squid3/tor it's the file that I download from the page you
 people recommend me !!!
 
 Thanks a lot, this is something that are searching a lot of admin that
 I know, you should put somewhere where are easily to find !!! Thanks
 again !!
 
 Sorry for my english
 
 On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
 charlie@gmail.com wrote:
  Thanks a lot, I gonna make that script to refresh the list. You´ve
  been lot of helpful.
 
  On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
  leolis...@solutti.com.br wrote:
 
  i dont know if this is valid for TOR ... but at least Ultrasurf, which i
  have analized a bit further, encapsulates traffic over squid always using
  CONNECT method and connecting to an IP address. It's basically different
  from normal HTTPS traffic, which also uses CONNECT method but almost always
  (i have found 2-3 exceptions in some years) connects to a FQDN.
 
  So, at least with Ultrasurf, i could handle it over squid simply blocking
  CONNECT connections which tries to connect to an IP address instead of a
  FQDN.
 
  Of course, Ultrasurf (and i suppose TOR) tries to encapsulate traffic to
  the browser-configured proxy as last resort. If it finds an NAT-opened
  network, it will always tries to go direct instead of through the proxy. 
  So,
  its mandatory that you do NOT have a NAT-opened network, specially on ports
  TCP/80 and TCP/443. If you have those ports opened with your NAT rules, 
  than
  i really think you'll never get rid of those services, like TOR and
  Ultrasurf.
 
 
 
 
  Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
 
  So, like I see, we (the admin) have no way to block it !!
 
  On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com wrote:
 
  Date: Thu, 29 Sep 2011 11:24:55 -0400
  From: charlie@gmail.com
  To: squid-users@squid-cache.org
  Subject: [squid-users] block TOR
 
  There is any way to block TOR with my Squid ?
 
  How do you get it working with tor in the first place?
 
  I really tried for one of our users. Even used Amos's custom squid with
  SOCKS option but no go.
 
  Jenny
 
 
  --
 
 
  Atenciosamente / Sincerily,
  Leonardo Rodrigues
  Solutti Tecnologia
  http://www.solutti.com.br
 
  Minha armadilha de SPAM, NÃO mandem email
  gertru...@solutti.com.br
  My SPAMTRAP, do not email it
 
 
 
 
   

RE: AW: [squid-users] block TOR

2011-12-03 Thread Jenny Lee

Judging from the dst acl, the Ultrasurf traffic and everything else in this thread, 
this is about outgoing client traffic to Tor via squid.

Why would anyone want to block Tor traffic to his/her webserver (if this is not 
an ecommerce site)? If it were an ecommerce site, they would already know what to 
do and would not ask this question here. Lists of Tor exit nodes are published 
daily and the firewall is the place to drop them, as sketched below.
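
Purely as a sketch of that approach (the list file, set name and chain are made up):

# drop known Tor exit addresses before they reach the web server
ipset create tor_exits hash:ip -exist
for ip in $(cat /etc/tor-exit-list.txt); do ipset add tor_exits $ip -exist; done
iptables -I INPUT -m set --match-set tor_exits src -j DROP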

I still want to hear what OP would say.

Jenny




 From: amuel...@gmx.de
 To: squid-users@squid-cache.org
 Date: Sun, 4 Dec 2011 00:39:01 +0100
 Subject: AW: [squid-users] block TOR
 
 The question is with traffic of tor should be blocked. Outgoing client
 traffic to the tor network or incoming httpd requests from tor exit nodes ?
 
 Andreas
 
 -Ursprüngliche Nachricht-
 Von: Jenny Lee [mailto:bodycar...@live.com] 
 Gesendet: Sonntag, 4. Dezember 2011 00:09
 An: charlie@gmail.com; leolis...@solutti.com.br
 Cc: squid-users@squid-cache.org
 Betreff: RE: [squid-users] block TOR
 
 
 I dont understand how you are managing to have anything to do with Tor to
 start with.
 
 Tor is speaking SOCKS5. You need Polipo to speak HTTP on the client side and
 SOCKS on the server side.
 
 I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via
 my Squid and I could not succeed. I have even tried Amos' custom squid with
 SOCKS support and still failed.
 
 Can someone explain to me as to how you are connecting to Tor with squid
 (and consequently having a need to block it)?
 
 Jenny
 
 
  Date: Sat, 3 Dec 2011 16:37:05 -0500
  Subject: Re: [squid-users] block TOR
  From: charlie@gmail.com
  To: leolis...@solutti.com.br
  CC: bodycar...@live.com; squid-users@squid-cache.org
  
  Sorry for reopen an old post, but a few days ago i tried with this 
  solution, and . like magic, all traffic to the Tor net it's 
  blocked, just typing this:
  acl tor dst /etc/squid3/tor
  http_access deny tor
  where /etc/squid3/tor it's the file that I download from the page you 
  people recommend me !!!
  
  Thanks a lot, this is something that are searching a lot of admin that 
  I know, you should put somewhere where are easily to find !!! Thanks 
  again !!
  
  Sorry for my english
  
  On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo 
  charlie@gmail.com wrote:
   Thanks a lot, I gonna make that script to refresh the list. You´ve 
   been lot of helpful.
  
   On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues 
   leolis...@solutti.com.br wrote:
  
   i dont know if this is valid for TOR ... but at least Ultrasurf, 
   which i have analized a bit further, encapsulates traffic over 
   squid always using CONNECT method and connecting to an IP address. 
   It's basically different from normal HTTPS traffic, which also uses 
   CONNECT method but almost always (i have found 2-3 exceptions in some
 years) connects to a FQDN.
  
   So, at least with Ultrasurf, i could handle it over squid simply 
   blocking CONNECT connections which tries to connect to an IP 
   address instead of a FQDN.
  
   Of course, Ultrasurf (and i suppose TOR) tries to encapsulate 
   traffic to the browser-configured proxy as last resort. If it finds 
   an NAT-opened network, it will always tries to go direct instead of 
   through the proxy. So, its mandatory that you do NOT have a 
   NAT-opened network, specially on ports
   TCP/80 and TCP/443. If you have those ports opened with your NAT 
   rules, than i really think you'll never get rid of those services, 
   like TOR and Ultrasurf.
  
  
  
  
   Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
  
   So, like I see, we (the admin) have no way to block it !!
  
   On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com wrote:
  
   Date: Thu, 29 Sep 2011 11:24:55 -0400
   From: charlie@gmail.com
   To: squid-users@squid-cache.org
   Subject: [squid-users] block TOR
  
   There is any way to block TOR with my Squid ?
  
   How do you get it working with tor in the first place?
  
   I really tried for one of our users. Even used Amos's custom 
   squid with SOCKS option but no go.
  
   Jenny
  
  
   --
  
  
   Atenciosamente / Sincerily,
   Leonardo Rodrigues
   Solutti Tecnologia
   http://www.solutti.com.br
  
   Minha armadilha de SPAM, NÃO mandem email gertru...@solutti.com.br 
   My SPAMTRAP, do not email it
  
  
  
  
   
 
 

RE: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread Jenny Lee

 K. first problem:
 # host download.windowsupdate.com
 ...
 download.windowsupdate.com.c.footprint.net has address 204.160.124.126
 download.windowsupdate.com.c.footprint.net has address 8.27.83.126
 download.windowsupdate.com.c.footprint.net has address 8.254.3.254
 
 
 Client is connecting to server 4.26.235.254 port 80. Which is clearly 
 not download.windowsupdate.com according to the official DNS entries I 
 can see.

Yes, welcome to the host header forgery mess. I don't know who benefited from 
this but a lot of people got bitten by it.

I mentioned this first day http://bugs.squid-cache.org/show_bug.cgi?id=3325

Anyone doing ANYCAST will be screwed (and a whole lotta people do that).

p4$ host download.windowsupdate.com
mscom-wui-any.vo.msecnd.net has address 70.37.129.251
mscom-wui-any.vo.msecnd.net has address 70.37.129.244

p12$ host download.windowsupdate.com
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.42
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.8
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.24
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.26
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.41

Jenny 

RE: [squid-users] Commercial Squid tweak speeds things up significantly!

2011-11-28 Thread Jenny Lee


 Date: Tue, 29 Nov 2011 00:59:29 +1300
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Commercial Squid tweak speeds things up 
 significantly!
 
 On 26/11/2011 8:02 p.m., - Mikael - wrote:
  Could you name this product and point at some documentation it has about
  this process?
  Its called Untangle. Its a Debian based distro with Squid based web
  caching app.
  FAQ about caching app with some tech details is available at:
  wiki.untangle.com/index.php/Web_Cache_FAQs
 
 (sorry for the delay, I had a bit of trouble reaching that wiki domain 
 for some reason).
 
 They are using a standard cache_peer parent relationship to fetch MISS 
 traffic through a second filtering proxy of their own design. There is 
 no bypass of Squid happening.
 
 Pity. :( I was looking forwad to something fancy and interesting.
 
 Amos


That wiki is full of false information in this context. I would think twice 
before trusting my setup to such a tool, let alone paying for it.

Jenny 

RE: [squid-users] Squid box dropping connections

2011-11-17 Thread Jenny Lee

 I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy
 (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most
 reliable setup I ever made running Squid. My problem is that I am having
 serious connections troubles when running squid over 155000 conntrack
 connections.
 
 From my clients I start losing packets to router when the
 connections go over 155000. My kernel is prepared to run over 260k
 connections. 
...
 $SYS net.ipv4.netfilter.ip_conntrack_max=262144
 
 
Just because you have conntrack max at 260K does not mean that you can handle 
260K connections.
 
You will need to increase the hashsize as well:
 
echo 262144 > /sys/module/ip_conntrack/parameters/hashsize
 
I would be checking the kernel logs for conntrack overflows and the cache log for 
commBind errors. You might need to increase the ephemeral port range to 64K 
(don't know if this would apply to tproxy though).
 
Jenny
 
 
PS: I am not responsible if this blows up your datacenter. It works for me when 
I am doing 500-600 reqs/sec with CONNECTs on a forward proxy. 
  

RE: RES: [squid-users] Squid box dropping connections

2011-11-17 Thread Jenny Lee



 From: listas.n...@cnett.com.br
 To: bodycar...@live.com; squid-users@squid-cache.org
 Date: Thu, 17 Nov 2011 15:55:20 -0300
 Subject: RES: [squid-users] Squid box dropping connections

 Hello Jenny,

 Thanks for your answer. Sorry I haven't wrote but my hashsize is already in
 the same value as conntrack_max. I have some out of memory in dmesg:

 Nov 17 15:43:13 02 kernel: Out of socket memory
 
 
Well, there you go. Here is your problem. You will need to decrease your 
hashsize. I suggest you experiment with conntrack max, hashsize and buckets and 
watch for errors like these.
 
There are a couple of good docs out there explaining kernel memory use with 
conntrack.
 
 

 And in cache.log I was not able to find any CommBind. I am reading about
 this port ranges (ephemeral). I think my squid is using too many sockets:

 sockets: used 16662
 TCP: inuse 28433 orphan 12185 tw 2191 alloc 28787 mem 18786
 UDP: inuse 8 mem 0
 RAW: inuse 1
 FRAG: inuse 0 memory 0

 And it has about 16k files open right now. I will try to find a way to make
 more ports available. Thanks!
 
You can check the available port range with: 
cat /proc/sys/net/ipv4/ip_local_port_range

And increase it with:
echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
 
 
This is for RHEL6, I don't recall if it is the same for RHEL5.
 
Here is a small perl script to log these for post-mortem review. Put it in cron 
and run it every minute as root. Then you can review the log later.
 
Your orphans don't look good to me. However, you have nolocalbind and you are 
using tproxy.
 
I am neither a linux, nor perl, nor tproxy, nor tcp expert, just someone trying 
to solve her own problems. So approach all of this with caution; I take no 
responsibility.
 
Good luck!
 
Jenny
 
 
 
#!/usr/bin/perl

$ct = `cat /proc/sys/net/netfilter/nf_conntrack_count`;
chomp $ct;
@ss = `ss -s`;

# pull the totals out of the "TCP:" summary line of `ss -s`
foreach (@ss) {
    if (/TCP:\s+(\d+)\s+\(estab\s+(\d+),.+orphaned\s+(\d+),.+timewait\s+(\d+).+ports\s+(\d+)/) {
        $tcp = $1; $est = $2; $orp = $3; $tw = $4; $ports = $5;
    }
}

$file = "/var/log/tcp.log";
$date = localtime();

# append one line per run so the history survives for post-mortem review
open(OUT, ">>$file");
print OUT "$date: CT:$ct TCP:$tcp EST:$est ORP:$orp TW:$tw PORTS:$ports\n";
close OUT;

RE: [squid-users] Log file roll over Issues

2011-11-08 Thread Jenny Lee

 Hi,
 
 We're having issues with log file roll over in squid - when squid is under 
 heavy load and the log files are very big, triggering a log file roll over 
 (squid -k rotate) makes squid unresponsive, and has to be killed manually 
 with a kill -9. 

You would be better off moving the log files aside, sending squid a reconfigure 
and working on the log files later so that you do not block squid.

That is what I do for access.log:

mv /squid/logs/access.log /squid/logs/access.log.bak
/squid/squid -k reconfigure
gzip /squid/logs/access.log.bak 

Jenny 

RE: [squid-users] Is there any way to configure Squid to use local /etc/hosts in name resolution?

2011-10-26 Thread Jenny Lee




 Date: Wed, 26 Oct 2011 17:28:21 -0700
 From: dnw...@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] Is there any way to configure Squid to use local 
 /etc/hosts in name resolution?
 
 Hi there,
 
 I'm using Squid 3.1 as part of a proxy chain. I'm trying to make
 Squid use the local /etc/hosts file for name resolution before
 forwarding the request to the next proxy in the chain, but I've been
 unable to make it work, even by explicitly using the hosts_file
 directive. I'd be really grateful if anyone could help!
 
 Here's an example:
 
 I'll access a website normally via the proxy, with no weirdness in /etc/hosts
 
  cat /etc/hosts
 127.0.0.1 localhost.localdomain localhost
  echo $http_proxy
 http://localhost:3128
  curl http://yahoo.com
 The document has moved A HREF=http://www.yahoo.com/;here/A.P
 !-- w33.fp.sk1.yahoo.com uncompressed/chunked Wed Oct 26 17:12:17
 PDT 2011 --
 
 
 Now I'll change /etc/hosts to point yahoo.com to google.com. Notice
 that the proxy doesn't respect this: it still goes to yahoo.com
 rather than google.com.

 
DNS responses are cached both by squid and by the local resolver. Are you 
reloading squid at this point? My 3.2 works fine with /etc/hosts modifications.
 
I am assuming the order is alright in /etc/resolv.conf (judging from the tests 
below, I would say it is).
 
Jenny 

RE: [squid-users] Is there any way to configure Squid to use local /etc/hosts in name resolution?

2011-10-26 Thread Jenny Lee


 Date: Wed, 26 Oct 2011 18:30:37 -0700
 From: dnw...@gmail.com
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Is there any way to configure Squid to use local 
 /etc/hosts in name resolution?

 Hi Jenny,

 Thanks very much for replying to my question. I'm a bit confused --
 are you saying that you don't see my problem on your Squid 3.2
 installation?

 My /etc/resolv.conf just reads

 search XXX
 nameserver XXX.XXX.XXX.XXX


What about the hosts line in /etc/nsswitch.conf? Does it read: files dns?
 
DNS is a crucial part of squid and I doubt squid itself is broken in this 
respect.
 
What is probably happening (since you tried a restart) is that you are 
retrieving the URL from your upstream parent. Try changing /etc/hosts on your 
parents (and restart them) and see if that resolves it. Or disable the parents 
and retest.
 
Jenny 

RE: [squid-users] empty acl

2011-10-25 Thread Jenny Lee

That is because the file is not there as squid says.

Change 'ad_block.txt' to 'ad.block.txt' in your script and all will be fine.

Jenny


 From: zongosa...@gmail.com
 To: squid-users@squid-cache.org
 Date: Tue, 25 Oct 2011 21:11:50 +0100
 Subject: RE: [squid-users] empty acl
 
 Amos, 
 
 Thanks for your reply. 
 I have deleted the ad_block.txt and downloaded it again
 I have chown the file to squid user and chmod 777 that file to make sure that 
 there is no permissions issue. I have done the same thing for temp_ad_ in 
 /temp directory.
 Still, I get the same error message as below which I do not get on Linux for 
 some reasons. 
 I believe the error occurs when the script asked squid to reconfigure squid 
 -k reconfigure as you rightfully mentioned below. 
 All the access are correct. So that would leave me with the other option you 
 talked about in your reply which is file is empty when script runs squid 
 -k reconfigure. There I have to admit I am lost. Did I over look something 
 or may be the syntax of the acl below is not working in FreeBSD ? 
 
 Kind Regards, 
 
 a warning: empty ACL: acl ads 
  dstdom_regex /usr/local/etc/squid/ad.block.txt everytime I run the 
  script that enables the refresh of the ad.block.txt file
 
 #!/bin/bash
 ## get new ad server list
 /usr/local/bin/wget -O /tmp/temp_ad_file \
 http://pgl.yoyo.org/adservers/serverlist.php?hostformat=squid-dstdom-regex;showintro=0
 
 ## clean html headers out of list 
 cat /tmp/temp_ad_file | grep "(^|" > /usr/local/etc/squid/ad_block.txt
 
 ## refresh squid
 /usr/local/sbin/squid -k reconfigure
 
 ## rm temp file
 rm -rf /tmp/tmp_ad_file
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: 25 October 2011 02:46
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] empty acl
 
 On Mon, 24 Oct 2011 23:48:28 +0100, zongo saiba wrote:
  Greetings to all ,
 
  Just a quick email as I get a warning: empty ACL: acl ads 
  dstdom_regex /usr/local/etc/squid/ad.block.txt everytime I run the 
  script that enables the refresh of the ad.block.txt file. I was using 
  linux (Ubuntu server
  11.10) and never got that. I am now on freebsd 8.2; this is when the 
  error above started. I modified the script and there is no issue 
  there.
 
  I can't figure out for the life of me why squid 3.1 is giving that 
  error message. Any help is much welcome.
 
 The file is empty at the time of Squid reconfigure/startup or the Squid user 
 account does not have read access to load it.
 
 Amos
 
 

RE: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-13 Thread Jenny Lee

 Date: Thu, 13 Oct 2011 10:59:09 +0200
 From: leonardodiserpierodavi...@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Recurrent crashes and warnings: Your cache is 
 running out of filedescriptors
 
 On Wed, Oct 12, 2011 at 3:09 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 
  FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
 
  So what is taking up all that space?
  2GB+ objects in the cache screwing with the actual size calculation?
  logs?
  swap.state too big?
  core dumps?
  other applications?
 
 What's puzzling is that there appears to be plenty of free space:
 
 squid:/var/cache# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/sda1 65G 41G 22G 66% /
 tmpfs 1.7G 0 1.7G 0% /lib/init/rw
 udev 10M 652K 9.4M 7% /dev
 tmpfs 1.7G 0 1.7G 0% /dev/shm
 
 Is it possible that the disk runs out of free space, and df just gives
 me the wrong output?
 
Perhaps you are running out of inodes?
 
df -i should give you what you are looking for.
 
Jenny 

RE: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-13 Thread Jenny Lee

  Perhaps you are running out of inodes?
 
  df -i should give you what you are looking for.


 Well done. df reports indeed that I am out of inodes (100% used).
 I've seen that a Sarg daily report contains about 170'000 files. I am
 starting tar.gzipping them.

 Thank you very much Jenny.


 Leonardo
 

Glad this is solved. Actually you could increase the inode maximum (I think it 
was double/triple of the /proc/sys/fs/file-max setting).
 
However, 170,000 files in one directory on a mechanical drive will make things 
awfully slow.
 
Also, ext4 is preferable since deletes are done in the background. Our tests on 
an SSD with ext3 took 9 minutes to delete 1 million files; it was about 7 seconds 
on ext4.
 
Whenever we need to deal with a high number of files (sometimes to the tune of 
100 million), we move them to an SSD with ext4 and perform the operations there. 
And yes, that moving part... is also very painful unless the files were already 
tarred :)
 
Let me give you an example. Processing 1 million files in a single directory 
(read, write, split into directories, archive):
 
HDD: 6 days
SSD: 4 hours
 
Jenny 
  

RE: [squid-users] ACL's by Specific Date and Time

2011-10-09 Thread Jenny Lee

 Date: Sun, 9 Oct 2011 20:45:07 -0700
 From: maill...@jg555.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] ACL's by Specific Date and Time
 
 I use my squid server at home for me to keep my eyes on my kids 
 internet. Was wondering if it was possible to allow or deny access by a 
 specific day and time.
 
 What my thoughts are is when they are on a holiday, to disable my normal 
 rules. So when they are out of school the proxy doesn't stop their 
 access, but if it's a non school day, it will allow them out.
 
 Not sure if this is possible.
 
Very easy to do.
 
 
See acl time: 
http://wiki.squid-cache.org/SquidFaq/SquidAcl?highlight=%28time%29#How_can_I_allow_some_clients_to_use_the_cache_at_specific_times.3F
 
You can add weekends to your rules to allow your kids access. You can also 
download an official public holiday list and create rules for those days.
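For example, a minimal sketch (the kids and whitelist ACLs are placeholders for whatever you already use; Squid's day letters are M T W H F for Monday through Friday):

acl school_days time MTWHF 08:00-16:00
http_access deny kids school_days !whitelist
http_access allow kids

Outside that window (evenings and weekends) the deny line never matches, so the kids get normal access.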
 
Jenny 

RE: [squid-users] Facebook page very slow to respond

2011-10-08 Thread Jenny Lee

 Date: Sat, 8 Oct 2011 16:15:10 -0400
 From: wil...@optimumwireless.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Facebook page very slow to respond
 
 I disabled squid and I'm doing simple FORWARDING and things work, this 
 tells me that I'm having a configuration issue with squid 3.1.14.
 
 Now, I can't afford to run our network without squid since we are also 
 running SquidGuard for disabling some websites to certain users.
 
 Here's part of my squid.conf:

That looks like one bad config to me. I hope someone can straighten it out for 
you.

Also, aren't the netfilter errors telling you something?

Jenny 

RE: [squid-users] block TOR

2011-09-29 Thread Jenny Lee


 Date: Thu, 29 Sep 2011 11:24:55 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] block TOR
 
 There is any way to block TOR with my Squid ?

How did you get it working with Tor in the first place?

I really tried for one of our users. I even used Amos's custom squid build with the SOCKS 
option, but no go.

Jenny 

RE: [squid-users] Secure user authentication on a web proxy

2011-09-21 Thread Jenny Lee


 Date: Tue, 20 Sep 2011 21:51:23 +0300
 From: nmi...@noa.gr
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Secure user authentication on a web proxy

 On 20/9/2011 8:58 μμ, Jenny Lee wrote:

  I don't know if stunnel uses TCP or not.

 Thanks for your thoughts Jenny.

 Stunnel works with SSL, which runs only on TCP. (Ref.:
 http://www.stunnel.org/?page=faq.)

  But OpenVPN has an option to use TCP. You will find that VPN over UDP
  is 3 times faster tha VPN over TCP. All is not vain, though. There is
  a kernel option not to not combine packets to bigger chunks and send
  them immediately as smaller chunks. OpenVPN option tcp-nodelay
  activates that and i can reach almost UDP speeds with TCP. I would
  check if something similiar exists for stunnel.

 The stunnel program is designed to work as an SSL encryption wrapper
 between remote client and local (inetd-startable) or remote server.

 I could directly use OpenVPN instead; I would expect it will take a much
 greater preparation in terms of system design and implementation, but it
 would be more versatile and manageable. Eventually I believe I might do it.
 
 
You can find the OpenVPN option I am talking about on the very page you quoted 
from stunnel:
 
My connections are slow, slow, slow

One option might be to turn on the TCP NODELAY option on both ends. On the 
server, include the following options: 
socket = l:TCP_NODELAY=1
and on the client include: 
socket = r:TCP_NODELAY=1

 
Amos, this option should be included in the Squid FAQs. Those who have tried to do 
TCP-over-TCP tunnelling know how painful it is.
 
 
Jenny 

RE: [squid-users] Secure user authentication on a web proxy

2011-09-20 Thread Jenny Lee

 Please also note that I also tried using Squid + Stunnel to achieve 
 secure user authentication, according to these directions: 
 http://www.jeffyestrumskas.com/index.php/how-to-setup-a-secure-web-proxy-using-ssl-encryption-squid-caching-proxy-and-pam-authentication/
  
 (except that I used ldap auth on the backend).
 
 It worked, but performance was *very* slow (practically awful), and I 
 couldn't find any solution to improve performance. Squid without stunnel 
 worked like a breeze (but without secure/encrypted user authentication)...


I don't know who thought tunnelling TCP inside TCP was a good idea, but it is not. 
There are all sorts of race conditions when congestion causes retransmission of 
packets.

I don't know if stunnel uses TCP or not. 

But OpenVPN has an option to use TCP. You will find that VPN over UDP is 3 
times faster than VPN over TCP.

All is not in vain, though. There is a kernel option not to combine packets into 
bigger chunks but to send them immediately as smaller chunks. The OpenVPN option 
tcp-nodelay activates that, and I can reach almost UDP speeds with TCP.

I would check if something similar exists for stunnel.

Jenny 

RE: [squid-users] Squid 3.2.0.12 beta is available

2011-09-17 Thread Jenny Lee

Thank you for your hard work. Most of the quirks seem to be gone.
 
Lots of: WARNING: always_direct resulted in 3. Username ACLs are not reliable 
here.
 
Why don't we have the IP address logged in the cache log? It is difficult to find 
anything when you end up with a GB of debug log by the time you run a reconfigure and 
reset the debug level.
 
Jenny


 Date: Sat, 17 Sep 2011 22:00:25 +1200
 From: squ...@treenet.co.nz
 To: squid-annou...@squid-cache.org; squid-users@squid-cache.org
 Subject: [squid-users] Squid 3.2.0.12 beta is available
 
 The Squid HTTP Proxy team is very pleased to announce the
 availability of the Squid-3.2.0.12 beta release!
 
 
 This release brings fixes for all the currently known regressions since 
 3.2.0.8.
 
 This release is intended as the working reference package for users 
 testing regressions in the SMP caching support which will be added in 
 the next release. In the same manner that 3.2.0.8 was a reference for 
 regressions added in 3.2.0.9 TCP handling support.
 
 
 See the ChangeLog for the list of other minor changes in this release.
 
 
 All users of the 3.2.0.9 to 3.2.0.11 packages are urged to upgrade to 
 this release as soon as possible.
 
 Users of earlier 3.2 beta releases are encouraged to test this release 
 and upgrade as soon as possible.
 
 
 Upgrade tip:
 squid -k parse is starting to display even more useful hints about 
 squid.conf changes.
 
 
 Please refer to the release notes at
 http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html
 when you are ready to make the switch to Squid-3.2
 
 
 This new release can be downloaded from our HTTP or FTP servers
 
 http://www.squid-cache.org/Versions/v3/3.2/
 ftp://ftp.squid-cache.org/pub/squid/
 ftp://ftp.squid-cache.org/pub/archive/3.2/
 
 or the mirrors. For a list of mirror sites see
 
 http://www.squid-cache.org/Download/http-mirrors.html
 http://www.squid-cache.org/Download/mirrors.html
 
 If you encounter any issues with this release please file a bug report.
 http://bugs.squid-cache.org/
 
 
 Amos Jeffries
 

RE: [squid-users] Squid 3.2.0.12 beta is available

2011-09-17 Thread Jenny Lee

acl random was the issue. Adding an explicit always_direct fixed it.
Jenny


 From: bodycar...@live.com
 To: squ...@treenet.co.nz; squid-annou...@squid-cache.org; 
 squid-users@squid-cache.org
 Subject: RE: [squid-users] Squid 3.2.0.12 beta is available
 Date: Sat, 17 Sep 2011 14:54:54 +


 Thank you for your hard work. Most of the squirks seem to be gone.

 Lots of: WARNING: always_direct resulted in 3. Username ACLs are not reliable 
 here.

 Why don't we have IP address logged in cache log? It is diffult to find 
 anything when you get a GB of debug log by the time you run a reconfigure and 
 reset debug level.

 Jenny   

RE: [squid-users] Squid 3.2.0.12 beta is available

2011-09-17 Thread Jenny Lee




 Date: Sun, 18 Sep 2011 12:29:57 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.2.0.12 beta is available

 On 18/09/11 03:28, Jenny Lee wrote:
 
  acl random was the issue. Adding an explicit always_direct fixed it.
  Jenny

 Aha. Thank you Fixed.

 
  
  From: bodycare_5
  Date: Sat, 17 Sep 2011 14:54:54 +
 
 
  Thank you for your hard work. Most of the squirks seem to be gone.
 
  Lots of: WARNING: always_direct resulted in 3. Username ACLs are not 
  reliable here.
 
  Why don't we have IP address logged in cache log? It is diffult to find 
  anything when you get a GB of debug log by the time you run a reconfigure 
  and reset debug level.

 What do you mean by this? in what case(s) are we missing it?
 
 
I believe the source IP must be part of every entry in the cache log.
 
Let's take the error above, for example. I had no idea who or what was generating 
it. It talked about username ACLs; however, acl random paired with a source IP was 
causing it. No usernames involved.
 
I have to enable the debug log and send a reconfigure to squid. This is a busy cache. By 
the time I return the debug level back to default and send another reconfigure, 
I am left with 1GB of text to scavenge through to find out what or who 
was causing it.
 
It could have taken me 2 seconds to figure it out had it printed the source IP.
 
There are too many examples like this: unparseable header, failed to select 
source, etc. Yes, some we can check against access log URLs, and some against the 
timestamp. However, when you have 500 requests in that particular second, the 
job does not get any easier.
 
Source IP would be useful to narrow down the issues.
 
 
 NP: Look for local=$ip or remote=$ip.

That is what I am looking for, actually :)
 
Jenny 

RE: [squid-users] Authentication Prompts

2011-09-09 Thread Jenny Lee

 Date: Fri, 9 Sep 2011 12:50:24 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Authentication Prompts
 
 On 09/09/11 06:28, Matt Cochran wrote:
  I've been trying to model two different kinds of users in ACLs, where the 
  kids are authenticated by one account, and the adults another. The kids are 
  allowed to go only to a whitelist of websites, but I'd like the adults to 
  be able to override this behavior for a while if they enter their 
  credentials. I was also trying to wire this into a db-auth environment so I 
  can alter the accounts from my desktop.
 
  Following the guide at 
  http://wiki.squid-cache.org/Features/Authentication#How_do_I_ask_for_authentication_of_an_already_authenticated_user.3F,
   I can keep the kids restricted to a site but the parents get stuck in an 
  authentication loop or just denied access. Here's my config - can anyone 
  help me figure this out?
 
 
 
 Notice that would allow the kids to get a popup and re-try with parents 
 login to restricted sites without the parent being present.
 
 
 What you are asking for is this:
 
 # login required to go anywhere at all
 http_access deny !db-auth
 
 # kids to their sites
 http_access allow !parents kids_sites
 
 # parents anywhere
 http_access allow parents
 
 # challenge if not logged in with parents credentials
 http_access deny !parents
 
 # everything else is blocked.
 http_access deny all
 
 
Can't we simplify this to:
 
http_access deny !db-auth
http_access allow kids_sites
http_access deny all !parents

Jenny 

RE: [squid-users] Squid 3.0.STABLE26 is available

2011-08-28 Thread Jenny Lee

 - Correct parsing of large Gopher indexes

This gopher/WAIS... does anyone actually use it?

Yes, maybe in 1994 or during the days of Wildcat BBS.

I think developers should consider removing this code.

Jenny 

RE: [squid-users] [ADVISORY] SQUID-2011:2 Password truncation in NCSA using DES

2011-08-28 Thread Jenny Lee

My honest opinion is that this is a totally unnecessary change. And a brutal 
one too.
 
What difference does it make if it is 8 chars or 888 chars? It is going 
plaintext over the wire.
 
For people with established systems, these functions are scattered everywhere 
-- in CGIs, PHP scripts, password changers, etc. It is not as easy as adding -m 
to htpasswd. I have to revise an entire platform just to find out exactly 
where they all are.
 
Wouldn't making this optional be a better solution? Or informing people to use 
an older ncsa_auth?
 
This change caused a denial of service for many users on my system and it took 2 
days to figure out. People are not necessarily computer literate and they 
don't exactly point out what the problem is. They just say: "It is not 
working." It takes 20 emails back and forth and countless work hours to figure 
out what exactly is not working.
 
This one bit me very bad!
 
Jenny
 



 Date: Sun, 28 Aug 2011 22:29:18 +1200
 From: squ...@treenet.co.nz
 To: squid-annou...@squid-cache.org; squid-users@squid-cache.org
 Subject: [squid-users] [ADVISORY] SQUID-2011:2 Password truncation in NCSA 
 using DES

 __

 Squid Proxy Cache Security Update Advisory SQUID-2011:2
 __

 Advisory ID: SQUID-2011:2
 Date: August 27, 2010
 Summary: Password truncation in NCSA using DES
 Affected versions: Squid 3.0 - 3.0.STABLE25
 Squid 3.1 - 3.1.14
 Squid 3.2 - 3.2.0.10
 Fixed in version: Squid 3.2.0.11, 3.1.15, 3.0.STABLE26
 __

 http://www.squid-cache.org/Advisories/SQUID-2011_2.txt
 __

 Problem Description:

 DES algorithm implemented by htpasswd and crypt() in some popular
 encryption libraries silently truncates passwords. Squid NCSA
 authentication helper permits long and complex passwords to be
 used with DES despite this well known issue. Leaving users with
 a false view of their security. 

RE: [squid-users] Squid 3.0.STABLE26 is available

2011-08-28 Thread Jenny Lee


---
 Date: Sun, 28 Aug 2011 23:26:25 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.0.STABLE26 is available

 On 28/08/11 21:19, Jenny Lee wrote:
 
  - Correct parsing of large Gopher indexes
 
  This gopher/WAIS... Does anyone use it actually?

 Yes, there are some.

I strongly doubt anyone knows what these are, let alone uses them. Those were from 
long before the web.
 
The only place I have heard of them since 1996 is on Squid.
 
Jenny
 
 
  

RE: [squid-users] Debian Squeeze/Squid/ --enable-http-violations / header_replace User-Agent no effect

2011-08-04 Thread Jenny Lee

 Date: Thu, 4 Aug 2011 10:45:57 +0200
 From: ju...@klunky.co.uk
 To: squid-users@squid-cache.org
 Subject: [squid-users] Debian Squeeze/Squid/ --enable-http-violations / 
 header_replace User-Agent no effect
 
 Hi,
 
 I have recompiled squid3 on Debian Squeeze because the Debian repo' deb
 omits the --enable-ssl' '--enable-http-violations' options.
 
 For testing purposes I added this into the squid.conf:
 header_replace User-Agent Mozilla
 
 However, the user agent is not replaced when the client connects with a
 test apache server: (from the access.log)
 62.123.123.123 - - [04/Aug/2011:10:10:41 +0200] GET /favicon.ico
 HTTP/1.1 404 256 - Mozilla/5.0 (Android; Linux armv7l; rv:5.0)
 Gecko/20110615 Firefox/5.0 Fennec/5.0
 
 I know that its coming in via the squid proxy because the IP address is
 that of the squid proxy, although obfuscated in example above.
 
 I read the squid-proxy.org details and the syntax looks correct, and
 squid did not choke on it.

Header replacement takes place when an acl denies header_access.

header_access User-Agent deny all
header_replace User-Agent Mozilla

If it didn't work:
request_header_access User-Agent deny all
header_replace User-Agent Mozilla

header_access branched into: request_header_access and reply_header_access

I use 3.2 and the second invocation works for me. Of course, Amos can give better 
details on when this was changed and what you should use, but to save his time you 
can try both options above.

You will also face problems if you are using proxy authentication 
usernames or upstream peers. In the first case it will sometimes work, sometimes 
not; in the second case it will not work. You will need to upgrade to the latest 3.2 
releases for more stable results. These didn't even work for me before 3.2.0.7, so I 
assume they will not work with 3.1 unless the release is very recent.

Actually, come to think of it, upstream peers do not work even in the latest 
3.2 releases. I think I had a temporary private solution from the developers for this.

Jenny 

RE: [squid-users] Browsing slow after adding squid proxy.

2011-07-20 Thread Jenny Lee

 On Wed, 20 Jul 2011 09:13:34 +1200, Gregory Machin wrote:
  Hi.
  Been a long time since I last looked at a squid proxy. After add a
  proxy to the network , browsing seems to have slowed considerably. I
  have build a squid proxy , this is configured into the network on via
  our Sonicwall using the proxy feature. When I looked into the
  configuration I did a few optimizations based on what I found on a
  couple of websites. All though I opted not to tweak the OS more than
  increase the ulimit as I would not expect it to be required given the
  hardware. It is running out of a SSD drive.
 
 
 Two things in general to be aware of.
 
 * Careful with SSD. Squid is a mostly-write software, SSD work best 
 with mostly-read. So SSD lifetime and speed is reduced from the well 
 advertised specs. That said, they can still improve caching HIT speeds.

I think this must be made a FAQ entry.

As someone who has worked with SSDs since 2004, I can very easily say that, still 
in 2011, SSDs are good only for read-only operations.

Anything requiring writes should be taken off SSDs. I have had lengthy discussions 
and benchmarks on these matters on StorageReview and AnandTech over the years.

Things seem to work fine for a couple of months. Then the SSDs are crippled 
beyond repair no matter how many ATA secure erases are done.

I had Intel SSDs that required a secure erase for every TB written just to 
keep functioning properly.

I had SandForce SSDs that worked well; however, their garbage collection 
routines were so overloaded that it took me 20 seconds to send an IM 
message while something was being written to the disk.

All of these were partitioned to 10-20% less than full capacity to allow room for 
write amplification beyond the manufacturer defaults.

I keep my access.log and cache.log out of SSD (I don't cache).

I really do not want to imagine what happens to a busy caching squid on an 
SSD. This should be a disaster waiting to happen, if it is not already 
happening.

Jenny 

RE: [squid-users] Tuning the cache with heavy load

2011-07-18 Thread Jenny Lee

 Hello,
 
 i have got a Squid version 3.1.8 running on a CentOS 5.x.
 Since it works for url and content filtering, in conjunction with
 Dansguardian in front, for some hunderd of users, the load average of that
 machine is sometimes very high (also 5.0 or 8.0...).
 The biggest process is squid, both on CPUs and Memory; i noticed, by
 entering top -s and press 1 - that often the system is in wait, perhaps
 it waits for the disk IO subsystem to write and manage Squid's cache.
 
cat /proc/stat
 
check for: procs_blocked
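A quick way to watch it, for example:

while true; do grep procs_blocked /proc/stat; sleep 2; done

If that number stays above zero for long stretches, processes are stuck in uninterruptible I/O wait and the disk is most likely the bottleneck.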
 
LA is a very misunderstood metric. It doesn't correspond to load in the way one 
might imagine.
 
Any time you are above 1, there is a problem somewhere. 
 
I have a quad-core box with 2-3 squids, each constantly running at 90% CPU load per 
core, yet my LA is never more than 0.60.
 
Jenny
 
  

RE: [squid-users] Squid terminates when my network goes down

2011-07-12 Thread Jenny Lee

How can you expect *machines* to get a response from squid if the network is down?
 
If squid is listening on localhost, the only client that can connect to it is 
the one on that machine; any other client cannot connect to it. That would be a 
very isolated use case, I believe (squid, after all, is a cache; you might as well use 
your browser's cache for that purpose). There is no point in running squid for only one 
client.
 
But I have not used squid on Windows, so I would not know the behaviour. 
Moreover, I have not used squid in an environment where there is no internet 
connectivity 24/7.
 
Another user pointed out that his squid on Linux runs fine when the network goes 
down, so this might be something specific to Windows.
 
http://www.squid-cache.org/Doc/config/windows_ipaddrchangemonitor/
 
windows_ipaddrchangemonitor on
 
Have you tried toggling this config value on and off?
 
Jenny
 
 



 Date: Mon, 11 Jul 2011 21:06:09 -0400
 From: january.sh...@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid terminates when my network goes down
 
 Your analysis is correct and helpful, Jenny. Can you clarify further?
 Are you saying that Windows machines running squid should not have
 local web clients, i.e., not even set its browser to use the
 locally-running squid?
 
 J
 
 On Sun, Jul 10, 2011 at 9:50 PM, Jenny Lee bodycar...@live.com wrote:
 
  Is this a bug? If the network is down, shouldn't squid just generate
  an error page, like ERR_CONNECT_FAIL, and not collapse like this?
 
  Logically, how would you expect squid to convey ERR_CONNECT_FAIL to the 
  client if the network is down?
 
  I can think of only one case where this might make sense -- client connects 
  from localhost to a squid listening on localhost but going out on other 
  interface... which would mean a very isolated case of use for a cache like 
  squid.
 
  Jenny 

RE: [squid-users] Squid terminates when my network goes down

2011-07-12 Thread Jenny Lee

  How can you expect *machineS* to get a response from squid if network is 
  down?

 Proxy server. Squid accepts clients on inside interface and
 connects to internet servers on outside interface.
 Outside interface goes down with inside interface still alive.

 I would actually like to have the problem/feature below, since
 that would mean no clients get stuck at the nonfunctioning squid
 instead of moving to the next squid in the roundrobin failover.
 
 
If you read the original post, he mentions squid terminating when the network goes 
down.
 
No clients get stuck at the non-functioning squid in a cache hierarchy. They 
move on to the next one as-is, since the dead one is already marked as down 
and removed from the round-robin pool. So that feature is built in already (if I am 
not misunderstanding your scenario).
 
Jenny 

RE: [squid-users] Squid terminates when my network goes down

2011-07-10 Thread Jenny Lee

 Is this a bug? If the network is down, shouldn't squid just generate
 an error page, like ERR_CONNECT_FAIL, and not collapse like this? 
 
Logically, how would you expect squid to convey ERR_CONNECT_FAIL to the client 
if the network is down?
 
I can think of only one case where this might make sense -- the client connects 
from localhost to a squid listening on localhost but going out on another 
interface... which would be a very isolated use case for a cache like 
squid.
 
Jenny 

RE: [squid-users] insert into cache

2011-07-02 Thread Jenny Lee

Are you cloning the internet for Iran?
 
Jenny
 
 Dear all,

 i have a squid server and separate server which has a million page from
 million URL,you know that i can insert page into cache via squidclient
 MYURL,but it uses GET http command and download page,now i have this
 page and just wanna insert into cache(already downloaded), do you have
 solution?

 Yours,
 Mohsen

 

RE: [squid-users] ext4 vs reiserfs

2011-06-28 Thread Jenny Lee

 Dear all,

 I don't know to use which ext4 stable or reiserFS for squid.
 Which has high performance?
 
I think reiserFS is not a wise choice. 
 
- Its user base is limited and shrinking
- It had corruption issues in the past (especially with postfix)
- No vendor supports it
- Its creator is in jail for killing his mail-order bride.
 
 
I will give examples from my own usage, from when I was looking for a file system to 
process many files (200 million) on an SSD.
 
Deleting 1,000,000 files of 50 KB each (rm -Rf on the directory):
 
NTFS: 7 minutes
ext3: 8 minutes
ext4: 12 seconds
 
I don't use squid caching; however, for heavy filesystem activity I have used ext4 
exclusively for a couple of years now, with no issues at all. It is the default on 
many distros, including RHEL6, so it is better to stick with it.
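If you do end up putting a cache_dir on ext4, a typical fstab line is something along these lines (device and mount point are just examples; noatime is optional but a common tweak for cache disks):

/dev/sdb1   /var/spool/squid   ext4   noatime   0   2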
 
Jenny
 
 
 
 
 
 
 
 
 
 
 
 
  

RE: [squid-users] Memory issues

2011-06-28 Thread Jenny Lee

 Good Lord!!!

 The amount of free RAM in my system keeps decreasing, What happens
 when it RAM reaches to zero? Is it that it remove old object and free
 up space?
 
It is probably being used by buffers and cache.
 
free -m
 
should show you how much available memory and cache there is.
 
Jenny 

RE: [squid-users] Memory issues

2011-06-28 Thread Jenny Lee

 Subject: Re: [squid-users] Memory issues

 free -m
 total used free shared buffers cached
 Mem: 3722 3011 710 0 305 1352
 -/+ buffers/cache: 1353 2369
 Swap: 2047 21 2025

 Do I genuinely require to increase the memory of this system?



No, it looks good. 
 
I don't understand how you came up with the idea that you have memory issues.
 
Jenny
  

RE: [squid-users] Strange 503 on https sites [ipv6 edition]

2011-06-27 Thread Jenny Lee

 NP: (rant warning) if you followed most any online tutorial for 
 disabling IPv6 in RHEL. Most only go so far as to make the kernel drop 
 IPv6 packets. Rather than actually turning the OFF kernel control which 
 would inform the relevant software that it cannot use IPv6 ports. So it 
 sends a packet, and waits... and waits...
 (and yes I know you are connecting to an IPv4 host. Linux hybrid 
 stack which Squid uses can use IPv6 sockets to contact IPv4 space).

That is probably because IPv6 is no longer a module but is built into the kernel.
 
Most online tutorials would therefore not work, or only half-work.

The proper way to disable the IPv6 "virus" in RHEL6 is:

/boot/grub/grub.conf
ipv6.disable=1
 
/etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
 
/etc/modprobe.conf
/etc/modprobe.d/local.conf
alias net-pf-10 off
alias ipv6 off

/etc/sysconfig/network
NETWORKING_IPV6=off

echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
 
chkconfig ip6tables off

/etc/sysconfig/network-scripts/ifcfg-eth0
make sure ipv6 DNS entries are removed
 
 
Doing all of the above disables IPv6 on both RHEL5 and RHEL6. Instead of worrying about 
what is what and what works or not, I run the whole set everywhere and it covers all my 
machines.
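A quick sanity check after a reboot (neither command should print anything if IPv6 really is off):

ip a | grep inet6
netstat -ltn | grep ':::'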
 
I also run the following so that, in case IPv6 is still enabled somewhere, the traffic is dropped:
 
#!/bin/bash
if [ -d /proc/sys/net/ipv6/conf ]; then
IPT6=/sbin/ip6tables
 
# Flush all
$IPT6 -F ; $IPT6 -F FORWARD ; $IPT6 -X ; $IPT6 -Z ;
 
$IPT6 -A INPUT   -j LOG --log-prefix "IPv6 INPUT DROPPED: "
$IPT6 -A OUTPUT  -j LOG --log-prefix "IPv6 OUTPUT DROPPED: "
$IPT6 -A FORWARD -j LOG --log-prefix "IPv6 FORWARD DROPPED: "
$IPT6 -P INPUT DROP
$IPT6 -P OUTPUT DROP
$IPT6 -P FORWARD DROP
fi
 
 
A little bit old school perhaps, but I don't have much knowledge about IPv6 yet, and I 
would rather have it disabled until I learn it instead of keeping my machines 
open to another attack vector. 
 
You might not agree with me, but this minimalistic approach of "don't use it now, 
don't keep it" has saved me many times over the years.
 
Hope someone finds this helpful.
 
Jenny
 
 
DISCLAIMER: Use at your own risk. I am not responsible if it blows up your 
house, bites your dog, does your wife.
 
 
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-06-27 Thread Jenny Lee

 Dear Jenny and Amos,

 I thought it worth mentioning that I too am having troubles with the
 ACL processing of the request_header_access User-Agent configuration
 directive. It seems like Jenny's issue is the same one I am seeing.

 Using a src ACL in the directive doesn't work when you have a cache
 peer. The ACL is only ever checked to see if the IP address
 255.255.255.255 exists in the list.

 I know this was only reported recently, but I wanted to know if there
 was a fix in the works or if Amos is still waiting for a fix to be
 submitted.

 Thanks and Best Regards,

 Sean Butler
 
I hired developer time for a private patch. It seems to be working now. I will get you 
the patch once it is all ready.
 
Jenny 

RE: [squid-users] Strange 503 on https sites [ipv6 edition]

2011-06-27 Thread Jenny Lee

 Ouch! Add these at least:
 $IPT6 -A INPUT -j REJECT
 $IPT6 -A OUTPUT -j REJECT
 $IPT6 -A FORWARD -j REJECT
 
 
  $IPT6 -P INPUT DROP
  $IPT6 -P OUTPUT DROP
  $IPT6 -P FORWARD DROP
  fi
 
 
 And *that* is exactly the type of false disable I was talking about.
 
 Squid and other software will attempt to open an IPv6 socket(). As long 
 as the IPv6 modules are loaded in the kernel that will *succeed*. At 
 first glance this is fine, IPv4 can still come and go through that 
 socket.
 
 - In TCP they might then try to bind() to an IPv6, that *succeeds*. 
 [bingo! IPv6 enabled and working. Squid will use it.]
 Then try to connect() to an IPv6. That also succeeds (partially). 
 But the firewall DROP prevents the SYN packet ever going anywhere. Up to 
 *15 minutes* later TCP will timeout.
 
 - In UDP things get even stranger. It expects no response, so send() 
 to both IPv4 and IPv6 will *succeed*.
 
 Does the DNS error No Servers responding;; sound all too familiar? 
 then you or a transit network is most likely using DROP somewhere on 
 UDP, TCP or ICMP.

Unlikely to happen, because we inserted IPv6 disable mechanisms in 50 different 
places. That was the last line of defence in case nothing else worked.

If it ever came to that, it is a moot point whether the traffic is dropped or rejected; we 
have bigger problems.

From a client's point of view, or in testing, I agree with you: REJECT should be used to 
inform failing clients, otherwise DROPs will cause lengthy delays.

But on internet-facing production systems, DROP should be used.

- Less network traffic when there are attacks
- More secure
- Immune to spoofing and reflection scans on other systems
- Immune to probes

But as I mentioned, my rules should be considered in the whole context of 
disabling ipv6, whereas the OP's issue might very well be these very DROP rules 
that I advocate.

My intention was to post useful info for those who are trying to disable IPv6 on 
RHEL, rather than to find a solution to the OP's squid problem, which is your expertise.

I will surely be bothering you with bugs and mistakes about IPv6 once I compile 
squid with it... but I don't expect that to happen before 2020, or until I am left 
as the last person on earth not supporting IPv6.

Jenny

PS: I have never seen these "IPv6 DROPPED" log entries over the years.
  

[squid-users] 3.2.0.9 Issues

2011-06-19 Thread Jenny Lee

Hello Squid Team,

Thank you for the much-awaited 3.2.0.9 release. This one seems to have one major 
issue:

1) Peers are not honored. All connections go direct. I tried everything 
possible, but to no avail. Can someone verify?
 
Others:

2) assertion failed: mem.cc:190: MemPools[type] == NULL

3) What does this mean?
2011/06/19 14:16:14 kid1| Failure Ratio at 1.008
2011/06/19 14:16:14 kid1| Going into hit-only-mode for 5 minutes...

4) --disable-ipv6 is still not honored; I need to use USE_IPV6 0 to disable it.

5) forward.cc has a "Forwarding client request..." line at debug 17,1 which seems 
to create a cache.log as big as the access.log!
 
 
This is with a drop-in 3.2.0.8 config file, at a cursory glance. I will go 
through the options to see if anything changed and whether these are my own wrongdoings.
 
Jenny 

[squid-users] [RESOLVED] RE: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-13 Thread Jenny Lee


 Date: Sun, 12 Jun 2011 21:23:44 +1200
 From: squ...@treenet.co.nz
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org; squid-...@squid-cache.org
 Subject: Re: [squid-users] WORKERS: Any compile option to enable? commBind: 
 Cannot bind socket FD 13 to [::]: (2) No such file or directory

 On 12/06/11 20:21, Jenny Lee wrote:
 
  Subject: Re: [squid-users] WORKERS: Any compile option to enable? 
  commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
 
  On 12/06/11 16:17, Jenny Lee wrote:
 
  I can't get the workers work. They are started fine. However I get:
 
  kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or 
  directory
 
  Is there a compile option to enable/disable workers that I am missing?
 
  I can't seem to replicate that here. More details are needed about what
  FD 13 and FD 9 were being used for please.
 
 
  649 kid1| comm.cc(2507) comm_open_uds: Attempt open socket for: 
  /usr/local/squid/var/run/squid-1.ipc
  649 kid1| comm.cc(2525) comm_open_uds: Opened UDS FD 13 : family=1, 
  type=2, protocol=0
  649 kid1| comm.cc(2528) comm_open_uds: FD 13 is a new socket
  649 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
 
  symlinking /usr/local/squid/var/run to /squid fixed the problem. I have 
  everything in /squid.
 
  Aha, then you probably need to use ./configure --prefix=/squid
 
  That should make the socket /squid/var/run/squid-1.ipc
 
  Trimmed for brevity:
 
  strings /squid/squid|grep '^\/'
  /lib64/ld-linux-x86-64.so.2
  /etc/resolv.conf
  /squid/errors/templates
  /dev/tty
  /dev/null
  /squid/squid.conf
  /usr/local/squid/var/run/coordinator.ipc
  /usr/local/squid/var/run/squid

 Strange those last two are UNIX sockets with address:
 $(prefix)/var/run/coordinator.ipc
 $(prefix)/var/run/squid

 and yet the errors and squid.conf picked up the prefix right.

 Notice how that last one has the same name as the squid binary? Yet is a
 unix socket name. Strange things can be expected when you execute a
 socket or try to write a data stream over a binary.
 
Oh jesus! Apparently --prefix was appended to the end of --target because of a 
missing space before the end-of-line backslash!
 
strings squid|grep bindir:
snip '--target=x86_64-redhat-linux-gnu--prefix=/squid' '--bindir=/squid' 
'--sbindir=/squid' snip
 
 
./configure \
--build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu 
--target=x86_64-redhat-linux-gnu\
--prefix=/$SQUID --bindir=/$SQUID --sbindir=/$SQUID --libexecdir=/$SQUID 
--datadir=/$SQUID --sysconfdir=/$SQUID \
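In other words, the fix is just a space before each line-continuation backslash, something along these lines (trimmed to the options shown above):

./configure \
--build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu \
--target=x86_64-redhat-linux-gnu \
--prefix=/$SQUID --bindir=/$SQUID --sbindir=/$SQUID --libexecdir=/$SQUID \
--datadir=/$SQUID --sysconfdir=/$SQUID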
 
Thanks Amos!
 
Jenny
 
 
 
 
 
 
 
 


 I suggest you let the installer use the standard FS hierarchy locations
 for these special things. You can use --prefix to setup a chroot folder
 (/squid) where a duplicate of the required layout will be created inside.


 Meanwhile I'm not sure exactly how the /usr/local/squid/var/run/ got
 into the binary. Maybe junk from a previous build.
 Try erasing src/ipc/Makefile src/ipc/Makefile.in and src/ip/Port.o
 then running ./configure and make again.

 Amos
 --
 Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.12
 Beta testers wanted for 3.2.0.8 and 3.1.12.2  
   

RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-12 Thread Jenny Lee

On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee bodycar...@live.com wrote:

I would like to know how you are able to do 13000 requests/sec.
tcp_fin_timeout defaults to 60 seconds on all *NIXes and the available ephemeral 
port range is about 64K.
I can't do more than 1K requests/sec with ab, even with tcp_tw_reuse/tcp_tw_recycle; 
I get commBind errors due to connections in TIME_WAIT.
Any tuning options suggested for RHEL6 x64?
Jenny

I would have a concern using both those at the same time.   reuse and recycle. 
Reuse a socket, but recycle it, I've seen issues when testing my own linux 
distro's with both of these settings. Right or wrong that was my experience.
fin_timeout, if you have a good connection, there should be no reason that a 
system takes 60 seconds to send out a fin. Cut that in half, if not by 2/3's
And what is your limitation at 1K requests/sec, load (if so look at I/O) 
Network saturation? Maybe I missed an earlier thread and I too would tilt my 
head at 13K requests sec!
Tory
---
 
 
As I mentioned, my limitation is the ephemeral ports tied up in TIME_WAIT. The 
TIME_WAIT issue is a well-known factor when you are doing this kind of testing.
 
When you are tuning, you apply options one at a time. tw_reuse/tw_recycle were 
not used together, and I had a 10-second fin_timeout, which made no difference.
Jenny

 
nb: I still don't know how to do indenting/quoting with this hotmail... after 10 
years.
  

RE: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-12 Thread Jenny Lee




 Date: Sun, 12 Jun 2011 17:41:18 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 CC: squid-...@squid-cache.org
 Subject: Re: [squid-users] WORKERS: Any compile option to enable? commBind: 
 Cannot bind socket FD 13 to [::]: (2) No such file or directory

 On 12/06/11 16:17, Jenny Lee wrote:
 
  I can't get the workers work. They are started fine. However I get:
 
  kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or 
  directory
 
  Is there a compile option to enable/disable workers that I am missing?

 I can't seem to replicate that here. More details are needed about what
 FD 13 and FD 9 were being used for please.
 
 
649 kid1| comm.cc(2507) comm_open_uds: Attempt open socket for: 
/usr/local/squid/var/run/squid-1.ipc
649 kid1| comm.cc(2525) comm_open_uds: Opened UDS FD 13 : family=1, type=2, 
protocol=0
649 kid1| comm.cc(2528) comm_open_uds: FD 13 is a new socket
649 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
directory

symlinking /usr/local/squid/var/run to /squid fixed the problem. I have 
everything in /squid.
 
Shutdown issue also fixed with this.
 
Jenny
 
  

RE: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-12 Thread Jenny Lee

  On 12/06/11 16:17, Jenny Lee wrote:
 
  I can't get the workers work. They are started fine. However I get:
 
  kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or 
  directory
 
  Is there a compile option to enable/disable workers that I am missing?
 
  I can't seem to replicate that here. More details are needed about what
  FD 13 and FD 9 were being used for please.
 
 
  649 kid1| comm.cc(2507) comm_open_uds: Attempt open socket for: 
  /usr/local/squid/var/run/squid-1.ipc
  649 kid1| comm.cc(2525) comm_open_uds: Opened UDS FD 13 : family=1, type=2, 
  protocol=0
  649 kid1| comm.cc(2528) comm_open_uds: FD 13 is a new socket
  649 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
 
  symlinking /usr/local/squid/var/run to /squid fixed the problem. I have 
  everything in /squid.

 Aha, then you probably need to use ./configure --prefix=/squid
 
 
Squid is compiled with:
 
--prefix=/squid --bindir=/squid --sbindir=/squid --libexecdir=/squid 
--datadir=/squid --sysconfdir=/squid --libdir=/squid --localstatedir=/squid
 
What do i need change to get /usr/local/squid/var/run/*ipc to /squid?
 
Jenny 

RE: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-12 Thread Jenny Lee

  Subject: Re: [squid-users] WORKERS: Any compile option to enable? 
  commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
 
  On 12/06/11 16:17, Jenny Lee wrote:
 
  I can't get the workers work. They are started fine. However I get:
 
  kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or 
  directory
 
  Is there a compile option to enable/disable workers that I am missing?
 
  I can't seem to replicate that here. More details are needed about what
  FD 13 and FD 9 were being used for please.
 
 
  649 kid1| comm.cc(2507) comm_open_uds: Attempt open socket for: 
  /usr/local/squid/var/run/squid-1.ipc
  649 kid1| comm.cc(2525) comm_open_uds: Opened UDS FD 13 : family=1, type=2, 
  protocol=0
  649 kid1| comm.cc(2528) comm_open_uds: FD 13 is a new socket
  649 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
 
  symlinking /usr/local/squid/var/run to /squid fixed the problem. I have 
  everything in /squid.

 Aha, then you probably need to use ./configure --prefix=/squid

 That should make the socket /squid/var/run/squid-1.ipc
 
Trimmed for brevity:
 
strings /squid/squid|grep '^\/'
/lib64/ld-linux-x86-64.so.2
/etc/resolv.conf
/squid/errors/templates
/dev/tty
/dev/null
/squid/squid.conf
/usr/local/squid/var/run/coordinator.ipc
/usr/local/squid/var/run/squid
 
Jenny 

RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-12 Thread Jenny Lee




 Date: Sun, 12 Jun 2011 19:54:10 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues

 On 12/06/11 18:46, Jenny Lee wrote:
 
  On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
 
  I like to know how you are able to do13000 requests/sec.
  tcp_fin_timeout is 60 seconds default on all *NIXes and available ephemeral 
  port range is 64K.
  I can't do more than 1K requests/sec even with tcp_tw_reuse/tcp_tw_recycle 
  with ab. I get commBind errors due to connections in TIME_WAIT.
  Any tuning options suggested for RHEL6 x64?
  Jenny
 
  I would have a concern using both those at the same time. reuse and 
  recycle. Reuse a socket, but recycle it, I've seen issues when testing my 
  own linux distro's with both of these settings. Right or wrong that was my 
  experience.
  fin_timeout, if you have a good connection, there should be no reason that 
  a system takes 60 seconds to send out a fin. Cut that in half, if not by 
  2/3's
  And what is your limitation at 1K requests/sec, load (if so look at I/O) 
  Network saturation? Maybe I missed an earlier thread and I too would tilt 
  my head at 13K requests sec!
  Tory
  ---
 
 
  As I mentioned, my limitation is the ephemeral ports tied up with 
  TIME_WAIT. TIME_WAIT issue is a known factor when you are doing testing.
 
  When you are tuning, you apply options one at a time. tw_reuse/tc_recycle 
  were not used togeter and I had 10 sec fin_timeout which made no difference.
 
  Jenny
 
 
  nb: i still dont know how to do indenting/quoting with this hotmail... 
  after 10 years.
 

 Couple of thing to note.
 Firstly that this was an ab (apache bench) reported figure. It
 calculates the software limitation based on speed of transactions done.
 Not necessarily accounting for things like TIME_WAIT. Particularly if it
 was extrapolated from say, 50K requests, which would not hit that OS limit.
 
ab counts the 200-OK responses, and TIME_WAITs cause squid to issue 500s. Of 
course, if you only send in 50K requests it would not be subject to this, but I usually 
send a couple of runs of 10+ million to simulate load for at least a while.

 
 He also mentioned using a local IP address. If that was on the lo
 interface. It would not be subject to things like TIME_WAIT or RTT lag.
 
When I was running my benches on loopback, I had tons of TIME_WAITS for 
127.0.0.1 and squid would bail out with: commBind: Cannot bind socket...
 
Of course, I might be doing things wrong.
 
I am interested in what to optimize on RHEL6 OS level to achieve higher 
requests per second.
 
Jenny
 
 
 
 
 
 
 
 
 
 
  

RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-12 Thread Jenny Lee

 Date: Sun, 12 Jun 2011 03:02:23 -0700
 From: da...@lang.hm
 To: bodycar...@live.com
 CC: squ...@treenet.co.nz; squid-users@squid-cache.org
 Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
 
 On Sun, 12 Jun 2011, Jenny Lee wrote:
 
  On 12/06/11 18:46, Jenny Lee wrote:
 
  On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
 
  I like to know how you are able to do13000 requests/sec.
  tcp_fin_timeout is 60 seconds default on all *NIXes and available 
  ephemeral port range is 64K.
  I can't do more than 1K requests/sec even with 
  tcp_tw_reuse/tcp_tw_recycle with ab. I get commBind errors due to 
  connections in TIME_WAIT.
  Any tuning options suggested for RHEL6 x64?
  Jenny
 
  I would have a concern using both those at the same time. reuse and 
  recycle. Reuse a socket, but recycle it, I've seen issues when testing my 
  own linux distro's with both of these settings. Right or wrong that was 
  my experience.
  fin_timeout, if you have a good connection, there should be no reason 
  that a system takes 60 seconds to send out a fin. Cut that in half, if 
  not by 2/3's
  And what is your limitation at 1K requests/sec, load (if so look at I/O) 
  Network saturation? Maybe I missed an earlier thread and I too would tilt 
  my head at 13K requests sec!
  Tory
  ---
 
 
  As I mentioned, my limitation is the ephemeral ports tied up with 
  TIME_WAIT. TIME_WAIT issue is a known factor when you are doing testing.
 
  When you are tuning, you apply options one at a time. tw_reuse/tc_recycle 
  were not used togeter and I had 10 sec fin_timeout which made no 
  difference.
 
  Jenny
 
 
  nb: i still dont know how to do indenting/quoting with this hotmail... 
  after 10 years.
 
 
  Couple of thing to note.
  Firstly that this was an ab (apache bench) reported figure. It
  calculates the software limitation based on speed of transactions done.
  Not necessarily accounting for things like TIME_WAIT. Particularly if it
  was extrapolated from say, 50K requests, which would not hit that OS limit.
 
  Ab accounts for 200-OK responses and TIME_WAITS cause squid to issue 500. 
  Of course if you send in 50K it would not be subject to this but I usually 
  send couple 10+ million to simulate load at least for a while.
 
 
  He also mentioned using a local IP address. If that was on the lo
  interface. It would not be subject to things like TIME_WAIT or RTT lag.
 
  When I was running my benches on loopback, I had tons of TIME_WAITS for 
  127.0.0.1 and squid would bail out with: commBind: Cannot bind socket...
 
  Of course, I might be doing things wrong.
 
  I am interested in what to optimize on RHEL6 OS level to achieve higher 
  requests per second.
 
  Jenny
 
 I'll post my configs when I get back to the office, but one thing is that 
 if you send requests faster than they can be serviced the pending requests 
 build up until you start getting timeouts. so I have to tinker with the 
 number of requests that can be sent in parallel to keep the request rate 
 below this point.
 
 note that when I removed the long list of ACLs I was able to get this 13K 
 requests/sec rate going from machine A to squid on machine B to apache on 
 machine C so it's not a localhost thing.
 
 getting up to the 13K rate on apache does require doing some tuning and 
 tweaking of apache, stock configs that include dozens of dynamically 
 loaded modules just can't achieve these speeds. These are also fairly 
 beefy boxes, dual quad core opterons with 64G ram and 1G ethernet 
 (multiple cards, but I haven't tried trunking them yet)
 
 David Lang


OK, I am assuming that persistent connections are on. This doesn't simulate any 
real-life scenario.

I would like to know if anyone can do more than 500 reqs/sec with persistent 
connections off.
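For the record, the kind of test I mean is something like this (ab without -k, so every request opens a fresh connection; the proxy address and URL are just examples):

ab -n 100000 -c 100 -X 127.0.0.1:3128 http://www.example.com/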

Jenny 

RE: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-12 Thread Jenny Lee




 Date: Sun, 12 Jun 2011 21:23:44 +1200
 From: squ...@treenet.co.nz
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org; squid-...@squid-cache.org
 Subject: Re: [squid-users] WORKERS: Any compile option to enable? commBind: 
 Cannot bind socket FD 13 to [::]: (2) No such file or directory

 On 12/06/11 20:21, Jenny Lee wrote:
 
  Subject: Re: [squid-users] WORKERS: Any compile option to enable? 
  commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
 
  On 12/06/11 16:17, Jenny Lee wrote:
 
  I can't get the workers work. They are started fine. However I get:
 
  kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
  kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or 
  directory
 
  Is there a compile option to enable/disable workers that I am missing?
 
  I can't seem to replicate that here. More details are needed about what
  FD 13 and FD 9 were being used for please.
 
 
  649 kid1| comm.cc(2507) comm_open_uds: Attempt open socket for: 
  /usr/local/squid/var/run/squid-1.ipc
  649 kid1| comm.cc(2525) comm_open_uds: Opened UDS FD 13 : family=1, 
  type=2, protocol=0
  649 kid1| comm.cc(2528) comm_open_uds: FD 13 is a new socket
  649 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
  directory
 
  symlinking /usr/local/squid/var/run to /squid fixed the problem. I have 
  everything in /squid.
 
  Aha, then you probably need to use ./configure --prefix=/squid
 
  That should make the socket /squid/var/run/squid-1.ipc
 
  Trimmed for brevity:
 
  strings /squid/squid|grep '^\/'
  /lib64/ld-linux-x86-64.so.2
  /etc/resolv.conf
  /squid/errors/templates
  /dev/tty
  /dev/null
  /squid/squid.conf
  /usr/local/squid/var/run/coordinator.ipc
  /usr/local/squid/var/run/squid

 Strange those last two are UNIX sockets with address:
 $(prefix)/var/run/coordinator.ipc
 $(prefix)/var/run/squid

 and yet the errors and squid.conf picked up the prefix right.

 Notice how that last one has the same name as the squid binary? Yet is a
 unix socket name. Strange things can be expected when you execute a
 socket or try to write a data stream over a binary.
 
I think that is not a socket but a directory where the socket will reside. Most 
daemons now are moving to their own directories under /var/run due to 
SELinux.
 
squid-1.ipc and squid-2.ipc are created inside that directory when workers are used. How 
can I find out what exactly it is from the binary? 
 
It does sound alarming. But then again, over the years I have not hit a bug in this setup 
that was not reproducible with a stock install.
 
But now... if I do symlink /usr/local/squid/var/run to /squid... that last 
entry will point straight at the squid binary.
 

 I suggest you let the installer use the standard FS hierarchy locations
 for these special things. You can use --prefix to setup a chroot folder
 (/squid) where a duplicate of the required layout will be created inside.
 
I can't do a chroot because squid interfaces with other stuff, or some other stuff 
interfaces with squid.
 
I have had this /squid setup for 9 years now.


 Meanwhile I'm not sure exactly how the /usr/local/squid/var/run/ got
 into the binary. Maybe junk from a previous build.
 Try erasing src/ipc/Makefile src/ipc/Makefile.in and src/ip/Port.o
 then running ./configure and make again.
 
I run configure from a script which erases the squid tree and unpacks the tarball 
fresh at each compile (precisely so that I would not face issues like this).
 
Thanks!
 
Jenny 

RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-12 Thread Jenny Lee


 Date: Sun, 12 Jun 2011 22:47:25 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues

 On 12/06/11 22:20, Jenny Lee wrote:
 
  Date: Sun, 12 Jun 2011 03:02:23 -0700
  From: da...@lang.hm
  To: bodycar...@live.com
  CC: squ...@treenet.co.nz; squid-users@squid-cache.org
  Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
 
  On Sun, 12 Jun 2011, Jenny Lee wrote:
 
  On 12/06/11 18:46, Jenny Lee wrote:
 
  On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
 
  I like to know how you are able to do13000 requests/sec.
  tcp_fin_timeout is 60 seconds default on all *NIXes and available 
  ephemeral port range is 64K.
  I can't do more than 1K requests/sec even with 
  tcp_tw_reuse/tcp_tw_recycle with ab. I get commBind errors due to 
  connections in TIME_WAIT.
  Any tuning options suggested for RHEL6 x64?
  Jenny
 
  I would have a concern using both those at the same time. reuse and 
  recycle. Reuse a socket, but recycle it, I've seen issues when testing 
  my own linux distro's with both of these settings. Right or wrong that 
  was my experience.
  fin_timeout, if you have a good connection, there should be no reason 
  that a system takes 60 seconds to send out a fin. Cut that in half, if 
  not by 2/3's
  And what is your limitation at 1K requests/sec, load (if so look at 
  I/O) Network saturation? Maybe I missed an earlier thread and I too 
  would tilt my head at 13K requests sec!
  Tory
  ---
 
 
  As I mentioned, my limitation is the ephemeral ports tied up with 
  TIME_WAIT. TIME_WAIT issue is a known factor when you are doing testing.
 
  When you are tuning, you apply options one at a time. 
  tw_reuse/tc_recycle were not used togeter and I had 10 sec fin_timeout 
  which made no difference.
 
  Jenny
 
 
  nb: i still dont know how to do indenting/quoting with this hotmail... 
  after 10 years.
 
 
  Couple of thing to note.
  Firstly that this was an ab (apache bench) reported figure. It
  calculates the software limitation based on speed of transactions done.
  Not necessarily accounting for things like TIME_WAIT. Particularly if it
  was extrapolated from say, 50K requests, which would not hit that OS 
  limit.
 
  Ab accounts for 200-OK responses and TIME_WAITS cause squid to issue 500. 
  Of course if you send in 50K it would not be subject to this but I 
  usually send couple 10+ million to simulate load at least for a while.
 
 
  He also mentioned using a local IP address. If that was on the lo
  interface. It would not be subject to things like TIME_WAIT or RTT lag.
 
  When I was running my benches on loopback, I had tons of TIME_WAITS for 
  127.0.0.1 and squid would bail out with: commBind: Cannot bind socket...
 
  Of course, I might be doing things wrong.
 
  I am interested in what to optimize on RHEL6 OS level to achieve higher 
  requests per second.
 
  Jenny
 
  I'll post my configs when I get back to the office, but one thing is that
  if you send requests faster than they can be serviced the pending requests
  build up until you start getting timeouts. so I have to tinker with the
  number of requests that can be sent in parallel to keep the request rate
  below this point.
 
  note that when I removed the long list of ACLs I was able to get this 13K
  requests/sec rate going from machine A to squid on machine B to apache on
  machine C so it's not a localhost thing.
 
  getting up to the 13K rate on apache does require doing some tuning and
  tweaking of apache, stock configs that include dozens of dynamically
  loaded modules just can't achieve these speeds. These are also fairly
  beefy boxes, dual quad core opterons with 64G ram and 1G ethernet
  (multiple cards, but I haven't tried trunking them yet)
 
  David Lang
 
 
  Ok, I am assuming that persistent-connections are on. This doesn't simulate 
  any real life scenario.

 What do you mean by that? it is the basic requirement for access to the
 major HTTP/1.1 performance features. ON is the default.
 
 
 
First of all, this breaks tcp_outgoing_address in squid, so it is definitely 
off for me.
 
The above issue also makes persistent connections unusable with peers.
 
Second, when you have many users going to many destinations, 
persistent connections are of little use. Even though I have persistent connections 
on for the client side, I am still bitten by the ephemeral ports.
 
These are my scenarios.
 
 

  I would like to know if anyone can do more than 500 reqs/sec with 
  persistent connections off.
 
  Jenny

 Good question. Anyone?
 
I can do 450 reqs/sec under constant load. But no more. And I have tried all 
available TCP tuning options.
 
Jenny 

RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-12 Thread Jenny Lee




 Date: Sun, 12 Jun 2011 03:35:28 -0700
 From: da...@lang.hm
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org
 Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues

 On Sun, 12 Jun 2011, Jenny Lee wrote:

  Date: Sun, 12 Jun 2011 03:02:23 -0700
  From: da...@lang.hm
  To: bodycar...@live.com
  CC: squ...@treenet.co.nz; squid-users@squid-cache.org
  Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
 
  On Sun, 12 Jun 2011, Jenny Lee wrote:
 
  On 12/06/11 18:46, Jenny Lee wrote:
 
  On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
 
  I like to know how you are able to do13000 requests/sec.
  tcp_fin_timeout is 60 seconds default on all *NIXes and available 
  ephemeral port range is 64K.
  I can't do more than 1K requests/sec even with 
  tcp_tw_reuse/tcp_tw_recycle with ab. I get commBind errors due to 
  connections in TIME_WAIT.
  Any tuning options suggested for RHEL6 x64?
  Jenny
 
  I would have a concern using both those at the same time. reuse and 
  recycle. Reuse a socket, but recycle it, I've seen issues when testing 
  my own linux distro's with both of these settings. Right or wrong that 
  was my experience.
  fin_timeout, if you have a good connection, there should be no reason 
  that a system takes 60 seconds to send out a fin. Cut that in half, if 
  not by 2/3's
  And what is your limitation at 1K requests/sec, load (if so look at 
  I/O) Network saturation? Maybe I missed an earlier thread and I too 
  would tilt my head at 13K requests sec!
  Tory
  ---
 
 
  As I mentioned, my limitation is the ephemeral ports tied up with 
  TIME_WAIT. TIME_WAIT issue is a known factor when you are doing testing.
 
  When you are tuning, you apply options one at a time. 
  tw_reuse/tw_recycle were not used together, and I had a 10 sec fin_timeout 
  which made no difference.
 
  Jenny
 
 
  nb: i still dont know how to do indenting/quoting with this hotmail... 
  after 10 years.
 
 
  Couple of things to note.
  Firstly that this was an ab (apache bench) reported figure. It
  calculates the software limitation based on speed of transactions done.
  Not necessarily accounting for things like TIME_WAIT. Particularly if it
  was extrapolated from say, 50K requests, which would not hit that OS 
  limit.
 
  Ab accounts for 200-OK responses, and TIME_WAITs cause squid to issue 500s. 
  Of course, if you send in 50K it would not be subject to this, but I 
  usually send a couple of 10+ million-request runs to simulate load at least for a while.
 
 
  He also mentioned using a local IP address. If that was on the lo
  interface, it would not be subject to things like TIME_WAIT or RTT lag.
 
  When I was running my benches on loopback, I had tons of TIME_WAITS for 
  127.0.0.1 and squid would bail out with: commBind: Cannot bind socket...
 
  Of course, I might be doing things wrong.
 
  I am interested in what to optimize on RHEL6 OS level to achieve higher 
  requests per second.
 
  Jenny
 
  I'll post my configs when I get back to the office, but one thing is that
  if you send requests faster than they can be serviced the pending requests
  build up until you start getting timeouts. so I have to tinker with the
  number of requests that can be sent in parallel to keep the request rate
  below this point.
 
  note that when I removed the long list of ACLs I was able to get this 13K
  requests/sec rate going from machine A to squid on machine B to apache on
  machine C so it's not a localhost thing.
 
  getting up to the 13K rate on apache does require doing some tuning and
  tweaking of apache, stock configs that include dozens of dynamically
  loaded modules just can't achieve these speeds. These are also fairly
  beefy boxes, dual quad core opterons with 64G ram and 1G ethernet
  (multiple cards, but I haven't tried trunking them yet)
 
  David Lang
 
 
  Ok, I am assuming that persistent-connections are on. This doesn't simulate 
  any real life scenario.
 
  I would like to know if anyone can do more than 500 reqs/sec with 
  persistent connections off.

 I'm not using persistent connections. I do this same sort of testing to
 validate various proxies that don't support persistent connections.

 I'm remembering the theoretical max of the TCP stack (from one source IP
 to one destination IP) as being ~16K requests/sec, but I don't have
 references to point to at the moment.

 David Lang
 
 
With tcp_fin_timeout set at the theoretical minimum of 12 secs, we can do 5K req/s 
with 64K ports.
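(Back-of-the-envelope check of that figure: with roughly 64K ephemeral ports, each spent 
port held for 12 seconds, one source IP can open about 65535 / 12 ≈ 5460 new connections 
per second to a single destination IP:port, which matches the ~5K req/s above.)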
 
Setting tcp_fin_timeout had no effect for me. Apparently there is conflicting / 
outdated information everywhere and I could not lower TIME_WAIT from its 
default of 60 secs which is hardcoded into include/net/tcp.h. But I doubt this 
would have any effect when you are constantly loading the machine.
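 
For readers trying to reproduce this, the knobs discussed in this thread are all plain 
RHEL6 sysctls; a minimal sketch (the values are illustrative, and as noted above 
tcp_fin_timeout does not shorten the hardcoded 60-second TIME_WAIT):
 
# widen the ephemeral port range and allow TIME_WAIT sockets to be reused for new outbound connections
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1
# this shortens FIN-WAIT-2, not TIME_WAIT; shown only because it comes up repeatedly in this thread
sysctl -w net.ipv4.tcp_fin_timeout=12
# tcp_tw_recycle is deliberately left out; see the caution earlier in the thread about combining reuse and recycle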
 
Making localhost to localhost connections didn't help either.
 
I am not a network guru, so of course I am probably doing things wrong. But no 
matter how wrong you do stuff

RE: [squid-users] kid1| assertion failed: helper.cc:697: hlp->childs.n_running > 0

2011-06-12 Thread Jenny Lee

  
  Date: Sun, 12 Jun 2011 14:26:09 +1200
  From: squ...@treenet.co.nz
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] kid1| assertion failed: helper.cc:697: 
  hlp->childs.n_running > 0
 
  On 12/06/11 14:16, Jenny Lee wrote:
 
  Dear Squid Users,
 
  I get this occasionally with NCSA auth followed by a restart.
 
  What does it mean?
 
  Jenny
 
  RHEL6 x64
  Squid 3.2.0.7
 
 
  A helper process died or shutdown. But Squid internal state indicates
  there were none of that type of helper running.
 
  Thanks Amos,
 
  Is there a limit to the amount of requests helpers can service?
 
 The bundled helpers don't use the concurrency protocol yet, so they can 
 handle 2 simultaneous requests each: one being worked on, and one queued 
 waiting.
 
 
  I have:
 
  auth_param basic children 20 startup=20 idle=2
 
  When this happens, helpers are running and available.
 
 Yes, 20 maximum, a minimum of 20 loaded on startup. These can handle 40 
 simultaneous client requests.
 
 The error is about a helper closing/dying/shutting down when there are none 
 running.
 
 For example: 20 helpers started, and 21 socket-closed notices received by 
 Squid.
 
number active: 50 of 50 (0 shutting down)
requests sent: 424394
replies received: 424394
queue length: 0
avg service time: 0 msec
  #   FD   PID     # Requests  Flags  Time   Offset  Request
  1   210  16794         6187         0.000  0       (none)
  2   240  16795          820         0.000  0       (none)
  3   268  16796           14         0.001  0       (none)
  4   278  16797            0         0.000  0       (none)
  5   284  16798            0         0.000  0       (none)
 
 
5-10-20-50 children makes no difference. I have not seen more than 3 helpers in use, yet 
I always get the assertion-failed restart a couple of times a day.
 
What else could be causing this issue?
 
Jenny
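 
For reference, per-helper statistics like the table above come from the cache manager; 
a minimal sketch of pulling them on the proxy box (assuming squidclient is installed and 
the manager is reachable on the default port; the basic auth helper report is typically 
the basicauthenticator page):
 
# basic auth helper statistics (the report quoted above)
squidclient mgr:basicauthenticator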
  

[squid-users] kid1| assertion failed: helper.cc:697: hlp->childs.n_running > 0

2011-06-11 Thread Jenny Lee

Dear Squid Users,
 
I get this occasionally with NCSA auth followed by a restart.
 
What does it mean?
 
Jenny
 
RHEL6 x64
Squid 3.2.0.7 

RE: [squid-users] kid1| assertion failed: helper.cc:697: hlp->childs.n_running > 0

2011-06-11 Thread Jenny Lee


 Date: Sun, 12 Jun 2011 14:26:09 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] kid1| assertion failed: helper.cc:697: 
  hlp->childs.n_running > 0

 On 12/06/11 14:16, Jenny Lee wrote:
 
  Dear Squid Users,
 
   I get this occasionally with NCSA auth followed by a restart.
 
  What does it mean?
 
  Jenny
 
  RHEL6 x64
  Squid 3.2.0.7


 A helper process died or shutdown. But Squid internal state indicates
 there were none of that type of helper running.
 
Thanks Amos,
 
Is there a limit to the amount of requests helpers can service?
 
I have: 
 
auth_param basic children 20 startup=20 idle=2
 
When this happens, helpers are running and available.
 
Jenny 

Re: [squid-users] Re: squid 3.2.0.5 even slower than squid 3.1

2011-06-11 Thread Jenny Lee

Hello David,
 
We read your benchmarks with interest. Thank you for the work.
 
I have mentioned the --disable-ipv6 issue before, along with its solution. Attaching it 
for your perusal.
 
Jenny
 
 

one thing that I've found is that even with --disable-ipv6 squid will 
still use IPv6 on a system that has it configured (next I'll try and see 
if that's what's going wrong on the systems that don't have it configured, 
but those systems don't have strace on them, so I'll have to build a 
throw-away system instead of using one of my standard build test systems)
David Lang


 To: squid-users@squid-cache.org
 Date: Thu, 5 May 2011 07:58:40 +
 Subject: [squid-users] Impressions about 3.2.0.7

snip
 4. --disable-ipv6 does not work. We had to modify configure to include 
 #define USE_IPV6 0 to remove ipv6.  
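 
For anyone needing the same workaround, one way to apply it is sketched below; it assumes 
the generated include/autoconf.h is where USE_IPV6 ends up after ./configure, and the edit 
has to be repeated after every configure run:
 
./configure --disable-ipv6 [other options]
sed -i 's/#define USE_IPV6 1/#define USE_IPV6 0/' include/autoconf.h
make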
  

[squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Jenny Lee

I can't get the workers to work. They are started fine. However I get:
 
kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or directory
 
Is there a compile option to enable/disable workers that I am missing?
 
The ports below work fine on mono squid.
 
---
workers 2
 
if ${process_number} = 1
http_port 1.1.1.1:3128
else
http_port 1.1.1.1:3129
endif
---
 
Thanks
 
Jenny
 
 
RHEL6 x64
Squid 3.2.0.7
 
 
Compile:
--disable-carp \
--disable-wccp \
--disable-wccpv2 \
--disable-snmp \
--disable-htcp \
--disable-ident-lookups \
--disable-unlinkd \
--disable-translation \
--disable-auto-locale \
--disable-loadable-modules \
--disable-esi \
--disable-disk-io \
--disable-eui \
--disable-storeio \
--disable-auth-ntlm \
--disable-auth-negotiate \
--disable-auth-digest \
--disable-cache-digests \
--disable-ntlm-auth-helpers \
--disable-negotiate-auth-helpers \
--disable-digest-auth-helpers \
--disable-ipfw-transparent \
--disable-ipf-transparent \
--disable-pf-transparent \
--disable-linux-tproxy \
--disable-linux-netfilter \
--without-netfilter-conntrack \
--disable-url-rewrite-helpers \
--disable-win32-service \
--disable-zph-qos \
--disable-icap-client \
--disable-ecap \
--disable-useragent-log \
--disable-referer-log \
--disable-eui \
--disable-poll \
--disable-select \
--disable-kqueue \
--disable-icmp \
--disable-gnuregex \
--disable-cpu-profiling \
--disable-kill-parent-hack \
--disable-follow-x-forwarded-for \
--disable-forw-via-db \
--without-valgrind-debug \
--without-ipv6-split-stack \
--without-po2html 

FW: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Jenny Lee

I also cannot shut down squid when workers are enabled.
 
squid -k shutdown gives No Running Copy
 
I have to run a killall -9 squid
 
Also what happens when I have 2 cores but start 7 workers?
 
Jenny


 From: bodycar...@live.com
 To: squid-users@squid-cache.org
 Date: Sun, 12 Jun 2011 04:17:41 +
 Subject: [squid-users] WORKERS: Any compile option to enable? commBind: 
 Cannot bind socket FD 13 to [::]: (2) No such file or directory


 I can't get the workers to work. They are started fine. However I get:

 kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
 directory
 kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
 directory
 kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or directory

 Is there a compile option to enable/disable workers that I am missing?

 The ports below work fine on mono squid.

 ---
 workers 2

 if ${process_number} = 1
 http_port 1.1.1.1:3128
 else
 http_port 1.1.1.1:3129
 endif
 ---

 Thanks

 Jenny


 RHEL6 x64
 Squid 3.2.0.7


 Compile:
 --disable-carp \
 --disable-wccp \
 --disable-wccpv2 \
 --disable-snmp \
 --disable-htcp \
 --disable-ident-lookups \
 --disable-unlinkd \
 --disable-translation \
 --disable-auto-locale \
 --disable-loadable-modules \
 --disable-esi \
 --disable-disk-io \
 --disable-eui \
 --disable-storeio \
 --disable-auth-ntlm \
 --disable-auth-negotiate \
 --disable-auth-digest \
 --disable-cache-digests \
 --disable-ntlm-auth-helpers \
 --disable-negotiate-auth-helpers \
 --disable-digest-auth-helpers \
 --disable-ipfw-transparent \
 --disable-ipf-transparent \
 --disable-pf-transparent \
 --disable-linux-tproxy \
 --disable-linux-netfilter \
 --without-netfilter-conntrack \
 --disable-url-rewrite-helpers \
 --disable-win32-service \
 --disable-zph-qos \
 --disable-icap-client \
 --disable-ecap \
 --disable-useragent-log \
 --disable-referer-log \
 --disable-eui \
 --disable-poll \
 --disable-select \
 --disable-kqueue \
 --disable-icmp \
 --disable-gnuregex \
 --disable-cpu-profiling \
 --disable-kill-parent-hack \
 --disable-follow-x-forwarded-for \
 --disable-forw-via-db \
 --without-valgrind-debug \
 --without-ipv6-split-stack \
 --without-po2html   

[squid-users] squid 3.2.0.5 smp scaling issues

2011-06-11 Thread Jenny Lee

I'd like to know how you are able to do 13000 requests/sec.
 
tcp_fin_timeout is 60 seconds default on all *NIXes and available ephemeral 
port range is 64K.
 
I can't do more than 1K requests/sec even with tcp_tw_reuse/tcp_tw_recycle with 
ab. I get commBind errors due to connections in TIME_WAIT.
 
Any tuning options suggested for RHEL6 x64?
 
Jenny
 
 
 
 
---
test setup
box A running apache and ab
test against local IP address 13000 requests/sec
box B running squid, 8 2.3 GHz Opteron cores with 16G ram
non acl/cache-peer related lines in the config are (including typos from 
me manually entering this)
http_port 8000
icp_port 0
visible_hostname gromit1
cache_effective_user proxy
cache_effective_group proxy
append_domain .invalid.server.name
pid_filename /var/run/squid.pid
cache_dir null /tmp
client_db off
cache_access_log syslog squid
cache_log /var/log/squid/cache.log
cache_store_log none
coredump_dir none
no_cache deny all

results when requesting short html page 
squid 3.0.STABLE12 4200 requests/sec
squid 3.1.11 2100 requests/sec
squid 3.2.0.5 1 worker 1400 requests/sec
squid 3.2.0.5 2 workers 2100 requests/sec
squid 3.2.0.5 3 workers 2500 requests/sec
squid 3.2.0.5 4 workers 2900 requests/sec
squid 3.2.0.5 5 workers 2900 requests/sec
squid 3.2.0.5 6 workers 2500 requests/sec
squid 3.2.0.5 7 workers 2000 requests/sec
squid 3.2.0.5 8 workers 1900 requests/sec
in all these tests the squid process was using 100% of the cpu
I tried it pulling a large file (100K instead of 50 bytes) on the thought 
that this may be bottlenecking on accepting the connections, and that with 
something that took more time to service the connections it could do 
better; however, what I found is that with 8 workers all 8 were using 50% 
of the CPU at 1000 requests/sec
local machine would do 7000 requests/sec to itself
1 worker 500 requests/sec
2 workers 957 requests/sec
from there it remained about 1000 requests/sec with the cpu 
utilization slowly dropping off (but not dropping as fast as it should 
with the number of cores available)
so it looks like there is some significant bottleneck in version 3.2 that 
makes the SMP support fairly ineffective.

in reading the wiki page at wiki.squid-cache.org/Features/SmpScale I see 
you worrying about fairness between workers. If you have put in code to 
try and ensure fairness, you may want to remove it and see what happens to 
performance. What you are describing on that page in terms of fairness is 
what I would expect from a 'first-come-first-served' approach to multiple 
processes grabbing new connections. The worker that last ran is hot in the 
cache and so has an 'unfair' advantage in noticing and processing the new 
request, but as that worker gets busier, it will be spending more time 
servicing the request and the other processes will get more of a chance to 
grab the new connection, so it will appear unfair under light load, but 
become more fair under heavy load.
David Lang
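 
For anyone wanting to reproduce this kind of test, an ab invocation through a forward 
proxy looks roughly like the sketch below (hostnames, counts and the target URL are 
placeholders; add -k to exercise persistent connections, omit it to test without them):
 
ab -n 1000000 -c 100 -X gromit1:8000 http://invalid.server.name/index.html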

FW: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-06-08 Thread Jenny Lee

I just realized that Cookie headers are also not obeyed when going through 
peers.
 
Everything works going direct, but nothing works if you are using any peers.
 
I surely cannot be the only person out of all squid users that is bitten by 
this anomaly.
 
Jenny
 
 


 From: bodycar...@live.com
 To: squ...@treenet.co.nz; squid-users@squid-cache.org
 Date: Thu, 28 Apr 2011 19:25:27 +
 Subject: RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly 
 with aclnames?
 
 
   It seems to me that ACL SRC is NEVER checked when going to a Peer.
  
   WHAT I WANT TO DO:
   acl OFFICE src 1.1.1.1
   request_header_access User-Agent allow OFFICE
   request_header_access User-Agent deny all
   request_header_replace User-Agent BOGUS AGENT
  
  
   [OFFICE UA should not be modified whether going direct or through a peer]
  
   Thanks,
  
   Jenny
  
   PS: Running 3.2.0.7 in production and it works well and reliably. The UA 
   issue above is present on both 3.2.0.1 and 3.2.0.7. 
  
  
  Okay, this is going to need a cache.log trace for debug_options 28,9 
  to see what is being tested where.
 
 
  No difference whatever is done. PEER1, !PEER1, !PEER2... No peer... Separate 
  lines...
 
 SRC IP is never available, so it always fails. PEER is available though, I 
 can make it work with using just PEER1. Going direct works also as expected.
 
 Thanks.
 
 Jenny
 
 
 kid1| ACLChecklist::preCheck: 0x7504abc0 checking 'request_header_access 
 User-Agent allow OFFICE_IP !PEER1'
 kid1| ACLList::matches: checking OFFICE_IP
 kid1| ACL::checklistMatches: checking 'OFFICE_IP'
 kid1| aclIpAddrNetworkCompare: compare: 
 [::]/[:::::::ff00] ([::]) vs 
 2.2.2.0-[::]/[:::::::ff00]
 kid1| aclIpMatchIp: '[::]' NOT found
 kid1| ACL::ChecklistMatches: result for 'OFFICE_IP' is 0
 kid1| ACLList::matches: result is false
 kid1| aclmatchAclList: 0x7504abc0 returning false (AND list entry failed 
 to match)  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-06-08 Thread Jenny Lee

Hello Amos,


 To: squid-users@squid-cache.org
 Date: Thu, 9 Jun 2011 13:02:49 +1200
 From: squ...@treenet.co.nz
 Subject: Re: FW: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work 
 properly with aclnames?

 On Wed, 8 Jun 2011 17:01:39 +, Jenny Lee wrote:
  I just realized that Cookie headers are also not obeyed when going
  through peers.
 
  Everything works going direct, but nothing works if you are using any
  peers.
 
  I surely cannot be the only person out of all squid users that is
  bitten by this anomaly.
 
  Jenny
 

 Possibly not. Although most of the anonymous crowd just put all ACL
 by instinct and leave it at that.
 
Yes but this is not a server for 1 person. I block cookies, but I want to allow 
my OFFICE to pass cookies through.
 
It seems like nothing except 'all' works on these HEADER_ACCESS lines. Anything 
else has an empty value and fails.
 
 
2011/06/08 22:50:26.271 kid1| ACLList::matches: checking all
2011/06/08 22:50:26.271 kid1| ACL::checklistMatches: checking 'all'
2011/06/08 22:50:26.271 kid1| aclIpAddrNetworkCompare: compare: [::]/[::] 
([::])  vs [::]-[::]/[::]
2011/06/08 22:50:26.271 kid1| aclIpMatchIp: '[::]' found
2011/06/08 22:50:26.271 kid1| ACL::ChecklistMatches: result for 'all' is 1
2011/06/08 22:50:26.271 kid1| ACLList::matches: result is true
2011/06/08 22:50:26.271 kid1| aclmatchAclList: 0x7fff4e4885d0 returning true 
(AND list satisfied)
 

 
 Did you check 3.2.0.8 for the myportname problem?
 
I browsed through Changelog for 3.2.0.8 but did not see any of my bugs 
addressed. I did not have myportname issue.


 On the Cookie: header. Is the content coming from the peer cached?
 Cookies are erased on cached HITs.
 

httpSendRequest is not sending cookies to peers if allow all is not 
specified. Caching is disabled, proxy-only peers.
 
Thanks.
 
Jenny 

[squid-users] CAN TCP_OUTGOING_ADDRESS BIND TO ETH1? How to make D-S-L work on a machine with static routings?

2011-05-26 Thread Jenny Lee

Hello Squid Users,
 
I have a machine that has static connections (running apache, vsftpd, etc).
 
Upstream bandwidth is costly, so I would like to use our D-S-L connection to 
save up on some traffic.
 
On the D-S-L line, the IP changes at each authentication (PPPoE authentication using a 
secondary IP route table). I am using a secondary route table as follows:

echo '101 d-s-l' >> /etc/iproute2/rt_tables
ip rule add from 192.168.1.64 table d-s-l
ip route add default via 192.168.1.254 table d-s-l
ip rule add from 192.168.1.0/24 table d-s-l

squid: tcp_outgoing_address 192.168.1.64
 
[192.168.1.64 being ppp interface IP, 192.168.1.254 being DSLAM IP from telco]
 
This works.
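 
If you are replicating this setup, the policy routing can be checked with standard 
iproute2 commands (a sketch):
 
ip rule show
ip route show table d-s-l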
 
However, on PPPoE, end points are not known beforehand so I cannot attach squid 
outgoing. 
 
Machine has eth0, eth1, and ppp (over eth1). eth0 is static server IP where 
main routing is done. D-S-L is on eth1 via ip route table above.
 
Is it possible to bind squid to an interface?
 
I think this sounded absurd :) The other option is probably tcp_outgoing_tos/mark?
 
Have a good day!
 
Jenny 

RE: [squid-users] CAN TCP_OUTGOING_ADDRESS BIND TO ETH1? How to make D-S-L work on a machine with static routings?

2011-05-26 Thread Jenny Lee

Hello Amos,
 
  Is it possible to bind squid to an interface?

 Squid uses the bind() API to the kernel. So no.
 
Thanks.
 
  I think this sounded absurd :) Other option probably tcp_outgoing_tos/mark?

 Have you tried to get it working without Squid needing a particular
 sending IP? When Squid leaves the IP selection up to the OS it should
 be given the primary box IP as of the time of the connection setup. Most
 software uses bind()/connect() just like Squid, so it will also be having
 problems on your box if Squid's default won't work.
 
 
There is default routing on the server and everything else including squid 
works fine.
 
I need to send some problematic users with high downloads to the broadband line via 
the alternate 'ip route' table created (see the sketch below). 
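 
A minimal sketch of steering only those users out via the D-S-L address with an ACL on 
tcp_outgoing_address (the source IPs are placeholders; this still leaves the problem that 
192.168.1.64 changes on every PPPoE reconnect):
 
acl heavy_downloaders src 192.168.1.200 192.168.1.201
tcp_outgoing_address 192.168.1.64 heavy_downloaders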
 

 As a kludge workaround you can add an OS trigger on ppp-up/down to
 reconfigure Squid.
 
I do it like this now but I face some occasional issues. I wanted to know if it 
could be done within squid.
 
Jenny
  

[squid-users] cache_peer name is not available for logging on CONNECT

2011-05-06 Thread Jenny Lee

Hello Squid Users,
 
cache_peer 2.2.2.2 parent 3128 0 name=PARENT_X
 
On http connections, access log shows PARENT_X entry.
 
On https connections, access log shows 2.2.2.2 entry.
 
This messes up log processing.
 
Is there any reason for this?
 
Thanks.
 
Jenny

3.2.0.7   

[squid-users] Impressions about 3.2.0.7

2011-05-05 Thread Jenny Lee


I would like to thank squid team for the good work on 3.2.0.7.
 
I went from 3.2.0.1 to 3.2.0.7 straight to development and faced no issues. 
It has been running reliably for 2 weeks.
 
 
1. Irritating 0 HTTP Response Code on CONNECT to peers fixed.
 
2. The equally irritating CD_SIBLING_HIT and all CD_ codes are replaced with 
ANY_OLD_PARENT (we have cache digests disabled at compile time)
 
3. %oa ported from 2.7 and works fine (%la).
 
4. --disable-ipv6 does not work. We had to modify configure to include 
#define USE_IPV6 0 to remove ipv6.
 
5. -fPIE does not work as always (standard on RHEL).
 
Thank you again.
 
Jenny
  

RE: [squid-users] Impressions about 3.2.0.7

2011-05-05 Thread Jenny Lee

  4. --disable-ipv6 does not work. We had to modify configure to include 
  #define USE_IPV6 0 to remove ipv6.
 
  5. -fPIE does not work as always (standard on RHEL).


 Is that all a list of fixes? Or are 4 and 5 still problems?
 
 
Hello Amos,
 
#4 and #5 are still problems.
 
#5 is bug 2996. -fPIE does not work on RHEL5 or RHEL6 with 3.2's (I don't know 
about earlier versions).
 
Jenny
 
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-05-04 Thread Jenny Lee

 No difference whatever is done. PEER1, !PEER1, !PEER2... No peer... Separate 
 lines...

 SRC IP is never available, so it always fails. PEER is available though, I 
 can make it work with using just PEER1. Going direct works also as expected.

 Thanks.

 Jenny


 kid1| ACLChecklist::preCheck: 0x7504abc0 checking 'request_header_access 
 User-Agent allow OFFICE_IP !PEER1'
 kid1| ACLList::matches: checking OFFICE_IP
 kid1| ACL::checklistMatches: checking 'OFFICE_IP'
 kid1| aclIpAddrNetworkCompare: compare: 
 [::]/[:::::::ff00] ([::]) vs 
 2.2.2.0-[::]/[:::::::ff00]
 kid1| aclIpMatchIp: '[::]' NOT found
 kid1| ACL::ChecklistMatches: result for 'OFFICE_IP' is 0
 kid1| ACLList::matches: result is false
 kid1| aclmatchAclList: 0x7504abc0 returning false (AND list entry failed 
 to match)
 
Is there an update on this? Shall I file a bug?
 
I have been going on about this matter since November 2010.
 
Thanks
 
Jenny 

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-05-04 Thread Jenny Lee

  kid1| ACLChecklist::preCheck: 0x7504abc0 checking
  'request_header_access User-Agent allow OFFICE_IP !PEER1'
  kid1| ACLList::matches: checking OFFICE_IP
  kid1| ACL::checklistMatches: checking 'OFFICE_IP'
  kid1| aclIpAddrNetworkCompare: compare:
  [::]/[:::::::ff00] ([::]) vs
  2.2.2.0-[::]/[:::::::ff00]
  kid1| aclIpMatchIp: '[::]' NOT found

 Aha! so it is the source IP not being known at all.
 request_header_access uses the IP from the HTTP request details. We need to
 find out if that is NULL or just lacking the client IP and why it got
 that way.

I don't understand this. Isn't the source IP provided by TCP/IP stack?
 
Squid is not doing anything extra to find it. It is already being provided by 
the connection. If it is not available when going through a peer, then it must 
be squid's problem.
 
 
 Yes please if there is not already one on this. If there is please
 'bump' it by mentioning the latest release you have seen it in.
 
I will file a bug. But unfortunately each bug I post takes 6 months to fix. 
This is on 3.2.0.7 and all 3.2.0's we have used.
 
Thanks.
 
Jenny 

RE: [squid-users] Access log not using logformat config line.

2011-05-04 Thread Jenny Lee




 Date: Wed, 4 May 2011 19:36:56 -0400
 From: far...@itouchpoint.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Access log not using logformat config line.

 I don't have any specific access_log config line, but that's not the
 issue. The access log file is being created but the entries aren't in
 the format I've specified.
 
I am sure putting a single line there as Amos suggested and seeing it works is 
easier than posting here and complaining when the solution was already provided.
 
It works same way for the rest of us.
 
Jenny
 
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-28 Thread Jenny Lee

  It seems to me that ACL SRC is NEVER checked when going to a Peer.
 
  WHAT I WANT TO DO:
  acl OFFICE src 1.1.1.1
  request_header_access User-Agent allow OFFICE
  request_header_access User-Agent deny all
   request_header_replace User-Agent BOGUS AGENT
 
 
   [OFFICE UA should not be modified whether going direct or through a peer]
 
  Thanks,
 
  Jenny
 
   PS: Running 3.2.0.7 in production and it works well and reliably. The UA issue 
   above is present on both 3.2.0.1 and 3.2.0.7. 
 
 
 Okay, this is going to need a cache.log trace for debug_options 28,9 
 to see what is being tested where.
 
 
No difference whatever is done. PEER1, !PEER1, !PEER2... No peer... Separate 
lines...
 
SRC IP is never available, so it always fails. PEER is available though, I can 
make it work with using just PEER1. Going direct works also as expected.
 
Thanks.
 
Jenny
 
 
kid1| ACLChecklist::preCheck: 0x7504abc0 checking 'request_header_access 
User-Agent allow OFFICE_IP !PEER1'
kid1| ACLList::matches: checking OFFICE_IP
kid1| ACL::checklistMatches: checking 'OFFICE_IP'
kid1| aclIpAddrNetworkCompare: compare: 
[::]/[:::::::ff00] ([::])  vs 
2.2.2.0-[::]/[:::::::ff00]
kid1| aclIpMatchIp: '[::]' NOT found
kid1| ACL::ChecklistMatches: result for 'OFFICE_IP' is 0
kid1| ACLList::matches: result is false
kid1| aclmatchAclList: 0x7504abc0 returning false (AND list entry failed to 
match)

RE: [squid-users] Persistent Connections to Parent Proxy

2011-04-28 Thread Jenny Lee




 Date: Fri, 29 Apr 2011 01:12:55 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Persistent Connections to Parent Proxy

 On 28/04/11 20:19, Mathias Fischer wrote:
  Hi,
 
  We use squid together with a content scanner connected as parent proxy
  (cache_peer parent) with none of them caching any content. When
  upgrading from squid 2.7 to 3.1, we observed an increased number of TCP
  connections between squid and its parent. I analysed the traffic between
  squid and the parent proxy (for both squid versions), and found (among
  some differences in HTTP version and (Proxy-)Connection header) that the

 Proxy-Connection: has never been a registered header suitable for
 transmission. Squid-3 was mistakenly made to send it for a while instead
 of just accepting it. That bug has been fixed in recent releases.
 Only Connection: should be sent over the wire.

  usage of persistent connections has changed. In squid 2.7, a persistent
  connection to the parent proxy is shared for multiple origin servers,
  while in squid 3.1, there is at least one connection per origin server.
  Obviously, this results in a much higher total number of connections.

 Hmm, I thought we corrected that the same way in both 3.1 and 2.7.
 3.0 and 2.6 certainly had that behaviour.

 Current 2.7 and 3.1 should have (peer_IP, domain_name) as the pconn key.
 There can be multiple duplicates of course up to as many as needed to
 handle peak load (moderated by how fast the peer closes them).

 
  Is there a possibility to influence this behaviour? To me, it looks like
  this is related to the introduced Connection Pinning [1] feature.

 Pinning links one server FD per client connection, kind of an
 independent and special type of persistence. It should not be showing
 this behaviour, though yes it also will cause a multitude of server
 connections.

 
  As a workaround, I see the option to reduce the number of open
  persistent connections through pconn_timeout, but this will have an
  impact on other connections as well which could reduce performance.

 We have a re-structuring of the conn and pconn handling coming to 3.2
 shortly (a few weeks) which removes the domain name from the pconn key.
 
We have the same problem in 3.2.0.1 and 3.2.0.7
 
Is this planned for 3.2.0.8?
 
Thanks!
 
Jenny
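 
For completeness, the workaround Mathias mentions maps onto these squid.conf directives 
(a sketch; the 30-second value is illustrative, and as he notes it affects all 
server-side persistent connections, not just the ones to the parent):
 
pconn_timeout 30 seconds
server_persistent_connections on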
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-25 Thread Jenny Lee

 I'm a little confused by this scenario and your statement "It would be 
 nice if the crawler identified itself."
 Is it spoofing an agent name identical to that on your OFFICE machines? 
 Even the absence of a U-A header is identification in a way.
 
That was just an example. In its simplest form:
DO NOT MODIFY UA OF SRC ACL OFFICE Machines
Change UA of everything else to a fixed value.
 
 

 AFAIK it *should* only require that config you have. If we can figure 
 out what's going wrong, the bug can be fixed.
 
I have submitted close to 20 bugs over the years (not all are from this email) 
and all of them are fixed over time. I am positive this issue does not arise 
because of my config.
 
HALF-BAKED:
acl OFFICE src 1.1.1.1
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT

[DIRECT works as expected for OFFICE -- no modifications. However, UA for 
OFFICE is replaced as soon as the connection is forwarded to a peer]
 
 
HALF-BAKED:
acl OFFICE src 1.1.1.1
cache_peer 2.2.2.2 parent 2  0 proxy-only no-query name=PEER2
acl PEER2 peername PEER2
request_header_access User-Agent allow PEER2 OFFICE
request_header_access User-Agent deny PEER2 !OFFICE 
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT
[all and every combination of ALLOW/DENY/PEER2/OFFICE... does not work]
 
 
WORKS WHEN GOING THROUGH A PEER:
request_header_access User-Agent allow PEER2
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT
 
 
It seems to me that ACL SRC is NEVER checked when going to a Peer.
 
WHAT I WANT TO DO:
acl OFFICE src 1.1.1.1
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT
 

[OFFICE UA should not be modified whether going direct or through a peer]
 
Thanks,
 
Jenny
 
PS: Running 3.2.0.7 in production and it works well and reliably. The UA issue 
above is present on both 3.2.0.1 and 3.2.0.7.   
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-20 Thread Jenny Lee

 Reality after looking at the code:
 Mangling is done after peer selection right at the last milli-second 
 before sending the headers down the wire. It is done on all HTTP 
 requests including CONNECT tunnels when they are relayed.
 
 Peering info *is* available. But src ACL does not check for that 
 property.
 
 If you have 3.1 I think you want to add a peername ACL like so:
 
 acl peerX peername X
 request_header_access User-Agent allow OFFICE !peerX
 ...

I have 3.2.0.1 and unfortunately this does not work either. I will check on 
3.2.0.7 (would that make a difference?).

  Furthermore, it would be nice to be able to select the UA like:

request_header_replace User-Agent OFFICE Mozilla
request_header_replace User-Agent HOME IE 

Many sites require the UA to come from known browsers. We tried randomizing the UA 
but many things broke on destination sites.

Thanks

Jenny 
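 
Pulling Amos's peername suggestion quoted above together with the OFFICE ACL, the full 
workaround from this thread reads roughly as below (the IPs, ports and peer name are the 
thread's own placeholders; as reported above it still did not behave on the 3.2.0.x 
releases tested at the time):
 
acl OFFICE src 1.1.1.1
cache_peer 2.2.2.2 parent 3128 0 proxy-only no-query name=X
acl peerX peername X
request_header_access User-Agent allow OFFICE !peerX
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT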

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-20 Thread Jenny Lee

  I have 3.2.0.1 and unfortunately this does not work either. I will check on 
  3.2.0.7 (would that make a difference?).
 
 May do. I don't recall changing anything there directly but the passing 
 around of request details has been fixed in a few places earlier which 
 may affect it.
 
 Also, do you have this part which I forgot to add?
 cache_peer  name=X
 
Yes I do, Amos.

 
  Furthermore, it would be nice to able to select UA like:
 
  request_header_replace User-Agent OFFICE Mozilla
  request_header_replace User-Agent HOME IE
 
 Well...
 
 request_header_access User-Agent deny OFFICE Mozilla
 request_header_replace User-Agent HOME IE
 
 ... should also be working if a browser type ACL is used to check the 
 User-Agent field for Mozilla.
 
Actually, I am trying to fix a UA for source IPs.
 
For example:
If the connection is from OFFICE, set the UA to Mozilla. 
If the connection is from HOME, set the UA to Internet Explorer.

 
 P.S.: Nice for some maybe, but which of the 3.5 million or more browser 
 U-A strings do you suggest we hard-code into Squid for faking like this?
 
It should be left to the user the way it is now. 
 
Here what I am trying to do is to brand our connections. Suppose we have a 
crawler. It would be nice if the crawler identified itself as such. On the 
other hand, I do not want to modify the UA of our OFFICE users. They should be 
passed as is.
 
I thought this would be relatively easy to accomplish in squid; after all, it is 
very capable and comes with the whole shebang and the kitchen sink, but 
unfortunately I have had no success so far.
 
Jenny
  

RE: [squid-users] Squid 3.1.12 is available

2011-04-18 Thread Jenny Lee

  When you say earlier, what would be the upper end of the timeframe?
  (1 week, 1 month?)

 By early I mean earlier than 1st May which was the next scheduled
 monthly beta.
 Specifically as soon as I can migrate a half dozen bug fixes around,
 test for build failures and write the ChangeLog. 72 hours or so if the
 tests go well.

 Amos

 
Excellent. May 1st is good for us and we will plan accordingly. Anything earlier is a 
bonus :)
 
J 

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-18 Thread Jenny Lee

  What is the definition of OFFICE ?
  request_header_access are fast ACL which will not wait for unavailable
  details to be fetched.

 Ah! proxy_auth :)

 Jenny
 
 
acl OFFICE src 2.2.2.2
 
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
header_replace User-Agent BOGUS AGENT
 
 
This works as expected when going direct.
 
However, if there is a cache_peer, the UA is still replaced. The cache_peer's logs 
show the connection coming in with the replaced UA (the cache_peer does not modify the UA 
in its config).
 
I must be missing something.
 
Jenny
 

  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-18 Thread Jenny Lee



 To: squid-users@squid-cache.org
 Date: Tue, 19 Apr 2011 14:36:31 +1200
 From: squ...@treenet.co.nz
 Subject: RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly 
 with aclnames?
 
 On Mon, 18 Apr 2011 19:15:53 +, Jenny Lee wrote:
   What is the definition of OFFICE ?
   request_header_access are fast ACL which will not wait for 
  unavailable
   details to be fetched.
 
  Ah! proxy_auth :)
 
  Jenny
 
 
  acl OFFICE src 2.2.2.2
 
  request_header_access User-Agent allow OFFICE
  request_header_access User-Agent deny all
  header_replace User-Agent BOGUS AGENT
 
 
  This works as expected when going direct.
 
  However, if there is a cache_peer, still the UA is replaced.
  Cache_peer logs show connection is coming with the replaced UA
  (cache_peer does not modify UA in its config).
 
  I must be missing something.
 
 Header mangling is done before forwarding. Regardless of where it is 
 forwarded to. So there is no peer information available at that time.
 
 Also, src matches the website IP address(es). The public website IPs 
 will not change because you have a cache_peer configured.
 
 Amos
 
Hello Amos,
 
You handle 500 users here alone. Must be a tiring day. I am matching my IP with 
src.
 
Regardless, it doesn't work as expected when there is a peer forwarding.
 
Are there any debug options I should use and watch out for?

Jenny
  

RE: [squid-users] Squid 3.1.12 is available

2011-04-17 Thread Jenny Lee



 Sorry for not answering. There was just nothing I could be sure 
 about until now...
 
 3.2.0.7 will be out early (and very soon) with fixes for the critical 
 and blocker bugs currently known to exist in 3.2.0.6 tarballs. The fixes 
 are now in 3.HEAD awaiting some maintenance and any testing you care to 
 throw at them.

Thanks Amos. 

We will be waiting 3.2.0.7. 
 
The release cycle seems to be 1.5 months between 3.2.0 releases. 
 
When you say earlier, what would be the upper end of the timeframe? (1 week, 1 
month?)
 
I know you have your plans and schedules, but I would appreciate a 
ballpark estimate so we do not waste time on 3.2.0.6 or 3.HEAD and can plan on 
3.2.0.7. We have held off on 3.2.0.6, which we have wanted to upgrade to for so long now.
 
I can respectfully understand if you do not want to answer this stupid question 
:)
 
Jenny 

RE: [squid-users] Squid 3.1.12 is available

2011-04-06 Thread Jenny Lee

 On Wed, 6 Apr 2011 11:26:09 +0800, Sharl.Jimh.Tsin wrote:
  How about the dev branch? I found the tarball of the 6th version of
  3.2.0.x; any information?

 The bundles were made, however we have already found a few nasty
 problems.
 I'm giving it a few more days to see how much can be fixed.

 Amos

 
Shall we expect a 3.2.0.7 or shall we start using 3.2.0.6 or 3.2.HEAD?
 
3.2.0.6 fixes a few issues that had been plaguing us for 6 months. However, due to 
the issues encountered, we are afraid to put it into production use (I know what it 
is, but we use 3.2.0.2 in production now since the features we want are not 
available in the stable versions).
 
Jenny 

[squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-05 Thread Jenny Lee

Hello Squid Folks,
 
Here is an excerpt from squid.conf.documented:
 
#  TAG: request_header_access   
#   Usage: request_header_access header_name allow|deny [!]aclname ...
 
This seems to work only as:
 
request_header_access User-Agent deny all
 
Why can't I do:
 
request_header_access User-Agent deny all !OFFICE
 
or
 
request_header_access User-Agent allow OFFICE1
request_header_access User-Agent allow OFFICE2
request_header_access User-Agent deny all
 
It just does not let OFFICE through without modification (header_replace 
User-Agent).
 
This is driving me crazy. I cannot modify user-agents of our executives.
 
 
Thank you,
 
Jenny
 
 
  

RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-04-05 Thread Jenny Lee

Hello Amos,
 
 What is the definition of OFFICE ?
 request_header_access are fast ACL which will not wait for unavailable
 details to be fetched.

Ah! proxy_auth :)
 
Jenny 

[squid-users] oa in 3.2?

2011-03-28 Thread Jenny Lee

Hello Squid folks,
 
When are we going to see %oa in logformat in 3.2?
 
It has existed in 2.7 for a very long while, but it seems to have been forgotten for 
3.2.
 
I see it is commented out in Token.cc. Ditto in 3.HEAD.
 
Thanks
 
Jenny
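 
Once the token is available, a minimal custom format using it would look something like 
the sketch below (assuming %la is the 3.2 replacement for 2.7's %oa that Amos describes 
in the follow-up; the other codes are standard logformat tokens):
 
logformat outaddr %ts.%03tu %>a %la %Ss/%03>Hs %rm %ru
access_log /var/log/squid/access.log outaddr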
  

RE: [squid-users] oa in 3.2?

2011-03-28 Thread Jenny Lee


 On 29/03/11 02:45, Amos Jeffries wrote:
  On 29/03/11 01:31, Jenny Lee wrote:
 
  Hello Squid folks,
 
   When are we going to see %oa in logformat in 3.2?
 
  Thanks for the reminder. The next 3.2 should have it.
 
 
 I should also mention the 3.2 version will be %la to fit in with the 
 existing tags better.
 
Thanks Amos,
 
Any eta for 3.2.0.6?
 
It also fixes nasty http://bugs.squid-cache.org/show_bug.cgi?id=3007 for me. I 
lived with it for a year :)
 
Wait a sec, does it? I don't see:
 
I guess the following line is needed in tunnel.cc#tunnelProxyConnected() 
 *tunnelState->status_ptr = HTTP_OK;
 
in squid-3.2.0.5-20110328.
 
Jenny 
