[squid-users] Building 3.5.1 without libcom_err?

2015-02-23 Thread Mike Mitchell
Is there a way to build 3.5.1 without libcom_err?
On my old Redhat system (2.6.18-128.1.1.el5) I get compilation failures unless 
I remove all references to libcom_err.

Here's a snippet from the config log:

configure:24277: checking for krb5.h
configure:24277: result: yes
configure:24277: checking com_err.h usability
configure:24277: g++ -c -g -O2    conftest.cpp >&5
conftest.cpp:110:21: error: com_err.h: No such file or directory
configure:24277: $? = 1
configure: failed program was:
| /* confdefs.h */
...

configure:24330: checking for error_message in -lcom_err
configure:24355: g++ -o conftest -g -O2    -g conftest.cpp -lcom_err  -lrt -ldl 
-ldl    -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lkrb5 -lk5crypto -lcom_err  
>&5
/usr/bin/ld: skipping incompatible /usr/lib/libcom_err.so when searching for 
-lcom_err
/usr/bin/ld: skipping incompatible /usr/lib/libcom_err.a when searching for 
-lcom_err
/usr/bin/ld: cannot find -lcom_err
collect2: ld returned 1 exit status


Later when I try to build squid I get the same incompatible 
/usr/lib/libcom_err.so error message and the build stops.

If I hand-edit the Makefiles in the various directories and remove -lcom_err, 
the build succeeds and the executables run properly.
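
For reference, that hand edit can be scripted; something like this (untested
sketch, run from the top of the build tree) strips the flag from every
generated Makefile:

  find . -name Makefile -print0 | xargs -0 sed -i 's/-lcom_err//g'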

I run configure with --with-krb5-config=no --without-mit-krb5 
--without-heimdal-krb5 --without-gnutls

But it still tries linking in the krb libraries and the com_err library.

Any suggestions?

Mike Mitchell
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] request_body_max_size on transparent proxy

2015-02-23 Thread Mike Mitchell

I'm trying to POST large files (> 1 MB) through a squid 3.5.2 proxy set up to 
intercept connections.

The client is including an 'Expect: 100-continue' header, and sends all headers 
in a single network packet.
POSTs of content smaller than 1MB go through, but larger POSTs do not.
The client's TCP connection is being reset without squid sending any sort of 
error page.
Nothing is logged in squid -- not in the access log, not in the cache log.  
It's as if that request never happened.
The client just gets a closed connection.

I'm running with the default 'request_body_max_size', it is not specified in my 
configuration.
That should mean unlimited for the request body.

If I configure the client to explicitly use the same proxy on a different, 
non-transparent port, the large POSTs go through correctly.  It is as if 
request_body_max_size does not function on a port marked 'transparent'.
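
The two listening ports look roughly like this (sketch; the port numbers are
placeholders, and request_body_max_size is left at its default of 0, no limit):

  http_port 3128                # explicit proxy port: large POSTs succeed here
  http_port 3129 intercept      # intercepted port: POSTs over 1 MB get reset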

Has anyone else seen this problem?
I've found one reference to it in my searches, 
http://nerdanswer.com/answer.php?q=336233

Mike Mitchell

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Re: BUG 3279: HTTP reply without Date

2014-06-03 Thread Mike Mitchell
I followed the advice found here:
  http://www.mail-archive.com/squid-users@squid-cache.org/msg95078.html

Switching to diskd from aufs fixed the crashes for me.
I still get 
  WARNING: swapfile header inconsistent with available data
messages in the log.  They appear within an hour of starting with a clean cache.
When I clean the cache I stop squid, rename the cache directory, create a new 
cache directory,
start removing the old cache directory, then run squid -z before starting 
squid.

I run the following commands:

/etc/init.d/squid stop
sleep 5
rm -f /var/squid/core*
rm -f /var/squid/swap.state*
rm -rf /var/squid/cache.hold
mv /var/squid/cache /var/squid/cache.hold
rm -rf /var/squid/cache.hold 
squid -z
/etc/init.d/squid start

I'm running on a Red Hat Linux VM.
Here is the output of 'uname -rv':
   2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010

Squid Cache: Version 3.4.5-20140514-r13135
configure options: '--with-maxfd=16384' '--enable-storeio=diskd' 
'--enable-removal-policies=heap' '--enable-delay-pools' '--enable-wccpv2' 
'--enable-auth-basic=DB NCSA NIS POP3 RADIUS fake getpwnam' 
'--enable-auth-digest=file' '--with-krb5-config=no' '--disable-auth-ntlm'  
'--disable-auth-negotiate' '--disable-external-acl-helpers' '--disable-ipv6' 
--enable-ltdl-convenience

Mike Mitchell





[squid-users] Re: BUG 3279: HTTP reply without Date

2014-06-02 Thread Mike Mitchell
I too see this problem with squid 3.4.5  under aufs.  I switched 100+ Linux 
caches over from aufs to diskd and no longer see crashes.

Mike Mitchell



[squid-users] Re: swapfile header inconsistent

2014-05-22 Thread Mike Mitchell
Amos Jeffries wrote:
 Are you using the StoreID or SMP features of Squid?
  Is there another Squid instance running on the same box and perhaps
 altering the cache?
 
 Amos

The answer is no to both questions.  I am not using StoreID or SMP features.  
There is not another Squid instance running.

I see this behavior regularly on all of my caches.  Most are running 3.3.12, 
but I've started switching to 3.4.5 in hopes of reducing the 'isEmpty()' 
crashes.  The 'isEmpty()' crashes are preceded by  'missing date' messages (bug 
3279).
 
Mike Mitchell


[squid-users] swapfile header inconsistent

2014-05-21 Thread Mike Mitchell
I'm running squid 3.4.5-20140514-r13135

I started switching over to diskd from aufs because I was tired of all the 
is_empty() crashes.
I stopped squid, removed the cache directory and swapfile completely, then 
started squid with the '-z' option to rebuild the cache directory.

Within a half-hour my cache.log file started reporting lines like:

2014/05/21 11:09:26 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:09:56 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:10:49 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:04 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:19 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:34 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:12:04 kid1| WARNING: swapfile header inconsistent with available 
data

At least it is not crashing.  This instance was started with a clean cache.

Mike Mitchell






RE: [squid-users] parent request order

2013-06-26 Thread Mike Mitchell
I use

cache_peer pp01 parent 3128 0 name=dp round-robin weight=100
cache_peer pp02 parent 3128 0 name=p1 round-robin
cache_peer pp03 parent 3128 0 name=p2 round-robin

This puts the three parents into a round-robin pool, but weights pp01
much heavier.  pp01 will be chosen over pp02 and pp03 unless
pp01 stops responding.

Squid resets the counters used for comparison every five minutes,
so you don't have to worry about pp01 accumulating so many
requests that the other parents are used.

There is one problem with this.  The counters are reset to zero,
and the comparison is done by dividing the count by the weight.
Zero divided by a large number is still zero, same as the other
parents.  Every five minutes all the parents are equally preferred,
until each parent gets one request.

I have patched my version of squid so that it resets the counters to
one instead of zero.

The patch follows:

*** src/cache_cf.cc.orig    2013-04-26 23:07:29.0 -0400
--- src/cache_cf.cc 2013-05-03 16:41:03.0 -0400
***************
*** 2044,2049 ****
--- 2044,2050 ----
      p->icp.port = CACHE_ICP_PORT;
      p->weight = 1;
      p->basetime = 0;
+     p->rr_count = 1;
      p->stats.logged_state = PEER_ALIVE;
  
      if ((token = strtok(NULL, w_space)) == NULL)
*** src/neighbors.cc.orig   2013-04-26 23:07:29.0 -0400
--- src/neighbors.cc    2013-05-07 11:15:25.0 -0400
***************
*** 421,427 ****
  {
      peer *p = NULL;
      for (p = Config.peers; p; p = p->next) {
!         p->rr_count = 0;
      }
  }
  
--- 421,427 ----
  {
      peer *p = NULL;
      for (p = Config.peers; p; p = p->next) {
!         p->rr_count = 1;
      }
  }
  
  
Mike Mitchell


From: T Ls [t...@pries.pro]
Sent: Monday, June 24, 2013 4:15 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] parent request order

On 24.06.2013 21:51, Marcus Kool wrote:

 ... Can't you make a setup where S1 uses P2 if P1 fails?

No, this mapping is fixed.

 In an old thread I read Squid has a configuration option FIRST_UP_PARENT
 so it can be configured to use P1 and only P2 if P1 is not available.

Yep, this kind of prioritization is exactly what I'm looking for, I'm
going to search for this FIRST_UP_PARENT-option tomorrow.


Thanks so far.

Thomas





[squid-users] RE: Squid CPU 100% infinite loop

2013-06-24 Thread Mike Mitchell
It appears that moving to 3.3.5-20130607-r12573 from 3.2.11-20130524-r11822
has eliminated my problem.  I have seen a few unexplainable spikes in CPU
usage, but they haven't lasted long and squid has remained responsive.
I've been running 3.3.5-20130607-r12573 for just over two weeks without a
problem.

Mike Mitchell
__
From: Stuart Henderson [s...@spacehopper.org]
Sent: Friday, June 21, 2013 9:57 AM
To: squid-users@squid-cache.org
Subject: Re: Squid CPU 100% infinite loop

On 2013-05-28, Stuart Henderson s...@spacehopper.org wrote:
 On 2013-05-17, Alex Rousskov rouss...@measurement-factory.com wrote:
 On 05/17/2013 01:28 PM, Loïc BLOT wrote:

 I have found the problem. In fact it's the problem mentioned in my
 last mail. Squid's FD limit was reached, but squid doesn't
 mention it every time the freeze appears, so the debugging was
 difficult.

 Squid should warn when it runs out of FDs. If it does not, it is a
 bug. If you can reproduce this, please open a bug report in bugzilla
 and post relevant logs there.

 FWIW, I cannot confirm or deny whether reaching FD limit causes what
 you call an infinite loop -- there was not enough information in your
 emails to do that. However, if reaching FD limit causes high CPU
 usage, it is a [minor] bug.

 I've just hit this one, ktrace shows that it's in a tight loop doing
 sched_yield(), I'll try and reproduce on a non-production system and open
 a ticket if I get more details..

I haven't reproduced this in squid yet, but I recently hit a case with
similar symptoms with another threaded program on OpenBSD which hit a loop
on sched_yield if it received a signal while forking, this has now been
fixed in the thread library. So if anyone knows how to reproduce, please
try again after updating src/lib/librthread/rthread_fork.c to r1.8.






[squid-users] RE: Squid CPU 100% infinite loop

2013-06-12 Thread Mike Mitchell
The FD limit is 16384.  During the day I see peak utilization around
8,000.  At night the utilization is less than 1,000.  During the four
hours the CPU rises from 10% to 100% the FD utilization stays
less than 1,000.
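
Those utilization numbers come from the cache manager; a quick way to see the
same counters (assuming squidclient is installed; adjust host and port to your
setup) is:

  squidclient -h localhost mgr:info | grep -i 'file desc'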

Again, I have not seen this problem under load, only while
squid is relatively idle.  It usually starts around 10:00 PM, and
is not related to log rotation.

# uname -a
Linux pxsrv03 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 
x86_64 GNU/Linux

# /opt/squid/bin/squid -v
Squid Cache: Version 3.3.5-20130607-r12573
configure options:  '--prefix=/opt/squid' '--with-maxfd=16384' 
'--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap' 
'--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience


Mike Mitchell

On 11/06/2013 10:42:32 -0700, Loïc BLOT wrote:
 Hello mike,
 please look at the number of system file descriptors opened, the squid
 limit and the squid user limit. I have this problem on 3.2 and 3.3
 because squid was at the FD limit. (look at the system fd limit for
 squid, ulimit -n with the squid user)
 -- 
 Best regards,
 Loïc BLOT, 
 UNIX systems, security and network expert
 http://www.unix-experience.fr


[squid-users] RE: Squid CPU 100% infinite loop

2013-06-11 Thread Mike Mitchell
I dropped the cache size to 150 GB instead of 300 GB.  Cached object count 
dropped
from ~7 million to ~3.5 million.  After a week I saw one occurrence of the same 
problem.
CPU usage climbed steadily over 4 hours from 10% to 100%, then squid became
unresponsive for 20 minutes.  After that it picked up as if nothing had 
happened -- no
error messages in any logs, no restarts, no core dumps.

I'm now testing again using version 3.3.5-20130607-r12573 instead of 
3.2.11-20130524-r11822.
I've left everything else the same, with the cache size still at 150 GB.

Mike Mitchell

On 30/05/2013 08:43:24 -0700, Ron Wheeler wrote:

 Some ideas here.
 http://www.freeproxies.org/blog/2007/10/03/squid-cache-disk-io-performance-enhancements/
 http://www.gcsdstaff.org/roodhouse/?p=2784
 
 
 You might try dropping your disk cache to 50Gb and see what happens.
 
 I am not sure that caching 7 Million pages gives you much of an advantage 
 over 1 million. The 1,000,001st most popular page probably does not come up 
 that often and by the time you get down to a page that is 7,000,000 in the 
 list of most accessed pages, you are not seeing much demand for that page.
 
 Probably most of the cache is just accessed once.
 
 Your cache_mem looks low but is not related to your problem but would improve 
 performance a lot. Getting a few  thousand of the most active pages in 
 memory is worth a lot more than 6 million of the least active pages sitting 
 on a disk.
 
 
 I am not a big squid expert but have run squid for a long time.
 
 Ron



[squid-users] Squid CPU 100% infinite loop

2013-05-30 Thread Mike Mitchell
What garbage collection parameters can I change?
I'm not using authentication, so the default
   auth_param digest nonce_garbage_interval 5 minutes
doesn't really apply.
I also run with
   client_db off
so the default
  authenticate_cache_garbage_interval 1 hour
doesn't apply either.

The lock-up happens randomly across the four servers.  I can go several
days without a lock-up.  I've only seen one lock-up in a night.  Over the
last two nights I had lock-ups both nights, but on different servers.

# squid -v
Squid Cache: Version 3.2.11-20130524-r11822
configure options:  '--prefix=/opt/squid' '--with-maxfd=16384' 
'--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap'  
'--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience

Here are the relevant parts of the configuration:

acl CIDR_A  src 10.0.0.0/8
ident_lookup_access allow CIDR_A
http_port 3128
cache_mem 1024 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /cache/squid 323368 64 253 max-size=512000
maximum_object_size 500 KB
cache_swap_low 95
cache_swap_high 97
cache_store_log none
client_idle_pconn_timeout 5 seconds
check_hostnames on
allow_underscore on
dns_defnames on
dns_v4_first on
ipcache_size 4096
fqdncache_size 8192
client_db off

I have about 7,000,000 objects in the cache.  During the four
hours the CPU rises from 10% to 100%, the number of objects
does not change by very much.  The cache utilization sits at
95% during the four hours.



[squid-users] RE: Squid CPU 100% infinite loop

2013-05-29 Thread Mike Mitchell
I've hit something similar.  I have four identically configured systems with 
16K squid FD limit, 24 GB RAM, 300 GB cache directory.  I've seen the same 
failure randomly on all four systems.  During the day the squid process handles 
 100 requests/second, with a peak FD usage around 8K FDs.  In the evenings the 
load drops to about 20 requests/second, with an FD usage around 1K FDs.  CPU 
usage hovers below 10% during this time.
Randomly one of the four systems will start increasing its CPU usage.  It takes 
about 4 hours to go from less than 10% to 100%.  During the four hours the FD 
usage stays at 1K and the request rate stays right around 20 requests/second.  
Once the CPU reaches 100% the squid service stops responding.  About 20 minutes 
later it starts responding again with CPU levels back down below 10%.  There is 
nothing in the cache log to indicate a problem.  The squid process did not core 
dump, nor did the parent restart a child.

I have not seen the problem during the day, only after the load drops.  The 
hangs do not coincide with the scheduled log rotates.  The one last night 
recovered a half-hour before the log rotated at 2:00 AM.

Every one of my hangs has been preceded by a rise in CPU usage, and squid 
recovers on its own without logging anything.

I have a script that does
  GET cache_object://localhost/info
  GET cache_object://localhost/counters
every five minutes and puts the interesting (to me) bits into RRD files.
Obviously the script fails during the 20 minutes the squid process is 
non-responsive.
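
A minimal version of such a script might look like this (sketch only; it uses
squidclient rather than LWP's GET, and the RRD path and field names are made
up):

  #!/bin/sh
  # Run from cron every five minutes: pull the cache manager 'info' page
  # and push the current FD count into an RRD file.
  INFO=$(squidclient -h localhost -p 3128 mgr:info)
  FDS=$(printf '%s\n' "$INFO" | \
        awk -F: '/Number of file desc currently in use/ {gsub(/ /,"",$2); print $2}')
  rrdtool update /var/rrd/squid_fd.rrd "N:${FDS}"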


From: Stuart Henderson [s...@spacehopper.org]
Sent: Tuesday, May 28, 2013 12:01 PM
To: squid-users@squid-cache.org
Subject: Re: Squid CPU 100% infinite loop

On 2013-05-17, Alex Rousskov rouss...@measurement-factory.com wrote:
 On 05/17/2013 01:28 PM, Loïc BLOT wrote:

 I have found the problem. In fact it's the problem mentioned in my
 last mail. Squid's FD limit was reached, but squid doesn't
 mention it every time the freeze appears, so the debugging was
 difficult.

 Squid should warn when it runs out of FDs. If it does not, it is a
 bug. If you can reproduce this, please open a bug report in bugzilla
 and post relevant logs there.

 FWIW, I cannot confirm or deny whether reaching FD limit causes what
 you call an infinite loop -- there was not enough information in your
 emails to do that. However, if reaching FD limit causes high CPU
 usage, it is a [minor] bug.

I've just hit this one, ktrace shows that it's in a tight loop doing
sched_yield(), I'll try and reproduce on a non-production system and open
a ticket if I get more details..






[squid-users] RE: exceeding cache_dir size

2013-01-16 Thread Mike Mitchell
The patch did not have the desired effect.
I still exceeded the disk space specified on the partition.
After many
  diskHandleWrite: FD 35: disk write error: (28) No space left on device
messages, squid terminated with
  WARNING: swapfile header inconsistent with available data
  FATAL: Received Segment Violation...dying.

Mike Mitchell

From: Mike Mitchell
Sent: Monday, January 14, 2013 3:35 PM
To: squid-users@squid-cache.org
Subject: RE: exceeding cache_dir size

I'm using a belt-and-suspenders approach.
I've installed 3.2.6 with the patch from 
http://bugs.squid-cache.org/show_bug.cgi?id=3686
My cache_dir line now looks like

  cache_dir aufs /cache/squid/aufs 3800 15 253 max-size=134217728
  maximum_object_size 131072 KB
  cache_swap_state /var/squid/swap.state

I now specify a max-size on the cache_dir line, and I've moved the
swap.state file to a different disk partition.

So far I've updated 14 of my 61 proxy servers running 3.2.
I have another 53 that are stuck on 2.7STABLE9.

Mike Mitchell




[squid-users] RE: exceeding cache_dir size

2013-01-16 Thread Mike Mitchell
Turns out the cache directory was filled up with
core files cause by bug #3732.
http://bugs.squid-cache.org/show_bug.cgi?id=3732

I had compiled with --disable-ipv6, yet the core file
shows that Ip::Address::GetAddrInfo() was called
with force set to zero and m_SocketAddr.sin6_addr
set to all zeros.  This fails the (force == AF_UNSPEC && IsIPv4())
test, causing an assert.

Yet another patch to try

From: Mike Mitchell
Sent: Wednesday, January 16, 2013 11:54 AM
To: squid-users@squid-cache.org
Subject: RE: exceeding cache_dir size

The patch did not have the desired effect.
I still exceeded the disk space specified on the partition.
After many
  diskHandleWrite: FD 35: disk write error: (28) No space left on device
messages, squid terminated with
  WARNING: swapfile header inconsistent with available data
  FATAL: Received Segment Violation...dying.

Mike Mitchell





[squid-users] RE: exceeding cache_dir size

2013-01-14 Thread Mike Mitchell
I'm using a belt-and-suspenders approach.
I've installed 3.2.6 with the patch from 
http://bugs.squid-cache.org/show_bug.cgi?id=3686
My cache_dir line now looks like

  cache_dir aufs /cache/squid/aufs 3800 15 253 max-size=134217728
  maximum_object_size 131072 KB
  cache_swap_state /var/squid/swap.state

I now specify a max-size on the cache_dir line, and I've moved the
swap.state file to a different disk partition.

So far I've updated 14 of my 61 proxy servers running 3.2.
I have another 53 that are stuck on 2.7STABLE9.
 
Mike Mitchell




[squid-users] exceeding cache_dir size

2013-01-09 Thread Mike Mitchell
I'm having problems with squid 3.2.5 exceeding the cache_dir size.
I have a 5 GB disk partition with nothing else on it, with a cache_dir
size of 3800 MB:

cache_dir aufs /cache/squid/aufs 3800 15 253
maximum_object_size 131072 KB

Today I found squid had terminated and the /cache partition was
100% full.

After a little investigation in the cache directory I found this file:
# ls -l 02/8C/00027F4A
-rw-r- 1 nobody nobody 915697664 Jan  8 21:15 02/8C/00027F4A

Very strange, a 900 MB file stored when I have a maximum_object_size of 128 MB.

Here's the header of the file, with the initial binary data stripped out:

http://152.2.63.23/WUNC HTTP/1.0 200 OK
Content-Type: application/x-mms-framed
Server: Cougar/9.00.00.3372
Date: Tue, 08 Jan 2013 10:14:18 GMT
Pragma: no-cache, client-id=3433994619, xResetStrm=1, features=broadcast, 
timeout=6, AccelBW=350, AccelDuration=2, Speed=1.000
Cache-Control: no-cache
Last-Modified: Tue, 08 Jan 2013 10:14:18 GMT
Supported: com.microsoft.wm.srvppair, com.microsoft.wm.sswitch, 
com.microsoft.wm.predstrm, com.microsoft.wm.fastcache

It is streaming audio from a local radio station.

I'm guessing that since there isn't a content-length header squid will store
the data until it all arrives, then flush it from disk later on.  This is a 
problem
because my swap.state files are on the same partition.  When squid
can no longer write to swap.state because of the full disk, it terminates.

The only solution is to move the swap.state files, but that is
counter to the recommendation in the squid.conf.documented file:

#  TAG: cache_swap_state
#   Location for the cache swap.state file. This index file holds
#   the metadata of objects saved on disk.  It is used to rebuild
#   the cache during startup.  Normally this file resides in each
#   'cache_dir' directory, but you may specify an alternate
#   pathname here.  Note you must give a full filename, not just
#   a directory. Since this is the index for the whole object
#   list you CANNOT periodically rotate it!
...
#   them).  We recommend you do NOT use this option.  It is
#   better to keep these index files in each 'cache_dir' directory.

Since it is possible to have files much larger than maximum_object_size
in the cache_dir directory, there is always a possibility of running out
of space.  Not being able to write swap.state causes squid to abort,
which leads me to believe that swap.state should never be on the
same partition as the cache_dir directory.
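
Something like the following keeps the index off the cache partition (sketch;
the paths and sizes are only examples):

  cache_dir aufs /cache/squid/aufs 3800 15 253 max-size=134217728
  cache_swap_state /var/squid/swap.state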

Mike Mitchell



[squid-users] RE: Memory leak in 3.2.5

2012-12-25 Thread Mike Mitchell
With the ident patch the memory leaks are at manageable levels.
It looks like I'm leaking HttpHeaderEntry, Short String, and ConnOpener
structures.

After 1,000,000 requests I have 740,968 HttpHeaderEntry structures in use,
with 1,600,341 Short Strings in use.  The two go hand-in-hand, as the
HttpHeaderEntry structure allocates two Short Strings.

I also have 22,314 ConnOpener structures in use.  I found one leak of
ConnOpener structures in the internalDNS routines.  I doubt it is large
enough to account for the over 22,000 structures in 1,000,000 queries.
The leak would only happen if the DNS query switched to TCP and the
TCP connection failed.

Mike Mitchell



[squid-users] RE: Memory leak in 3.2.5

2012-12-24 Thread Mike Mitchell
I just posted a revised patch to squid-dev.  The ident leak of Connection
structures has been fixed.

Mike Mitchell

From: Mike Mitchell
Sent: Sunday, December 23, 2012 8:25 AM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

I posted a patch to the squid-dev mailing list.
After 24 hours and over 1,000,000 HTTP queries, memory usage is 1.8 GB.
There are 45 cbdata IdentStateData structures allocated, with 3 in use.
There are 283 cbdata ConnStateData structures allocated, with 147 in use.
There are 321728 Connection structures allocated, with 321709 in use.

The major ident leak has been plugged, but it looks like there is still a leak 
of
the Connection structures.

Mike Mitchell

From: Mike Mitchell
Sent: Friday, December 21, 2012 7:38 AM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

The ident memory is properly freed only if the ident query succeeds.
The ident memory is not freed if the ident query times out or if the
client is no longer around for the result.

I'm testing a patch and I should have something by the end of the day.

Mike Mitchell


From: Mike Mitchell
Sent: Thursday, December 20, 2012 6:06 PM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

I used cachemgr.cgi and looked at the memory utilization.
My first four rows are:

Pool                           Allocated           In Use
                               (#)       (KB)      (#)       (KB)
cbdata IdentStateData (21)     1088861   4414992   1088861   4414992
mem_node                       259773    1049240   259524    1048234
cbdata ConnStateData (20)      1274359   448017    1274358   448017
Connection                     2152578   403609    2152521   403598


It doesn't look like the ident connections are cleaned up properly.
That explains why I have 6 GB allocated, when over 4 GB are ident
structures.

Mike Mitchell



[squid-users] RE: Memory leak in 3.2.5

2012-12-23 Thread Mike Mitchell
I posted a patch to the squid-dev mailing list.
After 24 hours and over 1,000,000 HTTP queries, memory usage is 1.8 GB.
There are 45 cbdata IdentStateData structures allocated, with 3 in use.
There are 283 cbdata ConnStateData structures allocated, with 147 in use.
There are 321728 Connection structures allocated, with 321709 in use.

The major ident leak has been plugged, but it looks like there is still a leak 
of
the Connection structures.

Mike Mitchell

From: Mike Mitchell
Sent: Friday, December 21, 2012 7:38 AM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

The ident memory is properly freed only if the ident query succeeds.
The ident memory is not freed if the ident query times out or if the
client is no longer around for the result.

I'm testing a patch and I should have something by the end of the day.

Mike Mitchell


From: Mike Mitchell
Sent: Thursday, December 20, 2012 6:06 PM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

I used cachemgr.cgi and looked at the memory utilization.
My first four rows are:

Pool                           Allocated           In Use
                               (#)       (KB)      (#)       (KB)
cbdata IdentStateData (21)     1088861   4414992   1088861   4414992
mem_node                       259773    1049240   259524    1048234
cbdata ConnStateData (20)      1274359   448017    1274358   448017
Connection                     2152578   403609    2152521   403598


It doesn't look like the ident connections are cleaned up properly.
That explains why I have 6 GB allocated, when over 4 GB are ident
structures.

Mike Mitchell



[squid-users] RE: Memory leak in 3.2.5

2012-12-21 Thread Mike Mitchell
The ident memory is properly freed only if the ident query succeeds.
The ident memory is not freed if the ident query times out or if the
client is no longer around for the result.

I'm testing a patch and I should have something by the end of the day.

Mike Mitchell


From: Mike Mitchell
Sent: Thursday, December 20, 2012 6:06 PM
To: squid-users@squid-cache.org
Subject: RE: Memory leak in 3.2.5

I used cachemgr.cgi and looked at the memory utilization.
My first four rows are:

Pool                           Allocated           In Use
                               (#)       (KB)      (#)       (KB)
cbdata IdentStateData (21)     1088861   4414992   1088861   4414992
mem_node                       259773    1049240   259524    1048234
cbdata ConnStateData (20)      1274359   448017    1274358   448017
Connection                     2152578   403609    2152521   403598


It doesn't look like the ident connections are cleaned up properly.
That explains why I have 6 GB allocated, when over 4 GB are ident
structures.

Mike Mitchell



[squid-users] Memory leak in 3.2.5

2012-12-20 Thread Mike Mitchell
I just upgraded from 2.7STABLE9 to 3.2.5, and now I'm battling a memory leak.

Squid Cache: Version 3.2.5
configure options:  '--prefix=/local/proxy/squid' '--with-maxfd=8192' 
'--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap' 
'--enable-cache-digests' '--enable-delay-pools' '--enable-wccpv2' 
'--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience

Here's the significant part of the configuration:

ident_lookup_access permit all

http_port 8000

cache_peer 127.0.0.1 parent 8080 0 no-query no-digest round-robin weight=100
#cache_peer p01.sas.com parent 8080 0 name=p1 no-query no-digest round-robin
cache_peer p02.sas.com parent 8080 0 name=p2 no-query no-digest round-robin
cache_peer p03.sas.com parent 8080 0 name=p3 no-query no-digest round-robin
cache_peer p04.sas.com parent 8080 0 name=p4 no-query no-digest round-robin

#cache_peer p01.sas.com sibling 8000 3130 name=s1 proxy-only
cache_peer p02.sas.com sibling 8000 3130 name=s2 proxy-only
cache_peer p03.sas.com sibling 8000 3130 name=s3 proxy-only
cache_peer p04.sas.com sibling 8000 3130 name=s4 proxy-only

memory_replacement_policy heap GDSF

cache_replacement_policy heap LFUDA

cache_dir aufs /cache/squid/aufs 10 64 253

maximum_object_size 500 KB

I have four squid servers identically configured,
each a sibling to the other.  Each has a local parent
proxy (virus scanner).  If the local virus scanner is
unresponsive, the request is forwarded to one of the
other three in round-robin fashion.

I am also performing ident queries just for logging
purposes.  Requests do not require authentication.

This configuration has worked without issue for
several years with squid 2.7STABLE9.  It doesn't
work well with squid 3.2.5.

Memory usage grows at about the same rate as the
query rate.  After 16 hours memory usage is about
6 GB.  The query rate is about 200/sec during
business hours.  If I restart squid the memory
usage goes back to ~600 MB and starts growing.
There are about 2.5 million objects in the cache.

I thought the problem might be ICP, so I tried HTCP
instead of ICP for the cache siblings but that did
not make a difference.  I do have another 3.2.5 system
handling ~100 requests/second that does not exhibit
the problem.  The one without a problem uses these
four as siblings, and I've tried both ICP and HTCP
there, too.

I've checked the cache.log file for clues.  After
~5,000,000 queries I found ~1,500 messages like
  WARNING: Forwarding loop detected for
and ~3,000 messages like
  Failed to select source for '[null_entry]'
Those messages would have to leak about 1 MB each to
account for the memory loss I'm seeing.

The system without the problem does not perform the
ident queries.  Could the leak be there?  Is anyone
using ident?

Mike Mitchell
  




[squid-users] RE: Memory leak in 3.2.5

2012-12-20 Thread Mike Mitchell
I used cachemgr.cgi and looked at the memory utilization.
My first four rows are:

Pool                           Allocated           In Use
                               (#)       (KB)      (#)       (KB)
cbdata IdentStateData (21)     1088861   4414992   1088861   4414992
mem_node                       259773    1049240   259524    1048234
cbdata ConnStateData (20)      1274359   448017    1274358   448017
Connection                     2152578   403609    2152521   403598


It doesn't look like the ident connections are cleaned up properly.
That explains why I have 6 GB allocated, when over 4 GB are ident
structures.

Mike Mitchell


Re: [squid-users] ROCK store and UFS (Squid 3.2.3)

2012-12-18 Thread Mike Mitchell
On 27.11.2012 14:07, Horacio H. wrote:
 Hi,
 
 Amos, thanks for your reply. I'll test the patch and use
 memory_cache_shared set to OFF.
 
 Sorry, I was wrong. Objects bigger than maximum_object_size_in_memory
 are not cached on disk. Although objects smaller than
 maximum_object_size_in_memory but bigger than 32KB were written to
 disk, I guess they got a HIT because Squid keeps a copy in memory of
 hot and in-transit objects. That explains why the UFS store was
 ignored when Squid was restarted.
 
 Thanks.

I'm seeing the same problem with Squid 3.2.5.  I have not installed the 
mentioned patch,
but I do use the following cache_dir configuration lines:

  cache_dir rock /cache/squid/rock-08k 610 min-size=0    max-size=8192
  cache_dir rock /cache/squid/rock-30k 390 min-size=8193 max-size=30720
  cache_dir aufs /cache/squid/aufs 3000 15 253  min-size=30721 
max-size=20480
  maximum_object_size 20 KB
  maximum_object_size_in_memory 512 KB

If I'm reading the code correctly, as long as I specify max-size on each 
cache_dir directive
the mentioned patch will not be needed.

With these configuration lines the AUFS directory never stores anything larger 
than the value
specified by maximum_object_size_in_memory.  When squid is shutting down the 
cache log
will contain a line like
  2012/12/18 00:53:14 kid1| Not currently OK to rewrite swap log.
so when it restarts it assumes the AUFS cache is empty.

ROCK store seems to work if that is all that I'm using, but it doesn't work 
well when combined
with AUFS.  Has anyone gotten ROCK store to work combined with anything else?

Mike Mitchell



[squid-users] RE: cachemgr.cgi Store Directory Stats with multiple cache_dir lines

2012-12-11 Thread Mike Mitchell
I think my problem with Store Directory Stats and Rock store is related to 
bug #3694,
  http://bugs.squid-cache.org/show_bug.cgi?id=3694

I also am hitting bug #3640
  http://bugs.squid-cache.org/show_bug.cgi?id=3640
when I rotate the logs.   I'm working around the bug by using
  logfile_rotate 0
  debug_options ALL,1,rotate=0
The rotate=0 clause doesn't seem to work, as I still get cache.log.XX files, 
one for each worker.

Mike Mitchell
mike.mitch...@sas.com


From: Mike Mitchell
Sent: Monday, December 10, 2012 4:34 PM
To: squid-users@squid-cache.org
Subject: cachemgr.cgi Store Directory Stats with multiple cache_dir lines

I'm running squid 3.2.4 with the errno patch.
My squid.conf has the following for cache_dir lines:

cache_dir rock /cache/rock-1k  128 min-size=0 max-size=1008
cache_dir rock /cache/rock-2k  128 min-size=1009  max-size=2032
cache_dir rock /cache/rock-4k  128 min-size=2033  max-size=4080
cache_dir rock /cache/rock-8k  160 min-size=4081  max-size=8176
cache_dir rock /cache/rock-16k 180 min-size=8177  max-size=16368
cache_dir rock /cache/rock-30k 300 min-size=16369 max-size=30704
cache_dir aufs /cache/aufs 3072 15 253  min-size=30705 max-size=2

It all seems to be working, until I do a squid -k reconfig.  After that, the 
cachemgr.cgi application starts reporting zeros for the rock directories.  
The output looks like:

by kid1 {
Store Directory Statistics:
Store Entries  : 38936
Maximum Swap Size  : 3145728 KB
Current Store Swap Size: 2856644.00 KB
Current Capacity   : 90.81% used, 9.19% free

Shared Memory Cache
Maximum Size: 131072 KB
Maximum entries:  4096
Current entries: 4096 100.00%

Store Directory #6 (aufs): /cache/aufs
FS Block Size 4096 Bytes
First level subdirectories: 15
Second level subdirectories: 253
Maximum Size: 3145728 KB
Current Size: 2856644.00 KB
Percent Used: 90.81%
Filemap bits in use: 38167 of 65536 (58%)
Filesystem Space in use: 4068160/5078656 KB (80%)
Filesystem Inodes in use: 42005/1310720 (3%)
Flags:
Removal policy: heap
} by kid1

by kid2 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid2

by kid3 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid3

by kid4 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid4

by kid5 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid5

by kid6 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid6

by kid7 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid7

I do see TCP_HIT:HIER_NONE messages in the access.log file for files that 
should be stored in the rock directory:

  10.17.17.147 - - [10/Dec/2012:16:24:34.246 -0500] GET http://conn.skype.com/ 
HTTP/1.0 200 585 TCP_HIT:HIER_NONE

but cachemgr.cgi's Store Directory Stats report that nothing is cached in the 
rock directories.  If I stop and restart squid cachemgr.cgi starts displaying 
data for the rock directories.  A squid -k reconfig later and the reports 
are empty again.

Has anyone seen this problem?

Mike Mitchell
mike.mitch...@sas.com



[squid-users] cachemgr.cgi Store Directory Stats with multiple cache_dir lines

2012-12-10 Thread Mike Mitchell
I'm running squid 3.2.4 with the errno patch.
My squid.conf has the following for cache_dir lines:

cache_dir rock /cache/rock-1k  128 min-size=0 max-size=1008
cache_dir rock /cache/rock-2k  128 min-size=1009  max-size=2032
cache_dir rock /cache/rock-4k  128 min-size=2033  max-size=4080
cache_dir rock /cache/rock-8k  160 min-size=4081  max-size=8176
cache_dir rock /cache/rock-16k 180 min-size=8177  max-size=16368
cache_dir rock /cache/rock-30k 300 min-size=16369 max-size=30704
cache_dir aufs /cache/aufs 3072 15 253  min-size=30705 max-size=2

It all seems to be working, until I do a squid -k reconfig.  After that, the 
cachemgr.cgi application starts reporting zeros for the rock directories.  
The output looks like:

by kid1 {
Store Directory Statistics:
Store Entries  : 38936
Maximum Swap Size  : 3145728 KB
Current Store Swap Size: 2856644.00 KB
Current Capacity   : 90.81% used, 9.19% free

Shared Memory Cache
Maximum Size: 131072 KB
Maximum entries:  4096
Current entries: 4096 100.00%

Store Directory #6 (aufs): /cache/aufs
FS Block Size 4096 Bytes
First level subdirectories: 15
Second level subdirectories: 253
Maximum Size: 3145728 KB
Current Size: 2856644.00 KB
Percent Used: 90.81%
Filemap bits in use: 38167 of 65536 (58%)
Filesystem Space in use: 4068160/5078656 KB (80%)
Filesystem Inodes in use: 42005/1310720 (3%)
Flags:
Removal policy: heap
} by kid1

by kid2 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid2

by kid3 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid3

by kid4 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid4

by kid5 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid5

by kid6 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid6

by kid7 {
Store Directory Statistics:
Store Entries  : 53
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid7

I do see TCP_HIT:HIER_NONE messages in the access.log file for files that 
should be stored in the rock directory:

  10.17.17.147 - - [10/Dec/2012:16:24:34.246 -0500] GET http://conn.skype.com/ 
HTTP/1.0 200 585 TCP_HIT:HIER_NONE

but cachemgr.cgi's Store Directory Stats report that nothing is cached in the 
rock directories.  If I stop and restart squid cachemgr.cgi starts displaying 
data for the rock directories.  A squid -k reconfig later and the reports 
are empty again.

Has anyone seen this problem?

Mike Mitchell
mike.mitch...@sas.com



[squid-users] Transparent HTTPS Parent proxy

2012-09-08 Thread Mike Mitchell
I have several clients that cannot be reconfigured to use a PAC file or
proxy, their traffic must be intercepted.  They are all behind a Cisco
firewall.  I've set up WCCP and am intercepting both the HTTP and
HTTPS traffic, using two different service groups and two different
proxy ports.

One problem I had with the Cisco firewall was that it insisted on having
the Squid proxy on the same network as the other clients.  Since I do
not want that network to have direct access to the Internet, I'm chaining
the local squid to another squid process on a different network.  It looks
like
client -> squid1 -> squid2 -> internet
where the squid1 process is picking up the traffic via WCCP and squid2
is a cache_peer (parent) of squid1.

It all works well for HTTP traffic, but I have yet to get HTTPS traffic to
work.  WCCP is intercepting the traffic and squid1 is seeing it, but an
error page is returned to the client saying "Unsupported Request
Method and Protocol".

I've tried both
   https_port 4433 cert=myCA.pem intercept
and
https_port 4433 cert=myCA.pem intercept ssl-bump
but I get the same behaviour with both.
I do have
ssl_bump allow all
never_direct allow all
in the configuration.
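
Putting those pieces together, the squid1 end of the chain is roughly (sketch;
the peer hostname, cert path and port numbers are placeholders):

  http_port  3129 intercept
  https_port 4433 cert=/etc/squid/myCA.pem intercept ssl-bump
  cache_peer squid2.example.com parent 3128 0 no-query default
  ssl_bump allow all
  never_direct allow all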

Am I missing something simple?  Is it just not possible yet with a parent
proxy?  I realize the request will have to be converted from a GET to a
CONNECT.  It would not surprise me if the conversion hasn't been
implemented yet.

This is with squid 3.2.1.

Mike Mitchell
mike.mitch...@sas.com



RE: [squid-users] moved permanently loop detection

2009-08-11 Thread Mike Mitchell
He got hit with another one today.  The access log fills to the
maximum file size, then squid dies.

Having squid return a reasonable error to the client may be a
problem.  It would probably be sufficient if squid did not cache
the 301/302 return if the Location: field points to the requested
URL.  We'd still have the loop, but the remote access to the
web server will rate-limit the loop and make it more apparent to
the web administrators.

Mike Mitchell


-Original Message-
From: Henrik Nordstrom [mailto:hen...@henriknordstrom.net]
Sent: Monday, August 10, 2009 7:46 PM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] moved permanently loop detection

Mon 2009-08-10 at 16:58 -0400, Mike Mitchell wrote:

 I was thinking a simple string compare of the requested URL and the contents 
 of the 'Location:' field.  If the two are the same it's a loop.

If it's cacheable, yes.

Regards
Henrik




[squid-users] moved permanently loop detection

2009-08-10 Thread Mike Mitchell
Would it be possible to add a simple loop detection for moved permanently 
response codes?
We've been hit a couple of times with loops from URLs like 
http://wwwcache.localtechwire.com/favicon.png.
I know the fix should really go into the browser, but IE is broken and 
Microsoft won't fix it.
See http://connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=357905

I was thinking a simple string compare of the requested URL and the contents of 
the 'Location:' field.  If the two are the same it's a loop.
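
As a standalone illustration (not Squid code; the function and argument names
are made up), the check amounts to:

  #include <string>

  // True when a cacheable 301/302 would send the client straight back to
  // the URL it just requested -- the loop described above.
  static bool isRedirectLoop(const std::string &requestUrl,
                             const std::string &locationHeader)
  {
      return !locationHeader.empty() && locationHeader == requestUrl;
  }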


Mike Mitchell
SAS Institute Inc.
mike.mitch...@sas.com
(919) 531-6793





RE: [squid-users] Cache-Control problems with Korean sites

2009-07-21 Thread Mike Mitchell
I used telnet to connect to the problem web server and sent a minimal HTTP 
request.  The web server returned a page, so I tried again adding a header from 
the trace one at a time until I did not get a response.  I only tried one value 
of Cache-Control, max-age=0.
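
The manual test was essentially this (reconstruction; the request line and
Host header are guesses at the minimal form, the host is one of the sites
above):

  $ telnet parcel.epost.go.kr 80
  GET / HTTP/1.0
  Host: parcel.epost.go.kr
  Cache-Control: max-age=0
  [blank line ends the request]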

I've tried accessing the Korean Government sites from other proxy servers 
around the world and I get the same behavior.  I know the problem isn't with 
the proxy server's ISP, but rather with the Korean Government sites.

Mike Mitchell

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, July 21, 2009 2:37 AM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Cache-Control problems with Korean sites

Mike Mitchell wrote:
 We're having problems accessing Korean Government sites like 
 parcel.epost.go.kr and www.g2b.go.kr from a squid cache 
 that is physically in Seoul, Korea.  I performed network captures and found 
 that if the request included a 'Cache-Control' header the remote server did 
 not send TCP ACK messages back for the request.  The remote server did 
 complete the three-way TCP connection handshake, but would not acknowledge 
 the request.  When I stripped the 'Cache-Control' header using

   acl NoCacheCtl dstdomain .epost.go.kr .gtb.go.kr
   header_access Cache-Control deny NoCacheCtl

 the TCP ACKs started coming back and we could retrieve content.

 My guess is there is a firewall protecting the remote web servers.  Has 
 anyone seen this behavior before?

Any cache-control values? or just specific ones?

It's really up to whoever runs the broken software to fix the issue.
Just find out where the breakage is and yell loudly at them.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
   Current Beta Squid 3.1.0.10 or 3.1.0.11



[squid-users] Cache-Control problems with Korean sites

2009-07-20 Thread Mike Mitchell
We're having problems accessing Korean Government sites like parcel.epost.go.kr 
and www.g2b.go.kr from a squid cache that is physically 
in Seoul, Korea.  I performed network captures and found that if the request 
included a 'Cache-Control' header the remote server did not send TCP ACK 
messages back for the request.  The remote server did complete the three-way 
TCP connection handshake, but would not acknowledge the request.  When I 
stripped the 'Cache-Control' header using

  acl NoCacheCtl dstdomain .epost.go.kr .gtb.go.kr
  header_access Cache-Control deny NoCacheCtl

the TCP ACKs started coming back and we could retrieve content.

My guess is there is a firewall protecting the remote web servers.  Has anyone 
seen this behavior before?

Mike Mitchell
SAS Institute Inc.
mike.mitch...@sas.com
(919) 531-6793




[squid-users] NTLM problems

2008-01-09 Thread Mike Mitchell
I've set up squid on a Windows 2003 server using the pre-compiled binaries from 
 http://squid.acmeconsulting.it/download/squid-2.6.STABLE17-bin.zip
NTLM authentication consistently works for some users, but consistently fails 
for others.
Here's what debugging shows on a failure:

...

2008/01/04 18:38:09| helperStatefulOpenServers: Starting 5 
'mswin_ntlm_auth.exe' processes
mswin_ntlm_auth[5192]: c:/squid/libexec/mswin_ntlm_auth.exe build Nov 27 2007, 
21:53:46 starting up...
mswin_ntlm_auth[5192]: SSPI initialized OK
mswin_ntlm_auth[6112]: c:/squid/libexec/mswin_ntlm_auth.exe build Nov 27 2007, 
21:53:46 starting up...
mswin_ntlm_auth[6112]: SSPI initialized OK
mswin_ntlm_auth[5368]: c:/squid/libexec/mswin_ntlm_auth.exe build Nov 27 2007, 
21:53:46 starting up...
mswin_ntlm_auth[5368]: SSPI initialized OK
2008/01/04 18:38:09| User-Agent logging is disabled.
2008/01/04 18:38:09| Referer logging is disabled.
mswin_ntlm_auth[3084]: c:/squid/libexec/mswin_ntlm_auth.exe build Nov 27 2007, 
21:53:46 starting up...
mswin_ntlm_auth[3084]: SSPI initialized OK
mswin_ntlm_auth[4160]: c:/squid/libexec/mswin_ntlm_auth.exe build Nov 27 2007, 
21:53:46 starting up...
mswin_ntlm_auth[4160]: SSPI initialized OK

...

mswin_ntlm_auth[5192]: Got 'YR 
TlRMTVNTUAABBlIAAAYABgAmBgAGACBEMTU2MDRDQVJZTlQ=' from Squid
mswin_ntlm_auth[5192]: attempting SSPI challenge retrieval
mswin_ntlm_auth[5192]: Got it
mswin_ntlm_auth[5192]: sending 'TT 
TlRMTVNTUAACBgAGADgGAoECX74wIorjbCkAAHwAfAA+BQLODg9DQVJZTlQCAAwAQwBBAFIAWQBOAFQAAQAQAE4AQQBNAEUAUwBSAFYAMgAEABQAbgBhAC4AcwBhAHMALgBjAG8AbQADACYATgBBAE0ARQBTAFIAVgAyAC4AbgBhAC4AcwBhAHMALgBjAG8AbQAFAA4AUwBBAFMALgBDAE8ATQAA'
 to squid
mswin_ntlm_auth[5192]: Got 'KK 
TlRMTVNTUAADGAAYAFIAagYABgBABgAGAEYGAAYATABqBlIAAENBUllOVE1BQkxBS0QxNTYwNKWk6TgT5BCIQBjSilR+VqBRLF/GRRzxhg=='
 from Squid
mswin_ntlm_auth[5192]: checking domain: 'CARYNT', user: 'MABLAK'
mswin_ntlm_auth[6112]: Got 'YR 
TlRMTVNTUAABBlIAAAYABgAmBgAGACBEMTU2MDRDQVJZTlQ=' from Squid
mswin_ntlm_auth[6112]: attempting SSPI challenge retrieval
mswin_ntlm_auth[6112]: Got it
mswin_ntlm_auth[6112]: sending 'TT 
TlRMTVNTUAACBgAGADgGAoEC1yrM0oj/3vQAAHwAfAA+BQLODg9DQVJZTlQCAAwAQwBBAFIAWQBOAFQAAQAQAE4AQQBNAEUAUwBSAFYAMgAEABQAbgBhAC4AcwBhAHMALgBjAG8AbQADACYATgBBAE0ARQBTAFIAVgAyAC4AbgBhAC4AcwBhAHMALgBjAG8AbQAFAA4AUwBBAFMALgBDAE8ATQAA'
 to squid
mswin_ntlm_auth[6112]: Got 'KK 
TlRMTVNTUAADGAAYAFIAagYABgBABgAGAEYGAAYATABqBlIAAENBUllOVE1BQkxBS0QxNTYwNOqSkQJvL12+T28RjkSZbHD0GEvSApUMpA=='
 from Squid
mswin_ntlm_auth[6112]: checking domain: 'CARYNT', user: 'MABLAK'

The last line shown is currently the last line in the cache.log file.
Notice that there is not a 'Login attempt had result' line.  My guess is that 
the SSP_ValidateNTLMCredentials() call in libntlmssp.c is hanging.  That 
routine calls several Windows routines, but I can't tell which one is hanging.  
Both process IDs 5192 and 6112 are still running.

Has anyone seen a problem like this?

--  [EMAIL PROTECTED]




[squid-users] ACLs to direct request to proper parent?

2007-10-22 Thread Mike Mitchell
I've recently installed a Squid 2.6STABLE16 system in a country that
requires all web browsing to go through a government-specified proxy
server.  The Government runs a non-transparent proxy setup that must be
explicitly listed in the Squid configuration.

That would normally be easy, as all I'd do is list the Governement proxy
as a parent.  However, I have three types of traffic I'd like to direct
to different places:

1.  Traffic that should be virus-scanned before delivering to
the client.
2.  Traffic that should not be virus-scanned such as web
conferencing.
3.  Traffic that is internal and should not be virus scanned or
given to the Government proxy.

Here's what I have so far:

  cache_peer 127.0.0.1 parent 8080 7 name=vscan no-query no-digest
default
  cache_peer govproxy  parent 3128 7 no-query no-digest

  cache_peer_domain vscan  !.pressaccess.com !.presentonline.com
  cache_peer_domain vscan  !.interactconferencing.com !.raindance.com
  cache_peer_domain vscan  !.mshow.com !.placeware.com
  cache_peer_domain vscan  !.ilearning.com !.kindercam.com
!.fidelity.com
  cache_peer_domain vscan  !.lexisnexis.com !data.finlistics-vm.com
  cache_peer_domain vscan  !library.midicorp.com
  cache_peer_domain vscan  !.finance.yahoo.com !.tenrox.com
!.riskadvisory.com

  acl internal-dst dst 10.0.0.0/255.0.0.0
  acl internal-dst dst 172.16.0.0/255.240.0.0
  acl internal-dst dst 192.168.0.0/255.255.0.0
  always_direct allow internal-dst

I'd like to bypass the virus scanner for more things than just domain
lists.  I'd like to be able to use an ACL like:
  acl novirus-url urlpath_regex -i \.gif(\?.*)?$ \.jpg(\?.*)?$
\.png(\?.*)?$
  acl novirus-url urlpath_regex -i \.mpe?g(\?.*)?$ \.avi(\?.*)?$
\.swf(\?.*)?$
  acl novirus-url urlpath_regex -i \.qt(\?.*)?$ \.mov(\?.*)?$
\.as[fx](\?.*)?$
  acl novirus-url urlpath_regex -i \.rm(\?.*)?$ \.wm[av](\?.*)?$
\.mp3(\?.*)?$
  acl novirus-url urlpath_regex -i \.m4[avp](\?.*)?$ \.mp4v?(\?.*)?$
  acl novirus-url urlpath_regex -i \.wav(\?.*)?$
And then use that ACL to bypass the virus scanner and go directly to the
Government proxy.  I didn't see anything in Squid 2.6STABLE16 that would
do what I need.  Am I missing something?
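
What I'm effectively after is per-peer access control driven by that ACL,
something along these lines (sketch only; I have not verified this directive
combination on 2.6STABLE16):

  cache_peer_access vscan    deny  novirus-url
  cache_peer_access govproxy allow novirus-url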

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793



RE: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

2007-03-27 Thread Mike Mitchell
I installed 2.6STABLE12-20070327 on our test proxy and Fidelity Active Trader 
Pro is now working correctly.  It is getting the streaming updates.
Thank you for the help!

-- Mike Mitchell

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 26, 2007 7:05 PM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

Mon 2007-03-26 at 18:42 -0400, Mike Mitchell wrote:
 At 22:40 GMT I get the same behavior with 2.6STABLE9, 2.6STABLE12
 minus the chunked-encoding patch, and 2.6STABLE12 plus patch 11256.
 The US markets are closed so I can't really be sure I'm seeing the
 streaming data.  With 2.6STABLE12 alone I get a timeout error message
 and the application does not show streaming data.

Sounds good.

Looking at the list of unmerged changes I saw that there is a bugfix for
the last changeset. You'll need that as well or your Squid will be a bit
unstable..

http://www.squid-cache.org/Versions/v2/HEAD/changesets/11257.patch

Regards
Henrik


[squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

2007-03-26 Thread Mike Mitchell
I recently upgraded from 2.6STABLE9 to 2.6STABLE12 and now Fidelity's Active 
Trader Pro application is broken.  It no longer gets streaming updates.  I put 
back 2.6STABLE9 and it starts working again, no configuration changes other 
than the binary.

I've disabled any helpers, I've verified that there are no If-Modified-Since 
requests, and the logs show every request was a MISS.
I backed out the "Upgrade HTTP/0.9 responses to our HTTP version (HTTP/1.0)" 
patch, and I'm not using helpers or collapsed forwarding.  That leaves the patches 
for bugs 1787 and 1875 as a likely candidate, but since all requests were a 
MISS I don't think that's the problem.
The only thing left is the primitive support for chunked encoding, unless 
someone has a better idea.
Has anyone seen this problem?  The Active Trader Pro application works, it just 
doesn't get the streaming quotes anymore.

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793



RE: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

2007-03-26 Thread Mike Mitchell
At 22:40 GMT I get the same behavior with 2.6STABLE9, 2.6STABLE12 minus the 
chunked-encoding patch, and 2.6STABLE12 plus patch 11256.  The US markets are 
closed so I can't really be sure I'm seeing the streaming data.  With 
2.6STABLE12 alone I get a timeout error message and the application does not 
show streaming data.

I'll know more tomorrow after the markets open.

-- Mike Mitchell  

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 26, 2007 5:59 PM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

Mon 2007-03-26 at 17:35 -0400, Mike Mitchell wrote:

 I backed out the Upgrade HTTP/0.9 responses to our HTTP version
 (HTTP/1.0) patch,  I'm not using helpers or collapsed forwarding.
 That leaves the patches for bugs 1787 and 1875 as a likely candidate,
 but since all requests were a MISS I don't think that's the problem.
 The only thing left is the primitive support for chunked encoding,
 unless someone has a better idea.

Not very likely, as the affected sites are completely broken without that
patch, and for the rest there should be no difference.

Hmm.. thinking. Could maybe be this related patch which was overlooked
in the 2.6.STABLE merge process (it was mistakenly sorted as belonging
to another set of changes not yet merged to 2.6.STABLE):

http://www.squid-cache.org/Versions/v2/HEAD/changesets/11256.patch

try applying it on top of 2.6.STABLE12 (ignore the reject on the first
chunk with $Id:, it's expected) and let me know if it makes any
difference.

Reminds me that it's about time to start sorting the Squid-2 changes in
what has been applied to 2.6.STABLE and not..

Regards
Henrik


RE: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

2007-03-26 Thread Mike Mitchell
Yes, it's dying now with 'forward.c:1002: e->store_status == STORE_PENDING'.
I'll try adding patch 11257 and see if that stabilizes it.

-- Mike Mitchell

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 26, 2007 7:05 PM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Fidelity Active Trader Pro vs. Squid 2.6STABLE12

Mon 2007-03-26 at 18:42 -0400, Mike Mitchell wrote:
 At 22:40 GMT I get the same behavior with 2.6STABLE9, 2.6STABLE12
 minus the chunked-encoding patch, and 2.6STABLE12 plus patch 11256.
 The US markets are closed so I can't really be sure I'm seeing the
 streaming data.  With 2.6STABLE12 alone I get a timeout error message
 and the application does not show streaming data.

Sounds good.

Looking at the list of unmerged changes I saw that there is a bugfix for
the last changeset. You'll need that as well or your Squid will be a bit
unstable..

http://www.squid-cache.org/Versions/v2/HEAD/changesets/11257.patch

Regards
Henrik


[squid-users] Memory leak in squid 2.5STABLE13?

2006-04-24 Thread Mike Mitchell
I have four servers running 2.5STABLE13, each handling ~100 requests/second.
They're all running on RedHat Linux ES 2.1.  Each of the four servers leaks
about 100 MB a week.  It's been that way for years.  I restart squid once a
month so I don't run out of memory.

Here's some cachemgr output:
Squid Object Cache: Version 2.5.STABLE13
Start Time: Sat, 15 Apr 2006 21:43:47 GMT
Current Time:   Mon, 24 Apr 2006 19:22:37 GMT
Connection information for squid:
Number of clients accessing cache:  0
Number of HTTP requests received:   12318603
Number of ICP messages received:3718919
Number of ICP messages sent:3719279
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   961.0
Average ICP messages per minute since start:580.3
Select loop called: 102978835 times, 7.469 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 76.4%, 60min: 66.9%
Byte Hit Ratios:5min: 19.1%, 60min: 9.3%
Request Memory Hit Ratios:  5min: 4.6%, 60min: 6.4%
Request Disk Hit Ratios:5min: 6.3%, 60min: 9.9%
Storage Swap size:  31866372 KB
Storage Mem size:   98444 KB
Mean Object Size:   14.16 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.00562  0.00767
Cache Misses:  0.12783  0.13498
Cache Hits:0.00463  0.00463
Near Hits: 0.06640  0.07409
Not-Modified Replies:  0.00379  0.00463
DNS Lookups:   0.03223  0.03374
ICP Queries:   0.00137  0.00185
Resource usage for squid:
UP Time:769130.108 seconds
CPU Time:   49356.840 seconds
CPU Usage:  6.42%
CPU Usage, 5 minute avg:21.48%
CPU Usage, 60 minute avg:   26.74%
Process Data Segment Size via sbrk(): 492179 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 2439
Memory usage for squid via mallinfo():
Total space in arena:  492179 KB
Ordinary blocks:   454101 KB 344150 blks
Small blocks:   0 KB  0 blks
Holding blocks: 19968 KB 12 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   38077 KB
Total in use:  474069 KB 93%
Total free: 38077 KB 7%
Total size:512147 KB
Memory accounted for:
Total accounted:   258504 KB
memPoolAlloc calls: 1436813623
memPoolFree calls: 1431954945
File descriptor usage for squid:
Maximum number of file descriptors:   4096
Largest file desc currently in use:389
Number of file desc currently in use:  226
Files queued for open:   0
Available number of file descriptors: 3870
Reserved number of file descriptors:   100
Store Disk files open:   1
Internal Data Structures:
2263720 StoreEntries
 18309 StoreEntries with MemObjects
 18288 Hot Object Cache Items
2250084 on-disk objects

Notice that the memory usage "Total in use:" is 474 MB, while the
"Memory accounted for" is only 258 MB.  I've tried configuring with
--enable-dlmalloc and that didn't have any effect on the memory leak.
I've also tried replacing the version of dlmalloc (2.6.4) that's
shipped with squid with a newer version (2.7.2), but that didn't
have any effect either.
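As a rough cross-check, most of that accounted memory should simply be the on-disk
index (the per-entry byte counts below are assumptions for a 32-bit Squid 2.5 build,
not figures taken from the cachemgr output):

#!/usr/bin/perl
# Estimate the memory needed just to index the on-disk objects.
my $entries          = 2_263_720;   # StoreEntries reported above
my $bytes_per_entry  = 88;          # assumed StoreEntry + hash/heap overhead
my $bytes_per_digest = 16;          # MD5 cache key
my $index_kb = $entries * ($bytes_per_entry + $bytes_per_digest) / 1024;
printf "estimated index size: %.0f KB\n", $index_kb;   # roughly 230,000 KB

That lands close to the 258,504 KB accounted for, which suggests the accounted
memory is mostly index, and the remaining ~215 MB of in-use space is unaccounted
heap growth or fragmentation.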

Does anyone have an idea of what I should try next?

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793



[squid-users] Segmentation violations in 2.5.STABLE9-20050404

2005-04-07 Thread Mike Mitchell
I've installed 2.5.STABLE9-20050404 on over 30 servers,
each running RedHat Linux 2.1 AS on Intel Xeon CPUs.
They've been running since Monday morning at 12:30 AM EST.
Since then I've gotten 6 mail messages from the squid
process saying I've encountered a fatal error.  The
cache.log file says it's a segmentation violation.  I
have yet to see the same server die twice.  I haven't
found a core file to examine.

I build squid with the following options:
   --enable-cache-digests
   --enable-underscores
   --with-pthreads
   --enable-storeio=aufs
   --enable-removal-policies
   --enable-gnuregex

Has anyone else seen this problem?  I didn't have this problem
with 2.5STABLE9.

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793


RE: [squid-users] Re: Help, Squid ACL regex_url BYPASSS

2004-04-19 Thread Mike Mitchell
 I use a pattern of
\.bz2(\?.*)?$
which matches '.bz2' at the end of a URL, or '.bz2?' followed by anything.
The un-escaped '?' matches 0 or 1 occurrences of the parenthesized group, which in
this case is an escaped question mark followed by zero or more characters.
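A quick way to sanity-check the pattern outside of Squid is a small Perl test like
the one below (the sample URLs are made up for illustration):

#!/usr/bin/perl
# Try the pattern against a few invented URLs; "blocked" means the ACL would match.
my $re = qr/\.bz2(\?.*)?$/i;
for my $url ('http://example.com/tools.bz2',
             'http://example.com/tools.bz2?',
             'http://example.com/tools.bz2?/junk',
             'http://example.com/tools.bz2.txt') {
    printf "%-40s %s\n", $url, ($url =~ $re ? "blocked" : "allowed");
}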

Mike Mitchell
-Original Message-
From: Herman (ISTD) [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 16, 2004 11:07 PM
To: Adam Aube; [EMAIL PROTECTED]
Subject: RE: [squid-users] Re: Help, Squid ACL regex_url BYPASSS

Thank you all,
In the end, I just use the \.bz2 entry, since users may put ?? or ???
after the URL.

Regards,

herman

 -Original Message-
 From: Adam Aube [mailto:[EMAIL PROTECTED]
 Sent: Friday, April 16, 2004 7:44 PM
 To: [EMAIL PROTECTED]
 Subject: [squid-users] Re: Help, Squid ACL regex_url BYPASSS
 
 Herman (ISTD) wrote:
 
  Currently, I am preventing my users from downloading some files, e.g. files
  with the .bz2 extension.
 
  In squid.conf I define it as follows:
 acl BadUrl url_regex -i /usr/local/squid/etc/data/BadUrlFile
 
  And I add this entry to /usr/local/squid/etc/data/BadUrlFile :
 \.bz2$
 
  But some of the users did a trick by adding ? or ?/ to the URL
 
  and they successfully bypass my ACL and download the files they wanted.
 
  I have tried adding \.bz2?$ and \.bz2?/$ to the
  /usr/local/squid/etc/data/BadUrlFile file, but it does not work.
 
 Like with the '.', you need to escape the '?' and '/' with a '\'.
 
 Adam



RE: [squid-users] Impossible keep-alive header

2004-01-19 Thread Mike Mitchell
I've just installed the squid-2.5.STABLE4-20040119 snapshot and now I'm flooded with 
"Impossible keep-alive header" messages.
I have a parent proxy of a Trend Micro Interscan Viruswall version 3.8 running on the 
same machine.  Here's an example from the cache.log file:

2004/01/19 10:58:34| ctx: enter level  0: 
'http://wisapidata.weatherbug.com/WxAlertIsapi/WxAlertIsapi.cgi?GetAlert30Magic=1ZipCode=27519StationID=RALGHUnits=0RegNum=27560925Version=5.02t=1074526042lv=0'
2004/01/19 10:58:34| httpProcessReplyHeader: Impossible keep-alive header from 
'http://wisapidata.weatherbug.com/WxAlertIsapi/WxAlertIsapi.cgi?GetAlert30Magic=1ZipCode=27519StationID=RALGHUnits=0RegNum=27560925Version=5.02t=1074526042lv=0'
2004/01/19 10:58:34| ctx: exit level 0

The corresponding access.log entry says:

10.23.11.86 - - [19/Jan/2004:10:59:34 -0500] GET 
http://wisapidata.weatherbug.com/WxAlertIsapi/WxAlertIsapi.cgi?GetAlert30Magic=1ZipCode=27519StationID=RALGHUnits=0RegNum=28333984Version=3.0t=1074525302lv=0
 HTTP/1.0 200 236 TCP_MISS:DEFAULT_PARENT 

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 14, 2004 1:51 PM
To: Steve Snyder; Alex Sharaz
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Impossible keep-alive header


On Wed, 14 Jan 2004, Henrik Nordstrom wrote:

 Thinking... OK, I think I know what the bug is in that patch.

Confirmed. The logic for deciding which keep-alive headers are impossible was a
little too broad, sometimes triggering on fully valid HTTP/1.0 replies.

The patch has been corrected, and attached to this message you can find the 
incremental patch if you are using the snapshot release or otherwise can't easily 
get/apply the updated patch.

Regards
Henrik


RE: [squid-users] Control squid VSZ and RSS from growing

2003-10-21 Thread Mike Mitchell
I'm running Red Hat ES 2.1 and my squid process continually grows.
Running squid -v returns:
  Squid Cache: Version 2.5.STABLE4-20031015
  configure options:  --prefix=/local/proxy/squid --enable-cache-digests
  --enable-underscores --with-pthreads --enable-storeio=aufs,ufs
  --enable-removal-policies=heap --enable-gnuregex

I've tried compiling it with --enable-dlmalloc but that didn't make a difference.
Right now cachemgr.cgi says:
  Memory usage for squid via mallinfo():
Total space in arena:  464715 KB
  Memory accounted for:
Total accounted:   269273 KB

Running ldd -v squid returns:
  libcrypt.so.1 = /lib/libcrypt.so.1 (0x4002e000)
  libpthread.so.0 = /lib/i686/libpthread.so.0 (0x4005b000)
  libm.so.6 = /lib/i686/libm.so.6 (0x4008c000)
  libresolv.so.2 = /lib/libresolv.so.2 (0x400af000)
  libnsl.so.1 = /lib/libnsl.so.1 (0x400c1000)
  libc.so.6 = /lib/i686/libc.so.6 (0x400d7000)
  /lib/ld-linux.so.2 = /lib/ld-linux.so.2 (0x4000)
  Version information:

  squid:
 libm.so.6 (GLIBC_2.0) = /lib/i686/libm.so.6
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.1) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
 libpthread.so.0 (GLIBC_2.0) = /lib/i686/libpthread.so.0
 libpthread.so.0 (GLIBC_2.1) = /lib/i686/libpthread.so.0
  /lib/libcrypt.so.1:
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
  /lib/i686/libpthread.so.0:
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.1) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.1.2) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
  /lib/i686/libm.so.6:
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
  /lib/libresolv.so.2:
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.1) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
  /lib/libnsl.so.1:
 libc.so.6 (GLIBC_2.1.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.2.3) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.0) = /lib/i686/libc.so.6
 libc.so.6 (GLIBC_2.1) = /lib/i686/libc.so.6
  /lib/i686/libc.so.6:
 ld-linux.so.2 (GLIBC_2.1.1) = /lib/ld-linux.so.2
 ld-linux.so.2 (GLIBC_2.2.3) = /lib/ld-linux.so.2
 ld-linux.so.2 (GLIBC_2.1) = /lib/ld-linux.so.2
 ld-linux.so.2 (GLIBC_2.2) = /lib/ld-linux.so.2
 ld-linux.so.2 (GLIBC_2.0) = /lib/ld-linux.so.2

-Original Message-
From: Marc Elsen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2003 3:03 AM
To: Zand, Nooshin
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Control squid VSZ and RSS from growing




Zand, Nooshin wrote:
 
 Hi,
 
 How can I keep squid's VSZ and RSS from growing?
 I run squid 2.5STABLE4 on Redhat Linux 9.
 Here is the list of libraries in use.
 Is there any known issue with using -lpthread on Linux?
 
   libcrypt.so.1 = /lib/libcrypt.so.1 (0x4001a000)
 libpthread.so.0 = /lib/tls/libpthread.so.0 (0x40047000)
 libm.so.6 = /lib/tls/libm.so.6 (0x40055000)
 libresolv.so.2 = /lib/libresolv.so.2 (0x40078000)
 libnsl.so.1 = /lib/libnsl.so.1 (0x4008a000)
 libc.so.6 = /lib/tls/libc.so.6 (0x4200)
 /lib/ld-linux.so.2 = /lib/ld-linux.so.2 (0x4000)
 
 Regards,
 nooshin

 I have a fairly steady process size after a few weeks with squid on Linux (+aufs in
use). Make sure 'cache_mem' is set to a reasonable value with respect to physical memory.

M.


-- 

 'Love is truth without any future.
 (M.E. 1997)


RE: [squid-users] xmalloc and out of memory errors in messages log

2003-10-13 Thread Mike Mitchell
I've been having a similar problem and it's been present in all Squid version 2 
releases, not just 2.5STABLE4.  I've seen it in all releases of 2.4 and 2.3.
I have three identical squid servers, each with two 16GB cache directories.  Right now 
cachemgr reports ~265 MB of memory accounted for, but sbrk and mallinfo say ~700 MB of 
memory is used.  The sbrk and mallinfo reports continue to increase as the system 
runs, but the "Memory accounted for" figure hovers around 265 MB.  See 
ftp://ftp.sas.com/pub/nam/squid-mem.png for a graph that shows how our memory usage 
increases.  The graph was taken with a full cache and shows a week's worth of data 
following a restart.

I've tried both the standard malloc library and compiling with --enable-dlmalloc and 
have gotten the same results.
I've seen this problem on HP-UX 10.20, Red Hat 7.2, Red Hat 7.3, and now Red Hat ES 
2.1.

I eventually wrote a script that checks if squid is too large.  I then call that 
script in the nightly cron job and restart squid if the script returns true. Here's 
the script for Linux:

#!/usr/bin/perl
# returns 1 if the squid process consumes more than
# the passed-in percent of physical memory, returns 0
# otherwise.  Typical use:
#   squid_too_big 45
#
# Can also pass in the number of megabytes instead of
# percent:
#   squid_too_big 300M
#
$Debug = 0;
$lim = 0;
$per = 0;
if ($ARGV[0] =~ /m/i) {
    $lim = $ARGV[0];
    $lim =~ s/[^\d]//;
    $lim = int($lim) * 1024 * 1024;
} else {
    $per = $ARGV[0];
    $per =~ s/[^\d]//;
    $per = int($per);
}
$per = 45 if ($per <= 0 || $per > 100);

# Get the physical memory in the machine
# (the "Mem:" summary line in /proc/meminfo is reported in bytes on 2.4 kernels)
open(MEM, "/proc/meminfo");
while (<MEM>) {
    next unless /^Mem:\s/;
    ($jnk, $tot, $used, $free) = split(' ');
}
close(MEM);
exit 0 if ($tot <= 0);

# calculate the maximum allowed
$lim = $tot * $per / 100 if ($lim <= 0 || $lim > $tot);

#
# Get the amount of memory the squid process is using
# (field 10 of "ps -efl" is SZ, the process size in pages)
#
$tot = 0;
open(PS, "ps -efl |");
while (<PS>) {
    next unless /\s\(squid\)$/;
    ($jnk, $jnk, $jnk, $jnk, $jnk, $jnk, $jnk, $jnk, $jnk, $tmp, $jnk)
        = split(' ');
    $tmp *= 4096;   # convert memory pages to bytes
    $tot = $tmp if ($tmp > $tot);
}
close(PS);

print "limit = $lim, squid = $tot\n" if ($Debug);

exit 1 if ($tot > $lim);
exit 0;
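
For completeness, here is a minimal sketch of the nightly wrapper such a cron job
could run (the paths, the 45% threshold, and the init script location are assumptions,
not taken from the original setup):

#!/usr/bin/perl
# Restart squid only when the check script reports it has grown too large.
# squid_too_big exits 1 in that case and 0 otherwise.
my $rc = system("/usr/local/bin/squid_too_big", "45");
if ($rc >> 8) {
    system("/etc/init.d/squid", "restart");
}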
 
-Original Message-
From: Tay Teck Wee [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 09, 2003 10:33 PM
To: Squid Users
Subject: Re: [squid-users] xmalloc and out of memory errors in messages log


Can anyone help? Thanks.

--
Wolf 

 --- Tay Teck Wee [EMAIL PROTECTED] wrote:  
--- Robert Collins [EMAIL PROTECTED] wrote:
 
 On Tue, 2003-10-07 at 17:17, Tay Teck Wee wrote:
  
   The Total Accounted Memory (seen from cachemgr) always
   shows about 60-70% of the size of the squid process,
   i.e. when the squid process is at 500M, Total Accounted
   Memory is about 350M. This ratio is maintained
   throughout the squid process lifetime.
   
   One thing peculiar is that under Memory usage for
   squid via mallinfo(): the total in use is always 100%.
   
   
   From the Memory Utilization In-Use-KB column, the top 3
   are:
   StoreEntry 356131
   MD5 digest 118711
   mem_node 65677
   
  
  Well you've got 356MB of StoreEntries - cache index
  elements. How much mem cache and disk cache do you have
  in your configuration?
 
 cache_mem 64 MB
 cache_dir aufs /cdata1 16000 36 256
 cache_dir aufs /cdata2 16000 36 256
 cache_dir aufs /cdata3 16000 36 256
 
 BTW the physical RAM is 2G.
 
 Thanks.
 
  
  Cheers,
  Rob
  
  
  --
  GPG key available at:
 
 http://members.aardvark.net.au/lifeless/keys.txt.
  
 
  
 


[squid-users] RE: Squid memory leak?

2003-10-02 Thread Mike Mitchell
I've re-compiled with --enable-dlmalloc and it didn't make a difference. I'm still 
leaking memory.  The cachemgr lists 427621 KB in use, but only 260926 KB accounted 
for. Here's the cachemgr output:

Squid Object Cache: Version 2.5.STABLE4-20030929
Start Time: Wed, 01 Oct 2003 03:58:41 GMT
Current Time:   Thu, 02 Oct 2003 15:07:46 GMT
Connection information for squid:
Number of clients accessing cache:  0
Number of HTTP requests received:   3500368
Number of ICP messages received:378876
Number of ICP messages sent:379091
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1659.7
Average ICP messages per minute since start:359.4
Select loop called: 17631204 times, 7.177 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 68.9%, 60min: 67.2%
Byte Hit Ratios:5min: 9.5%, 60min: 19.1%
Request Memory Hit Ratios:  5min: 6.0%, 60min: 7.5%
Request Disk Hit Ratios:5min: 12.9%, 60min: 13.8%
Storage Swap size:  31896476 KB
Storage Mem size:   98292 KB
Mean Object Size:   14.27 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.01847  0.02592
Cache Misses:  0.17711  0.17711
Cache Hits:0.01309  0.01648
Near Hits: 0.10281  0.10857
Not-Modified Replies:  0.01387  0.01648
DNS Lookups:   0.00464  0.00669
ICP Queries:   0.00541  0.00575
Resource usage for squid:
UP Time:126544.845 seconds
CPU Time:   24888.760 seconds
CPU Usage:  19.67%
CPU Usage, 5 minute avg:64.31%
CPU Usage, 60 minute avg:   54.81%
Process Data Segment Size via sbrk(): 427621 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 837
Memory usage for squid via mallinfo():
Total space in arena:  427621 KB
Ordinary blocks:   405719 KB 102690 blks
Small blocks:   0 KB  0 blks
Holding blocks: 17336 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   21901 KB
Total in use:  423055 KB 95%
Total free: 21901 KB 5%
Total size:444957 KB
Memory accounted for:
Total accounted:   260926 KB
memPoolAlloc calls: 427293143
memPoolFree calls: 422443907
File descriptor usage for squid:
Maximum number of file descriptors:   4096
Largest file desc currently in use:997
Number of file desc currently in use:  877
Files queued for open:   0
Available number of file descriptors: 3219
Reserved number of file descriptors:   100
Store Disk files open:   3
Internal Data Structures:
2241473 StoreEntries
 18942 StoreEntries with MemObjects
 18926 Hot Object Cache Items
2234980 on-disk objects


-Original Message-
From: Mike Mitchell 
Sent: Monday, September 29, 2003 6:23 PM
To: [EMAIL PROTECTED]
Subject: Squid memory leak?


I've been chasing what looks like a memory leak for quite some time.  I've seen it all 
through the Squid 2.X releases, and it still is present in 2.5STABLE4. I've seen it on 
machines running HP-UX 10.20 and on Red Hat Linux.  Currently I'm running on three 
identical Dell 2650 servers, each with dual 1.8 GHz processors, 2 GB of RAM, and two 
20 GB cache partitions.  They are all running Red Hat ES 2.1, kernel 2.4.9-e.25smp.

Squid is built with these configuration options:
Squid Cache: Version 2.5.STABLE4
configure options:  --prefix=/local/proxy/squid --enable-cache-digests
--enable-underscores --with-pthreads --enable-storeio=aufs,ufs
--enable-removal-policies=heap --enable-gnuregex

With a full cache, Squid memory utilization starts at about 250 MB.  It then grows at 
about 300 MB per week.  The graph at
ftp://ftp.sas.com/pub/nam/squid-mem.png
shows how the memory grows.
We see about 80 requests per second during business hours.  The graph
ftp://ftp.sas.com/pub/nam/squid-reqs.png
shows our requests per second.
We use DNS round-robin to load balance requests between the three systems.

Each squid process forwards most requests on to a Trend Interscan VirusWall process 
listening on port 8080 on the same server.  It also sends the URLs through the adzap 
redirector.  ACLs are used to decide which URLs are sent through the redirector and 
also which URLs are sent to the virus scanner.

Here's some squid information output from the cachemgr.cgi: 
===
Squid Object Cache: Version 2.5.STABLE4
Start Time

[squid-users] Squid memory leak?

2003-09-29 Thread Mike Mitchell
 sibling 80 3130 proxy-only
cache_peer inetgw03.unx.sas.com sibling 80 3130 proxy-only
cache_peer inetgw02.unx.sas.com parent 8080 7 no-query no-digest round-robin
cache_peer inetgw03.unx.sas.com parent 8080 7 no-query no-digest round-robin
cache_mem 96 MB
cache_swap_low 95
cache_swap_high 97
maximum_object_size 700 MB
maximum_object_size_in_memory 32 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /cache/cache1/squid 16384 53 254
cache_dir aufs /cache/cache2/squid 16384 53 254
cache_store_log none
emulate_httpd_log on
acl all src 0.0.0.0/0.0.0.0
acl noAds myport 80
acl Reads method GET HEAD
acl CIDR_A src 10.0.0.0/255.0.0.0
acl virus_proto proto HTTP
acl novirus-url urlpath_regex -i \.gif(\?.*)?$ \.jpg(\?.*)?$ \.png(\?.*)?$
ident_lookup_access allow CIDR_A
ident_lookup_access deny all
memory_pools on
log_icp_queries off
client_db off
always_direct allow novirus-url
always_direct allow !virus_proto
never_direct allow all
strip_query_terms off
redirect_program /opt/adzap/wrapzap
redirect_children 10
redirector_access deny !Reads
redirector_access allow noAds
redirector_bypass on
===

Here are some messages that are showing up in the cache.log file.

===
2003/09/29 16:38:26| sslReadServer: FD 578: read failure: (104) Connection reset by 
peer
2003/09/29 16:39:00| sslReadServer: FD 574: read failure: (104) Connection reset by 
peer
2003/09/29 16:41:44| sslReadServer: FD 560: read failure: (104) Connection reset by 
peer
2003/09/29 16:42:14| sslReadServer: FD 485: read failure: (104) Connection reset by 
peer
2003/09/29 16:42:54| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:42:55| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:42:59| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:03| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:07| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:08| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:17| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:21| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:31| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:32| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:33| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:34| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:46| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:43:47| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:44:00| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
2003/09/29 16:44:00| urlParse: Illegal character in hostname 
'inetgw01.unx.sas.com:80inetgw01.unx.sas.com'
===
Does anyone have an idea of why I'm consuming so much memory?  Am I just running
into memory fragmentation issues?  Should I be linking with a different version of
malloc?  Would the configuration option '--enable-dlmalloc' help?
I've worked around this problem by having a cron job check how large the squid
process is.  If it grows too large I restart it.  Currently it gets restarted twice a month.
--
Mike Mitchell
[EMAIL PROTECTED]
(919) 677-8000 X16793