[squid-users] Squid Restart - Solaris with 3.1.19

2012-07-11 Thread Justin Lawler
Hi,

Squid restarted in a production environment with the below error:

2012/07/11 09:28:25| FATAL: dying from an unhandled exception: virginConsumed <= offset && offset <= end
2012/07/11 09:28:32| Starting Squid Cache version 3.1.19 for 
sparc-sun-solaris2.10...

Is this a known issue for this release on Solaris?

Thanks and regards,
Justin
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp



RE: [squid-users] Squid Restarting

2012-05-14 Thread Justin Lawler
Thanks Amos - we have heap dumps, but unfortunately we cannot share them with 
the wider community, as they're taken from a customer production environment. 
However, we can send on information taken from the heap dump, such as output 
from pflags/pstack/etc. Would this be sufficient to investigate the issue?

Thanks and regards,
Justin

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, May 06, 2012 7:16 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Restarting

On 4/05/2012 9:59 p.m., Justin Lawler wrote:
 Hi,

 We're running squid 3.1.19, and have seen from the logs that it restarted, 
 just after the below error:

 2012/04/19 12:12:28| assertion failed: forward.cc:496: server_fd == fd
 2012/04/19 12:12:59| Starting Squid Cache version 3.1.19 for 
 sparc-sun-solaris2.10...

 Is this a known issue? Any workaround?

Seems to be new and a bit strange. Squid opened one connection to the server to 
fetch content; sometime later a connection was closed, but it was not the one 
which was opened to begin with.

Do you have a core dump or stack trace available to identify what the fd and 
server_fd values actually were during the crash?


 It's been in production for 6 weeks now, and we have only seen it once, but we 
 need to have an answer for the customer. We're worried it'll happen more 
 frequently as traffic goes up.

Being the first report over a month after the release, it would seem to be very 
rare.

Amos



[squid-users] Squid Restarting

2012-05-04 Thread Justin Lawler
Hi,

We're running squid 3.1.19, and have seen from the logs that it restarted, just 
after the below error:

2012/04/19 12:12:28| assertion failed: forward.cc:496: server_fd == fd
2012/04/19 12:12:59| Starting Squid Cache version 3.1.19 for 
sparc-sun-solaris2.10...

Is this a known issue? Any workaround?

It's been in production for 6 weeks now, and we have only seen it once, but we 
need to have an answer for the customer. We're worried it'll happen more 
frequently as traffic goes up.

Thanks,
Justin



[squid-users] Squid Reconfigure ICAP Settings

2012-04-30 Thread Justin Lawler
Hi,

Will squid reconfigure ICAP settings if a 'squid -k reconfigure' is triggered?

We want to know whether we can update ICAP ACL settings on the fly, without 
restarting squid.
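
For reference, the ICAP wiring that a reconfigure re-reads lives in squid.conf. 
The sketch below is illustrative only - the service name, ICAP URL, and domain 
are assumptions, not taken from this thread, and directive names should be 
checked against the 3.1 documentation for your build:

```
# squid.conf fragment (illustrative): send responses through one ICAP
# service, but skip adaptation for a whitelisted domain.
icap_enable on
icap_service svc_resp respmod_precache 0 icap://127.0.0.1:1344/respmod
acl no_icap dstdomain .example.com
adaptation_access svc_resp deny no_icap
adaptation_access svc_resp allow all
```

Editing such ACL lines and then running 'squid -k reconfigure' should avoid a 
full restart; 'squid -k parse' can be run first to confirm the edited file 
still parses.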

Thanks and regards,
Justin



[squid-users] Accessing Squid by telnet - differences in behavior?

2012-03-22 Thread Justin Lawler
Hi,

Comparing a legacy instance of squid (3.0.15) with the latest (3.1.19), we 
noticed some differences. We're wondering what the change was.

When we telnet into the port and do a GET http://www.gmail.com/ HTTP/1.1, on 
legacy squid, once it returns the content, it closes the connection immediately. 
On the newest version of squid, it waits 2 minutes before closing the connection.

Is this anything to be worried about? Can it be configured?


Thanks and regards,
Justin




RE: [squid-users] Accessing Squid by telnet - differences in behavior?

2012-03-22 Thread Justin Lawler
Thanks Amos,

Should this affect the performance of squid for clients using HTTP/1.1? For instance:
* will the number of connections to squid at any one time increase, so that 
we'll need to increase squid's file descriptors?
* will the increase in connections affect the performance/capacity of squid?
* will it affect network performance?

I presume the user experience will benefit, with reduced loading time for pages 
with many images, etc. But in general, once a browser has downloaded all the 
images, etc., will it then close the connection immediately, or wait 2 minutes 
to close?

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 23, 2012 12:37 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Accessing Squid by telnet - differences in behavior?

On 23/03/2012 5:22 p.m., Justin Lawler wrote:
 Hi,

 Comparing a legacy instance of squid (3.0.15) with the latest (3.1.19), we 
 noticed some differences. We're wondering what the change was.

 When we telnet into the port and do a GET http://www.gmail.com/ HTTP/1.1, on 
 legacy squid, once it returns the content, it closes the connection 
 immediately. On the newest version of squid, it waits 2 minutes before 
 closing the connection.

 Is this anything to be worried about? Can it be configured?

Squid 3.0 is HTTP/1.0-only.

Squid 3.1 is almost HTTP/1.1 compliant. It attempts to use HTTP/1.1 features 
whenever possible, but still advertises itself as 1.0 to prevent the client 
software depending on the few 1.1-only features which are still missing.
One of the supported HTTP/1.1 features is persistent connections, which are 
assumed on by default. Your telnet request specified that you were an HTTP/1.1 
compliant client and failed to specify Connection: close, therefore Squid keeps 
the connection open, waiting for another request.


Not something to worry about. But if you are in the habit of writing scripts 
that send HTTP/1.1, it may be worthwhile checking that they are actually 
HTTP/1.1 compliant, even in small ways like this.
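
To illustrate the difference, the sketch below builds a minimal request that 
opts out of persistence. The host, URL, and proxy port are illustrative; 
whether the original telnet test sent exactly these headers is an assumption:

```shell
# Print a minimal HTTP/1.1 proxy request; "Connection: close" tells an
# HTTP/1.1 server (or proxy) to close the connection after the response
# instead of holding it open for a possible next request.
make_request() {
  printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$1" "$2"
}

make_request http://www.gmail.com/ www.gmail.com
# To reproduce the telnet test against a proxy listening on 3128:
#   make_request http://www.gmail.com/ www.gmail.com | nc proxyhost 3128
```

Omitting the Connection: close header is what leaves an HTTP/1.1 connection 
open until the persistent-connection timeout expires.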

Amos



[squid-users] Squid Compile Errors

2012-03-20 Thread Justin Lawler
Hi,

We're getting the below error when compiling squid 3.1.19 - it looks like a 
missing library, 'libnet'. The strange thing is that it compiles fine on another 
machine that doesn't have 'libnet' installed (not found with pkginfo, anyway). 
Do we really need libnet installed to compile squid 3.1.19? And if so, does 
'libnet' have many dependencies?

This is on Solaris.

Any help much appreciated.

Thanks,
Justin

Making all in LDAP
gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src  
-I../../../include  -I../../../libltdl  -I.   -I/usr/local/ssl/include -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Werror -D_REENTRANT -pthreads -Wall -g -O2 -MT squid_ldap_auth.o 
-MD -MP -MF .deps/squid_ldap_auth.Tpo -c -o squid_ldap_auth.o squid_ldap_auth.c
mv -f .deps/squid_ldap_auth.Tpo .deps/squid_ldap_auth.Po
/bin/bash ../../../libtool --tag=CC --mode=link gcc -Wall -Wpointer-arith 
-Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror 
-D_REENTRANT -pthreads -Wall -g -O2   -g -o squid_ldap_auth squid_ldap_auth.o 
../../../compat/libcompat.la  -L../../../lib -lmiscutil  -lldap  -llber  -lm 
-lsocket -lresolv -lnsl
libtool: link: gcc -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes 
-Wmissing-declarations -Wcomments -Werror -D_REENTRANT -pthreads -Wall -g -O2 
-g -o squid_ldap_auth squid_ldap_auth.o  ../../../compat/.libs/libcompat.a 
/usr/local/lib/libstdc++.so 
-L/sol10/SOURCES/S10/gcc-3.4.6/objdir/sparc-sun-solaris2.10/libstdc++-v3/src 
-L/sol10/SOURCES/S10/gcc-3.4.6/objdir/sparc-sun-solaris2.10/libstdc++-v3/src/.libs
 -L/usr/local/lib -L/usr/local/ssl/lib -L/usr/openwin/lib 
-L/sol10/SOURCES/S10/gcc-3.4.6/objdir/gcc -L/usr/ccs/bin -L/usr/ccs/lib -lgcc_s 
-L/apps/cwapps/deploy/squid-3.1.19/lib -lmiscutil /usr/local/lib/libldap.so 
-L/usr/local/pgsql/lib -L/usr/lib -L/usr/X11R6/lib 
-L/usr/local/BerkeleyDB.4.7/lib -L/usr/local/BerkeleyDB.4.2/lib 
-L/usr/openwinlib /usr/local/lib/libsasl2.so -ldl -lgss -lssl -lcrypto 
/usr/local/lib/liblber.so -lgen -lnet -lm -lsocket -lresolv -lnsl -pthreads 
-R/usr/local/lib -R/usr/local/lib
ld: fatal: library -lnet: not found
ld: fatal: File processing errors. No output written to squid_ldap_auth
collect2: ld returned 1 exit status
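
As a side note on where a stray -lnet can come from: libtool records each 
library's own link-time dependencies in the dependency_libs field of its .la 
file, and every consumer inherits them. A sketch of inspecting that field; the 
sample .la content below is fabricated for illustration, not taken from this 
build:

```shell
# A .la file's dependency_libs line drags extra -l flags into every link
# that uses the library; list them to see where -lnet enters the picture.
# On a real box: grep dependency_libs /usr/local/lib/libldap.la
la_sample="dependency_libs=' -L/usr/local/lib -llber -lnet -lresolv'"
printf '%s\n' "$la_sample" | tr " '" '\n\n' | grep '^-l'
# prints: -llber, -lnet, -lresolv (one per line)
```

If an installed libldap.la (or liblber.la) lists -lnet this way, the machine 
that builds cleanly probably has a .la without that entry; installing libnet or 
removing the stale reference are the usual options.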



RE: [squid-users] Re: Squid Compile Errors

2012-03-20 Thread Justin Lawler
Hi Gareth,

OpenLDAP installed, as well as 'SUNWlldap'

root@mib01 / pkginfo | grep [lL][dD][aA][pP]
application SMColdap   openldap
system      SUNWlldap  LDAP Libraries

I see libnet.so libraries as part of the Java runtime on the machine. I guess 
these are different?

Looks like the best way forward is to install libnet anyway.

Thanks for the help!
Justin


-Original Message-
From: GarethC [mailto:gar...@garethcoffey.com] 
Sent: Tuesday, March 20, 2012 11:55 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: Squid Compile Errors

Hi Justin,

I've compiled Squid on Solaris systems before and there can be the odd 
dependency nightmare.
I haven't compiled Squid with ldap_auth but after a bit of digging it looks 
like you may need to install libnet (no dependencies other than possibly
libpcap) as this is a dependency of libldap.

What version of LDAP are you running on the box? (i.e. OpenLDAP?)

If you run 'cd /; find ./ -name libnet.so' on your other Solaris box and it's 
found, that may suggest libnet was installed from source as opposed to from a 
package.

Thanks,
Gareth


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Compile-Errors-tp4487870p4489311.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] Re: Squid Compile Errors

2012-03-20 Thread Justin Lawler
FYI - configure options being used:

./configure --enable-icap-client --enable-ssl --with-openssl=/usr/local/ssl 
--prefix=/apps/cwapps/squid-3119 --enable-storeio=diskd,aufs,ufs 
--with-aio-threads=N --enable-removal-polices=heap,lru --enable-icmp 
--enable-snmp --enable-cache-digests  --enable-useragent-log 
--enable-referer-log --enable-follow-x-forwarded-for --disable-ident-lookups 
--enable-auth-basic=LDAP,MSNY,POP3,SMB --enable-auth-ntlm=smb_lm,no_check  

Thanks and regards,
Justin

-Original Message-
From: Justin Lawler 
Sent: Wednesday, March 21, 2012 12:16 PM
To: 'GarethC'; squid-users@squid-cache.org
Subject: RE: [squid-users] Re: Squid Compile Errors

Hi Gareth,

OpenLDAP installed, as well as 'SUNWlldap'

root@mib01 / pkginfo | grep [lL][dD][aA][pP]
application SMColdap   openldap
system      SUNWlldap  LDAP Libraries

I see libnet.so libraries as part of the Java runtime on the machine. I guess 
these are different?

Looks like the best way forward is to install libnet anyway.

Thanks for the help!
Justin


-Original Message-
From: GarethC [mailto:gar...@garethcoffey.com] 
Sent: Tuesday, March 20, 2012 11:55 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: Squid Compile Errors

Hi Justin,

I've compiled Squid on Solaris systems before and there can be the odd 
dependency nightmare.
I haven't compiled Squid with ldap_auth but after a bit of digging it looks 
like you may need to install libnet (no dependencies other than possibly
libpcap) as this is a dependency of libldap.

What version of LDAP are you running on the box? (i.e. OpenLDAP?)

If you run 'cd /; find ./ -name libnet.so' on your other Solaris box and it's 
found, that may suggest libnet was installed from source as opposed to from a 
package.

Thanks,
Gareth




[squid-users] Percentages of Squid Instances Running on Different OSs

2012-03-02 Thread Justin Lawler
Hi,

Is there any analysis of what percentage of squid instances are running on each 
OS? Which OS is the most popular, and by how much?

Thanks,
Justin



RE: [squid-users] compile error on Squid 3.1.18

2012-02-12 Thread Justin Lawler
Thanks Amos - but I'm trying to find the library name, and can't seem to google 
anything on 'libber'. I can't figure it out from the 'configure' script either. 
Any hint?

And where would the usual directory containing this library be? We've 
previously built existing versions of squid (3.1.16) on this machine, so the 
library should be there.

BTW - this is a Solaris machine. And we're only having problems compiling on 
one instance, so the other instances should have all the dependencies.

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Saturday, February 11, 2012 10:17 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] compile error on Squid 3.1.18

On 11/02/2012 8:01 p.m., Justin Lawler wrote:
 Apologies - actual error earlier in the process - when compiling LDAP:


 Making all in LDAP
 gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src  
 -I../../../include  -I../../../libltdl  -I. -Wall -Wpointer-arith 
 -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments 
 -Werror -D_REENTRANT -pthreads -Wall -g -O2 -MT squid_ldap_auth.o -MD -MP -MF 
 .deps/squid_ldap_auth.Tpo -c -o squid_ldap_auth.o squid_ldap_auth.c
 mv -f .deps/squid_ldap_auth.Tpo .deps/squid_ldap_auth.Po
 /bin/bash ../../../libtool --tag=CC --mode=link gcc -Wall -Wpointer-arith 
 -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments 
 -Werror -D_REENTRANT -pthreads -Wall -g -O2   -g -o squid_ldap_auth 
 squid_ldap_auth.o ../../../compat/libcompat.la  -L../../../lib -lmiscutil  
 -lldap -lm -lsocket -lresolv -lnsl
 libtool: link: gcc -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes 
 -Wmissing-declarations -Wcomments -Werror -D_REENTRANT -pthreads -Wall -g -O2 
 -g -o squid_ldap_auth squid_ldap_auth.o  ../../../compat/.libs/libcompat.a 
 /usr/sfw/lib/libstdc++.so -L/usr/sfw/lib -lgcc_s 
 -L/apps/cwapps/deployCSL/squid-3.1.19/lib -lmiscutil -lldap -lm -lsocket 
 -lresolv -lnsl -pthreads -R/usr/sfw/lib -R/usr/sfw/lib
 Undefined   first referenced
   symbol in file
 ldap_start_tls_ssquid_ldap_auth.o
 ldap_initialize squid_ldap_auth.o
 ber_pvt_opt_on  squid_ldap_auth.o
 ld: fatal: Symbol referencing errors. No output written to 
 squid_ldap_auth
 collect2: ld returned 1 exit status
 *** Error code 1

You do not seem to have a development version of the liblber library installed, 
or at least not in a place where Squid can locate it.
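
For what it's worth, all three missing symbols (ldap_start_tls_s, 
ldap_initialize, ber_pvt_opt_on) come from OpenLDAP 2.x's libldap/liblber, so a 
quick check is whether the libraries actually being linked export them. A 
sketch, with fabricated nm output so the grep logic is visible; the /usr/local 
paths are assumptions:

```shell
# On the failing box, run nm against the libraries the link line resolves to:
#   nm -D /usr/local/lib/libldap.so | grep ldap_start_tls_s
#   nm -D /usr/local/lib/liblber.so | grep ber_pvt_opt_on
# A defined symbol shows up with type 'T'; count such entries in sample output:
nm_sample='00001234 T ldap_start_tls_s
00005678 T ldap_initialize
00009abc U some_undefined_ref'
printf '%s\n' "$nm_sample" | grep -c ' T '
# prints 2
```

If nm shows the symbols absent or undefined, pointing CPPFLAGS/LDFLAGS at a 
full OpenLDAP development install before running configure is the usual fix.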

Amos



RE: [squid-users] compile error on Squid 3.1.18

2012-02-10 Thread Justin Lawler
HI,

I'm getting the same problem on 3.1.19 on solaris:

list='compat lib snmplib libltdl scripts src icons  errors doc helpers 
test-suite tools'; for subdir in $list; do \
  echo Making $target in $subdir; \
  if test $subdir = .; then \
dot_seen=yes; \
local_target=$target-am; \
  else \
local_target=$target; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make $local_target) \
  || eval $failcom; \
done; \
if test $dot_seen = no; then \
  make  $target-am || exit 1; \
fi; test -z $fail
make: Fatal error: Command failed for target `all-recursive'

Any suggestions?

Thanks,
Justin

-Original Message-
From: kzl [mailto:kwan...@rocketmail.com] 
Sent: Tuesday, December 06, 2011 2:20 PM
To: squid-users@squid-cache.org
Subject: [squid-users] compile error on Squid 3.1.18

There's an error thrown while compiling Squid 3.1.18 on Solaris SPARC which we 
never experienced in earlier versions like 3.1.17, 3.1.16, and 3.0.15. Does 
anyone have any idea what the problem is?

Making all in compat
/bin/bash ../libtool --tag=CXX    --mode=link g++ -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -pthreads -g -O2   -g 
-o libcompat.la  assert.lo compat.lo GnuRegex.lo
libtool: link: false cru .libs/libcompat.a .libs/assert.o .libs/compat.o 
.libs/GnuRegex.o
*** Error code 1
make: Fatal error: Command failed for target `libcompat.la'
Current working directory /home/squid-3.1.18/compat
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
    *=* | --[!k]*);; \
    *k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='compat lib snmplib libltdl scripts src icons errors doc helpers test-suite tools'; \
for subdir in $list; do \
  echo Making $target in $subdir; \
  if test $subdir = .; then \
    dot_seen=yes; \
    local_target=$target-am; \
  else \
    local_target=$target; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make $local_target) \
  || eval $failcom; \
done; \
if test $dot_seen = no; then \
  make  $target-am || exit 1; \
fi; test -z $fail
make: Fatal error: Command failed for target `all-recursive'



RE: [squid-users] compile error on Squid 3.1.18

2012-02-10 Thread Justin Lawler
Apologies - actual error earlier in the process - when compiling LDAP:


Making all in LDAP
gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src  
-I../../../include  -I../../../libltdl  -I. -Wall -Wpointer-arith 
-Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror 
-D_REENTRANT -pthreads -Wall -g -O2 -MT squid_ldap_auth.o -MD -MP -MF 
.deps/squid_ldap_auth.Tpo -c -o squid_ldap_auth.o squid_ldap_auth.c
mv -f .deps/squid_ldap_auth.Tpo .deps/squid_ldap_auth.Po
/bin/bash ../../../libtool --tag=CC --mode=link gcc -Wall -Wpointer-arith 
-Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror 
-D_REENTRANT -pthreads -Wall -g -O2   -g -o squid_ldap_auth squid_ldap_auth.o 
../../../compat/libcompat.la  -L../../../lib -lmiscutil  -lldap -lm -lsocket 
-lresolv -lnsl
libtool: link: gcc -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes 
-Wmissing-declarations -Wcomments -Werror -D_REENTRANT -pthreads -Wall -g -O2 
-g -o squid_ldap_auth squid_ldap_auth.o  ../../../compat/.libs/libcompat.a 
/usr/sfw/lib/libstdc++.so -L/usr/sfw/lib -lgcc_s 
-L/apps/cwapps/deployCSL/squid-3.1.19/lib -lmiscutil -lldap -lm -lsocket 
-lresolv -lnsl -pthreads -R/usr/sfw/lib -R/usr/sfw/lib
Undefined   first referenced
 symbol in file
ldap_start_tls_ssquid_ldap_auth.o
ldap_initialize squid_ldap_auth.o
ber_pvt_opt_on  squid_ldap_auth.o
ld: fatal: Symbol referencing errors. No output written to squid_ldap_auth
collect2: ld returned 1 exit status
*** Error code 1



-Original Message-
From: Justin Lawler 
Sent: Saturday, February 11, 2012 2:52 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] compile error on Squid 3.1.18

Hi,

I'm getting the same problem on 3.1.19 on solaris:

list='compat lib snmplib libltdl scripts src icons  errors doc helpers 
test-suite tools'; for subdir in $list; do \
  echo Making $target in $subdir; \
  if test $subdir = .; then \
dot_seen=yes; \
local_target=$target-am; \
  else \
local_target=$target; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make $local_target) \
  || eval $failcom; \
done; \
if test $dot_seen = no; then \
  make  $target-am || exit 1; \
fi; test -z $fail
make: Fatal error: Command failed for target `all-recursive'

Any suggestions?

Thanks,
Justin

-Original Message-
From: kzl [mailto:kwan...@rocketmail.com]
Sent: Tuesday, December 06, 2011 2:20 PM
To: squid-users@squid-cache.org
Subject: [squid-users] compile error on Squid 3.1.18

There's an error thrown while compiling Squid 3.1.18 on Solaris SPARC which we 
never experienced in earlier versions like 3.1.17, 3.1.16, and 3.0.15. Does 
anyone have any idea what the problem is?

Making all in compat
/bin/bash ../libtool --tag=CXX    --mode=link g++ -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -pthreads -g -O2   -g 
-o libcompat.la  assert.lo compat.lo GnuRegex.lo
libtool: link: false cru .libs/libcompat.a .libs/assert.o .libs/compat.o 
.libs/GnuRegex.o
*** Error code 1
make: Fatal error: Command failed for target `libcompat.la'
Current working directory /home/squid-3.1.18/compat
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
    *=* | --[!k]*);; \
    *k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='compat lib snmplib libltdl scripts src icons errors doc helpers test-suite tools'; \
for subdir in $list; do \
  echo Making $target in $subdir; \
  if test $subdir = .; then \
    dot_seen=yes; \
    local_target=$target-am; \
  else \
    local_target=$target; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make $local_target) \
  || eval $failcom; \
done; \
if test $dot_seen = no; then \
  make  $target-am || exit 1; \
fi; test -z $fail
make: Fatal error: Command failed for target `all-recursive'



[squid-users] ICAP Processing Times

2012-01-26 Thread Justin Lawler
Hi,

We are examining the performance of our ICAP server, and trying to determine 
how squid behaves for ICAP RESPMODs:
1) does squid stream the response over an ICAP connection as soon as it 
starts to receive the response?
2) does squid wait until it has received the entire response before it 
starts to send the response over ICAP RESPMOD?

While the first (streaming) method could potentially be faster, it could also 
hold up resources on the ICAP server for longer, especially if the remote 
response is slow - which is a concern. It would also make the logged response 
times on our ICAP server unrepresentative of its performance.

Thanks and regards,
Justin



RE: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2012-01-11 Thread Justin Lawler
Hi,

Any timeline for the 3.1.19 release, or any beta releases? :-)

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, December 09, 2011 7:23 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - 
store.cc

On 9/12/2011 9:19 p.m., Justin Lawler wrote:
 Hi Amos,

 Is there a beta testing process where we can be notified before a release is 
 planned - so we can do some pre-release testing on these patches?

 Thanks and regards,
 Justin

Notifications are processed through bugzilla, with "applied to squid-X" updates 
going out to everyone subscribed to the relevant bug. At that time, or shortly 
after, the patch is available on the changesets page. For changes and fixes 
without specific bugs there are no explicit notifications, usually just 
feedback to the discussion thread which brought the issue to our attention for 
fixing.

Pre-release snapshots of everything (tarballs, checkpoints, dailies, nightlies, 
bundles, whatever you call them) are released for testing on a daily basis, 
provided they build on a test machine. Those who want to beta-test everything 
on an ongoing basis usually rsync the sources or follow the series bzr branch, 
then create bug reports for issues found there. The reports prevent me from 
thinking the state is stable enough to tag the snapshot revision for release, 
and they create a point for notifications back to the tester when fixed.

HTH
AYJ




RE: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2011-12-09 Thread Justin Lawler
Hi Amos,

Is there a beta testing process where we can be notified before a release is 
planned - so we can do some pre-release testing on these patches?

Thanks and regards,
Justin


-Original Message-
From: kzl [mailto:kwan...@rocketmail.com] 
Sent: Thursday, December 08, 2011 2:11 PM
To: Amos Jeffries; squid-users@squid-cache.org
Subject: Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - 
store.cc

So it needs changes to two files.
Thanks.

cheers, 
KZ


- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc: 
Sent: Thursday, December 8, 2011 2:04 PM
Subject: Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - 
store.cc

On 8/12/2011 6:33 p.m., kzl wrote:
 Hi Amos, 

  As per http://bugs.squid-cache.org/show_bug.cgi?id=3440, how do I change 
StoreEntry::deferProducer to not be const?
 I tried just removing the const word on the line; it shows:
 store.cc:372: error: prototype for `void 
 StoreEntry::deferProducer(RefCount<AsyncCall>)' does not match any in class 
 `StoreEntry'
 Store.h:194: error: candidate is: void StoreEntry::deferProducer(const 
 RefCount<AsyncCall>&)
 *** Error code 1

 cheers,
 kz


This is the patch that went on top of 3.1.18:
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10415.patch

Amos




RE: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-27 Thread Justin Lawler
Thanks Amos, so it seems like the next squid patch will be released sooner 
rather than later? :-)

Thanks and regards,
Justin

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, November 25, 2011 7:58 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

On 25/11/2011 8:53 p.m., Justin Lawler wrote:
 Hi, thanks Amos for this.

 Is there any site for this patch release - giving a list of all the changes 
 going into it? And also the latest estimated due date?

 Thanks,
 Justin

http://www.squid-cache.org/Versions/v3/3.1/changesets/

Since I wrote that last message there have been a bunch more 3.1 bugs fixed :). 
The amount-of-changes criterion for a new release is now nearly reached. I just 
got mailed about another Solaris bug, about crashes on error page display. It 
looks easy to fix and will be enough to push out a new release when it's done.

Amos


 -Original Message-
 From: Amos Jeffries

 On 15/11/2011 7:30 p.m., Justin Lawler wrote:
 Thanks Amos,

 Just BTW - is there a scheduled date for 3.1.17 build currently?
 Probably mid or end of Dec.

 Amos

 This message and the information contained herein is proprietary and 
 confidential and subject to the Amdocs policy statement, you may 
 review at http://www.amdocs.com/email_disclaimer.asp


Hmm :( . A copyright disclaimer not added by me is being attributed to my text, 
in violation of the CreativeCommons copyright on my mailing list submissions. 
Your email system needs a fix quite urgently. Fair cop adding it to your own 
emails, but attributing it to third-party creations without prior consent is a 
bit of a problem.

Amos



RE: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-24 Thread Justin Lawler
Hi, thanks Amos for this.

Is there any site for this patch release - giving a list of all the changes 
going into it? And also the latest estimated due date?

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, November 15, 2011 6:33 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

On 15/11/2011 7:30 p.m., Justin Lawler wrote:
 Thanks Amos,

 Just BTW - is there a scheduled date for 3.1.17 build currently?

Probably mid or end of Dec.

Amos




RE: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-14 Thread Justin Lawler
Thanks Amos,

Just BTW - is there a scheduled date for 3.1.17 build currently?

Thanks and regards,
Justin

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, November 14, 2011 2:37 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

On 14/11/2011 5:25 p.m., Justin Lawler wrote:
 Hi,

 We're just reviewing all the patches that went into 3.1.16, and we came 
 across an old issue:

 http://bugs.squid-cache.org/show_bug.cgi?id=2910

 This concerns running squid & an ICAP server on the same Solaris box.

 A patch has already been provided for this issue. We're wondering whether 
 this patch is planned for any official release of squid in the future?


Thanks for the reminder. It seems to have hung waiting for a better fix, 
although I am not sure what could be better. It has now been applied and should 
be in the next release.

Amos



[squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-13 Thread Justin Lawler
Hi,

We're just reviewing all the patches that went into 3.1.16, and we came across 
an old issue:

http://bugs.squid-cache.org/show_bug.cgi?id=2910

This concerns running squid & an ICAP server on the same Solaris box.

A patch has already been provided for this issue. We're wondering whether this 
patch is planned for any official release of squid in the future?

Thanks and regards,
Justin





[squid-users] Log file roll over Issues

2011-11-08 Thread Justin Lawler
Hi,

We're having issues with log file roll over in squid - when squid is under 
heavy load and the log files are very big, triggering a log file roll over 
(squid -k rotate) makes squid unresponsive, and it has to be killed manually 
with a kill -9.

Has this ever been seen before?

We're running squid 3.1.16 on solaris/sparc (t5220) machines. We're using ICAP 
for both REQMOD & RESPMOD. We have not been able to reproduce the issue on 
solaris AMD machines. Note - the sparc machines are a good deal slower than the 
AMD machines, although have support for 64 threads on 8 cores.


Thanks and regards,
Justin
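[For context, the rotation being triggered here has two pieces: the `logfile_rotate` directive, which caps how many rotated generations Squid keeps, and an external `squid -k rotate` call, usually driven from cron. A minimal sketch with an assumed install path and an illustrative schedule:]

```
# squid.conf: keep 10 rotated generations of each log
logfile_rotate 10

# crontab entry: ask Squid to rotate nightly at 04:00
0 4 * * * /usr/local/squid/sbin/squid -k rotate
```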



[squid-users] Squid Crashed - then ran out of file descriptors after restart

2011-11-06 Thread Justin Lawler
Hi,

We're running squid 3.1.16 on solaris on a sparc box.

We're running it against an ICAP server, and were testing some scenarios when 
ICAP server went down, how squid would handle it. After freezing the ICAP 
server, squid seemed to have big problems.

Once it was back up again, it kept on sending OPTIONS requests to the server - 
but squid itself became completely unresponsive. It wouldn't accept any further 
requests, you couldn't use squidclient against it or do a squid reconfigure, 
and it was not responding to 'squid -k shutdown', so it had to be manually 
killed with a 'kill -9'.

We then restarted the squid instance, and it started to go crazy, file 
descriptors reaching the limit (4096 - previously it never went above 1k during 
long stability test runs), and a load of 'Queue Congestion' errors in the logs. 
Tried to restart it again, and it seemed to behave better then, but still the 
number of file descriptors is very big (above 3k).

Has this been seen before? Should we try to clear the cache & restart again? 
Any other maintenance that could be done?

Thanks and regards,
Justin



RE: [squid-users] Squid Crashed - then ran out of file descriptors after restart

2011-11-06 Thread Justin Lawler
Thanks Amos for the explanation. Apologies for the lack of clarity.

FYI - we have ICAP connection set up to be a 'critical' service.

Do you know if the squid ICAP functionality has changed between 3.0 & 3.1? We 
were not seeing some of these issues previously. For instance, if the ICAP 
server went down previously - after the ICAP timeout (icap_io_timeout), squid 
clients would just receive 500 responses for all queued connections (as seen in 
the squid access logs), effectively limiting the number of connections to be 
queued. Are you saying that *all* client connections will now be queued - even 
after the timeout? If ICAP server went down for a long period, and connections 
kept on being made to squid, the number of queued connections would be very 
big. This could easily stop squid from being responsive for a long time, or 
completely - would this be correct?

Note - we are still seeing very high file descriptor usage in squid - even 3 
hours after the restarts. Currently the file descriptor count is still about 
3.2k, and has never gone much below this number. Would it take this long to 
rebuild the journal? 

I'm just noticing the 'TIME_WAIT' connections between squid & ICAP server are 
also very high - above 2k - and have been like this for the last 30 minutes. Is 
this anything to worry about? The number of 'ESTABLISHED' connections never 
goes above 50.

Queue Congestion Errors:
2011/11/06 08:03:07| squidaio_queue_request: WARNING - Queue congestion

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Sunday, November 06, 2011 6:22 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Crashed - then ran out of file descriptors 
after restart

On 6/11/2011 9:15 p.m., Justin Lawler wrote:
 Hi,

 We're running squid 3.1.16 on solaris on a sparc box.

 We're running it against an ICAP server, and were testing some scenarios when 
 ICAP server went down, how squid would handle it. After freezing the ICAP 
 server, squid seemed to have big problems.

For reference, the expected behaviour is this:

** if squid is configured to allow bypass of the ICAP service
   -- no noticeable problems. possibly faster response time for clients.

** if squid is configured not to bypass on failures (ie critical ICAP service)
   -- New connections continue to be accepted.
   -- All traffic needing ICAP halts waiting recovery, RAM and FD consumption 
rises until available resources are full.
   -- On ICAP recovery the traffic being held gets sent to it and service 
resumes as the results come back.
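[A sketch of how that choice shows up in squid.conf, using the 3.0/3.1-style positional bypass flag seen in the configs quoted elsewhere in these threads; service name, class name, and port are illustrative:]

```
# Third field is the bypass flag:
#   1 = bypassable (skip ICAP and keep serving if the service fails)
#   0 = essential  (traffic queues until the service recovers)
icap_enable on
icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_1 service_1
icap_access class_1 allow all
```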


 Once it was back up again, it kept on sending OPTION requests to the server - 
 but squid itself became completely unresponsive. It wouldn't accept any 
 further requests, you couldn't use squidclient against it or doing a squid 
 reconfigure, and was not responding to 'squid -k shutdown', so had to be 
 manually killed with a 'kill -9'.

This description is not very clear. You seem to use 'it' to refer to several 
different things in the first sentence of paragraph 2.

Apparently:
  * it comes back up again. ... apparently referring to ICAP?   JL (yes)
  * it sends OPTIONS requests ... apparently referring to Squid now? or to 
some unmentioned backend part of the ICAP service?   JL (referring to 
squid)
  * squid itself is unresponsive ... waiting for queued requests to get through 
ICAP and the network fetch stages perhaps? Noting that ICAP may be slowed as 
it faces the spike of waiting traffic from Squid.



 We then restarted the squid instance, and it started to go crazy, file 
 descriptors reaching the limit (4096 - previously it never went above 
 1k during long

kill -9 causes Squid to terminate before saving the cache index or closing 
the journal properly. Thus on restart the journal is discovered corrupt and a 
DIRTY rebuild is begun, scanning the entire disk cache object by object to 
rebuild the index and journal contents. This can consume a lot of FD, for a 
period of time proportional to the size of your disk cache(s).
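[For completeness: a clean stop lets Squid write its swap.state journal and avoid the DIRTY rebuild on the next start. A sketch; the grace period value is illustrative:]

```
# squid.conf: how long active client connections get to finish
# before shutdown proceeds (the default is 30 seconds)
shutdown_lifetime 30 seconds
```

Then stop with `squid -k shutdown`, and reserve `kill -9` for a process that is truly hung.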

Also, clients can hit Squid with a lot of connections that accumulated during 
the outage. Which each have to be processed in full, including all lookups and 
tests. Immediately. This startup spike is normal immediately after a 
start/restart or reconfigure when all the active running state is erased and 
requires rebuilding.

The lag problems and resource/queue overloads can be expected to drop away 
relatively quickly as the normal running state gets rebuilt from the new 
traffic. The FD consumption from the cache scan will disappear abruptly when that 
process completes.

 stability test runs), and a load of 'Queue Congestion' errors in the logs. 
 Tried to restart it again, and it seemed to behave better then, but still the 
 number of file descriptors is very big (above 3k).

Any particular queue mentioned?


Amos

RE: [squid-users] Squid Crashed - then ran out of file descriptors after restart

2011-11-06 Thread Justin Lawler
Thanks Amos.

Checked that ' mgr:filedescriptors' - wasn't aware of this functionality before.

95% of the file descriptors have the Description ' Waiting for next request'

-
21 Socket   40 222*    5237  10.1.5.53:53279   Waiting for next request
22 Socket   30 264*    5237  10.1.5.53:53029   Waiting for next request
23 Socket   95 266*  540530  10.1.5.53:54489   Waiting for next request
24 Socket  110 321*  539782  10.1.5.54:53322   Waiting for next request
25 Socket   49 306*    5344  10.1.5.53:53435   Waiting for next request
26 Socket   29 295*  130535  10.1.5.54:51444   Waiting for next request
-
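[A small hypothetical helper for tallying a report like the one above; the column layout is assumed from this excerpt, not from any documented format:]

```python
# Hypothetical helper: count how many descriptors in a
# `squidclient mgr:filedescriptors` report are idle persistent
# connections ("Waiting for next request").
def count_waiting(listing: str) -> int:
    """Count report lines whose description is 'Waiting for next request'."""
    return sum(
        1 for line in listing.splitlines()
        if line.strip().endswith("Waiting for next request")
    )

# Sample rows in the layout shown above (assumed, for illustration)
sample = """\
21 Socket   40 222*    5237  10.1.5.53:53279   Waiting for next request
22 Socket   30 264*    5237  10.1.5.53:53029   Waiting for next request
23 Socket   95 266*  540530  10.1.5.53:54489   Reading next request
"""
print(count_waiting(sample))  # 2
```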

This is in a performance lab, with a load injector hitting squid, trying to 
simulate a production system. Could it be a problem with the load injector not 
closing connections? Could the load injector have opened many connections when 
squid was unresponsive (because ICAP was unresponsive), and those connections 
never got closed?

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, November 07, 2011 8:14 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid Crashed - then ran out of file descriptors 
after restart

 On Sun, 6 Nov 2011 10:52:41 +, Justin Lawler wrote:
 Thanks Amos for the explanation. Apologies for the lack of clarity.

 FYI - we have ICAP connection set up to be a 'critical' service.

 Do you know if the squid ICAP functionality has changed between 3.0 & 
 3.1? we were not seeing some of these issues previously. For instance, 
 if the ICAP server went down previously - after the ICAP timeout 
 (icap_io_timeout), squid clients would just receive 500 responses for 
 all queued connections (as seen in the squid access logs), effectively 
 limiting the number of connections to be queued. Are you saying that
 *all* client connections will now be queued - even after the timeout?

 All client requests are queued anyway. The only difference is the length of 
queue vs processing time vs duration of timeouts. Normally these are nowhere 
near colliding, with quick processing and long timeout. Under spike conditions 
the 500 happens when the queue fills or the timeout gets hit. When the FD count 
runs low Squid leaves new connections in the TCP buffers and they get 'cannot 
connect' errors instead.

 The only big change I can think of between 3.0 and 3.1 is that 3.0  queued as 
LIFO and serviced new traffic fastest. Leaving old to timeout  and die. 3.1 
fixed that and services FIFO queues of all clients,  spreading the lag more 
evenly across the set.

 3.1 has additional capabilities in ICAP service so its possibly  different. 
You will have to contact the ICAP developers (at
 measurement-factory.com) for particular details in this area.


 If ICAP server went down for a long period, and connections kept on 
 being made to squid, the number of queued connections would be very 
 big. This could easily stop squid from being responsive for a long 
 time, or completely - would this be correct?

 Yes. After a long period you should see the 500's with a timeout error  page 
start appearing in 3.1 as well AFAIK.


 Note - we are still seeing very high file descriptor usage in squid 
 still - even 3 hours after the restarts. Currently file descriptor is 
 still about 3.2k, and has never gone much below this number. Would it 
 take this long to rebuild the journal?

 Maybe. Has cache.log mentioned swap rebuild completion or still  periodically 
logging a N done messages? I've seen a 200GB caches take  a day or so to 
rebuild.


 I'm just noticing the 'TIME_WAIT' connections between squid & ICAP
 server is also very high - above 2k - and has been like this for the
 last 30 minutes. Is this anything to worry about? The number of
 'ESTABLISHED' connections never goes above 50.

 This could just be a sign that your services are processing more than 
 40 requests per second over the last few minutes.
  Or that TCP is holding the sockets in TIME_WAIT a bit long (can be 
 dangerous to touch that, so careful there).
  Or that Squid was holding a lot of persistent connections from the 
 traffic spike and released them recently (~15 minutes in TIME_WAIT is 
 normal IIRC). Although ESTABLISHED being low at the start indicates its 
 probably not this.



 Queue Congestion Errors:
 2011/11/06 08:03:07| squidaio_queue_request: WARNING - Queue 
 congestion


 That is Disk I/O queue. From the cache scan.
 http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion


 The squid filedescriptors management report (squidclient 
 mgr:filedescriptors) lists what each FD is used for, you can check it 
 for how many of those ~3-4K are for disk files and how many for network 
 sockets.

 Amos


 -Original Message-
 From: Amos Jeffries

RE: [squid-users] ICAP on Solaris

2011-10-29 Thread Justin Lawler
Hi Amos,

Thanks for the response. We're not 100% sure about all the different scenarios 
& are still investigating. We can follow up with you on this.

In our application, we do some response modifications, so it's necessary to get 
the RESPMOD here. Would the only way to do this be to set up a cache peer, and 
on the squid instance with ICAP enabled, disable the caching?

As far as the squid cache - if an object does not exist in cache & has to be 
downloaded - our application will then do the response modifications. Would the 
'modified' response then be cached for future calls - or would the original one?

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Saturday, October 29, 2011 12:28 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICAP on Solaris

On 29/10/11 00:23, Justin Lawler wrote:
 Hi,

 We're using ICAP on Solaris for squid 3.1.16 and we're seeing some strange 
 behavior.

 When cache is disabled and/or using 'ufs' caching policy, all works ok, but 
 if we change to 'aufs' and enable caching, we're seeing REQMOD's coming 
 through, but no 'RESPMOD's.


Note that REQMOD is asking about the client request. Before it gets satisfied 
by Squid. It gets called for all requests.

Whereas, RESPMOD is asking about some reply, which has *already* been satisfied 
by some server. It _never_ gets called on TCP_HIT.

http://wiki.squid-cache.org/Features/ICAP#Squid_Details

Most of what you describe is expected behaviour (cache enabled == less RESPMOD).
  Having same/more RESPMOD with UFS despite caching enabled sounds like a bug.

 We see the same issue on 3.0.15.

 Is this a known issue - or anybody see this before? Could it be a 
 configuration option? Should we consider sticking with ufs, or changing to 
 'diskd'?


It has not been mentioned in this way before. Though maybe was described in 
other terms.

Note that the I/O system functions used to read/write data onto HDD in no way 
affects the decision about whether to pass traffic over the network to ICAP.

However, if one method of I/O were corrupting the objects during load it could 
cause Squid to abandon the cached object and fetch a new one. This would appear 
as similar behaviour to what you describe for UFS with caching enabled.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.16
   Beta testers wanted for 3.2.0.13



[squid-users] ICAP on Solaris

2011-10-28 Thread Justin Lawler
Hi,

We're using ICAP on Solaris for squid 3.1.16 and we're seeing some strange 
behavior.

When cache is disabled and/or using 'ufs' caching policy, all works ok, but if 
we change to 'aufs' and enable caching, we're seeing REQMOD's coming through, 
but no 'RESPMOD's.

We see the same issue on 3.0.15.

Is this a known issue - or anybody see this before? Could it be a configuration 
option? Should we consider sticking with ufs, or changing to 'diskd'?

Thanks and regards,
Justin



RE: [squid-users] RE: Essential ICAP service down error not working reliably

2011-10-19 Thread Justin Lawler
Hi Amos,

We're seeing these OPTIONS health-check requests coming in every second in the 
ICAP server. Is this correct behavior?

Is this customizable in the squid.conf file? Or does squid calculate this 
setting itself?

We're seeing these requests come in every second in production, but in our test 
environment, they're coming in every 40-60 seconds - and we're a little 
confused as to why.

Thanks and regards,
Justin


-Original Message-
From: Justin Lawler 
Sent: Tuesday, October 18, 2011 7:12 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] RE: Essential ICAP service down error not working 
reliably

HI Amos, thanks for that.

Yea - we're in the middle of running against a JVM with tuned GC settings, 
which we hope will resolve the issue.

One problem is we need to be 100% sure the issue is being caused by long GC 
pauses, as the patch has to go into a busy production system. Currently we're 
not, as we're only getting ICAP errors for maybe 20% of the long GC pauses.

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, October 18, 2011 7:03 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Essential ICAP service down error not working 
reliably

On 18/10/11 18:02, Justin Lawler wrote:
 Hi,

 Just a follow up to this. Anyone know how/when squid will trigger ICAP 
 service as down?


When it stops responding.

 From ICAP logs, we can see squid is sending in an 'OPTIONS' request 
 every second. Is this request a health-check on the ICAP service? Or 
 is there any other function to it?


Yes, and yes. A service responding to OPTIONS is obviously running.

See the ICAP specification for what else it's used for:
http://www.rfc-editor.org/rfc/rfc3507.txt section 4.10

 We're still seeing very long pauses in our ICAP server that should 
 really trigger an ICAP error on squid, but it isn't always.

 Thanks, Justin

Can you run it against a better GC? I've heard that there were competing GC 
algorithms in Java these last few years with various behaviour benefits.



 -Original Message-
  From: Justin Lawler

 Hi,

 We have an application that integrates with squid over ICAP - a java 
 based application. We're finding that the java application has very 
 long garbage collection pauses at times (20+ seconds), where the 
 application becomes completely unresponsive.

 We have squid configured to use this application as an essential 
 service, with a timeout for 20 seconds. If the application goes into a 
 GC pause, squid can throw an 'essential ICAP service is down'
 error.

 The problem is most of the time it doesn't. It only happens maybe 20% 
 of the time - even though some of the pauses are 25 seconds+.

 Squid is setup to do an 'OPTIONS' request on the java application 
 every second, so I don't understand why it doesn't detect the java 
 application becoming unresponsive.


It's very likely these requests are being made and being serviced, just very 
much later.

http://www.squid-cache.org/Doc/config/icap_connect_timeout/
   Note the default is: 30-60 seconds inherited from [peer_]connect_timeout.

Also http://www.squid-cache.org/Doc/config/icap_service_failure_limit/

So 10 failures in a row are required to detect an outage. Each failure takes 
30+ seconds to be noticed.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.16
   Beta testers wanted for 3.2.0.13



RE: [squid-users] RE: Essential ICAP service down error not working reliably

2011-10-18 Thread Justin Lawler
HI Amos, thanks for that.

Yea - we're in the middle of running against a JVM with tuned GC settings, 
which we hope will resolve the issue.

One problem is we need to be 100% sure the issue is being caused by long GC 
pauses, as the patch has to go into a busy production system. Currently we're 
not, as we're only getting ICAP errors for maybe 20% of the long GC pauses.

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, October 18, 2011 7:03 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Essential ICAP service down error not working 
reliably

On 18/10/11 18:02, Justin Lawler wrote:
 Hi,

 Just a follow up to this. Anyone know how/when squid will trigger ICAP 
 service as down?


When it stops responding.

 From ICAP logs, we can see squid is sending in an 'OPTIONS' request 
 every second. Is this request a health-check on the ICAP service? Or 
 is there any other function to it?


Yes, and yes. A service responding to OPTIONS is obviously running.

See the ICAP specification for what else it's used for:
http://www.rfc-editor.org/rfc/rfc3507.txt section 4.10

 We're still seeing very long pauses in our ICAP server that should 
 really trigger an ICAP error on squid, but it isn't always.

 Thanks, Justin

Can you run it against a better GC? I've heard that there were competing GC 
algorithms in Java these last few years with various behaviour benefits.



 -Original Message-
  From: Justin Lawler

 Hi,

 We have an application that integrates with squid over ICAP - a java
 based application. We're finding that the java application has very
 long garbage collection pauses at times (20+ seconds), where the
 application becomes completely unresponsive.

 We have squid configured to use this application as an essential
 service, with a timeout for 20 seconds. If the application goes into
 a GC pause, squid can throw an 'essential ICAP service is down'
 error.

 The problem is most of the time it doesn't. It only happens maybe 20%
 of the time - even though some of the pauses are 25 seconds+.

 Squid is setup to do an 'OPTIONS' request on the java application
 every second, so I don't understand why it doesn't detect the java
 application becoming unresponsive.


It's very likely these requests are being made and being serviced, just 
very much later.

http://www.squid-cache.org/Doc/config/icap_connect_timeout/
   Note the default is: 30-60 seconds inherited from [peer_]connect_timeout.

Also http://www.squid-cache.org/Doc/config/icap_service_failure_limit/

So 10 failures in a row are required to detect an outage. Each failure 
takes 30+ seconds to be noticed.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.16
   Beta testers wanted for 3.2.0.13



[squid-users] RE: Essential ICAP service down error not working reliably

2011-10-17 Thread Justin Lawler
Hi,

Just a follow up to this. Anyone know how/when squid will trigger ICAP service 
as down?

From ICAP logs, we can see squid is sending in an 'OPTIONS' request every 
second. Is this request a health-check on the ICAP service? Or is there any 
other function to it?

We're still seeing very long pauses in our ICAP server that should really 
trigger an ICAP error on squid, but it isn't always.

Thanks,
Justin

-Original Message-
From: Justin Lawler 
Sent: Tuesday, October 11, 2011 4:29 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Essential ICAP service down error not working reliably

Hi,

We have an application that integrates with squid over ICAP - a java based 
application. We're finding that the java application has very long garbage 
collection pauses at times (20+ seconds), where the application becomes 
completely unresponsive. 

We have squid configured to use this application as an essential service, with 
a timeout for 20 seconds. If the application goes into a GC pause, squid can 
throw an 'essential ICAP service is down' error.

The problem is most of the time it doesn't. It only happens maybe 20% of the 
time - even though some of the pauses are 25 seconds+.

Squid is setup to do an 'OPTIONS' request on the java application every second, 
so I don't understand why it doesn't detect the java application becoming 
unresponsive.

Any feedback much appreciated.

Thanks,
Justin



[squid-users] Essential ICAP service down error not working reliably

2011-10-11 Thread Justin Lawler
Hi,

We have an application that integrates with squid over ICAP - a java based 
application. We're finding that the java application has very long garbage 
collection pauses at times (20+ seconds), where the application becomes 
completely unresponsive. 

We have squid configured to use this application as an essential service, with 
a timeout for 20 seconds. If the application goes into a GC pause, squid can 
throw an 'essential ICAP service is down' error.

The problem is most of the time it doesn't. It only happens maybe 20% of the 
time - even though some of the pauses are 25 seconds+.

Squid is setup to do an 'OPTIONS' request on the java application every second, 
so I don't understand why it doesn't detect the java application becoming 
unresponsive.

Any feedback much appreciated.

Thanks,
Justin



[squid-users] Internal 503 Errors on Squid

2011-10-10 Thread Justin Lawler
Hi,

We are just testing the 3.1.15 branch of squid on Solaris10, and we're getting 
503 errors whenever a URL is passed in with a query parameter, for instance 
googling an expression:

http://www.google.com/search?q=squid+query+paramter+3.1.15+solaris&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

We're not seeing any errors in the logs - the only error is in the web page and 
a 503 error in the squid access logs. Error returned to the user is:

-
The following error was encountered while trying to retrieve the URL: 
http://www.google.com/search?
Connection to 209.85.143.99 failed.
The system returned: (146) Connection refused
The remote host or network may be down. Please try the request again.
Your cache administrator is ad...@admin.com
-

Requests to just say google work fine.

We previously had an installation of 3.0.15 installed, so we're using almost 
the same configurations as for that installation.

Is this a known issue? Is there any dependency on OS libraries that may be 
missing? 

Regards,
Justin






[squid-users] Large Downloads Killing Squid

2011-09-06 Thread Justin Lawler
Hi,

We had a situation where a user tried to download 20 mp4 files (at 115+ Mb 
each) at the same time through squid, and squid died for up to 20 minutes 
after, not responding to any other traffic. Is this a known issue? Can we work 
around it with setting update in squid.conf?

We're running squid 3.0.15 on solaris connecting into an ICAP server.

Regards,
Justin




[squid-users] Suspicious Stats from squid client

2011-08-24 Thread Justin Lawler
Hi, we have a script to log CPU usage from squidclient mgr:info - see below.

Question is, how can the current stats be ~50% consistently, while the 5 minute 
average drops from 57% down to 4%? 

Where are these figures taken from anyway?

I'm running squid 3.0.15 on solaris.

Wed Aug 24 13:14:00 HKT 2011 Current=56.17% 5 Minute Avg=57.70% One Hour Avg=66.59%
Wed Aug 24 13:15:00 HKT 2011 Current=56.13% 5 Minute Avg=45.88% One Hour Avg=65.56%
Wed Aug 24 13:16:00 HKT 2011 Current=56.10% 5 Minute Avg=31.98% One Hour Avg=64.75%
Wed Aug 24 13:17:00 HKT 2011 Current=56.07% 5 Minute Avg=20.10% One Hour Avg=64.08%
Wed Aug 24 13:18:00 HKT 2011 Current=56.03% 5 Minute Avg=7.43% One Hour Avg=63.61%
Wed Aug 24 13:19:00 HKT 2011 Current=56.00% 5 Minute Avg=4.13% One Hour Avg=62.33%
Wed Aug 24 13:20:00 HKT 2011 Current=55.97% 5 Minute Avg=3.95% One Hour Avg=61.09%
Wed Aug 24 13:21:00 HKT 2011 Current=55.93% 5 Minute Avg=3.89% One Hour Avg=60.05%
Wed Aug 24 13:22:00 HKT 2011 Current=55.90% 5 Minute Avg=3.89% One Hour Avg=59.09%

Thanks and regards,
Justin
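[For scripts like the one producing the log above, a hypothetical helper that extracts the three CPU figures from one logged line, so they can be cross-checked programmatically; the line format is assumed from this thread:]

```python
import re

# Pattern for the three CPU figures in one line of the monitoring log
LINE_RE = re.compile(
    r"Current=(?P<cur>[\d.]+)%\s+5 Minute Avg=(?P<m5>[\d.]+)%\s+"
    r"One Hour Avg=(?P<h1>[\d.]+)%"
)

def parse_cpu_line(line: str):
    """Return (current, five_minute_avg, one_hour_avg) or None if no match."""
    m = LINE_RE.search(line)
    return None if m is None else tuple(float(v) for v in m.groups())

line = ("Wed Aug 24 13:14:00 HKT 2011 "
        "Current=56.17% 5 Minute Avg=57.70% One Hour Avg=66.59%")
print(parse_cpu_line(line))  # (56.17, 57.7, 66.59)
```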



[squid-users] ICAP Bypassing Causing Performance Issues

2011-08-22 Thread Justin Lawler
Hi,

We have had to put in a number of URLs to the squid bypass

icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_1 service_1

acl bypassIcapRequestURLregex urlpath_regex 
./squid-3/etc/byPass_ICAP_request_URLregex.properties
icap_access class_1 deny bypassIcapRequestURLregex


When we added 4 regular expressions to this file, we started to see the CPU 
usage going up quite a bit, and we started to see the number of established 
connections from squid to ICAP server double or triple.

Is this a known issue? Is there a better/more efficient way to bypass ICAP than 
above? 

Regular expressions were very simple, just matching end of URLs.

We're running squid 3.0.15 on Solaris 10.

Thanks and regards,
Justin



RE: [squid-users] ICAP Bypassing Causing Performance Issues

2011-08-22 Thread Justin Lawler
Thanks Amos - regex pattern we're using is:

.*some_url_end.html$

We also have many individual domains which we're bypassing 

acl bypassIcapRequest dstdomain 
/apps/cwapps/squid-3/etc/byPass_ICAP_request.properties
icap_access class_1 deny bypassIcapRequest

As time has gone on we've been adding more URLs to this list (currently up to 
39). This list won't be doing regular expression matching, but over time we've 
seen more and more established connections on the ICAP server port. CPU usage 
is also going up, and we're seeing more 'essential ICAP service is down' errors 
in the logs.

Traffic has not changed significantly - in fact has maybe gone down. The only 
change we can really identify is the extra bypassed domains.

Does squid parse the properties file for every hit?

Also, we've only been reconfiguring squid when we update this file. Is this 
enough, or do we need to restart?

Will look into extra debugging now.

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, August 22, 2011 10:29 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICAP Bypassing Causing Performance Issues

On 23/08/11 00:03, Justin Lawler wrote:
 Hi,

 We have had to put in a number of URLs to the squid bypass

 icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
 icap_class class_1 service_1

 acl bypassIcapRequestURLregex urlpath_regex 
 ./squid-3/etc/byPass_ICAP_request_URLregex.properties
 icap_access class_1 deny bypassIcapRequestURLregex


 When we added 4 regular expressions to this file, we started to see the CPU 
 usage going up quite a bit, and we started to see the number of established 
 connections from squid to ICAP server double or triple.

 Is this a known issue? Is there a better/more efficient way to bypass ICAP 
 than above?

Other than using other ACL types, no.


 Regular expressions were very simple, just matching end of URLs.

a) regex is a bit slow. Did you remember to anchor the ends? and 
manually aggregate the patterns? avoid extended-regex pattern tricks?

b) URLs can be many KB in length. That can make URL regex very CPU 
intensive.

d) routing selection ACLs are run multiple times per request.

You can turn on access control debugging (level 28,3) to see how many 
times those are run and how long they take each test.
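[Applied to the kind of pattern quoted in this thread: urlpath_regex already searches anywhere in the path, so a leading `.*` only adds backtracking work; the literal dot should be escaped; and several suffix patterns can be aggregated into one anchored alternation. A sketch with hypothetical pattern names:]

```
# byPass_ICAP_request_URLregex.properties -- before:
#   .*some_url_end.html$
#   .*other_end.html$
#
# after (anchored, dot escaped, aggregated into one expression):
(some_url_end|other_end)\.html$
```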


 We're running squid 3.0.15 on Solaris 10.



Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.14
   Beta testers wanted for 3.2.0.10
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp



[squid-users] Installing Squid from Binary

2011-08-17 Thread Justin Lawler
Hi,

We want to rebuild squid with a greater number of FDs. We want to do the build 
in an off-line environment, test it there, and then deploy that executable in 
production.

Is this possible? From all the articles I've seen so far, the only way to 
install squid is to rebuild on the same machine, then do a 'make install'.
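For what it's worth, one common autotools approach is to build on a matching 
box, stage the install into a scratch directory, and ship the result as a 
tarball. The paths and the FD option below are assumptions to verify against 
your own build (Squid 3.0 took `--with-maxfd`; later releases renamed the 
option):

```shell
# On the build box (same OS, architecture and library versions as production):
./configure --prefix=/usr/local/squid --with-maxfd=8192
make
make install DESTDIR=/tmp/squid-stage   # stage rather than install live
tar -C /tmp/squid-stage -cf squid-binary.tar .

# On the production box:
tar -C / -xf squid-binary.tar
```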

Thanks and regards,
Justin



RE: [squid-users] RE: Too Many Open File Descriptors

2011-08-10 Thread Justin Lawler
Thanks again for this.

Yes, just talking internally - squid was recompiled with FD set to 2048. 

So to confirm - you think we should update this value to 65535 and recompile? 
Could we get away with a lower value - say 4096?

If we did this, would we need to use ulimit in a startup script?

Thanks and regards,
Justin



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 10, 2011 2:34 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] RE: Too Many Open File Descriptors

 On Wed, 10 Aug 2011 08:59:08 +0300, Justin Lawler wrote:
 Hi,

 Thanks for this. Is this a known issue? Are there any bugs/articles on
 this? It's just that we would need something more concrete to go to the
 customer with on this issue - more background on it would be very
 helpful.


 http://onlamp.com/pub/a/onlamp/2004/02/12/squid.html

 Each ICAP service the client request passes through counts as an 
 FD-consuming external helper.


 Is 2048 FDs enough? Are there any connection leaks? Does squid ignore
 this 2048 value?

 The fact that you are here asking that question is proof that no, it's 
 not (for you).


 The OS has FD limits as below - so we would have thought the current
 configuration should be OK?
 set rlim_fd_max=65536
 set rlim_fd_cur=8192

 Only if squid is not configured with a lower number. As appears to be 
 the case.
 As proof, the manager report from inside squid:
  Maximum number of file descriptors:   2048

 Squid could have been built with an absolute 2048 limit hard coded by 
 the configure options.
 Squid could have been started by an init script which lowered the 
 available from the OS default to 2048.

 You say it's 3.0, which does not support configurable FD limits in 
 squid.conf. So that alternative is out.

 Amos



 Thanks,
 Justin


 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, August 10, 2011 11:47 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] RE: Too Many Open File Descriptors

  On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
 That used to happen to us and we had to write a script to start squid
 like this:

 #!/bin/sh -e
 #

 echo Starting squid...

 ulimit -HSn 65536
 sleep 1
 /usr/local/squid/sbin/squid

 echo Done..



  Pretty much the only solution.

  ICAP raises the potential worst-case socket consumption per client
  request from 3 FD to 7. REQMOD also doubles the minimum resource
  consumption from 1 FD to 2.

  Amos


 On 8/9/2011 10:47 PM, Justin Lawler wrote:
 Hi,

 We have two instances of squid (3.0.15) running on a solaris box.
 Every so often (maybe once every month) we get a load of the below
 errors:

 2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open
 files

 Sometimes it goes away on its own; sometimes squid crashes and
 restarts.

 When it happens, it generally happens on both instances of squid on 
 the same box.

 We have the number of open file descriptors set to 2048 - using 
 squidclient mgr:info:

 root@squid01# squidclient mgr:info | grep file
  Maximum number of file descriptors:   2048
  Largest file desc currently in use:   2041
  Number of file desc currently in use: 1903
  Available number of file descriptors:  138
  Reserved number of file descriptors:   100
  Store Disk files open:  68

 We're using squid as an ICAP client. The two squid instances point to
 two different ICAP servers, so it's unlikely to be a problem with the
 ICAP server.

 Is this a known issue? As it's been going on for a long time (over 40
 minutes continuously), it doesn't seem like it's just the traffic
 spiking for a long period. Also, we're not seeing it on other boxes,
 which are load balanced.

 Any pointers much appreciated.

 Regards,
 Justin






[squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Justin Lawler
Hi,

We have two instances of squid (3.0.15) running on a solaris box. Every so 
often (maybe once every month) we get a load of the below errors:

2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open files

Sometimes it goes away on its own; sometimes squid crashes and restarts.

When it happens, it generally happens on both instances of squid on the same box.

We have the number of open file descriptors set to 2048 - using squidclient mgr:info:

root@squid01# squidclient mgr:info | grep file
    Maximum number of file descriptors:   2048
    Largest file desc currently in use:   2041
    Number of file desc currently in use: 1903
    Available number of file descriptors:  138
    Reserved number of file descriptors:   100
    Store Disk files open:  68
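Those counters lend themselves to a simple check. A minimal sketch, with the 
figures hard-coded from the output above rather than parsed live, and an 
arbitrary 90% threshold:

```shell
#!/bin/sh
# Warn when file-descriptor usage approaches the usable maximum
# (the configured maximum minus Squid's reserved pool).
MAX=2048
RESERVED=100
IN_USE=1903
THRESHOLD=$(( (MAX - RESERVED) * 90 / 100 ))   # 1753
if [ "$IN_USE" -ge "$THRESHOLD" ]; then
    echo "WARNING: $IN_USE of $MAX file descriptors in use"
fi
```

A real version would pull IN_USE from `squidclient mgr:info` on each run 
instead of hard-coding it.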

We're using squid as an ICAP client. The two squid instances point to two 
different ICAP servers, so it's unlikely to be a problem with the ICAP server.

Is this a known issue? As it's been going on for a long time (over 40 minutes 
continuously), it doesn't seem like it's just the traffic spiking for a long 
period. Also, we're not seeing it on other boxes, which are load balanced.

Any pointers much appreciated.

Regards,
Justin



RE: [squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Justin Lawler
Hi,

Thanks for this. Is this a known issue? Are there any bugs/articles on this? 
It's just that we would need something more concrete to go to the customer with 
on this issue - more background on it would be very helpful.

Is 2048 FDs enough? Are there any connection leaks? Does squid ignore this 2048 
value?

The OS has FD limits as below - so we would have thought the current 
configuration should be OK?
set rlim_fd_max=65536
set rlim_fd_cur=8192


Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 10, 2011 11:47 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Too Many Open File Descriptors

 On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
 That used to happen to us and we had to write a script to start squid 
 like this:

 #!/bin/sh -e
 #

 echo Starting squid...

 ulimit -HSn 65536
 sleep 1
 /usr/local/squid/sbin/squid

 echo Done..



 Pretty much the only solution.

 ICAP raises the potential worst-case socket consumption per client 
 request from 3 FD to 7. REQMOD also doubles the minimum resource 
 consumption from 1 FD to 2.
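As a back-of-envelope consequence of those numbers - assuming the 
2048-descriptor limit and 100 reserved descriptors reported in this thread - 
the worst-case concurrency ceiling works out to roughly:

```shell
#!/bin/sh
# Usable descriptors divided by the 7-FD worst case per ICAP-inspected
# request gives the approximate concurrent-request ceiling.
MAX=2048
RESERVED=100
PER_REQUEST=7
echo $(( (MAX - RESERVED) / PER_REQUEST ))   # prints 278
```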

 Amos


 On 8/9/2011 10:47 PM, Justin Lawler wrote:
 Hi,

 We have two instances of squid (3.0.15) running on a solaris box. 
 Every so often (maybe once every month) we get a load of the below 
 errors:

 2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open 
 files

 Sometimes it goes away on its own; sometimes squid crashes and 
 restarts.

 When it happens, it generally happens on both instances of squid on the 
 same box.

 We have the number of open file descriptors set to 2048 - using squidclient 
 mgr:info:

 root@squid01# squidclient mgr:info | grep file
  Maximum number of file descriptors:   2048
  Largest file desc currently in use:   2041
  Number of file desc currently in use: 1903
  Available number of file descriptors:  138
  Reserved number of file descriptors:   100
  Store Disk files open:  68

 We're using squid as an ICAP client. The two squid instances point to 
 two different ICAP servers, so it's unlikely to be a problem with the 
 ICAP server.

 Is this a known issue? As it's been going on for a long time (over 40 
 minutes continuously), it doesn't seem like it's just the traffic 
 spiking for a long period. Also, we're not seeing it on other boxes, 
 which are load balanced.

 Any pointers much appreciated.

 Regards,
 Justin




RE: [squid-users] RE: Squid Restarting Issue

2011-08-03 Thread Justin Lawler
Hey, no patches have been applied to squid. We're running the stock version.

Unfortunately there don't seem to be any core dumps for that date lying around. 
It seems it's not configured correctly to produce core dumps - changing that now.

Note - we have 6 instances of squid handling heavy load under a load balancer. 
This has only happened once, on one instance, in the last couple of months that 
I'm aware of - we've only been on the project for a few months - so it's 
unlikely to happen again anytime soon to get a new core dump.

Note - this happened 21 times in a row on only one instance of squid - so it's 
unlikely that a specific URL came through to cause it; we haven't seen any URLs 
that didn't hit all 6 instances of squid. I'm wondering why it did not continue 
to happen. Is there any kind of further resilience in squid that would stop the 
restarts from continuing? I'm just very concerned that in the future this may 
happen continuously.

Is there any routine maintenance we could suggest for our client that would 
mitigate this happening again, like restarts of the server every few weeks? 
Manually clearing the cache/etc?

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, August 02, 2011 5:23 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Squid Restarting Issue

On 02/08/11 18:25, Justin Lawler wrote:
 Hi,

 We have an instance of squid - running version 3.0.15 (I know it's fairly old 
 at this stage - but it'll be very difficult to upgrade unless there's a 
 really good reason).

 We're getting the below error in squid - it has happened 10-20 times, 
 where squid throws the error and restarts. It eventually seemed to recover 
 itself.

 I've found very old bugs on this on version 3.0-Pre3-20050510, but found 
 nothing since then. Is there a bug logged for this issue? I could only find 
 something logged on 3.0pre3 (bug 879), but it says that it was fixed for the 
 final release version. Is this a known issue? Or are there any workarounds we 
 can use for now?

 Any help much appreciated.

 Regards,
 Justin


You are the first to hit it again since then.
Is your squid patched with old patches written prior to 3.0, perhaps?

There should be a core dump sitting around from the crash which you can 
use to figure out what the real problem behind the crash is. You need to 
know that to find a workaround or fix.

Amos


 2011/08/01 04:29:46| WARNING: swapfile header inconsistent with available data
 2011/08/01 04:29:46| could not parse headers from on disk structure!
 2011/08/01 04:29:46| memCopy: could not find start of [348,) in memory.
 2011/08/01 04:29:46| mem_hdr::debugDump: lowest offset: 0 highest offset + 1: 
 163862.
 2011/08/01 04:29:46| mem_hdr::debugDump: Current available data is: [0,348) - 
 .
 2011/08/01 04:29:46| storeDirWriteCleanLogs: Starting...
 2011/08/01 04:29:46| WARNING: Closing open FD 14
 2011/08/01 04:29:46| Finished. Wrote 43655 entries.
 2011/08/01 04:29:46| Took 0.11 seconds (386399.24 entries/sec).
 FATAL: Squid has attempted to read data from memory that is not present. This 
 is an indication of of
 (pre-3.0) code that hasn't been updated to deal with sparse objects in 
 memory. Squid should
 coredump.allowing to review the cause. Immediately preceeding this message is 
 a dump of the available data in the
 format [start,end). The [ means from the value, the ) means up to the value. 
 I.e. [1,5) means that there are 4
 bytes of data, at offsets 1,2,3,4.

 Squid Cache (Version 3.0.STABLE15): Terminated abnormally.
 CPU Usage: 4.322 seconds = 2.452 user + 1.871 sys
 Maximum Resident Size: 0 KB
 Page faults with physical i/o: 0
 2011/08/01 04:29:49| Starting Squid Cache version 3.0.STABLE15 for 
 sparc-sun-solaris2.10...



-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.14
   Beta testers wanted for 3.2.0.10


[squid-users] RE: Squid Restarting Issue

2011-08-02 Thread Justin Lawler
Hi,

We have an instance of squid - running version 3.0.15 (I know it's fairly old 
at this stage - but it'll be very difficult to upgrade unless there's a really 
good reason).

We're getting the below error in squid - it has happened 10-20 times, where 
squid throws the error and restarts. It eventually seemed to recover itself.

I've found very old bugs on this on version 3.0-Pre3-20050510, but found 
nothing since then. Is there a bug logged for this issue? I could only find 
something logged on 3.0pre3 (bug 879), but it says that it was fixed for the 
final release version. Is this a known issue? Or are there any workarounds we 
can use for now?

Any help much appreciated.

Regards,
Justin


2011/08/01 04:29:46| WARNING: swapfile header inconsistent with available data 
2011/08/01 04:29:46| could not parse headers from on disk structure! 
2011/08/01 04:29:46| memCopy: could not find start of [348,) in memory. 
2011/08/01 04:29:46| mem_hdr::debugDump: lowest offset: 0 highest offset + 1: 
163862. 
2011/08/01 04:29:46| mem_hdr::debugDump: Current available data is: [0,348) - . 
2011/08/01 04:29:46| storeDirWriteCleanLogs: Starting... 
2011/08/01 04:29:46| WARNING: Closing open FD 14 
2011/08/01 04:29:46| Finished. Wrote 43655 entries. 
2011/08/01 04:29:46| Took 0.11 seconds (386399.24 entries/sec). 
FATAL: Squid has attempted to read data from memory that is not present. This 
is an indication of of
(pre-3.0) code that hasn't been updated to deal with sparse objects in memory. 
Squid should
coredump.allowing to review the cause. Immediately preceeding this message is a 
dump of the available data in the
format [start,end). The [ means from the value, the ) means up to the value. 
I.e. [1,5) means that there are 4
bytes of data, at offsets 1,2,3,4. 

Squid Cache (Version 3.0.STABLE15): Terminated abnormally. 
CPU Usage: 4.322 seconds = 2.452 user + 1.871 sys 
Maximum Resident Size: 0 KB 
Page faults with physical i/o: 0 
2011/08/01 04:29:49| Starting Squid Cache version 3.0.STABLE15 for 
sparc-sun-solaris2.10...