Re[4]: [squid-users] Non-transparent port works, transparent doesn't

2011-10-19 Thread zozo zozo
  I.e. I can't put my transparent proxy out on the Internet; it needs to be in
  the same IP space as my network interface?
 
  You can put it anywhere you like. There are only two requirements:
 
   1) NAT happens on the same OS.
  So Squid can have direct access to the NAT data to undo the
  destination IP erasure.
 
   2) Squid needs access to the same DNS as the clients.
  To verify the packets destination IP matches the HTTP requested
  domain.

But I can't redirect to outside networks using policy routing, only to gateways I
have direct access to, i.e. not the Internet.
I have a rented Linux machine out there on the Internet; to route packets to it
I'd need access to all of my ISP's gateways.
NAT seems to be my only option for sending packets there.

And can I trick Squid by putting the same iptables rules on that machine?
Or with another layer of NAT, e.g. one machine NATs to port 3129, and the Squid
machine then NATs that to 3128?
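
For reference, the same-box arrangement that requirement (1) describes is
usually just this (the interface name and ports are illustrative, and the
http_port option is spelled "transparent" rather than "intercept" on Squid 2.x):

  # on the box that is both the clients' gateway and the Squid host
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
           -j REDIRECT --to-ports 3129

  # squid.conf
  http_port 3129 intercept   # NAT-intercepted traffic
  http_port 3128             # normal proxy port for configured clients

A DNAT towards a remote box rewrites the destination address before the packet
leaves, which is the "destination IP erasure" requirement (1) talks about, so a
second NAT on the far end would not, as far as I can tell, let Squid there
recover the original destination.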


[squid-users] squid 3.2.0.10 = sentRequestBody error: FD xxxx: (32) Broken pipe

2011-10-19 Thread Saleh Madi
Hi,

Our Squid cache.log file is full of this error message. What do these errors
mean? I think that when the error occurs frequently it affects Squid's
performance.

2011/10/19 15:43:46 kid1| sentRequestBody error: FD 7848: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8061: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 8431: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8074: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11700: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 3682: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11708: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8080: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 1151: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 8480: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11725: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8105: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 5597: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 4474: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8109: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 5682: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 8116: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 183: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11739: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11741: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 1923: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 7667: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 3628: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 9841: (32) Broken pipe
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 4474: (0) Success
2011/10/19 15:43:46 kid2| sentRequestBody error: FD 6350: (0) Success
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11699: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 660: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 4802: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 9757: (32) Broken pipe
2011/10/19 15:43:46 kid1| sentRequestBody error: FD 11764: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 792: (0) Success
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11766: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11768: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 2133: (0) Success
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11754: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 416: (0) Success
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11776: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11779: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11781: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11791: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 1458: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11798: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 5998: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11247: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 8139: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11711: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 4713: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11816: (32) Broken pipe
2011/10/19 15:43:47 kid2| sentRequestBody error: FD 721: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 4363: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 4353: (32) Broken pipe
2011/10/19 15:43:47 kid1| sentRequestBody error: FD 11769: (32) Broken pipe

Thanks and Best Regards,
Saleh



Re: [squid-users] Change cache_dir from ufs to aufs

2011-10-19 Thread Emmanuel Lacour
On Tue, Oct 18, 2011 at 11:38:28AM -0500, Luis Daniel Lucio Quiroz wrote:
 2011/10/18 Emmanuel Lacour elac...@easter-eggs.com:
 
  If I do not change the size/L1/L2, can I just change ufs to aufs in
  squid.conf and only do a squid reload, or do I need to restart Squid?
 
 
 
 restart it
 

I did it, it works, but now, I have some messages like this (not many,
but some):

2011/10/19 16:19:58| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/10/19 16:19:58|    /var/spool/squid/81/DB/0081DB55


I think that I'm going to squid -z the spools again (I plan to reduce their
size anyway), but I'm curious and would be happy to understand why
those messages happen ;)



Re: [squid-users] Change cache_dir from ufs to aufs

2011-10-19 Thread Luis Daniel Lucio Quiroz
2011/10/19 Emmanuel Lacour elac...@easter-eggs.com:
 On Tue, Oct 18, 2011 at 11:38:28AM -0500, Luis Daniel Lucio Quiroz wrote:
 2011/10/18 Emmanuel Lacour elac...@easter-eggs.com:
 
  If I do not change the size/L1/L2, can I just change ufs to aufs in
  squid.conf and only do a squid reload, or do I need to restart Squid?
 
 

 restart it


 I did it, it works, but now, I have some messages like this (not many,
 but some):

 2011/10/19 16:19:58| DiskThreadsDiskFile::openDone: (2) No such file or directory
 2011/10/19 16:19:58|    /var/spool/squid/81/DB/0081DB55


 I think that I'm going to squid -z the spools again (I plan to reduce their
 size anyway), but I'm curious and would be happy to understand why
 those messages happen ;)


That means you changed L1 and L2; run squid -z.

LD
http://www.twitter.com/ldlq


Re: [squid-users] Change cache_dir from ufs to aufs

2011-10-19 Thread Emmanuel Lacour
On Wed, Oct 19, 2011 at 10:52:19AM -0500, Luis Daniel Lucio Quiroz wrote:
 
 That means you changed L1 and L2; run squid -z.
 

but I did not, I only changed ufs to aufs, for sure!

before:
cache_dir ufs /var/spool/squid 307200 736 256

after:
cache_dir aufs /var/spool/squid 307200 736 256




[squid-users] Assertion failure in squid 3.1.15

2011-10-19 Thread Alex Sharaz

Just upgraded a batch of caches to 3.1.15 and I'm seeing the occasional

2011/10/19 17:10:27| Reconfiguring Squid Cache (version 3.1.15)...
2011/10/19 17:10:27| FD 114 Closing HTTP connection
2011/10/19 17:10:27| FD 115 Closing HTTP connection
2011/10/19 17:10:27| FD 116 Closing HTTP connection
2011/10/19 17:10:27| FD 117 Closing ICP connection
2011/10/19 17:10:27| FD 118 Closing HTCP socket
2011/10/19 17:10:27| assertion failed: disk.cc:377: fd >= 0

Thought this was fixed in an earlier patch for 3.1

Rgds
Alex


==
Time for another Macmillan Cancer Support event. This time its the 12  
day Escape to Africa challenge

View route at 
http://maps.google.co.uk/maps/ms?ie=UTF8hl=enmsa=0msid=203779866436035016780.00049e867720273b73c39z=8
Please sponsor me at http://www.justgiving.com/Alex-Sharaz





Re: [squid-users] How to filter response in squid-3.1.x?

2011-10-19 Thread Kaiwang Chen
2011/10/19 Amos Jeffries squ...@treenet.co.nz:
 On Wed, 19 Oct 2011 05:15:22 +0800, Kaiwang Chen wrote:

 After some investigation, I found this statement at
 http://www.squid-cache.org/Doc/config/ecap_service/:
        vectoring_point =
 reqmod_precache|reqmod_postcache|respmod_precache|respmod_postcache
                This specifies at which point of transaction processing the
                eCAP service should be activated. *_postcache vectoring points
                are not yet supported.

 Also in http://wiki.squid-cache.org/Features/ICAP, a similar statement
 was found:
 Pre-cache REQMOD and RESPMOD vectoring points are supported

 Note that 6.1 Vectoring points from rfc5703 suggests 4 classes of
 different adaptation. I guess the above statements cover class 1, client
 requests on their way into the cache, and class 3, responses on their
 way into the cache? A positive answer might be really bad news for
 me, since I am looking for class 4, client-specific responses coming
 from the surrogate. Would anyone please clarify?

 Sort of.

 In Squid there are several mangling interfaces which the request goes
 through (URL rewrite, ESI etc). The ICAP/eCAP adaptation is the first layer.
 This means:
  * pre-cache REQMOD is request received from client before any other local
 alterations are done. Some minor normalisation is performed during parsing
 but that is all. The adaptation producing a reply will prevent any other
 modifications being done. The reply gets sent straight back to the client
 (and not cached).

  * pre-cache RESPMOD is responses coming from the server. Again with only
 minor parser normalizations. Caching here is determined by the output HTTP
 headers of the adaptation step. So you can at the adaptation step add
 client-specific things and strip away the cacheability of the response.


 To only change the HTTP headers, there are some tricks you can do with the
 must-revalidate and/or proxy-revalidate cache controls. These controls
 cause the surrogate to contact the origin web server on every request. The
 origin can send back new headers on a 304 not-modified response. Meaning the
 headers get changed per-response, but the cached body gets sent only when
 actually changed. Retaining most of the bandwidth and performance benefits
 of caching.

So, a possible solution could be injecting a Cache-Control:
must-revalidate header via some eCAP reqmod_precache service; then
Squid will revalidate the response on every request, carrying the new
request headers, and the origin server gets its chance to set new
response headers? A somewhat counter-intuitive workaround for class 4
adaptation. Not perfect, since revalidation only occurs when the
response is stale, while what I am looking for is adapting every
response before it leaves Squid for the client. 'Cache-Control:
max-age=0' will force revalidation of every response, though.

I also happened to read about ESI, which resembles class 4 adaptation
with the limited capability of only modifying the response body. It
looks incapable of doing custom complex calculations. So Squid does not
support class 4 adaptation in general? Any other alternative?


  NP: this trick with 304 is only possible for headers which do not update
 headers with details about the particular body object. ie you can use it for
 altering Cookie values per-request, but not for changing the apparent
 Content-Encoding from gzip to deflate. For things affecting the body you use
 the normal 200 response and send the updated body as well.

Sure.

BTW, I tried the gzip compression adapter from
http://code.google.com/p/squid-ecap-gzip/ and found that after a
request carrying Accept-Encoding: gzip, Squid always passes the
gzipped response back to the client, even if it no longer carries that
header, because the object is not modified. A request without gzip
support and with 'Cache-Control: no-cache' refreshes the cache so that
it always returns plain-text responses. Does this imply that Squid only
caches one copy of the response, rather than one per encoding? How can
I make it serve an encoding different from the cached one?



 HTH

 Amos



Thanks,
Kaiwang


Re: [squid-users] Facebook page very slow to respond

2011-10-19 Thread Wilson Hernandez

Hello.

After attempting several suggestions from the guys here on the list, I'm
still experiencing the same problem: Facebook is so sluggish that my
users are complaining every day, and it is just depressing.


Today I came up with an idea: use a dedicated line for Facebook
traffic. For example:


LAN
   |
   |
SERVER --- Internet line for facebook only
   |
   |
   Internet

Is this possible?
Could this solution fix my problems, or would it give me more problems?

Thanks.

Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 10/11/2011 9:25 AM, Wilson Hernandez wrote:

On 10/11/2011 7:47 AM, Ed W wrote:

On 08/10/2011 20:25, Wilson Hernandez wrote:

Thanks for replying.

Well, our cache.log looks ok. No real problems there but, will be
monitoring it closely to check if there is something unusual.

As for the DNS, we have a local DNS server inside our LAN that is used
by 95% of the machines. This server uses our provider's servers as
well as Google's:

  forwarders {
 8.8.8.8;
 196.3.81.5;
 196.3.81.132;
 };

Our users are just driving me crazy with calls regarding Facebook: it is
slow, it doesn't work, and a lot of other complaints...


Occasionally you will find that Google DNS servers get poisoned and
take you to a non-local Facebook page.  I guess run dig against specific
servers and be sure you are ending up on a server which doesn't have
some massive ping to it?  I spent a while debugging a similar problem
where the BBC home page suddenly got slow on me because I was being
redirected to some German Akamai site rather than the UK one...

This is likely to make a difference between snappy and sluggish though,
not dead...

Let me remove google's DNS and continue testing Facebook sluggishness.

Thanks for replying.



Good luck

Ed W



Re: [squid-users] Facebook page very slow to respond

2011-10-19 Thread Andrew Beverley
On Wed, 2011-10-19 at 12:48 -0400, Wilson Hernandez wrote:
 Hello.
 
 After attempting several suggestions from the guys here on the list, I'm
 still experiencing the same problem: Facebook is so sluggish that my
 users are complaining every day, and it is just depressing.
 
 Today I came up with an idea: use a dedicated line for Facebook
 traffic. For example:
 
  LAN
 |
 |
  SERVER --- Internet line for facebook only
 |
 |
 Internet
 
 Is this possible?

Yes, it's possible, using policy based routing with iproute2. However,
you'll need all the IP addresses for facebook, which I imagine will
prove difficult.
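
Roughly, the iproute2 side would look like this (the gateway, interface, mark
value and the example prefix are all placeholders; Facebook announces a number
of prefixes, I believe under AS32934, so the list would need maintaining):

  # mark packets headed for Facebook's address space
  iptables -t mangle -A PREROUTING -d 31.13.24.0/21 -j MARK --set-mark 1
  iptables -t mangle -A OUTPUT     -d 31.13.24.0/21 -j MARK --set-mark 1
  # send marked packets out via the dedicated line
  ip rule add fwmark 1 table 100
  ip route add default via 203.0.113.1 dev eth2 table 100

The OUTPUT rule is there for the case where Squid itself originates the
outbound connections on that server.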

 Could this solution fix my problems, or would it give me more problems?
 

I'm not convinced this is the answer to your problem though. Are you
having problems with any other websites? Have you tried by-passing Squid
to see if it is indeed a bandwidth related issue or a problem with Squid
itself?

Andy




Re: [squid-users] Change cache_dir from ufs to aufs

2011-10-19 Thread Amos Jeffries

On Wed, 19 Oct 2011 16:27:18 +0200, Emmanuel Lacour wrote:
On Tue, Oct 18, 2011 at 11:38:28AM -0500, Luis Daniel Lucio Quiroz 
wrote:

2011/10/18 Emmanuel Lacour:

 If I do not change the size/L1/L2, can I just change ufs to aufs in
 squid.conf and only do a squid reload, or do I need to restart Squid?




restart it



I did it, it works, but now, I have some messages like this (not many,
but some):

2011/10/19 16:19:58| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/10/19 16:19:58|    /var/spool/squid/81/DB/0081DB55


I think that I'm going to squid -z the spools again (I plan to reduce their
size anyway), but I'm curious and would be happy to understand why
those messages happen ;)


The Squid in-memory index indicates a file exists, but the disk does 
not have it.


It can be due to manual removal of the files, or to the shutdown not having
had enough time to rebuild the swap.state journal fully.



For a simple size change (MB capacity rather than L1/L2), you can just 
alter and reload the config. Squid will drop files automatically until 
the cache fits within the new limit.
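
For example, shrinking the existing directory without touching L1/L2 (the new
size value is just illustrative) would only need:

  cache_dir aufs /var/spool/squid 204800 736 256
  squid -k reconfigure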


Amos



Re: [squid-users] How to filter response in squid-3.1.x?

2011-10-19 Thread Amos Jeffries

On Thu, 20 Oct 2011 00:39:32 +0800, Kaiwang Chen wrote:

2011/10/19 Amos Jeffries:

On Wed, 19 Oct 2011 05:15:22 +0800, Kaiwang Chen wrote:

snip


To only change the HTTP headers, there are some tricks you can do with the
must-revalidate and/or proxy-revalidate cache controls. These controls
cause the surrogate to contact the origin web server on every request. The
origin can send back new headers on a 304 not-modified response. Meaning the
headers get changed per-response, but the cached body gets sent only when
actually changed. Retaining most of the bandwidth and performance benefits
of caching.


So, a possible solution could be injecting a Cache-Control:
must-revalidate header via some eCAP reqmod_precache service; then
Squid will revalidate the response on every request, carrying the new
request headers, and the origin server gets its chance to set new
response headers? A somewhat counter-intuitive workaround for class 4
adaptation. Not perfect, since revalidation only occurs when the
response is stale,


That would be 'normal' revalidation operation, which is why the control
exists and is called must-revalidate: to override the normal operation
and force revalidation on every request.


You could set it in a filter module altering the headers, and repeat
the setup on every proxy surrogate as you expand the CDN. It is far
easier to send it from the origin, which is designed to set these
controls very efficiently and scales perfectly.
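
As a rough sketch of that flow (the header names and values here are
illustrative; max-age=0 is included so revalidation happens on every request,
not just when the object turns stale):

  # first response from the origin, stored by the surrogate
  HTTP/1.1 200 OK
  Cache-Control: max-age=0, must-revalidate
  X-Client-Group: standard

  # every later hit: the surrogate revalidates and the origin answers
  HTTP/1.1 304 Not Modified
  Cache-Control: max-age=0, must-revalidate
  X-Client-Group: premium

  # the cache updates its stored headers from the 304 and serves the
  # cached body with the refreshed headers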




while what I am looking for is adapting every
response before it leaves Squid for the client. 'Cache-Control:
max-age=0' will force revalidation of every response, though.


Otherwise known as a forced reload.
It forces full erasure and a full new fetch on every request, not
revalidation.




I also happened to read about ESI, which resembles class 4 adaptation
with the limited capability of only modifying the response body. It
looks incapable of doing custom complex calculations. So Squid does not
support class 4 adaptation in general? Any other alternative?


ESI, yes, is good for personalization of the body. It does not exactly
do calculations; it does widget insertion into pages for
personalization at the gateway machine, allowing the page template and
widgets to be cached separately within a CDN.


You were talking about personalizing Cookies etc., which are not part of
the body content.






 NP: this trick with 304 is only possible for headers which do not update
headers with details about the particular body object. ie you can use it for
altering Cookie values per-request, but not for changing the apparent
Content-Encoding from gzip to deflate. For things affecting the body you use
the normal 200 response and send the updated body as well.


Sure.

BTW, I tried the gzip compression adapter from
http://code.google.com/p/squid-ecap-gzip/ and found that after a
request carrying Accept-Encoding: gzip, Squid always passes the
gzipped response back to the client, even if it no longer carries that
header, because the object is not modified. A request without gzip
support and with 'Cache-Control: no-cache' refreshes the cache so that
it always returns plain-text responses. Does this imply that Squid only
caches one copy of the response, rather than one per encoding? How can
I make it serve an encoding different from the cached one?


Sounds like the adapter is not working. What you describe is normal 
Squid behaviour without the adapter.


IIRC the module was supposed to update the background requests to 
prefer gzipped, and itself do the un-zipping when an identity encoded 
response was required by the client.
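
For what it's worth, the squid.conf wiring for an eCAP adapter in 3.1 looks
roughly like this (the module path and the ecap:// service URI are
placeholders; use whatever URI the adapter actually registers, and note the
option syntax changed a little in 3.2):

  loadable_modules /usr/local/lib/ecap_adapter_gzip.so
  ecap_enable on
  ecap_service gzip_svc respmod_precache 0 ecap://example.com/ecap_gzip
  adaptation_access gzip_svc allow all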



Amos


Re: [squid-users] Assertion failure in squid 3.1.15

2011-10-19 Thread Amos Jeffries

On Wed, 19 Oct 2011 17:12:30 +0100, Alex Sharaz wrote:
Just upgraded a batch of caches to 3.1.15 and I'm seeing the occasional


2011/10/19 17:10:27| Reconfiguring Squid Cache (version 3.1.15)...
2011/10/19 17:10:27| FD 114 Closing HTTP connection
2011/10/19 17:10:27| FD 115 Closing HTTP connection
2011/10/19 17:10:27| FD 116 Closing HTTP connection
2011/10/19 17:10:27| FD 117 Closing ICP connection
2011/10/19 17:10:27| FD 118 Closing HTCP socket
2011/10/19 17:10:27| assertion failed: disk.cc:377: fd >= 0



http://bugs.squid-cache.org/show_bug.cgi?id=3097



Thought this was fixed in an earlier patch for 3.1



Seems not. Similar ones in different locations may be what you are
thinking of.


It seems this is related to reconfigure/shutdown FD closure timing 
problems. We have discussed a solution, but not had time to implement 
and test it out yet.


Amos



Re: [squid-users] Storeurl_rewrite Cache Peers

2011-10-19 Thread Ghassan Gharabli
Hello Amos,


Thank you so much. I have fixed the issue by editing the source and
then compiling it on Windows using MinGW.


I am happy again :D

On 10/19/11, Amos Jeffries squ...@treenet.co.nz wrote:
 On 19/10/11 12:12, Ghassan Gharabli wrote:
 Hello,


 My question is about storeurl_rewrite ...

 I used to have more than 7 Windows servers with Squid 2.7.STABLE8
 installed (sibling mode).

 I was wondering why I can't share cached data that was saved locally
 through storeurl_rewrite between all the Squid proxy servers!?

 It was working before. Now I am running Squid 2.7.STABLE7; should I
 upgrade to 2.7.STABLE8 to make it work like before, or must I do
 something in squid.conf?


 The output of storeurl_rewrite is a private URL for use only within
 that Squid. All external communications, including to peers, use the
 public URL which some client is wanting.

 You may have hit http://bugs.squid-cache.org/show_bug.cgi?id=2354

 ICP/HTCP is how siblings interact to determine which URLs are stored. I'm
 not sure why it was working in the earlier version. Perhaps you had
 cache digests working there?

 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.16
Beta testers wanted for 3.2.0.13
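
For context, the 2.7 wiring for such a helper is roughly as follows (the
program path, ACL and children count are illustrative):

  storeurl_rewrite_program /etc/squid/storeurl.pl
  storeurl_rewrite_children 5
  acl store_rewrite_list dstdomain .example.com
  storeurl_access allow store_rewrite_list
  storeurl_access deny all
  # the helper reads one request per line on stdin and writes back the
  # private URL that this Squid keys the cache entry on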



RE: [squid-users] RE: Essential ICAP service down error not working reliably

2011-10-19 Thread Justin Lawler
Hi Amos,

We're seeing these OPTIONS health-check requests coming in to the ICAP server
every second. Is this the correct behavior?

Is this customizable in the squid.conf file? Or does squid calculate this 
setting itself?

We're seeing these requests come in every second in production, but in our test 
environment, they're coming in every 40-60 seconds - and we're a little 
confused as to why.

Thanks and regards,
Justin


-Original Message-
From: Justin Lawler 
Sent: Tuesday, October 18, 2011 7:12 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] RE: Essential ICAP service down error not working
reliably

Hi Amos, thanks for that.

Yea - we're in the middle of running against a JVM with tuned GC settings, 
which we hope will resolve the issue.

One problem is we need to be 100% sure the issue is being caused by long GC
pauses, as the patch has to go into a busy production system. Currently we're
not, as we're not getting ICAP errors for every long GC pause - maybe we get
ICAP errors only 20% of the time.

Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, October 18, 2011 7:03 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Essential ICAP service down error not working
reliably

On 18/10/11 18:02, Justin Lawler wrote:
 Hi,

 Just a follow-up to this. Anyone know how/when Squid will mark the ICAP
 service as down?


When it stops responding.

 From ICAP logs, we can see squid is sending in an 'OPTIONS' request 
 every second. Is this request a health-check on the ICAP service? Or 
 is there any other function to it?


Yes, and yes. A service responding to OPTIONS is obviously running.

See the ICAP specification for what else it's used for:
http://www.rfc-editor.org/rfc/rfc3507.txt section 4.10

 We're still seeing very long pauses in our ICAP server that should
 really trigger an ICAP error in Squid, but it doesn't always.

 Thanks, Justin

Can you run it against a better GC? I've heard that there were competing GC 
algorithms in Java these last few years with various behaviour benefits.



 -Original Message-
  From: Justin Lawler

 Hi,

 We have an application that integrates with squid over ICAP - a java 
 based application. We're finding that the java application has very 
 long garbage collection pauses at times (20+ seconds), where the 
 application becomes completely unresponsive.

 We have squid configured to use this application as an essential 
 service, with a timeout for 20 seconds. If the application goes into a 
 GC pause, squid can throw an 'essential ICAP service is down'
 error.

 The problem is most of the time it doesn't. It only happens maybe 20% 
 of the time - even though some of the pauses are 25 seconds+.

 Squid is setup to do an 'OPTIONS' request on the java application 
 every second, so I don't understand why it doesn't detect the java 
 application becoming unresponsive.


It's very likely these requests are being made and being serviced, just very 
much later.

http://www.squid-cache.org/Doc/config/icap_connect_timeout/
   Note the default is: 30-60 seconds inherited from [peer_]connect_timeout.

Also http://www.squid-cache.org/Doc/config/icap_service_failure_limit/

So 10 failures in a row are required to detect an outage. Each failure takes 
30+ seconds to be noticed.
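
The knobs involved, for reference (values are illustrative, and the exact
icap_service option syntax differs a little between 3.1 and 3.2):

  icap_enable on
  icap_service svc_resp respmod_precache 0 icap://127.0.0.1:1344/respmod
  icap_connect_timeout 5 seconds
  icap_io_timeout 20 seconds
  icap_service_failure_limit 3

With bypass disabled (the 0 above) the service is treated as essential, so
Squid returns errors to clients once it declares the service down.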

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.16
   Beta testers wanted for 3.2.0.13



Re: [squid-users] squid 3.2.0.10 = sentRequestBody error: FD xxxx: (32) Broken pipe

2011-10-19 Thread Amos Jeffries

On 20/10/11 02:50, Saleh Madi wrote:

Hi,

Our Squid cache.log file is full of this error message. What do these errors
mean? I think that when the error occurs frequently it affects Squid's
performance.


Step one of checking out errors in beta packages is to upgrade to the 
latest and see if the problem remains. Please do that.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.16
  Beta testers wanted for 3.2.0.13


Re: [squid-users] Facebook page very slow to respond

2011-10-19 Thread Wilson Hernandez


Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/19/2011 4:31 PM, Andrew Beverley wrote:

On Wed, 2011-10-19 at 12:48 -0400, Wilson Hernandez wrote:

Hello.

After attempting several suggestions from the guys here on the list, I'm
still experiencing the same problem: Facebook is so sluggish that my
users are complaining every day, and it is just depressing.

Today I came up with an idea: use a dedicated line for Facebook
traffic. For example:

  LAN
 |
 |
  SERVER --- Internet line for facebook only
 |
 |
 Internet

Is this possible?

Yes, it's possible, using policy based routing with iproute2. However,
you'll need all the IP addresses for facebook, which I imagine will
prove difficult.


I thought of this, but thought the DNS-record approach might be easier.


Could this solution fix my problems, or would it give me more problems?


I'm not convinced this is the answer to your problem though. Are you
having problems with any other websites? Have you tried by-passing Squid
to see if it is indeed a bandwidth related issue or a problem with Squid
itself?


I tried this in the past, but it didn't work.

To tell you the truth, I don't know what the problem is, bandwidth or Squid,
but it is really getting on my nerves. I'm losing users left and right every
week, and I need to come up with a solution before my whole network goes
down the drain.


Thanks Andy for replying


Andy