Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-20 Thread Wilson Hernandez


On 19/09/2012 23:21, Amos Jeffries wrote:

On 20/09/2012 1:25 a.m., Eliezer Croitoru wrote:

On 9/19/2012 3:28 PM, Wilson Hernandez wrote:

I do not see any facebook stuff in access.log.

I'm using squid in transparent mode.

If I connect my pc directly to the modem then facebook works.


What are you trying to do with squid?
If it works for other sites but not for facebook, check whether you are 
using facebook plain or facebook with SSL.


Also check whether your clients are using IPv4 or IPv6 to access facebook. 
It is an IPv6-enabled website these days. Transparent mode as 
commonly spoken about is IPv4-only.
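
For what it's worth, a quick way to check which address families the
site offers (a sketch using dig; any resolver will do):

dig www.facebook.com A +short
dig www.facebook.com AAAA +short

If the AAAA query returns addresses and your clients prefer IPv6, an
IPv4-only intercept setup will never see that traffic in access.log.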


Amos




Amos.

Thanks for replying.

So, what do I need to do? Upgrade?

Thanks.


Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-19 Thread Wilson Hernandez

I do not see any facebook stuff in access.log.

I'm using squid in transparent mode.

If I connect my pc directly to the modem then facebook works.


On 18/09/2012 17:08, Eliezer Croitoru wrote:

On 9/17/2012 8:20 PM, Wilson Hernandez wrote:

Hello.

As of a couple of days ago I've been experiencing a very strange problem
with squid 3.1.19: Facebook doesn't work!

Every site works well except for FB. I checked the access.log file while
accessing FB and nothing about FB shows. At first I thought it might be
a DNS problem but it's not. I checked cache.log and nothing there either.

I re-installed version 3.1.19 and still can't get access to FB.

Any idea of what can be happening?

Thanks in advance for your time and help.



Some log output can help.
If you can't see any entries in access.log it only means that the problem 
is not in squid.


Do you use squid in transparent mode? Or forward?

Eliezer


--
Wilson Hernandez
829-679-2105
www.figureo56.com
www.sonrisasparamipueblo.org
www.optimumwireless.com



Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-19 Thread Wilson Hernandez
This was working right for months and all of a sudden there's this 
problem. We haven't changed anything on our server.



On 19/09/2012 9:25, Eliezer Croitoru wrote:

On 9/19/2012 3:28 PM, Wilson Hernandez wrote:

I do not see any facebook stuff in access.log.

I'm using squid in transparent mode.

If I connect my pc directly to the modem then facebook works.


What are you trying to do with squid?
If it works for other sites but not for facebook, check whether you are 
using facebook plain or facebook with SSL.

Also try using a regular forward proxy.
I think your server setup is wrong and the problem is not related to squid.

Regards,
Eliezer



--
Wilson Hernandez
829-679-2105
www.figureo56.com
www.sonrisasparamipueblo.org
www.optimumwireless.com



Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-18 Thread Wilson Hernandez
I'm still looking for reasons why this is not working for me and can't 
find anything about it.


I'm not blocking this domain and it is still not working for me.


On 17/09/2012 13:20, Wilson Hernandez wrote:

Hello.

As of a couple of days ago I've been experiencing a very strange 
problem with squid 3.1.19: Facebook doesn't work!


Every site works well except for FB. I checked the access.log file 
while accessing FB and nothing about FB shows. At first I thought it 
might be a DNS problem but it's not. I checked cache.log and nothing 
there either.


I re-installed version 3.1.19 and still can't get access to FB.

Any idea of what can be happening?

Thanks in advance for your time and help.




--
Wilson Hernandez
829-679-2105
www.figureo56.com
www.sonrisasparamipueblo.org
www.optimumwireless.com



[squid-users] Can't access facebook.com through squid 3.1.19

2012-09-17 Thread Wilson Hernandez

Hello.

As of a couple of days ago I've been experiencing a very strange problem 
with squid 3.1.19: Facebook doesn't work!


Every site works well except for FB. I checked the access.log file while 
accessing FB and nothing about FB shows. At first I thought it might be 
a DNS problem but it's not. I checked cache.log and nothing there either.


I re-installed version 3.1.19 and still can't get access to FB.

Any idea of what can be happening?

Thanks in advance for your time and help.


--
Wilson Hernandez
www.figureo56.com
www.sonrisasparamipueblo.org
www.optimumwireless.com



[squid-users] How can I use two squid servers ea for specific website?

2011-11-10 Thread Wilson Hernandez

Hello List.

I would like to know how I can use two squid servers to redirect traffic 
for a specific page through the second squid server connected to a 
different provider.


For example:

      squid2 ------ provider 1 (used for facebook.com ONLY)
        |
lan ----+
        |
      squid1 ------ provider 2 (default server used for everything
                    except facebook)


I need to know how to use squid2 as a slave and what configuration I 
need on squid1 (master) in order to accomplish my task, or do the 
rules need to be done with iptables?


Thanks for your time.



--
Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


Re: [squid-users] How can I use two squid servers ea for specific website?

2011-11-10 Thread Wilson Hernandez

Thanks Amos for your reply, but I have some confusion (see below)


On 11/10/2011 8:06 PM, Amos Jeffries wrote:

On 11/11/2011 12:00 p.m., Wilson Hernandez wrote:

Hello List.

I would like to know how I can use two squid servers to redirect 
traffic for a specific page through the second squid server 
connected to a different provider.


For example:

      squid2 ------ provider 1 (used for facebook.com ONLY)
        |
lan ----+
        |
      squid1 ------ provider 2 (default server used for everything
                    except facebook)


I need to know how to use squid2 as a slave and what configuration I 
need on squid1 (master) in order to accomplish my task, or do the 
rules need to be done with iptables?




The master/slave terminology may be where you are getting into trouble. 
In HTTP terminology there are parent/child and sibling 
relationships only. The parent/child relationship is a little 
different from the master/slave concept due to the two-way nature of the 
data flow.  The child is master of the request flow and slave for the 
reply flow. The parent is master of the reply flow and slave for the 
request flow.


If I interpret your message right you are asking about squid2 as 
parent, squid1 as child. Nothing special for squid2 config. This would 
be the config for squid1:


  # link squid1 to parent (data source) squid2
  # see http://www.squid-cache.org/Doc/config/cache_peer/ for the 
actual text

  cache_peer squid2 parent ...


So, this would go in squid1:

cache_peer squid2's-ip parent

This confuses me somewhat because I thought squid1 would be squid2's 
parent, since squid1 would be the default for everything and it will 
send requests to squid2 for facebook traffic, with squid2 responding 
back to squid1 and squid1 responding to the client (please, correct me 
if I'm wrong).


So, still all the LAN traffic would hit squid1... wouldn't this be the 
same as I have it now? I would like to see if facebook traffic gets 
better in our LAN.



  # specify that only facebook.com requests go to squid2
  # (cache_peer_access takes the peer name as its first argument)
  acl FB dstdomain .facebook.com
  cache_peer_access squid2 allow FB
  cache_peer_access squid2 deny all

  # specify that squid1 MUST NOT service facebook.com requests itself
  never_direct allow FB
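
Putting those pieces together, squid1's side might look like this
sketch (the peer IP and ports are placeholders, not values from this
thread):

cache_peer 192.0.2.2 parent 3128 0 no-query name=squid2
acl FB dstdomain .facebook.com
cache_peer_access squid2 allow FB
cache_peer_access squid2 deny all
never_direct allow FB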



Thanks.


Alternatively you could also do this with only one Squid server 
by marking packets for facebook.com requests with tcp_outgoing_tos or 
tcp_outgoing_address, which the OS layer routing detects and sends to 
the provider 1 interface while blocking them from provider 2.
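
A minimal sketch of that single-server variant (the two local
addresses are hypothetical, standing in for the interfaces toward each
provider):

acl FB dstdomain .facebook.com
tcp_outgoing_address 192.0.2.10 FB
tcp_outgoing_address 192.0.2.20

The OS routing then needs a rule that sends traffic sourced from
192.0.2.10 out via provider 1.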



Amos


Re: [squid-users] How can I use two squid servers ea for specific website?

2011-11-10 Thread Wilson Hernandez

Amos,

Thanks once again for your reply.

Don't know if you remember, but a couple of weeks ago I was having 
problems with all my clients complaining that facebook was sluggish. You 
helped me with some configuration issues but that hasn't fixed the 
problem. I don't know if it is my setup here or what, but I have two 
lines from the same provider that are connected to a tplink load 
balancer (I know, not the best, but it is what I can afford) mixing the 
two lines.


I truly don't know if that's where my problem is, but now I'm trying 
to use one line for facebook and the other for the rest of the traffic. 
This way I avoid the balancer and can test whether things work out 
better.


I'm trying to configure a test server (squid2) to check if my situation 
improves.


Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 11/10/2011 10:31 PM, Amos Jeffries wrote:

On 11/11/2011 2:16 p.m., Wilson Hernandez wrote:

Thanks Amos for your reply, but I have some confusion (see below)


On 11/10/2011 8:06 PM, Amos Jeffries wrote:

On 11/11/2011 12:00 p.m., Wilson Hernandez wrote:

Hello List.

I would like to know how I can use two squid servers to redirect 
traffic for a specific page through the second squid server 
connected to a different provider.


For example:

      squid2 ------ provider 1 (used for facebook.com ONLY)
        |
lan ----+
        |
      squid1 ------ provider 2 (default server used for everything
                    except facebook)



I need to know how to use squid2 as a slave and what configuration I 
need on squid1 (master) in order to accomplish my task, or do the 
rules need to be done with iptables?




The master/slave terminology may be where you are getting into 
trouble. In HTTP terminology there are parent/child and sibling 
relationships only. The parent/child relationship is a little 
different from the master/slave concept due to the two-way nature of the 
data flow.  The child is master of the request flow and slave for the 
reply flow. The parent is master of the reply flow and slave for the 
request flow.


If I interpret your message right you are asking about squid2 as 
parent, squid1 as child. Nothing special for squid2 config. This 
would be the config for squid1:


  # link squid1 to parent (data source) squid2
  # see http://www.squid-cache.org/Doc/config/cache_peer/ for the 
actual text

  cache_peer squid2 parent ...


So, this would go in squid1:

cache_peer squid2's-ip parent

This confuses me somewhat because I thought squid1 would be squid2's 
parent, since squid1 would be the default for everything and it will 
send requests to squid2 for facebook traffic, with squid2 responding 
back to squid1 and squid1 responding to the client (please, correct me 
if I'm wrong).


You are looking at the relationship backwards. The child is the one 
closer to the clients and parent closer to the Internet data source.


So with all your traffic going from the clients to squid1, that is the 
child.




So, still all the LAN traffic would hit squid1... wouldn't this be 
the same as I have it now? I would like to see if facebook traffic 
gets better in our LAN.




Er, yes. Maybe we have another mixup.  I thought that was what you 
were asking about: how to make the requests go from squid1 to squid2 
after they had already arrived at squid1.


ie
  clients -> squid1 -> squid2 -> provider 1   (for facebook)
  clients -> squid1 -> provider 2             (for non-facebook).

Things should improve if provider 1 has better service for facebook 
than provider 2. If provider 1 has the worse service I would question 
why you want FB stuff to go that way anyway.



You can make the clients not go to squid1 at all for facebook. But 
that is a completely different setup again.
It requires a PAC file instead. The two Squids do not need any linkage 
between them when the client is making the squid1 vs squid2 connection 
choice by its PAC file logic.
http://findproxyforurl.com/ has useful examples and 
documentation for PAC scripts and the associated WPAD protocol.
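
A PAC file for that split could be as small as this sketch (the proxy
host names and port are hypothetical):

function FindProxyForURL(url, host) {
    // facebook goes to squid2, everything else to squid1
    if (dnsDomainIs(host, ".facebook.com"))
        return "PROXY squid2.example.lan:3128";
    return "PROXY squid1.example.lan:3128";
}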


Amos


Re: [squid-users] Facebook page very slow to respond

2011-10-19 Thread Wilson Hernandez

Hello.

After attempting several suggestions from the guys here on the list, I'm 
still experiencing the same problem: Facebook is so sluggish that my 
users are complaining every day and it is just depressing.


Today I came up with an idea: Use a dedicated line for facebook 
traffic. For example:


LAN
   |
   |
SERVER --- Internet line for facebook only
   |
   |
   Internet

Is this possible?
Can this solution fix my problems or will it give me more problems?

Thanks.

Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 10/11/2011 9:25 AM, Wilson Hernandez wrote:

On 10/11/2011 7:47 AM, Ed W wrote:

On 08/10/2011 20:25, Wilson Hernandez wrote:

Thanks for replying.

Well, our cache.log looks ok. No real problems there but, will be
monitoring it closely to check if there is something unusual.

As for the DNS, we have local DNS server inside our LAN that is used
by 95% of the machines. This server uses our provider's servers as
well as google's:

  forwarders {
 8.8.8.8;
 196.3.81.5;
 196.3.81.132;
 };

Our users are just driving me crazy with calls regarding facebook: it's
slow, doesn't work, and a lot of other complaints...


Occasionally you will find that Google DNS servers get poisoned and
take you to a non-local facebook page.  I guess run dig against specific
servers and be sure you are ending up on a server which doesn't have
some massive ping to it?  I spent a while debugging a similar problem
where the BBC home page got suddenly slow on me because I was being
redirected to some German Akamai site rather than the UK one...

This is likely to make a difference between snappy and sluggish though,
not dead...
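
One way to run that comparison (a sketch using the forwarders listed
above; substitute your own resolvers):

dig @196.3.81.5 www.facebook.com +short
dig @8.8.8.8 www.facebook.com +short
# ping whichever address each resolver returned, e.g.:
ping -c 3 $(dig @8.8.8.8 +short www.facebook.com | tail -n1)

If the two resolvers return addresses on very different networks, the
ping times will show which CDN node your users are actually being sent
to.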

Let me remove google's DNS and continue testing Facebook sluggishness.

Thanks for replying.



Good luck

Ed W



Re: [squid-users] Facebook page very slow to respond

2011-10-19 Thread Wilson Hernandez


Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/19/2011 4:31 PM, Andrew Beverley wrote:

On Wed, 2011-10-19 at 12:48 -0400, Wilson Hernandez wrote:

Hello.

After attempting several suggestions from the guys here on the list, I'm
still experiencing the same problem: Facebook is so sluggish that my
users are complaining every day and it is just depressing.

Today I came up with an idea: Use a dedicated line for facebook
traffic. For example:

  LAN
 |
 |
  SERVER --- Internet line for facebook only
 |
 |
 Internet

Is this possible?

Yes, it's possible, using policy based routing with iproute2. However,
you'll need all the IP addresses for facebook, which I imagine will
prove difficult.
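
A rough sketch of the iproute2 side, assuming the facebook-only line
has gateway 203.0.113.1 (a placeholder) and using one of facebook's
then-published ranges purely as an example:

ip route add default via 203.0.113.1 table 100
ip rule add to 66.220.144.0/20 lookup 100
ip route flush cache

Every facebook range needs its own rule, which is why collecting the
addresses is the hard part.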


I thought of this but figured the DNS record approach might be easier.


Can this solution fix my problems or will it give me more problems?


I'm not convinced this is the answer to your problem though. Are you
having problems with any other websites? Have you tried bypassing Squid
to see if it is indeed a bandwidth-related issue or a problem with Squid
itself?


I tried this in the past but it didn't work.

To tell you the truth I don't know what the deal is: bandwidth or squid. 
But it is really getting on my nerves; I'm losing users left and right 
every week. I need to come up with a solution before my whole network 
goes down the drain.


Thanks Andy for replying


Andy




Re: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-11 Thread Wilson Hernandez
I was having this problem in the past and created the following script 
to start squid:


#!/bin/sh -e
#
echo "Starting squid..."

# raise the hard and soft open-file limit before launching squid
ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo "Done."

That fixed the problem and it hasn't happened ever since.

Hope that helps.

On 10/11/2011 9:07 AM, Leonardo wrote:

Hi all,

I'm running a transparent Squid proxy on Linux Debian 5.0.5,
configured as a bridge.  The proxy serves a few thousand users
daily.  It uses Squirm for URL rewriting, and (for the last 6 weeks)
sarg for generating reports.  I compiled it from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10GB of disk cache:
cache_dir ufs /var/cache 10000 16 256


Everything worked fine for almost one year, but now suddenly I keep
having problems.


Recently Squid crashed and I had to delete swap.state.


Now I keep seeing this warning message on cache.log and on console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

At OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the max number of file descriptors.
I've tried both to set SQUID_MAXFD=4096 on etc/default/squid and
max_filedescriptors 4096 on squid.conf but neither was successful.  Do
I really have to recompile Squid to increase the max number of FDs?


Today Squid crashed again, and when I tried to relaunch it, it gave this output:

2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' processes
2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem  size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device
FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.

I therefore deactivated the cache and reran Squid.  It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
directory
and then started.  Now Squid is running and serving requests, albeit
without caching.  However, I keep seeing the same error:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

What is the reason for this, since I'm not using caching at all?


Thanks a lot if you can shed some light on this.
Best regards,


Leonardo


Re: [squid-users] Facebook page very slow to respond

2011-10-11 Thread Wilson Hernandez

On 10/11/2011 7:47 AM, Ed W wrote:

On 08/10/2011 20:25, Wilson Hernandez wrote:

Thanks for replying.

Well, our cache.log looks ok. No real problems there but, will be
monitoring it closely to check if there is something unusual.

As for the DNS, we have local DNS server inside our LAN that is used
by 95% of the machines. This server uses our provider's servers as
well as google's:

  forwarders {
 8.8.8.8;
 196.3.81.5;
 196.3.81.132;
 };

Our users are just driving me crazy with calls regarding facebook: it's
slow, doesn't work, and a lot of other complaints...


Occasionally you will find that Google DNS servers get poisoned and
take you to a non-local facebook page.  I guess run dig against specific
servers and be sure you are ending up on a server which doesn't have
some massive ping to it?  I spent a while debugging a similar problem
where the BBC home page got suddenly slow on me because I was being
redirected to some German Akamai site rather than the UK one...

This is likely to make a difference between snappy and sluggish though,
not dead...

Let me remove google's DNS and continue testing Facebook sluggishness.

Thanks for replying.



Good luck

Ed W



Re: [squid-users] Facebook page very slow to respond

2011-10-10 Thread Wilson Hernandez

Amos.

Made the changes you suggested on this post.


On 10/8/2011 11:24 PM, Amos Jeffries wrote:

On 09/10/11 09:15, Wilson Hernandez wrote:
 I disabled squid and I'm doing simple FORWARDING and things work, this
 tells me that I'm having a configuration issue with squid 3.1.14.

 Now, I can't afford to run our network without squid since we are also
 running SquidGuard for disabling some websites to certain users.

 Here's part of my squid.conf:

 # Port Squid listens on
 http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off

 error_default_language es-do

 # Access-lists (ACLs) will permit or deny hosts to access the proxy
 acl lan-access src 172.16.0.0/16
 acl localhost src 127.0.0.1
 acl localnet src 172.16.0.0/16
 acl proxy src 172.16.0.1
 acl clientes_registrados src /etc/msd/ipAllowed

 # acl adstoblock dstdomain /etc/squid/blockAds

 acl CONNECT method CONNECT

snip

 http_access allow proxy
 http_access allow localhost

 # Block some sites

 acl blockanalysis01 dstdomain .scorecardresearch.com clkads.com
 acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com
 ads3.msn.com ads4.msn.com
 acl blockads02 dstdomain .adserver.yahoo.com ad.yieldmanager.com
 acl blockads03 dstdomain .doubleclick.net .fastclick.net
 acl blockads04 dstdomain .ero-advertising.com .adsomega.com
 acl blockads05 dstdomain .adyieldmanager.com .yieldmanager.com
 .adyieldmanager.net .yieldmanager.net
 acl blockads06 dstdomain .e-planning.net .super-publicidad.com
 .super-publicidad.net
 acl blockads07 dstdomain .adbrite.com .contextweb.com .adbasket.net
 .clicktale.net
 acl blockads08 dstdomain .adserver.com .adv-adserver.com
 .zerobypass.info .zerobypass.com
 acl blockads09 dstdomain .ads.ak.facebook.com .pubmatic.com 
.baynote.net

 .publicbt.com

Optimization tip:
  These ACLs are the same as far as Squid is concerned. You are also 
using them the same way at the same time below. So the best thing to 
do is drop those 01,02,03 numbers and have all the blocked domains in 
one ACL name.


Then the below testing can be reduced to a single:
   http_access deny blockads



Changed all these to:

acl blockads dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads dstdomain .adserver.yahoo.com
acl blockads dstdomain .doubleclick.net .fastclick.net
acl blockads dstdomain .ero-advertising.com .adsomega.com
acl blockads dstdomain .adyieldmanager.com .yieldmanager.com .adyieldmanager.net .yieldmanager.net
acl blockads dstdomain .e-planning.net .super-publicidad.com .super-publicidad.net
acl blockads dstdomain .adbrite.com .contextweb.com .adbasket.net .clicktale.net
acl blockads dstdomain .adserver.com .adv-adserver.com .zerobypass.info .zerobypass.com
acl blockads dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net .publicbt.com


http_access deny blockads





 balance_on_multiple_ip on

This erases some of the benefits from connection persistence and 
reuse. It is not such a great idea with 3.1+ as it was with earlier 
Squid.


Although you turned off connection persistence anyway below. So this 
is only noticeable when it breaks websites depending on IP-based security.



Removed this line as suggested later...





 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern . 0 20% 4320


You may as well erase all the refresh_pattern rules below. The CGI and 
'.' pattern rules are the last ones Squid processes.



Also deleted all the rules and left what's above.
 visible_hostname www.optimumwireless.com

 cache_mgr optimumwirel...@hotmail.com


Optimum wireless. Hmm. I'm sure I've audited this config before and 
mentioned the same things...




You probably have..


 # TAG: store_dir_select_algorithm
 # Set this to 'round-robin' as an alternative.
 #
 #Default:
 # store_dir_select_algorithm least-load
 store_dir_select_algorithm round-robin


Changed this to least-load... Don't know if it is better or not...



Interesting. Forcing round-robin selection between one dir. :)



 # PERSISTENT CONNECTION HANDLING
 # -----------------------------------------------------------------------------



 #
 # Also see pconn_timeout in the TIMEOUTS section

 # TAG: client_persistent_connections
 # TAG: server_persistent_connections
 # Persistent connection support for clients and servers. By
 # default, Squid uses persistent connections (when allowed)
 # with its clients and servers. You can use these options to
 # disable persistent connections with clients and/or servers.
 #
 #Default:
 client_persistent_connections off
 server_persistent_connections off
 # TAG: persistent_connection_after_error
 # With this directive the use of persistent connections after
 # HTTP errors can be disabled. Useful if you have clients
 # who fail to handle errors on persistent connections proper.
 #
 #Default

Re: [squid-users] Facebook page very slow to respond

2011-10-10 Thread Wilson Hernandez


Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/10/2011 9:54 PM, Wilson Hernandez wrote:

Amos.

Made the changes you suggested on this post.


On 10/8/2011 11:24 PM, Amos Jeffries wrote:

On 09/10/11 09:15, Wilson Hernandez wrote:
 I disabled squid and I'm doing simple FORWARDING and things work, this
 tells me that I'm having a configuration issue with squid 3.1.14.

 Now, I can't afford to run our network without squid since we are also
 running SquidGuard for disabling some websites to certain users.

 Here's part of my squid.conf:

 # Port Squid listens on
 http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off

 error_default_language es-do

 # Access-lists (ACLs) will permit or deny hosts to access the proxy
 acl lan-access src 172.16.0.0/16
 acl localhost src 127.0.0.1
 acl localnet src 172.16.0.0/16
 acl proxy src 172.16.0.1
 acl clientes_registrados src /etc/msd/ipAllowed

 # acl adstoblock dstdomain /etc/squid/blockAds

 acl CONNECT method CONNECT

snip

 http_access allow proxy
 http_access allow localhost

 # Block some sites

 acl blockanalysis01 dstdomain .scorecardresearch.com clkads.com
 acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com
 ads3.msn.com ads4.msn.com
 acl blockads02 dstdomain .adserver.yahoo.com ad.yieldmanager.com
 acl blockads03 dstdomain .doubleclick.net .fastclick.net
 acl blockads04 dstdomain .ero-advertising.com .adsomega.com
 acl blockads05 dstdomain .adyieldmanager.com .yieldmanager.com
 .adyieldmanager.net .yieldmanager.net
 acl blockads06 dstdomain .e-planning.net .super-publicidad.com
 .super-publicidad.net
 acl blockads07 dstdomain .adbrite.com .contextweb.com .adbasket.net
 .clicktale.net
 acl blockads08 dstdomain .adserver.com .adv-adserver.com
 .zerobypass.info .zerobypass.com
 acl blockads09 dstdomain .ads.ak.facebook.com .pubmatic.com 
.baynote.net

 .publicbt.com

Optimization tip:
  These ACLs are the same as far as Squid is concerned. You are also 
using them the same way at the same time below. So the best thing to 
do is drop those 01,02,03 numbers and have all the blocked domains in 
one ACL name.


Then the below testing can be reduced to a single:
   http_access deny blockads



Changed all these to:

acl blockads dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads dstdomain .adserver.yahoo.com
acl blockads dstdomain .doubleclick.net .fastclick.net
acl blockads dstdomain .ero-advertising.com .adsomega.com
acl blockads dstdomain .adyieldmanager.com .yieldmanager.com .adyieldmanager.net .yieldmanager.net
acl blockads dstdomain .e-planning.net .super-publicidad.com .super-publicidad.net
acl blockads dstdomain .adbrite.com .contextweb.com .adbasket.net .clicktale.net
acl blockads dstdomain .adserver.com .adv-adserver.com .zerobypass.info .zerobypass.com
acl blockads dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net .publicbt.com


http_access deny blockads





 balance_on_multiple_ip on

This erases some of the benefits from connection persistence and 
reuse. It is not such a great idea with 3.1+ as it was with earlier 
Squid.


Although you turned off connection persistence anyway below. So this 
is only noticeable when it breaks websites depending on IP-based security.



Removed this line as suggested later...





 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern . 0 20% 4320


You may as well erase all the refresh_pattern rules below. The CGI 
and '.' pattern rules are the last ones Squid processes.



Also deleted all the rules and left what's above.
 visible_hostname www.optimumwireless.com

 cache_mgr optimumwirel...@hotmail.com


Optimum wireless. Hmm. I'm sure I've audited this config before and 
mentioned the same things...




You probably have..


 # TAG: store_dir_select_algorithm
 # Set this to 'round-robin' as an alternative.
 #
 #Default:
 # store_dir_select_algorithm least-load
 store_dir_select_algorithm round-robin


Changed this to least-load... Don't know if it is better or not...



Interesting. Forcing round-robin selection between one dir. :)



 # PERSISTENT CONNECTION HANDLING
 # -----------------------------------------------------------------------------



 #
 # Also see pconn_timeout in the TIMEOUTS section

 # TAG: client_persistent_connections
 # TAG: server_persistent_connections
 # Persistent connection support for clients and servers. By
 # default, Squid uses persistent connections (when allowed)
 # with its clients and servers. You can use these options to
 # disable persistent connections with clients and/or servers.
 #
 #Default:
 client_persistent_connections off
 server_persistent_connections off
 # TAG: persistent_connection_after_error
 # With this directive the use of persistent connections after
 # HTTP errors can

[squid-users] Facebook page very slow to respond

2011-10-08 Thread Wilson Hernandez

Hello.

Our LAN users have been experiencing problems accessing facebook.com 
through squid... Most of the time it is so slow that the page doesn't 
even respond to anything. Sometimes this error is returned by squid: 
(104) Connection reset by peer.


Everything was working fine for a while but things have been a little 
strange in the past week.


I don't know what's going on and can't even troubleshoot it. Is it only 
happening in our LAN?


Thanks in advance for your time.

--
Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com



Re: [squid-users] Facebook page very slow to respond

2011-10-08 Thread Wilson Hernandez

I've tried not to cache any facebook content like this:

acl special_domains dstdomain .facebook.com .fbcdn.net .verisign.com
cache deny special_domains

But, that is not working either.

Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/8/2011 2:40 PM, Wilson Hernandez wrote:

Hello.

Our LAN users have been experiencing problems accessing facebook.com 
through squid... Most of the time it is so slow that the page doesn't 
even respond to anything. Sometimes this error is returned by squid: 
(104) Connection reset by peer.


Everything was working fine for a while but things have been a little 
strange in the past week.


I don't know what's going on and can't even troubleshoot it. Is it 
only happening in our LAN?


Thanks in advance for your time.



Re: [squid-users] Facebook page very slow to respond

2011-10-08 Thread Wilson Hernandez

Thanks for replying.

Well, our cache.log looks ok. No real problems there but, will be 
monitoring it closely to check if there is something unusual.


As for the DNS, we have local DNS server inside our LAN that is used by 
95% of the machines. This server uses our provider's servers as well as 
google's:


 forwarders {
8.8.8.8;
196.3.81.5;
196.3.81.132;
};

Our users are just driving me crazy with calls regarding facebook: it's 
slow, doesn't work, and a lot of other complaints...


Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 10/8/2011 3:06 PM, Joachim Wiedorn wrote:

Wilson Hernandez wil...@optimumwireless.com wrote on 2011-10-08 14:40:


I don't know what's going on and can't even troubleshoot it. Is it only
happening in our LAN?

In our school network there were no problems. I know of them because after
activating squidguard the pupils had problems with facebook.

But perhaps it is another problem which I had some weeks before: check
your DNS server entries (in /etc/resolv.conf and in your local bind9).
The provider sometimes changes its IPs. And you should always use the
DNS servers of your own provider - that is usually the fastest way.

---
Have a nice day.

Joachim (Germany)


Re: [squid-users] Facebook page very slow to respond

2011-10-08 Thread Wilson Hernandez
(SO_ORIGINAL_DST) failed on FD 86: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 91: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 94: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 98: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 99: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 100: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 101: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 102: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 103: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 104: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 105: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 106: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 107: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 109: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 110: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 111: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 112: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 113: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 114: (2) No such file or directory
2011/10/08 15:26:31| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 115: (2) No such file or directory
2011/10/08 15:26:37| errorpage.cc(288) errorTryLoadText: 
'/usr/local/squid/share/errors/eu/ERR_ACCESS_DENIED': (2) No such file 
or directory

2011/10/08 15:26:37| WARNING: Error Pages Missing Language: eu

Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 10/8/2011 2:59 PM, Luis Daniel Lucio Quiroz wrote:

2011/10/8 Wilson Hernandez wil...@optimumwireless.com:

I've tried not to cache any facebook content like this:

acl special_domains dstdomain .facebook.com .fbcdn.net .verisign.com
cache deny special_domains

But, that is not working either.

Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/8/2011 2:40 PM, Wilson Hernandez wrote:

Hello.

Our LAN users have been experiencing problems accessing facebook.com
through squid... Most of the time it is so slow that the page doesn't even
respond to anything. Sometimes this error is returned by squid: (104)
Connection reset by peer.

Everything was working fine for a while but things have been a little
strange in the past week.

I don't know what's going on and can't even troubleshoot it. Is it only
happening in our LAN?

Thanks in advance for your time.


Check the content of cache.log and paste the access.log line that fails;
this will help more than only guessing.

LD
http://www.twitter.com/ldlq


Re: [squid-users] Facebook page very slow to respond

2011-10-08 Thread Wilson Hernandez
()d. Thus, it is safe to set
#   memory_pools_limit to a reasonably high value even if your
#   configuration will use less memory.
#
#   If set to zero, Squid will keep all memory it can. That is, there
#   will be no limit on the total amount of memory used for 
safe-keeping.

#
#   To disable memory allocation optimization, do not set
#   memory_pools_limit to 0. Set memory_pools to off instead.
#
#   An overhead for maintaining memory pools is not taken into account

memory_pools_limit 64 MB

#  TAG: refresh_all_ims on|off
#   When you enable this option, squid will always check
#   the origin server for an update when a client sends an
#   If-Modified-Since request.  Many browsers use IMS
#   requests when the user requests a reload, and this
#   ensures those clients receive the latest version.
#
#   By default (off), squid may return a Not Modified response
#   based on the age of the cached version.
#
#Default:
refresh_all_ims off

#  TAG: reload_into_ims on|off
#   When you enable this option, client no-cache or ``reload''
#   requests will be changed to If-Modified-Since requests.
#   Doing this VIOLATES the HTTP standard.  Enabling this

reload_into_ims off

#  TAG: retry_on_error
#   If set to on Squid will automatically retry requests when
#   receiving an error response. This is mainly useful if you
#   are in a complex cache hierarchy to work around access
#   control errors.
#
#Default:
retry_on_error on

#  TAG: coredump_dir
#   By default Squid leaves core files in the directory from where
#   it was started. If you set 'coredump_dir' to a directory
#   that exists, Squid will chdir() to that directory at startup
#   and coredump files will be left there.
#
#Default:
# coredump_dir none
#
# Leave coredumps in the first cache dir
coredump_dir none

#  TAG: pipeline_prefetch
#   To boost the performance of pipelined requests to closer
#   match that of a non-proxied environment Squid can try to fetch
#   up to two requests in parallel from a pipeline.
#
#   Defaults to off for bandwidth management and access logging
#   reasons.
#
#Default:
pipeline_prefetch on

http_access allow clientes_registrados

shutdown_lifetime 45 seconds

http_access deny all


Wilson Hernandez
www.figureo56.com
www.optimumwireless.com


On 10/8/2011 3:34 PM, Wilson Hernandez wrote:

So far this is what cache.log looks like:

2011/10/08 15:10:14| Starting Squid Cache version 3.1.14 for 
i686-pc-linux-gnu...

2011/10/08 15:10:14| Process ID 1498
2011/10/08 15:10:14| With 65536 file descriptors available
2011/10/08 15:10:14| Initializing IP Cache...
2011/10/08 15:10:14| DNS Socket created at [::], FD 7
2011/10/08 15:10:14| DNS Socket created at 0.0.0.0, FD 8
2011/10/08 15:10:14| Adding nameserver 172.16.0.2 from squid.conf
2011/10/08 15:10:14| Adding nameserver 172.16.0.1 from squid.conf
2011/10/08 15:10:14| helperOpenServers: Starting 10/10 'zapchain' 
processes

2011/10/08 15:10:15| Unlinkd pipe opened on FD 33
2011/10/08 15:10:15| Swap maxSize 102400000 + 1048576 KB, estimated 
1616384 objects

2011/10/08 15:10:15| Target number of buckets: 80819
2011/10/08 15:10:15| Using 131072 Store buckets
2011/10/08 15:10:15| Max Mem  size: 1048576 KB
2011/10/08 15:10:15| Max Swap size: 102400000 KB
2011/10/08 15:10:16| Version 1 of swap file with LFS support detected...
2011/10/08 15:10:16| Rebuilding storage in /var2/squid/cache (DIRTY)
2011/10/08 15:10:16| Using Round Robin store dir selection
2011/10/08 15:10:16| Current Directory is /etc/squid
2011/10/08 15:10:16| Loaded Icons.
2011/10/08 15:10:16| Accepting  intercepted HTTP connections at 
172.16.0.1:3128, FD 37.

2011/10/08 15:10:16| HTCP Disabled.
2011/10/08 15:10:16| Squid plugin modules loaded: 0
2011/10/08 15:10:16| Adaptation support is off.
2011/10/08 15:10:16| Ready to serve requests.
2011/10/08 15:10:17| Store rebuilding is 0.01% complete
2011/10/08 15:26:27| Done reading /var2/squid/cache swaplog (32422570 
entries)

2011/10/08 15:26:27| Finished rebuilding storage from disk.
2011/10/08 15:26:27|   18466338 Entries scanned
2011/10/08 15:26:27| 0 Invalid entries.
2011/10/08 15:26:27| 0 With invalid flags.
2011/10/08 15:26:27|   4511874 Objects loaded.
2011/10/08 15:26:27| 0 Objects expired.
2011/10/08 15:26:27|   13954464 Objects cancelled.
2011/10/08 15:26:27| 0 Duplicate URLs purged.
2011/10/08 15:26:27| 0 Swapfile clashes avoided.
2011/10/08 15:26:27|   Took 970.78 seconds (4647.68 objects/sec).
2011/10/08 15:26:27| Beginning Validation Procedure
2011/10/08 15:26:29|   524288 Entries Validated so far.
2011/10/08 15:26:29|   786432 Entries Validated so far.
2011/10/08 15:26:29|   1310720 Entries Validated so far.
2011/10/08 15:26:30|   1572864 Entries Validated so far.
2011/10/08 15:26:30|   1835008 Entries Validated so far.
2011/10/08 15:26:30|   2097152 Entries Validated so far.
2011/10/08 15:26:30

Re: [squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Wilson Hernandez
That used to happen to us and I had to write a script to start squid 
like this:


#!/bin/sh -e
#
echo "Starting squid..."

# raise the hard and soft open-file limit before launching squid
ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo "Done."
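
After a restart you can confirm the higher limit took effect;
squidclient is the same tool quoted below in this thread:

squidclient mgr:info | grep -i 'file desc'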




On 8/9/2011 10:47 PM, Justin Lawler wrote:

Hi,

We have two instances of squid (3.0.15) running on a solaris box. Every so 
often (maybe once every month) we get a load of the below errors:

2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open files

Sometimes it goes away of its own, sometimes squid crashes and restarts.

When it happens, generally happens on both instances of squid on the same box.

We have the number of open file descriptors set to 2048 - using squidclient mgr:info:

root@squid01# squidclient mgr:info | grep file
 Maximum number of file descriptors:   2048
 Largest file desc currently in use:   2041
 Number of file desc currently in use: 1903
 Available number of file descriptors:  138
 Reserved number of file descriptors:   100
 Store Disk files open:  68

We're using squid as an ICAP client. Both squid instances point two different 
ICAP servers, so it's unlikely a problem with the ICAP server.

Is this a known issue? As it's going on for a long time (over 40 minutes 
continuously), it doesn't seem like it's just the traffic spiking for a long 
period. Also, we're not seeing it on other boxes, which are load balanced.

Any pointers much appreciated.

Regards,
Justin
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp





Re: [squid-users] Slow Internet with squid

2011-07-21 Thread Wilson Hernandez

On 7/20/2011 9:12 PM, Amos Jeffries wrote:

On Wed, 20 Jul 2011 11:12:04 -0400, Wilson Hernandez wrote:

Hello.

I am puzzled to see how my bandwidth is used when running squid. I have
a total of 25M/3M of bandwidth. Lately I've noticed with iptraf that my
external interface traffic/bandwidth is almost maxed out at 24.8M while my
internal interface (squid) is only at 2.9M; as a result most clients have
been calling saying their internet is slow.

I'm wondering why that big of a difference on the interfaces' traffic.

This is what cachemgr shows:

Squid Object Cache: Version 3.1.14

Start Time:     Fri, 15 Jul 2011 08:01:48 GMT
Current Time:   Wed, 20 Jul 2011 14:39:02 GMT


Connection information for squid:
Number of clients accessing cache:      113
Number of HTTP requests received:       5198204
Number of ICP messages received:        0
Number of ICP messages sent:    0
Number of queued ICP replies:   0
Number of HTCP messages received:       0
Number of HTCP messages sent:   0
Request failure ratio:  0.00
Average HTTP requests per minute since start:   684.2
Average ICP messages per minute since start:    0.0
Select loop called: 479758718 times, 0.950 ms avg
Cache information for squid:
Hits as % of all requests:      5min: 23.2%, 60min: 19.4%
Hits as % of bytes sent:        5min: -219.3%, 60min: -314.7%
Memory hits as % of hit requests:       5min: 13.2%, 60min: 9.5%
Disk hits as % of hit requests: 5min: 64.6%, 60min: 62.5%
Storage Swap size:      66028580 KB
Storage Swap capacity:  64.5% used, 35.5% free
Storage Mem size:       1042556 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:       23.52 KB
Requests given to unlinkd:      0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.12106  0.02069
Cache Misses:  0.24524  0.30459
Cache Hits:0.05046  0.02899
Near Hits: 0.17711  0.22004
Not-Modified Replies:  0.00307  0.00091
DNS Lookups:   0.31806  0.17048


DNS is very slow as well. Probably due to remote queries over this 
full link.




Please help me understand why this is happening and if there is a
solution to make squid perform better.


Squid optimizes web delivery as the slogan goes. So when the server 
side is acting very inefficiently it can consume more than the client 
side. Could be any of these or a few other things I'm not aware of:


1) client requests an object. Squid has it cached, but server is 
requiring 'must-revalidate'. While revalidating the server forces an 
entire new object back at squid, along with a timestamp stating it has 
not changed. Squid only sends the small no-change reply to the client.


2a) client requests a small range of an object. Squid passes this on. 
The server replies, again forcing an entire new object back at squid. 
Squid only sends the small range asked for to the client.


2b) client requests a small range of an object. Squid passes this on 
but requests the full object (range_offset_limit). The server replies 
with the whole object. Squid stores it and only sends the small range 
asked for to the client.


3) client requests info about an object (HEAD). Squid relays this 
request on. Server replies, forcing an entire new object back at 
squid. Squid only sends the small header asked for to the client.


4) client requests an object, then abandons it before receiving the 
reply. Squid continues to wait and receive it, in hopes that it can be 
stored. If not storable it may be discarded and the cycle repeat. Or 
it could be stored but never again requested. This behaviour is 
modified by the quick_abort_* directives.
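
If case 4 is the suspect, those directives can be tightened; a sketch
with illustrative values, not a recommendation:

# finish abandoned fetches only when little is left, abort the rest
quick_abort_min 16 KB
quick_abort_max 64 KB
quick_abort_pct 95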



Or it could be you configured an open proxy. Configuration problems 
can allow external clients access to external sites. When discovered, 
attackers can use this and consume all your external bandwidth. Usually 
it's caused by mistakenly removing or bypassing the controls on CONNECT 
tunnels. Though it can also happen on other requests.
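
The usual safeguard is the stock rule set; a sketch of the standard
squid.conf CONNECT controls:

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
# and allow only your own networks before the final deny:
http_access allow localnet
http_access deny all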


Amos

Amos.

Thanks for replying. Now you have left me confused and in doubt. I 
don't know if my actual configuration is ok or not. I will post it here 
so you can take a look and let me know where or what I'm doing wrong. 
Thanks again.


squid.conf:
# Port Squid listens on
http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off

error_default_language es-do

# Access-lists (ACLs) will permit or deny hosts to access the proxy
acl lan-access src 172.16.0.0/16
acl localhost src 127.0.0.1
acl localnet src 172.16.0.0/16
acl proxy src 172.16.0.1
acl clientes_registrados src /etc/msd/ipAllowed

acl adstoblock dstdomain /etc/squid/blockAds

acl CONNECT method CONNECT


# Do not cache these

#acl special_domains dstdomain .facebook.com .fbcdn.net .verisign.com 
.mail.yahoo.com

#cache deny special_domains

#---

http_access allow proxy
http_access allow localhost

# Block some sites

acl blockanalysis01 dstdomain .scorecardresearch.com clkads.com
acl blockads01  dstdomain .rad.msn.com ads1.msn.com ads2.msn.com

[squid-users] Slow Internet with squid

2011-07-20 Thread Wilson Hernandez

Hello.

I am puzzled to see how my bandwidth is used when running squid. I have
a total of 25M/3M of bandwidth. Lately I've noticed with iptraf that my
external interface traffic/bandwidth is almost maxed out at 24.8M while my
internal interface (squid) is only at 2.9M; as a result most clients have
been calling saying their internet is slow.

I'm wondering why that big of a difference on the interfaces' traffic.

This is what cachemgr shows:

Squid Object Cache: Version 3.1.14

Start Time: Fri, 15 Jul 2011 08:01:48 GMT
Current Time:   Wed, 20 Jul 2011 14:39:02 GMT


Connection information for squid:
Number of clients accessing cache:  113
Number of HTTP requests received:   5198204
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   684.2
Average ICP messages per minute since start:0.0
Select loop called: 479758718 times, 0.950 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 23.2%, 60min: 19.4%
Hits as % of bytes sent:5min: -219.3%, 60min: -314.7%
Memory hits as % of hit requests:   5min: 13.2%, 60min: 9.5%
Disk hits as % of hit requests: 5min: 64.6%, 60min: 62.5%
Storage Swap size:  66028580 KB
Storage Swap capacity:  64.5% used, 35.5% free
Storage Mem size:   1042556 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:   23.52 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.12106  0.02069
Cache Misses:  0.24524  0.30459
Cache Hits:0.05046  0.02899
Near Hits: 0.17711  0.22004
Not-Modified Replies:  0.00307  0.00091
DNS Lookups:   0.31806  0.17048

Please help me understand why this is happening and if there is a solution to 
make squid perform better.

Thanks.



Re: [squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-09 Thread Wilson Hernandez

Thanks again, Amos, for replying.

When the cache was being rebuilt (creating the swap.state file) I didn't 
have any connections through squid and it was still taking too much time: 
over 24hrs.


I had to delete the entire cache, something I really didn't want to do. 
Also, upgraded to squid 3.1.14. Hope that version works better.


Thanks.

On 7/8/2011 1:04 AM, Amos Jeffries wrote:

On 08/07/11 02:58, Wilson Hernandez wrote:

On 7/7/2011 10:42 AM, Wilson Hernandez wrote:

snip

Forgot to mention:

I am running squid along with squidguard and I don't think it messes
with squid's cache directory... This cache was used by an older squid
version (3.1.9) that maybe messed it up.

Any recommendation?


SG does not play with Squid's cache.

It could be 3.1.9 ignoring absent Date: headers and letting those into 
the cache. In which case you should see those particular warnings 
reduce as the bad objects are found.



On 08/07/11 02:36, Wilson Hernandez wrote:


 I did not override the waiting with -F. This cache directory is 200GB
 not in the TB. I have this system running on a 3.2Ghz, 4GB RAM machine

I refer you back to Squid's message:
 WARNING: Disk space over limit: 3615298144 KB > 204800000 KB

in other words:
  squid.conf says 200000 MB (~196GB). But Squid found 3.6 TB of stored 
objects in the cache_dir.



I suspect this is a symptom of traffic going faster than Squid's erase 
mechanisms can cope with. So lots of supposedly erased files are left 
on disk and recovered during the DIRTY scan.

 Only faster disks can fix this for the current Squid.

 and is taking for ever to rebuild the cache (almost 20hrs), don't know
 why is taking so looong. I am thinking of deleting the entire cache 
but,

 don't want to loose it all.

Squid is doing a DIRTY scan. It opens every folder and every file 
inside, loads the file data, parses it into the index and moves on to 
the next one.
 There are 3.2 million objects to do this for, at the same time as 
handling live traffic which is adding new ones rapidly. 10-20 hours is 
fairly normal for a 200 GB cache of small objects.


 The best way to avoid this lag in older Squid is to let it complete and 
save the results into swap.state. And to ensure that shutdown_lifetime 
is long enough that swap.state is fully saved on shutdown in future.
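
For reference, a sketch with an illustrative value (the squid default
is 30 seconds; the config posted elsewhere in this digest uses 45):

# allow more time for swap.state to be written out cleanly on shutdown
shutdown_lifetime 120 seconds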


 You can use a workaround of starting an instance of Squid without 
caching and letting clients use that directly. Starting a second in 
the background to do the caching. With a bit of config tweaking the 
first one can bypass the second until the cache is ready and available.



The squid-3.2 rock store has some fixes trying to improve the 
situation. But I'm not sure yet how successful they are.



On 08/07/11 02:58, Wilson Hernandez wrote:

Just installed ver 3.1.14 and get the same results: no internet and it is
taking a looong time for swap.state to complete.

2011/07/07 10:52:27| Starting Squid Cache version 3.1.14 for
i686-pc-linux-gnu...
2011/07/07 10:52:27| Process ID 13441
2011/07/07 10:52:27| With 32768 file descriptors available
2011/07/07 10:52:27| Initializing IP Cache...
2011/07/07 10:52:27| DNS Socket created at [::], FD 7
2011/07/07 10:52:27| DNS Socket created at 0.0.0.0, FD 8
2011/07/07 10:52:27| Adding nameserver 172.16.0.2 from squid.conf
2011/07/07 10:52:27| Adding nameserver 172.16.0.1 from squid.conf
2011/07/07 10:52:27| helperOpenServers: Starting 10/10 'zapchain' 
processes

2011/07/07 10:52:27| Unlinkd pipe opened on FD 32
2011/07/07 10:52:27| Store logging disabled
2011/07/07 10:52:27| Swap maxSize 204800000 + 1048576 KB, estimated
3216384 objects
2011/07/07 10:52:27| Target number of buckets: 160819
2011/07/07 10:52:27| Using 262144 Store buckets
2011/07/07 10:52:27| Max Mem size: 1048576 KB
2011/07/07 10:52:27| Max Swap size: 204800000 KB
2011/07/07 10:52:27| Rebuilding storage in /var2/squid/cache (DIRTY)
2011/07/07 10:52:27| Using Round Robin store dir selection
2011/07/07 10:52:27| Current Directory is /workingdir/squid-3.1.14
2011/07/07 10:52:27| Loaded Icons.
2011/07/07 10:52:27| Accepting intercepted HTTP connections at
172.16.0.1:3128, FD 34.
2011/07/07 10:52:27| HTCP Disabled.
2011/07/07 10:52:27| Squid plugin modules loaded: 0
2011/07/07 10:52:27| Adaptation support is off.
2011/07/07 10:52:27| Ready to serve requests.




Amos




Re: [squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-07 Thread Wilson Hernandez

On 7/7/2011 1:25 AM, Amos Jeffries wrote:

On 07/07/11 05:32, Wilson Hernandez wrote:

On 7/6/2011 1:08 PM, Wilson Hernandez wrote:

On 7/6/2011 12:28 PM, Wilson Hernandez wrote:

Hello.

I have a partition for squid that is almost full (I'm getting a
message):
WARNING: An error inside Squid has caused an HTTP reply without
Date:. Please report this
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5D8
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5E4
2011/07/06 12:23:35| clientProcessRequest: Invalid Request
2011/07/06 12:23:40| WARNING: Disk space over limit: 3615331284 KB > 
204800000 KB
2011/07/06 12:23:41| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:41| /var2/squid/cache/18/A9/0057F8BD
2011/07/06 12:23:42| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:42| /var2/squid/cache/18/A9/0057F8DC
2011/07/06 12:23:43| could not parse headers from on disk structure!
2011/07/06 12:23:43| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:23:44| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:44| /var2/squid/cache/18/A9/0057F8E8
2011/07/06 12:23:46| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:46| /var2/squid/cache/18/A9/0057F923
2011/07/06 12:23:51| WARNING: Disk space over limit: 3615319720 KB > 
204800000 KB
2011/07/06 12:24:02| WARNING: Disk space over limit: 3615308076 KB > 
204800000 KB
2011/07/06 12:24:07| could not parse headers from on disk structure!
2011/07/06 12:24:07| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:09| /var2/squid/cache/3C/0B/003B93E7
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:09| /var2/squid/cache/13/B8/0053126B
2011/07/06 12:24:10| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:10| /var2/squid/cache/00/68/6868
2011/07/06 12:24:10| could not parse headers from on disk structure!
2011/07/06 12:24:10| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:13| WARNING: Disk space over limit: 3615298144 KB > 
204800000 KB
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:14| /var2/squid/cache/11/0D/00506C25
2011/07/06 12:24:16| could not parse headers from on disk structure!
2011/07/06 12:24:16| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:23| could not parse headers from on disk structure!
2011/07/06 12:24:23| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:24| WARNING: Disk space over limit: 3615285372 KB > 20480 KB


I don't know if this is caused by disk space or what. If is caused by
disk space how can I clean old cached files from the squid's cache?


Looks like something other than Squid is playing with the cache files 
on disk.
 No such file or directory - something erased a file without 
informing Squid. Though it could be that Squid scheduled the file for 
erase and the OS did not get around to it until after Squid was 
re-using the same fileno/name for another object. (rare condition but 
possible).


Disk space over limit - seriously nasty. The object size accounting 
in Squid has been screwed up somehow.
 Squid will divert resources to erasing that extra 3.4 _TB_ of 
unwanted cache content ASAP. I'm a little surprised to see numbers 
that far out in 3.1.11, AFAIK the cache accounting was fixed in an 
earlier 3.1 release.
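
A quick way to sanity-check the accounting is to compare what is actually 
on disk with the configured limit. A minimal sketch, assuming the cache 
path from this thread and a stock squid.conf location:

du -skx /var2/squid/cache                 # KB actually used on disk
grep '^cache_dir' /etc/squid/squid.conf   # configured limit (in MB) to compare

If du reports far less than the ~3.4 TB Squid believes it holds, it is the 
swap.state index that is out of sync, not the disk.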



could not parse headers from on disk structure! and WARNING: An 
error inside Squid has caused an HTTP reply without Date:.
 - strong signs of either disk corruption, a bug, or objects stored by 
some older Squid version without Date: headers. We are still trying to 
track down which. If you can move to 3.1.12 there are some extra 
details logged about those objects to help figure this out.





Thanks.


/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.1.11
configure options: '--prefix=/usr/local/squid'
'--sysconfdir=/etc/squid

Re: [squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-07 Thread Wilson Hernandez

On 7/7/2011 10:42 AM, Wilson Hernandez wrote:

On 7/7/2011 1:25 AM, Amos Jeffries wrote:

On 07/07/11 05:32, Wilson Hernandez wrote:

On 7/6/2011 1:08 PM, Wilson Hernandez wrote:

On 7/6/2011 12:28 PM, Wilson Hernandez wrote:

Hello.

I have a partition for squid that is almost full (I'm getting a
message):
WARNING: An error inside Squid has caused an HTTP reply without
Date:. Please report this
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5D8
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5E4
2011/07/06 12:23:35| clientProcessRequest: Invalid Request
2011/07/06 12:23:40| WARNING: Disk space over limit: 3615331284 KB > 20480 KB
2011/07/06 12:23:41| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:41| /var2/squid/cache/18/A9/0057F8BD
2011/07/06 12:23:42| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:42| /var2/squid/cache/18/A9/0057F8DC
2011/07/06 12:23:43| could not parse headers from on disk structure!
2011/07/06 12:23:43| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:23:44| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:44| /var2/squid/cache/18/A9/0057F8E8
2011/07/06 12:23:46| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:23:46| /var2/squid/cache/18/A9/0057F923
2011/07/06 12:23:51| WARNING: Disk space over limit: 3615319720 KB > 20480 KB
2011/07/06 12:24:02| WARNING: Disk space over limit: 3615308076 KB > 20480 KB
2011/07/06 12:24:07| could not parse headers from on disk structure!
2011/07/06 12:24:07| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:09| /var2/squid/cache/3C/0B/003B93E7
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:09| /var2/squid/cache/13/B8/0053126B
2011/07/06 12:24:10| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:10| /var2/squid/cache/00/68/6868
2011/07/06 12:24:10| could not parse headers from on disk structure!
2011/07/06 12:24:10| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:13| WARNING: Disk space over limit: 3615298144 KB > 20480 KB
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:14| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2011/07/06 12:24:14| /var2/squid/cache/11/0D/00506C25
2011/07/06 12:24:16| could not parse headers from on disk structure!
2011/07/06 12:24:16| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:23| could not parse headers from on disk structure!
2011/07/06 12:24:23| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this
2011/07/06 12:24:24| WARNING: Disk space over limit: 3615285372 KB > 20480 KB


I don't know if this is caused by disk space or what. If is caused by
disk space how can I clean old cached files from the squid's cache?


Looks like something other than Squid is playing with the cache files 
on disk.
 No such file or directory - something erased a file without 
informing Squid. Though it could be that Squid scheduled the file for 
erase and the OS did not get around to it until after Squid was 
re-using the same fileno/name for another object. (rare condition but 
possible).


Disk space over limit - seriously nasty. The object size accounting 
in Squid has been screwed up somehow.
 Squid will divert resources to erasing that extra 3.4 _TB_ of 
unwanted cache content ASAP. I'm a little surprised to see numbers 
that far out in 3.1.11, AFAIK the cache accounting was fixed in an 
earlier 3.1 release.



could not parse headers from on disk structure! and WARNING: An 
error inside Squid has caused an HTTP reply without Date:.
 - strong signs of either disk corruption, a bug, or objects stored 
by some older Squid version without Date: headers. We are still 
trying to track down which. If you can move to 3.1.12 there are some 
extra details logged about those objects to help figure this out.





Thanks.


/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.1.11
configure options: '--prefix

[squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-06 Thread Wilson Hernandez

Hello.

I have a partition for squid that is almost full (I'm getting a message):
WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5D8
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5E4
2011/07/06 12:23:35| clientProcessRequest: Invalid Request
2011/07/06 12:23:40| WARNING: Disk space over limit: 3615331284 KB > 20480 KB
2011/07/06 12:23:41| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:41| /var2/squid/cache/18/A9/0057F8BD
2011/07/06 12:23:42| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:42| /var2/squid/cache/18/A9/0057F8DC
2011/07/06 12:23:43| could not parse headers from on disk structure!
2011/07/06 12:23:43| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:44| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:44| /var2/squid/cache/18/A9/0057F8E8
2011/07/06 12:23:46| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:46| /var2/squid/cache/18/A9/0057F923
2011/07/06 12:23:51| WARNING: Disk space over limit: 3615319720 KB > 20480 KB
2011/07/06 12:24:02| WARNING: Disk space over limit: 3615308076 KB > 20480 KB
2011/07/06 12:24:07| could not parse headers from on disk structure!
2011/07/06 12:24:07| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/3C/0B/003B93E7
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/13/B8/0053126B
2011/07/06 12:24:10| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:10| /var2/squid/cache/00/68/6868
2011/07/06 12:24:10| could not parse headers from on disk structure!
2011/07/06 12:24:10| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:13| WARNING: Disk space over limit: 3615298144 KB > 20480 KB
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:14| /var2/squid/cache/11/0D/00506C25
2011/07/06 12:24:16| could not parse headers from on disk structure!
2011/07/06 12:24:16| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:23| could not parse headers from on disk structure!
2011/07/06 12:24:23| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:24| WARNING: Disk space over limit: 3615285372 KB > 20480 KB



I don't know if this is caused by disk space or what. If is caused by 
disk space how can I clean old cached files from the squid's cache?


Thanks.


Re: [squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-06 Thread Wilson Hernandez

On 7/6/2011 12:28 PM, Wilson Hernandez wrote:

Hello.

I have a partition for squid that is almost full (I'm getting a message):
WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5D8
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5E4
2011/07/06 12:23:35| clientProcessRequest: Invalid Request
2011/07/06 12:23:40| WARNING: Disk space over limit: 3615331284 KB > 20480 KB
2011/07/06 12:23:41| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:41| /var2/squid/cache/18/A9/0057F8BD
2011/07/06 12:23:42| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:42| /var2/squid/cache/18/A9/0057F8DC
2011/07/06 12:23:43| could not parse headers from on disk structure!
2011/07/06 12:23:43| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:44| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:44| /var2/squid/cache/18/A9/0057F8E8
2011/07/06 12:23:46| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:46| /var2/squid/cache/18/A9/0057F923
2011/07/06 12:23:51| WARNING: Disk space over limit: 3615319720 KB > 20480 KB
2011/07/06 12:24:02| WARNING: Disk space over limit: 3615308076 KB > 20480 KB
2011/07/06 12:24:07| could not parse headers from on disk structure!
2011/07/06 12:24:07| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/3C/0B/003B93E7
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/13/B8/0053126B
2011/07/06 12:24:10| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:10| /var2/squid/cache/00/68/6868
2011/07/06 12:24:10| could not parse headers from on disk structure!
2011/07/06 12:24:10| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:13| WARNING: Disk space over limit: 3615298144 KB > 20480 KB
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:14| /var2/squid/cache/11/0D/00506C25
2011/07/06 12:24:16| could not parse headers from on disk structure!
2011/07/06 12:24:16| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:23| could not parse headers from on disk structure!
2011/07/06 12:24:23| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:24| WARNING: Disk space over limit: 3615285372 KB > 20480 KB



I don't know if this is caused by disk space or what. If is caused by 
disk space how can I clean old cached files from the squid's cache?


Thanks.


/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.1.11
configure options:  '--prefix=/usr/local/squid' 
'--sysconfdir=/etc/squid' '--enable-delay-pools' 
'--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=es' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid' 
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
'--enable-removal-policies=heap,lru' '--with-maxfd=65536' 
--with-squid=/workingdir/squid-3.1.11 --enable-ltdl-convenience





Re: [squid-users] WARNING: An error inside Squid has caused an HTTP reply without Date

2011-07-06 Thread Wilson Hernandez

On 7/6/2011 1:08 PM, Wilson Hernandez wrote:

On 7/6/2011 12:28 PM, Wilson Hernandez wrote:

Hello.

I have a partition for squid that is almost full (I'm getting a 
message):
WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5D8
2011/07/06 12:23:35| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:35| /var2/squid/cache/18/A6/0057F5E4
2011/07/06 12:23:35| clientProcessRequest: Invalid Request
2011/07/06 12:23:40| WARNING: Disk space over limit: 3615331284 KB > 20480 KB
2011/07/06 12:23:41| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:41| /var2/squid/cache/18/A9/0057F8BD
2011/07/06 12:23:42| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:42| /var2/squid/cache/18/A9/0057F8DC
2011/07/06 12:23:43| could not parse headers from on disk structure!
2011/07/06 12:23:43| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:23:44| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:44| /var2/squid/cache/18/A9/0057F8E8
2011/07/06 12:23:46| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:23:46| /var2/squid/cache/18/A9/0057F923
2011/07/06 12:23:51| WARNING: Disk space over limit: 3615319720 KB > 20480 KB
2011/07/06 12:24:02| WARNING: Disk space over limit: 3615308076 KB > 20480 KB
2011/07/06 12:24:07| could not parse headers from on disk structure!
2011/07/06 12:24:07| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/3C/0B/003B93E7
2011/07/06 12:24:09| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:09| /var2/squid/cache/13/B8/0053126B
2011/07/06 12:24:10| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:10| /var2/squid/cache/00/68/6868
2011/07/06 12:24:10| could not parse headers from on disk structure!
2011/07/06 12:24:10| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:13| WARNING: Disk space over limit: 3615298144 KB > 20480 KB
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| could not parse headers from on disk structure!
2011/07/06 12:24:14| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:14| DiskThreadsDiskFile::openDone: (2) No such file or directory
2011/07/06 12:24:14| /var2/squid/cache/11/0D/00506C25
2011/07/06 12:24:16| could not parse headers from on disk structure!
2011/07/06 12:24:16| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:23| could not parse headers from on disk structure!
2011/07/06 12:24:23| WARNING: An error inside Squid has caused an HTTP reply without Date:. Please report this
2011/07/06 12:24:24| WARNING: Disk space over limit: 3615285372 KB > 20480 KB



I don't know if this is caused by disk space or what. If is caused by 
disk space how can I clean old cached files from the squid's cache?


Thanks.


/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.1.11
configure options:  '--prefix=/usr/local/squid' 
'--sysconfdir=/etc/squid' '--enable-delay-pools' 
'--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=es' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid' 
'--enable-stacktraces' '--with-default-user=proxy' 
'--with-large-files' '--enable-icap-client' '--enable-async-io' 
'--enable-storeio=aufs' '--enable-removal-policies=heap,lru' 
'--with-maxfd=65536' --with-squid=/workingdir/squid-3.1.11 
--enable-ltdl-convenience




Hello.

I deleted the swap.state file, re-ran squid -z, and got the following:

2011/07/06 13:29:29| Rebuilding storage in /var2/squid/cache (DIRTY)
2011/07/06 13:29:29| Using Round Robin store dir selection
2011/07/06 13:29:29| Current Directory is /
2011/07/06 13:29:29| Loaded Icons.
2011/07/06 13:29:29| Accepting  intercepted HTTP connections at 
172.16.0.1:3128, FD 34.

2011/07/06 13:29:29| HTCP Disabled.
2011/07/06 13:29:29| Squid plugin modules loaded: 0
2011/07/06 13:29:29| Adaptation support is off.
2011/07/06 13:29:29| Ready to serve requests.


but there is no access to the internet.

What's wrong now?
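
For reference, one way to force a clean rebuild of a damaged cache index, 
as a sketch (assumes the squid binary is on PATH and the cache_dir from 
this thread):

squid -k shutdown                       # stop squid cleanly
rm -f /var2/squid/cache/swap.state*     # drop only the index, keep the objects
squid -z                                # recreate any missing cache directories
squid                                   # start; a DIRTY rebuild rescans the directories

The DIRTY rebuild can take a long time on a large cache, and requests are 
served slowly until it finishes, which can look like no access at all.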




[squid-users] Compress/Zipped Web pages?

2009-06-30 Thread Wilson Hernandez - MSD, S. A.

Hello.

I heard a friend of mine talking about how he compresses the requested 
web pages and serves them to users (compressed) with MS ISA Server. Can 
that be done with Squid, if that is true?


[squid-users] Help me save bandwidth

2009-04-27 Thread Wilson Hernandez - MSD, S. A.

Hello everybody.

I am somewhat confused about how squid helps to save bandwidth. I know it
saves visited websites to cache and when someone else requests the same
site it will serve it from the cache. Please correct me if that is wrong.

Now, I've been checking my traffic before (external nic) and after
(inside network) squid. Every time I request a page (google.com) the
request is also sent to the internet so, in this case, there isn't
much saving done. But, if I have offline_mode on I get the old page
stored locally. It seems that bandwidth is only being saved when I
have offline_mode enabled.

Please correct me and help me understand how squid can really help
maintain and save bandwidth on my network.

Thanks in advance for your help.
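
A large part of the answer is refresh_pattern, which controls how long 
objects may be served from cache without revalidation. A minimal 
squid.conf sketch (values are illustrative, not tuned for any network):

# min(minutes)  percent  max(minutes)
refresh_pattern ^ftp:                       1440  20%  10080
refresh_pattern -i \.(gif|jpg|png|css|js)$  1440  50%  20160
refresh_pattern .                           0     20%  4320

Dynamic pages such as google.com's front page are mostly uncacheable, so 
seeing those requests go out again is expected; the savings come from 
static objects (images, scripts, downloads) shared between users.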




Re: [squid-users] Long running squid proxy slows way down

2009-04-27 Thread Wilson Hernandez - MSD, S. A.



I have a similar setup; squid was slow and crashed when it had been 
running a long time, crashing every three to six days. I never found out 
why it crashed. I looked in the log files and couldn't find anything. It 
just crashed for no reason. There are some posts to the list about it. I 
decided to restart the system every day from a cron job at 4am. I know 
that doesn't sound too stable as I'm running it on a linux box but, it 
worked. It hasn't crashed since.


Re: [squid-users] Squid and TC - Traffic Shaping

2009-04-23 Thread Wilson Hernandez - MSD, S. A.

I have a similar script but, I'm doing the filtering with iptables:

$iptables -t mangle -A FORWARD -s 192.168.2.100 -j MARK --set-mark 1
$iptables -t mangle -A FORWARD -d 192.168.2.100 -j MARK --set-mark 1

$iptables -t mangle -A FORWARD -s 192.168.2.101 -j MARK --set-mark 2
$iptables -t mangle -A FORWARD -d 192.168.2.101 -j MARK --set-mark 2

... and so on.

$tc qdisc del dev eth1 root

$tc qdisc add dev eth1 root handle 1:0 htb default 20 r2q 1

# Class
$tc class add dev eth1 parent 1:   classid 1:1 htb rate 384kbit ceil 384kbit
$tc class add dev eth1 parent 1:1  classid 1:2 htb rate 200kbit ceil 200kbit
$tc class add dev eth1 parent 1:1  classid 1:3 htb rate 100kbit ceil 150kbit


$tc qdisc add dev eth1 parent 1:2 handle 20: sfq perturb 10
$tc qdisc add dev eth1 parent 1:3 handle 30: sfq perturb 10

$tc filter add dev eth1 protocol ip parent 1:0 prio 1 handle 1 fw 
classid 1:2
$tc filter add dev eth1 protocol ip parent 1:0 prio 1 handle 2 fw 
classid 1:3

... more filtering.
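
When the measured speeds look wrong, it helps to see which classes the 
traffic is actually hitting (all standard iproute2/iptables commands):

tc -s qdisc show dev eth1             # queue statistics
tc -s class show dev eth1             # per-class byte/packet counters
iptables -t mangle -L FORWARD -v -n   # packet counts on the MARK rules

One likely explanation for the results below: with an intercepting squid 
on the router, LAN-bound traffic originates from the proxy itself rather 
than being forwarded, so FORWARD-chain marks never match it.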



Every time I test with squid running I don't get the speed I'm 
supposed to obtain but, when I disable squid and run a test, I get the 
correct speed on both computers: 194 - 200 max for 192.168.2.100 and 96 
- 99 for 192.168.2.101. When I run the test with squid running I get 
different speeds on both machines ranging from 50 to 300.


I know it sounds strange but, I've run the tests too many times now and 
I get the same results.


I'm using squid 3.0 STABLE 11.

Thanks.

Indunil Jayasooriya wrote:

On Wed, Apr 22, 2009 at 2:55 PM, Amos Jeffries squ...@treenet.co.nz wrote:

Wilson Hernandez - MSD, S. A. wrote:

Hello.

I was writing a script to control traffic on our network. I created my
rules with tc and noticed that it wasn't working correctly.

I tried this traffic shaping on a linux router that has squid doing
transparent cache.

When measuring the download speed on speedtest.net the download speed is
70kbps when it is supposed to be over 300kbps. I found it strange since
I've done traffic shaping in the past and it worked, but not on a box with
squid. I stopped the squid server and ran the test again and it gave me
the speed I assigned to that machine. I assigned different bw and the
test gave the correct speed.

Has anybody used traffic shaping (TC in linux) on a box with squid? Is
there a way to combine both and have them work side by side?


About 2 years ago, I used the below script on a CentOS 4.4 box acting
as a firewall (iptables), router (iproute2) and squid 2.5 transparent
interceptor.



#traffic shaping on eth1 - i.e: LAN INTERFACE (For Downloading). eth0
is connected to the Internet

INTERFAZ_LAN=eth1
FULLBANDWIDTH=256
BANDWIDTH4LAN=64

tc qdisc del root dev $INTERFAZ_LAN

tc qdisc add dev $INTERFAZ_LAN root handle 1: htb r2q 4
tc class add dev $INTERFAZ_LAN parent 1: classid 1:1 htb rate ${FULLBANDWIDTH}kbit
tc class add dev $INTERFAZ_LAN parent 1:1 classid 1:10 htb rate ${BANDWIDTH4LAN}kbit
tc qdisc add dev $INTERFAZ_LAN parent 1:10 handle 10: sfq perturb 10
tc filter add dev $INTERFAZ_LAN parent 1: protocol ip prio 1 u32 match ip dst 192.168.100.0/24 classid 1:10



192.168.100.0/24 is my LAN RANGE.

According to the above script, my FULL bandwidth was 256 kbit. I
allocated 64 kbit for downloading. It actually has NOTHING to do with
squid for me. ALL went fine with the iproute2 pkg.



I am also seeking a TC expert to help several users already needing to use
it with TPROXYv4 and/or WCCP setups.


I am NOT a tc expert. just a guy with an interest.







--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


[squid-users] Squid and TC - Traffic Shaping

2009-04-21 Thread Wilson Hernandez - MSD, S. A.

Hello.

I was writing a script to control traffic on our network. I created my
rules with tc and noticed that it wasn't working correctly.

I tried this traffic shaping on a linux router that has squid doing
transparent cache.

When measuring the download speed on speedtest.net the download speed is
70kbps when it is supposed to be over 300kbps. I found it strange since
I've done traffic shaping in the past and it worked, but not on a box with
squid. I stopped the squid server and ran the test again and it gave me
the speed I assigned to that machine. I assigned different bw and the
test gave the correct speed.

Has anybody used traffic shaping (TC in linux) on a box with squid? Is
there a way to combine both and have them work side by side?

Thanks in advance for your help and input.



[squid-users] Cache directory size decreased

2009-04-11 Thread Wilson Hernandez - MSD, S. A.

Hello..

I've noticed that my server's /var/log/squid/cache decreased from 37G to 
35G. I just need to know if this is normal.


thanks.


[squid-users] How to cache all traffic?

2009-03-21 Thread Wilson Hernandez - MSD, S. A.

Hello.

I would like to know a safe way to have a squid server cache 
everything (http, mp3s, mpegs, jpgs, gifs, and so on). Since most of my 
users visit the same pages I would like to have a big cache so that, when 
I need to do some network maintenance, I could unplug the internet cable 
and see if users can still receive some web content from the squid 
server. I don't know if that can be done but, that's just a thought.


I would like to cache everything though, to at least save bandwidth.

Thanks for your help.
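
A starting point only, as a sketch (the path, size and patterns are 
illustrative, and override-expire/ignore-reload deliberately violate HTTP 
freshness rules, so use them with care):

# squid.conf
cache_dir aufs /var2/squid/cache 60000 16 256
maximum_object_size 200 MB
refresh_pattern -i \.(mp3|mpg|mpeg|avi|zip|exe)$  10080 90% 43200 override-expire ignore-reload
refresh_pattern .                                 0     20% 4320

For the unplugged-cable test, offline_mode on makes squid serve what it 
has without revalidating, but only objects already in the cache, and 
never SSL or other uncacheable traffic.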


[squid-users] Segmentation Fault

2009-03-09 Thread Wilson Hernandez - MSD, S. A.

Hello.

Now, I have another problem when trying to build squid. After issuing
make all I get the following error:

cf_gen.cc: In function 'int main(int, char**)':
cf_gen.cc:499: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-4.3/README.Bugs for instructions.
make[1]: *** [cf_gen.o] Error 1
make[1]: Leaving directory `/working_dir/squid-3.0.STABLE12/src'
make: *** [all-recursive] Error 1

What causes this error?


--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente



Re: [squid-users] Squid fails

2009-03-08 Thread Wilson Hernandez - MSD, S. A.

Amos,

I rebuilt squid:

Squid Cache: Version 3.0.STABLE12
configure options:  '--prefix=/usr/local/squid' 
'--sysconfdir=/etc/squid' '--enable-delay-pools' 
'--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--disable-internal-dns' 
'--localstatedir=/var/log/squid' '--enable-stacktraces' 
'--with-default-user=proxy' '--with-large-files'


I added --disable-internal-dns and squid has been working fine. It's been 
working for about 2 days without a crash, where the previous build only 
worked for a couple of hours and then crashed.


Is it ok to run it like this, given that it is performing better than it was?

Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

Thank you Amos for your reply.

I downloaded version 3.0 and here how I built it:

Squid Cache: Version 3.0.STABLE12
  configure options:  '--prefix=/usr/local/squid' 
'--sysconfdir=/etc/squid'

  '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
  '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
  '--localstatedir=/var/log/squid'
  '--enable-stacktraces' '--with-default-user=proxy' 
'--with-large-files'


Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

Hello once again.

Here's my second problem I am experiencing with squid. Squid is 
running normally and after a while doesn't serve any pages; it gives 
the user an error regarding dns. I don't remember exactly, but it 
tells the user that it timed out trying to access the ip, yet that 
page (google.com) is used by many as their home page. I don't know 
why it is failing with some dns errors. I try doing a ping to the same 
address and the dns server resolves the ip.


What can be causing this to happen?


It's a DNS failure.

For better help we will need to know:
 * the version of squid you are using,
 * whether or not --disable-internal-dns was used to build it,
 * and what is the actual error page content given when things go wrong.

Amos




Okay those look normal enough.

for further tracking try running Squid with flags  -D -d 5 and see if 
you can grab what it produces on stderr during the system reboot.


 -D should stop it running DNS tests too early.
 -d 5 produces the debug before config has finished loading.
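
For example, capturing that output to a file across a reboot test (binary 
path from this thread; the log path is arbitrary):

/usr/local/squid/sbin/squid -D -d 5 2>> /tmp/squid-start.log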

Amos


--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Unsupported methods

2009-03-08 Thread Wilson Hernandez - MSD, S. A.

Where can I find the types of extension_methods?

You can add up to 20 additional request extension methods here for 
enabling Squid to allow access to unknown methods


But where can I find a list of these methods?

Thanks.
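
There is no fixed list; extension methods are whatever non-RFC methods 
your clients legitimately use. A hypothetical squid.conf line for squid 
2.x/3.0, using the usual WebDAV/Subversion examples:

extension_methods REPORT MERGE MKACTIVITY CHECKOUT

Note the 'NICK ...' requests in the logs above are IRC protocol commands 
(a chat client or bot talking through the proxy), not HTTP methods, so 
adding them to extension_methods would be the wrong fix.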




Chris Robertson wrote:

Wilson Hernandez - MSD, S. A. wrote:

Hello.

I noticed a lot of unsupported method entries in cache.log, and they are 
filling the log with those types of messages. What type of methods are 
these? Can someone please explain or guide me to where I can better 
understand the extension methods and/or their types, and what they 
really are? So I can avoid having these types of logs:


2009/02/17 15:18:46| clientParseRequestMethod: Unsupported method 
attempted by 192.168.2.245: This is not a bug. see squid.conf 
extension_methods
2009/02/17 15:18:46| clientParseRequestMethod: Unsupported method in 
request 'NICK n[M-00-CRI-XP-14]___'



2009/02/21 20:36:29| clientParseRequestMethod: Unsupported method 
attempted by 192.168.2.241: This is not a bug. see squid.conf 
extension_methods
2009/02/21 20:36:29| clientParseRequestMethod: Unsupported method in 
request 'NICK [00|ESP|016294017]__'



Thanks in advance for your help.


http://www.squid-cache.org/mail-archive/squid-users/200812/0313.html

Chris




--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Squid fails

2009-03-07 Thread Wilson Hernandez - MSD, S. A.

Thank you Amos for your reply.

I downloaded version 3.0 and here how I built it:

Squid Cache: Version 3.0.STABLE12
 configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
 '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
 '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
 '--localstatedir=/var/log/squid'
 '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'

Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

Hello once again.

Here's my second problem I am experiencing with squid. Squid is 
running normally and after a while doesn't serve any pages; it gives 
the user an error regarding dns. I don't remember exactly, but it tells 
the user that it timed out trying to access the ip, yet that page 
(google.com) is used by many as their home page. I don't know why it is 
failing with some dns errors. I try doing a ping to the same address and 
the dns server resolves the ip.


What can be causing this to happen?


It's a DNS failure.

For better help we will need to know:
 * the version of squid you are using,
 * whether or not --disable-internal-dns was used to build it,
 * and what is the actual error page content given when things go wrong.

Amos


--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


[squid-users] Starting squid

2009-03-06 Thread Wilson Hernandez - MSD, S. A.

Hello.

I am experiencing some weird problems with squid. I was testing to see 
if squid would come up at boot time; sometimes it starts, sometimes it 
doesn't (this is problem number one of two). I have the following line in 
rc.local:


/usr/local/squid/sbin/squid -D

I am using the latest stable version 3.0 downloaded last week.

Squid Cache: Version 3.0.STABLE11
configure options:  '--prefix=/usr/local/squid' 
'--sysconfdir=/etc/squid' '--enable-delay-pools' 
'--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid' 
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'


What is the correct flag to run squid with?
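
Intermittent starts from rc.local are usually a race with the network 
coming up. A minimal sketch (the sleep value is an arbitrary guess):

# /etc/rc.local
sleep 10                          # give interfaces and DNS time to come up
/usr/local/squid/sbin/squid -D    # -D skips startup DNS tests that fail pre-network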



[squid-users] Squid fails

2009-03-06 Thread Wilson Hernandez - MSD, S. A.

Hello once again.

Here's my second problem I am experiencing with squid. Squid is running 
normally and after a while doesn't serve any pages; it gives the user an 
error regarding dns. I don't remember exactly, but it tells the user that 
it timed out trying to access the ip, yet that page (google.com) is 
used by many as their home page. I don't know why it is failing with some 
dns errors. I try doing a ping to the same address and the dns server 
resolves the ip.


What can be causing this to happen?


Re: [squid-users] Squid Crashes when cache dir fills

2009-02-26 Thread Wilson Hernandez - MSD, S. A.
The store system is actually aufs and the partition is 77GB. I don't 
have anything else other than iptables and squid installed on this system.


joost.deh...@getronics.com wrote:

I have cache_dir ufs /var/log/squid 6 255 255



with a 80GB harddrive.


- ufs is an old store system, aufs will probably give you better
performance.
- How is the inode usage on the disk?
- The disksize is irrelevant, the partition size of the partition where
/var/log/squid resides is relevant. If that partition is only 10G, then
this cache won't work.

Joost




--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Squid Crashes when cache dir fills

2009-02-26 Thread Wilson Hernandez - MSD, S. A.
I have approximately 75 users on my lan that continually use that 
server. I will consider your solutions and implement them and test how 
squid behaves.


So, what is the recommended cache size for a squid system running with 
over 75 users?


Thanks.

Angela Williams wrote:

Hi!
On Thursday 26 February 2009, Wilson Hernandez - MSD, S. A. wrote:

Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

I have cache_dir ufs /var/log/squid 60000 255 255


Your cache_dir line says create a squid cache directory structure 255 wide by 
255 deep and allow it to grow up to 60G. Squid will continue to try and store 
objects into the directory structure until either the 60G limit is reached or the 
filesystem - more like as not /var - runs out of space.
If it runs out of space squid is no longer able to write to the log files you 
configured in your squid.conf file. If it cannot write to the log file it 
cannot tell you what happened!


Do you want a nice simple solution?
Add a second hard drive and have squid create its 60G sized cache there.
Make certain logrotate is rotating the normal system logs  and asking squid to 
rotate its logs!


Use df and see just how much free space is really available in /var, if it is a 
filesystem in its own right.


Another thought?
How many users are trying to access the 'net via this proxy that you want to 
store 60G in a 256 by 256 structure? Last time I looked through the docs and 
faq's there was a good write up on sizing the cache - both hard and soft.
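
As a sketch of that sizing advice (the 80%-of-partition rule of thumb is 
a common recommendation, not a quote from the FAQ):

# dedicated 80G disk mounted at /cache0 -> leave headroom, cache ~60G
cache_dir aufs /cache0/squid 60000 16 256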


Cheers
Ang


--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Squid Crashes when cache dir fills

2009-02-25 Thread Wilson Hernandez - MSD, S. A.



Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

I have cache_dir ufs /var/log/squid 60000 255 255

with an 80GB hard drive.


So its probably not the cache dir filling up then.
It will be something else causing the system to use more than 20 GB for 
other stuff.


Logs or journaling maybe? Are they all rotating regularly?



I don't think so. But, I'll keep that in mind.

One other thing. As of late, the cache has stopped working without 
giving me any clue of why it did. I reviewed the cache.log file but 
can't figure out why it is crashing. Where else can I look for clues 
about why it is crashing?


[squid-users] Squid Crashes when cache dir fills

2009-02-24 Thread Wilson Hernandez - MSD, S. A.

Hello.

I have experienced some sort of crash with squid. I noticed that when 
the cache directory fills up, squid stops caching; it only allows 
communication through with the messenger, and users start getting the 
unable to redirect message in the browser.


If I delete the directory and create it again with squid -z then everything 
works fine.


[squid-users] Unsupported methods

2009-02-24 Thread Wilson Hernandez - MSD, S. A.

Hello.

I noticed a lot of unsupported method entries in cache.log, and they are 
filling the log with those types of messages. What type of methods are 
these? Can someone please explain or guide me to where I can better 
understand the extension methods and/or their types, and what they really 
are? So I can avoid having these types of logs:


2009/02/17 15:18:46| clientParseRequestMethod: Unsupported method 
attempted by 192.168.2.245: This is not a bug. see squid.conf 
extension_methods
2009/02/17 15:18:46| clientParseRequestMethod: Unsupported method in 
request 'NICK n[M-00-CRI-XP-14]___'



2009/02/21 20:36:29| clientParseRequestMethod: Unsupported method 
attempted by 192.168.2.241: This is not a bug. see squid.conf 
extension_methods
2009/02/21 20:36:29| clientParseRequestMethod: Unsupported method in 
request 'NICK [00|ESP|016294017]__'



Thanks in advance for your help.


Re: [squid-users] Squid Crashes when cache dir fills

2009-02-24 Thread Wilson Hernandez - MSD, S. A.

I have cache_dir ufs /var/log/squid 60000 255 255

with an 80GB hard drive.

Andrew Loughnan wrote:

Hi Wilson

Check that your cache_dir is not too big ? (cache_dir diskd /var/spool/squid 
2 32 256)

Let us know what you have as your configs for this?
If it runs out of space it will crash all the time 


Regards
Andrew Loughnan
Computer Services Manager
 
St Joseph's College 
135 Aphrasia Street Newtown Vic 3220

E: andr...@sjc.vic.edu.au
P/h: (03) 5226-8165 
M:  0412-523-011
Fax:(03) 5221-6983 


-Original Message-
From: Wilson Hernandez - MSD, S. A. [mailto:w...@msdrd.com] 
Sent: Tuesday, 24 February 2009 10:37 PM

To: squid-users@squid-cache.org
Subject: [squid-users] Squid Crashes when cache dir fills

Hello.

I have experienced some sort of crash with squid. I noticed that when 
the cache directory fills up, squid stops caching; it only allows 
communication through with the messenger, and users start getting the 
unable to redirect message in the browser.


If I delete the directory and create it again with squid -z then everything 
works fine.





--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Problem configure squid 3.1

2009-01-06 Thread Wilson Hernandez - MSD, S. A.


build-essential did it. It configured.

Thanks for your help.

Gregori Parker wrote:

I'm sorry, I meant apt-get install libc-dev (I'm obviously not a Debian
user)

I've also read that you may need the 'build-essential' package as well,
so you might want to try that


-Original Message-
From: Gregori Parker [mailto:gregori.par...@theplatform.com] 
Sent: Monday, January 05, 2009 4:33 PM

To: w...@msdrd.com
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Problem configure squid 3.1

Try 'apt-get libc-dev' and report back

-Original Message-
From: Wilson Hernandez - MSD, S. A. [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 6:01 PM

To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem configure squid 3.1

I already have it installed and it's still not working.

Gregori Parker wrote:

Sounds like you need a c++ compiler, do a 'apt-get gcc' (you're

running

debian IIRC)

-Original Message-
From: Wilson Hernandez [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 1:50 PM

To: squid-users@squid-cache.org
Subject: [squid-users] Problem configure squid 3.1

Hello.
Me again.

It seems that everything I try to do can't go smoothly. Now, I'm trying 
to get squid-3.1.0.3 installed on my system, trying to upgrade from an 
older version, but now I've come across a problem: when I run ./configure
I get the following error (I searched the internet but can't find a 
solution):


checking for C++ compiler default output file name...
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
configure: error: ./configure failed for lib/libTrie

I removed the previous squid version which was installed as a package.

Please help.

Thanks.







--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


[squid-users] Problem configure squid 3.1

2009-01-05 Thread Wilson Hernandez

Hello.
Me again.

It seems that everything I try to do can't go smoothly. Now, I'm trying 
to get squid-3.1.0.3 installed on my system, trying to upgrade from an 
older version, but now I've come across a problem: when I run ./configure
I get the following error (I searched the internet but can't find a 
solution):


checking for C++ compiler default output file name...
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
configure: error: ./configure failed for lib/libTrie

I removed the previous squid version which was installed as a package.

Please help.

Thanks.



Re: [squid-users] Problem configure squid 3.1

2009-01-05 Thread Wilson Hernandez - MSD, S. A.

I already have it installed and it's still not working.

Gregori Parker wrote:

Sounds like you need a c++ compiler, do a 'apt-get gcc' (you're running
debian IIRC)

-Original Message-
From: Wilson Hernandez [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 1:50 PM

To: squid-users@squid-cache.org
Subject: [squid-users] Problem configure squid 3.1

Hello.
Me again.

It seems that everything I try to do can't go smoothly. Now, I'm trying 
to get squid-3.1.0.3 installed on my system, trying to upgrade from an 
older version, but now I've come across a problem: when I run ./configure
I get the following error (I searched the internet but can't find a 
solution):


checking for C++ compiler default output file name...
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
configure: error: ./configure failed for lib/libTrie

I removed the previous squid version which was installed as a package.

Please help.

Thanks.





--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] More access.log questions

2008-12-17 Thread Wilson Hernandez - MSD, S. A.
 swapin MD5 mismatches
2008/12/15 22:18:39| ctx: enter level  0:
'http://74.125.93.104/translate_c?hl=es&sl=en&u=http://nov1.m.yahoo.net/nov0/Gftp6CpGg29dN5ckuR3PKQ__/1228216588/nov_ses_id033_tl5iz1xll0pimbet/www.kqzyfj.com/click-886648-2202641&prev=/search%3Fq%3Dwww.ebey.com%26hl%3Des%26lr%3D%26client%3Dfirefox-a%26channel%3Ds%26rls%3Dorg.mozilla:es-ES:official%26hs%3DTNk%26sa%3DG&usg=ALkJrhjCucbvExTxNxW7LnRua07fpKTBmg'

2008/12/15 22:18:39| ctx: enter level  1:
'http://74.125.93.104/translate_c?hl=es&sl=en&u=http://nov1.m.yahoo.net/nov0/Gftp6CpGg29dN5ckuR3PKQ__/1228216588/nov_ses_id033_tl5iz1xll0pimbet/www.kqzyfj.com/click-886648-2202641&prev=/search%3Fq%3Dwww.ebey.com%26hl%3Des%26lr%3D%26client%3Dfirefox-a%26channel%3Ds%26rls%3Dorg.mozilla:es-ES:official%26hs%3DTNk%26sa%3DG&usg=ALkJrhjCucbvExTxNxW7LnRua07fpKTBmg'

2008/12/15 22:18:39| HttpStateData::cacheableReply: unknown http status code in reply
2008/12/15 23:14:48| ctx: exit levels from  1 down to  0
2008/12/15 23:14:48| WARNING: 10 swapin MD5 mismatches
2008/12/15 23:14:58| assertion failed: client_side.cc:2479:
conn->in.abortedSize == (size_t)conn->bodySizeLeft()


Squid crashes again on same invalid request received causing memory 
corruption.


... and the cycle continues.

Amos


--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] More access.log questions

2008-12-17 Thread Wilson Hernandez - MSD, S. A.
.
2008/12/15 22:07:13|      0 With invalid flags.
2008/12/15 22:07:13| 268719 Objects loaded.
2008/12/15 22:07:13|      0 Objects expired.
2008/12/15 22:07:13|    185 Objects cancelled.
2008/12/15 22:07:13|   1788 Duplicate URLs purged.
2008/12/15 22:07:13|     26 Swapfile clashes avoided.
2008/12/15 22:07:13|   Took 41.2 seconds (6517.7 objects/sec).
2008/12/15 22:07:13| Beginning Validation Procedure
2008/12/15 22:07:13|   262144 Entries Validated so far.
2008/12/15 22:07:13| storeLateRelease: released 2 objects
2008/12/15 22:07:14|   524288 Entries Validated so far.
2008/12/15 22:07:14|   Completed Validation Procedure
2008/12/15 22:07:14|   Validated 533992 Entries
2008/12/15 22:07:14|   store_swap_size = 15636412
2008/12/15 22:14:36| WARNING: 1 swapin MD5 mismatches
2008/12/15 22:18:39| ctx: enter level  0:
'http://74.125.93.104/translate_c?hl=es&sl=en&u=http://nov1.m.yahoo.net/nov0/Gftp6CpGg29dN5ckuR3PKQ__/1228216588/nov_ses_id033_tl5iz1xll0pimbet/www.kqzyfj.com/click-886648-2202641&prev=/search%3Fq%3Dwww.ebey.com%26hl%3Des%26lr%3D%26client%3Dfirefox-a%26channel%3Ds%26rls%3Dorg.mozilla:es-ES:official%26hs%3DTNk%26sa%3DG&usg=ALkJrhjCucbvExTxNxW7LnRua07fpKTBmg'

2008/12/15 22:18:39| ctx: enter level  1:
'http://74.125.93.104/translate_c?hl=es&sl=en&u=http://nov1.m.yahoo.net/nov0/Gftp6CpGg29dN5ckuR3PKQ__/1228216588/nov_ses_id033_tl5iz1xll0pimbet/www.kqzyfj.com/click-886648-2202641&prev=/search%3Fq%3Dwww.ebey.com%26hl%3Des%26lr%3D%26client%3Dfirefox-a%26channel%3Ds%26rls%3Dorg.mozilla:es-ES:official%26hs%3DTNk%26sa%3DG&usg=ALkJrhjCucbvExTxNxW7LnRua07fpKTBmg'

2008/12/15 22:18:39| HttpStateData::cacheableReply: unknown http status code in reply
2008/12/15 23:14:48| ctx: exit levels from  1 down to  0
2008/12/15 23:14:48| WARNING: 10 swapin MD5 mismatches
2008/12/15 23:14:58| assertion failed: client_side.cc:2479:
conn->in.abortedSize == (size_t)conn->bodySizeLeft()

Squid crashes again on same invalid request received causing memory
corruption.

... and the cycle continues.

Amos

--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente








--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Squid3 just Died

2008-12-10 Thread Wilson Hernandez - MSD, S. A.

Amos,

I checked all the squid3 directories:
squid3: /usr/sbin/squid3 /etc/squid3 /usr/lib/squid3 /usr/share/squid3

and didn't find any core.* files. I even did a

/etc/squid3# whereis core
core: /usr/share/man/man5/core.5.gz

and it didn't find anything significant.

I installed squid3 on debian with apt-get install squid3. Looks like 
debian doesn't have a current version of squid3 other than PRE5. I will 
download it from the website and install it from source.


I don't think the logging drive was running out of space because it's a 
pretty big drive and partition. What else can make squid act like 
this?
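
If it dies again, a core file is the most useful evidence. One way to get 
them, as a sketch (the coredump_dir path is an assumption; it just needs 
to be writable by the squid user):

ulimit -c unlimited        # in the shell or init script that starts squid
# and in squid.conf:
coredump_dir /var/spool/squid3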




Amos Jeffries wrote:

Wilson Hernandez - MSD, S. A. wrote:

Pieter,

I had to delete my cache. I didn't get any answers from the list and I 
had a lot of people calling.


I did:
rm -r /squid/cache/*

it took over 30 minutes.
Then,
restarted squid (/etc/init.d/squid3 start) and it was doing the same: 
It started and it just died without giving any errors.


I then decided to also delete, actually rename, the access.log and
store.log files. This time when I restarted, it worked. I had to do it 
that way because that was the only solution I found. If there is 
another way to restore all the previous objects please let me know.


One more thing, What could have caused that problem? Will this happen 
again in a couple of months?


Hmm, if it was the logging drive running out of space yes it might 
happen again. Check for core.* files in the squid home directory.


Amos



Thanks

Pieter De Wit wrote:
Well - something is killing it. It got a lot further than before; it 
stopped at 0.6% iirc last time?

On Tue, 09 Dec 2008 22:42:52 -0400, Wilson Hernandez - MSD, S. A.
[EMAIL PROTECTED] wrote:
That i486 thing just might have been the original kernel. I don't 
know why it says i486.


I ran tail -f /var/log/squid/cache.log and noticed that squid tries 
to rebuild the cache, it stops and restarts again:


Store rebuilding is 10.1% complete

I don't want to delete my cache. That's the only solution I've 
found on the internet:


1) Shutdown your squid server
squid -k shutdown

2) Remove the cache directory
rm -r /squid/cache/*

3) Re-Create the squid cache directory
squid -z

4) Start the squid

My cache is pretty big and it would take a while to delete all the 
stuff in there. Also, I will lose all that data from months of 
objects...


Pieter De Wit wrote:

Hi,

Might be totally off here, but I noted your swap size is large. Could it 
be that the cache has more objects (in count and in byte count?) than can 
fit into a 32-bit counter?

I got to this by seeing that it crashes at the cache rebuild 
section as

well as the fact the the build is i486.

Like I said, might be *way* off but hey :)

Cheers,

Pieter
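A quick way to test the 32-bit part of that hypothesis, sketched with standard tools; the binary path is the Debian one mentioned earlier in the thread:

file /usr/sbin/squid3     # reports "ELF 32-bit ... Intel 80386" for an i486 build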

On Tue, 09 Dec 2008 19:04:54 -0700, [EMAIL PROTECTED] wrote:

Hello.

I came across something weird. Squid3 just stopped working; it just dies
without any error message. My server was running as usual and all of a
sudden users weren't getting internet. I checked if all the normal
processes were running and noticed squid wasn't. Now, I try to start the
server and it starts and dies after a few seconds. Here's part of the
cache.log file:

2008/12/09 22:03:07| Starting Squid Cache version 3.0.PRE5 for
i486-pc-linux-gnu...
2008/12/09 22:03:07| Process ID 4063
2008/12/09 22:03:07| With 1024 file descriptors available
2008/12/09 22:03:07| DNS Socket created at 0.0.0.0, port 33054, FD 8
2008/12/09 22:03:07| Adding nameserver 200.42.213.11 from squid.conf
2008/12/09 22:03:07| Adding nameserver 200.42.213.21 from squid.conf
2008/12/09 22:03:07| Unlinkd pipe opened on FD 13
2008/12/09 22:03:07| Swap maxSize 102400000 KB, estimated 7876923 objects
2008/12/09 22:03:07| Target number of buckets: 393846
2008/12/09 22:03:07| Using 524288 Store buckets
2008/12/09 22:03:07| Max Mem  size: 102400 KB
2008/12/09 22:03:07| Max Swap size: 102400000 KB
2008/12/09 22:03:07| Rebuilding storage in /var/log/squid/cache (DIRTY)

2008/12/09 22:03:07| Using Least Load store dir selection
2008/12/09 22:03:07| Current Directory is /
2008/12/09 22:03:07| Loaded Icons.
2008/12/09 22:03:07| Accepting transparently proxied HTTP connections at
192.168.2.1, port 3128, FD 15.
2008/12/09 22:03:07| HTCP Disabled.
2008/12/09 22:03:07| WCCP Disabled.
2008/12/09 22:03:07| Ready to serve requests.
2008/12/09 22:03:11| Starting Squid Cache version 3.0.PRE5 for
i486-pc-linux-gnu...
2008/12/09 22:03:11| Process ID 4066
2008/12/09 22:03:11| With 1024 file descriptors available
2008/12/09 22:03:11| DNS Socket created at 0.0.0.0, port 33054, FD 8
2008/12/09 22:03:11| Adding nameserver 200.42.213.11 from squid.conf
2008/12/09 22:03:11| Adding nameserver 200.42.213.21 from squid.conf
2008/12/09 22:03:11| Unlinkd pipe opened on FD 13
2008/12/09 22:03:11| Swap maxSize 102400000 KB, estimated 7876923 objects
2008/12/09 22:03:11| Target number of buckets: 393846
2008/12/09 22:03:11| Using 524288 Store buckets
2008/12/09 22:03:11| Max Mem

Re: [squid-users] Squid3 just Died

2008-12-09 Thread Wilson Hernandez - MSD, S. A.


Please help. Your help will be appreciated.

Thank you in advance.






--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com
Conservando el medio ambiente


Re: [squid-users] Squid3 just Died

2008-12-09 Thread Wilson Hernandez - MSD, S. A.

Pieter,

I had to delete my cache. I didn't get any answers from the list and I 
had a lot of people calling.


I did:
rm -r /squid/cache/*

It took over 30 minutes.
Then,
restarted squid (/etc/init.d/squid3 start) and it was doing the same: It 
started and it just died without giving any errors.


I then decided to also delete (actually rename) the access.log and store.log 
files. This time when I restarted, it worked. I had to do it that way 
because that was the only solution I found. If there is another way to 
restore all the previous objects please let me know.


One more thing: what could have caused that problem? Will this happen 
again in a couple of months?


Thanks

Pieter De Wit wrote:

Well - something is killing it. It got a lot further than before; it stopped
at 0.6% iirc last time?

On Tue, 09 Dec 2008 22:42:52 -0400, Wilson Hernandez - MSD, S. A.
[EMAIL PROTECTED] wrote:
That i486 thing just might have been the original kernel. I don't know 
why it says i486.


I ran tail -f /var/log/squid/cache.log and noticed that squid tries to 
rebuild the cache, then it stops and restarts again:


Store rebuilding is 10.1% complete

I don't want to delete my cache. That's the only solution I've found on 
the internet:


1) Shutdown your squid server
squid -k shutdown

2) Remove the cache directory
rm -r /squid/cache/*

3) Re-Create the squid cache directory
squid -z

4) Start the squid

My cache is pretty big and it would take a while to delete all the stuff 
in there. Also, I will lose all that data from months of objects...


Pieter De Wit wrote:

Hi,

Might be totally off here, but I noted your swap size is large. Could it
be that the cache has more objects (in count and in byte count?) than can
fit into a 32-bit counter?

I got to this by seeing that it crashes at the cache rebuild section as
well as the fact that the build is i486.

Like I said, might be *way* off but hey :)

Cheers,

Pieter

On Tue, 09 Dec 2008 19:04:54 -0700, [EMAIL PROTECTED] wrote:

Hello.

I came across something weird. Squid3 just stopped working; it just dies
without any error message. My server was running as usual and all of a
sudden users weren't getting internet. I checked if all the normal
processes were running and noticed squid wasn't. Now, I try to start the
server and it starts and dies after a few seconds. Here's part of the
cache.log file:

2008/12/09 22:03:07| Starting Squid Cache version 3.0.PRE5 for
i486-pc-linux-gnu...
2008/12/09 22:03:07| Process ID 4063
2008/12/09 22:03:07| With 1024 file descriptors available
2008/12/09 22:03:07| DNS Socket created at 0.0.0.0, port 33054, FD 8
2008/12/09 22:03:07| Adding nameserver 200.42.213.11 from squid.conf
2008/12/09 22:03:07| Adding nameserver 200.42.213.21 from squid.conf
2008/12/09 22:03:07| Unlinkd pipe opened on FD 13
2008/12/09 22:03:07| Swap maxSize 102400000 KB, estimated 7876923 objects
2008/12/09 22:03:07| Target number of buckets: 393846
2008/12/09 22:03:07| Using 524288 Store buckets
2008/12/09 22:03:07| Max Mem  size: 102400 KB
2008/12/09 22:03:07| Max Swap size: 102400000 KB
2008/12/09 22:03:07| Rebuilding storage in /var/log/squid/cache (DIRTY)
2008/12/09 22:03:07| Using Least Load store dir selection
2008/12/09 22:03:07| Current Directory is /
2008/12/09 22:03:07| Loaded Icons.
2008/12/09 22:03:07| Accepting transparently proxied HTTP connections at
192.168.2.1, port 3128, FD 15.
2008/12/09 22:03:07| HTCP Disabled.
2008/12/09 22:03:07| WCCP Disabled.
2008/12/09 22:03:07| Ready to serve requests.
2008/12/09 22:03:11| Starting Squid Cache version 3.0.PRE5 for
i486-pc-linux-gnu...
2008/12/09 22:03:11| Process ID 4066
2008/12/09 22:03:11| With 1024 file descriptors available
2008/12/09 22:03:11| DNS Socket created at 0.0.0.0, port 33054, FD 8
2008/12/09 22:03:11| Adding nameserver 200.42.213.11 from squid.conf
2008/12/09 22:03:11| Adding nameserver 200.42.213.21 from squid.conf
2008/12/09 22:03:11| Unlinkd pipe opened on FD 13
2008/12/09 22:03:11| Swap maxSize 102400000 KB, estimated 7876923 objects
2008/12/09 22:03:11| Target number of buckets: 393846
2008/12/09 22:03:11| Using 524288 Store buckets
2008/12/09 22:03:11| Max Mem  size: 102400 KB
2008/12/09 22:03:11| Max Swap size: 102400000 KB
2008/12/09 22:03:11| Rebuilding storage in /var/log/squid/cache (DIRTY)
2008/12/09 22:03:11| Using Least Load store dir selection
2008/12/09 22:03:11| Current Directory is /
2008/12/09 22:03:11| Loaded Icons.
2008/12/09 22:03:11| Accepting transparently proxied HTTP connections at
192.168.2.1, port 3128, FD 15.
2008/12/09 22:03:11| HTCP Disabled.
2008/12/09 22:03:11| WCCP Disabled.
2008/12/09 22:03:11| Ready to serve requests.
2008/12/09 22:03:12| Store rebuilding is  0.6% complete
2008/12/09 22:03:17| Starting Squid Cache version 3.0.PRE5 for
i486-pc-linux-gnu...
2008/12/09 22:03:17| Process ID 4069
2008/12/09 22:03:17| With 1024 file descriptors available
2008/12/09 22:03:17| DNS Socket created at 0.0.0.0, port 33054, FD 8
2008/12/09 22:03:17| Adding

[squid-users] Disk space over limit Warning

2008-11-27 Thread Wilson Hernandez - MSD, S. A.

Hello;

I currently have a network with about 30 users and my swap space tends 
to fill up quite quickly. I increased the swap three weeks ago from:


#cache_dir ufs /var/log/squid/cache 5000 16 256
to
cache_dir ufs /var/log/squid/cache 10000 255 255

Now, I'm getting the same warning:

2008/11/27 13:59:00| WARNING: Disk space over limit: 10241036 KB > 10240000 KB


If I leave it as is, will I have problems in the future, or what should I 
change it to? What is a safe size for this?


Thank you in advance for all your help.
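For what it's worth, that warning usually means the cache is crossing its size limit faster than squid evicts objects. A minimal squid.conf sketch, pairing the 10000 MB cache_dir above with squid's eviction watermarks (the 90/95 values are squid's defaults, not numbers taken from this thread):

cache_dir ufs /var/log/squid/cache 10000 255 255
cache_swap_low  90    # begin evicting once the cache is 90% full
cache_swap_high 95    # evict aggressively above 95%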


Re: [squid-users] Disk space over limit Warning

2008-11-27 Thread Wilson Hernandez - MSD, S. A.

Yes. I did run squid -z and it created all the directories.

Paul Bertain wrote:

Hi Wilson,

Did you run squid -z after changing your settings? For them to take 
effect, I believe you need to run squid -z again.


Paul



On Nov 28, 2008, at 15:53, Wilson Hernandez - MSD, S. A. 
[EMAIL PROTECTED] wrote:



Hello;

I currently have a network with about 30 users and my swap space tends 
to fill up quite quickly. I increased the swap three weeks ago from:


#cache_dir ufs /var/log/squid/cache 5000 16 256
to
cache_dir ufs /var/log/squid/cache 10000 255 255

Now, I'm getting the same warning:

2008/11/27 13:59:00| WARNING: Disk space over limit: 10241036 KB > 10240000 KB


If I leave it as is, will I have problems in the future, or what should I 
change it to? What is a safe size for this?


Thank you in advance for all your help.





--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com
Conservando el medio ambiente


[squid-users] Re: [NoCat] Non-Authenticating Splash Page

2008-11-26 Thread Wilson Hernandez - MSD, S. A.

Colin,

I tried to add some php code but it doesn't get embedded; I guess it's 
because the gateway is run with perl. I don't know perl, though it would be 
easier to use perl instead of php in this case. How can I embed php code 
in the splash page to build a dynamic page?


Thanks again.



Colin A. White wrote:
Have you tried editing the splash page to embed your own content? And 
changing the submit button to say Dismiss or Ignore?




On Nov 25, 2008, at 4:55 PM, Wilson Hernandez - MSD, S. A. 
[EMAIL PROTECTED] wrote:




I am currently running Nocat in open mode but the only way it would
work in open mode is if I have a splash page with a submit button
accepting the agreement (see below). I don't really want users to do
that. All I want is a page to show up every 60 minutes and let users click
on the splash page's contents or proceed with whatever they were doing.

<form method="GET" action="$action">
   <input type="hidden" name="redirect" value="$redirect">
   <input type="hidden" name="accept_terms" value="yes">
   <input type="hidden" name="mode_login">
   <input type="submit" class="button" value="Enter">
</form>


Thanks for replying.



Colin White wrote:

Set the timeout to 60 mins and run the gateway in Open mode.
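From memory of NoCatAuth's sample config, that advice maps to roughly the following nocat.conf lines; treat the directive names as assumptions to verify against the nocat.conf shipped with your gateway:

GatewayMode   Open
LoginTimeout  3600    # seconds, i.e. re-splash roughly every 60 minutes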
 On Tue, Nov 25, 2008 at 10:47 AM, Wilson Hernandez - MSD, S. A. 
[EMAIL PROTECTED] wrote:

   Hello.
   I would like to know if there is a way of redirecting users to a
    splash page every hour and have the user continue browsing the
    internet without authenticating or accepting a user agreement?
   Thanks.
   -- ___
   NoCat mailing list
    [EMAIL PROTECTED]
   http://lists.nocat.net/mailman/listinfo/nocat
--
Colin A. White
P : +1 605 940 5863



___
NoCat mailing list
[EMAIL PROTECTED]
http://lists.nocat.net/mailman/listinfo/nocat





--
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com
Conservando el medio ambiente


[squid-users] Where are the objects?

2008-11-22 Thread Wilson Hernandez - MSD, S. A.


I have a question regarding the storage of objects: where are they really 
saved on disk? Is it in /var/log/squid3/access.log or in 
/var/log/squid3/store.log?


I've noticed that in the /var/log/squid3/cache swaplog there aren't any 
entries, as stated below. How exactly are objects saved?
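For context, neither log holds the objects themselves: access.log and store.log only record transactions, while with a ufs cache_dir the objects live as numbered files under hashed subdirectories, indexed by a swap.state file. A sketch of where to look, assuming the cache_dir from the log below:

ls /var/log/squid3/cache          # L1 subdirectories (00/ 01/ ...) plus swap.state
ls /var/log/squid3/cache/00/00    # one numbered file per cached object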


2008/11/20 08:49:53| Starting Squid Cache version 3.0.PRE5 for 
i486-pc-linux-gnu...

2008/11/20 08:49:53| Process ID 2789
2008/11/20 08:49:53| With 1024 file descriptors available
2008/11/20 08:49:53| DNS Socket created at 0.0.0.0, port 32794, FD 8
2008/11/20 08:49:53| Adding nameserver 200.88.127.22 from squid.conf
2008/11/20 08:49:53| Adding nameserver 196.3.81.5 from squid.conf
2008/11/20 08:49:53| Adding nameserver 200.88.127.23 from squid.conf
2008/11/20 08:49:53| Adding nameserver 196.3.81.182 from squid.conf
2008/11/20 08:49:53| Unlinkd pipe opened on FD 13
2008/11/20 08:49:53| Swap maxSize 10240000 KB, estimated 787692 objects
2008/11/20 08:49:53| Target number of buckets: 39384
2008/11/20 08:49:53| Using 65536 Store buckets
2008/11/20 08:49:53| Max Mem  size: 102400 KB
2008/11/20 08:49:53| Max Swap size: 10240000 KB
2008/11/20 08:49:53| Rebuilding storage in /var/log/squid3/cache (CLEAN)
2008/11/20 08:49:53| Using Least Load store dir selection
2008/11/20 08:49:53| Current Directory is /
2008/11/20 08:49:53| Loaded Icons.
2008/11/20 08:49:53| Accepting transparently proxied HTTP connections at 
192.168.2.1, port 3128, FD 14.

2008/11/20 08:49:53| HTCP Disabled.
2008/11/20 08:49:53| WCCP Disabled.
2008/11/20 08:49:53| Ready to serve requests.
2008/11/20 08:50:13| Done scanning /var/log/squid3/cache swaplog (0 entries)
2008/11/20 08:50:13| Finished rebuilding storage from disk.
2008/11/20 08:50:13| 0 Entries scanned
2008/11/20 08:50:13| 0 Invalid entries.
2008/11/20 08:50:13| 0 With invalid flags.
2008/11/20 08:50:13| 0 Objects loaded.
2008/11/20 08:50:13| 0 Objects expired.
2008/11/20 08:50:13| 0 Objects cancelled.
2008/11/20 08:50:13| 0 Duplicate URLs purged.
2008/11/20 08:50:13| 0 Swapfile clashes avoided.
2008/11/20 08:50:13|   Took 19.7 seconds (   0.0 objects/sec).
2008/11/20 08:50:13| Beginning Validation Procedure
2008/11/20 08:50:13|   Completed Validation Procedure
2008/11/20 08:50:13|   Validated 25 Entries
2008/11/20 08:50:13|   store_swap_size = 0
2008/11/20 08:50:13| storeLateRelease: released 0 objects
2008/11/20 08:50:41| Preparing for shutdown after 0 requests
2008/11/20 08:50:41| Waiting 30 seconds for active connections to finish
2008/11/20 08:50:41| FD 14 Closing HTTP connection
2008/11/20 08:51:12| Shutting down...
2008/11/20 08:51:12| Closing unlinkd pipe on FD 13
2008/11/20 08:51:12| storeDirWriteCleanLogs: Starting...
2008/11/20 08:51:12|   Finished.  Wrote 0 entries.
2008/11/20 08:51:12|   Took 0.0 seconds (   0.0 entries/sec).
CPU Usage: 0.036 seconds = 0.024 user + 0.012 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:    2736 KB
Ordinary blocks:         2715 KB      9 blks
Small blocks:               0 KB      0 blks
Holding blocks:          1500 KB      7 blks
Free Small blocks:          0 KB
Free Ordinary blocks:      20 KB
Total in use:            4215 KB 154%
Total free:                20 KB 1%
2008/11/20 08:51:12| Squid Cache (Version 3.0.PRE5): Exiting normally.
2008/11/20 08:51:18| Starting Squid Cache version 3.0.PRE5 for 
i486-pc-linux-gnu...

2008/11/20 08:51:18| Process ID 2824
2008/11/20 08:51:18| With 1024 file descriptors available
2008/11/20 08:51:18| DNS Socket created at 0.0.0.0, port 32794, FD 8
2008/11/20 08:51:18| Adding nameserver 200.88.127.22 from squid.conf
2008/11/20 08:51:18| Adding nameserver 196.3.81.5 from squid.conf
2008/11/20 08:51:18| Adding nameserver 200.88.127.23 from squid.conf
2008/11/20 08:51:18| Adding nameserver 196.3.81.182 from squid.conf
2008/11/20 08:51:18| Unlinkd pipe opened on FD 13
2008/11/20 08:51:18| Swap maxSize 10240000 KB, estimated 787692 objects
2008/11/20 08:51:18| Target number of buckets: 39384
2008/11/20 08:51:18| Using 65536 Store buckets
2008/11/20 08:51:18| Max Mem  size: 102400 KB
2008/11/20 08:51:18| Max Swap size: 10240000 KB
2008/11/20 08:51:18| Rebuilding storage in /var/log/squid3/cache (CLEAN)
2008/11/20 08:51:18| Using Least Load store dir selection
2008/11/20 08:51:18| Current Directory is /
2008/11/20 08:51:18| Loaded Icons.
2008/11/20 08:51:18| Accepting transparently proxied HTTP connections at 
192.168.2.1, port 3128, FD 14.

2008/11/20 08:51:18| HTCP Disabled.
2008/11/20 08:51:18| WCCP Disabled.
2008/11/20 08:51:18| Ready to serve requests.



[squid-users] Squid very slow

2008-11-19 Thread Wilson Hernandez - MSD, S. A.
I am running Nocat along with squid3 and I am experiencing some 
problems:


Sometimes everything works fine but sometimes the system is extremely 
slow and I get the following error in the browser:


The requested URL could not be retrieved

While trying to retrieve the URL: 
http://us.mc625.mail.yahoo.com/mc/showFolder;_ylt=ArsEohpYUGGoVGGsFGqujqJjk70X?

The following error was encountered:

   Unable to determine IP address from host name for us.mc625.mail.yahoo.com 


The dnsserver returned:

   Refused: The name server refuses to perform the specified operation. 


This means that:

The cache was not able to resolve the hostname presented in the URL. 
Check if the address is correct. 


Your cache administrator is [EMAIL PROTECTED]
Generated Thu, 20 Nov 2008 01:07:31 GMT by localhost (squid/3.0.PRE5) 


--

I added my dns servers to squid.conf with the dns_nameservers directive.
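A one-line sketch of that directive; the resolver IPs here are the ones appearing in the cache.log excerpts earlier in this digest, so substitute your own:

dns_nameservers 200.88.127.22 196.3.81.5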