RE: [squid-users] First post

2010-11-06 Thread David Parks
Hi Luke, Squid is a proxy server: it relays traffic much as a broker handles
a transaction for a client, so the client never deals directly with the
seller.

It can cache data such as images so that when, for example, UserA visits a
website and UserB later visits the same site, the images and similar objects
don't need to be downloaded again; they are served from the local Squid. For
a single user, though, your browser already does this kind of caching.

There are other uses, but they are far more technical.

Try this Google search; I think it will get you going in the direction you
want to follow:
http://www.google.com/search?q=download+webpages+for+offline+viewing&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
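If you do end up wanting Squid to pre-load pages overnight (as your question
below asks), one common trick is a cron job that crawls the favourite sites
through the proxy so the cacheable objects are already local by morning. A
minimal sketch, assuming Squid listens on 127.0.0.1:3128 and the site list
lives in a hypothetical /etc/prefetch-urls.txt:

# /etc/cron.d/prefetch -- warm the Squid cache at 4am daily
0 4 * * * root http_proxy=http://127.0.0.1:3128 wget --recursive --level=1 \
  --delete-after --no-verbose --input-file=/etc/prefetch-urls.txt

Only objects the origin servers mark as cacheable will be served from the
cache afterwards, so results vary by site.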

David


-Original Message-
From: Luke [mailto:luke...@gmail.com] 
Sent: Saturday, November 06, 2010 12:34 PM
To: squid-users@squid-cache.org
Subject: [squid-users] First post

So I have never set up a Squid web cache before, but I think it is what I
need.  Let me explain:
My father and mother live out in the middle of nowhere and currently get
their internet access through a satellite service.  It reminds me of the old
days of dial-up.  My dad uses the internet to get most of his current events
and sports news.  He is very patient, but when I go there to visit it drives
me nuts.  I was wondering if Squid could do the following:
download his favorite news sites and their linked articles during the night,
so that when he gets up to read the morning news it is lightning fast.  Can
this be done?

Thanks

Luke Brown



Re: [squid-users] squid and ntlm without winbind

2010-11-06 Thread Kinkie
On Fri, Nov 5, 2010 at 3:26 PM, Maurizio Marini mau...@datalogica.com wrote:
 Hi there
[...]
 Samba is the PDC with an LDAP backend.
 Now I need to authenticate Squid against Samba on the same server. I cannot
 use winbind (winbind should only be used on a Samba domain member, shouldn't
 it?), so the following link:
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5
 is not useful. Or, rather: I tried to configure winbind using this wiki,
 with no success.

A domain controller is also a domain member; the same configuration
should apply.
You may want to detail what you did and what error messages you got, if any.
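For comparison, the winbind approach on that wiki page boils down to pointing
Squid at Samba's ntlm_auth helper. A minimal squid.conf sketch (the helper
path is assumed; adjust to your build):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy
acl AuthUsers proxy_auth REQUIRED
http_access allow AuthUsers

Note that the Squid runtime user must be able to read winbindd's privileged
pipe (usually /var/lib/samba/winbindd_privileged) for the NTLM helper to work.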

-- 
    /kinkie


[squid-users] Squid 3.0 icap HIT

2010-11-06 Thread Luis Enrique Sanchez Arce

When Squid resolves a resource from its cache, it does not send the response
to the ICAP server. How can I change this behavior?

I use Squid 3.0.STABLE8 and GreasySpoon (an implementation of the ICAP protocol).
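Some background that may explain what you are seeing: Squid 3.0 only offers
pre-cache ICAP vectoring points, so an object is adapted once, when it first
enters the cache, and a HIT is then served without contacting the ICAP server
again. A typical 3.0-era RESPMOD configuration looks like this (the service
URI is illustrative, not GreasySpoon's actual default):

icap_enable on
icap_service gs_resp respmod_precache 0 icap://127.0.0.1:1344/response
icap_class gs_class gs_resp
icap_access gs_class allow all

If every response must pass through the ICAP service, the only workaround in
3.0 is to stop caching the affected URLs (a "cache deny" rule), trading hit
performance for adaptation.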


[squid-users] squid3 timeouts after bootup

2010-11-06 Thread William Montgomery

I am using squid3 (Version 3.0.STABLE8) on Debian Lenny.  Squid gets
ERR_CONNECT_FAIL after the initial power-up on my machine, but if I do a
restart (/etc/init.d/squid3 restart) then all is well.  Excerpts from
cache.log below:

Right after boot up...
2010/11/06 16:43:36.140| The request GET http://yahoo.com/ is ALLOWED, because it
2010/11/06 16:43:36.140| storeCreateEntry: 'http://yahoo.com/'
2010/11/06 16:43:36.140| storeKeyPrivate: GET http://yahoo.com/
2010/11/06 16:43:36.140| FwdState::start() 'http://yahoo.com/'
2010/11/06 16:43:36.140| peerSelect: http://yahoo.com/
2010/11/06 16:43:36.141| peerSelectFoo: 'GET yahoo.com'
2010/11/06 16:43:36.141| peerSelectFoo: 'GET yahoo.com'
2010/11/06 16:43:36.141| peerSelectCallback: http://yahoo.com/
2010/11/06 16:43:36.141| fwdStartComplete: http://yahoo.com/
2010/11/06 16:43:36.141| fwdConnectStart: http://yahoo.com/
2010/11/06 16:43:36.141| fd_open FD 16 http://yahoo.com/
2010/11/06 16:43:36.141| commConnectStart: FD 16, data 0xa4f5360, yahoo.com:80
2010/11/06 16:43:36.141| idnsALookup: buf is 27 bytes for yahoo.com, id = 0xb694

2010/11/06 16:44:36.004| fwdConnectTimeout: FD 16: 'http://yahoo.com/'
2010/11/06 16:44:36.004| fwdFail: ERR_CONNECT_FAIL Gateway Time-out
http://yahoo.com/

Right after a restart - everything works fine...
2010/11/06 16:46:59.116| The request GET http://yahoo.com/ is ALLOWED, because i
2010/11/06 16:46:59.116| storeCreateEntry: 'http://yahoo.com/'
2010/11/06 16:46:59.116| storeKeyPrivate: GET http://yahoo.com/
2010/11/06 16:46:59.116| FwdState::start() 'http://yahoo.com/'
2010/11/06 16:46:59.116| peerSelect: http://yahoo.com/
2010/11/06 16:46:59.116| peerSelectFoo: 'GET yahoo.com'
2010/11/06 16:46:59.117| peerSelectFoo: 'GET yahoo.com'
2010/11/06 16:46:59.117| peerSelectCallback: http://yahoo.com/
2010/11/06 16:46:59.117| fwdStartComplete: http://yahoo.com/
2010/11/06 16:46:59.117| fwdConnectStart: http://yahoo.com/
2010/11/06 16:46:59.117| fd_open FD 16 http://yahoo.com/
2010/11/06 16:46:59.117| commConnectStart: FD 16, data 0x9350b40, yahoo.com:80
2010/11/06 16:46:59.117| idnsALookup: buf is 27 bytes for yahoo.com, id = 0xd694

2010/11/06 16:46:59.149| ipcacheCycleAddr: yahoo.com now at 209.191.122.70
2010/11/06 16:46:59.250| fwdConnectDone: FD 16: 'http://yahoo.com/'

This should not be necessary.  Any ideas about why this happens?

Regards,
Wm


[squid-users] possible bug on 2.7S9

2010-11-06 Thread Leonardo Rodrigues


Hi,

I'll try to describe, in as much detail as I can, what I think is
something like a forwarding-loop-detection bug on 2.7S9.


I have Squid 2.7S9 running on a CentOS 5.5 x64 box which has 4
NICs: 3 NICs are for internal networks (192.168.x) and 1 NIC is for the
internet (189.73.x.x). It was built with:


[r...@firewall squid]# squid -v
Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/usr' '--exec-prefix=/usr/bin' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--libexecdir=/usr/bin' 
'--sysconfdir=/etc/squid' '--datadir=/var/squid' '--localstatedir=/var' 
'--enable-removal-policies=heap,lru' '--enable-storeio=ufs,aufs,null' 
'--enable-delay-pools' '--enable-http-violations' '--with-maxfd=8192' 
'--enable-async-io=8' '--enable-err-languages=Portuguese English' 
'--enable-default-err-language=Portuguese' '--enable-snmp' 
'--disable-ident-lookups' '--enable-linux-netfilter' 
'--enable-auth=basic digest ntlm negotiate' 
'--enable-basic-auth-helpers=DB LDAP NCSA SMB' 
'--enable-digest-auth-helpers=password ldap' 
'--enable-external-acl-helpers=ip_user ldap_group session wbinfo_group' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-ntlm-auth-helpers=fakeauth no_check' '--enable-useragent-log' 
'--enable-referer-log' '--disable-wccp' '--disable-wccpv2' 
'--enable-arp-acl' '--with-large-files' '--enable-large-cache-files' 
'--enable-ssl' '--enable-icmp'



I've set up Squid with something like:

acl localhost src 127.0.0.1/255.255.255.255
acl localhost_to dst 127.0.0.1/255.255.255.255

acl network1 src 192.168.1.0/255.255.255.0
acl network1_to dst 192.168.1.0/255.255.255.0

acl network2 src 192.168.2.0/255.255.255.0
acl network2_to dst 192.168.2.0/255.255.255.0

acl network3 src 192.168.3.0/255.255.255.0
acl network3_to dst 192.168.3.0/255.255.255.0

http_port 8080 transparent
http_port 3128 transparent

tcp_outgoing_address 127.0.0.1 localhost_to
tcp_outgoing_address 192.168.1.1 network1_to
tcp_outgoing_address 192.168.2.1 network2_to
tcp_outgoing_address 192.168.3.1 network3_to
tcp_outgoing_address 189.73.x.x all



The config is OK; it runs just fine.

The problem is that, on a given day, Squid stops responding to new
connections and I have to stop it (service squid stop). After searching the
logs, I found some interesting requests:



1288136326.944  48437 192.168.2.15 TCP_MISS/000 0 GET 
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -
1288136326.944  48426 127.0.0.1 TCP_MISS/000 0 GET 
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -

(and this second line repeated about 13000 times)

and while this was happening, I also got this in cache.log:

2010/10/26 21:37:59| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:15| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:31| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:48| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:04| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:20| WARNING! Your cache is running out of filedescriptors

I'm running with 8192 filedescriptors on a 150-client network;
that's more than enough filedescriptors for normal usage.


(from cache.log)
2010/10/31 12:27:50| Starting Squid Cache version 2.7.STABLE9 for 
x86_64-unknown-linux-gnu...

2010/10/31 12:27:50| Process ID 16093
2010/10/31 12:27:50| With 8192 file descriptors available


Well, after finding that, I tried to reproduce it by making some
requests to localhost:8080 (Squid's port 8080), and I could reproduce it
every time with the above squid.conf configuration.


After some experimenting, I found that:

1) removing the:
tcp_outgoing_address 127.0.0.1 localhost_to

avoids the problem and makes the forwarding-loop detection
work fine


2) removing the transparent from
http_port 8080 transparent

avoids the problem too, even with the tcp_outgoing_address
127.0.0.1 line active



The question is: should Squid NOT detecting this forwarding loop be
expected with this combination of transparent and tcp_outgoing_address?
Are we talking about a bug, or about expected behavior? Is there any other
information I could provide to help track this down?





--


Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

My SPAM trap, do NOT send email to it:
gertru...@solutti.com.br






[squid-users] Re: This cache is currently building its digest.

2010-11-06 Thread david robertson
Anyone have any ideas?

On Wednesday, November 3, 2010, david robertson d...@nevernet.com wrote:
 Hello, I'm having a cache-digest related issue that I'm hoping someone
 here can help me with.

 I've got a few frontend servers, which talk to a handful of backend
 servers.  Everything is working swimmingly, with the exception of
 cache digests.

 The digests used to work without issue, but suddenly all of my backend
 servers have stopped building their digests.  They all say "This cache
 is currently building its digest." when you try to access the digest.
 It's as if the digest rebuild never finishes.  Nothing has changed
 with my configuration, and all of the backends (6 of them) have
 started doing this at roughly the same time.

 My first thought would be cache corruption, but I've reset all of the
 caches, and the issue still persists.

 Any ideas?


 Squid Cache: Version 2.7.STABLE9
 configure options:  '--prefix=/squid2' '--enable-async-io'
 '--enable-icmp' '--enable-useragent-log' '--enable-snmp'
 '--enable-cache-digests' '--enable-follow-x-forwarded-for'
 '--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
 '--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
 '--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
 -O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
 -fno-exceptions'



Re: [squid-users] squid3 timeouts after bootup

2010-11-06 Thread Amos Jeffries

On 07/11/10 11:25, William Montgomery wrote:

I am using squid3 (Version 3.0.STABLE8) on Debian Lenny.  Squid gets
ERR_CONNECT_FAIL after the initial power-up on my machine, but if I do a
restart (/etc/init.d/squid3 restart) then all is well. Excerpts from
cache.log below:


Probably one of the two versions of this:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=600521

Please try the backports squid3 package. If that resolves the problem 
please report the info in the Debian bugzilla.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] possible bug on 2.7S9

2010-11-06 Thread Amos Jeffries

On 07/11/10 13:21, Leonardo Rodrigues wrote:


Hi,

I'll try to describe, in as much detail as I can, what I think is
something like a forwarding-loop-detection bug on 2.7S9.

I have Squid 2.7S9 running on a CentOS 5.5 x64 box which has 4 NICs: 3
NICs are for internal networks (192.168.x) and 1 NIC is for the internet
(189.73.x.x). It was built with:

[r...@firewall squid]# squid -v
Squid Cache: Version 2.7.STABLE9
configure options: '--prefix=/usr' '--exec-prefix=/usr/bin'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--libexecdir=/usr/bin'
'--sysconfdir=/etc/squid' '--datadir=/var/squid' '--localstatedir=/var'
'--enable-removal-policies=heap,lru' '--enable-storeio=ufs,aufs,null'
'--enable-delay-pools' '--enable-http-violations' '--with-maxfd=8192'
'--enable-async-io=8' '--enable-err-languages=Portuguese English'
'--enable-default-err-language=Portuguese' '--enable-snmp'
'--disable-ident-lookups' '--enable-linux-netfilter'
'--enable-auth=basic digest ntlm negotiate'
'--enable-basic-auth-helpers=DB LDAP NCSA SMB'
'--enable-digest-auth-helpers=password ldap'
'--enable-external-acl-helpers=ip_user ldap_group session wbinfo_group'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-ntlm-auth-helpers=fakeauth no_check' '--enable-useragent-log'
'--enable-referer-log' '--disable-wccp' '--disable-wccpv2'
'--enable-arp-acl' '--with-large-files' '--enable-large-cache-files'
'--enable-ssl' '--enable-icmp'


I've set up Squid with something like:

acl localhost src 127.0.0.1/255.255.255.255
acl localhost_to dst 127.0.0.1/255.255.255.255

acl network1 src 192.168.1.0/255.255.255.0
acl network1_to dst 192.168.1.0/255.255.255.0

acl network2 src 192.168.2.0/255.255.255.0
acl network2_to dst 192.168.2.0/255.255.255.0

acl network3 src 192.168.3.0/255.255.255.0
acl network3_to dst 192.168.3.0/255.255.255.0

http_port 8080 transparent
http_port 3128 transparent

tcp_outgoing_address 127.0.0.1 localhost_to
tcp_outgoing_address 192.168.1.1 network1_to
tcp_outgoing_address 192.168.2.1 network2_to
tcp_outgoing_address 192.168.3.1 network3_to
tcp_outgoing_address 189.73.x.x all



The config is OK; it runs just fine.


Obviously not. Or you would not be reporting this problem.



The problem is that, on a given day, Squid stops responding to new
connections and I have to stop it (service squid stop). After searching the
logs, I found some interesting requests:


1288136326.944 48437 192.168.2.15 TCP_MISS/000 0 GET
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -


DoS attack by 192.168.2.15.


1288136326.944 48426 127.0.0.1 TCP_MISS/000 0 GET
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -
(and this second line repeated about 13000 times)


This is why the default Squid ruleset includes:
 http_access deny to_localhost
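
For reference, the stock rules look something like this (the ACL is defined
in the default squid.conf; the deny must come before your allow rules):

 acl to_localhost dst 127.0.0.0/8
 http_access deny to_localhost

With those in place, client requests for http://localhost:8080/... are
refused outright instead of being looped back through the proxy.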




and while this was happening, I also got this in cache.log:

2010/10/26 21:37:59| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:15| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:31| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:48| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:04| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:20| WARNING! Your cache is running out of filedescriptors

I'm running with 8192 filedescriptors on a 150-client network; that's
more than enough filedescriptors for normal usage.

(from cache.log)
2010/10/31 12:27:50| Starting Squid Cache version 2.7.STABLE9 for
x86_64-unknown-linux-gnu...
2010/10/31 12:27:50| Process ID 16093
2010/10/31 12:27:50| With 8192 file descriptors available


Well, after finding that, I tried to reproduce it by making some requests
to localhost:8080 (Squid's port 8080), and I could reproduce it every time
with the above squid.conf configuration.

After some experimenting, I found that:

1) removing the:
tcp_outgoing_address 127.0.0.1 localhost_to

avoids the problem and makes the forwarding-loop detection work fine



Indicating that your NAT rules are incorrect.

The above line is simply forcing Squid to send from 127.0.0.1. It would 
only have any effect if your NAT intercept rules were forcing all 
localhost traffic back into Squid.


Removing the above line may mean that you are simply shifting the
problem from your Squid to some web server elsewhere. Your Squid will be
passing it requests for "http://localhost:8080/...". The upside is that
at least it will not be a DoS flood when it arrives there.
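
The usual safeguards against this class of loop are to intercept only on the
LAN-facing interfaces and to exempt the proxy's own outgoing traffic from the
NAT rules. A minimal iptables sketch (the interface name and the squid user
are assumptions for illustration):

 # intercept only traffic arriving from a LAN interface
 iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080
 # never re-intercept requests originating from the proxy itself
 iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT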




2) removing the transparent from
http_port 8080 transparent

avoids the problem too, even with the tcp_outgoing_address
127.0.0.1 line active


Yes. Not doing NAT is good protection against this whole class of IP 
address problems.




The question is: should Squid NOT detecting this forwarding loop be
expected with this combination of transparent and tcp_outgoing_address?


Do you have both via on and forwarded_for on set in your squid.conf? 
They are both needed.
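
That is, the loop can only be detected if these defaults have not been
switched off:

 via on
 forwarded_for on

Squid recognises its own unique hostname in the Via header of an incoming
request and breaks the loop; with "via off" that marker is never added.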



Are we talking of a bug or are we talking of some 

Re: [squid-users] This cache is currently building its digest.

2010-11-06 Thread Amos Jeffries

On 04/11/10 13:12, david robertson wrote:

Hello, I'm having a cache-digest related issue that I'm hoping someone
here can help me with.

I've got a few frontend servers, which talk to a handful of backend
servers.  Everything is working swimmingly, with the exception of
cache digests.

The digests used to work without issue, but suddenly all of my backend
servers have stopped building their digests.  They all say "This cache
is currently building its digest." when you try to access the digest.
It's as if the digest rebuild never finishes.  Nothing has changed
with my configuration, and all of the backends (6 of them) have
started doing this at roughly the same time.

My first thought would be cache corruption, but I've reset all of the
caches, and the issue still persists.

Any ideas?


Possibly negative_ttl making Squid cache the digest error.

Possibly due to the digest generation period being synchronized with the 
re-fetch period.


What is your digest rebuild time set to?
 your cache_dir and cache_mem sizes?
 and your negative_ttl setting?

What do you get back when making a manual digest fetch from one of the
Squids?
  squidclient -h $squid-visible_hostname mgr:squid-internal-periodic/store_digest


debug_options 71,9 may shed more light on what the digest rebuild is doing.
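
For reference, the knobs mentioned above, with purely illustrative values
(all are standard 2.7 directives):

 # how often the digest is rebuilt in memory and rewritten to the store
 digest_rebuild_period 1 hour
 digest_rewrite_period 1 hour
 # avoid caching failed digest fetches while debugging
 negative_ttl 0 minutes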




Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/squid2' '--enable-async-io'
'--enable-icmp' '--enable-useragent-log' '--enable-snmp'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
-O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions'


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Unable to make Squid work as a transparent proxy (Squid 3.1.7, Linux Debian, WCCP2)

2010-11-06 Thread Amos Jeffries

On 06/11/10 04:50, Leonardo wrote:

Hi all,

I have compiled and installed Squid 3.1.7 on a Linux 2.6.26 (Debian
5.0.5), and successfully tested it as a non-transparent proxy (i.e.
the proxy address:port is explicitly specified in the web browser).

Now I need to use it to do transparent proxying.  For this, I'm
following the example at
http://wiki.squid-cache.org/ConfigExamples/Intercept/CiscoAsaWccp2 .
The clients will be on subnet 10.11.1.0/24.  $ROUTER_IP and $SQUID_IP
are both on the subnet 10.8.0.0/16.

Squid has been compiled as follows:
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience


=== Squid configuration: ===

File /etc/rc.local :

modprobe ip_gre
ip tunnel add wccp0 mode gre remote $ROUTER_IP local $SQUID_IP dev eth0
ifconfig wccp0 $SQUID_IP netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A POSTROUTING -j MASQUERADE
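
(When debugging a setup like this, one quick sanity check is whether
redirected packets actually arrive through the GRE tunnel before suspecting
Squid; on the proxy, with the interface name as above:

 tcpdump -n -i wccp0 tcp port 80

If nothing shows up here while a client browses, the problem lies in the
WCCP negotiation or the ASA redirect rather than in squid.conf.)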


File /etc/squid/squid.conf : I am basically using the default config,
adding only the commands for transparent proxying:

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8
acl localnet src fc00::/7
acl localnet src fe80::/10

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Transparent proxying
http_port 3128 transparent


http_port 3128 intercept

You will also need a separate port for the normal browser-configured and 
management requests. 3.1 will reject these if sent to a NAT interception 
port.
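
Something along these lines (3129 here is an arbitrary choice for the
forward-proxy port; the iptables REDIRECT rule keeps pointing at 3128):

 http_port 3128 intercept
 http_port 3129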



wccp2_router $ROUTER_IP
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/cache 5 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

=== ===


=== Configuration of the router, a Cisco ASA 5520 firewall: ===

access-list wccp_redirect extended deny ip host $SQUID_IP any
access-list wccp_redirect extended permit tcp 10.11.1.0 255.255.255.0 any eq www
wccp web-cache redirect-list wccp_redirect
wccp interface inside web-cache redirect in

=== ===


This does not work.  The browser gives an "Unable to connect to remote
server" error after a timeout.

Here is the output of "tcpdump -vvnn -i eth0 port 2048" on the Squid machine:
15:05:01.279896 IP (tos 0x0, ttl 64, id 22913, offset 0, flags [none],
proto UDP (17), length 172) $SQUID_IP.2048 > $ROUTER_IP.2048: UDP, length 144
15:05:01.280090 IP (tos 0x0, ttl 255, id 5011, offset 0, flags [none],
proto UDP (17), length 168) $ROUTER_IP.2048 > $SQUID_IP.2048: UDP, length 140
15:05:11.279893 IP (tos 0x0, ttl 64, id 22914, offset 0, flags [none],
proto UDP (17), length 172) $SQUID_IP.2048 > $ROUTER_IP.2048: UDP, length 144
15:05:11.280083 IP (tos 0x0, ttl 255, id 20123, offset 0, flags [none],
proto UDP (17), length 168) $ROUTER_IP.2048 > $SQUID_IP.2048: UDP, length 140

This is what I see on the Cisco ASA when I turn debugging on with
"debug ip wccp packets":
WCCP-PKT:S00: Received valid Here_I_Am packet from $SQUID_IP 

Re: [squid-users] Squid 3.1.9 OSX client_side.cc okToAccept: WARNING! Your cache is running out of filedescriptors

2010-11-06 Thread Amos Jeffries

On 04/11/10 01:56, donovan jeffrey j wrote:

Greetings,
I updated 2 transparent proxies last night, and both are spewing noise about
filedescriptors. This is coming from the system:

2010/11/03 08:48:36| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:48:52| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:49:08| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:49:24| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:49:40| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:49:56| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:50:12| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:50:28| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:50:44| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors
2010/11/03 08:51:00| client_side.cc(2980) okToAccept: WARNING! Your cache is running out of filedescriptors

Here is what sysctl -a gives me:


kern.exec: unknown type returned
kern.maxfiles = 12288
kern.maxfilesperproc = 10240
kern.corefile = /cores/core.%P
kern.maxfiles: 12288
kern.maxfilesperproc: 10240


What should I set these to, and do I need to recompile with any special
adjustments?

./configure --enable-icmp --enable-storeio=diskd,ufs,aufs --enable-delay-pools 
--disable-htcp --enable-ssl --enable-ipfw-transparent --enable-snmp 
--enable-underscores --enable-basic-auth-helpers=NCSA,LDAP,getpwnam



Any other info you can provide about what's using the FDs?

With interception proxies it is usually forwarding loops. They may only 
show up after a long slow shutdown as thousands of entries in the 
access.log.
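
One quick way to see where the descriptors are going is the cache manager's
filedescriptors report, e.g. (assuming the proxy answers manager requests on
port 3128):

 squidclient -p 3128 mgr:filedescriptors

If the usage turns out to be legitimate rather than a loop, the kernel limits
can be raised with sysctl (kern.maxfiles / kern.maxfilesperproc), and 3.1
supports a --with-filedescriptors=N build option if the compiled-in ceiling
itself is too low.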


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2