Eliezer and Jenny,

        No luck with the changes. Now, with about 85k connections, it has
started to drop:

Every 5.0s: ./log.sh
Fri Nov 18 09:53:03 2011

Active connections: 87427

Last lines of the log:
Nov 18 08:21:25 01 kernel: ip_conntrack version 2.4 (16384 buckets, 131072
max) - 320 bytes per conntrack
Nov 18 08:21:51 01 ntpd[2023]: frequency initialized -3.892 PPM from
/var/lib/ntp/drift
Nov 18 08:25:03 01 ntpd[2023]: synchronized to 146.164.48.5, stratum 1
Nov 18 08:25:02 01 ntpd[2023]: time reset -0.470795 s
Nov 18 08:25:02 01 ntpd[2023]: kernel time sync enabled 0001
Nov 18 08:28:25 01 ntpd[2023]: synchronized to LOCAL(0), stratum 10
Nov 18 08:29:21 01 ntpd[2023]: synchronized to 146.164.48.5, stratum 1
Nov 18 08:30:33 01 squid[2241]: Squid Parent: child process 2243 started
Nov 18 08:31:43 01 kernel: IP_TPROXY: Transparent proxy support initialized
2.0.6
Nov 18 08:31:43 01 kernel: IP_TPROXY: Copyright (c) 2002-2006 BalaBit IT
Ltd.

Open files in squid:
3501

sockets: used 3578
TCP: inuse 3823 orphan 531 tw 1103 alloc 3993 mem 3481
UDP: inuse 9 mem 0
RAW: inuse 1
FRAG: inuse 0 memory 0
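
For reference, log.sh itself is not shown here; a rough sketch of a script
that would produce output like the above (the proc paths and the way the
open squid files are counted are assumptions for this CentOS 5 /
ip_conntrack box):

#!/bin/bash
# Hypothetical reconstruction of log.sh -- not the original script.
date

echo "Active connections: $(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count)"

echo
echo "Last lines of the log:"
tail -n 10 /var/log/messages

echo
echo "Open files in squid:"
# Count file descriptors of every running squid process (parent + child)
for pid in $(pidof squid); do
    ls /proc/$pid/fd 2>/dev/null
done | wc -l

echo
# Socket summary exactly as the kernel reports it
cat /proc/net/sockstat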

--



> -----Original Message-----
> From: Eliezer Croitoru [mailto:elie...@ec.hadorhabaac.com]
> Sent: Thursday, November 17, 2011 20:11
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Squid box dropping connections
> 
> On 17/11/2011 16:11, Nataniel Klug wrote:
> >                  Hello all,
> >
> >                  I am facing a very difficult problem in my network. I
> > am using a layout like this:
> >
> > (internet) ==<router>  ==<squid>  == [clients]
> >
> >                  I am running CentOS v5.1 with Squid-2.6 STABLE22 and
> > Tproxy (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is
> > the most reliable setup I have ever made running Squid. My problem
> > is that I am having serious connection troubles when running Squid
> > with over 155000 conntrack connections.
> >
> >                  From my clients, I start losing packets to the
> > router when the connections go over 155000. My kernel is configured
> > to handle over 260k connections. I am sending a screenshot of the
> > problem, where I have 156k connections and started sending port-80
> > connections through squid (below I will post every rule I am using
> > for my firewall and transparent proxying, and also my squid.conf).
> >
> > http://imageshack.us/photo/my-images/12/problemsg.png/
> >
> >                  The configuration I am using:
> >
> > /etc/firewall/firewall
> > #!/bin/bash
> > IPT="/sbin/iptables"
> > RT="/sbin/route"
> > SYS="/sbin/sysctl -w"
> > $IPT -F
> > $IPT -t nat -F
> > $IPT -t nat -X
> > $IPT -t mangle -F
> > $IPT -t mangle -X
> > $IPT -t filter -F
> > $IPT -t filter -X
> > $IPT -X
> > $IPT -F INPUT
> > $IPT -F FORWARD
> > $IPT -F OUTPUT
> > $SYS net.ipv4.ip_forward=1
> > $SYS net.ipv4.ip_nonlocal_bind=1
> > $SYS net.ipv4.netfilter.ip_conntrack_max=262144
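
A quick way to check whether the drops line up with this conntrack ceiling
is to compare the live entry count against the configured maximum; a small
sketch (the proc paths assume the legacy ip_conntrack module on this
kernel):

# Current conntrack usage vs. the configured ceiling
count=$(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count)
max=$(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max)
echo "conntrack entries: $count / $max"

# The boot log above reports only 16384 hash buckets, so at 150k+ entries
# the average hash chain is already around 10 entries long before the
# table actually fills.
dmesg | grep -i ip_conntrack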
> >
> > /etc/firewall/squid-start
> > #!/bin/bash
> > IP="/sbin/ip"
> > IPT="/sbin/iptables"
> > FWDIR="/etc/firewall"
> > /etc/firewall/firewall
> > $IPT -t tproxy -F
> > for i in `cat $FWDIR/squid-no-dst`
> > do
> >         $IPT -t tproxy -A PREROUTING -d $i -j ACCEPT
> > done
> > for i in `cat $FWDIR/squid-no-src`
> > do
> >         $IPT -t tproxy -A PREROUTING -s $i -j ACCEPT
> > done
> > $IPT -t tproxy -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128
> >
> > /etc/squid/squid.conf
> > http_port 3128 tproxy transparent
> > tcp_outgoing_address XXX.XXX.144.67
> > icp_port 0
> >
> > cache_mem 128 MB
> >
> > cache_swap_low 92
> > cache_swap_high 96
> > maximum_object_size 1000000 KB
> > cache_replacement_policy heap LFUDA
> > memory_replacement_policy heap LFUDA
> >
> > cache_dir aufs /cache/01/01 47000 64 256
> > cache_dir aufs /cache/01/02 47000 64 256
> > cache_dir aufs /cache/02/01 47000 64 256
> > cache_dir aufs /cache/02/02 47000 64 256
> > cache_dir aufs /cache/03/01 47000 64 256
> > cache_dir aufs /cache/03/02 47000 64 256
> > #--[ Max Usage : by Drive ]--#
> > # sdb1 [ max = 228352 / usg = 95400 (41,77%) ]
> > # sdb1 [ max = 228352 / usg = 95400 (41,77%) ]
> > # sdb3 [ max = 234496 / usg = 95400 (40,68%) ]
> > #-- [ Max HDD sdb Usage ]--#
> > # sdb [ max = 923994 / aloc = 691200 (74,81%) ]
> >
> > cache_store_log none
> > access_log /usr/local/squid/var/logs/access.log squid
> > client_netmask 255.255.255.255
> > ftp_user sq...@cnett.com.br
> >
> > diskd_program /usr/local/squid/libexec/diskd
> > unlinkd_program /usr/local/squid/libexec/unlinkd
> >
> > error_directory /usr/local/squid/share/errors/Portuguese
> >
> > dns_nameservers XXX.XXX.144.14 XXX.XXX.144.6
> >
> > acl all src 0.0.0.0/0
> > acl localhost src 127.0.0.1/32
> > acl to_localhost dst 127.0.0.0/8
> > acl QUERY urlpath_regex cgi-bin \?
> > acl SSL_ports port 443
> > acl Safe_ports port 80 21 443 70 210 280 488 591 777 1025-65535
> > acl CONNECT method CONNECT
> >
> > acl ASN53226_001 src XXX.XXX.144.0/22
> > acl ASN53226_002 src XXX.XXX.148.0/22
> >
> > http_access allow ASN53226_001
> > http_access allow ASN53226_002
> >
> > http_access allow localhost
> > http_access allow to_localhost
> >
> > cache deny QUERY
> >
> > http_access deny !Safe_ports
> > http_access deny CONNECT !SSL_ports
> > http_access deny all
> > icp_access deny all
> >
> > cache_mgr supo...@cnett.com.br
> > cache_effective_user squid
> > cache_effective_group squid
> > visible_hostname cache
> > unique_hostname 02.cache
> >
> >                  When I first start Linux and there are just a few
> > connections going through the squid box, it works just fine. When the
> > connections go over 155k, the problems begin. Is there anything I can
> > do to solve the problem?
> Well, this is one of the big problems with the conntrack thing.
> What you can try is to also change the TCP timeout:
> sysctl net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600
> because that might be what is causing the problem with such a huge
> connection-tracking table.
> The default is 120 minutes, which can cause a lot of trouble when there
> are many open connections.
> And by the way, do you really have 155K connections? That seems like
> too much.
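
For what it is worth, a minimal sketch of applying that timeout and keeping
it across reboots (the sysctl key is the one quoted above; putting it in
/etc/sysctl.conf is an assumption about how this box is managed):

# Apply the shorter established-connection timeout immediately
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600

# Persist it across reboots (append only if not already present)
grep -q ip_conntrack_tcp_timeout_established /etc/sysctl.conf || \
  echo "net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 3600" >> /etc/sysctl.conf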
> 
> hope to hear more about the situation.
> 
> Regards Eliezer
> >
> > --
> > Att,
> >
> > Nataniel Klug
> >
> >
