RES: RES: RES: [squid-users] Squid box dropping connections
Thank you Amos,

Att,
Nataniel Klug

> [Nataniel Klug] How can I see if there are corresponding requests on squid?
>
> squidclient mgr:utilization | grep syscalls.sock.accepts
>
> You can also get a report of what the open FDs are used for in mgr:filedescriptors.
>
> Amos
Re: RES: RES: [squid-users] Squid box dropping connections
On 19/11/2011 1:55 a.m., Nataniel Klug wrote:
> Hello Amos,
>
> [snip -- quote of the FIN_WAIT1 discussion]
>
> [Nataniel Klug] How can I see if there are corresponding requests on squid?
>
> Att,
> Nataniel Klug

squidclient mgr:utilization | grep syscalls.sock.accepts

You can also get a report of what the open FDs are used for in mgr:filedescriptors.

Amos
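The cachemgr pages Amos mentions report plain "name = value" lines, so the request counter can be sampled and compared over time. A small sketch of the parsing side (an assumption here: the counter name client_http.requests is taken from the mgr:counters page, and squidclient is assumed to reach the local cachemgr):

```shell
# Print the value of one "name = value" counter read on stdin.
get_counter() {
    awk -v k="$1" '$1 == k && $2 == "=" { print $3 }'
}
# Example against a live squid (assumes local cachemgr access):
#   squidclient mgr:counters | get_counter client_http.requests
```

Sampling the counter twice, N seconds apart, and dividing the difference by N gives a rough requests-per-second figure to compare against the connection counts discussed in this thread.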
RES: [squid-users] Squid box dropping connections
Hi Eliezer,

Thanks for your answer:

> well this is one of the big problems of the conntrack thingy.. what you can try is to also change the TCP established timeout: sysctl net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600, because it might be causing the problem of such a huge amount of connection-tracking state. The default is 120 minutes, which can cause a lot of trouble in many cases of open connections. And by the way.. do you really have 155K connections? It seems like too much. Hope to hear more about the situation. Regards, Eliezer

[Nataniel Klug] So Eliezer, I don't think I have 155k connections. Most of them are in FIN_WAIT1 (about 35~45k). I have 1000 PPPoE clients behind this squid box, so even if each of them had 50 connections, I would have 50k. I think closing them really fast can solve the problem. I set it to close in 5 minutes and I will try it right now.

Att,
Nataniel Klug
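The FIN_WAIT1 estimate above can be checked directly against the conntrack table rather than guessed. A sketch (the table path is an assumption for this 2.6.18/ip_conntrack setup; newer kernels expose /proc/net/nf_conntrack instead):

```shell
# Break down tracked TCP connections by state, to see how many are
# really ESTABLISHED versus FIN_WAIT / TIME_WAIT.
# For tcp lines in /proc/net/ip_conntrack, field 4 is the TCP state.
conntrack_states() {
    awk '$1 == "tcp" { count[$4]++ } END { for (s in count) print count[s], s }' "$1" |
        sort -rn
}
# Run against the live table when it exists on this box:
[ -r /proc/net/ip_conntrack ] && conntrack_states /proc/net/ip_conntrack || true
```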
RES: RES: [squid-users] Squid box dropping connections
Hello Jenny, I will comment below:

> Nov 17 15:43:13 02 kernel: Out of socket memory
>
> Well, there you go. Here is your problem. You will need to decrease your hashsize. I suggest you experiment with conntrack max, hashsize and buckets, and watch for errors like these. There are a couple of good docs out there explaining kernel memory use with conntrack.

[Nataniel Klug] This can be the problem. I made a change to my conntrack hashsize, so it's now double its default value (8192*2).

> You can check the available port range with: cat /proc/sys/net/ipv4/ip_local_port_range

[Nataniel Klug] Ok, I'll look for it.

> And increase it with: echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range -- this is for RHEL6, I don't recall if it is the same for RHEL5.

[Nataniel Klug] I made the change via sysctl at boot time. I am not using 1024~65535 in the second try; I set it to 16000~65000 (a much wider range than the default).

> Here is a small perl script to log these for post-mortem review. Put it in cron, run it every minute as root. Then you can review later. Your orphans don't look good to me. However, you have ip_nonlocal_bind set and you are using tproxy.

[Nataniel Klug] The orphans start to grow when squid's file usage grows: when it is using more than 25k files, somehow it drops some of the files and then the orphans grow. The number of orphans almost exactly matches the sockets/files no longer used by squid.

> I am neither a linux, nor perl, nor tproxy, nor tcp expert. Just someone trying to solve her problems. So approach all these with caution; I take no responsibility. Good luck! Jenny

[Nataniel Klug] No problem Jenny, thank you so much for your help.

Att,
Nataniel Klug
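The two ranges discussed above can be compared with a little arithmetic (the stock range for these 2.6 kernels is assumed to be 32768-61000):

```shell
# Ports usable for outgoing connections under each ip_local_port_range.
range_size() { echo $(( $2 - $1 + 1 )); }
echo "default (32768-61000): $(range_size 32768 61000) ports"
echo "tuned   (16000-65000): $(range_size 16000 65000) ports"
```

With ~49k ephemeral ports against the 28k+ TCP sockets squid reported in use, port exhaustion becomes less likely, though each outgoing address still has its own 64k-port ceiling.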
Re: RES: [squid-users] Squid box dropping connections
On 19/11/2011 12:21 a.m., Nataniel Klug wrote:
> [snip -- Eliezer's suggestion to lower ip_conntrack_tcp_timeout_established]
>
> [Nataniel Klug] So Eliezer, I don't think I have 155k connections. Most of them are in FIN_WAIT1 (about 35~45k). I have 1000 PPPoE clients behind this squid box, so even if each of them had 50 connections, I would have 50k. I think closing them really fast can solve the problem. I set it to close in 5 minutes and I will try it right now.

Some assumptions in there need a double-check. Modern websites can use 50 (or more) connections to load any given page. Clients not uncommonly have several such pages open at once in tabbed browsers. And Squid uses 2x sockets per client connection. So while 150K for 1K clients does seem unusual, it is within the upper limits they *could* be using if they all happened to be browsing at the same time. I would expect to see a correspondingly high request rate in the Squid stats, though.

Amos
RES: [squid-users] Squid box dropping connections
Eliezer and Jenny,

No luck with the changes. Now, with about 85k connections, it started to drop:

Every 5.0s: ./log.sh                    Fri Nov 18 09:53:03 2011

Active connections: 87427

Last log lines:
Nov 18 08:21:25 01 kernel: ip_conntrack version 2.4 (16384 buckets, 131072 max) - 320 bytes per conntrack
Nov 18 08:21:51 01 ntpd[2023]: frequency initialized -3.892 PPM from /var/lib/ntp/drift
Nov 18 08:25:03 01 ntpd[2023]: synchronized to 146.164.48.5, stratum 1
Nov 18 08:25:02 01 ntpd[2023]: time reset -0.470795 s
Nov 18 08:25:02 01 ntpd[2023]: kernel time sync enabled 0001
Nov 18 08:28:25 01 ntpd[2023]: synchronized to LOCAL(0), stratum 10
Nov 18 08:29:21 01 ntpd[2023]: synchronized to 146.164.48.5, stratum 1
Nov 18 08:30:33 01 squid[2241]: Squid Parent: child process 2243 started
Nov 18 08:31:43 01 kernel: IP_TPROXY: Transparent proxy support initialized 2.0.6
Nov 18 08:31:43 01 kernel: IP_TPROXY: Copyright (c) 2002-2006 BalaBit IT Ltd.

Files open in squid: 3501

sockets: used 3578
TCP: inuse 3823 orphan 531 tw 1103 alloc 3993 mem 3481
UDP: inuse 9 mem 0
RAW: inuse 1
FRAG: inuse 0 memory 0

--
-- Original Message --
From: Eliezer Croitoru [mailto:elie...@ec.hadorhabaac.com]
Sent: Thursday, 17 November 2011 20:11
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid box dropping connections

On 17/11/2011 16:11, Nataniel Klug wrote:
> Hello all, I am facing a very difficult problem in my network. [snip -- full original post, firewall rules and squid.conf]
RES: RES: [squid-users] Squid box dropping connections
Hello Amos,

> [Nataniel Klug] So Eliezer, I don't think I have 155k connections. Most of them are in FIN_WAIT1 (about 35~45k). [snip]
>
> So while 150K for 1K clients does seem unusual, it is within the upper limits they *could* be using if they all happened to be browsing at the same time. I would expect to see a correspondingly high request rate in the Squid stats, though.
>
> Amos

[Nataniel Klug] How can I see if there are corresponding requests on squid?

Att,
Nataniel Klug
[squid-users] Squid box dropping connections
Hello all,

I am facing a very difficult problem in my network. I am using a layout like this:

(internet) == router == squid == [clients]

I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most reliable setup I have ever made running Squid. My problem is that I am having serious connection troubles when running squid with over 155000 conntrack connections. From my clients I start losing packets to the router when the connections go over 155000. My kernel is prepared to run over 260k connections.

I am sending a screenshot of the problem, where I have 156k connections and I started sending port-80 connections through squid (below I will post every rule I am using for my firewall and transparent connections; I will also send my squid.conf).

http://imageshack.us/photo/my-images/12/problemsg.png/

The configuration I am using:

/etc/firewall/firewall

#!/bin/bash
IPT=/sbin/iptables
RT=/sbin/route
SYS="/sbin/sysctl -w"
$IPT -F
$IPT -t nat -F
$IPT -t nat -X
$IPT -t mangle -F
$IPT -t mangle -X
$IPT -t filter -F
$IPT -t filter -X
$IPT -X
$IPT -F INPUT
$IPT -F FORWARD
$IPT -F OUTPUT
$SYS net.ipv4.ip_forward=1
$SYS net.ipv4.ip_nonlocal_bind=1
$SYS net.ipv4.netfilter.ip_conntrack_max=262144

/etc/firewall/squid-start

#!/bin/bash
IP=/sbin/ip
IPT=/sbin/iptables
FWDIR=/etc/firewall
/etc/firewall/firewall
$IPT -t tproxy -F
for i in `cat $FWDIR/squid-no-dst`
do
    $IPT -t tproxy -A PREROUTING -d $i -j ACCEPT
done
for i in `cat $FWDIR/squid-no-src`
do
    $IPT -t tproxy -A PREROUTING -s $i -j ACCEPT
done
$IPT -t tproxy -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128

/etc/squid/squid.conf

http_port 3128 tproxy transparent
tcp_outgoing_address XXX.XXX.144.67
icp_port 0
cache_mem 128 MB
cache_swap_low 92
cache_swap_high 96
maximum_object_size 100 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA
cache_dir aufs /cache/01/01 47000 64 256
cache_dir aufs /cache/01/02 47000 64 256
cache_dir aufs /cache/02/01 47000 64 256
cache_dir aufs /cache/02/02 47000 64 256
cache_dir aufs /cache/03/01 47000 64 256
cache_dir aufs /cache/03/02 47000 64 256
#--[ Max Usage : by Drive ]--#
# sdb1 [ max = 228352 / usg = 95400 (41,77%) ]
# sdb2 [ max = 228352 / usg = 95400 (41,77%) ]
# sdb3 [ max = 234496 / usg = 95400 (40,68%) ]
#--[ Max HDD sdb Usage ]--#
# sdb [ max = 923994 / aloc = 691200 (74,81%) ]
cache_store_log none
access_log /usr/local/squid/var/logs/access.log squid
client_netmask 255.255.255.255
ftp_user sq...@cnett.com.br
diskd_program /usr/local/squid/libexec/diskd
unlinkd_program /usr/local/squid/libexec/unlinkd
error_directory /usr/local/squid/share/errors/Portuguese
dns_nameservers XXX.XXX.144.14 XXX.XXX.144.6
acl all src 0.0.0.0/0
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl QUERY urlpath_regex cgi-bin \?
acl SSL_ports port 443
acl Safe_ports port 80 21 443 70 210 280 488 591 777 1025-65535
acl CONNECT method CONNECT
acl ASN53226_001 src XXX.XXX.144.0/22
acl ASN53226_002 src XXX.XXX.148.0/22
http_access allow ASN53226_001
http_access allow ASN53226_002
http_access allow localhost
http_access allow to_localhost
cache deny QUERY
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
icp_access deny all
cache_mgr supo...@cnett.com.br
cache_effective_user squid
cache_effective_group squid
visible_hostname cache
unique_hostname 02.cache

When I first start linux and there are just a few connections going through the squid box, it works just fine. When the connections go over 155k the problems begin. Is there anything I can do to solve the problem?

--
Att,
Nataniel Klug
RE: [squid-users] Squid box dropping connections
> I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most reliable setup I have ever made running Squid. My problem is that I am having serious connection troubles when running squid with over 155000 conntrack connections. From my clients I start losing packets to the router when the connections go over 155000. My kernel is prepared to run over 260k connections.
> ...
> $SYS net.ipv4.netfilter.ip_conntrack_max=262144

Just because you have conntrack max at 260K does not mean that you can handle 260K connections. You will need to increase the hashsize as well:

echo 262144 > /sys/module/ip_conntrack/parameters/hashsize

I would be checking kernel logs for conntrack overflows and cache.log for commBind errors. You might need to increase the ephemeral port range to 64K (I don't know if this applies to tproxy, though).

Jenny

PS: I am not responsible if this blows up your datacenter. It works for me when I am doing 500-600 reqs/sec with CONNECTs on a forward proxy.
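The kernel-memory cost of those limits can be estimated. The 320 bytes per entry is the figure this box's kernel itself reports at module load ("320 bytes per conntrack"); the 8 bytes per hash bucket (one list head on 64-bit) is an assumption:

```shell
# Approximate kernel memory for the conntrack limits discussed above.
conntrack_max=262144
hashsize=262144
entry_mem_mib=$(( conntrack_max * 320 / 1024 / 1024 ))
bucket_mem_kib=$(( hashsize * 8 / 1024 ))
echo "entries: ~${entry_mem_mib} MiB, hash table: ~${bucket_mem_kib} KiB"
```

So a full table costs on the order of 80 MiB of unswappable kernel memory, which is worth keeping in mind next to the "Out of socket memory" errors reported later in this thread.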
RES: [squid-users] Squid box dropping connections
Hello Jenny,

Thanks for your answer. Sorry I hadn't written it before, but my hashsize is already set to the same value as conntrack_max. I have some out-of-memory messages in dmesg:

Nov 17 15:43:13 02 kernel: Out of socket memory

And in cache.log I was not able to find any commBind errors. I am reading about these (ephemeral) port ranges. I think my squid is using too many sockets:

sockets: used 16662
TCP: inuse 28433 orphan 12185 tw 2191 alloc 28787 mem 18786
UDP: inuse 8 mem 0
RAW: inuse 1
FRAG: inuse 0 memory 0

And it has about 16k files open right now. I will try to find a way to make more ports available. Thanks!

Att,
Nataniel Klug

--
-- Original Message --
From: Jenny Lee [mailto:bodycar...@live.com]
Sent: Thursday, 17 November 2011 14:30
To: listas.n...@cnett.com.br; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid box dropping connections
Priority: High

> [snip]
RE: RES: [squid-users] Squid box dropping connections
From: listas.n...@cnett.com.br
To: bodycar...@live.com; squid-users@squid-cache.org
Date: Thu, 17 Nov 2011 15:55:20 -0300
Subject: RES: [squid-users] Squid box dropping connections

> Hello Jenny, Thanks for your answer. Sorry I hadn't written it before, but my hashsize is already set to the same value as conntrack_max. I have some out-of-memory messages in dmesg:
>
> Nov 17 15:43:13 02 kernel: Out of socket memory

Well, there you go. Here is your problem. You will need to decrease your hashsize. I suggest you experiment with conntrack max, hashsize and buckets, and watch for errors like these. There are a couple of good docs out there explaining kernel memory use with conntrack.

> And in cache.log I was not able to find any commBind errors. I am reading about these (ephemeral) port ranges. I think my squid is using too many sockets:
>
> sockets: used 16662
> TCP: inuse 28433 orphan 12185 tw 2191 alloc 28787 mem 18786
> UDP: inuse 8 mem 0
> RAW: inuse 1
> FRAG: inuse 0 memory 0
>
> And it has about 16k files open right now. I will try to find a way to make more ports available. Thanks!

You can check the available port range with:

cat /proc/sys/net/ipv4/ip_local_port_range

And increase it with:

echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range

This is for RHEL6; I don't recall if it is the same for RHEL5.

Here is a small perl script to log these for post-mortem review. Put it in cron, run it every minute as root. Then you can review later. Your orphans don't look good to me. However, you have ip_nonlocal_bind set and you are using tproxy.

I am neither a linux, nor perl, nor tproxy, nor tcp expert. Just someone trying to solve her problems. So approach all these with caution; I take no responsibility. Good luck!
Jenny

#!/usr/bin/perl
# Log the conntrack count and "ss -s" TCP totals once per run (cron it).
# (On older ip_conntrack kernels the counter lives at
#  /proc/sys/net/ipv4/netfilter/ip_conntrack_count instead.)
$ct = `cat /proc/sys/net/netfilter/nf_conntrack_count`;
chomp $ct;
@ss = `ss -s`;
foreach (@ss) {
    if (/TCP:\s+(\d+)\s+\(estab\s+(\d+),.+orphaned\s+(\d+),.+timewait\s+(\d+).+ports\s+(\d+)/) {
        $tcp = $1; $est = $2; $orp = $3; $tw = $4; $ports = $5;
    }
}
$file = "/var/log/tcp.log";
$date = localtime();
open(OUT, ">>$file");    # append so the log survives across runs
print OUT "$date: CT:$ct TCP:$tcp EST:$est ORP:$orp TW:$tw PORTS:$ports\n";
close OUT;
Re: [squid-users] Squid box dropping connections
On 17/11/2011 16:11, Nataniel Klug wrote:
> Hello all, I am facing a very difficult problem in my network. I am using a layout like this: (internet) == router == squid == [clients]. I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92.
>
> [snip -- firewall rules and squid.conf]
>
> When I first start linux and there are just a few connections going through the squid box, it works just fine. When the connections go over 155k the problems begin. Is there anything I can do to solve the problem?

Well, this is one of the big problems of the conntrack thingy.. What you can try is to also change the TCP established timeout:

sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600

because it might be causing the problem of such a huge amount of connection-tracking state. The default is 120 minutes, which can cause a lot of trouble in many cases of open connections.

And by the way.. do you really have 155K connections? It seems like too much.

Hope to hear more about the situation.

Regards,
Eliezer
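For reference, the conntrack tunables suggested over the course of this thread, gathered as a sysctl fragment. This is a sketch only: the ip_conntrack names match this 2.6.18 kernel (nf_conntrack kernels use net.netfilter.nf_conntrack_* instead), and the fin_wait value is an assumption reflecting Nataniel's "close in 5 minutes" experiment.

```
# /etc/sysctl.conf fragment (ip_conntrack module names, 2.6.18 kernel)
net.ipv4.netfilter.ip_conntrack_max = 262144
# Eliezer's suggestion: track established connections for one hour
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 3600
# drop half-closed connections sooner (the 5-minute experiment)
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 300
```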