Re: [squid-users] Strange High CPU usage

2004-01-20 Thread Giulio Cervera
After a couple of days lost in OS and Squid tuning, today I disabled the write and read caches on the Perc3/Di RAID controller (128MB onboard).

Cache disabled:
client_http.requests = 152.155593/sec
cpu_usage = 76.142118%

Cache enabled:
client_http.requests = 153.641384/sec
cpu_usage = 99.619241%

This is very strange, but I have found the issue.
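The two samples can be turned into a rough per-request CPU cost (a back-of-the-envelope sketch, not a counter Squid reports; the figures are copied from the 5-minute stats above):

```python
# Rough per-request CPU cost implied by the two 5-minute samples.
# cpu_usage is a percentage of wall time, so (cpu_usage / 100) / req_rate
# gives CPU-seconds spent per request.
samples = {
    "raid cache disabled": (152.155593, 76.142118),  # (req/sec, cpu %)
    "raid cache enabled":  (153.641384, 99.619241),
}

for name, (reqs_per_sec, cpu_pct) in samples.items():
    ms_per_req = (cpu_pct / 100.0) / reqs_per_sec * 1000.0
    print(f"{name}: ~{ms_per_req:.2f} ms CPU per request")
```

At essentially the same request rate, the enabled controller cache costs roughly 1.5 ms more CPU per request, which is consistent with the controller being the bottleneck rather than squid.conf.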

Giulio Cervera wrote:

The problem is the following:
We have approximately 700Reqs/sec distributed on
4 x Dell 2650 (sibling with digest) after a load balancer
each one with
2 x Xeon 1.8Ghz (HT disabled)
2GB Ram
PERC3/Di RAID controller
5 x 36GB HD SCSI 10k (2 RAID1 for os and swap, 3 for cache)
3 NIC: 1 internet, 1 intranet, 1 proxy LAN (digest and proxy communication)

52Mbit link

Slackware 8.1
Squid 2.5.STABLE4
some lines of fstab:
/dev/sdb1  /var/cache/spool/0  reiserfs  noatime,notail  1  2
/dev/sdc1  /var/cache/spool/1  reiserfs  noatime,notail  1  2
/dev/sdd1  /var/cache/spool/2  reiserfs  noatime,notail  1  2
configure options:
--prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin 
--libexecdir=/usr/lib/squid --sysconfdir=/etc/squid 
--localstatedir=/var --with-aufs-threads=48 --with-pthreads --with-aio 
--enable-async-io --enable-storeio=diskd,aufs --disable-wccp 
--enable-default-err-language=English --enable-err-languages=English 
--disable-ident-lookups --enable-underscores 
--enable-removal-policies=heap,lru --enable-snmp 
--enable-cache-digests --enable-gnuregex

and the current configuration:

cache_mem 64 MB
cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB
maximum_object_size_in_memory 24 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
memory_pools_limit 50 MB
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
buffered_logs on
Squid seems to work fine, but CPU usage above ~60 req/sec on each
server is always 100%.
I have tried many changes to squid.conf with no results.

Last question: I also have to upgrade our OS to RedHat AS3; any experience with it?
Thanks


--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



[squid-users] Strange log messages =strip(nnumber)

2004-01-13 Thread Giulio Cervera
Today, while reading cache.log, I found a lot of strange messages of the form
'=strip(nnumber)':

...
2004/01/13 15:41:57| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:41:57| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:41:58| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:01| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:08| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:09| urlParse: Illegal character in hostname 
'194.213.2.5:8080194.213.2.5'
2004/01/13 15:42:09| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:09| urlParse: Illegal character in hostname 
'194.213.2.5:8080194.213.2.5'
2004/01/13 15:42:10| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:11| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:13| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:15| urlParse: Illegal character in hostname 
'=strip(nnumber)'
2004/01/13 15:42:18| urlParse: Illegal character in hostname 
'=strip(nnumber)'
...

Any ideas?
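To see which offending "hostname" dominates (and so which client or application is sending the malformed requests), the log lines can be ranked by frequency. A small sketch, run here on an inline sample copied from the excerpt above rather than on the real cache.log:

```python
# Count urlParse complaints per offending "hostname" string.
# The sample lines are copied from the cache.log excerpt above.
import re
from collections import Counter

sample = """\
2004/01/13 15:41:57| urlParse: Illegal character in hostname '=strip(nnumber)'
2004/01/13 15:42:09| urlParse: Illegal character in hostname '194.213.2.5:8080194.213.2.5'
2004/01/13 15:42:10| urlParse: Illegal character in hostname '=strip(nnumber)'
"""

counts = Counter(
    m.group(1)
    for line in sample.splitlines()
    if (m := re.search(r"Illegal character in hostname '(.*)'", line))
)

for host, n in counts.most_common():
    print(n, host)
```

Pointing the same loop at /var/cache/log/cache.log (the cache_log path from the config) would show whether '=strip(nnumber)' comes from a single source.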

--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



Re: [squid-users] Strange High CPU usage

2004-01-08 Thread Giulio Cervera
Henrik Nordstrom wrote:

On Wed, 7 Jan 2004, Giulio Cervera wrote:

 

this is the full acl, i have also attached the full config
   

Try using half_closed_clients off

Regards
Henrik
 

Oops... sorry... I made a bad cut & paste; I need more holiday :(
The previous message was missing part of the config.
This one is complete (verified), and half_closed_clients is already off.
Do you think this ACL set is too big for our target (~200 reqs/sec)?

Thanks, and sorry again.



http_port 8080
icp_port 3130
cache_peer 194.218.2.8   parent   8080  0     proxy-only no-query no-digest
cache_peer 194.218.2.20  parent   8080  0     proxy-only no-query no-digest
cache_peer 10.253.16.1   sibling   8080  3130  proxy-only
cache_peer 10.253.16.2   sibling   8080  3130  proxy-only
cache_peer 10.253.16.3   sibling   8080  3130  proxy-only
#cache_peer 10.253.16.4   sibling   8080  3130  proxy-only

hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 64 MB

cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB

maximum_object_size_in_memory 24 KB

ipcache_size 2048

cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
cache_store_log none
log_ip_on_direct on

pid_filename /var/cache/run/cache.pid

ftp_user [EMAIL PROTECTED]

dns_timeout 1 minutes

hosts_file none

refresh_pattern ^ftp:     1440  20%  10080
refresh_pattern ^gopher:  1440   0%   1440
refresh_pattern .            0  20%   4320
quick_abort_min 0 KB
quick_abort_max 0 KB
positive_dns_ttl 1 hours

range_offset_limit 0 KB

read_timeout 10 minutes

half_closed_clients off

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Tunnel_ports port 443-499
acl Tunnel_no_src src 10.253.0.0/16
acl Tunnel_method method CONNECT
acl Safe_ports port 80          # http
acl Safe_ports port 81          # http 2
acl Safe_ports port 21          # ftp
acl Safe_ports port 443-499     # https
acl Safe_ports port 1025-65535  # unregistered ports
acl clients src 10.0.0.0/8
acl clients src 172.16.0.0/12
acl clients src 192.168.0.0/16
acl clients src 194.218.0.0/19
acl locallan dst 10.253.0.0/16
acl locallan dst 194.218.2.0/23
acl proxylan dst 10.253.16.0/27
acl allowed_peer src 10.253.16.1
acl allowed_peer src 10.253.16.2
acl allowed_peer src 10.253.16.3
acl allowed_peer src 10.253.16.4
acl siteallow_url url_regex -i ^.{3,4}://.*\.public\.rupa\.it
acl siteallow_dst dst 194.218.2.160/27
acl siteallow_dst dst 10.253.64.0/24
acl siteallow_dst dst 10.253.16.0/27
acl dangurl urlpath_regex -i \.id[aq]\?.{100,}   # CodeRED
acl dangurl urlpath_regex -i /readme\.(eml|nws|exe)  # NIMDA
acl mgmtlan src 10.253.0.0/23
acl FTP proto FTP
acl SITIRUPA dst 194.218.0.0/19
acl SITIRUPA dst 10.0.0.0/8
acl SITIRUPA dst 172.16.0.0/16
acl LLPPProxy src 10.136.1.206
acl LLPPsicoge dst 194.218.14.15
#SNMP ACL
acl SNMPallow src 127.0.0.1/32
acl SNMPallow src 10.253.0.0/16
acl snmppublic snmp_community edsaipa
http_access allow allowed_peer

http_access allow manager localhost
http_access allow manager mgmtlan
http_access deny manager
http_access deny to_localhost
http_access deny !Safe_ports
http_access deny dangurl
http_access deny Tunnel_method Tunnel_no_src !Tunnel_ports

http_access allow siteallow_url
http_access allow siteallow_dst
http_access deny locallan
http_access allow LLPPsicoge LLPPProxy
http_access deny LLPPsicoge
http_access allow clients

http_access deny all

http_reply_access allow all

icp_access allow allowed_peer
icp_access deny all
cache_peer_access 194.218.2.8 allow FTP
cache_peer_access 194.218.2.20 allow SITIRUPA
cache_peer_access 194.218.2.20 deny all
cache_peer_access 10.253.16.1 deny SITIRUPA
cache_peer_access 10.253.16.1 allow all
cache_peer_access 10.253.16.2 deny SITIRUPA
cache_peer_access 10.253.16.2 allow all
cache_peer_access 10.253.16.3 deny SITIRUPA
cache_peer_access 10.253.16.3 allow all
cache_mgr [EMAIL PROTECTED]

visible_hostname caspy008.cgi.rupa.it

logfile_rotate 0

memory_pools_limit 50 MB

store_avg_object_size 25 KB

client_db off

buffered_logs off

always_direct allow proxylan
always_direct deny FTP
always_direct deny SITIRUPA
always_direct deny all
never_direct deny proxylan
never_direct allow SITIRUPA
snmp_port 3401

snmp_access allow snmppublic SNMPallow
snmp_access deny all
coredump_dir /var/cache

ignore_unknown_nameservers off

digest_rebuild_period 15 minute

digest_rewrite_period 15 minute
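One thing worth checking in the config above is the siteallow_url regex. A quick sketch using Python's re module (semantics close to, but not identical to, the GNU regex library Squid was built with via --enable-gnuregex; both example URLs are hypothetical):

```python
# The siteallow_url pattern from the config, tried outside Squid.
# Because ".*" spans any characters, the pattern can also match
# ".public.rupa.it" appearing later in the URL, e.g. in a query string.
import re

siteallow = re.compile(r"^.{3,4}://.*\.public\.rupa\.it", re.IGNORECASE)

print(bool(siteallow.search("http://www.public.rupa.it/index.html")))    # intended match
print(bool(siteallow.search("http://evil.example/?x=.public.rupa.it")))  # also matches
```

If that second match is unintended, anchoring the host part more tightly (so the pattern cannot run into the path or query string) would narrow the allow rule.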

--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



Re: Re: [squid-users] Strange High CPU usage

2004-01-07 Thread Giulio Cervera
Thanks again, and sorry for the double post (I hit the maximum message size, so I just removed all the comments from squid.conf in the previous mail).

We have quite a few ACLs.

Our network is:
2 proxies for FTP (with antivirus),
2 proxies for the local LAN (we have many remote sites, and only these 2 machines
have access to their firewalls),
and these 4 proxies with Squid, for internet only (no other product runs on them).

This is the full ACL set; I have also attached the full config.
--
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Tunnel_ports port 443-499
acl Tunnel_no_src src 10.253.0.0/16
acl Tunnel_method method CONNECT
acl Safe_ports port 80          # http
acl Safe_ports port 81          # http 2
acl Safe_ports port 21          # ftp
acl Safe_ports port 443-499     # https
acl Safe_ports port 1025-65535  # unregistered ports
acl clients src 10.0.0.0/8
acl clients src 172.16.0.0/12
acl clients src 192.168.0.0/16
acl clients src 194.218.0.0/19
acl locallan dst 10.253.0.0/16
acl locallan dst 194.218.2.0/23
acl proxylan dst 10.253.16.0/27
acl allowed_peer src 10.253.16.1
acl allowed_peer src 10.253.16.2
acl allowed_peer src 10.253.16.3
acl allowed_peer src 10.253.16.4
acl siteallow_url url_regex -i ^.{3,4}://.*\.public\.rupa\.it
acl siteallow_dst dst 194.218.2.160/27
acl siteallow_dst dst 10.253.64.0/24
acl siteallow_dst dst 10.253.16.0/27
acl dangurl urlpath_regex -i \.id[aq]\?.{100,}   # CodeRED
acl dangurl urlpath_regex -i /readme\.(eml|nws|exe)  # NIMDA
acl mgmtlan src 10.253.0.0/23
acl FTP proto FTP
acl SITIRUPA dst 194.218.0.0/19
acl SITIRUPA dst 10.0.0.0/8
acl SITIRUPA dst 172.16.0.0/16
acl LLPPProxy src 10.136.1.206
acl LLPPsicoge dst 194.218.14.15
#SNMP ACL
acl SNMPallow src 127.0.0.1/32
acl SNMPallow src 10.253.0.0/16
acl snmppublic snmp_community edsaipa
http_access allow allowed_peer

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access allow manager mgmtlan
http_access deny manager
http_access deny to_localhost
http_access deny !Safe_ports
http_access deny dangurl
http_access deny Tunnel_method Tunnel_no_src !Tunnel_ports

http_access allow siteallow_url
http_access allow siteallow_dst
http_access deny locallan
http_access allow LLPPsicoge LLPPProxy
http_access deny LLPPsicoge
http_access allow clients

http_access deny all

http_reply_access allow all

icp_access allow allowed_peer
icp_access deny all
cache_peer_access 194.218.2.8 allow FTP
cache_peer_access 194.218.2.20 allow SITIRUPA
cache_peer_access 194.218.2.20 deny all
cache_peer_access 10.253.16.1 deny SITIRUPA
cache_peer_access 10.253.16.1 allow all
cache_peer_access 10.253.16.2 deny SITIRUPA
cache_peer_access 10.253.16.2 allow all
cache_peer_access 10.253.16.3 deny SITIRUPA
cache_peer_access 10.253.16.3 allow all
#cache_peer_access 10.253.16.4 deny SITIRUPA
#cache_peer_access 10.253.16.4 allow all
always_direct allow proxylan
always_direct deny FTP
always_direct deny SITIRUPA
always_direct deny all
never_direct deny proxylan
never_direct allow SITIRUPA
--

Duane Wessels wrote:

On Fri, 19 Dec 2003, Giulio Cervera wrote:

 

Thanks for your reply.

I'm monitoring median_select_fds.

This morning, with 150 req/sec:

select_loops = 280.262863/sec
select_fds = 1502.051748/sec
average_select_fd_period = 0.000660/fd
median_select_fds = 3.984375

This evening, with 40 req/sec:

select_loops = 383.217992/sec
select_fds = 457.205789/sec
average_select_fd_period = 0.001830/fd
median_select_fds = 0.00
  


I assume that you see the high 99% CPU usage at 150 req/sec, and
okay CPU usage at 40 req/sec.
From the above numbers, it looks like the high CPU usage is not due to
some stuck file descriptor.

Was that the entire squid configuration that you sent?  Or do you have 
some
long ACL lists or something that could be causing the high CPU usage?

Duane W.

 

--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]




Re: [squid-users] Strange High CPU usage

2003-12-22 Thread Giulio Cervera
Thanks again, and sorry for the double post (I hit the maximum message size, so I just removed all the comments from squid.conf in the previous mail).

We have quite a few ACLs, but they are required by our network rules. Do you think upgrading the CPUs could solve our problem?

Our network is:
2 proxies for FTP (with antivirus),
2 proxies for the local LAN (we have many remote sites, and only these 2 machines
have access to their firewalls),
and these 4 proxies with Squid, for internet only (no other product runs on them).

This is the full ACL set; I have also attached the full config.
--
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Tunnel_ports port 443-499
acl Tunnel_no_src src 10.253.0.0/16
acl Tunnel_method method CONNECT
acl Safe_ports port 80          # http
acl Safe_ports port 81          # http 2
acl Safe_ports port 21          # ftp
acl Safe_ports port 443-499     # https
acl Safe_ports port 1025-65535  # unregistered ports
acl clients src 10.0.0.0/8
acl clients src 172.16.0.0/12
acl clients src 192.168.0.0/16
acl clients src 194.218.0.0/19
acl locallan dst 10.253.0.0/16
acl locallan dst 194.218.2.0/23
acl proxylan dst 10.253.16.0/27
acl allowed_peer src 10.253.16.1
acl allowed_peer src 10.253.16.2
acl allowed_peer src 10.253.16.3
acl allowed_peer src 10.253.16.4
acl siteallow_url url_regex -i ^.{3,4}://.*\.public\.rupa\.it
acl siteallow_dst dst 194.218.2.160/27
acl siteallow_dst dst 10.253.64.0/24
acl siteallow_dst dst 10.253.16.0/27
acl dangurl urlpath_regex -i \.id[aq]\?.{100,}   # CodeRED
acl dangurl urlpath_regex -i /readme\.(eml|nws|exe)  # NIMDA
acl mgmtlan src 10.253.0.0/23
acl FTP proto FTP
acl SITIRUPA dst 194.218.0.0/19
acl SITIRUPA dst 10.0.0.0/8
acl SITIRUPA dst 172.16.0.0/16
acl LLPPProxy src 10.136.1.206
acl LLPPsicoge dst 194.218.14.15
#SNMP ACL
acl SNMPallow src 127.0.0.1/32
acl SNMPallow src 10.253.0.0/16
acl snmppublic snmp_community edsaipa
http_access allow allowed_peer

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access allow manager mgmtlan
http_access deny manager
http_access deny to_localhost
http_access deny !Safe_ports
http_access deny dangurl
http_access deny Tunnel_method Tunnel_no_src !Tunnel_ports

http_access allow siteallow_url
http_access allow siteallow_dst
http_access deny locallan
http_access allow LLPPsicoge LLPPProxy
http_access deny LLPPsicoge
http_access allow clients

http_access deny all

http_reply_access allow all

icp_access allow allowed_peer
icp_access deny all
cache_peer_access 194.218.2.8 allow FTP
cache_peer_access 194.218.2.20 allow SITIRUPA
cache_peer_access 194.218.2.20 deny all
cache_peer_access 10.253.16.1 deny SITIRUPA
cache_peer_access 10.253.16.1 allow all
cache_peer_access 10.253.16.2 deny SITIRUPA
cache_peer_access 10.253.16.2 allow all
cache_peer_access 10.253.16.3 deny SITIRUPA
cache_peer_access 10.253.16.3 allow all
#cache_peer_access 10.253.16.4 deny SITIRUPA
#cache_peer_access 10.253.16.4 allow all
always_direct allow proxylan
always_direct deny FTP
always_direct deny SITIRUPA
always_direct deny all
never_direct deny proxylan
never_direct allow SITIRUPA
--

Duane Wessels wrote:

On Fri, 19 Dec 2003, Giulio Cervera wrote:

 

Thanks for your reply.

I'm monitoring median_select_fds.

This morning, with 150 req/sec:

select_loops = 280.262863/sec
select_fds = 1502.051748/sec
average_select_fd_period = 0.000660/fd
median_select_fds = 3.984375

This evening, with 40 req/sec:

select_loops = 383.217992/sec
select_fds = 457.205789/sec
average_select_fd_period = 0.001830/fd
median_select_fds = 0.00
   

I assume that you see the high 99% CPU usage at 150 req/sec, and
okay CPU usage at 40 req/sec.
From the above numbers, it looks like the high CPU usage is not due to
some stuck file descriptor.

Was that the entire squid configuration that you sent?  Or do you have some
long ACL lists or something that could be causing the high CPU usage?
Duane W.

 

--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



squid.conf.zip
Description: Binary data


Re: Re: [squid-users] Strange High CPU usage

2003-12-19 Thread Giulio Cervera
Duane Wessels wrote:

and the current configuration:

cache_mem 64 MB
cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB
maximum_object_size_in_memory 24 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
memory_pools_limit 50 MB
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
buffered_logs on
Squid seems to work fine, but CPU usage above ~60 req/sec on each server
is always 100%.
I have tried many changes to squid.conf with no results.
   



Probably the first thing to look at is whether or not the high CPU
problem comes from select/poll looping very quickly.  For example:
% squidclient mgr:5min | grep -i select
select_loops = 72.180305/sec
select_fds = 19.528906/sec
average_select_fd_period = 0.007477/fd
median_select_fds = 0.00
I think median_select_fds is essentially broken and always returns 0.
 

median_select_fds = 3.984375

I plot the values for my caches, which you can see here:
http://www.ircache.net/Statistics/Vitals/rrd/cgi/select.day.cgi
You might want to add 'half_closed_clients off' to your config file
and see if that helps.
Duane W.

 

half_closed_clients is already off.

These are the current 5-minute stats:

sample_start_time = 1071827437.3708 (Fri, 19 Dec 2003 09:50:37 GMT)
sample_end_time = 1071827737.11348 (Fri, 19 Dec 2003 09:55:37 GMT)
client_http.requests = 146.539601/sec
client_http.hits = 43.662221/sec
client_http.errors = 0.00/sec
client_http.kbytes_in = 90.097706/sec
client_http.kbytes_out = 1206.619271/sec
client_http.all_median_svc_time = 0.092188 seconds
client_http.miss_median_svc_time = 0.127833 seconds
client_http.nm_median_svc_time = 0.011645 seconds
client_http.nh_median_svc_time = 0.167753 seconds
client_http.hit_median_svc_time = 0.023168 seconds
server.all.requests = 107.887252/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 1004.031097/sec
server.all.kbytes_out = 78.504667/sec
server.http.requests = 104.694000/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 931.439613/sec
server.http.kbytes_out = 67.644944/sec
server.ftp.requests = 0.00/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 0.00/sec
server.ftp.kbytes_out = 0.00/sec
server.other.requests = 3.193252/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 72.594818/sec
server.other.kbytes_out = 10.859723/sec
icp.pkts_sent = 153.342762/sec
icp.pkts_recv = 152.362786/sec
icp.queries_sent = 75.238084/sec
icp.replies_sent = 78.104678/sec
icp.queries_recv = 78.104678/sec
icp.replies_recv = 74.258109/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.543319/sec
icp.kbytes_sent = 13.262996/sec
icp.kbytes_recv = 13.189664/sec
icp.q_kbytes_sent = 6.676497/sec
icp.r_kbytes_sent = 6.586499/sec
icp.q_kbytes_recv = 6.893158/sec
icp.r_kbytes_recv = 6.296506/sec
icp.query_median_svc_time = 0.007296 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.005733 seconds
unlink.requests = 0.00/sec
page_faults = 0.00/sec
select_loops = 280.262863/sec
select_fds = 1502.051748/sec
average_select_fd_period = 0.000660/fd
median_select_fds = 3.984375
swap.outs = 15.922928/sec
swap.ins = 38.489020/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 13.556321/sec
syscalls.polls = 419.062661/sec
syscalls.disk.opens = 54.431947/sec
syscalls.disk.closes = 54.411948/sec
syscalls.disk.reads = 77.951348/sec
syscalls.disk.writes = 153.126100/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 16.652909/sec
syscalls.sock.accepts = 62.735069/sec
syscalls.sock.sockets = 66.334977/sec
syscalls.sock.connects = 65.908322/sec
syscalls.sock.binds = 66.134982/sec
syscalls.sock.closes = 100.097451/sec
syscalls.sock.reads = 777.553532/sec
syscalls.sock.writes = 767.850445/sec
syscalls.sock.recvfroms = 353.044342/sec
syscalls.sock.sendtos = 244.583771/sec
cpu_time = 297.944000 seconds
wall_time = 300.007640 seconds
cpu_usage = 99.312138%
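For reference, the cpu_usage figure in these 5-minute stats is just cpu_time over wall_time for the sample window; redoing the arithmetic from the counters above reproduces the reported value to within rounding:

```python
# cpu_usage as derived from the cpu_time and wall_time counters
# in the mgr:5min sample above.
cpu_time, wall_time = 297.944000, 300.007640

cpu_usage = 100.0 * cpu_time / wall_time
print(f"cpu_usage = {cpu_usage:.6f}%")
```

So the process really is compute-bound for almost the entire window, not merely peaking.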
--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



Re: [squid-users] Strange High CPU usage

2003-12-19 Thread Giulio Cervera
Thanks for your reply.

I'm monitoring median_select_fds.

This morning, with 150 req/sec:

select_loops = 280.262863/sec
select_fds = 1502.051748/sec
average_select_fd_period = 0.000660/fd
median_select_fds = 3.984375

This evening, with 40 req/sec:

select_loops = 383.217992/sec
select_fds = 457.205789/sec
average_select_fd_period = 0.001830/fd
median_select_fds = 0.00
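A useful ratio hiding in those counters is file descriptors serviced per select loop: a busy but healthy Squid does useful work on several fds each loop, while a process spinning on empty loops shows a ratio near zero. A back-of-the-envelope reading of the two samples (not a counter Squid reports directly):

```python
# fds serviced per select loop, from the two mgr:5min samples above.
morning = {"select_loops": 280.262863, "select_fds": 1502.051748}  # ~150 req/sec
evening = {"select_loops": 383.217992, "select_fds": 457.205789}   # ~40 req/sec

for name, s in (("morning", morning), ("evening", evening)):
    print(f"{name}: {s['select_fds'] / s['select_loops']:.2f} fds per loop")
```

The morning figure (over 5 fds per loop, matching median_select_fds = 3.984375) suggests the event loop is genuinely busy rather than stuck on one descriptor, consistent with Duane's reading below.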


Duane Wessels wrote:

and the current configuration:

cache_mem 64 MB
cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB
maximum_object_size_in_memory 24 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
memory_pools_limit 50 MB
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
buffered_logs on
Squid seems to work fine, but CPU usage above ~60 req/sec on each server
is always 100%.
I have tried many changes to squid.conf with no results.
   



Probably the first thing to look at is whether or not the high CPU
problem comes from select/poll looping very quickly.  For example:
% squidclient mgr:5min | grep -i select
select_loops = 72.180305/sec
select_fds = 19.528906/sec
average_select_fd_period = 0.007477/fd
median_select_fds = 0.00
I think median_select_fds is essentially broken and always returns 0.

I plot the values for my caches, which you can see here:
http://www.ircache.net/Statistics/Vitals/rrd/cgi/select.day.cgi
You might want to add 'half_closed_clients off' to your config file
and see if that helps.
Duane W.

 



--

*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]



[squid-users] Strange High CPU usage

2003-12-18 Thread Giulio Cervera
The problem is the following:
We have approximately 700Reqs/sec distributed on
4 x Dell 2650 (sibling with digest) after a load balancer
each one with
2 x Xeon 1.8Ghz (HT disabled)
2GB Ram
PERC3/Di RAID controller
5 x 36GB HD SCSI 10k (2 RAID1 for os and swap, 3 for cache)
3 NIC: 1 internet, 1 intranet, 1 proxy LAN (digest and proxy communication)
52Mbit link

Slackware 8.1
Squid 2.5.STABLE4
some lines of fstab:
/dev/sdb1  /var/cache/spool/0  reiserfs  noatime,notail  1  2
/dev/sdc1  /var/cache/spool/1  reiserfs  noatime,notail  1  2
/dev/sdd1  /var/cache/spool/2  reiserfs  noatime,notail  1  2
configure options:
--prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin 
--libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var 
--with-aufs-threads=48 --with-pthreads --with-aio --enable-async-io 
--enable-storeio=diskd,aufs --disable-wccp 
--enable-default-err-language=English --enable-err-languages=English 
--disable-ident-lookups --enable-underscores 
--enable-removal-policies=heap,lru --enable-snmp --enable-cache-digests 
--enable-gnuregex

and the current configuration:

cache_mem 64 MB
cache_swap_low 85
cache_swap_high 90
maximum_object_size 65536 KB
maximum_object_size_in_memory 24 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir diskd /var/cache/spool/0 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/1 28000 96 256 Q1=72 Q2=64
cache_dir diskd /var/cache/spool/2 28000 96 256 Q1=72 Q2=64
memory_pools_limit 50 MB
cache_access_log /var/cache/log/access.log
cache_log /var/cache/log/cache.log
buffered_logs on
Squid seems to work fine, but CPU usage above ~60 req/sec on each server
is always 100%.
I have tried many changes to squid.conf with no results.

Last question: I also have to upgrade our OS to RedHat AS3; any experience with it?
Thanks
--
*Giulio Cervera*

EDS PA SpA
Via Atanasio Soldati 80
00155 Roma (Italy)
tel: +39 06 22739 270
fax: +39 06 22739 233
e-mail: [EMAIL PROTECTED]