Quoting Henrik Nordstrom <[EMAIL PROTECTED]>:
> On Fri, 5 Dec 2003, Andriy Korud wrote:
>
> > Hi, I need to setup squid for approx 3000 clients on 30Mbit link. Estimated
> > load is 10000reqs/min.
> >
> > Hardware is Xeon/2.8GHz, 1G RAM, Intel 1000/PRO Ethernet and 2x36 Ultra320 SCSI
> > disks dedicated for cache.
>
> Should be fine, but you might want more than 2 drives for the cache here..
>
> > The problem is the following:
> > at about 2000reqs/min Squid works fine using ~50% CPU, however when load grows
> > to 3000reqs/min Squid suddenly goes to 100% CPU and the system becomes
> > completely unresponsive.
>
> Maybe the disks can't keep up and a backlog of requests builds up,
> and things spiral out from there..
>
> Is there anything interesting in cache.log?

Nothing except:

comm_accept: FD 12: (53) Software caused connection abort

1-5 times per second.

I don't think the problem is with the disks: the diskd processes use 1% CPU, while the squid process itself uses 99-100%. And as CPU usage grows, visible disk activity decreases.

Part of my squid.conf and the system maxfiles settings:

cache_mem 100 MB
maximum_object_size 8096 KB
maximum_object_size_in_memory 24 KB
cache_dir diskd /cache1/squid 31000 38 256
cache_dir diskd /cache2/squid 31000 38 256

proxy# sysctl -a | grep maxfiles
kern.maxfiles: 4136
kern.maxfilesperproc: 3722

Andriy

P.S. Maybe somebody knows where I can read about optimal Squid & FreeBSD configuration for such scale?
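[Editor's note: at 3000 clients and thousands of requests per minute, the kern.maxfiles / kern.maxfilesperproc values shown above (4136 / 3722) are on the low side, since Squid holds one descriptor per client connection plus server-side and disk descriptors. A minimal sketch of how these tunables can be raised on FreeBSD of that era is below; the exact values are illustrative assumptions, not recommendations from this thread:]

```
# Raise the limits at runtime (illustrative values):
proxy# sysctl kern.maxfiles=32768
proxy# sysctl kern.maxfilesperproc=16384

# To make the change persistent, the same tunables can go in
# /etc/sysctl.conf:
kern.maxfiles=32768
kern.maxfilesperproc=16384
```

Squid must also be built/configured to actually use the larger descriptor table (e.g. its compile-time FD limit), otherwise raising the kernel limits alone has no effect.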
