Greetings,

One idea: why not put the HDs one on each IDE channel? Performance improves a bit... since squid uses the disk too.
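If the disks do end up on separate channels, squid can also be told to spread the cache over both of them. A minimal sketch of the relevant squid.conf lines, assuming squid 2.5 with diskd (as the top output suggests) and hypothetical mount points /cache1 and /cache2, one per physical disk — paths and sizes are my assumptions, not from your setup:

# squid.conf (sketch)
---------------------
# One cache_dir per physical disk, so diskd I/O is spread across
# both spindles. /cache1 and /cache2 are assumed mount points;
# 16000 MB, 16 L1 dirs and 256 L2 dirs are example values.
cache_dir diskd /cache1 16000 16 256 Q1=72 Q2=64
cache_dir diskd /cache2 16000 16 256 Q1=72 Q2=64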
Rizzo

> Does anyone have suggestions or tips for solving the problem of high CPU
> usage by squid on FreeBSD 5.4?
> It is a transparent webcache proxy, implemented with pf and the WCCP
> protocol (Cisco).
>
> Relevant data:
>
> Hardware: P4 2.80GHz HT / 2GB RAM / (2x)80GB HDD / Intel PRO/1000 (em0)
> ----------------------
> CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
> Hyperthreading: 2 logical CPUs
> real memory  = 2146631680 (2047 MB)
> avail memory = 2094350336 (1997 MB)
> cpu0: <ACPI CPU> on acpi0
> agp0: <Intel 82865 host to AGP bridge> mem 0xf8000000-0xfbffffff at device 0.0 on pci0
> em0: <Intel(R) PRO/1000 Network Connection, Version - 1.7.35> port 0xcf80-0xcf9f mem 0xfe9e0000-0xfe9fffff irq 18 at device 1.0 on pci2
> em0: Link is up 100 Mbps Full Duplex
> ad0: 76319MB <ST380011A/3.06> [155061/16/63] at ata0-master UDMA100
> ad1: 76319MB <ST380011A/3.06> [155061/16/63] at ata0-slave UDMA100
> -----------//----------------------//----------------------//-----------
>
> Software: FreeBSD 5.4-PRERELEASE, squid 2.5.9, pf
>
> Other information:
> ------------------
>
> # top -S
> --------
> last pid: 20950;  load averages: 0.88, 0.99, 0.96   up 1+01:36:07  12:11:52
> 101 processes: 3 running, 65 sleeping, 33 waiting
> CPU states: 34.5% user, 0.0% nice, 51.6% system, 7.0% interrupt, 7.0% idle
> Mem: 210M Active, 1544M Inact, 180M Wired, 66M Cache, 112M Buf, 3008K Free
> Swap: 4069M Total, 120K Used, 4069M Free
>
>   PID USERNAME  PRI NICE   SIZE    RES STATE    TIME   WCPU    CPU COMMAND
>  3134 squid     122    0   193M   190M RUN    211:17 84.03% 84.03% squid
>    11 root      171   52     0K     8K RUN    934:47  6.98%  6.98% idle
>    35 root      -44 -163     0K     8K WAIT    26:04  2.83%  2.83% swi1: net
>    28 root      -68 -187     0K     8K WAIT    14:05  1.03%  1.03% irq18: em0 uhci2
>    36 root      -28 -147     0K     8K WAIT     2:24  0.00%  0.00% swi5: clock sio
>  3136 squid      -4    0  1744K  1076K msgwai   2:07  0.00%  0.00% diskd
>    56 root       20    0     0K     8K syncer   2:00  0.00%  0.00% syncer
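As an aside: for the transparent redirection you describe, the usual pf side looks roughly like the sketch below. The interface name (gre0, assuming the WCCP traffic arrives over a GRE tunnel from the Cisco) and squid's port 3128 are assumptions on my part, not taken from your configuration:

# pf.conf (sketch)
------------------
# Redirect client HTTP traffic arriving via WCCP/GRE to squid's
# local listening port. gre0 and 3128 are assumed values.
rdr on gre0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 3128
# pf filters on the translated address, so pass the redirected flow:
pass in on gre0 inet proto tcp from any to 127.0.0.1 port 3128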
> -----------//----------------------//----------------------//-----------
>
> # systat -vmstat 1
> ------------------
>     3 users    Load  1.15  1.00  0.95                  Apr  6 12:18
>
> Mem:KB    REAL            VIRTUAL                     VN PAGER  SWAP PAGER
>         Tot   Share      Tot    Share    Free         in  out     in  out
> Act  209444    4416   261896     6540   82364 count
> All 2045952    7132  4500392    10168         pages
>                                                               Interrupts
> Proc:r  p  d  s  w    Csw  Trp  Sys  Int  Sof  Flt     cow   1515 total
>      1  4    32     3680  730 5051 2944 1139   183612 wire      1: atkb
>                                                215036 act       3: sio1
>  52.2%Sys  3.7%Intr 28.4%User  0.0%Nice 15.7%Idl
>                                               1569204 inact     4: sio0
> |    |    |    |    |    |    |    |    |    |  79356 cache     6: fdc0
> ==========================++>>>>>>>>>>>>>>       3008 free  128 8: rtc
>                                                        daefr   13: npx
> Namei         Name-cache    Dir-cache                  prcfr   14: ata
>     Calls     hits    %     hits   %                   react   15: ata
>        47       37   79                                pdwak 1288 18: em0
>                                       zfod             pdpgs  99 0: clk
> Disks   ad0   ad1                     ofod              intrn
> KB/t   0.00  0.00                   %slo-z      114880 buf
> tps       0     0              1657  tfree         120 dirtybuf
> MB/s   0.00  0.00                               100000 desiredvnodes
> % busy    0     0                                90535 numvnodes
>                                                   9789 freevnodes
> -----------//----------------------//----------------------//-----------
>
> # netstat -mb
> -------------
> 1188 mbufs in use
> 1161/32768 mbuf clusters in use (current/max)
> 0/3/4608 sfbufs in use (current/peak/max)
> 2619 KBytes allocated to network
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 214 calls to protocol drain routines
> -----------//----------------------//-----------
>
> # vmstat -i
> -----------
> interrupt                    total       rate
> irq1: atkbd0                  2129          0
> irq3: sio1                       2          0
> irq4: sio0                       2          0
> irq6: fdc0                      14          0
> irq8: rtc                 11852670        127
> irq13: npx0                      1          0
> irq14: ata0                1145295         12
> irq15: ata1                     58          0
> irq18: em0 uhci2          75076124        810
> irq0: clk                  9260481         99
> Total                     97336776       1051
> -----------//----------------------//-----------
>
> # iostat
> --------
>       tty             ad0              ad1             cpu
>  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
>    0  196 15.53   4  0.06  18.19   8  0.14  15  0 22  3 61
>
> Thanks in advance,
>
> Alex
>
> _______________________________________________
> Freebsd mailing list
> Freebsd@fug.com.br
> http://mail.fug.com.br/mailman/listinfo/freebsd_fug.com.br