On Saturday 01 March 2003 05:55 pm, Allan wrote:
> Hello,
>
> we are currently using a single server (squid 2.4-STABLE7, Linux
> RedHat 7.3, 1.4 GHz Pentium III, 2 Gb Ram, 4096 FileDescriptors) as
> reverse-proxy for a small site (approx 2,5K objects, about 150 Mb).

To extract the most out of this box, try
* Squid ufs over Linux tmpfs or the null fs, plus a large cache_mem 
   and maximum_object_size_in_memory
* If this uses an Intel NIC, replace it with a 3com
* A kernel update -- the 2.4 jam patches are very nice.
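
A minimal sketch of what that tuning might look like. The directives are 
real Squid 2.4 options, but the values and the mount path are illustrative 
guesses for a ~150 MB site, not measured recommendations:

```
# /etc/fstab -- back the cache directory with tmpfs (RAM-based, lost on reboot)
none  /var/spool/squid  tmpfs  size=512m  0 0

# squid.conf
cache_mem 512 MB                           # hot-object memory cache; site is ~150 MB
maximum_object_size_in_memory 1024 KB      # keep even larger objects in RAM
cache_dir ufs /var/spool/squid 400 16 256  # ufs on the tmpfs mount above

# or, if Squid was built with --enable-storeio=null:
# cache_dir null /tmp
```

Remember to chown the tmpfs mount to the squid user and re-run squid -z 
after every reboot, since the directory structure vanishes with the RAM disk.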

> Due to growing load - 400 hits/sec, we are experiencing loads about
> 30% user and 65% system-time
>
> Is this alarming? Should we consider buying "web"-switches and adding
> further servers?

The system time seems a bit high for a squid server that should be 100% 
in memory.  The NIC and the filesystem are the likely culprits.  At 400 
req/sec, I would consider a trio of servers just for smoothing over 
upgrades or other downtime.  Obviously the budget is not always so 
flexible.

> As the site is quite small, would it be a good idea to build a
> "mini"-linux booting of floppy/CD with no harddrives/filesystems?

Not really necessary -- with that much RAM, you shouldn't be touching 
the hard drive anyway.  I would go with null storage or tmpfs for the 
squid cache, though.

        -- Brian
