Gus Wirth wrote:
> There was some discussion at the December meeting about replacing the 
> existing KPLUG server. I can't remember who got the last one, I think 
> Joshua? Anyway, we have the funds to get a new one so I say Neil 
> delegates this to whoever wants to volunteer and we go ahead and do it. 
> How much capability do we need? That will determine the price.

I have LogWatch logs going back to March 17, so I could make a graph
showing disk usage over time. I seem to see some spikes in September,
at least on /; I was not paying as much attention to the other
partitions.
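
If anyone wants that graph, here is a rough Python sketch of what I
have in mind. It assumes the daily numbers have already been pulled
out of the LogWatch mails into a file of date/percent pairs -- the
filename "root_usage.txt" and its format are made up for illustration:

  import matplotlib.pyplot as plt
  from datetime import datetime

  # Hypothetical input: one "YYYY-MM-DD <percent>" line per day,
  # already extracted from the LogWatch disk-space section.
  dates, pcts = [], []
  for line in open("root_usage.txt"):
      day, pct = line.split()
      dates.append(datetime.strptime(day, "%Y-%m-%d"))
      pcts.append(float(pct))

  plt.plot(dates, pcts)
  plt.ylabel("/ usage (%)")
  plt.title("Root partition usage over time")
  plt.gcf().autofmt_xdate()
  plt.savefig("root_usage.png")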

Current disk usage (df -h):
Wed Dec 20 06:25:38 2006
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             7.5G  3.3G  4.3G  44% /
/dev/sda1              89M  5.3M   79M   7% /boot
/dev/sda7              16G  1.4G   14G  10% /home
/dev/sda6             957M   50M  908M   6% /tmp
/dev/sda5             9.4G  7.2G  2.2G  77% /var

A further breakdown of /var:

1.2G    /var/amavisd
   1.1G    /var/amavisd/quarantine
5.3G    /var/lib
   1.6G    /var/lib/mailman
   1.4G    /var/lib/zope2.7
   2.3G    /var/lib/zope2.8
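
That breakdown came from du, but the same sort of report can be
approximated in a few lines of Python if anyone wants to script it.
A minimal sketch -- it sums apparent file sizes, so the totals will
not exactly match du's allocated-block counts, and /var is just the
directory from above:

  import os

  def dir_size(path):
      # Sum apparent file sizes under path; du counts allocated
      # blocks, so the totals will differ slightly.
      total = 0
      for root, dirs, files in os.walk(path):
          for name in files:
              try:
                  total += os.path.getsize(os.path.join(root, name))
              except OSError:
                  pass  # file vanished or permission denied
      return total

  base = "/var"
  subdirs = [os.path.join(base, d) for d in os.listdir(base)
             if os.path.isdir(os.path.join(base, d))]
  sizes = [(dir_size(d), d) for d in subdirs]
  sizes.sort(reverse=True)
  for size, path in sizes:
      print("%6.2fG  %s" % (size / 1e9, path))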

The trend appears to be relatively stable, with a couple of spikes in
September. I can generate more precise statistics if anyone is
interested.

The system load is currently:
top - 08:28:47 up 299 days, 13:39,  3 users,  load average: 1.45, 1.14, 0.73

Top five processes at that time:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
  508 jaqque    19   0  1104 1104  844 R  3.8  0.2   0:00.35 top               
28560 zope      11   0  337m 134m  92m S  1.6 27.6  22:39.81 python2.3         
    6 root      10   0     0    0    0 S  0.3  0.0  35:47.88 kupdated          
28556 zope      10   0  337m 134m  92m S  0.3 27.6   1:16.37 python2.3         
31136 www-data   9   0  3440 2160 1964 S  0.3  0.4   0:01.09 apache2           

Runs of du seem to take a long time, so I would say that disk I/O is
the big bottleneck rather than CPU horsepower.
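
One way to sanity-check that claim is to compare wall-clock time
against CPU time for a scan of the same tree. A rough sketch (the
/var path is arbitrary; if wall time dwarfs CPU time, the process is
waiting on the disk, not the processor):

  import os, time

  def scan(path):
      # Walk the tree and stat every file -- roughly what du does.
      count = 0
      for root, dirs, files in os.walk(path):
          for name in files:
              try:
                  os.stat(os.path.join(root, name))
                  count += 1
              except OSError:
                  pass
      return count

  wall0 = time.time()
  cpu0 = os.times()
  n = scan("/var")
  wall = time.time() - wall0
  cpu1 = os.times()
  cpu = (cpu1[0] - cpu0[0]) + (cpu1[1] - cpu0[1])
  print("%d files: %.1fs wall, %.1fs cpu" % (n, wall, cpu))
  # Wall time far above CPU time means the run was I/O bound.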

-john

