hello all,
On my laptop, running os20xx.xx (b134), there has been high disk usage for a few days, and I don't understand what is happening.
The laptop has 4 GB of RAM and a Core(tm)2 Duo T9600 CPU @ 2.80GHz.
I limited the ARC size to 1 GB in /etc/system.
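The exact line is written from memory, but it was something along these lines (0x40000000 being 1 GB in bytes):

   * cap the ZFS ARC at roughly 1 GB
   set zfs:zfs_arc_max = 0x40000000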
iostat gives me:
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  142.0    0.0 10058.9    0.0  7.0  1.0   49.2    6.8  85  97 c5t0d0
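For reference, I believe the invocation was the extended per-device view, sampled every few seconds, roughly:

   iostat -xn 5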

prstat shows a minimal load:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
     5 root        0K    0K sleep   99  -20   0:59:11 6.6% zpool-rpool/39
  1279 henry     732M  347M sleep   49    0   2:49:16 4.9% thunderbird-bin/24
  1043 henry    2004M 1348M cpu1    59    0   2:05:59 4.6% firefox-bin/12
   697 henry     325M  287M sleep   59    0   0:17:32 1.7% Xorg/3
  2206 root       48M   46M sleep   25    0   0:00:30 1.5% powertop/1
...
Total: 126 processes, 399 lwps, load averages: 0.53, 0.54, 0.66
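If it matters, that listing should just be the default prstat view (sorted by CPU), refreshing every few seconds:

   prstat 5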

and sorted by memory:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  1043 henry    2004M 1346M sleep   56    0   2:06:10 4.4% firefox-bin/12
  1279 henry     732M  347M sleep   59    0   2:49:26 3.6% thunderbird-bin/25
  1377 henry     335M  120M sleep   49    0   0:00:53 0.1% acroread/2
   697 henry     325M  287M sleep   59    0   0:17:36 1.4% Xorg/3
   878 henry     189M   62M sleep   49    0   0:00:18 0.1% nautilus/1
   877 henry     157M   36M sleep   59    0   0:00:25 0.0% gnome-panel/1
   881 henry     151M   25M sleep   49    0   0:00:48 0.2% gnome-terminal/2
   889 henry     147M   21M sleep   59    0   0:00:01 0.0% gnome-power-man/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    66 henry    2881M 2465M    60%   5:18:36  12%
    54 root      160M  160M   3.9%   1:02:32 8.2%
     3 daemon   3136K   13M   0.3%   0:00:00 0.0%
     1 smmsp    1636K 6120K   0.1%   0:00:00 0.0%
     1 gdm       304K 3516K   0.1%   0:00:00 0.0%
Total: 126 processes, 400 lwps, load averages: 0.52, 0.53, 0.64
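That second listing should be prstat with the per-user summary, sorted by size; from memory the invocation was roughly:

   prstat -a -s size 5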

What can I do to find out which process is causing all this disk access?
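I was wondering whether a DTrace one-liner on the io provider is the right direction, something like this (just a guess on my part, untested):

   # sum bytes of physical I/O issued, per process name and PID
   dtrace -n 'io:::start { @bytes[execname, pid] = sum(args[0]->b_bcount); }'

or perhaps iosnoop/iotop from the DTrace Toolkit, if those work on b134?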

Thanks in advance for any help,

gerard
_______________________________________________
opensolaris-discuss mailing list
[email protected]
