Here is the requested info. First, the prstat -a output:


   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  6179 nobody    312M  225M sleep   51    0  12:42:09 0.8% BackupPC_dump/1
  7783 root     3812K 2984K cpu7    50    0   0:00:03 0.4% prstat/1
  7803 root     2948K 1736K sleep   54    0   0:00:00 0.0% top/1
   900 nobody     88M 4140K cpu3    59    0   0:00:00 0.0% httpd/1
   832 nobody     88M 3800K sleep   59    0   0:00:00 0.0% httpd/1
   898 nobody     88M 3700K sleep   59    0   0:00:00 0.0% httpd/1
  7782 root     6172K 3448K sleep   59    0   0:00:00 0.0% sshd/1
  7772 root     2748K 1644K sleep   59    0   0:00:00 0.0% iostat/1
   746 root     3164K 1616K sleep   59    0   0:00:00 0.0% dmispd/1
   516 root     2800K 1532K sleep   59    0   0:00:00 0.0% automountd/2
   513 root     2516K  948K sleep   59    0   0:00:00 0.0% automountd/2
   532 root     4120K 1876K sleep   59    0   0:00:00 0.0% syslogd/13
   829 nobody     88M 3568K sleep   59    0   0:00:00 0.0% httpd/1
   831 nobody     88M 4124K sleep   59    0   0:00:00 0.0% httpd/1
   352 daemon   2436K 1292K sleep   60  -20   0:00:00 0.0% nfs4cbd/2
   430 root     2060K  676K sleep   59    0   0:00:00 0.0% smcboot/1
   300 root     2752K  940K sleep   59    0   0:00:00 0.0% cron/1
   359 daemon   4704K 1752K sleep   59    0   0:00:00 0.0% nfsmapid/3
   173 daemon   4216K 2068K sleep   59    0   0:00:00 0.0% kcfd/3
   517 root     3020K 2020K sleep   59    0   0:00:00 0.0% vold/5
   152 root     1820K 1028K sleep   59    0   0:00:00 0.0% powerd/3
   425 root     4884K 3260K sleep   59    0   0:00:00 0.0% inetd/3
   138 root     4964K 1908K sleep   59    0   0:00:00 0.0% syseventd/15
   428 root     2060K  964K sleep   59    0   0:00:00 0.0% smcboot/1
   393 root     2068K  912K sleep   59    0   0:00:00 0.0% sac/1
   163 root     3684K 2000K sleep   59    0   0:00:00 0.0% devfsadm/6
   167 root     3880K 2620K sleep   59    0   0:00:00 0.0% picld/5
   899 nobody     88M 4100K sleep   59    0   0:00:00 0.0% httpd/1
   398 root     1428K  648K sleep   59    0   0:00:00 0.0% utmpd/1
   350 daemon   2768K 1592K sleep   59    0   0:00:00 0.0% statd/1
NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
   12 nobody    901M  512M   6.2%  12:46:35 0.8%
   47 root      329M  209M   2.5%   0:14:01 0.4%
    1 noaccess  171M  204M   2.5%   0:00:59 0.0%
    1 smmsp    1200K 3272K   0.0%   0:00:00 0.0%
    6 daemon   6352K 6216K   0.1%   0:00:00 0.0%
Total: 67 processes, 243 lwps, load averages: 18.49, 15.84, 13.77
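
For reference, the listing above is from prstat -a (the -a flag is what
adds the per-user summary at the bottom). A repeatable capture would be
something like the following; the interval and count here are illustrative,
not what I actually ran:

prstat -a 5 3    # per-process and per-user stats, 5-second interval, 3 samples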

iostat -x 5:

                 extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd2       0.0   18.9    0.0  195.9  0.0  0.0    1.2   0   1
sd3       0.0   19.4    0.0  196.4  0.0  0.0    1.4   0   1
sd4       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd5       0.0   18.9    0.0  176.4  0.0  0.0    1.3   0   1
sd6       0.0   18.4    0.0  166.2  0.0  0.0    1.4   0   1
sd7       0.0   19.4    0.0  175.7  0.0  0.0    1.3   0   1
sd8       0.0   20.2    0.0  178.3  0.0  0.0    1.3   0   1
sd9       0.0   19.9    0.0  213.8  0.0  0.0    1.1   0   1
sd10      0.0   19.4    0.0  196.5  0.0  0.0    1.2   0   1
sd11      0.0   19.7    0.0  200.6  0.0  0.0    1.2   0   1
sd12      0.0   19.4    0.0  175.9  0.0  0.0    1.4   0   1
sd13      0.0   19.4    0.0  188.0  0.0  0.0    1.3   0   1
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
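
If it helps to map the sdN devices to controller/target names, the same
statistics can be printed with descriptive device names; this is a variant
of the command above, not what was actually run:

iostat -xn 5    # -n reports devices as cXtYdZ instead of sdN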


> zpool iostat 5 (if you are using ZFS)
>
-bash-3.00# zpool iostat 5
             capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1       1.68T  8.32T      3    168   371K  9.81M
pool1       1.68T  8.32T      0     68      0  1.58M
pool1       1.68T  8.32T      0     98      0  2.29M
pool1       1.68T  8.32T      0     36      0  1.23M
pool1       1.68T  8.32T      0    103      0  2.67M
pool1       1.68T  8.32T      0     16      0  90.8K
pool1       1.68T  8.32T      0    104      0  2.88M
pool1       1.68T  8.32T      0     86      0  1.65M
pool1       1.68T  8.32T      0     35      0  1.03M
pool1       1.68T  8.32T      0    162      0  4.03M
pool1       1.68T  8.32T      0     46      0  1.35M
pool1       1.68T  8.32T      0     53      0  1.11M
pool1       1.68T  8.32T      0     75      0  2.15M
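
A per-vdev breakdown is also available if that would be useful; again a
variant, not captured here:

zpool iostat -v pool1 5    # -v adds a row for each device under the vdevs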

Also top:

last pid:  7803;  load avg:  18.5,  15.8,  13.8;  up 1+21:19:03    10:06:00
67 processes: 63 sleeping, 2 running, 2 on cpu
CPU states:  7.1% idle,  0.6% user, 92.3% kernel,  0.0% iowait,  0.0% swap
Kernel: 194 ctxsw, 13 trap, 18419 intr, 2955 syscall, 9 flt
Memory: 8191M phys mem, 615M free mem, 20G total swap, 20G free swap

  PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 7783 root       1  50    0 3812K 2984K run      0:03  0.70% prstat
 6179 nobody     1  51    0  312M  225M run    762:09  0.48% BackupPC_dump
  898 nobody     1  59    0   88M 3700K sleep    0:00  0.03% httpd
 7803 root       1  54    0 2884K 1672K cpu/7    0:00  0.01% top
  900 nobody     1  59    0   88M 4140K cpu/3    0:00  0.00% httpd
 7793 root       1  59    0 5984K 3060K sleep    0:00  0.00% zpool
 7772 root       1  59    0 2748K 1644K sleep    0:00  0.00% iostat
  723 root      28  59    0  247M   81M sleep   11:44  0.00% java
 6045 nobody     1  59    0  473M  465M sleep    4:24  0.00% BackupPC_dump
  832 nobody     1  59    0   88M 3800K sleep    0:00  0.00% httpd
  895 noaccess  20  59    0  252M  152M sleep    0:59  0.00% java
 7776 root       1  59    0 5952K 1868K sleep    0:00  0.00% sshd
 7723 root       1  59    0 5952K 1868K sleep    0:00  0.00% sshd
  176 root      34  59    0 7496K 4432K sleep    0:04  0.00% nscd
  781 root       1  59    0 9660K 5992K sleep    0:01  0.00% snmpd
    7 root      13  59    0   14M   11M sleep    0:02  0.00% svc.startd
  819 root       1  59    0 7388K 1988K sleep    0:02  0.00% sendmail
 7787 root       1  59    0 5952K 1868K sleep    0:00  0.00% sshd
  398 root       1  59    0 1428K  648K sleep    0:00  0.00% utmpd
  826 root       1  59    0   88M 9988K sleep    0:02  0.00% httpd
  899 nobody     1  59    0   88M 4100K sleep    0:00  0.00% httpd
  725 root      19  59    0   20M   15M sleep    1:57  0.00% fmd
    9 root      15  59    0   11M 9768K sleep    0:05  0.00% svc.configd
 6023 nobody     1  59    0   12M 6992K sleep    0:02  0.00% BackupPC
  792 root       1  59    0 5560K 1520K sleep    0:01  0.00% dtlogin
 6024 nobody     1  59    0 6000K 4676K sleep    0:00  0.00% BackupPC_trashC
  831 nobody     1  59    0   88M 4124K sleep    0:00  0.00% httpd
  828 nobody     1  59    0   88M 4120K sleep    0:00  0.00% httpd
  830 nobody     1  59    0   88M 3708K sleep    0:00  0.00% httpd
  829 nobody     1  59    0   88M 3568K sleep    0:00  0.00% httpd
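
Since top shows almost all CPU time in the kernel (92.3%), I can also grab
per-CPU detail during the next backup window if that would help, e.g.:

mpstat 5    # per-processor breakdown: interrupts, xcalls, usr/sys/idle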

On Thu, Mar 12, 2009 at 9:27 PM, Jeff Williams <j...@umn.edu> wrote:

> Maybe you're also seeing this one?
>
> 6586537 async zio taskqs can block out userland commands
>
> -Jeff
>
>
>
> Blake wrote:
>
>> I think we need some data to look at to find out what's being slow.
>> Try some commands like this to get data:
>>
>> prstat -a
>>
>> iostat -x 5
>>
>> zpool iostat 5 (if you are using ZFS)
>>
>> and then report sample output to this list.
>>
>>
>> You might also consider enabling sar (svcadm enable sar), then reading
>> the sar manpage.
>>
>>
>>
>>
>> On Thu, Mar 12, 2009 at 10:36 AM, Marius van Vuuren
>> <mar...@breakpoint.co.za> wrote:
>>
>>> Hi,
>>>
>>> I have an X4150 with a J4200 attached, populated with 12 x 1 TB SATA
>>> disks.
>>>
>>> I run BackupPC as my backup software.
>>>
>>> Is there anything I can do to make the command line more responsive
>>> during backup windows? At the moment it grinds to a complete standstill.
>>>
>>> Thanks
>>>
>>>
>>>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
