On Wed, Feb 9, 2011 at 12:02 AM, Richard Elling
<richard.ell...@gmail.com> wrote:
> The data below does not show heavy CPU usage. Do you have data that
> does show heavy CPU usage?  mpstat would be a good start.

Here is mpstat output during a network copy; I think one of the CPUs
disappeared due to an L2 cache error.

movax@megatron:~# mpstat -p
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl set
  1  333   0    6  4057 3830 19467  140   27  265    0  1561    1  48   0  51   0
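
If FMA really did retire that core over an L2 cache fault, the standard
Solaris tools should be able to confirm it; roughly (generic commands,
nothing specific to this box):

movax@megatron:~# psrinfo -v
movax@megatron:~# fmadm faulty
movax@megatron:~# fmdump -e | tail

psrinfo should list the missing CPU as off-line or faulted, and
fmadm/fmdump should show the cache diagnosis and the error telemetry
behind it.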

> Some ZFS checksums are always SHA-256. By default, data checksums are
> Fletcher4 on most modern ZFS implementations, unless dedup is enabled.
I see, thanks for the info.
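
For reference, the checksum algorithm is just a dataset property, so it
is easy to inspect or override; a rough sketch against my pool (tank)
would be:

movax@megatron:~# zfs get checksum,dedup tank
movax@megatron:~# zfs set checksum=sha256 tank

The second command is optional and only affects newly written blocks;
with dedup=on, new writes would use SHA-256 regardless of the checksum
setting.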

>> Second, a copy from my desktop PC to my new zpool (5900rpm drive over
>> GigE to two 6-drive RAID-Z2 vdevs). Load average is around 3.
>
> Lockstat won't provide direct insight into the run queue (which is used
> to calculate load average). Perhaps you'd be better off starting with prstat.
Ah, gotcha. I ran prstat, which is more of what I wanted:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  1434 root        0K    0K run      0  -20   0:01:54  23% zpool-tank/136
  1515 root     9804K 3260K cpu1    59    0   0:00:00 0.1% prstat/1
  1438 root       14M 9056K run     59    0   0:00:00 0.0% smbd/16

The zpool thread is near the top of the usage list, which I suppose is what you would expect.
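
If I want to dig further into what that zpool-tank thread is doing,
per-LWP microstate accounting seems like the next step; roughly (1434 is
just the PID from the listing above):

movax@megatron:~# prstat -mL -p 1434 5

The USR/SYS/LAT/SLP columns there should make it clear whether those
taskq threads are actually CPU-bound or mostly sitting in latency/sleep.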
