[zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Hi,

Does anyone know the difference between zpool iostat and iostat?


dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync

The pool only shows 236K of I/O and 13 write ops, whereas iostat correctly shows
about a meg of activity.

zpool iostat -v test 5

                                capacity     operations    bandwidth
pool                           used  avail   read  write   read  write
-----------------------------  -----  -----  -----  -----  -----  -----
test                           1.14M   100G      0     13      0   236K
  c8t60060E800475F50075F50525d0  182K  25.0G      0      4      0  36.8K
  c8t60060E800475F50075F50526d0  428K  25.0G      0      4      0  87.7K
  c8t60060E800475F50075F50540d0  558K  50.0G      0      4      0   111K
-----------------------------  -----  -----  -----  -----  -----  -----

iostat -xnz [devices] 5

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    2.4    6.0    6.8   88.2  0.0  0.0    0.0    1.0   0   0 c8t60060E800475F50075F50540d0
    2.4    5.4    6.8   37.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50526d0
    2.4    5.0    6.8  112.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50525d0

DTrace also concurs with iostat:
 
device                                                  bytes  IOPS
=====================================================  ======  ====
/devices/scsi_vhci/s...@g60060e800475f50075f50525:a    224416    35
/devices/scsi_vhci/s...@g60060e800475f50075f50526:a    486560    37
/devices/scsi_vhci/s...@g60060e800475f50075f50540:a    608416    33

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread Neil Perrin

On 06/20/09 11:14, tester wrote:

Hi,

Does anyone know the difference between zpool iostat and iostat?


dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync

The pool only shows 236K of I/O and 13 write ops, whereas iostat correctly shows
about a meg of activity.


The ZFS numbers are per second as well, so 236K * 5 = 1180K.
Running zpool iostat -v test 1 would make this clearer.
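The arithmetic can be sanity-checked with plain shell (no ZFS needed); the rate
and interval values below are the ones from the zpool iostat output above:

```shell
# zpool iostat, like iostat, reports per-second rates. Multiplying the
# reported write bandwidth by the sampling interval recovers the total
# amount written during that sample.
rate_kb=236     # write bandwidth reported by 'zpool iostat -v test 5' (KB/s)
interval=5      # sampling interval in seconds
echo "$((rate_kb * interval))K"   # prints 1180K, roughly the 1MB dd wrote
```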

The iostat output below also shows 237K (88+37+112) being written per second.
I'm not sure why any reads occurred though. When I did a quick
experiment there were no reads.

Enabling compression gives much better numbers when writing zeros!
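A quick way to see this (a sketch, assuming the pool/filesystem names from the
original post; trash2 is a hypothetical output file):

```shell
# With compression on, runs of zeros compress to almost nothing,
# so very little I/O ever reaches the disks.
zfs set compression=on test/test1
dd if=/dev/zero of=/test/test1/trash2 count=1 bs=1024k; sync
zpool iostat -v test 1    # write bandwidth stays near zero
```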

Neil.



zpool iostat -v test 5

                                capacity     operations    bandwidth
pool                           used  avail   read  write   read  write
-----------------------------  -----  -----  -----  -----  -----  -----
test                           1.14M   100G      0     13      0   236K
  c8t60060E800475F50075F50525d0  182K  25.0G      0      4      0  36.8K
  c8t60060E800475F50075F50526d0  428K  25.0G      0      4      0  87.7K
  c8t60060E800475F50075F50540d0  558K  50.0G      0      4      0   111K
-----------------------------  -----  -----  -----  -----  -----  -----

iostat -xnz [devices] 5

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    2.4    6.0    6.8   88.2  0.0  0.0    0.0    1.0   0   0 c8t60060E800475F50075F50540d0
    2.4    5.4    6.8   37.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50526d0
    2.4    5.0    6.8  112.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50525d0

DTrace also concurs with iostat:
 
device                                                  bytes  IOPS
=====================================================  ======  ====
/devices/scsi_vhci/s...@g60060e800475f50075f50525:a    224416    35
/devices/scsi_vhci/s...@g60060e800475f50075f50526:a    486560    37
/devices/scsi_vhci/s...@g60060e800475f50075f50540:a    608416    33

Thanks



Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Neil,

Thanks.

That makes sense. Maybe the zpool man page could state that these figures are
rates, as the iostat man page does. I think the reads come from the zpool iostat
command itself; zpool iostat doesn't capture those.

Thanks