[zfs-discuss] iostat and monitoring

2008-07-05 Thread Matt Harrison
Hi gurus,

I like zpool iostat and I like system monitoring, so I set up a script 
within SMA to let me get the zpool iostat figures through SNMP.

The problem is that, as zpool iostat is only run once for each SNMP 
query, it always reports a static set of figures, like so:

[EMAIL PROTECTED]:snmp # zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         443G  1.60T      4      4   461K   467K
  raidz1     443G  1.60T      4      4   461K   467K
    c1t0d0      -      -      1      2   227K   234K
    c1t1d0      -      -      1      2   228K   234K
    c2t0d0      -      -      1      2   227K   234K
----------  -----  -----  -----  -----  -----  -----

Whereas if I run it with an interval, the figures even out after a few 
seconds. What I'm wondering is: is there any way to get iostat to report 
accurate figures from a one-time invocation?
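
(For reference, the interval form I mean is something like the 
following, where the second report covers just the last five seconds 
rather than the averages since boot, "tank" being my pool:)

[EMAIL PROTECTED]:snmp # zpool iostat -v tank 5 2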

Alternatively, is there a better way to get read/write ops etc. from my 
pool for monitoring applications?

I would really love it if monitoring ZFS pools from SNMP were better all 
round, but I'm not going to reel off my wish list here at this point ;)

Thanks

Matt


Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Mike Gerdts
On Sat, Jul 5, 2008 at 2:33 PM, Matt Harrison
[EMAIL PROTECTED] wrote:
> Alternatively, is there a better way to get read/write ops etc. from my
> pool for monitoring applications?
>
> I would really love it if monitoring ZFS pools from SNMP were better all
> round, but I'm not going to reel off my wish list here at this point ;)

You can access the kstats directly to get the counter values.

$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread   418787
unix:0:vopstats_zfs:read_bytes  612076305
unix:0:vopstats_zfs:nwrite  163544
unix:0:vopstats_zfs:write_bytes 255725992

These are the counters used by fsstat.  In the case of a single pool,
I would expect them (perhaps naively) to match up with the zpool iostat
numbers.
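
If what you are after on the SNMP side is rates rather than raw
counters, a rough sketch (untested, assuming the unix:0:vopstats_zfs
kstat shown above and a Bourne-compatible shell) is to sample twice
and take the difference:

#!/bin/sh
# rough sketch: sample the cumulative ZFS vopstats twice, INTERVAL
# seconds apart, and print the average bytes/sec over that window
INTERVAL=10
r1=`kstat -p unix:0:vopstats_zfs:read_bytes | nawk '{print $2}'`
w1=`kstat -p unix:0:vopstats_zfs:write_bytes | nawk '{print $2}'`
sleep $INTERVAL
r2=`kstat -p unix:0:vopstats_zfs:read_bytes | nawk '{print $2}'`
w2=`kstat -p unix:0:vopstats_zfs:write_bytes | nawk '{print $2}'`
perl -e "print 'read B/s: ', ($r2 - $r1) / $INTERVAL, \"\n\""
perl -e "print 'write B/s: ', ($w2 - $w1) / $INTERVAL, \"\n\""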

Enabling parseable output in fsstat(1M) is on my list of things to do
when I get around to it.  See
http://mail.opensolaris.org/pipermail/on-discuss/2008-June/000127.html
for details.  Parseable output is currently disabled for reasons that
are discussed in the mail folder linked at
http://opensolaris.org/os/community/arc/caselog/2006/180/.

It is interesting to look at the numbers at this level compared to
iostat.  While iostat shows only physical reads and writes, zpool
iostat and fsstat also show reads that are satisfied by a cache and
never result in physical I/O activity.  As such, a workload that looks
write-intensive on UFS when monitored via iostat may seem to have
shifted to being very read-intensive.
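
If you want to see that effect for yourself, one rough way (hypothetical
five-second intervals, run in two terminals) is to watch both views at
once:

$ iostat -xn 5      # physical I/O per device
$ fsstat zfs 5      # logical VOP-level I/O, including cached reads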

--
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Matt Harrison
Mike Gerdts wrote:
> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
> unix:0:vopstats_zfs:nread       418787
> unix:0:vopstats_zfs:read_bytes  612076305
> unix:0:vopstats_zfs:nwrite      163544
> unix:0:vopstats_zfs:write_bytes 255725992

Thanks Mike, that's exactly what I was looking for. I can work my way
around the other SNMP problems, like not reporting total space on a ZFS
filesystem :)
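
(For anyone else working around the space reporting: the idea is to have
the script behind the SNMP agent ask zfs directly. Something like the
following should print exact byte counts, one per line, with no headers,
assuming your zfs get supports -H and -p:)

$ zfs get -Hp -o value used,available tank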

Thanks

Matt


Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Brian Hechinger
On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
>
> You can access the kstats directly to get the counter values.

First off, let me say that:  kstat++

That's too cool.

> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
> unix:0:vopstats_zfs:nread       418787
> unix:0:vopstats_zfs:read_bytes  612076305
> unix:0:vopstats_zfs:nwrite      163544
> unix:0:vopstats_zfs:write_bytes 255725992

# kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
#

uhm, but:

kstat -p ::vopstats_zfs
[snip]
unix:0:vopstats_zfs:nwrite  24201307
unix:0:vopstats_zfs:read_bytes  1557032944566
unix:0:vopstats_zfs:readdir_bytes   129267
unix:0:vopstats_zfs:snaptime    3281423.01228961
unix:0:vopstats_zfs:write_bytes 222641182203

what gives? This is:

SunOS wiggum.4amlunch.net 5.11 snv_81 i86pc i386 i86pc

-brian


Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Mike Gerdts
On Sat, Jul 5, 2008 at 9:48 PM, Brian Hechinger [EMAIL PROTECTED] wrote:
> On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
>> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
>> unix:0:vopstats_zfs:nread       418787
>> unix:0:vopstats_zfs:read_bytes  612076305
>> unix:0:vopstats_zfs:nwrite      163544
>> unix:0:vopstats_zfs:write_bytes 255725992

This was on a virtual machine with a 12 GB zpool (one virtual disk)
that had been up for a few days (but suspended most of the time).  My
guess is that most of the activity my zpool was seeing was from the
swap device.

> # kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
> #
>
> uhm, but:
>
> kstat -p ::vopstats_zfs
> [snip]
> unix:0:vopstats_zfs:nwrite  24201307

24 million write operations.

> unix:0:vopstats_zfs:read_bytes  1557032944566

$ perl -e 'print (1557032944566 >> 30)'
1450

Looks like you've read about 1.4 TB since boot.

> unix:0:vopstats_zfs:readdir_bytes   129267

1.2 GB of readdir activity.  Lots of files?  Is someone doing find or
du through the area with lots of files?

> unix:0:vopstats_zfs:snaptime    3281423.01228961
> unix:0:vopstats_zfs:write_bytes 222641182203

$ perl -e 'print (222641182203 >> 30)'
207

207 GB of writes.

$ perl -e 'print 222641182203 / 24201307'
9199.55199952631

Average write size was a bit over 9 KB.


> what gives? This is:
>
> SunOS wiggum.4amlunch.net 5.11 snv_81 i86pc i386 i86pc

Do the numbers seem unreasonable for the size of the pool, the uptime
of the system, etc.?  Remember my comments earlier about how you can
now see the reads (and readdirs) that came from cache and didn't do
physical I/O.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/