On 6/23/06, Richard Elling <[EMAIL PROTECTED]> wrote:
comment on analysis below...

Tao Chen wrote:

>               === Top 5 Devices with largest number of I/Os ===
>
>       DEVICE      READ AVG.ms     MB    WRITE AVG.ms     MB      IOs SEEK
>       -------  ------- ------ ------  ------- ------ ------  ------- ----
>       sd1            6   0.34      0     4948 387.88    413     4954   0%
>       sd2            6   0.25      0     4230 387.07    405     4236   0%
>       cmdk0         23   8.11      0      152   0.84      0      175  10%
>
>
> Average response time of > 300ms is bad.

Average is totally useless with this sort of a distribution.
I'd suggest using a statistical package to explore the distribution.
Just a few 3-second latencies will skew the average quite a lot.
  -- richard

A summary report is nothing more than an indication of whether there is an issue.
So I agree that an "average" is just that: an average.
However, a few 3-second latencies will not skew the result much when more than 4000 I/Os are sampled.
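A quick back-of-the-envelope check of that claim (with hypothetical numbers, not the actual sampled data):

```python
# Sketch: how much do a few 3-second outliers move the mean
# over ~4000 samples? All figures below are made up for illustration.
base = [100.0] * 4000          # 4000 I/Os at 100 ms each
outliers = [3000.0] * 5        # five 3-second latencies
samples = base + outliers

mean_without = sum(base) / len(base)
mean_with = sum(samples) / len(samples)
print(f"mean without outliers: {mean_without:.2f} ms")   # 100.00 ms
print(f"mean with outliers:    {mean_with:.2f} ms")      # ~103.62 ms
```

With a sample this large, five 3-second outliers shift the mean by only a few milliseconds, so a ~388 ms average cannot be explained away by a handful of stragglers.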

The script saves the "raw" data in a .rec file, so you can run whatever statistics tool you have against it. I am currently more concerned about how accurate and useful the raw data is; it is generated by a DTrace command inside the script.

The "raw" record is in this format:
- Timestamp (sec.microsec)
- DeviceName
- W/R
- BLK_NO (offset)
- BLK_CNT (I/O size)
- IO_Time (I/O elapsed time, msec.xx)
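Since the .rec file is meant to be fed into external statistics tools, something like the sketch below could pull latency percentiles out of it instead of relying on the average. This assumes whitespace-separated fields in the order listed above; the field indices and function name are my own, not from the script:

```python
import statistics

def latency_percentiles(path):
    """Read a .rec file and summarize IO_Time (last field, msec.xx)."""
    times = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue  # skip blank or malformed records
            times.append(float(fields[5]))  # IO_Time column
    times.sort()

    def pct(p):
        # nearest-rank percentile over the sorted latencies
        return times[min(len(times) - 1, int(p / 100.0 * len(times)))]

    return {
        "median": statistics.median(times),
        "p95": pct(95),
        "p99": pct(99),
        "max": times[-1],
    }
```

A median far below the mean, together with a large p99, would confirm Richard's point that the distribution is skewed by a tail of slow I/Os.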


Tao
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
