[email protected] wrote:
> Below is the data I've collected using the collected wisdom I've been 
> presented with here.
> 
> Insights welcome!

Often these things are relative, and not easy to spot by looking at one 
set of numbers. You'll have to do this a few times to spot any trends. 
(I'd recommend being a bit more focused in your data collection.)


> NON STUTTERING DATA
> [r...@glutton ~]# iostat
> Linux 2.6.18-92.1.18.el5 (glutton.home.sinister.net)    03/26/2009
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            4.47    0.01    8.56   12.30    0.00   74.67
> 
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda              71.45      4225.38       528.99    1064374     133252
> sdb               0.93        18.67         0.67       4704        168
> 
> STUTTERING DATA
> [r...@glutton ~]# iostat
> Linux 2.6.18-92.1.18.el5 (glutton.home.sinister.net)    03/26/2009
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.29    0.00    1.66    0.05    0.00   98.00
> 
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda               2.12         9.46        40.87    1508622    6517412
> sdb               0.04         9.76         0.05    1556664       8720

The block reads and writes per second look considerably different 
between the two states, but I don't use iostat enough to know what's 
typical. This could indicate that the storage system is 
underperforming, or that something is getting hung up at a higher layer 
and there just isn't a call for that much I/O.

On my MythTV server, all drives report 200 ~ 600 blocks/s for reads and 
100 ~ 300 for writes. The variations seem to be correlated with the 
drive model (a bunch of Seagate drives have one set of numbers, and a 
couple of WD drives have another set).

Looking at the iostat man page, this command might be more useful:

mythtv:/etc# iostat -dmx
Linux 2.6.24-23-386 (mythtv)    04/01/2009

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              16.79    13.66   19.01    1.50     0.14     0.06    20.15     0.02    1.00   0.82   1.67
sdb              16.83    13.65   19.01    1.51     0.14     0.06    20.15     0.02    1.12   0.90   1.84


In particular, you want to look for differences in the last three 
columns, and make sure the last one (%util) isn't getting close to 
100%, as that indicates your I/O is saturated.

Having iostat poll for these statistics every few seconds (see the man 
page) while the problem is occurring might also turn something up.
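For example, something like this (the 5-second interval and 12-sample 
count are just illustrative values; pick whatever covers a stuttering 
episode) captures the extended device stats repeatedly while you 
reproduce the problem:

mythtv:/etc# iostat -dmx 5 12

Keep in mind the first report is the average since boot; the reports 
after that each cover one 5-second interval, so those are the ones to 
compare against the smooth-playback case.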

What purpose do these two drives serve in your system?


> NON STUTTERING DATA
> [r...@glutton ~]# hdparm -tT /dev/sda
> 
> /dev/sda:
>  Timing cached reads:   3896 MB in  2.00 seconds = 1947.22 MB/sec
>  Timing buffered disk reads:  230 MB in  3.01 seconds =  76.32 MB/sec
> 
> STUTTERING DATA
> [r...@glutton ~]# hdparm -tT /dev/sda
> 
> /dev/sda:
>  Timing cached reads:   3956 MB in  2.00 seconds = 1978.27 MB/sec
>  Timing buffered disk reads:  228 MB in  3.01 seconds =  75.78 MB/sec

> NON STUTTERING DATA
> [r...@glutton ~]# hdparm -tT /dev/sdb
> 
> /dev/sdb:
>  Timing cached reads:   4012 MB in  2.00 seconds = 2006.56 MB/sec
>  Timing buffered disk reads:  234 MB in  3.00 seconds =  78.00 MB/sec
> 
> STUTTERING DATA
> [r...@glutton ~]# hdparm -tT /dev/sdb
> 
> /dev/sdb:
>  Timing cached reads:   3880 MB in  2.00 seconds = 1940.68 MB/sec
>  Timing buffered disk reads:  234 MB in  3.02 seconds =  77.37 MB/sec

These numbers look pretty much identical before and after for both 
drives, which may suggest that the problem lies at a higher layer.


> STUTTERING DATA
> mvpmc throughput test ~13 mb/sec

What about the non-stuttering baseline? Though that seems adequate for 
smooth playback.


I didn't do a side-by-side comparison of the other data, but you should 
(if you haven't already), and post anything that looks like a notable 
difference.

  -Tom

