On Mon, Mar 29, 2010 at 10:34:26AM +0200, Jiri Novosad wrote:
> On 27.3.2010 23:21, Pasi Kärkkäinen wrote:
> > On Thu, Mar 25, 2010 at 06:54:38PM +0100, vinc...@cojot.name wrote:
> >>
> >> Hello Jiri,
> >>
> >> The high load may be caused by I/O wait (check with sar -u). In any case,
> >> 30 MB/s seems a little slow for an FC array of any kind..
> >>
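(For reference, a minimal sar check, assuming the sysstat package is
installed:

$ sar -u 5 3    # three 5-second samples; watch the %iowait column

High %iowait with low %user/%system points at the storage rather than
the CPUs.)
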
> Hi,
> 
> > 
> > 30 MB/sec can be a LOT.. depending on the IO pattern and IO size.
> > 
> > If you're doing totally random IO where each IO is 512 bytes in size,
> > then 30 MB/sec would equal over 61000 IOPS.
> > 
> > A single 15k SAS/FC disk can do around 300-400 random IOPS max, so 61000
> > IOPS would require around 150 (15k rpm) disks in raid-0.
> > 
> > -- Pasi
> 
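(Checking that arithmetic:

$ echo $(( 30 * 1024 * 1024 / 512 ))
61440

i.e. a little over 61000 IOPS, as stated.)
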
> Actually, that was 15 MB/sec (the units were 512-byte blocks).
> But it's true that our access pattern is quite random. There are 30+ users
> logged in over ssh, others access their mail over imap/pop3, and
> some 80 PCs mount the users' home directories over NFS.
> Sometimes 30 users are running NetBeans at once, plus Firefox ...
> 
> The real issue here is not the overall speed; I'm sorry if I didn't make
> myself clear. The problem is that when there is a large amount of writes
> to a single LUN, only a small percentage of requests (if any) make it to
> the other LUNs.
> 
> I ran another test to compare what happens when I bypass the page cache
> (oflag=direct means opening the output file with O_DIRECT).
> 
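(A quick way to confirm what the flag does, assuming strace is
available -- O_DIRECT should show up in dd's open() of the output file:

$ strace -e trace=open dd oflag=direct if=/dev/zero of=scratchfile \
    bs=$((2**20)) count=1 2>&1 | grep O_DIRECT

"scratchfile" is just an illustrative name.)
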
> I generate some reads from all LUNs and all looks well (units are now KiB):
> 
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sda             327.00      1440.00      5940.00       1440       5940
> sdb              19.00      1588.00        28.00       1588         28
> sdc              15.00      1720.00         0.00       1720          0
> sdd              21.00      1700.00        32.00       1700         32
> sde              28.00      1660.00        60.00       1660         60
> sdf              13.00      1664.00         0.00       1664          0
> sdg              71.00      1664.00       228.00       1664        228
> 
> Then I run:
> $ dd if=/dev/zero of=file bs=$((2**20)) count=128
> It finishes in half a second, and after a while iostat says:
> 
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sda               1.00       140.00         0.00        140          0
> sdb               1.00       192.00         0.00        192          0
> sdc               1.00       180.00         0.00        180          0
> sdd               1.00       128.00         0.00        128          0
> sde               1.00       128.00         0.00        128          0
> sdf              46.00       144.00     23400.00        144      23400
> sdg               2.00       128.00         4.00        128          4
> 
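(That half-second finish just means the 128 MiB landed in the page
cache; the disks only see it later, during writeback. Two ways to see
this, sketched here:

$ grep -E 'Dirty|Writeback' /proc/meminfo      # watch dirty data drain
$ time sh -c 'dd if=/dev/zero of=file bs=$((2**20)) count=128; sync'

The second form times the write through to disk, not just into the
cache.)
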
> On the other hand, if I run:
> $ dd oflag=direct if=/dev/zero of=file bs=$((2**20)) count=128
> it takes about 8 seconds to finish, and iostat says something like:
> 
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sda             258.42      1702.97      3251.49       1720       3284
> sdb              30.69      1710.89       102.97       1728        104
> sdc              45.54      1699.01       704.95       1716        712
> sdd              23.76      1817.82        15.84       1836         16
> sde              18.81      1766.34        27.72       1784         28
> sdf              85.15      1770.30     16308.91       1788      16472
> sdg              62.38      1778.22       198.02       1796        200
> 
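(Sanity check on that: 128 MiB in ~8 seconds is

$ echo $(( 128 / 8 ))
16

i.e. ~16 MB/sec, matching the ~16300 kB/s shown for sdf -- with
O_DIRECT, iostat finally sees the writes as they happen.)
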
> Is it possible that writeback of cached FS data has higher priority
> than other accesses?
> 
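(If writeback is starving the other LUNs, the vm dirty-page tunables
are the usual place to look on a 2.6.18-era kernel; a sketch, not a
guaranteed fix:

$ sysctl vm.dirty_background_ratio vm.dirty_ratio

vm.dirty_background_ratio is where pdflush starts writing in the
background, vm.dirty_ratio is where writers get throttled; lowering
them spreads the flush out instead of letting it arrive in one burst.)
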
> > 
> >> I don't know your DS-4300 at all but if you're using a SAN or an FC loop  
> >> to connect to your array, here are (maybe) a few things you might want to 
> >> look for:
> >>
> >> - What kind of disks are used in your DS4300? 10k or 15k rpm FC disks? Did
> >>   you check how heavily used your disks were during transfers? (there
> >>   should be software provided with the array to allow that, perhaps even
> >>   an embedded webserver).
> 7200 rpm SATA.
> 

A 7200 rpm SATA disk can do around 100-150 random IOPS max.
How many SATA disks do you have?

Controller caches help for small datasets, but when you have
enough random IO and the dataset grows big enough, the caches
don't really help anymore; at that point you really need more
spindles to handle the load.
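
(Rough spindle math for the corrected 15 MB/sec figure, assuming the
worst case of 512-byte random IO and ~125 IOPS per 7200 rpm disk:

$ echo $(( 15 * 1024 * 1024 / 512 ))    # IOPS needed
30720
$ echo $(( 30720 / 125 ))               # disks needed, raid-0
245

Real requests are much bigger than 512 bytes, so the true number is
far lower -- but it shows why spindle count dominates random load.)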

-- Pasi
