At 00:32 12.02.00 -0800, smart wrote:
>For this application, space is more important than hard drive failures, 
>so I've configured it as one large raid0 array, giving me 160GB.
>
>Here are the performance stats using hdparm (and I humbly admit that I don't 
>even know if this is the right way to determine these, or if I'm being a 
>fool)

Let's just say that the figures from hdparm are as close to meaningless as
you can get while still providing measured numbers :-)

Try the tests again with a tool like tiotest; make sure the size of
your test files is at least double your physical RAM.
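For example, on a box with 64MB RAM I'd run something along these
lines (I'm quoting tiotest's options from memory, so check the README
of your version before trusting them):

  # 200MB test file (>= 2x the 64MB of RAM), 4K blocks, 1 thread,
  # writing into the filesystem under test (path is just an example)
  tiotest -d /mnt/test -f 200 -b 4096 -t 1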

As an example, I'll include results from one of my systems, using two
8Gig Maxtor 90871U2 drives. All measurements were done on the same
region(s) of the same drives.

First, the output from hdparm:

/dev/hda8:
 Timing buffer-cache reads:   128 MB in  1.16 seconds =110.34 MB/sec
 Timing buffered disk reads:  64 MB in  3.87 seconds =16.54 MB/sec

/dev/md3 (raid1):
 Timing buffer-cache reads:   128 MB in  1.06 seconds =120.75 MB/sec
 Timing buffered disk reads:  64 MB in  6.87 seconds = 9.32 MB/sec

/dev/md3 (raid0):
 Timing buffer-cache reads:   128 MB in  1.05 seconds =121.90 MB/sec
 Timing buffered disk reads:  64 MB in  3.05 seconds =20.98 MB/sec
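
(Those "Timing buffer-cache reads" / "Timing buffered disk reads"
lines are what hdparm's -T and -t tests print, so the above came from
something like

  hdparm -tT /dev/hda8

run against each device in turn.)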

As you can see, I get only a very tiny improvement from raid0, and
raid1 performance seems abysmal.

Now the same disks measured using tiotest, with a 200MB test file size
(the machine has 64MB of RAM). Filesystems were created with
mke2fs -b 4096 -i 16384 for the plain disk and the mirror, and with
mke2fs -b 4096 -i 16384 -R stride=8 (32K chunk size) for the
stripeset. In the tables below, Size is in MB, BlkSz in bytes,
Read/Write rates in MB/sec and Seeks in seeks per second.
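Spelled out, with the device names taken from the tables below, the
filesystem creation was essentially:

  # plain disk and raid1 mirror
  mke2fs -b 4096 -i 16384 /dev/hda8
  mke2fs -b 4096 -i 16384 /dev/md1
  # raid0 stripeset: stride = chunk size / block size = 32768/4096 = 8
  mke2fs -b 4096 -i 16384 -R stride=8 /dev/md2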

Single ATA Disk:

 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
hda8   200    4096    1   9.44094 5.85% 8.05522 7.77%  126.708 0.88%
hda8   200    4096    2   10.0298 6.06% 8.73864 8.38%  141.826 0.49%
hda8   200    4096    4   10.2249 7.05% 8.90083 9.12%  147.253 0.47%
hda8   200    4096    8   10.6042 7.42% 8.78935 10.8%  155.544 0.54%

Raid1 mirror:

 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
md1    200    4096    1   10.3814 5.81% 9.43394 9.66%  133.826 1.20%
md1    200    4096    2   11.5381 7.73% 9.48548 9.34%  142.070 0.81%
md1    200    4096    4   12.8123 8.00% 9.55569 9.60%  151.459 0.68%
md1    200    4096    8   13.0791 9.74% 9.17384 9.54%  156.069 0.58%

Raid0 Stripeset:

 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
md2    200    4096    1   19.4054 11.1% 16.5846 16.4%  150.003 0.93%
md2    200    4096    2   19.0280 10.1% 16.7709 16.7%  203.939 1.17%
md2    200    4096    4   19.1422 13.7% 16.5799 16.6%  218.725 1.20%
md2    200    4096    8   19.1280 12.0% 14.6366 14.7%  244.285 1.22%

Now the picture looks decidedly different: performance for the single
disk is back in a reasonable range (there's no way my 8Gig disk can
actually do 16MB/sec), raid1 starts to show an advantage once there
are several concurrent reads, and the stripeset clocks in at about
double the performance of a single disk.

Bye, Martin

PS: I got tiotest from http://www.icon.fi/~mak/tiotest/ - thanks to Mika!


"you have moved your mouse, please reboot to make this change take effect"
--------------------------------------------------
 Martin Bene               vox: +43-316-813824
 simon media               fax: +43-316-813824-6
 Andreas-Hofer-Platz 9     e-mail: [EMAIL PROTECTED]
 8010 Graz, Austria        
--------------------------------------------------
finger [EMAIL PROTECTED] for PGP public key
