So this thread was a little long in coming back.


Bob, I can't run this on my big server (Ultra160 RAID 1+0, 15k RPM drives) because 
it's a Win2000 box, and I haven't yet had time to boot it into Linux to test 
it.  I'd like to, but later.

For the new server in question (this is the one with software RAID 5 and three 10k RPM 
Ultra SCSI drives), I ran your programs.  Here are my results compared with yours:

Recall that your drives are Ultra ATA/33 IDE (33 MB/sec) and mine are Ultra SCSI (40 MB/sec). 

> Just for grins, I wrote a couple of little programs to measure a
> disk's sequential read rate and its seek rate.  Try running these
> on your RAIDs - I bet they do a lot better on seeking than
> my disks.
> 
> Here's what happens when I run 'em on my personal workstation (-: , an
> aging 450 MHz PII, 440BX/PIIX4 system with two 7200 RPM UltraATA/33
> drives.  (Note that UltraATA/100 is the current standard.)
> 
> # dmesg | grep '^hd[ab].*/'
> hda: IBM-DTLA-307045, 43979MB w/1916kB Cache, CHS=5606/255/63
> hdb: IBM-DJNA-372200, 21557MB w/1966kB Cache, CHS=2748/255/63

Sequential:
> # ./throughput /dev/hda /dev/hdb
> /dev/hda: 1 Gbytes in 48.8933 seconds: 20.9436 Mb/second
> /dev/hdb: 1 Gbytes in 60.1947 seconds: 17.0115 Mb/second
> 0.010u 25.080s 1:49.10 22.9%    0+0k 0+0io 101pf+0w
SINGLE DRIVES:
./throughput /dev/sdc /dev/sda /dev/sdb
/dev/sdc: 1 Gbytes in 43.5394 seconds: 23.5189 Mb/second
/dev/sda: 1 Gbytes in 44.8357 seconds: 22.8389 Mb/second
/dev/sdb: 1 Gbytes in 43.7386 seconds: 23.4118 Mb/second
---individually, these SCSI drives read a little faster

Concurrently:
> # ./throughput /dev/hda & ./throughput /dev/hdb & wait
> /dev/hda: 1 Gbytes in 71.1773 seconds: 14.3866 Mb/second
> /dev/hdb: 1 Gbytes in 71.1719 seconds: 14.3877 Mb/second
./throughput /dev/sda & ./throughput /dev/sdb & ./throughput /dev/sdc &
/dev/sda: 1 Gbytes in 89.1994 seconds: 11.4799 Mb/second
/dev/sdb: 1 Gbytes in 89.5171 seconds: 11.4392 Mb/second
/dev/sdc: 1 Gbytes in 89.9047 seconds: 11.3898 Mb/second
---this SCSI bus has a little more bandwidth (see below)

RAID:
./throughput /dev/md1
/dev/md1: 1 Gbytes in 36.0014 seconds: 28.4434 Mb/second
---good improvement over an individual SCSI drive!

Sequential:
> # ./seeks /dev/hda /dev/hdb
> /dev/hda: 32768 seeks in 240.949 seconds: 135.996 seeks/second, 7.35318 ms/seek
> /dev/hdb: 32768 seeks in 268.271 seconds: 122.145 seeks/second, 8.18699 ms/seek
SINGLE DRIVES:
./seeks /dev/sda /dev/sdb /dev/sdc
/dev/sda: 32768 seeks in 192.341 seconds: 170.364 seeks/second, 5.86977 ms/seek
/dev/sdb: 32768 seeks in 188.775 seconds: 173.583 seeks/second, 5.76094 ms/seek
/dev/sdc: 32768 seeks in 220.68 seconds: 148.487 seeks/second, 6.73461 ms/seek
---these SCSI drives seek faster

Concurrently:
> # ./seeks /dev/hda & ./seeks /dev/hdb & wait
> /dev/hdb: 32768 seeks in 514.204 seconds: 63.7257 seeks/second, 15.6923 ms/seek
> /dev/hda: 32768 seeks in 515.087 seconds: 63.6164 seeks/second, 15.7192 ms/seek
> 
./seeks /dev/sda & ./seeks /dev/sdb & ./seeks /dev/sdc & wait
/dev/sda: 32768 seeks in 192.341 seconds: 170.364 seeks/second, 5.86977 ms/seek
/dev/sdb: 32768 seeks in 189.338 seconds: 173.066 seeks/second, 5.77813 ms/seek
/dev/sdc: 32768 seeks in 221.91 seconds: 147.664 seeks/second, 6.77214 ms/seek
---it seems the IDE bus can't handle as many seeks/sec (~135 total); we didn't hit the limit 
of the SCSI bus with seeks in this test

RAID:
./seeks /dev/md1
/dev/md1: 32768 seeks in 125.457 seconds: 261.189 seeks/second, 3.82864 ms/seek
--- oh yeah!  3.8 ms access time!  My 15k RPM Cheetahs are supposed to be that fast!  I 
should test them to see if they are.

> What can we conclude?  Two IDE disks get 28 Mb/sec sequential rate,
> which nearly equals the rate you got from 3 UltraSCSI drives.  One IDE
> disk can seek 120-140 times a second, and two disks can seek 125 times
> a second.
> How many seeks/second can you get on your RAIDs?
> -- 
> Bob Miller                              K<bob>

I read somewhere that IDE and SCSI drives [of the same class] are about the same 
speed.  Certainly our results support this.  The article went on to say that the real 
difference is the bandwidth of the SCSI bus as opposed to the IDE bus.  Our results 
show this as well.

Observe:
Throughput:
Sequential, single drives: my drives have 2-6 MB/sec higher throughput than yours, 
which they should, since your bus is rated at 33 MB/sec and mine at 40 MB/sec.

Concurrently: you had two drives going and I had three.  You transferred at 
14.3 MB/sec * 2 = 28.6 MB/sec; I transferred at 11.4 MB/sec * 3 = 34.2 MB/sec.  It 
appears we both hit the limits of our buses here.  I did not capture processor 
utilization during this run.

Throughput on my RAID (i.e. concurrent reads, but with kernel-level concurrency plus 
the parity calculation): 28.4 MB/sec.  That's 4-5 MB/sec faster than any one of my drives alone.

Seeks:

Sequential, single drives: my SCSI drives 'sought' faster than yours.

Concurrently: your IDE seeks slowed down under concurrency, while the SCSI seeks did 
not slow down.

RAID seeks were pretty fast as well!

So by going with software RAID 5, I gained a speed boost over having single drives 
(as well as redundancy).  And IDE and SCSI drives seem to be about the same speed, 
although clearly drives with higher RPMs are faster, regardless of the interface.  
Still, I would go with SCSI over IDE in a server, not for speed but for the bandwidth 
and depth (i.e. number of drives) of the SCSI bus, the ability to put it outside the box, 
the fact that the drives seem more bulletproof, the loud and cool-sounding noises they 
make when they spin up, and because the SCSI or RAID card takes a lot of the processing 
off of the CPU.

Thanks for those programs!

Cory
