Here for your edification and amusement are some benchmarks comparing
hardware v. software RAID for fairly similar setups.
Sun sell two versions of their 12-disk hot-swap dual-everything disk
array (codename Dilbert):
* the D1000 is a "dumb" array presenting 6 disks on each of two
Ultra Wide Differential SCSI busses.
* the A1000 is similar, but adds an internal hardware RAID module
  which connects to the two busses internally, does its "RAID thing",
  and presents a single Ultra Wide Differential bus to the outside
  world, talking to an intelligent adapter card on the host side.
We have the following configurations, which I benchmarked using bonnie:
System 1: A1000 array with 6 x 10000 RPM 4G wide SCSI drives and 64MB
          NVRAM cache, connected via a Symbios 53C875-based card to a
          Sun Ultra 5 with a 270 MHz UltraSPARC IIi CPU and 320 MB RAM
          running Solaris 2.6.
System 2: D1000 array with 6 x 10000 RPM 9G wide SCSI drives on one
          of its two busses, connected to a PC with a 350 MHz PII CPU
          and 512 MB RAM running Linux 2.0.36 with the
          raid-19981214-0.90 RAID patch.
Both systems were set up as a single 6-disk RAID5 group. System 1 had
a standard Solaris UFS filesystem on the resulting 20GB logical drive.
System 2 used a chunk-size of 64 for its RAID5 configuration (defaults
for the other settings) and a single ext2 filesystem (with blocksize
4096 and stride=16). Bonnie was run on each system as the only
non-idle process, with a 1000 MB file.
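
For the curious, the System 2 setup amounts to roughly the following
(a sketch rather than the exact files; device names are illustrative
and the disk list is abbreviated). The stride value is just the chunk
size divided by the filesystem block size: 64K / 4K = 16.

    # /etc/raidtab (illustrative device names)
    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           6
            nr-spare-disks          0
            persistent-superblock   1
            chunk-size              64
            device                  /dev/sdb1
            raid-disk               0
            device                  /dev/sdc1
            raid-disk               1
            # ...device/raid-disk pairs for the remaining four drives

    mkraid /dev/md0
    mke2fs -b 4096 -R stride=16 /dev/md0   # stride = 64K chunk / 4K block
    mount /dev/md0 /bigraid
    bonnie -d /bigraid -s 1000             # 1000 MB test file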
               System 1 (hw RAID)      System 2 (sw RAID)
Seq output
----------
per char       7268 K/s @ 66.7% CPU    5104 K/s @ 88.6% CPU
block         12850 K/s @ 31.9% CPU   12922 K/s @ 16.4% CPU
rewrite        8221 K/s @ 45.1% CPU    5973 K/s @ 16.9% CPU
Seq input
---------
per char       8275 K/s @ 99.2% CPU    5058 K/s @ 96.1% CPU
block         21856 K/s @ 46.4% CPU   13080 K/s @ 15.2% CPU
Random Seeks   293.0 /s @  8.7% CPU    282.3 /s @  5.7% CPU
--Malcolm
--
Malcolm Beattie <[EMAIL PROTECTED]>
Unix Systems Programmer
Oxford University Computing Services