The difference between theory and real-world performance is always a
lot of fun.
RAID5 does striping and parity, so it could, should, and might be faster.
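To make that concrete: RAID5 parity is just a byte-wise XOR across the
data blocks in a stripe, so any one lost block can be rebuilt from the
survivors. A minimal sketch in Python (the block contents are made up
for illustration, not any particular controller's layout):

    from functools import reduce

    def xor_blocks(blocks):
        # Byte-wise XOR across equal-sized blocks.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    # One stripe: three data blocks (hypothetical contents) plus parity.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Lose the middle block; rebuild it from the other two plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]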
A lot of it has to do with the drives, the controller, and even the
OS. I went around on this with several people over the years, back when
I had lots of diverse hardware to play with. The assumption I was
usually presented with was "SCSI is fastest. SCSI RAID5 is faster."
In most of my tests, the results were:
* Newer IDE drives (current for the year, after about 1996) were faster
than their SCSI counterparts.
* RAID performance varied greatly by hardware vendor. The best
performance I found was with Linux software RAID (MD devices).
* If your OS has crappy drivers, the drive performance will suffer.
On that last one, we had some issues installing Linux on "new" servers.
We were doing exactly the same install on every machine; some took less
than 5 minutes, some took about 20. It depended on whether good drivers
were provided on the boot CD. When we booted with our own kernels, the
faster performance went to the machines we had expected to be faster.
I had poor performance on a machine with a Promise VTrak 15100
configured as RAID5. I tried RAID5 configurations with anywhere from 4
to 15 drives, then configured it with no RAID and made an MD device out
of the drives, which worked much better. I liked the fact that I could
stuff 15 drives in. Unfortunately, it proved to be less than stable
under heavy load.
Some RAID controllers worked very well under, say, Linux, but poorly
under Windows. I can't say I've found the inverse to be true with any
yet.
The BackupPC machine I'm using right now uses two Promise internal IDE
controllers with 5 drives (a RAID5 MD device), and I'm satisfied with
the performance. It wasn't picked for performance; it was picked because
I had the parts available. It's great, as long as I can keep the power
on. :) It runs from my house (aka off-site backups), so I finally had
to splurge and get a UPS.
I configured two others for a client, with IBM (I think) IDE
controllers, as MD devices, which worked satisfactorily.
Adam Goryachev wrote:
Carl Wilhelm Soderstrom wrote:
On 03/03 02:29, Les Mikesell wrote:
The seek time for these may be the real killer since you drag the parity
drive's head along for the ride.
The more drives you have in an array, the closer your seek time will tend to
approach worst-case, as the controller waits for the drive with the longest
seek time for a given operation. Does anyone know anything about
synchronizing drive spindles? I've heard of it, and I know it requires
drives built for it, but I've never worked with such hardware.
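A quick simulation illustrates that seek-time effect (a sketch only,
assuming independent, uniformly distributed seek times; the 15 ms cap
is an illustrative figure, not a measurement from any real drive):

    import random

    def avg_worst_seek(n_drives, max_seek_ms=15.0, trials=100000):
        # Average of the slowest seek across n drives, one seek per
        # drive per operation, drawn uniformly from [0, max_seek_ms].
        total = 0.0
        for _ in range(trials):
            total += max(random.uniform(0, max_seek_ms) for _ in range(n_drives))
        return total / trials

    for n in (1, 3, 6, 15):
        print(n, round(avg_worst_seek(n), 2))
    # The expected maximum of n uniform draws is n/(n+1) of the cap, so
    # the effective seek time creeps toward worst-case as the array widens.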
I was always led to believe that the more drives you had in an array,
the faster it would get; i.e., comparing the same HDD and controller,
3 HDDs in a RAID5 would be slower than 6 HDDs in a RAID5.
Is that an invalid assumption? How does RAID6 compare in all this? Would
it be faster than RAID5 for the same number of HDDs? (Exclude CPU
overheads in all this.)
Regards,
Adam
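On the width question, the usual back-of-the-envelope (a sketch under
ideal-striping assumptions, ignoring controller, bus, and CPU overhead,
which rarely hold in practice): an n-drive RAID5 stripe has n-1 data
drives and an n-drive RAID6 stripe has n-2, so large sequential
transfers scale roughly with the data-drive count, while the seek-time
effect above still drags on small random I/O.

    def ideal_sequential_mb_s(n_drives, per_drive_mb_s=60.0, parity_drives=1):
        # Ideal large-sequential throughput: data drives times per-drive
        # rate. The 60 MB/s per-drive figure is made up for illustration.
        return (n_drives - parity_drives) * per_drive_mb_s

    for n in (3, 6):
        print(f"RAID5 x{n}: ~{ideal_sequential_mb_s(n):.0f} MB/s")
    print(f"RAID6 x6: ~{ideal_sequential_mb_s(6, parity_drives=2):.0f} MB/s")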