2. Are you trying to increase performance and reliability [per price, in general] or are you aiming at a specific throughput and/or failure-recovery potential?


Reliability and recovery.


Go hardware RAID if you can at all afford it. Look for a hardware RAID controller with a good cache size and, very important, battery backup, so that if the server dies the data is preserved in cache and committed to disk on the next boot.

Some top-end RAID controllers also offer easy ways to stay flexible, like double parity (minimum of 4 drives, and you can lose 2 and still be up) and online restriping. Say you have a 3-drive RAID 5 and you want to add more drives: pop in 3 more, tell the controller to restripe, and voila, you've got more space, all without ever affecting the OS or remounting anything.
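
If it helps to see that tradeoff in numbers, here's a quick back-of-the-envelope Python sketch. The function and the 72G drive size are purely illustrative, not tied to any particular controller.

def raid_usable(drives, drive_gb, level):
    """Usable capacity (GB) and drive-failure tolerance for RAID 5/6."""
    if level == 5:   # single parity: N-1 drives usable, survives 1 failure
        assert drives >= 3
        return (drives - 1) * drive_gb, 1
    if level == 6:   # double parity: N-2 drives usable, survives 2 failures
        assert drives >= 4
        return (drives - 2) * drive_gb, 2
    raise ValueError("unsupported RAID level")

# 3-drive RAID 5 of 72G disks, then restriped across 6 drives:
print(raid_usable(3, 72, 5))   # (144, 1)
print(raid_usable(6, 72, 5))   # (360, 1)
# Same 6 drives with double parity: less space, but survives 2 failures:
print(raid_usable(6, 72, 6))   # (288, 2)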

I've got experience with Dell, IBM, and HP/Compaq. The Dell PERC 2 and 3 were pretty piss-poor, though I admit I haven't seen Dell hardware in a long time. IBM has some of the most flexible container configuration I've seen on high-end systems, letting you slice and dice multiple RAID configs across a single set of disks. HP's high-end controllers are the fastest I've seen; I've watched a 72G SCSI drive restripe (after a disk-failure replacement) in under 15 minutes. But overall they're less configurable than the IBMs.

Also very important: for random reads or writes, the STRIPE SIZE should be as close as possible to your format block size. For Linux the block size is typically 4K, so match your stripe accordingly (8K is the lowest on many RAID controllers, especially ones designed for Windows). Windows likes 16K (yes, the C: drive gets 4K, but, well, it is Microsoft), and 8K is right for SQL Server or Oracle. For huge files with sequential access, go as big as you can (128K or 256K). In tests I was a part of (trying to improve backup times), I've seen systems go from 12-hour backups to about 2 hours simply by matching the stripe to the OS or application.
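
Here's a small Python sketch of that rule of thumb. os.statvfs is a standard call on Linux; the suggest_stripe helper and its list of stripe sizes are hypothetical, just to illustrate the matching logic, since the actual options vary by controller.

import os

# Block size of the filesystem holding "/" -- typically 4096 bytes on Linux.
block = os.statvfs("/").f_bsize
print(f"Filesystem block size: {block} bytes")

def suggest_stripe(block_bytes, workload, offered_kb=(8, 16, 32, 64, 128, 256)):
    """Pick a stripe size (KB) from a hypothetical list a controller offers."""
    if workload == "sequential":
        return max(offered_kb)   # huge sequential files: go as big as you can
    # Random I/O: smallest offered stripe at or above the format block size.
    block_kb = max(1, block_bytes // 1024)
    return min(s for s in offered_kb if s >= block_kb)

print(suggest_stripe(block, "random"))      # 8 on a typical 4K Linux filesystem
print(suggest_stripe(block, "sequential"))  # 256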


Hope this helps,

Mark

