1. Are you doing hardware or software RAID? If HW, what type?
2. Are you trying to increase performance and reliability [per price, in general] or are you aiming at a specific throughput and/or failure-recovery potential?
3. ATA, SATA, SCSI? (curious)
4. Are you doing a live-system cutover, or is this happening on the back end or during scheduled downtime?
RAID5 is great. I'd suggest doing a simple stripe on the boot partition though... and consider having an extra parity disk in the array, since as soon as one disk fails the situation is risky. There are some nice combinations of RAID modes which work on varying numbers of drives, of course... I like sketching out failure scenarios with little notes on napkins :)
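If this ends up as Linux software RAID, the "extra disk" idea might look something like this mdadm sketch — device names here are purely hypothetical examples, and whether you want a hot spare (automatic rebuild after one failure) or RAID6 (two parity disks, survives a double failure) depends on your risk tolerance:

```shell
# Hypothetical sketch: RAID5 over three active disks plus one hot
# spare, so a single failure kicks off an automatic rebuild instead
# of leaving the array degraded until someone notices.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Alternative: --level=6 uses dual parity across the same disks,
# trading a disk of capacity for surviving two simultaneous failures.

# Watch rebuild / resync progress.
cat /proc/mdstat
```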
all the best,
Ben
On 2/7/06, Matthew Jarvis <[EMAIL PROTECTED]> wrote:
We are looking at going RAID5 on our data server, and I was wondering if
I could float my implementation plan past you folks and get input on
points that I may be missing.
In a simplified form, I see it going down like this:
1. Determine the size requirements of the current system, then scale this to the
other disks in the array. I can do a time series analysis of
disk usage thanks to the data provided by Intechgra's monitoring system.
2. Get a different box and install the disk array and configure.
3. Move the data over to this new box and let the RAID do its thing
striping the data.
4. Test it out.
5. Plug the box into the network.
6. Test it again.
7. Celebrate with copious amounts of cold beer....
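The sizing in step 1 could be sketched roughly as below — a minimal least-squares trend over (day, GB-used) samples, assuming the monitoring data exports as simple pairs (the sample data and the 1.5x headroom factor are illustrative assumptions, not anything from your setup):

```python
# Hypothetical sketch of step 1: project disk usage forward from
# monitoring samples with a simple least-squares linear trend,
# then pad with headroom for safety.

def project_usage(samples, horizon_days, headroom=1.5):
    """Fit a line to (day, gb_used) samples and return the projected
    usage at horizon_days, multiplied by the headroom factor."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    intercept = mean_y - slope * mean_x
    return (intercept + slope * horizon_days) * headroom

# Example: ~1 GB/day growth over a 30-day window, sized two years out.
samples = [(day, 100 + 1.0 * day) for day in range(30)]
print(round(project_usage(samples, horizon_days=730)))  # prints 1245
```

Real monitoring data is rarely this linear, so eyeballing the fit against the actual curve before trusting the number would be prudent.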
--
Matthew S. Jarvis
IT Manager
Bike Friday - "Performance that Packs."
www.bikefriday.com
541/687-0487 x140
[EMAIL PROTECTED]
_______________________________________________ EUGLUG mailing list [email protected] http://www.euglug.org/mailman/listinfo/euglug
