With RAID 5 you get single-drive redundancy; RAID 6 distributes a second set 
of parity across the volume, so you can lose two whole hard drives (but not 
three) and still rebuild.

With larger drives there's a higher likelihood that, after one drive fails, 
the rebuild reads from the surviving drives will hit a bad sector (an 
unrecoverable read error), meaning you'd lose data anyway. That's why 
everyone wants RAID 6 nowadays.
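To put a rough number on that risk, here's a back-of-the-envelope sketch. It assumes a vendor-quoted unrecoverable read error (URE) rate of 1 in 1e14 bits, a common spec for consumer SATA drives (enterprise drives are often quoted at 1e15); the function name and figures are illustrative, not from any particular vendor sheet:

```python
import math

# Assumed URE rate: probability of an unrecoverable error per bit read.
# 1e-14 is a common consumer-SATA spec; enterprise is often 1e-15.
URE_RATE = 1e-14

def rebuild_failure_probability(drive_tb: float, surviving_drives: int) -> float:
    """P(at least one URE while reading every surviving drive in full)."""
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    # 1 - (1 - p)^n, computed stably for tiny p and huge n
    return 1 - math.exp(bits_to_read * math.log1p(-URE_RATE))

# RAID 5 of six 4TB drives: one dies, rebuild must read the other five.
print(f"{rebuild_failure_probability(4, 5):.0%}")  # roughly 80%
```

With those assumed numbers, a single-parity rebuild of a big array has a very real chance of tripping over a bad sector, which is exactly why the second parity drive in RAID 6 matters.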

Yes, it takes a long time (hours, sometimes many) to rebuild a RAID 6 after 
swapping drives, but it all happens behind the scenes: you can use your RAID 
again as soon as you insert the new drive into the box. It'll just be a bit 
slower to read/write while it's magically rebuilding for you.

15K SAS drives are awesome and fast, but you can add lots of commodity SATA 
drives for a LOT cheaper and still get decent I/O, because you make up for 
slower individual drives with more spindles turning. That's what a lot of the 
bulk storage guys are doing these days. YMMV.
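The spindle math is simple enough to sketch. The per-drive figures below are ballpark assumptions (not measurements): roughly 175 random IOPS for a 15K SAS spindle and 75 for a 7.2K SATA spindle:

```python
# Assumed ballpark per-spindle random IOPS figures (not measured values).
SAS_15K_IOPS = 175
SATA_7K_IOPS = 75

def array_iops(per_drive_iops: int, drives: int) -> int:
    """Ideal aggregate random-read IOPS if the load spreads evenly."""
    return per_drive_iops * drives

print(array_iops(SAS_15K_IOPS, 4))   # 4 small, pricey SAS drives  -> 700
print(array_iops(SATA_7K_IOPS, 12))  # 12 cheap SATA drives        -> 900
```

A dozen cheap SATA spindles can out-aggregate a handful of fast SAS ones on throughput, though per-request latency on the 15K drives is still better.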

On 12/6/17 12:02, Adam Moffett wrote:
> Well...I'm using 15K SAS in RAID10. Only 600GB because enterprise drives 
> are $$.
> 
> Out of curiosity what is it that I would need good luck for? 
> Reliability problems?
> 
> 


The largest RAID6 I have currently is 12x 4TB with a hot spare. It takes 
forever to rebuild, but it hasn't ever failed outright. The risk is that 
another drive barfs while rebuilding.
