* Robert Woodcock <[email protected]> wrote on [01-03-10 19:20]:
> On Sun, Jan 03, 2010 at 03:11:25PM -0800, Ryan Allen wrote:
> > Hi SSL,
> > 
> >    I just purchased 4 new 1TB SATA drives, and attempted to upgrade my
> >    RAID 5 system to 3 TB.  However, my Dell CERC 6-channel SATA RAID card
> >    will only let me build a RAID volume up to 2 TB.
> > 
> >    I couldn't find any firmware upgrades for this horribly supported,
> >    el-cheapo RAID card.  At least it's been a solid workhorse, with few
> >    problems aside from a horrible UI utility called afacli.
> > 
> >    I am looking for a hardware RAID card, PCI-X (100MHz), that has a
> >    fairly easy-to-use UI and is well supported in Linux.  It should
> >    also take advantage of ALL my drive space, for an expected volume of
> >    a little under 3TB.
> > 
> >    Any suggestions?  I would rather not spend $550 on a fancy 3ware
> >    card either.  Anything in the $100 range?
> 
> You may not have much money to spend on this, but surely this data is
> important to you?
> 
> With a multi-terabyte array, the chance of silent data corruption leading to
> later rebuild failure exceeds 50% with RAID4/5. 

I am quite interested in this statistic.  Where does this 50% come from?

We have all seen the math.  If one drive has a 5% chance of failure in
one year, the chance of two drives both failing in that same year is
multiplicative (0.05^2), or 0.25%.  Of course, having more than two
disks increases the chance that multiple drives could fail at the same
time.
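
Just to put rough numbers on it, here is a quick back-of-the-envelope
sketch (assuming independent failures and that illustrative 5% annual
rate; both are assumptions, not measured figures):

    # Chance of two or more of N drives failing in the same year,
    # assuming independent failures and a 5% annual failure rate per drive.
    p = 0.05                      # assumed per-drive annual failure probability
    for n in (2, 4, 6):
        p_multi = 1 - (1 - p)**n - n * p * (1 - p)**(n - 1)
        print("%d drives: P(>=2 failures/year) = %.3f%%" % (n, p_multi * 100))

Even at six drives the yearly chance of a double failure stays in the
low single digits, which is part of why I keep the disk count small.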

I'm of the firm belief that anybody can have a solid, secure, and more
affordable RAID 5 setup if 1) the array uses a reasonably small number
of disks, say <= 6, 2) any failing disk (flagged by SMART or other
failure indicators) is replaced as quickly as possible, and 3) a solid
backup plan is in place.  This will cut the hardware budget by around
40% and consume less power.


>  And yes, I have had enough
> personal experience for it to be statistically valid. I have not had much
> better luck with RAID6 either. No more: http://baarf.com/
> 
> You'll have much better luck with RAID1 or RAID10. You could do it with the
> equipment you have if you're willing to live with 2TB of space.
> 

Yes, of course RAID1 is more "secure".  But just as a monorail from
Ballard to West Seattle would have provided traffic-free mass transit,
Seattle never built one.  Why?  Money and management.  Every engineering
decision has a trade-off.  In this case the requirements are to conserve
money and electricity.
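
To make the trade-off concrete with the drives in question (4 x 1 TB,
raw capacity only, ignoring filesystem and metadata overhead):

    # Usable space from 4 x 1 TB drives under the layouts being discussed.
    drives, size_tb = 4, 1.0
    print("RAID 5 :", (drives - 1) * size_tb, "TB usable (one drive's worth of parity)")
    print("RAID 10:", (drives / 2) * size_tb, "TB usable (everything mirrored)")

Same four drives, roughly a terabyte of difference; that is the money
and space trade-off I'm talking about.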


> You're almost guaranteed to get "fake RAID" in a $100 4-port controller.
> You're certainly not going to get a battery backup unit for that, which is
> *essential* for decent performance with RAID5.
  
Can you enlighten me on the importance of "battery-backed RAID cards"
when the entire system is on a massive UPS, programmed to do a clean
shutdown on power failure?  The system in question has been tuned to do
a worst-case shutdown in just under 1/3 of the measured battery runtime.

> Oh, and you'll probably want to double check that it really is a RAID card
> limitation and not just a partitioning problem - the classic MBR partition
> format only supports 2TB drives. You have to use something more sane (such
> as GPT) to partition a larger drive.

Yeah, it's the configuration utility at the system BIOS startup.  Before
the OS even boots, the hardware is preventing me from creating the RAID
set.
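
(For reference, the MBR limit Robert mentions falls out of the 32-bit
sector counts in the partition table, with the usual 512-byte sectors:

    # MBR stores partition sizes as 32-bit sector counts; with 512-byte
    # sectors that caps a partition at 2 TiB.
    max_bytes = (2**32) * 512
    print(max_bytes, "bytes =", max_bytes / 2**40, "TiB")   # -> 2.0 TiB

In my case the controller stops me before I ever get to partitioning,
so GPT alone won't help.)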


> -- 
> Robert Woodcock - [email protected]
> "Down, not Across"
>       -- alt.sysadmin.recovery

-- 

+-----------------------------+
|     [email protected]     |
|  http://www.the-summit.net  |
+-----------------------------+
