To the original poster,

        FYI,

        Accessing RAID drives at a constant ~70-75% utilization
        probably does not leave enough headroom for degraded mode.

        A common rule of thumb is to run at a constant 50-60% so that
        the remaining capacity can absorb the extra load in degraded
        mode.
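
        A minimal sketch of that arithmetic (the 40% rebuild figure
        below is an assumption for illustration, not a measurement):

            # Steady-state load plus rebuild load should stay under
            # 100% of array throughput; rebuild_fraction is assumed.
            def sustainable_utilization(rebuild_fraction=0.4):
                return 1.0 - rebuild_fraction

            # If a rebuild consumes ~40% of throughput, steady-state
            # load should stay below ~60% -- consistent with the
            # 50-60% rule of thumb.  Running at 70-75% leaves little
            # margin.
            print(sustainable_utilization(0.4))   # 0.6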

        An "old" rule of thumb for determining for estimating
        MTBF is if you have 100 drives and the single drive
        is estimated at 30,000 hours (> 3years).. Then the 
        expected failure will occur  in about 1 day/30 hours.
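
        Put another way (a quick sketch of that rule):

            # Expected time between failures somewhere in a population
            # of N independent drives is roughly MTBF / N.
            def time_between_failures(mtbf_hours, n_drives):
                return mtbf_hours / n_drives

            # 100 drives at 30,000 hours MTBF each:
            print(time_between_failures(30_000, 100))   # 300.0 hours, ~12.5 days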

        Thus, excess capacity always needs to be present so that there
        is time to reconstruct the RAID, the reconstruction can
        complete within a limited timeframe, and any significantly
        increased latencies for normal processing are minimized.

        Mitchell Erblich
        -----------------
        

Richard Elling wrote:
> 
> > I have a setup with a T2000 SAN attached to 90 500GB SATA drives
> > presented as individual luns to the host.  We will be sending mostly
> > large streaming writes to the filesystems over the network (~2GB/file)
> > in 5/6 streams per filesystem.  Data protection is pretty important, but
> > we need to have at most 25% overhead for redundancy.
> >
> > Some options I'm considering are:
> >     10 x 7+2 RAIDZ2 w/ no hotspares
> >     7 x 10+2 RAIDZ2 w/ 6 spares
> >
> > Does anyone have advice on the performance or reliability of
> > either of these?  We typically would swap out a bad drive in 4-6 hrs and
> > we expect the drives to be fairly full most of the time, ~70-75% fs
> > utilization.
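
A rough sketch of the space math for the two layouts quoted above
(assuming 500GB raw per drive and ignoring ZFS metadata and drive
right-sizing):

    # Usable space and redundancy overhead for each proposed layout.
    def layout(n_sets, data_per_set, parity_per_set, spares, drive_gb=500):
        total_drives = n_sets * (data_per_set + parity_per_set) + spares
        usable_gb = n_sets * data_per_set * drive_gb
        overhead = 1 - (n_sets * data_per_set) / total_drives
        return total_drives, usable_gb, overhead

    print(layout(10, 7, 2, 0))   # (90, 35000, ~0.22) -- 10 x 7+2, no spares
    print(layout(7, 10, 2, 6))   # (90, 35000, ~0.22) -- 7 x 10+2 + 6 spares

Both layouts consume all 90 drives, offer the same raw usable space,
and keep redundancy overhead near 22%, under the 25% budget; they
differ in vdev width, spares, and performance.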
> 
> What drive manufacturer & model?
> What is the SAN configuration?  More nodes on a loop can significantly
> reduce performance as loop arbitration begins to dominate.  This problem
> can be reduced by using multiple loops or switched fabric, assuming the
> drives support fabrics.
> 
> The data availability should be pretty good with raidz2.  Having hot spares
> will be better than not, but with a 4-6 hour (assuming 24x7 operations)
> replacement time there isn't an overwhelming need for hot spares -- double
> parity and fast repair time is a good combination.  We do worry more
> about spares when the operations are not managed 24x7 or if you wish
> to save money by deferring repairs to a regularly scheduled service
> window.  In my blog about this, I used a 24 hour logistical response
> time and see about an order of magnitude difference in the MTTDL.
> http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
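
As a rough illustration of why repair time matters so much, here is a
first-order double-parity MTTDL approximation (not necessarily the
exact model used in the blog above), assuming a 30,000 hour drive MTBF:

    # MTTDL ~= MTBF^3 / (N * (N-1) * (N-2) * MTTR^2) for one raidz2
    # set, where N is drives per set and MTTR is total repair time.
    def mttdl_raidz2_hours(mtbf_h, n_drives, mttr_h):
        return mtbf_h**3 / (n_drives * (n_drives - 1) * (n_drives - 2) * mttr_h**2)

    HOURS_PER_YEAR = 24 * 365
    # 9-drive (7+2) set, 6 hour vs 24 hour repair time:
    print(mttdl_raidz2_hours(30_000, 9, 6) / HOURS_PER_YEAR)    # ~170,000 years
    print(mttdl_raidz2_hours(30_000, 9, 24) / HOURS_PER_YEAR)   # ~10,600 years

Because MTTR enters squared, stretching the repair window from 6 to 24
hours costs a factor of 16 in MTTDL.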
> 
> In general, you will have better performance with more sets, so the
> 10-set config will outperform the 7-set config.
>  -- richard
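
A back-of-the-envelope view of the "more sets" point above (the
per-drive IOPS figure is an assumed value for a 7200rpm SATA drive):
each raidz vdev delivers roughly one drive's worth of small random
IOPS, so random performance scales with the number of sets, while
streaming bandwidth scales with the number of data drives (70 in
either layout).

    # Random-I/O scaling with the number of raidz2 sets (vdevs).
    PER_DRIVE_IOPS = 80   # assumed, for illustration only
    print(10 * PER_DRIVE_IOPS)   # 10-set config: ~800 random IOPS
    print(7 * PER_DRIVE_IOPS)    # 7-set config:  ~560 random IOPS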
> 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
