RAID6 is double parity.  That would make it ok to lose any 2 disks.

With RAID10 you could theoretically lose up to half of the disks as long as no two of them were in the same mirrored pair, but yeah, 2 in the same pair and you're done.

The nice thing about RAID10 is the read performance because there will be 2 copies of any block and you have double the chances of a head being near one for random access, and both can be read simultaneously for higher throughput. (haha remember when disks were round and had heads?)

.....this is all assuming the RAID controller doesn't have a hot spare feature or some such.  There are too many variations to generalize much.
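To put numbers on that pair-failure caveat: a quick combinatorial sketch (illustrative Python, the `raid10_two_disk_survival` helper is made up for this post, not from any RAID tool). RAID6 survives *any* two-disk loss; RAID10 survives a two-disk loss only when the failures land in different mirrors — 4 out of the 6 possible combinations in a 4-disk array.

```python
from itertools import combinations

def raid10_two_disk_survival(pairs):
    """Fraction of 2-disk failure combinations a RAID10 array survives.

    Disks are numbered 0..2*pairs-1; disks (2k, 2k+1) form mirror k.
    The array dies only when both failed disks are in the same mirror.
    """
    disks = range(2 * pairs)
    combos = list(combinations(disks, 2))
    fatal = sum(1 for a, b in combos if a // 2 == b // 2)
    return (len(combos) - fatal) / len(combos)

# 4-disk RAID10: 2 of the 6 two-disk failures hit the same mirror,
# so 2/3 of random double failures are survivable (RAID6 survives all of them).
print(raid10_two_disk_survival(2))
```

The odds improve as the array grows, since a larger share of disk pairs are in different mirrors — but unlike RAID6, the worst case is always fatal.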



On 12/30/2020 2:17 PM, Josh Baird wrote:
This isn't exactly true.

If you lose (2) disks in a (4) disk RAID10 array and both of the disks are in the same set (mirror), you're screwed.

On Wed, Dec 30, 2020 at 11:43 AM Josh Luthman <[email protected]> wrote:

    Raid5 means if you lose 2 disks you just lost all your data.

    Raid10 means if you lose 2 disks you're still OK.  Just get a
    replacement in there already!

    Josh Luthman
    24/7 Help Desk: 937-552-2340
    Direct: 937-552-2343
    1100 Wayne St
    Suite 1337
    Troy, OH 45373


    On Wed, Dec 30, 2020 at 10:17 AM Steve Jones
    <[email protected] <mailto:[email protected]>> wrote:

        ESX is the hardware "OS", so I can't see how you would achieve
        a software RAID, unless you want to sit ESX on the platform
        with each disk independent, then install your OS and build a
        RAID across the vdisks. That sounds like a disaster waiting to
        happen, but potentially not if you're backing up the VMDKs.
        Kind of weird, if I'm understanding what you're wanting to
        achieve. My operations platform is a RAID10 across SSDs; the
        performance over RAID5 was notable. The boss still builds a lot
        of RAID5; to me it seems like saving the cost of a disk is
        offset in the long term by replacing failed disks.
        Disks are relatively cheap; datacenter smart hands are not. A
        couple more disks, a little more redundancy, fewer writes per
        disk. I like to have a couple of independent disks in ESX
        depending on the host, for hot spares or static data storage.

        On Wed, Dec 30, 2020 at 8:27 AM Mark - Myakka Technologies
        <[email protected]> wrote:

            Helping my son set up an ESXi server. It's going into a
            datacenter a few states away. I have only done some basic
            ESXi setup when fooling around with Mikrotik. My other VM
            servers sit on top of Ubuntu. I have always used Ubuntu SW
            RAID10 when setting up my servers.

            Looks like ESXi can be HW RAID only. Can't find any way of
            doing SW RAID. Anything I'm missing?

            He is going to put SSDs in the server. I'm thinking of just
            going with RAID 1 for some redundancy. Spanning across SSDs
            does not seem to be worth it for performance. Thoughts on
            that?



            --

            Thanks,
             Mark                          [email protected]

            Myakka Technologies, Inc.
            www.Myakka.com


-- AF mailing list
            [email protected]
            http://af.afmug.com/mailman/listinfo/af_af.afmug.com




