On Sat 2003-02-15 at 16:22:22 -0500, [EMAIL PROTECTED] wrote:
> 
> Hey everyone,
> 
> Could someone explain to me the advantages and disadvantages of
> RAID10 vs RAID0+1 vs RAID1+0?

First note that RAID10 and RAID1+0 usually mean the same thing. You
can see that from your reference, peripheralstorage.com, which has no
page for raid1+0, only for raid10 and raid0+1.

Another thing: RAID10 is the more commonly used of the two (see below
for why).

> I am about to set up a system that needs high disk IO and also
> redundancy. Does Linux support RAID10 or just 0+1 and 1+0?

Linux (2.4) Software-RAID, which you are apparently speaking of
(http://en.tldp.org/HOWTO/Software-RAID-HOWTO.html), supports RAID
levels 0, 1, 4 and 5 and any combination thereof, i.e. RAID10 and
RAID0+1 work fine, as would RAID50.
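
For illustration, such a nested array (a stripe over two mirrors,
like your setup B below) can be described in /etc/raidtab for the
raidtools from the HOWTO. This is only an untested sketch; the
partition names and chunk sizes are assumptions:

    raiddev /dev/md0                  # first mirror
            raid-level            1
            nr-raid-disks         2
            chunk-size            4   # meaningless for RAID1, but mkraid wants it
            persistent-superblock 1
            device                /dev/hda1
            raid-disk             0
            device                /dev/hdc1   # other IDE channel, see below
            raid-disk             1

    raiddev /dev/md1                  # second mirror
            raid-level            1
            nr-raid-disks         2
            chunk-size            4
            persistent-superblock 1
            device                /dev/hdb1
            raid-disk             0
            device                /dev/hdd1
            raid-disk             1

    raiddev /dev/md3                  # stripe over the two mirrors
            raid-level            0
            nr-raid-disks         2
            chunk-size            32
            persistent-superblock 1
            device                /dev/md0
            raid-disk             0
            device                /dev/md1
            raid-disk             1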

[...]
> Also, what is the difference between doing a setup A like this:
> hda+hdb in RAID0 => md0
> hdc+hdd in RAID0 => md1
> md0+md1 in RAID1 => md3 == /var

That's RAID0+1 (a mirror over two striped sets).

> and setup B like this:
> hda+hdb in RAID1 => md0
> hdc+hdd in RAID1 => md1
> md0+md1 in RAID0 => md3 == /var

That's RAID10, i.e. RAID1+0 (a stripe over two mirrored pairs).

> Which one is better, and why (latency, throughput, etc)?

I have no definite answer about latency or throughput, but I would
expect them to be about the same in both cases, because both setups
have to perform the same reads and writes. Throughput in particular
is probably limited by your buses (IDE, SCSI, PCI, ...), not by the
RAID layout: two drives on one ATA/100 channel share that channel's
~100 MB/s, and a 32-bit/33 MHz PCI bus tops out around 133 MB/s.

RAID10 is the one more commonly used, because it has no obvious
disadvantages compared to RAID0+1, but offers better redundancy with
more than 4 disks: RAID0+1 fails as soon as one arbitrary disk in
each stripe has failed, while RAID10 only fails if both disks of the
same mirror fail. With six disks, for example, RAID0+1 (two mirrored
3-disk stripes) loses a whole stripe with the first dead disk, so any
of the three disks in the other stripe then becomes fatal; RAID10
(three striped 2-disk mirrors) only dies if the single partner of the
already-degraded mirror fails as well.

In your case performance depends heavily on the underlying IDE
system, which behaves very badly when writing to two disks on the
same channel. Timing issues with your IDE controller may therefore
play a far bigger role than the chosen RAID level; you will have to
benchmark this yourself. For the same reason, setup B should put the
two disks of each mirror on different IDE channels.
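
For a first impression, a crude sequential test is enough (only a
sketch; hdparm may not report meaningful numbers for md devices, and
the file path is an assumption):

    # raw sequential read from the array
    hdparm -t /dev/md3

    # rough sequential write through the filesystem on /var
    dd if=/dev/zero of=/var/testfile bs=1M count=512
    rm /var/testfile

For anything serious, a real benchmark like bonnie++ or tiobench on
the mounted filesystem tells you more.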

The same is true for failure behaviour: if one disk on an IDE channel
dies, the other one on that channel usually becomes unreachable, too.
As long as you take care of this (both disks of a mirror on different
IDE channels), you should be fine.

I.e. setup B as written above is broken, because with hda+hdb the two
disks of the mirror block each other on every read/write access and
will also fail together. If you instead set it up as hda+hdc in RAID1
=> md0, hdb+hdd in RAID1 => md1, md0+md1 in RAID0 => md3, you should
be fine.
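
With mdadm (an alternative to the raidtools) that corrected layout
could be created roughly like this (again an untested sketch; the
partition names are assumptions):

    # mirrors spanning the two IDE channels
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1

    # stripe over the two mirrors
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md0 /dev/md1

    # watch the initial resync
    cat /proc/mdstat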

Either way, your redundancy will only be the same as that of a normal
RAID1 (on a SCSI bus it would be better), and the same probably goes
for performance.

HTH,

        Benjamin.
