Re: [CentOS] Software RAID10 - which two disks can fail?

2014-04-08 Thread Christopher Chan
On Tuesday, April 08, 2014 03:47 AM, Rafał Radecki wrote:
 As far as I know, raid10 is roughly a raid0 built on top of two raid1 arrays
 (http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_1.2B0). So I
 think that by default in my case:
No, Linux md raid10 is NOT a nested raid setup where you build a raid0 
on top of two raid1 arrays.


 /dev/sda6 and /dev/sdb6 form the first raid1
 /dev/sdd6 and /dev/sdc6 form the second raid1

 So is it the case that if I fail/remove, for example:
 - /dev/sdb6 and /dev/sdc6 (from different raid1s), the raid10 will still be
 usable and the data will be ok?
 - /dev/sda6 and /dev/sdb6 (from the same raid1), the raid10 will not be
 usable and the data will be lost?
The man page for md, which has a section on RAID10, describes a 
possibility that is absolutely impossible with a nested raid1+0 
setup.

Excerpt: "If, for example, an array is created with 5 devices and 2 
replicas, then space equivalent to 2.5 of the devices will be available, 
and every block will be stored on two different devices."

So contrary to the statement "RAID10 provides a combination of RAID1 
and RAID0, and is sometimes known as RAID1+0", Linux md raid10 is NOT 
raid1+0. It is something entirely new and different but unfortunately 
called raid10, perhaps because it can produce a raid1+0-like layout as 
well as different layouts built on similar concepts.
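
To make that concrete, here is a rough sketch of the "near" layout with
2 replicas on 4 devices, based on my reading of the md man page (D0-D3
are the members in RaidDevice order, A0, A1, ... are the chunks of the
array):

    D0   D1   D2   D3
    A0   A0   A1   A1
    A2   A2   A3   A3
    A4   A4   A5   A5

With an even number of devices, near=2 happens to place data the same
way raid1+0 would, but md still manages it as one array; with an odd
number of devices (like the 5-device example above) the copies wrap
around the members and there are no raid1 pairs at all.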



 I read, in the context of raid10, about replicas of data (2 by default) and
 the data layout (near/far/offset). I see in the output of mdadm -D the line
 "Layout : near=2, far=1" and am not sure which layout is actually used and
 how it influences data layout/distribution in my case :|

 I would really appreciate a definitive answer as to which partitions I can
 remove at the same time and which I cannot, because I need to perform some
 disk maintenance tasks on this raid10 array. Thanks for all help!


If you want something that you can be sure about, do what I do. Make two 
raid1 md devices and then use them to make a raid0 device. raid10 is 
something cooked up by Neil Brown, but it is not raid1+0. 
http://en.wikipedia.org/wiki/Linux_MD_RAID_10#LINUX-MD-RAID-10
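
For reference, the nested setup is just three mdadm --create calls,
something like this (a sketch only -- the md device numbers are arbitrary,
the member names are taken from your mdadm -D output, and --create will
wipe whatever is on the members):

# two mirrors first
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdc6 /dev/sdd6
# then stripe the two mirrors together
mdadm --create /dev/md12 --level=0 --raid-devices=2 /dev/md10 /dev/md11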


Re: [CentOS] Software RAID10 - which two disks can fail?

2014-04-08 Thread Rafał Radecki
The raid10 name is very misleading. I came to the same conclusion
yesterday: for the sake of clarity I will make two raid1 arrays and combine
them into a raid0 ;)

Thanks for all info.

BR,
Rafal.




Re: [CentOS] Software RAID10 - which two disks can fail?

2014-04-08 Thread John R Pierce
On 4/8/2014 12:35 AM, Rafał Radecki wrote:
 The raid10 name is very misleading. I came to the same conclusion
 yesterday: for the sake of clarity I will make two raid1 arrays and combine
 them into a raid0 ;)

 Thanks for all info.

it's striped mirrors, it's just that it treats it all as one big raid 
rather than as distinct subraids.

having the separate mirrors and stripe can be an annoying complication; 
I've done it that way plenty of times before.

/dev/md2:
    /dev/md0:
        /dev/sda1
        /dev/sdb1
    /dev/md1:
        /dev/sdc1
        /dev/sdd1

meh.
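
fwiw, the difference shows up in /proc/mdstat too -- one array versus
three. roughly like this (a sketch with assumed names, not copied from a
real box):

md2 : active raid0 md1[1] md0[0]
md1 : active raid1 sdd1[1] sdc1[0]
md0 : active raid1 sdb1[1] sda1[0]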



[CentOS] Software RAID10 - which two disks can fail?

2014-04-07 Thread Rafał Radecki
Hi All.

I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:

mdadm -D /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Apr 27 09:25:05 2009
     Raid Level : raid10
     Array Size : 973827968 (928.71 GiB 997.20 GB)
  Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Apr  7 21:26:29 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : 1403e5aa:3152b3f8:086582aa:c95c4fc7
         Events : 0.38695092

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
       2       8       54        2      active sync   /dev/sdd6
       3       8       38        3      active sync   /dev/sdc6

As far as I know, raid10 is roughly a raid0 built on top of two raid1 arrays
(http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_1.2B0). So I
think that by default in my case:

/dev/sda6 and /dev/sdb6 form the first raid1
/dev/sdd6 and /dev/sdc6 form the second raid1

So is it the case that if I fail/remove, for example:
- /dev/sdb6 and /dev/sdc6 (from different raid1s), the raid10 will still be
usable and the data will be ok?
- /dev/sda6 and /dev/sdb6 (from the same raid1), the raid10 will not be
usable and the data will be lost?

I read, in the context of raid10, about replicas of data (2 by default) and
the data layout (near/far/offset). I see in the output of mdadm -D the line
"Layout : near=2, far=1" and am not sure which layout is actually used and
how it influences data layout/distribution in my case :|
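
If it helps, I understand /proc/mdstat also prints the layout in words;
for a near=2 array it should show something like this, if I read the docs
correctly (my guess at the format, block count taken from the mdadm -D
output above):

md1 : active raid10 sda6[0] sdb6[1] sdd6[2] sdc6[3]
      973827968 blocks 64K chunks 2 near-copies [4/4] [UUUU]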

I would really appreciate a definitive answer as to which partitions I can
remove at the same time and which I cannot, because I need to perform some
disk maintenance tasks on this raid10 array. Thanks for all help!

BR,
Rafal.


Re: [CentOS] Software RAID10 - which two disks can fail?

2014-04-07 Thread John R Pierce
On 4/7/2014 12:47 PM, Rafał Radecki wrote:
 As far as I know, raid10 is roughly a raid0 built on top of two raid1 arrays
 (http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_1.2B0). So I
 think that by default in my case:

 /dev/sda6 and /dev/sdb6 form the first raid1
 /dev/sdd6 and /dev/sdc6 form the second raid1

 So is it the case that if I fail/remove, for example:
 - /dev/sdb6 and /dev/sdc6 (from different raid1s), the raid10 will still be
 usable and the data will be ok?
 - /dev/sda6 and /dev/sdb6 (from the same raid1), the raid10 will not be
 usable and the data will be lost?

I'm not sure you can verify that. I would play it safe, and only drop 
one drive at a time.
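
something along these lines per member, letting each rebuild finish
before touching the next (device names are just the ones from your
mdadm -D output, adjust as needed):

mdadm /dev/md1 --fail /dev/sdb6
mdadm /dev/md1 --remove /dev/sdb6
# ...do the maintenance on that disk...
mdadm /dev/md1 --add /dev/sdb6
cat /proc/mdstat    # wait for the resync to complete before the next one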



-- 
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Software RAID10 - which two disks can fail?

2014-04-07 Thread Keith Keller
On 2014-04-07, Rafał Radecki radecki.ra...@gmail.com wrote:
 I would really appreciate a definitive answer as to which partitions I can
 remove at the same time and which I cannot, because I need to perform some
 disk maintenance tasks on this raid10 array. Thanks for all help!

You're likely to get the most definitive answer from the linux RAID
mailing list.

http://vger.kernel.org/vger-lists.html#linux-raid

Many of the md developers hang out there, and should know the correct
answer.  (I'm afraid I don't know it myself.)

--keith

-- 
kkel...@wombat.san-francisco.ca.us

