OK... so I retried...  Here is /proc/mdstat after installing and booting, and
waiting to ensure all syncing had completed:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      19529656 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      48826296 blocks super 1.2 [2/2] [UU]
      
md2 : active raid1 sda3[0] sdb3[1]
      175779768 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
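
As a baseline, the per-array view can also be checked at this point (illustrative command only, using the device names above):

    sudo mdadm --detail /dev/md0     # likewise for md1 and md2; should show "State : clean" with both members active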

Next, I shut down the system and remove one of the two disks.  On reboot, I
check /proc/mdstat and note the degraded arrays with missing members:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active (auto-read-only) raid1 sda2[0]
      19529656 blocks super 1.2 [2/1] [U_]
      
md0 : active raid1 sda1[0]
      48826296 blocks super 1.2 [2/1] [U_]
      
md2 : active raid1 sda3[0]
      175779768 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>
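
The same thing shows up per array (again an illustrative command, not captured output):

    sudo mdadm --detail /dev/md0     # should report a degraded State with one slot listed as removed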

Then I shut down and re-insert the removed drive:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      19529656 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      48826296 blocks super 1.2 [2/1] [U_]
      
md2 : active raid1 sda3[0]
      175779768 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

Then I try manually adding the missing members back per the test case:
bladernr@ubuntu"~$ sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: /dev/sdb1 reports being an active member for /dev/md0, but a --re-add 
fails.
mdadm: not performing --add as that would convert /dev/sdb1 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdb1" first.
bladernr@ubuntu:~$ sudo mdadm --re-add /dev/md0 /dev/sdb1
mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible.
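
For anyone triaging this, the metadata on the old member can be compared against the running array before zeroing anything (illustrative commands only, with the device names from above):

    sudo mdadm --examine /dev/sdb1 | grep -iE 'state|events'    # what the removed member's superblock says
    sudo mdadm --detail  /dev/md0  | grep -iE 'state|events'    # what the running, degraded array says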

I got the same result for /dev/md2 when trying to re-add /dev/sdb3, so I zero
the superblocks.  That wipes the md metadata from each partition, so it gets
added back as though it were a brand-new disk and the array has to do a full
resync.

bladernr@ubuntu"~$ sudo mdadm --zero-superblock /dev/sdb3
bladernr@ubuntu"~$ sudo mdadm --zero-superblock /dev/sdb1
bladernr@ubuntu"~$ sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1
bladernr@ubuntu"~$ sudo mdadm --add /dev/md2 /dev/sdb3
mdadm: added /dev/sdb3

bladernr@ubuntu:~$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      19529656 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdb1[2] sda1[0]
      48826296 blocks super 1.2 [2/1] [U_]
      [==>..................]  recovery = 11.9% (5819456/48826296) finish=11.9min speed=59970K/sec
      
md2 : active raid1 sdb3[2] sda3[0]
      175779768 blocks super 1.2 [2/1] [U_]
        resync=DELAYED
      
unused devices: <none>
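
The rebuild can then just be left to finish, e.g. (illustrative):

    watch -n 5 cat /proc/mdstat          # watch until md0 and md2 are back to [UU]
    sudo mdadm --wait /dev/md0 /dev/md2  # or block until recovery on both completes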

According to the test case, the most I should have to do is plug the disk
back in and reboot the server, which should cause mdadm to automatically
re-add the disk and start re-syncing.  At worst, I should only need to add
the disk back in manually with --add (or --re-add).

What I am actually having to do is essentially wipe the RAID metadata on the
partitions backing the ext4 LUNs and add them back in as brand-new disks.
Again, this did not happen with the SWAP md device (md1 above), which DID
boot degraded and then re-connected automatically when I put the disk back in.
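
In short, with the device names from this run (expected per the test case vs. what actually works here):

    # expected: re-insert the drive, reboot, and at worst re-add manually:
    sudo mdadm --re-add /dev/md0 /dev/sdb1
    sudo mdadm --re-add /dev/md2 /dev/sdb3

    # what actually works: wipe the md metadata and rebuild from scratch:
    sudo mdadm --zero-superblock /dev/sdb1
    sudo mdadm --zero-superblock /dev/sdb3
    sudo mdadm --add /dev/md0 /dev/sdb1
    sudo mdadm --add /dev/md2 /dev/sdb3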


** Changed in: mdadm (Ubuntu)
       Status: Incomplete => Confirmed

** Changed in: linux (Ubuntu)
       Status: Incomplete => New

** Changed in: mdadm (Ubuntu)
       Status: Confirmed => New

https://bugs.launchpad.net/bugs/925280

Title:
  Software RAID fails to rebuild after testing degraded cold boot
