Public bug reported:

On a Sun e420 during the install, I created my volumes as follows:

/dev/sda1 /boot
/dev/md0 /home  (/dev/sd[ab]8)
/dev/md1 / (/dev/sd[ab]2)
/dev/md2 /usr (/dev/sd[ab]4)
/dev/md3 swap (/dev/sd[ab]5)
/dev/md4 /tmp (/dev/sd[ab]6)
/dev/md5 /var (/dev/sd[ab]7)

and completed the install.  Upon reboot, my RAID volumes were started
as:

/dev/sda1 /boot
/dev/md0 /
/dev/md1 /usr
/dev/md2 swap
/dev/md3 /tmp
/dev/md4 /var
/dev/md5 /home

apparently started in order of discovery of their component partitions
(/dev/sda1 through /dev/sda8), ignoring both the preferred minor recorded
in each superblock and /etc/mdadm.conf, which rendered my system
unbootable until I did some surgery.
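
For the record, the surgery amounted to stopping each mis-numbered array
and reassembling it under its intended minor. A rough sketch for the root
array (device names assumed), where --update=super-minor rewrites the
preferred minor recorded in each member's superblock to match:

# stop the wrongly numbered array, then reassemble it under the
# intended name, updating each superblock's preferred minor
mdadm --stop /dev/md0
mdadm --assemble /dev/md1 --update=super-minor /dev/sda2 /dev/sdb2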

After the surgery, I applied all available updates (including kernel
2.6.15-25) and did an event-free reboot.

At this point, the raid volumes are:

/dev/md0 /boot (/dev/sd[ab]1)
/dev/md1 / (/dev/sd[ab]2)
/dev/md2 /usr (/dev/sd[ab]4)
/dev/md3 swap (/dev/sd[ab]5)
/dev/md4 /tmp (/dev/sd[ab]6)
/dev/md5 /var (/dev/sd[ab]7)
/dev/md6 /home  (/dev/sd[ab]8)

I then created two new RAID5 volumes:

mdadm -C -l5 -c64 -n6 -x0 /dev/md11 /dev/sd[cdefgh]
mdadm -C -l5 -c64 -n6 -x0 /dev/md12 /dev/sd[ijklmn]
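
(Here -l5 selects RAID5, -c64 a 64K chunk size, -n6 six active devices,
and -x0 no hot spares. The lone "spare" in the examine output below is
just mdadm's normal initial RAID5 build, which recovers onto the last
device while the array syncs.)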

As you can see, the components of my new RAID volumes have the preferred
minor recorded correctly prior to rebooting (the array is still building
here):

[EMAIL PROTECTED]:~# mdadm -E /dev/sdc   (sdd through sdh report similarly)
/dev/sdc:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 71063e4f:f3a0c78b:12a4584b:a8cd9402
  Creation Time : Thu Jun 15 15:21:36 2006
     Raid Level : raid5
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 11

    Update Time : Thu Jun 15 15:28:42 2006
          State : clean
 Active Devices : 5
Working Devices : 6
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 54875fd8 - correct
         Events : 0.48

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       32        0      active sync   /dev/sdc

   0     0       8       32        0      active sync   /dev/sdc
   1     1       8       48        1      active sync   /dev/sdd
   2     2       8       64        2      active sync   /dev/sde
   3     3       8       80        3      active sync   /dev/sdf
   4     4       8       96        4      active sync   /dev/sdg
   5     5       0        0        5      faulty removed
   6     6       8      112        6      spare   /dev/sdh
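
To spot-check every member at once, a loop like this does the trick:

# print just the recorded preferred minor for each member disk
for d in /dev/sd[c-h]; do
    printf '%s: ' "$d"; mdadm -E "$d" | grep 'Preferred Minor'
done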


My mdadm.conf is set correctly:

[EMAIL PROTECTED]:~# cat /etc/mdadm/mdadm.conf
DEVICE partitions
DEVICE /dev/sd[cdefghijklmn]
ARRAY /dev/md11 level=raid5 num-devices=6 UUID=71063e4f:f3a0c78b:12a4584b:a8cd9402
ARRAY /dev/md12 level=raid5 num-devices=6 UUID=456e8cd0:0f23591b:14a0ff9f:1a302d54
ARRAY /dev/md6 level=raid1 num-devices=2 UUID=4b33d5c5:80846d59:dba11e6d:814823f3
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=76f34ac9:d74a2d9c:d0fc0f95:eab326d2
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=0eed0b47:c6e81eea:3ed1c7a6:3ed2a756
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=1d626217:4d20944a:5dbbcb0d:dd7c6e3d
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=102303be:19a3252d:48a3f79e:33f16ce1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=30eedd12:b5b69786:97b18df5:7efabcbf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=9b28e9d5:944316d7:f0aacc8b:d5d82b98
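
For what it's worth, these ARRAY lines match the form mdadm itself emits,
so the file can be regenerated from the running arrays:

# append ARRAY lines for all currently running arrays
mdadm --detail --scan >> /etc/mdadm/mdadm.conf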

And yet when I reboot, /dev/md11 is started as /dev/md7 and /dev/md12 is
started as /dev/md8.

[EMAIL PROTECTED]:~# cat /proc/mdstat
Personalities : [raid1] [raid5]
md8 : active raid5 sdi[0] sdn[6] sdm[4] sdl[3] sdk[2] sdj[1]
      177832000 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
      [>....................]  recovery =  0.0% (28416/35566400) finish=561.1min speed=10520K/sec

md7 : active raid5 sdc[0] sdh[6] sdg[4] sdf[3] sde[2] sdd[1]
      177832000 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
      [>....................]  recovery =  0.0% (29184/35566400) finish=567.9min speed=10420K/sec

md6 : active raid1 sda8[0] sdb8[1]
      6474112 blocks [2/2] [UU]

md5 : active raid1 sda7[0] sdb7[1]
      14651200 blocks [2/2] [UU]

md4 : active raid1 sda6[0] sdb6[1]
      995904 blocks [2/2] [UU]

md3 : active raid1 sda5[0] sdb5[1]
      7815552 blocks [2/2] [UU]

md2 : active raid1 sda4[0] sdb4[1]
      4996096 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      120384 blocks [2/2] [UU]

unused devices: <none>
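
As a stopgap after boot, the mis-numbered arrays can be stopped and
reassembled by UUID under their proper names (a sketch, assuming nothing
is mounted on them yet):

# stop the wrongly numbered array, then reassemble it under the
# name from mdadm.conf, matching member disks by UUID
mdadm --stop /dev/md7
mdadm --assemble /dev/md11 --uuid=71063e4f:f3a0c78b:12a4584b:a8cd9402 /dev/sd[c-h]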

You'll notice that the preferred minor is still set correctly:

[EMAIL PROTECTED]:~# mdadm -E /dev/sdc
(output identical to the mdadm -E listing above: Preferred Minor : 11)
The preferred minor is available from within the initramfs, so there's no
reason it shouldn't be used to restart the arrays (for i in /dev/hd*
/dev/sd*; do mdadm -E $i; done, etc.); a sketch of that logic follows.
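
A minimal sketch, assuming a shell, mdadm, and the usual device nodes are
available inside the initramfs:

# collect each superblock's preferred minor, then assemble every
# array under the device node that minor implies
for dev in /dev/hd* /dev/sd*; do
    mdadm -E "$dev" 2>/dev/null | sed -n 's/^Preferred Minor : *//p'
done | sort -un | while read minor; do
    mdadm --assemble "/dev/md$minor" --super-minor="$minor" /dev/hd* /dev/sd*
done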

I'm fairly certain that I've used similar non-sequential md layouts in
the past on Ubuntu (Hoary/Breezy) and they worked without problems,
although I can't say that I've done any this way in Dapper.

And before someone reads this and says I'm asking for trouble: it *is*
possible to do a RAID1 /boot safely on SPARC, but that's a separate
issue.

** Affects: initramfs-tools (Ubuntu)
     Importance: Untriaged
         Status: Unconfirmed

-- 
initramfs/mdrun doesn't honor preferred minor when starting RAID volumes
https://launchpad.net/bugs/49914
