Public bug reported:

Binary package hint: mdadm

This seems to be similar to bug 188392, but with some different
symptoms, so I am not sure it is appropriate to add it there.  Apologies
if it turns out that this should have been reported there.

I am running the Jaunty beta, installed recently from the Alternate CD
because I already had an established RAID1 setup (with LVM on top of
part of it) from when I was running Debian unstable.  I have two SATA
disks, each partitioned into 4 partitions, of which one is swap and the
other 3 are RAID members for /boot, root and an LVM PV.  For some
historic reason I don't remember, the LVM md device is specified
differently - my mdadm.conf looks like this:

<code>
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=5a1acf92:0fe3ad1d:0fab2104:a29fa92f
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=29d24a32:1d762ed3:0fab2104:a29fa92f
ARRAY /dev/md/2 level=raid1 metadata=1.0 num-devices=2 UUID=a7a0d77a:7b42f79b:a31e60ac:aaa86c0c name=kanger:2
</code>
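
For reference, the arrays actually assembled by the kernel can be
checked against this file with something like the following (a sketch;
run as root, standard mdadm commands):

<code>
# List the arrays mdadm currently sees, in mdadm.conf ARRAY-line format
mdadm --detail --scan
# Quick view of each array's members and sync state
cat /proc/mdstat
</code>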

I am about to update my hardware to replace the SATA disks with bigger
ones, but prior to doing this I needed to find out physically which
device was /dev/sda and which was /dev/sdb, so I decided to fail one
device (by pulling its cables out, with the power off), boot up, shut
down again, reconnect the disk and reboot.

The first few steps worked fine - I removed the drive, powered up again
and was confronted with initramfs asking me whether it was OK to
continue.  As I was working on something else it timed out and dropped
me into busybox, but I rebooted and confirmed that it was.  My system
came up with one disk failed, and worked perfectly.  I then shut down
again.
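
At this point the degraded state could be inspected with something like
this (a sketch; the md device names are the ones from my mdadm.conf
above):

<code>
# A degraded RAID1 shows up in /proc/mdstat as [U_] or [_U]
cat /proc/mdstat
# Per-array detail, including which member is missing or marked removed
# (likewise for /dev/md3 and /dev/md/2)
mdadm --detail /dev/md1
</code>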

When I replaced the drive and started up again, I started getting weird
errors related to parts of the system not working.  A quick examination
soon made it clear what had happened.

/dev/md1 and /dev/md3 re-assembled perfectly.  /dev/md/2 did NOT
assemble during boot, and since it held the LVM PV and parts of the
filesystem, nothing that depended on it would run (in particular gdm,
so I was thrown into console mode).

The only thing different about /dev/md/2 is the metadata=1.0 line, so
it looks as though this newer superblock format is not recovering in
the same way as the default older format.  This is (I think) a bug.
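
The superblock version on each member, and whether the two halves of
/dev/md/2 disagree, can be seen with something like this (device names
assumed from my partition layout above):

<code>
# Examine the superblock on each member of /dev/md/2; "Version" shows
# the metadata format, and a mismatch in the "Events" counter between
# the two halves is typically what stops a normal assembly
mdadm --examine /dev/sda4
mdadm --examine /dev/sdb4
</code>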

I managed to recover this device manually with mdadm --assemble --force
/dev/md/2 /dev/sd[ab]4, waiting until /dev/sdb4 had fully resynced
(for 143GB it took about 2 hours) and then rebooting.  It didn't shut
down properly, but after a poweroff and full reboot, it came up
properly again.
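
For completeness, the manual recovery was roughly the following (device
names as above):

<code>
# Force assembly from both members despite the stale superblock on sdb4
mdadm --assemble --force /dev/md/2 /dev/sd[ab]4
# Watch the resync progress; for 143GB it took about 2 hours
watch cat /proc/mdstat
# Once the resync had finished, reboot
reboot
</code>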

** Affects: mdadm (Ubuntu)
     Importance: Undecided
         Status: New

-- 
Raid 1 configuration failed to start on boot
https://bugs.launchpad.net/bugs/362938
