** Description changed:

  I can't boot into 2.6.27-2 or -3 with my software RAID setup. It's an
  ASUS P5N-E SLI motherboard, and the disks look like this:
  
  Disk /dev/sda: 500.1 GB, 500107862016 bytes
  255 heads, 63 sectors/track, 60801 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x3a86cbf3
  
     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1               1         243     1951866   82  Linux swap / Solaris
  /dev/sda2             244       30637   244139805   fd  Linux RAID autodetect
  /dev/sda3   *       30638       60800   242284297+   7  HPFS/NTFS
  
  Disk /dev/sdb: 500.1 GB, 500107862016 bytes
  255 heads, 63 sectors/track, 60801 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x3a86cbf3
  
     Device Boot      Start         End      Blocks   Id  System
  /dev/sdb1               1         243     1951866   82  Linux swap / Solaris
  /dev/sdb2             244       30637   244139805   fd  Linux RAID autodetect
  /dev/sdb3   *       30638       60800   242284297+   7  HPFS/NTFS
  
  However, on boot I'm dropped into BusyBox with messages:
  
  ALERT! /dev/md0 does not exist
  md0 : inactive dm3[0](S)
  
  At this point I thought I'd try "mdadm --assemble --scan", which does,
  in fact, activate md0 - but only with one of the two RAID partitions
  active. If I try to mdadm --add the other one, I get "mdadm: Cannot
  open /dev/sda2: Device or resource busy".
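
  For reference, the sequence from the BusyBox (initramfs) shell was
  roughly this (reconstructed from memory):

  mdadm --assemble --scan          # md0 comes up, but with only one member
  mdadm /dev/md0 --add /dev/sda2   # trying to re-add the missing member
  mdadm: Cannot open /dev/sda2: Device or resource busy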
  
  I've checked the initrd, and both the md and raid1 modules are included.
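
  For reference, I checked by listing the initramfs contents, roughly
  like this (the initrd filename is from memory):

  zcat /boot/initrd.img-2.6.27-3-generic | cpio -t | grep -E 'md-mod|raid1'

  and both md-mod.ko and raid1.ko show up.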
  
  Note: booting into 2.6.24-21 (and other earlier kernels) works
  absolutely fine, so that's what I'm running at the moment.
  
  Booting into 2.6.27-3 recovery mode gives more details on the console.
  Both sda and sdb disks (and partitions thereof) are detected ok, but
  just prior to being dropped into BusyBox there's a load of messages like
  this:
  
  md: md0 stopped
  md: bind<dm-2>
  md: md0 stopped
  md: unbind<dm-2>
  md: export_rdev(dm-2)
  md: bind<dm-2>
  md: md0 stopped
  md: unbind<dm-2>
  md: export_rdev(dm-2)
  md: bind<dm-2>
  md: md0 stopped
  md: unbind<dm-2>
  md: export_rdev(dm-2)
  md: bind<dm-2>
  Done.
  ** WARNING: There appears to be one or more degraded RAID devices **
  
  (However, both drives are fine, as I can boot earlier kernels without
  problems.)
  
  Then I'm offered the option to start the degraded device. If I choose
  "Y", md0 starts, but with only one drive active (similar to the result
  I got from mdadm --assemble --scan).
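
  Checking the array state at that point (commands roughly as I ran
  them):

  cat /proc/mdstat          # md0 is active, but only one member is listed
  mdadm --detail /dev/md0   # one active device, the other slot removed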
  
  Some kind of race condition on drive detection, maybe?
+ 
+ Update - 15th September
+ 
+ Ok - I think I may be getting to the bottom of this. I *think* kernel
+ 2.6.27 is picking up a FakeRAID mirror set up on my NVIDIA chipset
+ motherboard. This is a change in behavior from previous kernels, which
+ didn't detect the NVIDIA mirror, so I could use software RAID on the
+ two partitions, sda2 and sdb2. Now, however, the kernel is picking up a
+ dm device and attempting to boot from that. Of course, since the
+ FakeRAID set is a RAID-1 mirror, it presents as a single device, so the
+ original software RAID mirror appears to have only one member rather
+ than two, and the kernel complains that it's degraded.
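+
+ If that theory is right, the NVIDIA set should be visible from the
+ initramfs shell with something along these lines (I haven't captured
+ the exact output yet):
+
+ dmraid -r        # lists the raw disks claimed by the BIOS RAID metadata
+ dmsetup ls       # lists the device-mapper nodes (the dm-N devices above)
+ ls /dev/mapper   # the nvidia_* mapping should appear here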
+ 
+ Maybe if I disable fakeraid support until after boot then manually
+ modprobe dm-mirror... ?
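+
+ Two things I'm planning to try (names/paths from memory, so treat this
+ as a sketch). First, boot the 2.6.27 kernel with "nodmraid" on the
+ command line, which I believe the dmraid initramfs scripts honour, e.g.
+ in /boot/grub/menu.lst:
+
+ kernel /boot/vmlinuz-2.6.27-3-generic root=/dev/md0 ro nodmraid
+
+ Failing that, remove dmraid from the initramfs entirely and rebuild it:
+
+ sudo apt-get remove dmraid
+ sudo update-initramfs -u -k all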

-- 
Intrepid boot fails on 2.6.27 with software RAID1
https://bugs.launchpad.net/bugs/269411