Re: mdadm error - superfluous RAID member

2012-06-15 Thread Tom H
On Wed, Jun 13, 2012 at 6:36 PM, Steve Dowe s...@warpuniversal.co.uk wrote:
 On 13/06/12 23:15, Tom H wrote:

 Since the 1.1 and 1.2 metadata formats store the superblock at the beginning
 rather than at the end, perhaps using a partitioned mdraid device with that
 metadata works with squeeze.

 Good idea.  I'll boot it up with a live CD and report back soon.

I don't think that you can change the metadata version without re-formatting.
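
If you do end up re-creating the array, something along these lines should do it (a sketch only, assuming the members are /dev/sd[a-f]2 as described earlier in the thread; re-creating destroys the array's contents, so only after a full backup):

 # show the superblock version of one member (0.90.00 is the old end-of-device format)
 mdadm --examine /dev/sda2 | grep -i version

 # re-create the array with 1.2 metadata, which lives near the start of each member
 mdadm --create /dev/md0 --metadata=1.2 --level=5 --raid-devices=5 --spare-devices=1 /dev/sd[a-f]2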





mdadm error - superfluous RAID member

2012-06-13 Thread Steve Dowe

Hi,

I'm trying to re-use an older server, installing squeeze (6.0.5).  I'm 
using software RAID and LVM on the machine (details below).  But I must 
be doing something wrong with the disk setup stage in the installer, as 
when it boots I see an error flash up quickly:


 error: superfluous RAID member (5 found)

It appears that the initramfs then gets loaded, the RAID detection fails 
and it then looks for the LVM volume group, which it can't find (as the 
LVM group exists on the RAID device).  I see this output:


 Loading, please wait...
 mdadm: No devices listed in conf file were found.
  Volume group vgbiff not found
  Skipping volume group vgbiff
  Unable to find LVM volume vgbiff/lvroot
  [same messages appear but for lvswap]
 Gave up waiting for root device [snip]
...

It then drops me into the BusyBox shell, with initramfs prompt.

I can then activate the RAID simply by doing

 (initramfs) mdadm --assemble --scan
 mdadm: /dev/md/0 has been started with 5 drives and 1 spare.

and then activate the volume group, using:

  (initramfs) vgchange -a y
  2 logical volume(s) in volume group vgbiff now active

Exiting the busybox shell then boots the system.

The basic configuration is:
- Xeon (64-bit capable) w/4GB RAM
- PCI SCSI controller
- 6 x 73GB SCSI drives

During install, on each drive I created a 500MB primary partition (with 
/dev/sda1 being for /boot) and then a second partition for Linux s/w 
RAID (partition type set to fd).


In /dev/md0 I then created an LVM partition, and set up the volume group 
to contain two volumes - one for swap, and one for /.  /dev/md0 is 
comprised of 5 drives running in RAID5, with one hot spare.


During installation, I took pains to wipe all the drives and create all 
partitions anew.


When booted, I checked /etc/default/mdadm.  The values INITRDSTART='all' 
and AUTOSTART=true are both set.  I also set VERBOSE=true to give me 
more output when creating a new initramfs.  I checked the contents of 
/etc/mdadm/mdadm.conf - which seems fine.
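
As a quick cross-check (both commands are read-only), the ARRAY line mdadm would generate for the running array can be compared with the one the initramfs will actually use:

 # ARRAY line describing the currently running array
 mdadm --detail --scan

 # ARRAY line(s) present in the conf file
 grep ^ARRAY /etc/mdadm/mdadm.conf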


I then issued update-initramfs -vu, and saw the following:

 I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
 I: mdadm: will start all available MD arrays from the initial ramdisk.
 I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.

and the last output before cpio builds the initial ramdisk is

 Calling hook dmsetup

- so, with my limited knowledge, this suggests the device mapper is also
incorporated into the initramfs.


When I take a peek into /boot/grub/grub.cfg I see:

 insmod raid
 insmod raid5rec
 insmod mdraid
 insmod lvm

in the 00_header section.


I'm running low on ideas now.  Re-installing grub doesn't help.  Running 
update-grub simply dumps out many more of those error messages:


 error: superfluous RAID member (5 found).
 [repeats 17 times]

So it does point to grub being at fault somewhere, rather than the initrd.
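
Since update-grub calls grub-probe behind the scenes, running the probe directly may show whether grub-probe itself is what emits the error (both commands are read-only):

 # ask grub which filesystem and which drive it thinks /boot and / live on
 grub-probe -t fs /boot
 grub-probe -t drive /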

Have I missed something blindingly obvious?


Thanks again,
Steve

--
Steve Dowe

Warp Universal Limited
http://warp2.me/sd





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Christofer C. Bell
On Wed, Jun 13, 2012 at 3:47 AM, Steve Dowe s...@warpuniversal.co.uk wrote:
 Hi,

 I'm trying to re-use an older server, installing squeeze (6.0.5).  I'm using
 software RAID and LVM on the machine (details below).  But I must be doing
 something wrong with the disk setup stage in the installer, as when it
 boots I see an error flash up quickly:

  error: superfluous RAID member (5 found)

 It appears that the initramfs then gets loaded, the RAID detection fails and
 it then looks for the LVM volume group, which it can't find (as the LVM
 group exists on the RAID device).

I don't believe you can boot from a striped volume (raid5 being a
stripe + parity).  I found some instructions that may allow this to
work, but they require building a non-standard initrd:

http://nil-techno.blogspot.com/2009/02/booting-fakeraid-raid5-linux-half-assed.html

-- 
Chris





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Tom H
On Wed, Jun 13, 2012 at 4:47 AM, Steve Dowe s...@warpuniversal.co.uk wrote:

 I'm trying to re-use an older server, installing squeeze (6.0.5).  I'm using
 software RAID and LVM on the machine (details below).  But I must be doing
 something wrong with the disk setup stage in the installer, as when it
 boots I see an error flash up quickly:

  error: superfluous RAID member (5 found)

 It appears that the initramfs then gets loaded, the RAID detection fails and
 it then looks for the LVM volume group, which it can't find (as the LVM
 group exists on the RAID device).  I see this output:

  Loading, please wait...
  mdadm: No devices listed in conf file were found.
  Volume group vgbiff not found
  Skipping volume group vgbiff
  Unable to find LVM volume vgbiff/lvroot
  [same messages appear but for lvswap]
  Gave up waiting for root device [snip]
 ...

 It then drops me into the BusyBox shell, with initramfs prompt.

 I can then activate the RAID simply by doing

  (initramfs) mdadm --assemble --scan
  mdadm: /dev/md/0 has been started with 5 drives and 1 spare.

 and then activate the volume group, using:

  (initramfs) vgchange -a y
  2 logical volume(s) in volume group vgbiff now active

 Exiting the busybox shell then boots the system.

 The basic configuration is:
 - Xeon (64-bit capable) w/4GB RAM
 - PCI SCSI controller
 - 6 x 73GB SCSI drives

 During install, on each drive I created a 500MB primary partition (with
 /dev/sda1 being for /boot) and then a second partition for Linux s/w RAID
 (partition type set to fd).

 In /dev/md0 I then created an LVM partition, and set up the volume group to
 contain two volumes - one for swap, and one for /.  /dev/md0 is comprised of
 5 drives running in RAID5, with one hot spare.

 During installation, I took pains to wipe all the drives and create all
 partitions anew.

 When booted, I checked /etc/default/mdadm.  The values INITRDSTART='all' and
 AUTOSTART=true are both set.  I also set VERBOSE=true to give me more output
 when creating a new initramfs.  I checked the contents of
 /etc/mdadm/mdadm.conf - which seems fine.

 I then issued update-initramfs -vu, and saw the following:

  I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
  I: mdadm: will start all available MD arrays from the initial ramdisk.
  I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.

 and the last output before cpio builds the initial ramdisk is

  Calling hook dmsetup

 - so, with my limited knowledge, this suggests the device mapper is also
 incorporated into the initramfs.

 When I take a peek into /boot/grub/grub.cfg I see:

  insmod raid
  insmod raid5rec
  insmod mdraid
  insmod lvm

 in the 00_header section.


 I'm running low on ideas now.  Re-installing grub doesn't help.  Running
 update-grub simply dumps out many more of those error messages:

  error: superfluous RAID member (5 found).
  [repeats 17 times]

 So it does point to grub being at fault somewhere, rather than the initrd.

Maybe this bug:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=610184

(What mdraid metadata version are you using?)
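
It can be checked with either of these (assuming /dev/md0 and, say, /dev/sda2 as one of its members):

 mdadm --detail /dev/md0 | grep -i version
 mdadm --examine /dev/sda2 | grep -i version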





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Tom H
On Wed, Jun 13, 2012 at 12:40 PM, Christofer C. Bell
christofer.c.b...@gmail.com wrote:
 On Wed, Jun 13, 2012 at 3:47 AM, Steve Dowe s...@warpuniversal.co.uk wrote:

 I'm trying to re-use an older server, installing squeeze (6.0.5).  I'm using
 software RAID and LVM on the machine (details below).  But I must be doing
 something wrong with the disk setup stage in the installer, as when it
 boots I see an error flash up quickly:

  error: superfluous RAID member (5 found)

 It appears that the initramfs then gets loaded, the RAID detection fails and
 it then looks for the LVM volume group, which it can't find (as the LVM
 group exists on the RAID device).

 I don't believe you can boot from a striped volume (raid5 being a
 stripe + parity).  I found some instructions that may allow this to
 work, but they require building a non-standard initrd:

 http://nil-techno.blogspot.com/2009/02/booting-fakeraid-raid5-linux-half-assed.html

grub2 can handle /boot on mdraid raid5 (and possibly dmraid raid5 too).

From a raid5 VM:

[root]# grep md0 /proc/mdstat
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
[root]#
[root]#
[root]# mount | egrep -v 'udev|sys|run|pts|proc'
/dev/md0 on / type ext4 (rw)
[root]#
[root]#
[root]# cat /etc/fstab
UUID=4b202d73-d5e4-4678-916c-6220eddb1b60 / ext4 defaults 0 1
[root]#
[root]#
[root]# grub-probe -t drive /
(mduuid/53da4b0e979e6faa68401fe357f506a3)
[root]#
[root]#
[root]# grub-probe -t drive /boot
(mduuid/53da4b0e979e6faa68401fe357f506a3)
[root]#





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Gary Dale

On 13/06/12 04:47 AM, Steve Dowe wrote:

Hi,

I'm trying to re-use an older server, installing squeeze (6.0.5).  I'm 
using software RAID and LVM on the machine (details below).  But I 
must be doing something wrong with the disk setup stage in the 
installer, as when it boots I see an error flash up quickly:


 error: superfluous RAID member (5 found)

It appears that the initramfs then gets loaded, the RAID detection 
fails and it then looks for the LVM volume group, which it can't find 
(as the LVM group exists on the RAID device).  I see this output:


 Loading, please wait...
 mdadm: No devices listed in conf file were found.
  Volume group vgbiff not found
  Skipping volume group vgbiff
  Unable to find LVM volume vgbiff/lvroot
 [same messages appear but for lvswap]
 Gave up waiting for root device [snip]
...

It then drops me into the BusyBox shell, with initramfs prompt.

I can then activate the RAID simply by doing

 (initramfs) mdadm --assemble --scan
 mdadm: /dev/md/0 has been started with 5 drives and 1 spare.

and then activate the volume group, using:

  (initramfs) vgchange -a y
  2 logical volume(s) in volume group vgbiff now active

Exiting the busybox shell then boots the system.

The basic configuration is:
- Xeon (64-bit capable) w/4GB RAM
- PCI SCSI controller
- 6 x 73GB SCSI drives

During install, on each drive I created a 500MB primary partition 
(with /dev/sda1 being for /boot) and then a second partition for Linux 
s/w RAID (partition type set to fd).


In /dev/md0 I then created an LVM partition, and set up the volume 
group to contain two volumes - one for swap, and one for /.  /dev/md0 
is comprised of 5 drives running in RAID5, with one hot spare.


During installation, I took pains to wipe all the drives and create 
all partitions anew.


When booted, I checked /etc/default/mdadm.  The values 
INITRDSTART='all' and AUTOSTART=true are both set.  I also set 
VERBOSE=true to give me more output when creating a new initramfs.  I 
checked the contents of /etc/mdadm/mdadm.conf - which seems fine.


I then issued update-initramfs -vu, and saw the following:

 I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
 I: mdadm: will start all available MD arrays from the initial ramdisk.
 I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.

and the last output before cpio builds the initial ramdisk is

 Calling hook dmsetup

- so, with my limited knowledge, this suggests the device mapper is also
incorporated into the initramfs.


When I take a peek into /boot/grub/grub.cfg I see:

 insmod raid
 insmod raid5rec
 insmod mdraid
 insmod lvm

in the 00_header section.


I'm running low on ideas now.  Re-installing grub doesn't help.  
Running update-grub simply dumps out many more of those error messages:


 error: superfluous RAID member (5 found).
[repeats 17 times]

So it does point to grub being at fault somewhere, rather than the 
initrd.


Have I missed something blindingly obvious?


Thanks again,
Steve



I prefer to create a separate boot array as RAID1 with extra redundant 
copies. This circumvents a number of issues between grub, initramfs and 
mdadm. I don't know if Squeeze can actually boot from a RAID5 array in 
practice but I find it's not worth the aggravation of trying to make it 
work.
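
Concretely, with the six 500MB first partitions in this layout, such a mirror could be created roughly as follows (a sketch only; /dev/md1 and the device names are assumptions, and 0.90 metadata keeps the superblock at the end of each member so boot code that doesn't understand mdraid still sees an ordinary filesystem):

 mdadm --create /dev/md1 --level=1 --raid-devices=6 --metadata=0.90 /dev/sd[a-f]1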


For example, Squeeze has problems with booting from partitioned RAID 
arrays. After running update-initramfs and update-grub, I find that the 
UUID for the partitions has been replaced with the UUID for the array, 
so that the boot fails. This particular problem can be solved by fixing 
the UUIDs in grub.cfg.
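
In practice that means comparing the UUID that blkid reports for the root filesystem with the one used by the search line in grub.cfg (the partition name below is only an example for a partitioned array):

 blkid /dev/md0p1
 grep -n search /boot/grub/grub.cfg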


I don't use LVM myself so I don't know if the LVM messages are from the 
same root cause.








Re: mdadm error - superfluous RAID member

2012-06-13 Thread Tom H
On Wed, Jun 13, 2012 at 2:41 PM, Gary Dale garyd...@rogers.com wrote:

 For example, Squeeze has problems with booting from partitioned RAID arrays.
 After running update-initramfs and update-grub, I find that the UUID for the
 partitions has been replaced with the UUID for the array, so that the boot
 fails. This particular problem can be solved by fixing the UUIDs in
 grub.cfg.

grub2 was patched about a year ago to boot from a partitioned mdraid
/boot but I don't know whether that change made it into squeeze.
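
Checking which grub-pc is actually installed would be the first step (both read-only):

 grub-install --version
 dpkg -s grub-pc | grep -i '^version'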





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Steve Dowe

On 13/06/12 19:07, Tom H wrote:

On Wed, Jun 13, 2012 at 12:40 PM, Christofer C. Bell
christofer.c.b...@gmail.com wrote:

I don't believe you can boot from a striped volume (raid5 being a
stripe + parity).  I found some instructions that may allow this to
work, but they require building a non-standard initrd:

http://nil-techno.blogspot.com/2009/02/booting-fakeraid-raid5-linux-half-assed.html


grub2 can handle /boot on mdraid raid5 (and possibly dmraid raid5 too).


That's ok, my boot partition is /dev/sda1 (500MB) - dedicated to being 
/boot and nothing else, and all my RAID partitions are /dev/sd*2.



I didn't realise grub2 could handle that, though. Thanks.

--
Steve Dowe

Warp Universal Limited
http://warp2.me/sd






Re: mdadm error - superfluous RAID member

2012-06-13 Thread Steve Dowe

On 13/06/12 19:56, Tom H wrote:

On Wed, Jun 13, 2012 at 2:41 PM, Gary Dale garyd...@rogers.com wrote:


For example, Squeeze has problems with booting from partitioned RAID arrays.
After running update-initramfs and update-grub, I find that the UUID for the
partitions has been replaced with the UUID for the array, so that the boot
fails. This particular problem can be solved by fixing the UUIDs in
grub.cfg.


grub2 was patched about a year ago to boot from a partitioned mdraid
/boot but I don't know whether that change made it into squeeze.


I have just found the GNU grub development mailing list discussion, here:
https://lists.gnu.org/archive/html/grub-devel/2012-02/msg3.html

Although the symptoms are the same as the Debian bug 
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=610184), I'm not sure 
whether the causes are.


I believe, in my case, the cause is the one discussed in the GNU list, 
namely that grub couldn't reliably detect whether the whole disk or only a 
partition was assigned for RAID use.  In the developer's own words,


if you have < 64KiB between end of disk and end of partition the 
metadata is exactly in the same place for either if the whole disks are 
raided or only partitions. And no field which allows to distinguish 
those...
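
One way to see that ambiguity from userspace, if I understand it correctly, is that with end-of-device metadata mdadm can report the same superblock whether it examines the whole disk or the RAID partition (device names as in my layout above):

 mdadm --examine /dev/sda    # whole disk
 mdadm --examine /dev/sda2   # the actual RAID member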


On that basis, and the fact that grub in squeeze 6.0.5 seemed to exhibit 
the problem, I decided to update the machine to testing/wheezy instead 
and see if the problem disappears.


I can confirm that it has.  The error message no longer appears at boot 
time and I don't need to intervene to get to my login prompt.


For anyone reading this in the same dilemma, I'm not sure if things like 
this would get backported to squeeze or not - perhaps someone has an 
idea how to find out...


Thanks,
Steve

--
Steve Dowe

Warp Universal Limited
http://warp2.me/sd






Re: mdadm error - superfluous RAID member

2012-06-13 Thread Tom H
On Wed, Jun 13, 2012 at 4:37 PM, Steve Dowe s...@warpuniversal.co.uk wrote:
 On 13/06/12 19:56, Tom H wrote:
 On Wed, Jun 13, 2012 at 2:41 PM, Gary Dale garyd...@rogers.com wrote:

 For example, Squeeze has problems with booting from partitioned RAID
 arrays.
 After running update-initramfs and update-grub, I find that the UUID for
 the
 partitions has been replaced with the UUID for the array, so that the
 boot
 fails. This particular problem can be solved by fixing the UUIDs in
 grub.cfg.

 grub2 was patched about a year ago to boot from a partitioned mdraid
 /boot but I don't know whether that change made it into squeeze.

 I have just found the GNU grub development mailing list discussion, here:
 https://lists.gnu.org/archive/html/grub-devel/2012-02/msg3.html

 Although the symptoms are the same as the Debian bug
 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=610184), I'm not sure
 whether the causes are.

 I believe, in my case, the cause is the one discussed in the GNU list,
 namely that grub couldn't reliably detect whether the whole disk or only a
 partition was assigned for RAID use.  In the developer's own words,

 if you have < 64KiB between end of disk and end of partition the metadata
 is exactly in the same place for either if the whole disks are raided or
 only partitions. And no field which allows to distinguish those...

 On that basis, and the fact that grub in squeeze 6.0.5 seemed to exhibit the
 problem, I decided to update the machine to testing/wheezy instead and see
 if the problem disappears.

 I can confirm that it has.  The error message no longer appears at boot time
 and I don't need to intervene to get to my login prompt.

 For anyone reading this in the same dilemma, I'm not sure if things like
 this would get backported to squeeze or not - perhaps someone has an idea
 how to find out...

Since the 1.1 and 1.2 metadata formats store the superblock at the beginning
rather than at the end, perhaps using a partitioned mdraid device with that
metadata works with squeeze.





Re: mdadm error - superfluous RAID member

2012-06-13 Thread Steve Dowe

On 13/06/12 23:15, Tom H wrote:

Since the 1.1 and 1.2 metadata formats store the superblock at the beginning
rather than at the end, perhaps using a partitioned mdraid device with that
metadata works with squeeze.


Good idea.  I'll boot it up with a live CD and report back soon.

--
Steve Dowe

Warp Universal Limited
http://warp2.me/sd


