Your message dated Mon, 22 Jun 2009 20:17:03 +0200
with message-id <[email protected]>
and subject line Re: Bug#533848: [dmraid] dmraid fails to assemble software raid array (raid-0) - system fails to boot
has caused the Debian Bug report #533848,
regarding [dmraid] dmraid fails to assemble software raid array (raid-0) - system fails to boot
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact [email protected]
immediately.)


-- 
533848: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=533848
Debian Bug Tracking System
Contact [email protected] with problems
--- Begin Message ---
Package: dmraid
Version: 1.0.0.rc15-8
Severity: critical

--- Please enter the report below this line. ---

I have a software RAID array (RAID 0), with LVM logical volumes built on top of the RAID.

My disks have the following structure:
/dev/sda1 ==> /boot
/dev/sda2 ==> first raid device
/dev/sdb1 ==> unused space
/dev/sdb2 ==> second raid device

orion:/home/liorc# dmraid -r
/dev/sdb: isw, "isw_cjhfigcfig", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_cjhfigcfig", GROUP, ok, 488397166 sectors, data@ 0

(LVM structure is attached at the end of the bug report.)

With kernel 2.6.26-2, and dmraid 1.0.0.rc13-2, the system boots fine (dmraid 
builds the raid array successfully):

[   89.648985] md0: setting max_sectors to 128, segment boundary to 32767
[   89.648985] raid0: looking at sda2
[   89.648985] raid0:   comparing sda2(244099520) with sda2(244099520)
[   89.648985] raid0:   END
[   89.648985] raid0:   ==> UNIQUE
[   89.648985] raid0: 1 zones
[   89.648985] raid0: looking at sdb2
[   89.648985] raid0:   comparing sdb2(244099520) with sda2(244099520)
[   89.648985] raid0:   EQUAL
[   89.648985] raid0: FINAL 1 zones
[   89.648985] raid0: done.
[   89.648985] raid0 : md_size is 488199040 blocks.
[   89.648985] raid0 : conf->hash_spacing is 488199040 blocks.
[   89.648985] raid0 : nb_zone is 1.
[   89.648985] raid0 : Allocating 8 bytes for hash.


However, when I upgrade to dmraid 1.0.0.rc15-8, the system fails to boot.
Also, if I don't upgrade dmraid (and still use 1.0.0.rc13-2), but upgrade to 
any newer kernel (2.6.29-1, 2.6.29-2, 2.6.30-1), boot fails with the following 
error:

Assembling all MD arrays...
mdadm: no devices found for /dev/md0
Failure: failed to assemble all arrays.
Incorrect metadata area header checksum
Volume group "vg0" not found.


I also noticed that 'dmraid -ay' produces the following kernel error:

[ 1711.937067] device-mapper: table: 253:2: striped: Couldn't parse stripe destination
[ 1711.937072] device-mapper: ioctl: error adding target to table


Please tell me if there is more information I can provide that could be of 
help.

Thanks.
        Lior.


--- System information. ---
Architecture: amd64
Kernel:       Linux 2.6.26-2-amd64

Debian Release: squeeze/sid
  500 unstable        www.debian-multimedia.org 
  500 unstable        mirror.hamakor.org.il 

--- Package information. ---
Depends                     (Version) | Installed
=====================================-+-=================
libc6                    (>= 2.3.5-1) | 2.9-17
libdevmapper1.02     (>= 2:1.02.02-2) | 2:1.02.08-1
libselinux1                 (>= 1.32) | 2.0.80-1
libsepol1                   (>= 1.14) | 2.0.36-1
lsb-base                              | 3.2-22



--- lvm information ---
orion:/home/liorc# vgdisplay
  Incorrect metadata area header checksum
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               465.58 GB
  PE Size               4.00 MB
  Total PE              119189
  Alloc PE / Size       119189 / 465.58 GB
  Free  PE / Size       0 / 0
  VG UUID               vqRtDp-ItdV-10Ju-VJKb-5WOF-8c2c-hsN92t

orion:/home/liorc# lvdisplay
  Incorrect metadata area header checksum
  --- Logical volume ---
  LV Name                /dev/vg0/swap-lv
  VG Name                vg0
  LV UUID                dUFQ6h-Od30-Iklp-oRCY-wUSs-bq1Z-ZobN8s
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                4.00 GB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg0/linux-lv
  VG Name                vg0
  LV UUID                plhvTr-ficv-VDmy-LhJP-uWnr-fMVp-H5rvwR
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                461.58 GB
  Current LE             118165
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1




Package's Recommends field is empty.

Package's Suggests field is empty.




--- End Message ---
--- Begin Message ---
Hi,

Lior Chen wrote:
> I didn't quite understand. As far as I know, I'm only mixing two types:
> a software raid (striping), and LVM. Doesn't it make sense to first set
> up the physical abstraction level (raid), and then build a logical
> abstraction over it? (LVM).

It makes sense, but if you are using software RAID you don't need dmraid, only mdadm.


> Personalities : [raid0]
> md0 : active raid0 sda2[0] sdb2[1]
> 488199040 blocks 64k chunks

So you are using /dev/md0.


> /dev/sdb: isw, "isw_cjhfigcfig", GROUP, ok, 488397166 sectors, data@ 0
> /dev/sda: isw, "isw_cjhfigcfig", GROUP, ok, 488397166 sectors, data@ 0

Are you using /dev/mapper/isw_cjhfigcfig?

Check your /etc/fstab: if you are not using /dev/mapper/isw_cjhfigcfig, you probably only need to uninstall dmraid :)
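The check suggested above can be sketched roughly as follows. This is a minimal sketch, not a definitive procedure: the sample fstab contents below are hypothetical (reconstructed from the LVM layout quoted in the report), and the real check is of course against the system's own /etc/fstab.

```shell
#!/bin/sh
# Hypothetical sample fstab, modeled on the report's layout (LVM root on
# vg0, /boot on /dev/sda1). On a real system, inspect /etc/fstab instead.
cat > /tmp/fstab.sample <<'EOF'
/dev/vg0/linux-lv  /      ext3  defaults  0 1
/dev/sda1          /boot  ext3  defaults  0 2
/dev/vg0/swap-lv   none   swap  sw        0 0
EOF

# If no mounted filesystem references a dmraid-created /dev/mapper/isw_*
# device, dmraid is not actually in use and can be removed, leaving the
# array to mdadm (which is what assembles /dev/md0 here).
if grep -q '/dev/mapper/isw_' /tmp/fstab.sample; then
  echo "dmraid device in use"
else
  echo "no dmraid device referenced"
fi
```

On the reporter's system, where fstab points at the LVM volumes on top of /dev/md0, this prints "no dmraid device referenced", matching the advice to simply uninstall dmraid.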

Therefore I'm closing this bug; please reopen it if necessary.


Cheers,
Giuseppe.

Attachment: signature.asc
Description: OpenPGP digital signature


--- End Message ---