This version is no longer supported. If this is still reproducible on a
newer/supported version, please reopen.


** Changed in: linux (Ubuntu)
       Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/383001

Title:
  booting live cd breaks intel matrix raid

Status in linux package in Ubuntu:
  Won't Fix
Status in linux package in openSUSE:
  Confirmed

Bug description:
  Ubuntu 9.04 64-bit live CD, kernel 2.6.28.11.15
  Hardware: Intel Core i7 920, GA-EX58-UD5 motherboard (ICH10R), 6 GB
  RAM. 2x 500 GB HDDs in an Intel Matrix RAID dual configuration:
  250 GB in RAID1 mirroring, the rest in RAID0 striping. Windows XP64
  is installed on a 150 GB partition of the RAID1 volume; the goal is
  to install Ubuntu on the remaining 100 GB. There are two more disks,
  but they are not part of any RAID array.

  The setup works as expected in Windows. The BIOS ROM shows the drives
  as RAID(0,1) members.

  Problem: booting the live CD permanently breaks the RAID arrays (even
  when nothing is installed). After a reboot, the BIOS RAID utility
  shows both drives as "Offline member". The only fix is to delete the
  RAID metadata on one of the drives with the BIOS utility, re-add that
  drive, and then let the Matrix RAID Manager in Windows mirror the
  RAID1 volume back; the RAID0 volume can be recovered with the
  "Recover Volume" option (all of this takes about 1 hour on my
  configuration - please consider that when asking for tests).
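
  As an aside (not from the original report): one way to shortcut that
  hour-long recovery when testing would be to back up the raw metadata
  region at the end of each drive with dd before booting the live CD,
  and restore it afterwards. A sketch of the idea, demonstrated on a
  stand-in image file rather than a real /dev/sdX ('dmraid -r -D', used
  for the attachments below, is the tool-native way to dump the same
  metadata):

```shell
# Illustration only: back up the tail of a "drive", where isw-style
# metadata lives. A sparse image file stands in for a real /dev/sdX.
img=disk.img
truncate -s 10M "$img"                      # stand-in for a drive
# plant a marker where the metadata would sit (in the last 1 KiB)
printf 'ISW-META' | dd of="$img" bs=1 \
    seek=$((10 * 1024 * 1024 - 1024)) conv=notrunc 2>/dev/null
# back up the last 1 KiB of the "drive" to a file
size=$(stat -c %s "$img")
dd if="$img" of=tail.bin bs=1 skip=$((size - 1024)) count=1024 2>/dev/null
head -c 8 tail.bin; echo                    # prints: ISW-META
```

  Restoring would be the mirror-image dd (if=tail.bin, seek instead of
  skip) - on real hardware, only after double-checking device names and
  offsets.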

  As the break happens at some point during boot, I can only report
  what the disks look like *afterwards*, by launching a terminal and
  installing/running dmraid. dmraid cannot pair the drives because
  they have different name strings (they should probably be
  identical). My guess is that either the hardware checks or fuse (?)
  accesses the drives without knowing the fake RAID is there, and
  corrupts the metadata. It is a bit strange, however, that the first
  drive is listed as having 3 disks in the array...
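
  Incidentally, the differing name strings appear to come straight from
  the family_num fields in the 'dmraid -n' dump further below: the isw
  set name seems to be family_num with each decimal digit mapped to a
  letter ('a' + digit). A sketch of that mapping, inferred from this
  dump rather than from dmraid documentation:

```shell
# Inferred from this report's metadata dump (not from dmraid docs):
# the isw set name looks like family_num with each decimal digit
# mapped to a letter, 0->a .. 9->j.
isw_set_name() {
    printf 'isw_%s\n' "$(printf '%s' "$1" | tr '0123456789' 'abcdefghij')"
}

isw_set_name 1677745342   # family_num on /dev/sdb -> isw_bghhhefdec
isw_set_name 27318202     # family_num on /dev/sda -> isw_chdbicac
```

  The two drives carrying different family_num values would then
  explain why dmraid refuses to pair them.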

  root@ubuntu:~# dmraid -s -s -vvvv -dddd
  WARN: locking /var/lock/dmraid/.lock
  NOTICE: /dev/sdd: asr     discovering
  NOTICE: /dev/sdd: ddf1    discovering
  NOTICE: /dev/sdd: hpt37x  discovering
  NOTICE: /dev/sdd: hpt45x  discovering
  NOTICE: /dev/sdd: isw     discovering
  NOTICE: /dev/sdd: jmicron discovering
  NOTICE: /dev/sdd: lsi     discovering
  NOTICE: /dev/sdd: nvidia  discovering
  NOTICE: /dev/sdd: pdc     discovering
  NOTICE: /dev/sdd: sil     discovering
  NOTICE: /dev/sdd: via     discovering
  NOTICE: /dev/sdc: asr     discovering
  NOTICE: /dev/sdc: ddf1    discovering
  NOTICE: /dev/sdc: hpt37x  discovering
  NOTICE: /dev/sdc: hpt45x  discovering
  NOTICE: /dev/sdc: isw     discovering
  NOTICE: /dev/sdc: jmicron discovering
  NOTICE: /dev/sdc: lsi     discovering
  NOTICE: /dev/sdc: nvidia  discovering
  NOTICE: /dev/sdc: pdc     discovering
  NOTICE: /dev/sdc: sil     discovering
  NOTICE: /dev/sdc: via     discovering
  NOTICE: /dev/sdb: asr     discovering
  NOTICE: /dev/sdb: ddf1    discovering
  NOTICE: /dev/sdb: hpt37x  discovering
  NOTICE: /dev/sdb: hpt45x  discovering
  NOTICE: /dev/sdb: isw     discovering
  NOTICE: /dev/sdb: isw metadata discovered
  NOTICE: /dev/sdb: jmicron discovering
  NOTICE: /dev/sdb: lsi     discovering
  NOTICE: /dev/sdb: nvidia  discovering
  NOTICE: /dev/sdb: pdc     discovering
  NOTICE: /dev/sdb: sil     discovering
  NOTICE: /dev/sdb: via     discovering
  NOTICE: /dev/sda: asr     discovering
  NOTICE: /dev/sda: ddf1    discovering
  NOTICE: /dev/sda: hpt37x  discovering
  NOTICE: /dev/sda: hpt45x  discovering
  NOTICE: /dev/sda: isw     discovering
  NOTICE: /dev/sda: isw metadata discovered
  NOTICE: /dev/sda: jmicron discovering
  NOTICE: /dev/sda: lsi     discovering
  NOTICE: /dev/sda: nvidia  discovering
  NOTICE: /dev/sda: pdc     discovering
  NOTICE: /dev/sda: sil     discovering
  NOTICE: /dev/sda: via     discovering
  DEBUG: _find_set: searching isw_bghhhefdec
  DEBUG: _find_set: not found isw_bghhhefdec
  DEBUG: _find_set: searching isw_bghhhefdec_RAID1
  DEBUG: _find_set: searching isw_bghhhefdec_RAID1
  DEBUG: _find_set: not found isw_bghhhefdec_RAID1
  DEBUG: _find_set: not found isw_bghhhefdec_RAID1
  DEBUG: _find_set: searching isw_bghhhefdec_RAID0
  DEBUG: _find_set: searching isw_bghhhefdec_RAID0
  DEBUG: _find_set: searching isw_bghhhefdec_RAID0
  DEBUG: _find_set: not found isw_bghhhefdec_RAID0
  DEBUG: _find_set: not found isw_bghhhefdec_RAID0
  DEBUG: _find_set: not found isw_bghhhefdec_RAID0
  NOTICE: added /dev/sdb to RAID set "isw_bghhhefdec"
  DEBUG: _find_set: searching isw_chdbicac
  DEBUG: _find_set: not found isw_chdbicac
  DEBUG: _find_set: searching isw_chdbicac_RAID1
  DEBUG: _find_set: searching isw_chdbicac_RAID1
  DEBUG: _find_set: searching isw_chdbicac_RAID1
  DEBUG: _find_set: not found isw_chdbicac_RAID1
  DEBUG: _find_set: searching isw_chdbicac_RAID1
  DEBUG: _find_set: not found isw_chdbicac_RAID1
  DEBUG: _find_set: not found isw_chdbicac_RAID1
  DEBUG: _find_set: searching isw_chdbicac_RAID1
  DEBUG: _find_set: not found isw_chdbicac_RAID1
  DEBUG: _find_set: not found isw_chdbicac_RAID1
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: searching isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  DEBUG: _find_set: not found isw_chdbicac_RAID0
  NOTICE: added /dev/sda to RAID set "isw_chdbicac"
  DEBUG: checking isw device "/dev/sdb"
  ERROR: isw device for volume "RAID0" broken on /dev/sdb in RAID set "isw_bghhhefdec_RAID0"
  ERROR: isw: wrong # of devices in RAID set "isw_bghhhefdec_RAID0" [1/2] on /dev/sdb
  DEBUG: set status of set "isw_bghhhefdec_RAID0" to 2
  DEBUG: checking isw device "/dev/sdb"
  ERROR: isw device for volume "RAID1" broken on /dev/sdb in RAID set "isw_bghhhefdec_RAID1"
  ERROR: isw: wrong # of devices in RAID set "isw_bghhhefdec_RAID1" [1/2] on /dev/sdb
  DEBUG: set status of set "isw_bghhhefdec_RAID1" to 2
  DEBUG: checking isw device "/dev/sda"
  ERROR: isw device for volume "RAID0" broken on /dev/sda in RAID set "isw_chdbicac_RAID0"
  ERROR: isw: wrong # of devices in RAID set "isw_chdbicac_RAID0" [1/2] on /dev/sda
  DEBUG: set status of set "isw_chdbicac_RAID0" to 2
  DEBUG: checking isw device "/dev/sda"
  ERROR: isw device for volume "RAID1" broken on /dev/sda in RAID set "isw_chdbicac_RAID1"
  ERROR: isw: wrong # of devices in RAID set "isw_chdbicac_RAID1" [1/2] on /dev/sda
  DEBUG: set status of set "isw_chdbicac_RAID1" to 2
  *** Group superset isw_bghhhefdec
  --> Subset
  name   : isw_bghhhefdec_RAID0
  size   : 452474112
  stride : 256
  type   : stripe
  status : broken
  subsets: 0
  devs   : 1
  spares : 0
  --> Subset
  name   : isw_bghhhefdec_RAID1
  size   : 524288256
  stride : 128
  type   : mirror
  status : broken
  subsets: 0
  devs   : 1
  spares : 0
  *** Group superset isw_chdbicac
  --> Subset
  name   : isw_chdbicac_RAID0
  size   : 452474112
  stride : 256
  type   : stripe
  status : broken
  subsets: 0
  devs   : 1
  spares : 0
  --> Subset
  name   : isw_chdbicac_RAID1
  size   : 524288256
  stride : 128
  type   : mirror
  status : broken
  subsets: 0
  devs   : 1
  spares : 0
  WARN: unlocking /var/lock/dmraid/.lock
  DEBUG: freeing devices of RAID set "isw_bghhhefdec_RAID0"
  DEBUG: freeing device "isw_bghhhefdec_RAID0", path "/dev/sdb"
  DEBUG: freeing devices of RAID set "isw_bghhhefdec_RAID1"
  DEBUG: freeing device "isw_bghhhefdec_RAID1", path "/dev/sdb"
  DEBUG: freeing devices of RAID set "isw_bghhhefdec"
  DEBUG: freeing device "isw_bghhhefdec", path "/dev/sdb"
  DEBUG: freeing devices of RAID set "isw_chdbicac_RAID0"
  DEBUG: freeing device "isw_chdbicac_RAID0", path "/dev/sda"
  DEBUG: freeing devices of RAID set "isw_chdbicac_RAID1"
  DEBUG: freeing device "isw_chdbicac_RAID1", path "/dev/sda"
  DEBUG: freeing devices of RAID set "isw_chdbicac"
  DEBUG: freeing device "isw_chdbicac", path "/dev/sda"

  
  /////////////////////////////////////////////////////////////////

  root@ubuntu:~# dmraid -n
  /dev/sdb (isw):
  0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.00"
  0x020 check_sum: 4201763611
  0x024 mpb_size: 648
  0x028 family_num: 1677745342
  0x02c generation_num: 180315
  0x030 error_log_size: 4080
  0x034 attributes: 2147483648
  0x038 num_disks: 2
  0x039 num_raid_devs: 2
  0x03a error_log_pos: 2
  0x03c cache_size: 0
  0x040 orig_family_num: 3440023639
  0x0d8 disk[0].serial: " WD-WMASZ0068106"
  0x0e8 disk[0].totalBlocks: 976771055
  0x0ec disk[0].scsiId: 0x0
  0x0f0 disk[0].status: 0x13a
  0x0f4 disk[0].owner_cfg_num: 0x0
  0x108 disk[1].serial: " WD-WMAT00044411"
  0x118 disk[1].totalBlocks: 976773168
  0x11c disk[1].scsiId: 0x10000
  0x120 disk[1].status: 0x13a
  0x124 disk[1].owner_cfg_num: 0x0
  0x138 isw_dev[0].volume: "           RAID1"
  0x14c isw_dev[0].SizeHigh: 0
  0x148 isw_dev[0].SizeLow: 524288000
  0x150 isw_dev[0].status: 0xc
  0x154 isw_dev[0].reserved_blocks: 0
  0x158 isw_dev[0].migr_priority: 0
  0x159 isw_dev[0].num_sub_vol: 0
  0x15a isw_dev[0].tid: 15
  0x15b isw_dev[0].cng_master_disk: 0
  0x15c isw_dev[0].cache_policy: 0
  0x15e isw_dev[0].cng_state: 0
  0x15f isw_dev[0].cng_sub_state: 0
  0x188 isw_dev[0].vol.curr_migr_unit: 1024000
  0x18c isw_dev[0].vol.check_point_id: 0
  0x190 isw_dev[0].vol.migr_state: 0
  0x191 isw_dev[0].vol.migr_type: 1
  0x192 isw_dev[0].vol.dirty: 0
  0x193 isw_dev[0].vol.fs_state: 255
  0x194 isw_dev[0].vol.verify_errors: 1
  0x196 isw_dev[0].vol.verify_bad_blocks: 0
  0x1a8 isw_dev[0].vol.map[0].pba_of_lba0: 0
  0x1ac isw_dev[0].vol.map[0].blocks_per_member: 524288264
  0x1b0 isw_dev[0].vol.map[0].num_data_stripes: 2048000
  0x1b4 isw_dev[0].vol.map[0].blocks_per_strip: 128
  0x1b6 isw_dev[0].vol.map[0].map_state: 0
  0x1b7 isw_dev[0].vol.map[0].raid_level: 1
  0x1b8 isw_dev[0].vol.map[0].num_members: 2
  0x1b9 isw_dev[0].vol.map[0].num_domains: 2
  0x1ba isw_dev[0].vol.map[0].failed_disk_num: 255
  0x1bb isw_dev[0].vol.map[0].ddf: 1
  0x1d8 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
  0x1dc isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
  0x1e0 isw_dev[1].volume: "           RAID0"
  0x1f4 isw_dev[1].SizeHigh: 0
  0x1f0 isw_dev[1].SizeLow: 904947712
  0x1f8 isw_dev[1].status: 0xc
  0x1fc isw_dev[1].reserved_blocks: 0
  0x200 isw_dev[1].migr_priority: 0
  0x201 isw_dev[1].num_sub_vol: 0
  0x202 isw_dev[1].tid: 1
  0x203 isw_dev[1].cng_master_disk: 0
  0x204 isw_dev[1].cache_policy: 0
  0x206 isw_dev[1].cng_state: 0
  0x207 isw_dev[1].cng_sub_state: 0
  0x230 isw_dev[1].vol.curr_migr_unit: 0
  0x234 isw_dev[1].vol.check_point_id: 0
  0x238 isw_dev[1].vol.migr_state: 0
  0x239 isw_dev[1].vol.migr_type: 4
  0x23a isw_dev[1].vol.dirty: 0
  0x23b isw_dev[1].vol.fs_state: 255
  0x23c isw_dev[1].vol.verify_errors: 0
  0x23e isw_dev[1].vol.verify_bad_blocks: 0
  0x250 isw_dev[1].vol.map[0].pba_of_lba0: 524292360
  0x254 isw_dev[1].vol.map[0].blocks_per_member: 452474120
  0x258 isw_dev[1].vol.map[0].num_data_stripes: 1767476
  0x25c isw_dev[1].vol.map[0].blocks_per_strip: 256
  0x25e isw_dev[1].vol.map[0].map_state: 0
  0x25f isw_dev[1].vol.map[0].raid_level: 0
  0x260 isw_dev[1].vol.map[0].num_members: 2
  0x261 isw_dev[1].vol.map[0].num_domains: 1
  0x262 isw_dev[1].vol.map[0].failed_disk_num: 0
  0x263 isw_dev[1].vol.map[0].ddf: 1
  0x280 isw_dev[1].vol.map[0].disk_ord_tbl[0]: 0x1000000
  0x284 isw_dev[1].vol.map[0].disk_ord_tbl[1]: 0x1

  /dev/sda (isw):
  0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.00"
  0x020 check_sum: 3599977089
  0x024 mpb_size: 752
  0x028 family_num: 27318202
  0x02c generation_num: 158900
  0x030 error_log_size: 4080
  0x034 attributes: 2147483648
  0x038 num_disks: 3
  0x039 num_raid_devs: 2
  0x03a error_log_pos: 2
  0x03c cache_size: 0
  0x040 orig_family_num: 3440023639
  0x0d8 disk[0].serial: " WD-WMASZ0068106"
  0x0e8 disk[0].totalBlocks: 976773168
  0x0ec disk[0].scsiId: 0x0
  0x0f0 disk[0].status: 0x13a
  0x0f4 disk[0].owner_cfg_num: 0x0
  0x108 disk[1].serial: " WD-WMAT00044411"
  0x118 disk[1].totalBlocks: 976773168
  0x11c disk[1].scsiId: 0x10000
  0x120 disk[1].status: 0x13a
  0x124 disk[1].owner_cfg_num: 0x0
  0x138 disk[2].serial: "D-WMAT00044411:1"
  0x148 disk[2].totalBlocks: 976773120
  0x14c disk[2].scsiId: 0xffffffff
  0x150 disk[2].status: 0x6
  0x154 disk[2].owner_cfg_num: 0x0
  0x168 isw_dev[0].volume: "           RAID1"
  0x17c isw_dev[0].SizeHigh: 0
  0x178 isw_dev[0].SizeLow: 524288000
  0x180 isw_dev[0].status: 0xc
  0x184 isw_dev[0].reserved_blocks: 0
  0x188 isw_dev[0].migr_priority: 0
  0x189 isw_dev[0].num_sub_vol: 0
  0x18a isw_dev[0].tid: 1
  0x18b isw_dev[0].cng_master_disk: 0
  0x18c isw_dev[0].cache_policy: 0
  0x18e isw_dev[0].cng_state: 0
  0x18f isw_dev[0].cng_sub_state: 0
  0x1b8 isw_dev[0].vol.curr_migr_unit: 548336
  0x1bc isw_dev[0].vol.check_point_id: 0
  0x1c0 isw_dev[0].vol.migr_state: 1
  0x1c1 isw_dev[0].vol.migr_type: 1
  0x1c2 isw_dev[0].vol.dirty: 0
  0x1c3 isw_dev[0].vol.fs_state: 255
  0x1c4 isw_dev[0].vol.verify_errors: 0
  0x1c6 isw_dev[0].vol.verify_bad_blocks: 0
  0x1d8 isw_dev[0].vol.map[0].pba_of_lba0: 0
  0x1dc isw_dev[0].vol.map[0].blocks_per_member: 524288264
  0x1e0 isw_dev[0].vol.map[0].num_data_stripes: 2048000
  0x1e4 isw_dev[0].vol.map[0].blocks_per_strip: 128
  0x1e6 isw_dev[0].vol.map[0].map_state: 0
  0x1e7 isw_dev[0].vol.map[0].raid_level: 1
  0x1e8 isw_dev[0].vol.map[0].num_members: 2
  0x1e9 isw_dev[0].vol.map[0].num_domains: 2
  0x1ea isw_dev[0].vol.map[0].failed_disk_num: 1
  0x1eb isw_dev[0].vol.map[0].ddf: 1
  0x208 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
  0x20c isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
  0x210 isw_dev[0].vol.map[1].pba_of_lba0: 0
  0x214 isw_dev[0].vol.map[1].blocks_per_member: 524288264
  0x218 isw_dev[0].vol.map[1].num_data_stripes: 2048000
  0x21c isw_dev[0].vol.map[1].blocks_per_strip: 128
  0x21e isw_dev[0].vol.map[1].map_state: 2
  0x21f isw_dev[0].vol.map[1].raid_level: 1
  0x220 isw_dev[0].vol.map[1].num_members: 2
  0x221 isw_dev[0].vol.map[1].num_domains: 2
  0x222 isw_dev[0].vol.map[1].failed_disk_num: 1
  0x223 isw_dev[0].vol.map[1].ddf: 1
  0x240 isw_dev[0].vol.map[1].disk_ord_tbl[0]: 0x0
  0x244 isw_dev[0].vol.map[1].disk_ord_tbl[1]: 0x1000002
  0x248 isw_dev[1].volume: "           RAID0"
  0x25c isw_dev[1].SizeHigh: 0
  0x258 isw_dev[1].SizeLow: 904947712
  0x260 isw_dev[1].status: 0x20c
  0x264 isw_dev[1].reserved_blocks: 0
  0x268 isw_dev[1].migr_priority: 0
  0x269 isw_dev[1].num_sub_vol: 0
  0x26a isw_dev[1].tid: 2
  0x26b isw_dev[1].cng_master_disk: 0
  0x26c isw_dev[1].cache_policy: 0
  0x26e isw_dev[1].cng_state: 0
  0x26f isw_dev[1].cng_sub_state: 0
  0x298 isw_dev[1].vol.curr_migr_unit: 0
  0x29c isw_dev[1].vol.check_point_id: 0
  0x2a0 isw_dev[1].vol.migr_state: 0
  0x2a1 isw_dev[1].vol.migr_type: 1
  0x2a2 isw_dev[1].vol.dirty: 0
  0x2a3 isw_dev[1].vol.fs_state: 255
  0x2a4 isw_dev[1].vol.verify_errors: 0
  0x2a6 isw_dev[1].vol.verify_bad_blocks: 0
  0x2b8 isw_dev[1].vol.map[0].pba_of_lba0: 524292360
  0x2bc isw_dev[1].vol.map[0].blocks_per_member: 452474120
  0x2c0 isw_dev[1].vol.map[0].num_data_stripes: 1767476
  0x2c4 isw_dev[1].vol.map[0].blocks_per_strip: 256
  0x2c6 isw_dev[1].vol.map[0].map_state: 3
  0x2c7 isw_dev[1].vol.map[0].raid_level: 0
  0x2c8 isw_dev[1].vol.map[0].num_members: 2
  0x2c9 isw_dev[1].vol.map[0].num_domains: 1
  0x2ca isw_dev[1].vol.map[0].failed_disk_num: 0
  0x2cb isw_dev[1].vol.map[0].ddf: 1
  0x2e8 isw_dev[1].vol.map[0].disk_ord_tbl[0]: 0x1000000
  0x2ec isw_dev[1].vol.map[0].disk_ord_tbl[1]: 0x1

  
  /////////////////////////////////////////////////////////////////

  Additional system logs (casper.log, dmesg.txt, lspci.txt, mount, df
  and the output of 'dmraid -r -D') are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/383001/+subscriptions

