** Description changed:

  On an almost fresh install of Ubuntu 24.04 with dmraid installed, the RAID array is detected and brought up at boot, but the partitions inside it are not detected, and I have to run `sudo kpartx -a /dev/mapper/isw_caiggcbbj_Volume1` manually to make them appear correctly.
  This is a regression from Ubuntu 20.04, where it just works at boot.
- This may not actually be kpartx's fault; it may simply not be invoked at boot, or be invoked in the wrong order of operations.
- 
+ Seemingly the kpartx-boot package is what is meant to make this just work at boot.
  
  --- MORE INFO ---
  
  It is the same PC; I have both installs side by side on the same boot disk (to clarify, my boot drive is not part of the RAID array).
  I am using Intel Rapid Storage Technology, configured in my BIOS, together with dmraid (I briefly tried mdadm, but could not get it working in the 15 minutes I spent on it).
  
  A summary of the differences between the two OSes is at the bottom.
  
  ## For Ubuntu 20.04, where it is working ##
  
  `dmsetup --version`:
  ```
  Library version:   1.02.167 (2019-11-30)
  Driver version:    4.45.0
  ```
  
  `dmraid --version`:
  ```
- dmraid version:               1.0.0.rc16 (2009.09.16) shared 
+ dmraid version:               1.0.0.rc16 (2009.09.16) shared
  dmraid library version:       1.0.0.rc16 (2009.09.16)
  device-mapper version:        4.45.0
  ```
  
  `lsmod | grep 'raid'` produces no output.
  
  `lsmod | grep 'dm_'`:
  ```
  dm_mirror              24576  0
  dm_region_hash         24576  1 dm_mirror
  dm_log                 20480  2 dm_region_hash,dm_mirror
  ```
  
  `dmraid -r`:
  ```
  /dev/sda: isw, "isw_caiggcbbj", GROUP, ok, 468862126 sectors, data@ 0
  /dev/sdb: isw, "isw_caiggcbbj", GROUP, ok, 468862126 sectors, data@ 0
  ```
  
  `dmraid -s`:
  ```
  *** Group superset isw_caiggcbbj
  --> Active Subset
  name   : isw_caiggcbbj_Volume1
  size   : 937714176
  stride : 128
  type   : stripe
  status : ok
  subsets: 0
  devs   : 2
  spares : 0
  ```
  
  `dmsetup ls`:
  ```
  isw_caiggcbbj_Volume1 (253:0)
  isw_caiggcbbj_Volume1p1       (253:2)
  ```
  
  `ls -l /dev/mapper/`:
  ```
  total 0
  crw------- 1 root root  10, 236 Jul 31 19:40 control
  brw-rw---- 1 root disk 253,   1 Jul 31 19:40 isw_caiggcbbj_Volume1
  lrwxrwxrwx 1 root root        7 Jul 31 19:40 isw_caiggcbbj_Volume1p1 -> ../dm-2
  ```
  
  `lsblk`:
  ```
  NAME                        MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
- sda                           8:0    0 223.6G  0 disk   
- └─isw_caiggcbbj_Volume1     253:1    0 447.1G  0 dmraid 
-   └─isw_caiggcbbj_Volume1p1 253:2    0 447.1G  0 part   /media/ben/SSD_ext4
- sdb                           8:16   0 223.6G  0 disk   
- └─isw_caiggcbbj_Volume1     253:1    0 447.1G  0 dmraid 
-   └─isw_caiggcbbj_Volume1p1 253:2    0 447.1G  0 part   /media/ben/SSD_ext4
+ sda                           8:0    0 223.6G  0 disk
+ └─isw_caiggcbbj_Volume1     253:1    0 447.1G  0 dmraid
+   └─isw_caiggcbbj_Volume1p1 253:2    0 447.1G  0 part   /media/ben/SSD_ext4
+ sdb                           8:16   0 223.6G  0 disk
+ └─isw_caiggcbbj_Volume1     253:1    0 447.1G  0 dmraid
+   └─isw_caiggcbbj_Volume1p1 253:2    0 447.1G  0 part   /media/ben/SSD_ext4
  ```
  
  `fdisk -l`:
  ```
  Disk /dev/sda: 223.58 GiB, 240057409536 bytes, 468862128 sectors
  Disk model: TCSUNBOW X3 240G
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: dos
  Disk identifier: 0x00000000
  
  Device   Boot Start    End  Sectors  Size Id Type
  /dev/sda1      1 468862127 468862127 223.6G ee GPT
- 
  
  Disk /dev/sdb: 223.58 GiB, 240057409536 bytes, 468862128 sectors
  Disk model: TCSUNBOW X3 240G
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  
  Disk /dev/mapper/isw_caiggcbbj_Volume1: 447.14 GiB, 480109658112 bytes, 937714176 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 65536 bytes / 131072 bytes
  Disklabel type: gpt
  Disk identifier: BEA6AF5B-DF82-4D02-BD4A-1027B795151F
  
  Device                              Start       End   Sectors   Size Type
  /dev/mapper/isw_caiggcbbj_Volume1p1  2048 937713663 937711616 447.1G Linux filesystem
  ```
  
  `journalctl -b | grep -i "raid\|dm-\d\|isw"`:
  ```
  Jul 31 19:40:34 Ubentu-Desktop kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: [email protected]
  Jul 31 19:40:34 Ubentu-Desktop kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 6 ports 6 Gbps 0x3f impl RAID mode
  Jul 31 19:40:34 Ubentu-Desktop dmraid-activate[806]: ERROR: Cannot retrieve RAID set information for isw_caiggcbbj_Volume1
  Jul 31 19:40:34 Ubentu-Desktop dmraid-activate[962]: ERROR: Cannot retrieve RAID set information for isw_caiggcbbj_Volume1
  Jul 31 19:40:36 Ubentu-Desktop udisksd[1270]: failed to load module mdraid: libbd_mdraid.so.2: cannot open shared object file: No such file or directory
  Jul 31 19:40:36 Ubentu-Desktop udisksd[1270]: Failed to load the 'mdraid' libblockdev plugin
  ```
  
  ## And then for Ubuntu 24.04, where it is not working ##
  
  `dmsetup --version`:
  ```
  Library version:   1.02.185 (2022-05-18)
  Driver version:    4.48.0
  ```
  
  `dmraid --version`:
  ```
- dmraid version:               1.0.0.rc16 (2009.09.16) shared 
+ dmraid version:               1.0.0.rc16 (2009.09.16) shared
  dmraid library version:       1.0.0.rc16 (2009.09.16)
  device-mapper version:        4.48.0
  ```
  
  `lsmod | grep 'raid'` is the same as for Ubuntu 20.04 (no output).
  
  `lsmod | grep 'dm_'` is the same as for Ubuntu 20.04.
  
  `dmraid -r` is the same as for Ubuntu 20.04.
  
  `dmraid -s` is the same as for Ubuntu 20.04.
  
  `dmsetup ls`:
  ```
  isw_caiggcbbj_Volume1 (252:0)
  ```
  
  `ls -l /dev/mapper/`:
  ```
  crw------- 1 root root  10, 236 Jul 31 20:11 control
  brw-rw---- 1 root disk 252,   0 Jul 31 20:11 isw_caiggcbbj_Volume1
  ```
  
  `lsblk`:
  ```
  NAME                    MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINTS
- sda                       8:0    0 223.6G  0 disk   
- └─isw_caiggcbbj_Volume1 252:0    0 447.1G  0 dmraid 
- sdb                       8:16   0 223.6G  0 disk   
- └─isw_caiggcbbj_Volume1 252:0    0 447.1G  0 dmraid 
+ sda                       8:0    0 223.6G  0 disk
+ └─isw_caiggcbbj_Volume1 252:0    0 447.1G  0 dmraid
+ sdb                       8:16   0 223.6G  0 disk
+ └─isw_caiggcbbj_Volume1 252:0    0 447.1G  0 dmraid
  ```
  
  `fdisk -l`:
  ```
  Disk /dev/sda: 223.57 GiB, 240057409536 bytes, 468862128 sectors
  Disk model: TCSUNBOW X3 240G
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: dos
  Disk identifier: 0x00000000
  
  Device     Boot Start       End   Sectors   Size Id Type
  /dev/sda1           1 468862127 468862127 223.6G ee GPT
- 
  
  Disk /dev/sdb: 223.57 GiB, 240057409536 bytes, 468862128 sectors
  Disk model: TCSUNBOW X3 240G
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  
  Disk /dev/mapper/isw_caiggcbbj_Volume1: 447.14 GiB, 480109658112 bytes, 937714176 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 65536 bytes / 131072 bytes
  Disklabel type: gpt
  Disk identifier: BEA6AF5B-DF82-4D02-BD4A-1027B795151F
  
  Device                                  Start       End   Sectors   Size Type
  /dev/mapper/isw_caiggcbbj_Volume1-part1  2048 937713663 937711616 447.1G Linux filesystem
  ```
  
  `journalctl -b | grep -i "raid\|dm-\d\|isw"`:
  ```
  Jul 31 19:12:19 Ubentu-Desktop-24-04 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 6 ports 6 Gbps 0x3f impl RAID mode
  Jul 31 19:12:19 Ubentu-Desktop-24-04 (udev-worker)[468]: dm-0: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/dm-0' failed with exit code 1.
  Jul 31 19:12:19 Ubentu-Desktop-24-04 dmraid-activate[668]: ERROR: Cannot retrieve RAID set information for isw_caiggcbbj_Volume1
  Jul 31 19:12:19 Ubentu-Desktop-24-04 dmraid-activate[711]: ERROR: Cannot retrieve RAID set information for isw_caiggcbbj_Volume1
  ```
  
  ## Summary of differences ##
  
  - Version update of device-mapper.
  - The partition is not showing up in 24.04 (i.e. this bug).
  - `fdisk -l` reports the partition as "/dev/mapper/isw_caiggcbbj_Volume1-part1" on 24.04, instead of "/dev/mapper/isw_caiggcbbj_Volume1p1" as on 20.04 (a suffix change). After running the `kpartx` command on 24.04 this does "correct itself" and the partition is mapped as "isw_caiggcbbj_Volume1p1" (a read-only way to double-check the naming is sketched after this list).
  - The RAID array is "/dev/dm-1" on both, but when I run the `kpartx` command on 24.04 the partition's device becomes "dm-0" (instead of "dm-2" as on 20.04).
  - The major device number changes from 253 on 20.04 to 252 on 24.04. If I am not mistaken, this just means it is in the dynamic range, so it probably does not mean anything.
  - There is an additional error message on 24.04: "(udev-worker)[468]: dm-0: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/dm-0' failed with exit code 1".
+ 
+ ## Things I've tried ##
+ 
+ 1. `apt update && apt upgrade`
+ 2. Rebooting many times (to check for an obvious race condition).
+ 3. `update-initramfs -u` (although this was already done when installing 
dmraid / kpartx-boot).
  
  --- --- ---
  
  ProblemType: Bug
  DistroRelease: Ubuntu 24.04
  Package: kpartx 0.9.4-5ubuntu8
  ProcVersionSignature: Ubuntu 6.8.0-39.39-generic 6.8.8
  Uname: Linux 6.8.0-39-generic x86_64
  ApportVersion: 2.28.1-0ubuntu3
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Jul 31 23:40:55 2024
  InstallationDate: Installed on 2024-07-31 (0 days ago)
  InstallationMedia: Ubuntu 24.04 LTS "Noble Numbat" - Release amd64 (20240424)
  ProcEnviron:
-  LANG=en_US.UTF-8
-  PATH=(custom, no user)
-  SHELL=/bin/bash
-  TERM=xterm-256color
+  LANG=en_US.UTF-8
+  PATH=(custom, no user)
+  SHELL=/bin/bash
+  TERM=xterm-256color
  SourcePackage: multipath-tools
  UpgradeStatus: No upgrade log present (probably fresh install)

https://bugs.launchpad.net/bugs/2075442

Title:
  RAID partitions not auto detected at boot
