I think this issue (or something very closely related) is still present in
recent releases.
I use noble (24.04 LTS x86_64) on a Lenovo ThinkStation P3.

What I get every time I reboot is:
(sd-umoun[2150]: Failed to umount /run/shutdown/mounts/a669d40b93d7989b: Device or resource busy
shutdown[1]: Could not stop MD /dev/md126: Device or resource busy
mdadm: Cannot get exclusive access to /dev/md126:Perhaps a running process, mounted filesystem or active volume group?
mdadm: Cannot stop container /dev/md127: member md126 still active
mdadm: Cannot get exclusive access to /dev/md126:Perhaps a running process, mounted filesystem or active volume group?
mdadm: Cannot stop container /dev/md127: member md126 still active
(sd-exec-[2152]: /usr/lib/systemd/system-shutdown/mdadm.finalrd failed with exit status 1.
shutdown[1]: Unable to finalize remaining file systems, MD devices, ignoring.
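
For anyone trying to reproduce this: before rebooting, a few generic checks
(standard tools, run from the live system, not from the shutdown environment)
show whether anything could still be holding md126 open:

# lsblk -o NAME,TYPE,MOUNTPOINTS /dev/md126   (anything stacked on the array: partitions, swap, mounts)
# ls /sys/block/md126/holders                 (kernel view of devices holding md126 open)
# fuser -vm /dev/md126                        (processes using filesystems backed by the array)
# swapon --show                               (active swap that would also pin the array)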

This output is the same on all three machines I own, even after a clean
installation of the OS (Ubuntu 24.04 LTS server). At reboot the OS
runs fsck, and occasionally rebuilds the whole RAID.
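
To see whether a rebuild is in progress after such a boot, the usual checks
are enough (md126 is the data volume, as in the output below):

# cat /proc/mdstat                                      (shows a recovery/resync progress line while rebuilding)
# mdadm --detail /dev/md126 | grep -iE 'state|resync'   (array state plus resync progress, if any)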

Extra info:
- no bcache package is installed;
- installing dracut-core (which pulls in dmraid as a dependency) has no effect;
- deactivating the swap partition on the RAID brings no improvement (see the sketch below).
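
For reference, this is roughly how swap was disabled for that last test (the
partition name below is only an example; adjust it to the actual swap device
on the array, and comment out the matching /etc/fstab entry so it stays off
across reboots):

# swapon --show                     (identify the swap device sitting on the RAID)
# swapoff /dev/md126p2              (example device name; disables it for the running system)
# systemctl daemon-reload           (re-reads the units systemd generates from /etc/fstab)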

Some info on the setup:
# lspci -tv
...
-[10000:e0]-+-17.0  Intel Corporation Alder Lake-S PCH SATA Controller [AHCI Mode]
            +-1a.0-[e1]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
            +-1b.0  Intel Corporation RST VMD Managed Controller
            \-1b.4-[e2]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO

# cat /proc/mdstat 
Personalities : [raid1] [linear] [raid0] [raid6] [raid5] [raid4] [raid10] 
md126 : active raid1 nvme0n1[1] nvme1n1[0]
      1000202240 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive nvme1n1[1](S) nvme0n1[0](S)
      4784 blocks super external:imsm
       
unused devices: <none>

# mdadm -D /dev/md126
/dev/md126:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid1
        Array Size : 1000202240 (953.87 GiB 1024.21 GB)
     Used Dev Size : 1000202240 (953.87 GiB 1024.21 GB)
      Raid Devices : 2
     Total Devices : 2

             State : active 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0

Consistency Policy : resync


              UUID : 03d02f36:fdcb0291:aeb76a8d:876301b0
    Number   Major   Minor   RaidDevice State
       1     259        0        0      active sync   /dev/nvme0n1
       0     259        5        1      active sync   /dev/nvme1n1

# mdadm -D /dev/md127
/dev/md127:
           Version : imsm
        Raid Level : container
     Total Devices : 2

   Working Devices : 2


              UUID : 2496143b:c90af9ab:2df847e8:3b0fa0d0
     Member Arrays : /dev/md/RaidVolume

    Number   Major   Minor   RaidDevice

       -     259        5        -        /dev/nvme1n1
       -     259        0        -        /dev/nvme0n1
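
In case it is relevant: with external:imsm metadata the member arrays are
managed by mdmon rather than by the kernel alone, so it may be worth checking
that mdmon is actually running for the container and that the RST platform is
detected. Generic checks, using the device names from the output above:

# ps -C mdmon -o pid,args           (mdmon should be supervising the md127 container)
# mdadm --detail-platform           (firmware/OROM view of the Intel RST / IMSM platform)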
