Package: mdadm
Version: 4.2+20231121-1devuan1
Severity: normal

Dear Maintainer,
* What led up to the situation?

Upgrading mdadm to the indicated version (or starting from 4.2+20230901-1).

* What exactly did you do (or not do) that was effective (or ineffective)?

I did not do anything other than upgrade.

* What was the outcome of this action?

The mdadm periodic job to check for degraded arrays started issuing the
warning message in the subject. Just running `mdadm --detail /dev/md0`
produces the same message.

* What outcome did you expect instead?

No warning message from mdadm command invocations.

The /etc/mdadm/mdadm.conf file was created by the installer and never
modified afterwards. FYI, it includes the following comment

# This configuration was auto-generated on Sat, 06 Aug 2022 12:36:25 +0900 by mkconf

which corresponds to the time I installed the system. The installer put
mdadm 4.1-11 on the system, in case that matters. I have been tracking
"testing" since.

On another system running "stable", I have mdadm 4.2-5, which does not
produce this warning. That system has a similar mdadm.conf; only the UUID
and hostname differ.

The warning was introduced upstream on 2023-06-01 in e2eb503b, which would
indicate it first entered Debian with version 4.2+20230901-1 of the package.
The POSIX portable character set does not include `:`, which is why the
warning triggers for the array name `basecamp:0`.

After some searching on how to get rid of the warning, I am not sure if I
should or even can, as the hostname seems to get included in the array name
automatically. If so, perhaps the POSIX check should skip the hostname and
the `:`? Or should a postinst somehow rename the (live) array (and rebuild
the initramfs) to adjust to the upstream change, seeing that I never made
any changes since I installed the system? Note, my array contains /.

BTW, my initrd is compressed with zstd. I've checked that it has no
/conf/conf.d/md; it does contain a copy of /etc/mdadm/mdadm.conf. Also, I
only have NVMe disks. It looks like /usr/share/bug/mdadm/script needs to
catch up with new technology.
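For reference, the kind of test involved amounts to rejecting any character
outside the POSIX portable filename character set (A-Za-z, 0-9, `.`, `_`,
`-`). This is only a sketch of the idea, not the exact upstream code; the
name below is the one from my mdadm.conf:

```shell
# Flag names containing characters outside the POSIX portable
# filename character set, as the upstream check effectively does.
name="basecamp:0"   # value of name= in my mdadm.conf

case "$name" in
  *[!A-Za-z0-9._-]*) echo "$name: not POSIX compatible" ;;
  *)                 echo "$name: ok" ;;
esac
```

On my array this prints the "not POSIX compatible" branch, because of the
`:` the installer put between hostname and array number.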
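Regarding the bugscript: it currently assumes a gzip-compressed initrd. A
hedged sketch of how it could instead detect the compression from the magic
bytes (the helper name and the lz4/xz cases are my own illustration, not
existing bugscript code):

```shell
# Detect initrd compression by magic bytes instead of assuming gzip.
detect_compression() {
    # first 6 bytes, as a hex string without spaces
    magic=$(od -An -N6 -tx1 "$1" | tr -d ' \n')
    case "$magic" in
        1f8b*)          echo gzip ;;
        28b52ffd*)      echo zstd ;;
        fd377a585a00*)  echo xz ;;
        425a68*)        echo bzip2 ;;
        04224d18*)      echo lz4 ;;
        *)              echo unknown ;;
    esac
}

# demo on a synthetic file carrying the zstd magic (28 b5 2f fd);
# the path is illustrative only
tmpf=/tmp/fake-initrd.$$
printf '\050\265\057\375' > "$tmpf"
detect_compression "$tmpf"
rm -f "$tmpf"
```

With the format known, the script could then pick the matching
decompressor (e.g. `zstd -dc` rather than `zcat`) before handing the
archive to cpio.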
Hope this helps,

-- Package-specific info:
--- mdadm.conf
HOMEHOST <system>
MAILADDR root
ARRAY /dev/md/0 metadata=1.2 UUID=7de21d27:72412544:e86f4f1e:b4e1487d name=basecamp:0

--- /etc/default/mdadm
AUTOCHECK=true
AUTOSCAN=true
START_DAEMON=true
DAEMON_OPTIONS="--syslog"
VERBOSE=false

--- /proc/mdstat:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 nvme1n1p2[1] nvme0n1p2[0]
      249401664 blocks super 1.2 [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

--- /proc/partitions:
major minor  #blocks  name

 259        0  250059096 nvme0n1
 259        1     524288 nvme0n1p1
 259        2  249533767 nvme0n1p2
 259        3  250059096 nvme1n1
 259        4     524288 nvme1n1p1
 259        5  249533767 nvme1n1p2
   9        0  249401664 md0
 253        0   29294592 dm-0
 253        1   97652736 dm-1

--- LVM physical volumes:
LVM does not seem to be used.

--- mount output
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32594668k,nr_inodes=8148667,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=600,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=6522772k,mode=755,inode64)
/dev/mapper/basecamp-root on / type ext4 (rw,relatime,errors=remount-ro)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
securityfs on /sys/kernel/security type securityfs (rw,relatime)
pstore on /sys/fs/pstore type pstore (rw,relatime)
none on /sys/firmware/efi/efivars type efivarfs (rw,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=13045540k,inode64)
/dev/nvme1n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
/dev/mapper/basecamp-home on /home type ext4 (rw,relatime)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/etc/autofs/nas.conf on /srv/nas type autofs (rw,relatime,fd=6,pgrp=1593,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=12446)
/dev/mapper/basecamp-root on /var/lib/docker type ext4 (rw,relatime,errors=remount-ro)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=6522768k,nr_inodes=1630692,mode=700,uid=1000,gid=1000,inode64)

--- initrd.img-6.5.0-5-amd64:
gzip: /boot/initrd.img-6.5.0-5-amd64: not in gzip format
cpio: premature end of archive

--- initrd's /conf/conf.d/md:
no conf/md file.

--- /proc/modules:
raid10 77824 0 - Live 0xffffffffc1582000
raid456 200704 0 - Live 0xffffffffc1541000
libcrc32c 12288 3 nf_conntrack,nf_tables,raid456, Live 0xffffffffc1537000
async_raid6_recov 20480 1 raid456, Live 0xffffffffc1522000
async_memcpy 16384 2 raid456,async_raid6_recov, Live 0xffffffffc1518000
async_pq 16384 2 raid456,async_raid6_recov, Live 0xffffffffc150c000
async_xor 16384 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc1502000
async_tx 16384 5 raid456,async_raid6_recov,async_memcpy,async_pq,async_xor, Live 0xffffffffc14ea000
raid6_pq 122880 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc14c3000
raid0 24576 0 - Live 0xffffffffc14b4000
multipath 16384 0 - Live 0xffffffffc14a7000
linear 16384 0 - Live 0xffffffffc05a4000
dm_mod 221184 6 - Live 0xffffffffc07e6000
raid1 57344 1 - Live 0xffffffffc0998000
md_mod 225280 8 raid10,raid456,raid0,multipath,linear,raid1, Live 0xffffffffc0933000

--- /var/log/syslog:

--- volume detail:
/dev/[hsv]d[a-z]* not readable by user.
--- /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.5.0-5-amd64 loglevel=7 root=/dev/mapper/basecamp-root ro quiet

--- grub2:
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-5-amd64 root=/dev/mapper/basecamp-root ro quiet
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-5-amd64 root=/dev/mapper/basecamp-root ro quiet
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-5-amd64 root=/dev/mapper/basecamp-root ro single
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-4-amd64 root=/dev/mapper/basecamp-root ro quiet
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-4-amd64 root=/dev/mapper/basecamp-root ro single
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-3-amd64 root=/dev/mapper/basecamp-root ro quiet
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-3-amd64 root=/dev/mapper/basecamp-root ro single
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-2-amd64 root=/dev/mapper/basecamp-root ro quiet
set root='lvmid/078UP1-7gcA-WOOE-Omyg-f63K-AyUo-mwa63t/ZQdABl-CsF2-xvqC-iA6a-c0zB-faeX-Z4cbZH'
linux /boot/vmlinuz-6.5.0-2-amd64 root=/dev/mapper/basecamp-root ro single

--- udev:
un  udev  <none>  <none>  (no description available)
9d7dfcdc58fa54941f8d28f6094a7a5b  /lib/udev/rules.d/01-md-raid-creating.rules
36a4851143a2c3e426c3f6b569ae6e73  /lib/udev/rules.d/63-md-raid-arrays.rules
c326832543a521f671acff86377c43b7  /lib/udev/rules.d/64-md-raid-assembly.rules
ba1d376ca9b7364576f950d06ba18207  /lib/udev/rules.d/69-md-clustered-confirm-device.rules

--- /dev:
brw-rw---- 1 root disk 9, 0 Jan 5 20:14 /dev/md0

/dev/disk/by-diskseq:
total 0
lrwxrwxrwx 1 root root 13 Jan 5 20:14 1 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Jan 5 20:14 2 -> ../../nvme1n1

/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root 10 Jan 5 20:14 dm-name-basecamp-home -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jan 5 20:14 dm-name-basecamp-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 5 20:14 dm-uuid-LVM-078UP17gcAWOOEOmygf63KAyUomwa63tA3R4b6PjtpIlAcm2ZJggNQoDzatew54q -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jan 5 20:14 dm-uuid-LVM-078UP17gcAWOOEOmygf63KAyUomwa63tZQdABlCsF2xvqCiA6ac0zBfaeXZ4cbZH -> ../../dm-0
lrwxrwxrwx 1 root root 9 Jan 5 20:14 lvm-pv-uuid-3VnR3b-ZcHX-Ttxx-3NG3-PUcU-93k0-1Bfgtm -> ../../md0
lrwxrwxrwx 1 root root 9 Jan 5 20:14 md-name-basecamp:0 -> ../../md0
lrwxrwxrwx 1 root root 9 Jan 5 20:14 md-uuid-7de21d27:72412544:e86f4f1e:b4e1487d -> ../../md0
lrwxrwxrwx 1 root root 13 Jan 5 20:14 nvme-PCIe_SSD_511220504325000019 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-PCIe_SSD_511220504325000019-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-PCIe_SSD_511220504325000019-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 13 Jan 5 20:14 nvme-PCIe_SSD_511220530254000006 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-PCIe_SSD_511220530254000006-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-PCIe_SSD_511220530254000006-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 13 Jan 5 20:14 nvme-eui.6479a76350c0007c -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-eui.6479a76350c0007c-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-eui.6479a76350c0007c-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 13 Jan 5 20:14 nvme-eui.6479a764c0c00007 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-eui.6479a764c0c00007-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 nvme-eui.6479a764c0c00007-part2 -> ../../nvme0n1p2

/dev/disk/by-partlabel:
total 0
lrwxrwxrwx 1 root root 15 Jan 5 20:14 ARCANITE -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Jan 5 20:14 TRIDENITE -> ../../nvme1n1p2

/dev/disk/by-partuuid:
total 0
lrwxrwxrwx 1 root root 15 Jan 5 20:14 02aa43e4-f3d1-464e-9b96-07f5794137fa -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jan 5 20:14 53d296f3-7491-bd46-b88c-8dc6ac19365a -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Jan 5 20:14 c0d4acb9-828f-c546-a1e3-6775036a04ef -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 5 20:14 c9671d7f-1c17-6d41-9301-2e14873b955d -> ../../nvme1n1p1

/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 15 Jan 5 20:14 108A-6ABD -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 10 Jan 5 20:14 3c55dd00-4f07-47d6-825e-abb105b4fa47 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 5 20:14 b2c8f9d2-618c-408c-9554-27a9407b0c65 -> ../../dm-1

/dev/md:
total 0
lrwxrwxrwx 1 root root 6 Jan 5 20:14 0 -> ../md0

Auto-generated on Sat, 06 Jan 2024 11:07:40 +0900 by mdadm bugscript

-- System Information:
Debian Release: trixie/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 6.5.0-5-amd64 (SMP w/16 CPU threads; PREEMPT)
Kernel taint flags: TAINT_WARN
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: runit (via /run/runit.stopit)
LSM: AppArmor: enabled

Versions of packages mdadm depends on:
ii  debconf [debconf-2.0]  1.5.83
ii  eudev [udev]           3.2.14-1
ii  init-system-helpers    1.66devuan1
ii  libc6                  2.37-13
ii  libeudev1              3.2.14-1

Versions of packages mdadm recommends:
ii  kmod  30+20230601-2.1

Versions of packages mdadm suggests:
ii  dma [mail-transport-agent]  0.13-1+b1

-- debconf information:
  mdadm/autocheck: true
  mdadm/start_daemon: true
  mdadm/mail_to: root
  mdadm/autoscan: true