On 3/10/19 5:20 PM, Joe Pfeiffer wrote:
Since a recent update, my /var/log/syslog is getting spammed with huge
numbers of messages of the form
Mar 10 14:02:25 snowball systemd-udevd[18681]: Process '/sbin/mdadm
--incremental --export /dev/sda3 --offroot
/dev/disk/by-id/ata-ST3000DM001-1ER166_Z501MTQ3-part3
/dev/disk/by-partuuid/ad6f31ee-1866-400c-84f8-2c54da6abd2e
/dev/disk/by-path/pci-0000:00:11.0-ata-1-part3
/dev/disk/by-id/wwn-0x5000c50086c86ae8-part3' failed with exit code 1.
When I run the command by hand, I get
root@snowball:~# /sbin/mdadm --incremental --export /dev/sda3 --offroot
/dev/disk/by-id/ata-ST3000DM001-1ER166_Z501MTQ3-part3
/dev/disk/by-partuuid/ad6f31ee-1866-400c-84f8-2c54da6abd2e
/dev/disk/by-path/pci-0000:00:11.0-ata-1-part3
/dev/disk/by-id/wwn-0x5000c50086c86ae8-part3
mdadm: cannot reopen /dev/sda3: Device or resource busy.
Which at least gives me a small clue, but really not much of one.
I'm not even really clear on whether this is a systemd or an mdadm bug.
So, some questions:
1) what is this command trying to do? I do understand a little about mdadm,
and am running a RAID 1 array on this machine. But this is using an
option (--offroot) that doesn't even appear in the man page, and I've
got no idea what it's trying to accomplish.
2) how can I make it stop?
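For what it's worth, --offroot is an internal flag that mdadm's own udev rule passes along (it tells mdadm it may have been launched before the pivot to the real root), which is why it isn't in the man page. The call should originate in the udev rule file shipped by the mdadm package; a sketch of how to find it and, if you decide incremental assembly is redundant on your setup, override it (the file name below is the Debian one from memory and may differ across versions):

```shell
# Debian path for mdadm's incremental-assembly udev rule; the exact
# file name may vary between mdadm versions.
rules=/lib/udev/rules.d/64-md-raid-assembly.rules

if [ -f "$rules" ]; then
    # Show the line that launches "mdadm --incremental" on device add
    grep -n 'incremental' "$rules"
else
    echo "rule file not found at $rules"
fi

# udev prefers /etc/udev/rules.d over /lib/udev/rules.d for files with
# the same name, so an empty override file disables the rule entirely.
# Only safe if the arrays are already assembled elsewhere (initramfs,
# mdadm.conf):
#   sudo touch /etc/udev/rules.d/64-md-raid-assembly.rules
```

Disabling the rule is a blunt instrument; if the arrays are assembled by the initramfs anyway, it just stops udev from retrying (and failing) on every coldplug event.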
I also see this kind of message for my 3 raid1 arrays. The messages started
after an update to testing done on 2/26/19 and have continued up to 3/11/19
(today). I looked at the terminal logs and I was running udev (240-6)
(installed 2/22) and mdadm (4.1-1) (installed 2/6). My udev is now at 241-1 and
mdadm is still at 4.1-1.
Since the arrays are started correctly, as shown by "cat /proc/mdstat", I haven't
paid much attention to these messages. They only occur at boot.
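That matches the "Device or resource busy" error above: by the time udev replays the add events at boot, the partitions already belong to running arrays, so the incremental re-add fails harmlessly. A quick scripted check of /proc/mdstat can confirm every array really is active; a minimal sketch, using a sample mdstat line rather than real output from either machine:

```shell
# Illustrative /proc/mdstat array line (not from either poster's box);
# on a live system you would read /proc/mdstat directly.
mdstat='md0 : active raid1 sda3[0] sdb3[1]'

# Print each array name and its state; anything not "active" deserves
# a closer look with "mdadm --detail".
printf '%s\n' "$mdstat" | awk '$2 == ":" {print $1, $3}'
```

On a real system, replace the sample variable with `awk '$2 == ":" {print $1, $3}' /proc/mdstat`.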
My system is running testing and I do an upgrade just about every day.
--
*...Bob*